2day Cluster 11.1 PDF
May 2012
Oracle Database 2 Day + Real Application Clusters Guide 11g Release 2 (11.2)
E17264-12
Copyright © 2006, 2012, Oracle and/or its affiliates. All rights reserved.
Contributing Authors: Mark Bauer, Vivian Schupmann, Richard Strohm, Douglas Williams
Contributors: David Austin, Eric Belden, David Brower, Jonathan Creighton, Sudip Datta, Venkatadri
Ganesan, Shamik Ganguly, Prabhaker Gongloor, Mayumi Hayasaka, William Hodak, Masakazu Ito, Aneesh
Khandelwal, Sushil Kumar, Rich Long, Barb Lundhild, Venkat Maddali, Gaurav Manglik, Markus
Michalewicz, Mughees Minhas, Tim Misner, Joe Paradise, Srinivas Poovala, Hanlin Qian, Mark Scardina,
Laurent Schneider, Uri Shaft, Cathy Shea, Jacqueline Sideri, Vijay Sriram, Vishwanath Subrahmannya Sastry,
Mark Townsend, Ara Vagharshakian, Mike Zampiceni
This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license,
transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse
engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is
prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.
If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it
on behalf of the U.S. Government, the following notice is applicable:
U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data
delivered to U.S. Government customers are "commercial computer software" or "commercial technical data"
pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As
such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and
license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of
the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software
License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.
This software or hardware is developed for general use in a variety of information management
applications. It is not developed or intended for use in any inherently dangerous applications, including
applications that may create a risk of personal injury. If you use this software or hardware in dangerous
applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other
measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages
caused by use of this software or hardware in dangerous applications.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks
are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD,
Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced
Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information on content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle
Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your
access to or use of third-party content, products, or services.
Contents
About Configuring the Software Owner’s Shell Environment ................................................ 2-12
Configuring the Network .................................................................................................................... 2-13
Verifying the Network Configuration ......................................................................................... 2-15
Preparing the Operating System and Software............................................................................... 2-16
About Setting the Time on All Nodes .......................................................................................... 2-17
About Configuring Kernel Parameters ........................................................................................ 2-17
About Performing Platform-Specific Configuration Tasks....................................................... 2-18
Configuring Installation Directories and Shared Storage ........................................................... 2-18
Locating the Oracle Inventory Directory..................................................................................... 2-19
Creating the Oracle Grid Infrastructure for a Cluster Home Directory.................................. 2-19
Creating the Oracle Base Directory .............................................................................................. 2-20
About the Oracle Home Directory................................................................................................ 2-21
Configuring Shared Storage .......................................................................................................... 2-22
Configuring Files on an NAS Device for Use with Oracle ASM.............................................. 2-23
Using ASMLib to Mark the Shared Disks as Candidate Disks ................................................ 2-24
Installing ASMLib .................................................................................................................... 2-25
Configuring ASMLib............................................................................................................... 2-25
Using ASMLib to Create Oracle ASM Disks........................................................................ 2-26
Configuring Disk Device Persistence........................................................................................... 2-27
4 Administering Database Instances and Cluster Databases
About Oracle Real Application Clusters Database Management .................................................. 4-1
About Oracle RAC One Node Database Management..................................................................... 4-2
About Oracle RAC Management Using Enterprise Manager ......................................................... 4-3
Starting and Stopping Oracle RAC Databases and Database Instances ....................................... 4-3
About Oracle Real Application Clusters Initialization Parameters ............................................... 4-4
About Configuring Initialization Parameters for an Oracle RAC Database ............................ 4-5
Parameters that Must Have Identical Settings on All Instances .......................................... 4-5
Parameters that Must Have Unique Settings on All Instances ............................................ 4-6
Parameters that Should Have Identical Settings on All Instances....................................... 4-6
About Modifying the SERVICE_NAMES Parameter for Oracle RAC................................ 4-7
About the Server Parameter File for Oracle Real Application Clusters..................................... 4-8
Editing Initialization Parameter Settings for an Oracle RAC Database..................................... 4-8
Modifying the Initialization Parameter for Oracle RAC Using the Current Tab .............. 4-8
Modifying the Initialization Parameter for Oracle RAC Using the SPFile Tab .............. 4-10
Example: Modifying the OPEN_CURSORS Parameter .................................................... 4-11
About Administering Storage in Oracle RAC ................................................................................. 4-12
About Automatic Undo Management in Oracle RAC............................................................... 4-12
Oracle Automatic Storage Management in Oracle RAC ........................................................... 4-12
About Oracle ASM Components in Oracle RAC ................................................................ 4-13
About Disk Group Configurations for Oracle ASM in Oracle RAC ................................ 4-13
About Standalone Oracle ASM Disk Group Management................................................ 4-13
About Oracle ASM Instance and Disk Group Management ............................................. 4-14
Administering Redo Logs in Oracle RAC ................................................................................... 4-14
About Redo Log Groups and Redo Threads in Oracle RAC Databases.......................... 4-15
About Accessing Redo Log Files for an Oracle RAC Database ........................................ 4-16
Using Enterprise Manager to View and Create Online Redo Log Files .......................... 4-16
Recovering the OCR .......................................................................................................................... 5-7
Checking the Status of the OCR................................................................................................ 5-7
Restoring the OCR from Automatically Generated OCR Backups ..................................... 5-7
Changing the Oracle Cluster Registry Configuration ...................................................................... 5-8
Adding an OCR Location.................................................................................................................. 5-9
Migrating the OCR to Oracle ASM Storage ................................................................................... 5-9
Replacing an OCR ........................................................................................................................... 5-10
Removing an OCR .......................................................................................................................... 5-10
Repairing an OCR Configuration on a Local Node ................................................................... 5-11
Troubleshooting the Oracle Cluster Registry .................................................................................. 5-11
About the OCRCHECK Utility ..................................................................................................... 5-11
Common Oracle Cluster Registry Problems and Solutions...................................................... 5-12
About Connection Load Balancing.................................................................................................. 7-8
About Client-Side Load Balancing ........................................................................................... 7-9
About Server-Side Load Balancing .......................................................................................... 7-9
About Run-time Connection Load Balancing ............................................................................. 7-10
Creating Services ................................................................................................................................... 7-11
Administering Services ........................................................................................................................ 7-15
About Service Administration Using Enterprise Manager....................................................... 7-15
Using the Cluster Managed Database Services Page................................................................. 7-16
Verifying Oracle Net Supports Newly Created Services .......................................................... 7-16
Configuring Clients for High Availability....................................................................................... 7-17
Configuring JDBC Clients.............................................................................................................. 7-18
Configuring JDBC Clients for Fast Connection Failover ................................................... 7-19
Configuring JDBC Clients for Connection Failure Notification ....................................... 7-20
Configuring OCI Clients ................................................................................................................ 7-21
Configuring ODP.NET Clients...................................................................................................... 7-22
About the Oracle Clusterware Alert Log ............................................................................. 8-30
About the Oracle Clusterware Component Log Files ........................................................ 8-31
Checking the Status of the Oracle Clusterware Installation .............................................. 8-31
Running the Oracle Clusterware Diagnostics Collection Script ....................................... 8-32
Enabling Debugging of Oracle Clusterware Components ................................................ 8-32
Enabling Debugging for an Oracle Clusterware Resource ................................................ 8-32
Enabling and Disabling Oracle Clusterware Daemons...................................................... 8-33
Using the Cluster Verification Utility to Diagnose Problems................................................... 8-33
Verifying the Existence of Node Applications .................................................................... 8-33
Verifying the Integrity of Oracle Clusterware Components ............................................. 8-34
Verifying the Integrity of the Oracle Cluster Registry ....................................................... 8-34
Verifying the Integrity of Your Entire Cluster..................................................................... 8-35
Checking the Settings for the Interconnect .......................................................................... 8-35
Enabling Tracing ...................................................................................................................... 8-36
Viewing Oracle RAC Database Alerts ......................................................................................... 8-37
Viewing Oracle RAC Database Alert Log Messages ................................................................. 8-38
Monitoring and Tuning Oracle RAC: Oracle By Example Series ................................................ 8-38
Updating the Node List for OPatch............................................................................................ 10-14
About OPatch Log and Trace Files ............................................................................................. 10-15
Resolving the "Not a valid patch area" Error ............................................................................ 10-15
Resolving the "Unable to remove a partially installed interim patch" Error........................ 10-16
Upgrading the Oracle Software........................................................................................................ 10-16
Index
Preface
Oracle Database 2 Day + Real Application Clusters Guide describes how to install,
configure, and administer Oracle Clusterware, Oracle Automatic Storage Management
(Oracle ASM), and Oracle Real Application Clusters (Oracle RAC) on a two-node
system using the Oracle Linux operating system.
Note: For Linux operating systems other than Oracle Linux, see
Oracle Real Application Clusters Installation Guide for Linux and UNIX.
For other operating systems, see the platform-specific Oracle RAC
installation guide.
Audience
Oracle Database 2 Day + Real Application Clusters Guide is an Oracle RAC database
administration guide for DBAs who want to install and use Oracle RAC. This guide
assumes you have already read Oracle Database 2 Day DBA. This guide is intended for
DBAs who:
■ Want basic DBA skills for managing an Oracle RAC environment
■ Manage Oracle databases for small- to medium-sized businesses
To use this guide, you should be familiar with the administrative procedures described
in Oracle Database 2 Day DBA.
Note: Some DBAs may be interested in moving the data from their
single-instance Oracle Database to their Oracle RAC database. This
guide also explains the procedures for doing this.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.
Access to Oracle Support
Oracle customers have access to electronic support through My Oracle Support. For
information, visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit
http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are
hearing impaired.
Related Documents
For more information, see the following in the Oracle Database documentation set:
■ Oracle Real Application Clusters Installation Guide for Linux and UNIX
■ Oracle Grid Infrastructure Installation Guide for Linux
■ Oracle Real Application Clusters Administration and Deployment Guide
■ Oracle Database 2 Day DBA
■ Oracle Automatic Storage Management Administrator's Guide
Conventions
The following text conventions are used in this guide:
Convention Meaning
boldface Boldface type indicates graphical user interface elements associated
with an action, or terms defined in text or the glossary.
italic Italic type indicates book titles, emphasis, or placeholder variables for
which you supply particular values.
monospace Monospace type indicates commands within a paragraph, URLs, code
in examples, text that appears on the screen, or text that you enter.
1 Introduction to Oracle Database 2 Day +
Real Application Clusters Guide
Oracle Real Application Clusters (Oracle RAC) enables an Oracle database to run
across a cluster of servers, providing fault tolerance, performance, and scalability with
no application changes necessary. Oracle RAC provides high availability for
applications by removing the single server as a single point of failure.
This chapter provides an overview of Oracle Real Application Clusters (Oracle RAC)
environments. This chapter includes the following sections:
■ About This Guide
■ About Oracle Grid Infrastructure for a Cluster and Oracle RAC
■ About Oracle Automatic Storage Management
■ About Oracle Real Application Clusters
■ Tools for Installing, Configuring, and Managing Oracle RAC
See Also:
■ Oracle Database Concepts
■ Oracle Database Administrator's Guide
Related Materials
This guide is part of a comprehensive set of learning materials for administering
Oracle Databases, which includes a 2 Day DBA Oracle By Example (OBE) series
(available on the Web) and Oracle University instructor-led classes. The OBE series
also has viewlets, which are animated demos that you view using a Web browser.
You can view the OBE content for Oracle RAC at the following Web site:
http://www.oracle.com/technetwork/tutorials/
Use the Advanced Search function, and use the following criteria:
■ Product Family: Database
■ Product: Database 11g
■ Tag: RAC
The nodes in a cluster can be organized into a server pool for better resource
management. Each server pool has the following properties:
■ The minimum number of nodes that should be in the server pool
■ The maximum number of nodes that can be in the server pool
■ The relative importance of this server pool to other server pools
Upon installation of Oracle Grid Infrastructure for a cluster, a default server pool,
called the Free pool, is created automatically. All servers in a new installation are
initially assigned to the Free server pool. If you create a new server pool, then the
servers move from the Free pool to the new server pool automatically.
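As a sketch, a server pool with these properties could be created and inspected with the SRVCTL utility; the pool name, minimum, maximum, and importance values below are hypothetical:

```
$ srvctl add serverpool -g backoffice -l 1 -u 2 -i 3
$ srvctl config serverpool -g backoffice
```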
When you create an Oracle RAC database that is a policy-managed database, you
specify the number of servers that are needed for the database, and a server pool is
automatically created for the database. Oracle Clusterware populates the server pool
with the servers it has available. If you do not use server pools, then you create an
See Also:
■ Oracle Real Application Clusters Administration and Deployment
Guide
■ Oracle Clusterware Administration and Deployment Guide
Figure 1–1 Oracle Clusterware Files Stored in an Oracle ASM Disk Group
Oracle recommends that you use Oracle ASM for your Oracle Clusterware files and
Oracle RAC datafiles, instead of raw devices or the operating system file system.
Oracle databases can use both Oracle ASM files and non-Oracle ASM files. You can
also create a file system using Oracle ACFS to store your database Oracle Home and
any other external (non-database) files in the cluster.
See Also:
■ "About Oracle Clusterware" on page 5-1 for information about the
Oracle Clusterware files
■ Oracle Database 2 Day DBA
■ Oracle Automatic Storage Management Administrator's Guide
cache of one instance is required by another instance, Cache Fusion transfers the data
block directly between the instances using the interconnect, enabling the Oracle RAC
database to access and modify data as if the data resided in a single buffer cache.
Oracle RAC is also a key component for implementing the enterprise grid computing
architecture using Oracle software. Having multiple database instances accessing a
single set of data files prevents the server from being a single point of failure. If a node
in the cluster fails, then the Oracle database continues running on the remaining
nodes. Individual nodes can be shut down for maintenance while application users
continue to work.
Oracle RAC supports mainstream business applications, such as OLTP and DSS, as
well as popular packaged products such as SAP, PeopleSoft, Siebel, and Oracle
E-Business Suite, and custom applications. Any packaged or custom application that scales
on an Oracle database scales well on Oracle RAC without requiring changes to the
application code.
You will learn more about the operation of the Oracle RAC database in a cluster, how
to build the cluster, and the structure of an Oracle RAC database in other sections of
this guide.
See Also:
■ Oracle Real Application Clusters Administration and Deployment
Guide
■ Oracle Clusterware Administration and Deployment Guide
See Also:
■ "About Oracle Real Application Clusters"
■ Oracle Real Application Clusters Administration and Deployment
Guide
See Also:
■ Oracle Real Application Clusters Administration and Deployment
Guide
resources include the node applications, called nodeapps, that comprise Oracle
Clusterware, which includes the Oracle Notification Service (ONS), the Global
Services Daemon (GSD), and the Virtual IP (VIP). Other resources that can be
managed by SRVCTL include databases, instances, listeners, services, and
applications. Using SRVCTL you can start and stop nodeapps, databases,
instances, listeners, and services, delete or move instances and services, add
services, and manage configuration information.
■ Cluster Ready Services Control (CRSCTL)—CRSCTL is a command-line tool that
you can use to manage Oracle Clusterware daemons. These daemons include
Cluster Synchronization Services (CSS), Cluster-Ready Services (CRS), and Event
Manager (EVM). You can use CRSCTL to start and stop Oracle Clusterware and to
determine the current status of your Oracle Clusterware installation.
■ Database Configuration Assistant (DBCA)—DBCA is a utility that is used to create
and configure Oracle Databases. DBCA can be launched by OUI, depending upon
the type of install that you select. You can also launch DBCA as a standalone tool
at any time after Oracle Database installation. You can run DBCA in interactive
mode or noninteractive/silent mode. Interactive mode provides a graphical
interface and guided workflow for creating and configuring a database. DBCA is
the preferred way to create a database, because it is a more automated approach,
and your database is ready to use when DBCA completes.
■ Oracle Automatic Storage Management Configuration Assistant
(ASMCA)—ASMCA is a utility that supports installing and configuring Oracle
ASM instances, disk groups, volumes, and Oracle Automatic Storage Management
Cluster File System (Oracle ACFS). ASMCA provides both a GUI and a non-GUI
interface.
■ Oracle Automatic Storage Management Command Line utility
(ASMCMD)—ASMCMD is a command-line utility that you can use to manage
Oracle ASM instances, Oracle ASM disk groups, file access control for disk groups,
files and directories within Oracle ASM disk groups, templates for disk groups,
and Oracle ASM volumes.
■ Listener Control (LSNRCTL)—The Listener Control utility is a command-line
interface that you use to administer listeners. You can use its commands to perform
basic management functions on one or more listeners. Additionally, you can view
and change parameter settings for the listener.
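As a quick illustration of these command-line tools, the following session sketch shows a few typical invocations. The database name orcl, instance name orcl2, and service name are hypothetical, and crsctl must be run as the root user:

```
$ srvctl status database -d orcl          # status of all instances of database orcl
$ srvctl stop instance -d orcl -i orcl2   # stop one instance for maintenance
$ srvctl start service -d orcl -s oltp    # start a service named oltp
$ asmcmd lsdg                             # list Oracle ASM disk groups (Grid home)
$ lsnrctl status                          # status of the default listener
# crsctl check crs                        # as root: verify Oracle Clusterware is up
```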
If you have installed Oracle Grid Infrastructure for a cluster for Oracle Database 11g
release 2 (11.2), then, when using utilities to manage your cluster, databases, database
instances, Oracle ASM, and listeners, use the appropriate binary that is in the home
directory of the object or component you are managing. Set the ORACLE_HOME
environment variable to point to this directory, for example:
■ If you use ASMCMD, srvctl, sqlplus, or lsnrctl to manage Oracle ASM or
its listener, then use the binaries located in the Grid home, not the binaries located
in the Oracle Database home, and set the ORACLE_HOME environment variable to
the location of the Grid home.
■ If you use srvctl, sqlplus, or lsnrctl to manage a database instance or its
listener, then use the binaries located in the Oracle home where the database
instance or listener is running, and set the ORACLE_HOME environment variable to
the location of that Oracle home.
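For example, before managing Oracle ASM or its listener you might point your environment at the Grid home. The path below is a common layout but is only an assumption; substitute the Grid home actually used at your site:

```shell
# Hypothetical Grid home path; substitute the path used at your site.
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
echo "$ORACLE_HOME"
```

To manage a database instance instead, repeat the same two exports with the Oracle Database home.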
See Also:
■ Oracle Real Application Clusters Administration and Deployment
Guide
This chapter contains the information that your system administrator and network
administrator need to help you, as the DBA, configure two nodes in your cluster. This
chapter assumes a basic understanding of the Linux operating system. In some cases,
you may need to refer to details in Oracle Real Application Clusters Installation Guide for
Linux and UNIX. In addition, you must have root or sudo privileges to perform
certain tasks in this chapter (or Administrator privileges on Windows systems).
This chapter includes the following sections:
■ Verifying System Requirements
■ Preparing the Server
■ Configuring the Network
■ Preparing the Operating System and Software
■ Configuring Installation Directories and Shared Storage
You can find certification information by selecting the Certifications tab. You can also
review Note 964664.1 for instructions on how to locate the Certification information
for your platform.
https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=964664.1
See Also:
■ "Preparing the Server"
■ "Verifying Operating System and Software Requirements"
Note: Refer to the Oracle Grid Infrastructure Installation Guide and the
Oracle Real Application Clusters Installation Guide for your operating
system for the actual disk space requirements. The amount of disk
space used by the Oracle software can vary, and might be higher than
what is listed in this guide.
You need at least 5.5 GB of available disk space for the Grid home directory, which
includes both the binary files for Oracle Clusterware and Oracle Automatic
Storage Management (Oracle ASM) and their associated log files, and at least 4 GB
of available disk space for the Oracle Database home directory, or Oracle home
directory.
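A rough sketch of checking this requirement from the shell; the mount point "/" is only a stand-in for the file system that will actually hold the Grid home:

```shell
# Compare available space against the ~5.5 GB Grid home requirement.
required_kb=$((5500 * 1024))
avail_kb=$(df -Pk / | awk 'NR==2 {print $4}')
echo "available=${avail_kb} KB required=${required_kb} KB"
```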
See Also:
■ Oracle Grid Infrastructure Installation Guide for your platform
■ "About Performing Platform-Specific Configuration Tasks"
■ "Preparing the Server"
■ "Configuring Installation Directories and Shared Storage"
Note: If you choose not to use Oracle ASM for storing your Oracle
Clusterware files, then both the voting disks and the OCR must reside
on a cluster file system that you configure before you install Oracle
Clusterware in the Grid home.
These Oracle Clusterware components require the following disk space on a shared
file system:
■ Three Oracle Cluster Registry (OCR) files, 300 MB each, or 900 MB total disk
space
■ Three voting disk files, 300 MB each, or 900 MB total disk space
If you are not using Oracle ASM for storing Oracle Clusterware files, then for best
performance and protection, you should use multiple disks, each using a different disk
controller for voting disk file placement. Ensure that each voting disk is configured so
that it does not share any hardware device or have a single point of failure.
See Also:
■ Oracle Grid Infrastructure Installation Guide for your platform
■ "Configuring Installation Directories and Shared Storage"
■ "Configuring Shared Storage"
See Also: Oracle Grid Infrastructure Installation Guide for Linux for
more information about configuring redundant interconnect usage
■ Public interface names must be the same for all nodes. If the public interface on
one node uses the network adapter eth0, then you must configure eth0 as the
public interface on all nodes. Network interface names are case-sensitive.
■ You should configure the same private interface names for all nodes as well. If
eth1 is the private interface name for the first node, then eth1 should be the
private interface name for your second node. Network interface names are
case-sensitive.
■ The network adapter for the public interface must support TCP/IP.
■ The network adapter for the private interface must support the user datagram
protocol (UDP) using high-speed network adapters and a network switch that
supports TCP/IP (Gigabit Ethernet or better).
Note:
■ You must use a switch for the interconnect. Oracle recommends
that you use a dedicated network switch. Token rings and crossover
cables are not supported for the interconnect.
■ Loopback devices are not supported.
■ For the private network, the end points of all designated interconnect interfaces
must be completely reachable on the network. Every node in the cluster must be
able to connect to every private network interface in the cluster.
■ The host name of each node must conform to the RFC 952 standard, which permits
alphanumeric characters. Host names using underscores ("_") are not allowed.
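A minimal sketch of checking a candidate host name against the underscore restriction; the name tested here is illustrative:

```shell
# RFC 952 disallows underscores in host names; flag any name that contains one.
host="node1-priv"
case "$host" in
  *_*) echo "invalid: underscores are not allowed" ;;
  *)   echo "ok" ;;
esac
# → ok
```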
See Also:
■ "Configuring the Network"
■ "Verifying System Requirements"
During installation, a SCAN for the cluster is configured. The SCAN is a domain name
that resolves to all the SCAN addresses allocated for the cluster. The IP addresses used for
the SCAN addresses must be on the same subnet as the VIP addresses. The SCAN
must be unique within your network. The SCAN addresses should not respond to
ping commands before installation.
During installation of the Oracle Grid Infrastructure for a cluster, a listener is created
for each of the SCAN addresses. Clients that access the Oracle RAC database should
use the SCAN or SCAN address, not the VIP name or address. If an application uses a
SCAN to connect to the cluster database, then the network configuration files on the
client computer do not have to be modified when nodes are added to or removed from
the cluster. The SCAN and its associated IP addresses provide a stable name for clients
to use for connections, independent of the nodes that form the cluster. Clients can
connect to the cluster database using the easy connect naming method and the SCAN.
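For example, with the easy connect naming method a client supplies only the SCAN, the port, and the service name; the SCAN host name and service name below are hypothetical:

```
sqlplus system@//docrac-scan.example.com:1521/orcl

# Equivalent tnsnames.ora entry, for clients that use local naming:
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = docrac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl)))
```

Because the connect string names only the SCAN, it does not change when nodes are added to or removed from the cluster.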
See Also:
■ "Configuring the Network"
■ "Verifying System Requirements"
■ Oracle Database Net Services Administrator's Guide for information
about the easy connect naming method
To determine if the operating system requirements for Oracle Linux have been
met:
1. To determine which distribution and version of Linux is installed, run the
following command at the operating system prompt as the root user:
# cat /proc/version
2. To determine which chip architecture each server is using and which version of the
software you should install, run the following command at the operating system
prompt as the root user:
# uname -m
This command displays the processor type. For a 64-bit architecture, the output
would be "x86_64".
3. To determine if the required errata level is installed, use the following procedure
as the root user:
# uname -r
2.6.9-55.0.0.0.2.ELsmp
Like most software, the Linux kernel is updated to fix bugs in the operating
system. These kernel updates are referred to as erratum kernels or errata levels.
The output in the previous example shows that the kernel version is 2.6.9, and the
errata level (EL) is 55.0.0.0.2.ELsmp. Review the required errata level for your
distribution. If the errata level is below the required minimum errata level, then
install the latest kernel update for your operating system. The kernel updates are
available from your operating system vendor.
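The comparison against the minimum errata level can be scripted. A minimal sketch using GNU sort -V for version ordering; the version strings are examples (in practice, derive the current value from uname -r and take the required value from your platform's requirements):

```shell
# Example strings; substitute the output of `uname -r` and the documented minimum.
current="2.6.9-55.0.0.0.2"
required="2.6.9-55.0.12"

# sort -V orders version strings numerically; if the current kernel sorts
# before the required minimum, a kernel update is needed.
lowest=$(printf '%s\n%s\n' "$current" "$required" | sort -V | head -n 1)
if [ "$lowest" = "$current" ] && [ "$current" != "$required" ]; then
  echo "kernel update required"
else
  echo "kernel meets minimum errata level"
fi
```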
4. To ensure there are no operating system issues affecting installation, make sure
you have installed all the operating system patch updates and packages that are
listed in Oracle Grid Infrastructure Installation Guide for your platform. If you are
using Oracle Linux, then you can determine if the required packages, or programs
that perform specific functions or calculations, are installed by using the following
command as the root user:
# rpm -q package_name
The variable package_name is the name of the package you are verifying, such as
setarch. If a package is not installed, then install it from your Linux distribution
media or download the required package version from your Linux vendor's Web
site.
You can also use either up2date or YUM (Yellow dog Updater Modified) to
install packages and their dependencies on some Linux systems. YUM uses
repositories to automatically locate and obtain the correct RPM packages for your
system.
See Also:
■ "About Installing Oracle RAC on Different Operating Systems"
■ "Preparing the Server"
■ "Preparing the Operating System and Software"
■ "About Configuring the Software Owner’s Shell Environment"
■ "About Performing Platform-Specific Configuration Tasks"
■ Oracle Grid Infrastructure Installation Guide and the Oracle Real
Application Clusters Installation Guide for your platform
See Also:
■ "Preparing the Operating System and Software"
■ "About Configuring Kernel Parameters"
■ "About Configuring the Software Owner’s Shell Environment"
■ "About Performing Platform-Specific Configuration Tasks"
If you use one installation owner for both Oracle Grid Infrastructure for a cluster and
Oracle RAC, then when you want to perform administration tasks, you must change
the value of the ORACLE_HOME environment variable to match the instance you want to
administer (Oracle ASM, in the Grid home, or a database instance in the Oracle home).
To change the ORACLE_HOME environment variable, use a command syntax similar to
the following example, where /u01/app/11.2.0/grid is the Oracle Grid
Infrastructure for a cluster home:
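The command example referenced here appears to have been elided; a sketch of the syntax, using the Grid home path given in the text:

```shell
# Administer Oracle ASM from the Grid home:
export ORACLE_HOME=/u01/app/11.2.0/grid
echo "$ORACLE_HOME"
```

To administer a database instance instead, set ORACLE_HOME to that database's Oracle home path.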
Separate Operating System Users and Groups for Oracle Software Installations
Instead of using a single operating system user as the owner of every Oracle software
installation, you can use multiple users, each owning one or more Oracle software
installations. A user created to own only the Oracle Grid Infrastructure for a cluster
installation is called the grid user. This user owns both the Oracle Clusterware and
Oracle Automatic Storage Management binaries. A user created to own either all
Oracle software installations (including Oracle Grid Infrastructure for a cluster), or
only Oracle Database software installations, is called the oracle user.
You can also use different users for each Oracle Database software installation.
Additionally, you can specify a different OSDBA group for each Oracle Database
software installation. By using different operating system groups for authenticating
administrative access to each Oracle Database installation, users have SYSDBA
privileges for the databases associated with their OSDBA group, rather than for all the
databases on the system.
Members of the OSDBA group can also be granted the SYSASM system privilege,
which gives them administrative access to Oracle ASM. As described in the next
section, you can configure a separate operating system group for Oracle ASM
authentication to separate users with SYSASM access to the Oracle ASM instances from
users with SYSDBA access to the database instances.
If you want to create separate Oracle software owners so you can use separate users
and operating system privilege groups for the different Oracle software installations,
then note that each of these users must have the Oracle central inventory group
(oinstall) as their primary group. Members of this group have the required write
privileges to the Oracle Inventory directory.
2. If this is the first time Oracle software has been installed on your server, and the
Oracle Inventory group does not exist, then create the Oracle Inventory group
(oinstall) with a group ID that is currently not in use on all the nodes in your
cluster. Enter a command as the root user that is similar to the following:
# /usr/sbin/groupadd -g 1000 oinstall
3. Create an OSDBA (dba) group with a group ID that is currently not in use on all
the nodes in your cluster by entering a command as the root user that is similar
to the following:
# /usr/sbin/groupadd -g 1001 dba
4. If the user that owns the Oracle software (oracle) does not exist on your server,
then you must create the user. Select a user ID (UID) that is currently not in use on
all the nodes in your cluster. To determine which users have been created on your
server, list the contents of the /etc/passwd file using the command:
cat /etc/passwd
The following command shows how to create the oracle user and the user's
home directory (/home/oracle) with the default group as oinstall and the
secondary group as dba, using a UID of 1100:
# useradd -u 1100 -g oinstall -G dba -d /home/oracle -m oracle
5. Set the password for the oracle account using the following command. Replace
password with your own password.
passwd oracle
See Also:
■ "Configuring Installation Directories and Shared Storage"
■ "About Oracle Automatic Storage Management"
See Also:
■ "Configuring Operating System Users and Groups" on page 2-10
for information about configuring user equivalency
■ Oracle Grid Infrastructure Installation Guide for your platform for
more information about manually configuring SSH
Note: Remove any stty commands from hidden files (such as logon
or profile scripts) before you start the installation. On Linux systems,
if there are any such files that contain stty commands, then when
these files are loaded by the remote shell during installation, OUI
indicates an error and stops the installation.
See Also:
■ "About Operating System Users and Groups"
■ "Preparing the Operating System and Software"
■ "Configuring Installation Directories and Shared Storage"
■ "About Setting the Time on All Nodes"
■ "About Performing Platform-Specific Configuration Tasks"
From the output, identify the interface name (such as eth0) and IP address for
each network adapter you specify as a public network interface.
If your operating system supports a graphic user interface (GUI) for modifying the
system configuration, then you can also use the following command to start a GUI
that you can use to configure the network adapters and /etc/hosts file:
/usr/bin/system-config-network &
Note: When you install Oracle Clusterware and Oracle RAC, you
are asked to provide the interface name and IP address for each
network adapter.
4. On each node in the cluster, assign a public IP address with an associated network
name to one network adapter. The public name for each node should be registered
with your domain name system (DNS). IP addresses on the subnet you identify as
private are assigned as private IP addresses for cluster member nodes. You do not
have to configure these addresses manually in the /etc/hosts file.
Even if you are using a DNS, Oracle recommends that you add lines to the
/etc/hosts file on each node, specifying the public IP addresses. Configure the
/etc/hosts file so that it is similar to the following example:
#eth0 - PUBLIC
192.0.2.100 racnode1.example.com racnode1
192.0.2.101 racnode2.example.com racnode2
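A fuller /etc/hosts sketch also lists the VIP names used later in this chapter (racnode1-vip, racnode2-vip); the VIP addresses shown are assumptions for illustration:

```
#eth0 - PUBLIC
192.0.2.100   racnode1.example.com   racnode1
192.0.2.101   racnode2.example.com   racnode2
#VIP
192.0.2.102   racnode1-vip.example.com   racnode1-vip
192.0.2.103   racnode2-vip.example.com   racnode2-vip
```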
The new hosts search order in /etc/nsswitch.conf:
hosts: dns files nis
9. After modifying the nsswitch.conf file, restart the nscd daemon on each node
using the following command:
# /sbin/service nscd restart
After you have completed the installation process, configure clients to use the SCAN
to access the cluster. Using the previous example, the clients would use docrac-scan
to connect to the cluster.
See Also:
■ "About Network Hardware Requirements"
■ Oracle Grid Infrastructure Installation Guide for your platform
2. As the root user, verify the network configuration by using the ping command
to test the connection from each node in your cluster to all the other nodes. For
example, as the root user, you might run the following commands on each node:
# ping -c 3 racnode1.example.com
# ping -c 3 racnode1
# ping -c 3 racnode2.example.com
# ping -c 3 racnode2
You should not get a response from the nodes using the ping command for the
virtual IPs (racnode1-vip, racnode2-vip) or the SCAN IPs until after Oracle
Clusterware is installed and running. If the ping commands for the public
addresses fail, then resolve the issue before you proceed.
3. Ensure that you can access the default gateway with a ping command. To identify
the default gateway, use the route command, as described in the Oracle Linux
Help utility.
See Also:
■ "Checking the Settings for the Interconnect"
■ "Configuring the Network"
■ "About Network Hardware Requirements"
See Also:
■ "Preparing the Server"
■ "Verifying Operating System and Software Requirements"
Note: If you use NTP, then you must configure it using the -x flag.
If you do not configure NTP, then Oracle configures and uses the Cluster Time
Synchronization Service (CTSS) to keep the internal clocks of all the cluster
member nodes synchronized. CTSS designates the first node in the cluster as the
master and then synchronizes all other nodes in the cluster to the master
node's time. CTSS does not use any external clock for synchronization.
Note: Using NTP or CTSS does not protect your system against
human error resulting from a change in the system time for a node.
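On Oracle Linux and Red Hat systems, the -x flag is typically added to the NTP daemon options file; a sketch (the file path and the surrounding options are distribution-specific assumptions):

```
# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```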
See Also:
■ "Preparing the Server"
■ "Preparing the Operating System and Software"
■ Oracle Grid Infrastructure Installation Guide for your platform
See Also:
■ "Preparing the Server"
■ "Preparing the Operating System and Software"
■ Oracle Grid Infrastructure Installation Guide for your platform
See Also:
■ "Preparing the Server"
■ "Preparing the Operating System and Software"
■ "About Installing Oracle RAC on Different Operating Systems"
■ Oracle Grid Infrastructure Installation Guide for your platform
See Also:
■ "Verifying System Requirements"
■ "About Operating System Users and Groups"
■ "About Hardware Requirements"
■ Chapter 3 in Oracle Grid Infrastructure Installation Guide for your
platform
2. If the oraInst.loc file exists, then the output from this command is similar to
the following:
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
Oracle ASM into a directory referred to as Grid_home. Ensure that the directory path
you provide meets the following requirements:
■ It should be created in a path outside existing Oracle homes.
■ It should not be located in a user home directory.
■ It should be created either as a subdirectory in a path where all files can be owned
by root, or in a unique path.
■ If you create the path before installation, then it should be owned by the
installation owner of Oracle Grid Infrastructure for a cluster (oracle or grid),
and set to 775 permissions.
Before you start the installation, you must have sufficient disk space on a file system
for the Oracle Grid Infrastructure for a cluster directory. The file system that you use
for the Grid home directory must have at least 4.5 GB of available disk space.
The path to the Grid home directory must be the same on all nodes. As the root user,
you should create a path compliant with Oracle Optimal Flexible Architecture (OFA)
guidelines, so that OUI can select that directory during installation.
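Creating that path can be sketched as follows. This dry run creates the OFA-compliant layout under a scratch root so it can run without root privileges; on a real node you would create /u01/app/11.2.0/grid directly as root:

```shell
# Scratch root stands in for / so the sketch runs unprivileged.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/u01/app/11.2.0/grid"
chmod 775 "$ROOT/u01/app/11.2.0/grid"
# On a real node, as root, you would also set ownership to the
# installation owner, for example:
#   chown -R oracle:oinstall /u01/app/11.2.0/grid
ls -ld "$ROOT/u01/app/11.2.0/grid"
```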
See Also:
■ "About Hardware Requirements"
■ "Configuring Shared Storage"
In the preceding path example, the variable mount_point is the mount point
directory for the file system where you intend to install the Oracle software and user
is the Oracle software owner (typically oracle). For OUI to recognize the path as an
Oracle software path, it must be in the form u0[1-9]/app, for example, /u01/app.
The path to the Oracle base directory must be the same on all nodes. The permissions
on the Oracle base directory should be at least 750.
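Whether a candidate path matches the form OUI recognizes can be checked with a shell glob equivalent to u0[1-9]/app (the leading slash is added here because the paths are absolute); a minimal sketch:

```shell
path=/u01/app/oracle
case "$path" in
  /u0[1-9]/app*) msg="recognized Oracle software path" ;;
  *)             msg="not a recognized Oracle software path" ;;
esac
echo "$path: $msg"
```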
2. Use the mkdir command to create the path to the Oracle base directory.
# mkdir -p /u01/app/oracle/
3. Change the ownership of the Oracle base path to the Oracle software owner,
oracle.
# chown -R oracle:oinstall /u01/app/oracle/
See Also:
■ "About Hardware Requirements"
■ "Configuring Shared Storage"
Note: You cannot use OUI to install Oracle Clusterware files on block
or raw devices. You cannot put Oracle Clusterware binaries and files
on Oracle Automatic Storage Management Cluster File System (Oracle
ACFS).
If you decide to use OCFS2 to store the Oracle Clusterware files, then you must use the
proper version of OCFS2 for your operating system version. OCFS2 works with Oracle
Linux and Red Hat Linux kernel version 2.6.
For all installations, you must choose the storage option to use for Oracle Clusterware
files and Oracle Database files. The examples in this guide use Oracle ASM to store the
Oracle Clusterware and Oracle Database files. The Oracle Grid Infrastructure for a
cluster and Oracle RAC software is installed on disks local to each node, not on a
shared file system.
This guide discusses two different methods of configuring the shared disks for use
with Oracle ASM:
■ Configuring Files on an NAS Device for Use with Oracle ASM
■ Using ASMLib to Mark the Shared Disks as Candidate Disks
See Also:
■ Oracle Grid Infrastructure Installation Guide for your platform if you
are using a cluster file system or NFS
■ "Configuring Installation Directories and Shared Storage"
■ "About Hardware Requirements"
4. To ensure that the NFS file system is mounted when the system restarts, add an
entry for the file system in the mount file /etc/fstab.
For more information about editing the mount file for the operating system, refer
to the Linux man pages. For more information about recommended mount
options, refer to Oracle Grid Infrastructure Installation Guide for your platform.
5. Enter a command similar to the following to mount the NFS file system on the
local system, where host is the host name or IP address of the file server, and
pathname is the location of the storage within NFS (for example, /public):
# mount <host>:<pathname> /mnt/oracleasm
6. Choose a name for the disk group to create, for example, nfsdg.
7. Create a directory for the files on the NFS file system, using the disk group name
as the directory name, for example:
# mkdir /mnt/oracleasm/nfsdg
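The command that creates the disk files (step 8, elided here) was likely a dd command similar to the sketch below; the cluster path and 1 GB size follow the surrounding text, and the small scratch-file variant only demonstrates the command shape:

```shell
# On the cluster, as root, a 1 GB disk file would be created with:
#   dd if=/dev/zero of=/mnt/oracleasm/nfsdg/disk1 bs=1024k count=1000
# Small-scale demonstration (1 MB to a scratch file):
target=$(mktemp)
dd if=/dev/zero of="$target" bs=1024k count=1 2>/dev/null
wc -c < "$target"
```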
This example creates a 1 GB file named disk1 on the NFS file system. You must
create one, two, or three files respectively to create an external, normal, or high
redundancy disk group.
9. Enter the following commands to change the owner, group, and permissions on
the directory and files that you created:
# chown -R oracle:dba /mnt/oracleasm
# chmod -R 660 /mnt/oracleasm
10. When installing Oracle RAC, if you choose to create an Oracle ASM disk group,
then you must change the disk discovery path to specify a regular expression that
matches the file names you created, for example, /mnt/oracleasm/nfsdg/*.
See Also:
■ "Configuring Shared Storage"
■ "About Hardware Requirements"
The following sections describe how to install and configure ASMLib, and how to use
ASMLib to configure your shared disk devices:
■ Installing ASMLib
■ Configuring ASMLib
■ Using ASMLib to Create Oracle ASM Disks
See Also: If you choose not to use ASMLib, then see the section
"Configuring Disk Device Persistence" on page 2-27.
Installing ASMLib
The ASMLib software is available from the Oracle Technology Network. Select the link
for your platform on the ASMLib download page at:
http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html
You should see four to six packages for your Linux platform. The oracleasmlib
package provides the actual Oracle ASM library. The oracleasm-support package
provides the utilities used to get the Oracle ASM driver up and running. Both of these
packages must be installed.
The remaining packages provide the kernel driver for the Oracle ASM library. Each
package provides the driver for a different kernel. You must install the appropriate
package for the kernel you run. Use the uname -r command to determine the version
of the kernel on your server. The oracleasm kernel driver package has that version
string in its name. For example, if you run Red Hat Enterprise Linux 4 AS, and the
kernel you are using is the 2.6.9-55.0.12.ELsmp kernel, then you would choose the
oracleasm-2.6.9-55.0.12.ELsmp-2.0.3-1.x86_64.rpm package.
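Mapping the kernel version to the driver package name can be sketched as follows; the package version 2.0.3-1, kernel string, and architecture are taken from the example in the text:

```shell
# In practice derive these from the running system:
#   kver=$(uname -r); arch=$(uname -m)
kver="2.6.9-55.0.12.ELsmp"
arch="x86_64"
pkg="oracleasm-${kver}-2.0.3-1.${arch}.rpm"
echo "$pkg"
```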
After you have completed these commands, ASMLib is installed on the system.
4. Repeat steps 2 and 3 on each node in your cluster.
See Also:
■ "Using ASMLib to Mark the Shared Disks as Candidate Disks" on
page 2-24
Configuring ASMLib
After the ASMLib software is installed, the system administrator must complete a
few steps to make the Oracle ASM driver available: the driver must be loaded,
and the driver file system must be mounted. The initialization script
/usr/sbin/oracleasm takes care of both tasks.
The script prompts you for the default user and group to own the Oracle ASM
driver access point. Specify the Oracle Database software owner (oracle) and the
OSDBA group (dba).
The script also prompts you to specify whether you want to start the ASMLib
driver when the node is started and whether you want to scan for presence of any
Oracle Automatic Storage Management disks when the node is started. Answer
yes for both of these questions.
2. Repeat step 1 on each node in your cluster.
See Also:
■ "Using ASMLib to Mark the Shared Disks as Candidate Disks" on
page 2-24
In this command, disk_name is the name you choose for the Oracle ASM disk.
The name you choose must contain only ASCII capital letters, numbers, or
underscores, and the disk name must start with a letter, for example, DISK1,
VOL1, or RAC_FILE1. The name of the disk partition to mark as an Oracle ASM
disk is the device_partition_name. For example:
# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
If you must unmark a disk that was used in a createdisk command, then you
can use the following syntax:
# /usr/sbin/oracleasm deletedisk disk_name
4. On all the other nodes in the cluster, use the scandisks command to view the
newly created Oracle ASM disks. You do not have to create the Oracle ASM disks
on each node, only on one node in the cluster.
# /usr/sbin/oracleasm scandisks
5. After scanning for Oracle ASM disks, display the available Oracle ASM disks on
each node to verify their availability:
# /usr/sbin/oracleasm listdisks
DISK1
DISK2
DISK3
Note: At this point, you should restart each node on which you are
installing the Oracle Grid Infrastructure for a cluster software. After
the node has restarted, view the configured shared storage on each
node. This helps to ensure that the system configuration is complete
and persists across node shutdowns.
See Also:
■ "Using ASMLib to Mark the Shared Disks as Candidate Disks" on
page 2-24
See Also:
■ "Configuring Shared Storage" on page 2-22
■ Oracle Grid Infrastructure Installation Guide for your platform
This chapter explains how to install Oracle Real Application Clusters (Oracle RAC)
and Oracle RAC One Node using Oracle Universal Installer (OUI). Before installing
Oracle RAC or Oracle RAC One Node, you must first install the Oracle Grid
Infrastructure for a cluster software, which consists of Oracle Clusterware and Oracle
Automatic Storage Management (Oracle ASM), for 11g release 2 (11.2). After Oracle
Clusterware and Oracle ASM are operational, you can use OUI to install the Oracle
Database software with the Oracle RAC components. Installing Oracle RAC One Node
is available starting with Oracle Database 11g Release 2 (11.2.0.2).
This chapter includes the following sections:
■ Preparing the Oracle Media Installation File
■ Verifying My Oracle Support Credentials
■ Installing the Oracle Grid Infrastructure for a Cluster Software
■ Installing the Oracle Database Software and Creating a Database
■ Performing Postinstallation Tasks
■ About Converting an Oracle Database to an Oracle RAC Database
Installing Oracle Grid Infrastructure and Oracle Real Application Clusters 3-1
Verifying My Oracle Support Credentials
3. Copy the ZIP files to this staging directory. For example, if the files were
downloaded to a directory named /home/user1, and the ZIP file is named
11200_linux_db.zip, then use the following command to move the ZIP file to
the staging directory:
cd /home/user1
cp 11200_linux_db.zip /stage/oracle/11.2.0
4. As the oracle user on the first node, unzip the Oracle media, as shown in the
following example:
cd /stage/oracle/11.2.0
unzip 11200_linux_db.zip
See Also:
■ "Configuring Installation Directories and Shared Storage"
■ "About Operating System Users and Groups"
from My Oracle Support. The software updates are applied to the installed software
during the installation process.
2. Verify the changes have been made by executing the following commands:
[oracle]$ echo $ORACLE_SID
Note: Using fixup scripts does not ensure that all the required
prerequisites for installing Oracle Grid Infrastructure for a cluster and
Oracle RAC are satisfied. You must still verify that all the
requirements listed in Chapter 2, "Preparing Your Cluster" are met to
ensure a successful installation.
Using Oracle Universal Installer to Install the Oracle Grid Infrastructure for a Cluster
As the Oracle Grid Infrastructure for a cluster software owner (oracle) user on the
first node, install the Oracle Grid Infrastructure for a cluster for your cluster. Note that
OUI uses Secure Shell (SSH) to copy the binary files from this node to the other nodes
during the installation. OUI can configure SSH for you during installation.
2. Choose the Install and Configure Grid Infrastructure for a Cluster option, then
click Next.
The Select Installation Type window appears.
3. Select Typical Installation, then click Next.
The Specify Cluster Configuration window appears.
4. In the SCAN Name field, enter a name for your cluster that is unique throughout
your entire enterprise network. For example, you might choose a name that is
based on the node names' common prefix. This guide uses the SCAN name
docrac.
In the Hostname column of the table of cluster nodes, you should see your local
node, for example racnode1.example.com. Click Add to add another node to
the cluster.
The Add Cluster Node Information pop-up window appears.
Note: Specify both nodes during installation even if you plan to use
Oracle RAC One Node.
5. Enter the second node's public name (racnode2), and virtual IP name
(racnode2-vip), and then click OK.
You are returned to the Specify Cluster Configuration window.
6. You should now see both nodes listed in the table of cluster nodes. Click the
Identify Network Interfaces button. In the Identify Network Interfaces window,
verify that each interface has the correct interface type (Public or Private)
associated with it. If you have network interfaces that should not be used by
Oracle Clusterware, then set the network interface type to Do Not Use.
Make sure both nodes are selected, then click the SSH Connectivity button at the
bottom of the window.
The bottom panel of the window displays the SSH Connectivity information.
7. Enter the operating system user name and password for the Oracle software owner
(oracle). If you have already configured SSH connectivity between the nodes,
then select the Reuse private and public keys existing in user home option.
Click Setup.
A message window appears, indicating that it might take several minutes to
configure SSH connectivity between the nodes. After a short period, another
message window appears indicating that passwordless SSH connectivity has been
established between the cluster nodes. Click OK to continue.
8. When returned to the Specify Cluster Configuration window, click Next to
continue.
After several checks are performed, the Specify Install Locations window appears.
9. Perform the following actions on this page:
■ For the Oracle base field, make sure it is set to the location you chose for your
Oracle Base directory, for example /u01/app/oracle/. If not, then click
Browse. In the Choose Directory window, go up the path until you can select
the /u01/app/oracle/ directory, then click Choose Directory.
■ For the Software Location field, make sure it is set to the location you chose for
your Grid home, for example /u01/app/11.2.0/grid. If not, then click
Browse. In the Choose Directory window, go up the path until you can select
/u01/app/11.2.0/grid, then click Choose Directory.
■ For the Cluster Registry Storage Type, choose Automatic Storage
Management.
■ Enter a password for a SYSASM user in the SYSASM Password and Confirm
Password fields. This password is used for managing Oracle ASM after
installation, so make note of it in a secure place.
■ For the OSASM group, use the drop-down list to choose the operating system
group for managing Oracle ASM, for example, dba.
After you have specified information for all the fields on this page, click Next.
The Create ASM Disk Group page appears.
10. In the Disk Group Name field, enter a name for the disk group, for example
DATA. Choose the Redundancy level for this disk group, and then in the Add
Disks section, choose the disks to add to this disk group.
In the Add Disks section you should see the disks that you configured using the
ASMLib utility in Chapter 2.
When you have finished selecting the disks for this disk group, click Next.
If you have not installed Oracle software previously on this computer, then the
Create Inventory page appears.
11. Change the path for the inventory directory, if required. If you are using the same
directory names as this book, then it should show a value of
/u01/app/oraInventory. The oraInventory group name should show
oinstall.
Note: The path displayed for the inventory directory should be the
oraInventory subdirectory of the directory one level above the
Oracle base directory. For example, if you set the ORACLE_BASE
environment variable to /u01/app/oracle/ before starting OUI,
then the OraInventory path displayed is /u01/app/oraInventory.
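The relationship described in the note can be sketched as follows, deriving the default inventory path from the Oracle base setting used in this guide:

```shell
ORACLE_BASE=/u01/app/oracle
# oraInventory defaults to the directory one level above the Oracle base:
inventory="$(dirname "$ORACLE_BASE")/oraInventory"
echo "$inventory"
```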
If there are other checks that failed, but do not have a value of Yes in the
Fixable field, then you must configure the node to meet these requirements in
another window. After you have made the necessary adjustments, return to the
OUI window and click Check Again. Repeat as needed until all the checks have a
status of Succeeded. Click Next.
The Summary window appears.
13. Review the contents of the Summary window and then click Finish.
OUI displays a progress indicator allowing you to monitor the installation process.
14. As part of the installation process, you are required to run certain scripts as the
root user, as specified in the Execute Configuration Scripts window. Do
not click OK until you have run the scripts.
The Execute Configuration Scripts window shows configuration scripts, and the
path where the configuration scripts are located. Run the scripts on all nodes as
directed, in the order shown. For example, on Oracle Linux you perform the
following steps (note that for clarity, the examples show the current user, node and
directory in the prompt):
a. As the oracle user on racnode1, open a terminal window, and enter the
following commands:
[oracle@racnode1 oracle]$ cd /u01/app/oraInventory
[oracle@racnode1 oraInventory]$ su
b. Enter the password for the root user, and then enter the following command
to run the first script on racnode1:
[root@racnode1 oraInventory]# ./orainstRoot.sh
d. Enter the password for the root user, and then enter the following command
to run the first script on racnode2:
[root@racnode2 oraInventory]# ./orainstRoot.sh
Note: You must run the root.sh script on the first node and wait
for it to finish. You can run root.sh scripts concurrently on all other
nodes except for the last node on which you run the script. Like the
first node, the root.sh script on the last node must be run separately.
This command provides output showing if all the important cluster services, such
as gsd, ons, and vip, are started on the nodes of your cluster.
2. In the displayed output, you should see the Oracle Clusterware daemons are
online for each node in the cluster.
******************************************************************
racnode1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
******************************************************************
racnode2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
******************************************************************
If you see that one or more Oracle Clusterware resources are offline, or are
missing, then the Oracle Clusterware software did not install properly.
See Also:
■ "Installing the Oracle Grid Infrastructure for a Cluster Software"
on page 3-3
■ "About Oracle Grid Infrastructure for a Cluster and Oracle RAC"
on page 1-3
■ "Configuring Installation Directories and Shared Storage" on
page 2-18
2. Verify the changes have been made by executing the following commands:
[oracle]$ echo $ORACLE_SID
3. Install the Oracle Grid Infrastructure for a cluster software. See “Installing the
Oracle Grid Infrastructure for a Cluster Software” on page 3.
The Oracle ASM Configuration Assistant starts, and displays the Disk Groups
window.
3. Click the Create button at the bottom left-hand side of the window to create a disk
group.
The Create Disk Group window appears.
4. Provide the following information:
■ In the Disk Group Name field, enter a name for the new disk group, for
example, FRA.
■ Choose a Redundancy level, for example, Normal.
■ Select the disks to include in the new disk group.
If you used ASMLib to configure the disks for use with Oracle ASM, then the
available disks are displayed if you have the Show Eligible option selected,
and they have a Header Status of PROVISIONED.
After you have provided all the information, click OK. A progress window titled
DiskGroup: Creation appears. After a few minutes, a message appears indicating
the disk group was created successfully. Click OK to continue.
5. Repeat Steps 3 and 4 to create additional disk groups, or click Exit, then select Yes
to exit the utility.
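A disk group like the FRA group created in the ASMCA steps above can also be created from the command line with SQL*Plus connected to the Oracle ASM instance as SYSASM. A hedged sketch (the disk strings ORCL:DISK4 and ORCL:DISK5 are assumptions, not values from this guide; the guard keeps the example runnable on hosts without sqlplus):

```shell
# Sketch: create the FRA disk group from SQL*Plus instead of ASMCA.
# Disk names are illustrative assumptions.
SQL="CREATE DISKGROUP FRA NORMAL REDUNDANCY DISK 'ORCL:DISK4', 'ORCL:DISK5';"
if command -v sqlplus >/dev/null 2>&1; then
  echo "$SQL" | sqlplus -S / as sysasm
else
  echo "sqlplus not found; would run: $SQL"
fi
```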
When you start Oracle Universal Installer, the Configure Security Updates
window appears.
2. (Optional) Enter your email address and My Oracle Support password, then click
Next to continue.
If you want to receive notification by email of any newly discovered security
issues related to the software you are installing, then enter an email address in the
Email field. If you also want to receive the security updates through My Oracle
Support, then use the same email address that is registered with My Oracle
Support, select the I wish to receive security updates via My Oracle Support
option, and enter your My Oracle Support login password in the My Oracle
Support Password field.
If you provide an email address, then the Oracle Configuration Manager (OCM)
tool will also be installed. This utility provides Oracle Support Services with
configuration information about your system when creating service requests. You
can disable the OCM tool after installation, but Oracle strongly discourages this.
OCM does not access, collect, or store any personal information (except for
technical support contact information), or any business data files residing in your
software environment. For more information about OCM, see
http://www.oracle.com/technetwork/documentation/ocm-092152.html.
After you click Next, the Select Installation Option window appears.
3. If you want to create an Oracle RAC database, then select Create and configure a
database. If you want to create an Oracle RAC One Node database, then select
Install database software only. Click Next to continue.
The System Class window appears.
4. Choose Server Class, then click Next.
If you choose the Desktop Class option, then OUI installs a single-instance
database, not a clustered database.
The Node Selection screen appears.
5. Select the Real Application Clusters database installation type.
Select the nodes on which you want to install Oracle Database software and create
an Oracle RAC instance. All the available nodes in the cluster are selected by
default.
Note: Select both nodes during installation, even if you are creating
an Oracle RAC One Node database.
Click the SSH Connectivity button at the bottom of the window. The bottom
panel of the window displays the SSH Connectivity information.
6. Because you configured SSH connectivity between the nodes for the Oracle Grid
Infrastructure for a cluster installation, select the Reuse private and public keys
existing in user home option. If you are using a network user with a home
directory on shared storage, then also select the User home is shared by the
selected nodes option. Click Test.
A message window appears, indicating that passwordless SSH connectivity has
been established between the cluster nodes. Click OK to continue.
7. When returned to the Node Selection window, click Next to continue.
The Select Install Type window appears.
8. Choose the Typical install option, and click Next.
A typical installation requires minimal input. It installs the software and
optionally creates a general-purpose database. If you choose the Advanced
installation type (not documented in this guide), then you are prompted to
provide more information about how the database should be configured. For
example, you could set passwords for each user account individually, choose a
different template for the starter database, choose a nondefault language for the
database, and so on.
The Typical Install Configuration window appears.
9. In this window, you must provide the following information:
■ Oracle base location: The default value is /u01/app/oracle/. If you did
not set the ORACLE_BASE environment variable and the default location is
different from the directory location you have chosen, then enter the directory
for your Oracle base or click the Browse button to change the directory path.
■ Software location: If you did not set the ORACLE_HOME environment variable
before starting the installation, then enter the directory for your Oracle home
or click the Browse button to change the directory path.
■ Storage Type: In this drop-down list, choose Automatic Storage Management
(ASM). If you do not want to use Oracle ASM, then choose File System.
Because Oracle ASM was installed with the Oracle Grid Infrastructure for a
cluster, Oracle Automatic Storage Management is the default value.
■ Database file location: Choose the disk group to use for storing the database
files. You can use the same disk group that Oracle Clusterware uses. If you do
not want to use the same disk group that is currently being used to store the
Oracle Clusterware files, then you must exit the installation and create a new
disk group using Oracle ASM utilities. Refer to "Creating Additional Oracle
ASM Disk Groups" on page 3-11 for more information on creating a disk
group.
If you chose the File System storage type, then enter the directory location of
the shared storage where the database files will be created.
■ ASMSNMP Password: Enter the password for the ASMSNMP user. The
ASMSNMP user is used primarily by Oracle Enterprise Manager to monitor
Oracle ASM instances.
See Also: Oracle Real Application Clusters Installation Guide for Linux
and UNIX or other platform for information about creating an Oracle
RAC One Node database using DBCA
12. In the last step of the installation process, you are prompted to run the root.sh
script on both nodes, as specified in the Execute Configuration Scripts window.
Do not click OK until you have run the scripts on all nodes.
Perform the following steps to run the root.sh script (for clarity, the examples
show the current user, node, and directory in the prompt):
a. Open a terminal window as the oracle user on the first node. Change to your
Oracle home directory, and then switch to the root user by entering the
following commands:
[oracle@racnode1 oracle]$ cd /u01/app/oracle/product/11.2.0/dbhome_1
[oracle@racnode1 dbhome_1]$ su
b. Enter the password for the root user, and then run the script specified in the
Execute Configuration scripts window:
[root@racnode1 dbhome_1]# ./root.sh
Note: You can run the root.sh script simultaneously on all nodes
in the cluster for Oracle RAC installations or upgrades.
c. As the root.sh script runs, it prompts you for the path to the local bin
directory. The information displayed in brackets is the information it has
obtained from your system configuration. The script also writes the dbhome,
oraenv, and coraenv files in the /usr/local/bin directory. If these files exist,
then you are asked whether to overwrite them. After responding to a prompt,
press the Enter key. To accept the default choice, press the Enter key without
entering any text.
d. Enter commands similar to the following to run the script on the other nodes:
[root@racnode1 dbhome_1]# exit
[oracle@racnode1 dbhome_1]$ ssh racnode2
[oracle@racnode2 ~]$ cd /u01/app/oracle/product/11.2.0/dbhome_1
[oracle@racnode2 dbhome_1]$ su
e. Enter the password for the root user, and then run the script specified in the
Execute Configuration scripts window:
[root@racnode2 dbhome_1]# ./root.sh
After you finish executing the script on all nodes, return to the Execute
Configuration scripts window and click OK.
The Install Product window is displayed.
13. Click Next to complete the installation.
The Finish window is displayed, with the URL for Enterprise Manager Database
Control displayed.
14. Click Close to exit the installer.
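The root.sh sequence in step 12 can be sketched as a short loop. This is a hedged illustration only (the node names and Oracle home path are the examples used in this guide; the ssh invocation is printed rather than executed):

```shell
# Sketch of step 12: run root.sh on each cluster node in turn, as root.
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
for node in racnode1 racnode2; do
  # In a live cluster this would be run over ssh on each node as root.
  echo "on ${node}: su - root -c ${ORACLE_HOME}/root.sh"
done
```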
See Also:
■ "Configuring the Operating System Environment"
■ "Verifying Your Oracle RAC Database Installation"
■ "Performing Postinstallation Tasks"
■ "About Downloading and Installing RDBMS Patches"
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about configuring disk groups in
Oracle ASM
2. Run the following command to view the status of the resources managed by
Oracle Clusterware that contain the string 'ora':
[oracle] $ ./crsctl status resource -w "TYPE co 'ora'" -t
The output of the command should show that the Oracle Clusterware, Oracle
ASM, and Oracle Database resources are available (online) for each host. An
example of the output is:
------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
------------------------------------------------------------------------------
Local Resources
------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.LISTENER.lsnr
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.asm
ONLINE ONLINE racnode1 Started
ONLINE ONLINE racnode2 Started
ora.eons
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.gsd
OFFLINE OFFLINE racnode1
OFFLINE OFFLINE racnode2
ora.net1.network
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.ons
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
ora.registry.acfs
ONLINE ONLINE racnode1
ONLINE ONLINE racnode2
------------------------------------------------------------------------------
Cluster Resources
------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE racnode1
ora.oc4j
1 OFFLINE OFFLINE
ora.orcl.db
1 ONLINE ONLINE racnode1 Open
2 ONLINE ONLINE racnode2 Open
ora.racnode1.vip
1 ONLINE ONLINE racnode1
ora.racnode2.vip
1 ONLINE ONLINE racnode2
ora.scan1.vip
1 ONLINE ONLINE racnode1
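Rather than reading the table by eye, you can extract any OFFLINE entries with a short script. A sketch using a few of the sample lines above (in a live cluster, you would pipe the real `crsctl status resource -t` output into the file instead):

```shell
# Sketch: list each resource left OFFLINE in saved 'crsctl status resource -t'
# output. Sample lines embedded for illustration.
cat > /tmp/crs_res.txt <<'EOF'
ora.gsd
      OFFLINE OFFLINE      racnode1
      OFFLINE OFFLINE      racnode2
ora.orcl.db
 1    ONLINE  ONLINE       racnode1   Open
 2    ONLINE  ONLINE       racnode2   Open
EOF
OFFLINE_RES=$(awk '/^ora\./ {res=$1} /OFFLINE/ {print res}' /tmp/crs_res.txt | sort -u)
echo "offline resources: $OFFLINE_RES"
```

Note that ora.gsd and ora.oc4j are offline by default in Oracle Database 11g Release 2 installations, so their appearance in this list is not by itself a problem.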
Performing Postinstallation Tasks
See Also:
■ Oracle Clusterware Administration and Deployment Guide for more
information about using CVU and resolving configuration
problems
■ "Verifying Your Oracle RAC Database Installation" on page 3-16
See Also:
■ "Overview of Oracle RAC Database Backup and Recovery" on
page 6-1
■ "Performing Backups of Your Oracle Real Application Clusters
Database" on page 6-7
■ "Performing Postinstallation Tasks"
See Also:
■ "Using Oracle Universal Installer to Install the Oracle Grid
Infrastructure for a Cluster" on page 3-4
■ "Performing Postinstallation Tasks"
To verify Oracle Enterprise Manager Database Control has been started in your
new Oracle RAC environment:
1. Make sure the ORACLE_UNQNAME environment variable is set to the unique name
of the database to which you want to connect, for example orcl. Also make sure
the ORACLE_HOME environment variable is set to the location of the installed
Oracle Database software.
$ export ORACLE_UNQNAME=orcl
$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1
$ echo $ORACLE_UNQNAME
orcl
2. Go to the ORACLE_HOME/bin directory.
3. Run the following command as the oracle user:
$ ./emctl status dbconsole
The Enterprise Manager Control (EMCTL) utility displays the current status of the
Database Control console on the current node.
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation. All rights reserved.
https://racnode1.example.com:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory
/u01/app/oracle/product/11.2.0/dbhome_1/racnode1_orcl/sysman/log
4. If the EMCTL utility reports that Database Control is not started, then use the
following command to start it:
./emctl start dbconsole
Following a typical installation, Database Control serves console pages from the
node where the database was created. The console also monitors agents on all
nodes of the cluster. However, you can configure Enterprise Manager to provide
multiple Database Control consoles within a cluster by using EMCA.
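A hedged sketch of the status check described above (the environment values are the examples used in this guide; the guard keeps the sketch runnable on hosts without an Oracle home):

```shell
# Sketch: check Database Control status non-interactively with EMCTL.
export ORACLE_UNQNAME=orcl
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
if [ -x "$ORACLE_HOME/bin/emctl" ]; then
  "$ORACLE_HOME/bin/emctl" status dbconsole
else
  echo "emctl not found; would run: $ORACLE_HOME/bin/emctl status dbconsole"
fi
```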
See Also:
■ Oracle Database 2 Day DBA
■ "About Oracle RAC Management Using Enterprise Manager"
See Also:
■ "Verifying My Oracle Support Credentials" on page 3-2
■ "Obtaining the Patch" on page 10-1
■ "Download and Install Patch Updates" in Chapter 5 of Oracle Grid
Infrastructure Installation Guide for Linux
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX
See Also:
■ "About Operating System Users and Groups" on page 2-8
■ "Installing the Oracle Database Software and Creating a Database"
on page 3-10
■ "Performing Postinstallation Tasks" on page 3-18
■ Oracle Database Administrator's Reference for Linux and UNIX-Based
Operating Systems for more information about setting up optional
operating system user accounts you use to manage the database
About Converting an Oracle Database to an Oracle RAC Database
■ The hardware and operating system software used to implement your Oracle RAC
database must be certified for use with the release of the Oracle RAC software you
are installing.
■ You must configure shared storage for your Oracle RAC database.
■ You must verify that any applications that run against the Oracle RAC database do
not need any additional configuration before they can be used successfully with
the cluster database. This applies to both Oracle applications and database
features, such as Oracle Streams, and applications and products that do not come
from Oracle.
■ Backup procedures should be available before converting from a single-instance
Oracle Database to Oracle RAC.
■ For archiving in Oracle RAC environments, the archive log file format requires a
thread number.
■ The archived redo log files from all instances of an Oracle RAC database are
required for media recovery. If you archive to a file and you do not use a cluster
file system, or some other means to provide shared file systems, then you require a
method of accessing the archived redo log files from all nodes on which the cluster
database has instances.
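The archive log format requirement in the bullets above can be illustrated with a quick sketch. The format string includes the thread number (%t), which Oracle RAC requires so each instance's archived logs are distinguishable, along with the log sequence (%s) and resetlogs ID (%r); the expansion below is purely illustrative:

```shell
# Sketch: a LOG_ARCHIVE_FORMAT value embedding thread (%t), sequence (%s),
# and resetlogs ID (%r), with an illustrative expansion.
FMT='log_%t_%s_%r.arc'
NAME=$(echo "$FMT" | sed -e 's/%t/2/' -e 's/%s/145/' -e 's/%r/735912345/')
echo "$NAME"   # prints log_2_145_735912345.arc
```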
See Also: Oracle Real Application Clusters Installation Guide for Linux
and UNIX, or for a different platform, for a complete description of
this process
See Also: Oracle Real Application Clusters Installation Guide for Linux
and UNIX, or for a different platform, for a complete description of
this process
Converting an Oracle RAC Database into an Oracle RAC One Node Database
After you use the rconfig utility to convert a single-instance Oracle database into a
single-node Oracle RAC database, you can use the srvctl utility to convert the
database into an Oracle RAC One Node database. This functionality is available
starting with Oracle Database 11g Release 2 (11.2.0.2).
To convert your database to an Oracle RAC One Node database, use the following
command:
srvctl convert database -d <database_name> -c RACONENODE
An Oracle RAC One Node database must be part of a multi-node cluster to support
failover or online database relocation. You must either install Oracle Grid
Infrastructure for a cluster and Oracle RAC on at least two nodes, or add a node to
your existing single-node Oracle RAC database.
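A sketch of the conversion command and a follow-up configuration check (hedged; orcl is the database name used in this guide's examples, and the guard keeps the sketch runnable where srvctl is not on the PATH):

```shell
# Sketch: convert the sample database to Oracle RAC One Node, then verify
# the new configuration.
CMD="srvctl convert database -d orcl -c RACONENODE"
if command -v srvctl >/dev/null 2>&1; then
  $CMD && srvctl config database -d orcl
else
  echo "srvctl not found; would run: $CMD"
fi
```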
See Also:
■ Chapter 9, "Adding and Deleting Nodes and Instances" for more
information about adding nodes
■ "About Oracle RAC One Node" for more information about
Oracle RAC One Node
Cluster Databases
Web-based Oracle Enterprise Manager Database Control and Grid Control interfaces
let you manage Oracle Real Application Clusters (Oracle RAC) databases. The
Enterprise Manager console is a central point of control for the Oracle environment.
Use the Database Control console to initiate cluster database management tasks. Use
the Grid Control console to administer multiple Oracle RAC databases and cluster
nodes.
This chapter describes how to administer your Oracle RAC environment. It explains
the startup and shutdown tasks for database components and how to administer
parameters and parameter files in Oracle RAC. This chapter includes the following
sections:
■ About Oracle Real Application Clusters Database Management
■ About Oracle RAC One Node Database Management
■ About Oracle RAC Management Using Enterprise Manager
■ Starting and Stopping Oracle RAC Databases and Database Instances
■ About Oracle Real Application Clusters Initialization Parameters
■ About Administering Storage in Oracle RAC
Note: If you are using Oracle Database Standard Edition, then your
cluster must adhere to the license restrictions. See Oracle Database
Licensing Information for specific details on these restrictions.
An Oracle RAC database requires three components: cluster nodes, shared storage,
and Oracle Clusterware. Although you can choose how many nodes your cluster
should have and what type of shared storage to use, this guide describes one specific
configuration for a two-node cluster. This two-node configuration uses Oracle
Automatic Storage Management (Oracle ASM) for storage management and Recovery
Manager (RMAN) for the backup and recovery strategy.
Most administration tasks are the same for Oracle single-instance and Oracle RAC
databases. This guide provides additional instructions for database administration
tasks specific to Oracle RAC, and recommendations for managing Oracle RAC
databases.
See Also:
■ Oracle Database 2 Day DBA
■ Chapter 9, "Adding and Deleting Nodes and Instances"
Oracle RAC One Node databases are administered slightly differently from Oracle
RAC or single-instance databases. For administrator-managed Oracle RAC One Node
databases, you must monitor the candidate node list and make sure a server is always
available for failover if possible. Candidate servers reside in the Generic server pool
and the database and its services will fail over to one of those servers. For
policy-managed Oracle RAC One Node databases, you must ensure that the server
pools are configured such that a server will be available for the database to fail over to
in case its current node becomes unavailable. Also, for policy-managed Oracle RAC
One Node databases, the destination node for online database relocation must be
located in the database's server pool.
Oracle Real Application Clusters One Node (Oracle RAC One Node) is a single
instance of an Oracle Real Application Clusters (Oracle RAC) database that runs on
one node in a cluster. Instead of stopping and starting instances, you use Oracle RAC
One Node online database relocation to relocate the Oracle RAC One Node instance to
another server.
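A hedged sketch of online database relocation with SRVCTL (database name orcl and target node racnode2 are this guide's examples; the -w timeout option is omitted here):

```shell
# Sketch: relocate an Oracle RAC One Node database instance to another node.
CMD="srvctl relocate database -d orcl -n racnode2"
if command -v srvctl >/dev/null 2>&1; then
  $CMD
else
  echo "srvctl not found; would run: $CMD"
fi
```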
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about monitoring Oracle RAC
performance
To start and stop an entire Oracle RAC database, assuming you are using a
server parameter file (SPFILE):
1. Go to the following URL and log in to Enterprise Manager:
http://hostname:portnumber/em
2. On the Cluster Database Home page, in the General section, click Startup if the
database is down, or Shutdown if the database is started.
The Startup/Shutdown: Specify Credentials page appears.
3. Enter the host credentials for the cluster nodes. The host credentials are the user
name and password for a user who is a member of the OSDBA or OSOPER
operating system group.
The Startup/Shutdown: Select Operation page appears.
4. Click Select All to select all the instances, then click Shutdown to stop all the
database instances or Startup to start all the database instances.
The Startup/Shutdown: Confirmation page appears.
5. Click Yes.
To start and stop individual instances, go to the Startup/Shutdown: Select Operation
page and select the database instances, then click Startup or Shutdown to perform the
desired operation on the selected database instances. You can also start and shut down
instances using SQL*Plus or Server Control (SRVCTL).
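The SRVCTL alternative mentioned above can be sketched as follows (hedged; orcl is the database name used throughout this guide, and the guard keeps the sketch runnable without an Oracle installation):

```shell
# Sketch: stop and start the whole database, then one instance, with SRVCTL.
DB=orcl
for CMD in "srvctl stop database -d $DB" \
           "srvctl start database -d $DB" \
           "srvctl stop instance -d $DB -i ${DB}2"; do
  if command -v srvctl >/dev/null 2>&1; then
    $CMD
  else
    echo "would run: $CMD"
  fi
done
```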
Note: You can start and shut down individual instances from each
instance's home page. However, it is easier to perform instance startup
and shutdown operations directly from the Startup/Shutdown: Select
Operation page.
See Also:
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about using command-line interfaces
to start and stop Oracle RAC database instances
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about initialization parameters in an
Oracle RAC environment
Parameter Description
ARCHIVE_LAG_TARGET Different values for instances in your Oracle RAC database are
likely to increase overhead because of additional automatic
synchronization performed by the database processing.
When using Oracle Streams with your Oracle RAC database,
the value should be greater than zero.
CONTROL_MANAGEMENT_PACK_ACCESS This parameter controls the use of the
Diagnostics and Tuning Packs for Oracle Enterprise Manager Database Control
(Database Control). You should set the value for this parameter
on all instances to reflect whether you have purchased the
Diagnostics and Tuning Manageability Packs for your Oracle
RAC database.
DIAGNOSTIC_DEST As of Oracle Database 11g release 1 (11.1), the diagnostics for
each database instance are located in a dedicated directory,
which can be specified through the DIAGNOSTIC_DEST
initialization parameter. This parameter can be set on each
instance. Oracle recommends that each instance in a cluster
specify a DIAGNOSTIC_DEST directory location that is located
on shared disk and that the same value for DIAGNOSTIC_DEST
be specified for each instance.
LICENSE_MAX_USERS This parameter determines a database-wide limit on the
number of user accounts defined in the database. It is useful
to have the same value on all instances of your database so you
can see the current value no matter which instance you are
using. Setting different values may generate additional warning
messages during instance startup, or cause commands related
to database user account management to fail on some instances.
If you have concurrent usage (session) licensing, then set the
LICENSE_MAX_SESSIONS parameter instead. When using
LICENSE_MAX_SESSIONS, set the value on each instance so
that the sum of all the values is less than or equal to the total
number of sessions licensed for the database. Do not set both
LICENSE_MAX_SESSIONS and LICENSE_MAX_USERS.
LOG_ARCHIVE_FORMAT If you do not use the same value for all your instances, then you
complicate media recovery. The recovering instance expects the
required archived redo log file names to have the format
defined by its own value of LOG_ARCHIVE_FORMAT, regardless
of which instance created the archived redo log files.
Databases that support Oracle Data Guard, either to send or
receive archived redo log files, must use the same value of
LOG_ARCHIVE_FORMAT for all instances.
REDO_TRANSPORT_USER This parameter specifies the name of the user whose password
verifier is used when a remote login password file is used for
redo transport authentication. This parameter is used in Oracle
Data Guard configurations.
SPFILE The SPFILE resides in the $ORACLE_HOME/dbs directory;
however, users can place it in any storage accessible to the local
host if it is specified in an initialization parameter file.
If this parameter does not identify the same file to all instances,
then each instance may act differently and unpredictably in
failover, load-balancing, or standard operations. Additionally, a
change you make to the SPFILE using an ALTER SYSTEM SET
or ALTER SYSTEM RESET command is saved only in the
SPFILE used by the instance where you run the command. Your
change is not reflected in instances using different SPFILEs.
TRACE_ENABLED When TRACE_ENABLED is set to true, Oracle records
information in specific files when errors occur. Oracle Support
Services uses this information for debugging the Oracle
software. If you enable tracing on one instance, then you should
enable it on all instances to ensure that the diagnostic
information is recorded in a trace file.
UNDO_RETENTION By setting different values for UNDO_RETENTION in each
instance, you are likely to reduce scalability and encounter
unpredictable actions following a failover. Therefore, you
should carefully consider the potential benefits before you
assign different values for this parameter to the instances in
your Oracle RAC database.
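As the SPFILE row in the table above notes, an ALTER SYSTEM change is saved only in the SPFILE used by the instance where you run the command; with a shared SPFILE, you scope a change to one instance with the SID clause. A hedged sketch (instance name orcl2 is an assumption; SID='*' would apply the change to all instances):

```shell
# Sketch: scope an SPFILE parameter change to a single instance.
SQL="ALTER SYSTEM SET open_cursors=300 SCOPE=SPFILE SID='orcl2';"
if command -v sqlplus >/dev/null 2>&1; then
  echo "$SQL" | sqlplus -S / as sysdba
else
  echo "sqlplus not found; would run: $SQL"
fi
```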
To create and start a service, you can use either Enterprise Manager or SRVCTL.
When you use either tool to create and start the service, the SERVICE_NAMES
parameter is updated automatically once the service is active.
See Also:
■ "About Oracle Services" on page 7-2
About the Server Parameter File for Oracle Real Application Clusters
When you create the database, Oracle creates an SPFILE in the file location that you
specify. This location can be an Oracle ASM disk group or a file on a cluster file
system. In the environment described by this guide, the SPFILE is created on an Oracle
ASM disk group.
All instances in the cluster database use the same SPFILE at startup. Oracle RAC uses
a traditional parameter file only if an SPFILE does not exist or if you specify PFILE in
your STARTUP command. Oracle recommends that you use an SPFILE to simplify
administration, maintain parameter setting consistency, and to guarantee parameter
setting persistence across database shutdown and startup events. In addition, you can
configure RMAN to back up your SPFILE.
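The RMAN backup of the SPFILE mentioned above can be sketched as follows (guarded so the example still runs on hosts without the rman client):

```shell
# Sketch: back up the server parameter file with RMAN.
RMAN_CMD="BACKUP SPFILE;"
if command -v rman >/dev/null 2>&1; then
  echo "$RMAN_CMD" | rman target /
else
  echo "rman not found; would run in RMAN: $RMAN_CMD"
fi
```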
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about using a server parameter file in
an Oracle Real Application Clusters environment
■ Oracle Database Backup and Recovery User's Guide for information
about using RMAN to back up the SPFILE
Modifying the Initialization Parameter for Oracle RAC Using the Current Tab
The Current subpage of the Initialization Parameters page contains a list of
configuration parameters for that instance and database. You can set these parameters
to particular values to initialize many of the memory and process settings of an Oracle
instance. When you modify initialization parameters using the Current tab, the
changes are applied only to the running instances, not the SPFILE, unless the "Apply
changes in current running instance(s) mode to SPFile" option is selected.
The Instance column shows the instances for which the parameter has the value listed
in the table. An asterisk (*) indicates that the parameter has the same value for all
remaining instances of the cluster database. For example, if open_cursors = 250
for node1 and open_cursors = 300 for node2, then the Instance column for
open_cursors = 250 displays an asterisk, but the Instance column for
open_cursors = 300 contains "node2". This shorthand notation saves space when
the cluster database has many instances.
You can filter the Initialization Parameters page to show only those parameters that
meet the criteria of the filter you enter in the Filter by name field. Optionally, you can
select Show All to display on one page all parameters currently used by the running
instance(s).
Modifying the Initialization Parameter for Oracle RAC Using the SPFile Tab
You can also add or reset parameters using the SPFile tab. When you modify
initialization parameters using the SPFile tab, the changes are applied only to the
SPFILE, not the currently running instances, unless the "Apply changes in SPFile mode
to the current running instance(s)" option is selected.
Resetting parameters using the SPFile tab is different from resetting the same
parameters using the Current tab. You can either reset the parameter value for an
instance back to the default value for all instances, or you can delete the default
parameter setting (unset the parameter) for all instances.
■ If you reset a parameter with an asterisk in the Instance column, then the entry is
deleted from both the SPFILE and the table in Database Control. Only parameters
without an asterisk (or instance-specific parameters) remain.
■ If you reset the only entry for a nonasterisk parameter, then it is deleted from both
the SPFILE and the table in Database Control, but the parameter is replaced by a
dummy parameter with an empty value field and an asterisk in the Instance
column. This enables you to specify a new value for the parameter, add new
instance-specific entries for the parameter, and so on.
Resetting a parameter that is set for only one instance results in the parameter being
unset for all instances.
■ To change the value of a parameter, enter the new value in the Value field.
Optionally, you can put text in the Comment field describing the reason for
the change.
■ Click Reset to reset the value of the selected parameter.
When you click Reset, one of the following actions is performed:
– If the entry you selected was for a specific instance, then the value of the
selected parameter for that instance is reset to the value of the remaining
instances (indicated by an asterisk in the Instance column). The entry that
has the local instance name in the Instance field is deleted.
– If the entry you selected to reset was the default values for all instances
(indicated by an asterisk in the Instance column), then the value of the
selected parameter is unset for all instances, but any instance-specific
parameter entries for the same parameter are not changed.
– If you reset the only entry for a parameter, regardless of whether the entry
applies to all instances or a single instance, then the parameter is unset for
all instances in the cluster database.
5. After you make changes to one or more of the parameters, click Apply to accept
and implement the changes.
Using the Initialization Parameters page with the SPFile tab selected, if you click Reset
for *.open_cursors, then Enterprise Manager deletes that entry from both the
SPFILE and the displayed list of parameters, leaving only RACDB2.open_cursors =
250 displayed.
If you click Reset for RACDB2.open_cursors, then Enterprise Manager also deletes
this parameter entry from both the SPFILE and the displayed list of parameters, but
then a new entry, *.open_cursors = <NULL> is added to the displayed list of
parameters for the reset parameter.
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about using a server parameter file in
an Oracle Real Application Clusters environment
See Also:
■ Oracle Database 2 Day DBA for more information about managing
the undo data for your database
See Also:
■ Oracle Database 2 Day DBA
■ "About Oracle Automatic Storage Management"
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Automatic Storage Management Administrator's Guide for
information about how to use SQL*Plus to administer Oracle
ASM instances
Note: If you installed the Oracle Cluster Registry (OCR) and Voting
Disks on Oracle ASM as part of your Oracle Grid Infrastructure for a
cluster install, then the Oracle ASM instances are created by OUI and
you do not have to run ASMCA. You must use ASMCA only if you
did not specify Oracle ASM storage for the OCR and Voting disks
during installation.
You do not have to create a database to modify Oracle ASM storage properties.
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Automatic Storage Management Administrator's Guide for
information about how to use the Oracle Automatic Storage
Management command-line utility
See Also:
■ Oracle Automatic Storage Management Administrator's Guide
■ Oracle Database 2 Day DBA
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Automatic Storage Management Administrator's Guide
About Redo Log Groups and Redo Threads in Oracle RAC Databases
Redo logs contain a record of changes that have been made to data files. In a
single-instance Oracle database, redo logs are stored in two or more redo log file
groups. Each of these groups contains a redo log file and possibly one or more
mirrored copies of that file. In an Oracle RAC database, each instance requires its own
set of redo log groups, which is known as a redo thread. Mirrored copies of the redo
log files provide your system with extra protection against data loss that is due to
hardware failures or data corruption. If a redo log file is unreadable, then Oracle
Database attempts to access its mirrored copy. The redo log file mirrors should be
located on different disk devices from the primary redo log files.
Each instance's redo thread must contain at least two redo log groups. Each redo log
group should contain at least two members: a redo log and its mirrored copy. If you
create your Oracle RAC database using DBCA, then your Oracle RAC database
automatically implements a configuration that meets the Oracle recommendations.
You should create redo log groups only if you are using administrator-managed
databases. For policy-managed databases, if an instance starts due to a change in
server pool cardinality, then Oracle Database automatically creates redo log files,
enables a redo thread for the instance if there is not a redo thread allocated to that
instance, and creates the undo tablespace if there is not an undo tablespace allocated to
that instance. The database must be using Oracle Managed Files and Oracle ASM in
this situation. See Oracle Real Application Clusters Administration and Deployment Guide
for more information.
In an Oracle RAC database, all the redo log files reside on shared storage. In addition,
each instance must have access to the redo log files of all the other instances in the
cluster. If your Oracle RAC database uses Oracle ASM, then Oracle ASM manages the
shared storage for the redo log files and the access to those files.
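The per-thread layout described above can be inspected with a query like the following sketch against the standard V$LOG view. Because all redo log files are on shared storage, every thread is visible from any instance:

```sql
-- Each row is one redo log group; THREAD# identifies the owning instance
SELECT thread#, group#, members, status
FROM   v$log
ORDER BY thread#, group#;
```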
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Automatic Storage Management Administrator's Guide
Using Enterprise Manager to View and Create Online Redo Log Files
On the Redo Log Groups page, you can create additional redo log groups and add
members to the redo log group. The Thread column identifies the instance, or redo
thread, to which a redo log file belongs.
See Also:
■ Oracle Real Application Clusters Administration and Deployment
Guide for additional information about redo threads in an Oracle
RAC environment
■ Oracle Automatic Storage Management Administrator's Guide
■ Oracle Database 2 Day DBA for more information about creating
online redo log files
If a node cannot access the minimum required number of voting disks, then it is
evicted, or removed, from the cluster. After the cause of the failure has been
corrected and access to the voting disks
has been restored, you can instruct Oracle Clusterware to recover the failed node and
restore it to the cluster.
You can start the Oracle Clusterware stack on specific nodes by using the -n option
followed by a space-delimited list of node names, for example:
crsctl start cluster -n racnode1 racnode4
To use the previous command, the OHASD process must be running on the specified
nodes.
To start the entire Oracle Clusterware stack on a node, including the OHASD process,
run the following command on that node:
crsctl start crs
To stop Oracle Clusterware and Oracle ASM on all nodes in the cluster, run the
following command:
crsctl stop cluster -all
This command stops the resources managed by Oracle Clusterware, the
Oracle ASM instance, and all the Oracle Clusterware processes (except for OHASD
and its dependent processes).
To stop Oracle Clusterware and Oracle ASM on select nodes, include the -n option
followed by a space-delimited list of node names, for example:
crsctl stop cluster -n racnode1 racnode3
If you do not include either the -all or the -n option in the stop cluster
command, then Oracle Clusterware and its managed resources are stopped only on the
node where you execute the command.
To completely shut down the entire Oracle Clusterware stack, including the OHASD
process, use the crsctl stop crs command. CRSCTL attempts to gracefully stop
the resources managed by Oracle Clusterware during the shutdown of the Oracle
Clusterware stack. If any resources that Oracle Clusterware manages are still running
after executing the crsctl stop crs command, then the command fails. You must
then use the -f option to unconditionally stop all resources and stop the Oracle
Clusterware stack, for example:
crsctl stop crs -f
Note: When you shut down the Oracle Clusterware stack, you also
shut down the Oracle Automatic Storage Management (Oracle ASM)
instances. If the Oracle Clusterware files (voting disk and OCR) are
stored in an Oracle ASM disk group, then the only way to shut down
the Oracle ASM instances is to shut down the Oracle Clusterware
stack.
2. Run the following command as the root user to remove a voting disk:
crsctl delete css votedisk path
2. Use CRSCTL to create a new voting disk in the same location, for example:
crsctl add css votedisk /dev/sda3
To move voting disks from shared storage to an Oracle ASM disk group:
1. Use the Oracle ASM Configuration Assistant (ASMCA) to create an Oracle ASM
disk group.
2. Verify that the ASM Compatibility attribute for the disk group is set to 11.2.0.0 or
higher.
3. Use CRSCTL to create a voting disk in the Oracle ASM disk group by specifying
the disk group name in the following command:
crsctl replace votedisk +ASM_disk_group
See Also:
■ "Creating Additional Oracle ASM Disk Groups" on page 3-11
■ Oracle Automatic Storage Management Administrator's Guide for
more information about disk group compatibility attributes
■ Oracle Clusterware Administration and Deployment Guide for more
information about managing voting disks
The date and identifier of the recently generated OCR backup is displayed.
3. (Optional) If you must change the location for the OCR backup files, then use the
following command, where directory_name is the new location for the
backups:
[root]# ocrconfig -backuploc directory_name
The default location for generating backups on Oracle Linux systems is Grid_
home/cdata/cluster_name where cluster_name is the name of your cluster and
Grid_home is the home directory of your Oracle Grid Infrastructure for a cluster
software. Because the default backup is on a local file system, Oracle recommends that
you include the backup file created with the ocrconfig command as part of your
operating system backup using standard operating system or third-party tools.
See Also:
■ "About the OCRCHECK Utility"
■ "Repairing an OCR Configuration on a Local Node"
■ "Replacing an OCR"
■ "Backing Up and Recovering the Oracle Cluster Registry"
3. Review the contents of the backup using the following ocrdump command, where
file_name is the name of the OCR backup file for which the contents should be
written out to the file ocr_dump_output_file:
[root]# ocrdump ocr_dump_output_file -backupfile file_name
If you do not specify an output file name, then the OCR contents are written to a
file named OCRDUMPFILE in the current directory.
4. As the root user, stop Oracle Clusterware on all the nodes in your cluster by
executing the following command:
[root]# crsctl stop cluster -all
5. As the root user, restore the OCR by applying an OCR backup file that you
identified in Step 1 using the following command, where file_name is the name
of the OCR to restore. Ensure that the OCR devices that you specify in the OCR
configuration exist, and that these OCR devices are valid before running this
command.
[root]# ocrconfig -restore file_name
6. As the root user, restart Oracle Clusterware on all the nodes in your cluster by
running the following command:
[root]# crsctl start cluster -all
7. Use the Cluster Verification Utility (CVU) to verify the OCR integrity. Exit the
root user account, and, as the software owner of the Oracle Grid Infrastructure
for a cluster installation, run the following command, where the -n all
argument retrieves a list of all the cluster nodes that are configured as part of your
cluster:
cluvfy comp ocr -n all [-verbose]
Note: The operations in this section affect the OCR for the entire
cluster. However, the ocrconfig command cannot modify OCR
configuration information for nodes that are shut down or for nodes
on which Oracle Clusterware is not running. Avoid shutting down
nodes while modifying the OCR using the ocrconfig command.
This command updates the OCR configuration on all the nodes on which Oracle
Clusterware is running.
To move the OCR from shared storage to an Oracle ASM disk group:
1. Use the Oracle ASM Configuration Assistant (ASMCA) to create an Oracle ASM
disk group that is at least the same size as the existing OCR and has at least normal
redundancy.
2. Verify that the ASM Compatibility attribute for the disk group is set to 11.2.0.0 or
higher.
3. Run the following OCRCONFIG command as the root user, specifying the Oracle
ASM disk group name:
# ocrconfig -add +ASM_disk_group
You can run this command more than once if you add multiple OCR locations.
You can have up to five OCR locations. However, each successive run must point
to a different disk group.
4. Remove the non-Oracle ASM storage locations by running the following
command as the root user:
# ocrconfig -delete old_storage_location
You must run this command once for every shared storage location for the OCR
that is not using Oracle ASM.
See Also:
■ "Creating Additional Oracle ASM Disk Groups" on page 3-11
■ Oracle Automatic Storage Management Administrator's Guide for
more information about disk group compatibility attributes
■ Oracle Clusterware Administration and Deployment Guide for more
information about migrating the OCR to Oracle ASM
Replacing an OCR
If you must change the location of an existing OCR, or change the location of a failed
OCR to the location of a working one, then you can use the following procedure if one
OCR file remains online.
Note: The OCR that you are replacing can be either online or offline.
2. Use the following command to verify that Oracle Clusterware is running on the
node on which you are going to perform the replace operation:
crsctl check cluster -all
3. As the root user, enter the following command to designate a new location for
the specified OCR file:
[root]# ocrconfig -replace source_ocr_file -replacement destination_ocr_file
This command updates the OCR configuration on all the nodes on which Oracle
Clusterware is running.
4. Use the OCRCHECK utility to verify that the OCR replacement file is online:
ocrcheck
Removing an OCR
To remove an OCR file, at least one copy of the OCR must be online. You can remove
an OCR location to reduce OCR-related overhead or to stop mirroring your OCR
because you moved the OCR to a redundant storage system, such as a redundant
array of independent disks (RAID).
2. As the root user, run the following command on any node in the cluster to
remove a specific OCR file:
[root]# ocrconfig -delete ocr_file_name
This command updates the OCR configuration on all the nodes on which Oracle
Clusterware is running.
In contrast, the ocrconfig -repair commands, which repair the OCR
configuration on a local node, update the OCR configuration only on the node from
which you run the command.
The OCRCHECK utility performs a block-by-block checksum operation on
each block. It also returns an individual status for each OCR file and a result for the
overall OCR integrity check. The following is a sample of the OCRCHECK output:
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262144
Used space (kbytes) : 16256
Available space (kbytes) : 245888
ID : 570929253
Device/File Name : +CRS_DATA
Device/File integrity check succeeded
...
Device/File not configured
The OCRCHECK utility creates a log file in the following directory, where Grid_home
is the location of the Oracle Grid Infrastructure for a cluster installation, and
hostname is the name of the local node:
Grid_home/log/hostname/client
The log files have names of the form ocrcheck_nnnnn.log, where nnnnn is the
process ID of the session that issued the ocrcheck command.
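Because each run of OCRCHECK writes a new log named after the process ID, finding the most recent log is a simple directory listing. The following shell sketch assumes a hypothetical Grid home path and the directory layout described above:

```shell
# Hedged sketch: print the most recently modified OCRCHECK log under
# Grid_home/log/<hostname>/client. The Grid home path is an assumption;
# substitute your own.
newest_ocrcheck_log() {
  dir="$1/log/$(hostname -s)/client"
  # ls -t sorts newest first; head -1 keeps only the latest log
  ls -t "$dir"/ocrcheck_*.log 2>/dev/null | head -1
}

# Example call with a hypothetical Grid home
newest_ocrcheck_log /u01/app/11.2.0/grid
```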
See Also:
■ "Backing Up and Recovering the Oracle Cluster Registry"
■ "Viewing Available OCR Backups"
See Also:
■ "About Verifying the Oracle Clusterware Installation"
■ "Replacing an OCR"
■ "Adding an OCR Location"
■ "Repairing an OCR Configuration on a Local Node"
To protect the database against data loss and reconstruct the database after data loss,
you must devise, implement, and manage a backup and recovery strategy. This
chapter describes how to back up and recover an Oracle Real Application Clusters
(Oracle RAC) database.
This chapter contains the following sections:
■ Overview of Oracle RAC Database Backup and Recovery
■ About the Fast Recovery Area in Oracle RAC
■ Archiving the Oracle Real Application Clusters Database Redo Logs
■ About Preparing for Backup and Recovery Operations
■ Performing Backups of Your Oracle Real Application Clusters Database
■ About Preparing to Restore and Recover Your Oracle RAC Database
■ Recovering Your Oracle Real Application Clusters Database
■ About Managing Your Database Backup Files
■ Displaying Backup Reports for Your Oracle Real Application Clusters Database
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Database Backup and Recovery User's Guide for more
information about using the Recovery Manager utility
The Enterprise Manager Guided Recovery capability provides a Recovery Wizard that
encapsulates the logic required for a wide range of file restoration and recovery
scenarios, including the following:
■ Complete restoration and recovery of the database
■ Point-in-time recovery of the database or selected tablespaces
■ Flashback Database
■ Other flashback features of Oracle Database for logical-level repair of unwanted
changes to database objects
■ Media recovery at the block level for data files with corrupt blocks
If the database files are damaged or need recovery, then Enterprise Manager can
determine which parts of the database must be restored from a backup and recovered,
including early detection of situations such as corrupted database files. Enterprise
Manager guides you through the recovery process, prompting for needed information
and performing the required recovery actions.
See Also:
■ "Performing Backups of Your Oracle Real Application Clusters
Database"
■ "Recovering Your Oracle Real Application Clusters Database"
■ "About Managing Your Database Backup Files"
■ Oracle Database 2 Day DBA
redo log data. For creating backups of your Oracle RAC database, the strategy that you
choose depends on how you configure the archiving destinations for each node.
Whether only one node or all nodes perform archived redo log backups, you must
ensure that the archived redo logs for every instance are backed up.
To back up the archived redo logs from a single node, that node must have access to the
archived redo log files of the other instances. The archived redo log naming scheme that
you use is important because when a node writes to a log with a specific file name on
its file system, the file must be readable by any node that must access this archived
redo log. For example, if node1 archives a log to /oracle/arc_dest/log_1_100_
23452345.arc, then node2 can back up this archived redo log only if it can
read /oracle/arc_dest/log_1_100_23452345.arc on its own file system.
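The example file name above encodes the thread number, log sequence number, and resetlogs identifier. As a sketch, assuming a naming scheme like the one shown (which matches a LOG_ARCHIVE_FORMAT such as log_%t_%s_%r.arc), the components can be recovered in the shell:

```shell
# Hedged sketch: split an archived redo log file name of the form
# log_<thread#>_<sequence#>_<resetlogs_id>.arc into its components.
# The naming pattern is an assumption based on the example above.
parse_arc_name() (
  base=$(basename "$1" .arc)    # e.g. log_1_100_23452345
  IFS=_                         # subshell body, so IFS change does not leak
  set -- $base                  # -> log 1 100 23452345
  echo "thread=$2 sequence=$3 resetlogs=$4"
)

parse_arc_name /oracle/arc_dest/log_1_100_23452345.arc
# prints: thread=1 sequence=100 resetlogs=23452345
```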
See Also:
■ Oracle Database 2 Day DBA
■ Oracle Automatic Storage Management Administrator's Guide
See Also:
■ "About Archived Redo Log Files for an Oracle RAC Database"
■ "Performing Backups of Your Oracle Real Application Clusters
Database"
■ Oracle Database 2 Day DBA for more information about RMAN
backups
■ Oracle Database 2 Day DBA for more information about
configuring backup device settings
that is separate from the Oracle ASM disk group used for your data files. Alternatively,
you can use a cluster file system archiving scheme.
See Also:
■ "About Archived Redo Log Files for an Oracle RAC Database"
■ "About Configuring Initialization Parameters for an Oracle RAC
Database"
■ "Editing Initialization Parameter Settings for an Oracle RAC
Database"
■ Oracle Database 2 Day DBA
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about configuring and managing
archived redo log files for an Oracle RAC database
See Also:
■ "Verifying Oracle Enterprise Manager Operations"
■ "About Operating System Users and Groups"
See Also:
■ "About Operating System Users and Groups"
■ "About Configuring User Accounts"
■ Oracle Database 2 Day DBA
See Also:
■ "Configuring Archiving for Your Oracle RAC Database"
■ "Overview of Oracle RAC Database Backup and Recovery"
■ Oracle Database 2 Day DBA for more information about
configuring backup policy settings
■ Oracle Database 2 Day DBA for more information about
configuring backup settings
See Also:
■ "About Operating System Users and Groups"
■ "About Credentials for Performing Backup and Recovery"
■ Oracle Database 2 Day DBA for more information about
configuring your database for backup and recovery
■ Oracle Database 2 Day DBA for more information about performing
and scheduling backups using Enterprise Manager Database
Control
then allocate multiple channels to provide RMAN access to all the archived redo log
files.
You can configure RMAN to automatically delete the archived redo log files from disk
after they have been safely backed up. This feature helps to reduce the disk space used
by your Oracle RAC database, and prevent an unnecessary outage that might occur if
you run out of available disk space.
To configure RMAN to automatically delete the archived redo log files from
disk after they have been safely backed up, when creating or scheduling your
database backups:
1. On the Cluster Database Home page, select Availability.
The Cluster Database Availability page appears.
2. In the Backup/Recovery section, under the heading Manage, select Schedule
Backup.
3. Choose a backup type and click Schedule Customized Backup.
4. While specifying the options for your backup, select Also back up all archived
logs on disk if you are performing an online backup. There is no need to back up
archived redo log files when performing an offline backup because the database is
in a consistent state at the time of backup and does not require media recovery if
you restore.
5. Select Delete all archived logs from disk after they are successfully backed up if
you are using shared storage for your archived redo log files.
Note: Do not select Delete all archived logs from disk after they are
successfully backed up if you are using a fast recovery area as your
only archive log destination. In this case, archived redo log files that
have been backed up are deleted automatically as space is needed for
storage of other files.
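For reference, the Enterprise Manager options above correspond roughly to RMAN commands such as the following sketch; the exact commands Enterprise Manager generates may differ:

```
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DEVICE TYPE DISK;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
```

The persistent deletion policy protects archived logs that have not yet been backed up, while DELETE INPUT removes each archived log after it is copied into the backup.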
See Also:
■ "Performing Backups of Your Oracle Real Application Clusters
Database"
■ "About Archived Redo Log Files for an Oracle RAC Database"
■ Oracle Database 2 Day DBA
See Also:
■ "About Oracle Real Application Clusters"
■ "Overview of Oracle RAC Database Backup and Recovery"
■ "Archiving the Oracle Real Application Clusters Database Redo
Logs"
■ "About the Fast Recovery Area in Oracle RAC"
■ "Performing Backups of Your Oracle Real Application Clusters
Database"
About Putting the Oracle RAC Database Instances into the Correct State
Recovery of a failed instance in Oracle RAC is automatic. If an Oracle RAC database
instance fails, then a surviving database instance processes the online redo logs
generated by the failed instance to ensure that the database contents are in a consistent
state. When recovery completes, Oracle Clusterware attempts to restart the failed
instance automatically.
Media recovery is a manual process that occurs while a database is closed. A media
failure is the failure of a read or write operation of a disk file required to run the
database, due to a physical problem with the disk such as a head malfunction. Any
database file can be vulnerable to a media failure. If a media failure occurs, then you
must perform media recovery to restore and recover the damaged database files.
Media recovery is always done by one instance in the cluster.
Before starting media recovery, the instance that is performing the recovery should be
started in MOUNT mode. The other instances should be started in NOMOUNT mode.
See Also:
■ "Starting and Stopping Oracle RAC Databases and Database
Instances"
■ "About Preparing to Restore and Recover Your Oracle RAC
Database"
■ Oracle Database 2 Day DBA
See Also:
■ "About Archived Redo Log Files for an Oracle RAC Database"
■ "Configuring Archiving for Your Oracle RAC Database"
■ Oracle Database Backup and Recovery User's Guide for more
information about restoring archived redo log files
See Also:
■ "Recovering Your Oracle Real Application Clusters Database"
■ "Overview of Oracle RAC Database Backup and Recovery"
■ Oracle Database 2 Day DBA for more information about
incremental backups of data files
■ Oracle Database 2 Day DBA for more information about
configuring recovery settings
To use Enterprise Manager and RMAN to restore and recover an Oracle RAC
database:
1. On the Cluster Database Home Page, select Availability.
See Also:
■ "About Preparing to Restore and Recover Your Oracle RAC
Database"
■ "About Credentials for Performing Backup and Recovery"
■ Oracle Database 2 Day DBA for more information about performing
user-directed recovery
5. In the Backup Information section, select Use Other Backup Information and Use
an Autobackup.
6. On the Perform Recovery: Restore SPFILE page, specify a different location for the
SPFILE to be restored to.
7. When finished selecting your options, click Restore, then click Yes to confirm you
want to restore the SPFILE.
8. After the SPFILE is restored, you are prompted to log in to the database again.
See Also:
■ "Starting and Stopping Oracle RAC Databases and Database
Instances"
■ "About the Server Parameter File for Oracle Real Application
Clusters"
■ Oracle Database Backup and Recovery User's Guide for more
information about recovering a server parameter file
See Also:
■ Oracle Database 2 Day DBA for more information about these
topics and details on how to perform these tasks
The View Backup Report page appears, with a list of recent backup jobs.
3. In the Search section, specify any filter conditions and click Go to restrict the list to
backups of interest.
You can use the Search section of the page to restrict the backups listed by the time
of the backup, the type of data backed up, and the status of the jobs (whether it
succeeded or failed, and whether warnings were generated during the job).
4. To view detailed information about any backup, click the backup job name in the
Backup Name column.
The Backup Report page is displayed for the selected backup job. This page
contains summary information about this backup job, such as how many files of
each type were backed up, the total size of the data backed up, and the number,
size, and type of backup files created.
The Backup Report page also contains a Search section that you can use to quickly
run a search for another backup job or backup jobs from a specific date range. The
resulting report contains aggregate information for backup jobs matching the
search criteria.
See Also:
■ "About Managing Your Database Backup Files"
■ "Performing Backups of Your Oracle Real Application Clusters
Database"
■ "Overview of Oracle RAC Database Backup and Recovery"
■ Oracle Database 2 Day DBA
Using workload management, you can distribute the workload across database
instances to achieve optimal database and cluster performance for users and
applications. This chapter contains the following sections:
■ About Workload Management
■ Creating Services
■ Administering Services
■ Configuring Clients for High Availability
When a user or application connects to a database, Oracle recommends that you use a
service for the connection. Oracle Database automatically creates one database service
when the database is created. For many installations, this may be all you need. For
more flexibility in the management of the workload using the database, Oracle
Database enables you to create multiple services and specify which database instances
offer the services.
See Also:
■ "Creating Services" on page 7-11
■ "Administering Services" on page 7-15
■ "About Workload Management" on page 7-1
■ Oracle Database 2 Day DBA
■ Oracle Database Administrator's Guide
See Also:
■ "About FAN Callouts" on page 7-6
■ "Creating Services" on page 7-11
■ "About Workload Management" on page 7-1
See Also:
■ "About FAN Callouts" on page 7-6
■ "Creating Services" on page 7-11
■ "About Workload Management" on page 7-1
■ manual: Requires you to start the service manually after the database starts. Prior
to Oracle Database 11g release 2 (11.2), all services worked as though they were
defined with a manual management policy.
See Also:
■ "Creating Services" on page 7-11
■ "About Workload Management" on page 7-1
database resources. The Database Resource Manager enables an Oracle RAC database
running on one or more nodes to support multiple applications and mixed workloads
with optimal efficiency.
The Database Resource Manager provides the ability to prioritize work within an
Oracle database or your Oracle RAC environment. For example, high priority users,
such as online workers, would get more resources to minimize response time, while
lower priority users, such as batch jobs or reports, would get fewer resources, and
could take longer to run. This allows for more granular control over resources.
Resources are allocated to users according to a resource plan specified by the database
administrator. The following terms are used in specifying a resource plan:
■ A resource plan specifies how the resources are to be distributed among various
users (resource consumer groups).
■ Resource consumer groups allow the administrator to group user sessions by
resource requirements. Resource consumer groups are different from user roles;
one database user can have different sessions assigned to different resource
consumer groups.
■ Resource allocation methods are the methods or policies used by the Database
Resource Manager when allocating for a particular resource. Resource allocation
methods are used by resource consumer groups and resource plans. The database
provides the resource allocation methods that are available, but the DBA
determines which method to use.
■ Resource plan directives are a means of assigning consumer groups to particular
plans and partitioning resources among consumer groups by specifying
parameters for each resource allocation method.
■ Subplans, which the DBA can create within a resource plan, allow further
subdivision of resources among different users of an application.
■ Levels provide a mechanism to specify distribution of unused resources among
available users. Up to eight levels of resource allocation can be specified.
The Database Resource Manager enables you to map a resource consumer group to a
service so that users who connect using that service are members of the specified
resource consumer group, and thus restricted to the resources available to that
resource consumer group.
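The service-to-consumer-group mapping described above is configured through the DBMS_RESOURCE_MANAGER package. The following PL/SQL sketch uses hypothetical service and consumer group names (OLTP, HIGH_PRI):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  -- Sessions connecting through service OLTP join consumer group HIGH_PRI
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.SERVICE_NAME,
    value          => 'OLTP',
    consumer_group => 'HIGH_PRI');
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```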
To learn more about managing the Database Resource Manager using Enterprise
Manager:
1. Access the Database Home page.
2. At the top of the page, click Server to display the Server page.
3. In the Resource Manager section, click Getting Started.
See Also:
■ "About Workload Management" on page 7-1
■ Oracle Database Administrator's Guide for more information about
the Database Resource Manager
See Also:
■ "About Oracle Clusterware" on page 5-1
■ "About Workload Management" on page 7-1
executable written in any programming language. Some examples of how you can use
FAN callouts to automate the actions performed when events occur in a cluster
configuration are as follows:
■ Starting and stopping server-side applications
■ Relocating low-priority services when high-priority services come online
■ Sending text or numeric messages to pagers
■ Executing shell scripts
The executable files for FAN callouts are stored in the Grid_home/racg/usrco
subdirectory. If this subdirectory does not exist in your Grid home, then you must
create this directory with the same permissions and ownership as the Grid_
home/racg/tmp subdirectory.
All executables in the Grid_home/racg/usrco subdirectory are executed
immediately, in an asynchronous fashion, when a FAN event is received through
ONS. A copy of the executable files used by FAN callouts should be available on every
node that runs Oracle Clusterware. Example callout scripts are available on Oracle
Technology Network at
http://www.oracle.com/technetwork/database/enterprise-edition/tw
pracwkldmgmt-132994.pdf
See Also:
■ "About Connection Load Balancing" on page 7-8
■ "About the Load Balancing Advisory" on page 7-7
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about configuring Fast Application
Notification and FAN callouts
JDBC, Oracle Database 11g ODP.NET, and Oracle Database 11g Oracle Call Interface
(OCI).
You configure your Oracle RAC environment to use the Load Balancing Advisory by
defining service-level goals for each service used. Defining a service-level goal enables
the Load Balancing Advisory for that service and enables the publication of FAN load
balancing events. There are two types of service-level goals for Run-time Connection
Load Balancing:
■ Service Time—The Load Balancing Advisory attempts to direct work requests to
instances according to their response time. Load Balancing Advisory data is based
on the elapsed time for work done by connections using the service, and the
available bandwidth to the service. This goal is best suited for workloads that
require varying lengths of time to complete, for example, an internet shopping
system.
■ Throughput—The Load Balancing Advisory measures the percentage of the total
response time that the CPU consumes for the service. This measures the efficiency
of an instance, rather than the response time. This goal is best suited for workloads
where each work request completes in a similar amount of time, for example, a
trading system.
If you do not select the Enable Load Balancing Advisory option, then the service-level
goal is set to None, which disables load balancing for that service.
See Also:
■ "About Fast Application Notification (FAN)" on page 7-6
■ "Configuring Clients for High Availability" on page 7-17
■ "Administering Services" on page 7-15
■ "About Workload Management" on page 7-1
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about integrated Oracle clients
SCAN represents the SCAN for your cluster. Specifying the TCP port identifier is
optional. The service_name is the name of a database service.
You can also use Net Configuration Assistant (NETCA) to create a net service name, a
simple name for the database service. The net service name resolves to the connect
descriptor, which is the network address of the database and the name of the database
service. The address portion of the connect descriptor is actually the protocol address
of the listener. The client uses a connect descriptor to specify the database or instance
to which the client wants to connect.
When a net service name is used, establishing a connection to a database instance
takes place by first mapping the net service name to the connect descriptor. This
mapped information is stored in one or more repositories of information that are
accessed using naming methods. The most commonly used naming method is Local
Naming, where the net service names and their connect descriptors are stored in a
localized configuration file named tnsnames.ora.
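For example, a Local Naming entry in tnsnames.ora might look like the following sketch, in which the net service name resolves to a connect descriptor whose address is the SCAN listener. The host, port, and service names are hypothetical:

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = docrac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.example.com)
    )
  )
```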
When the client connects to the cluster database using a service, you can use the
Oracle Net connection load balancing feature to spread user connections across all the
instances that are supporting that service. There are two types of load balancing that
you can implement: client-side and server-side load balancing. In an Oracle RAC
database, client connections should use both types of connection load balancing. When
you create an Oracle RAC database using Oracle Database Configuration Assistant
(DBCA), DBCA configures and enables server-side load balancing by default.
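For illustration, a local naming entry in tnsnames.ora might look like the following, where DEVUSERS is the net service name and the connect descriptor points to the SCAN and a database service (all names here are hypothetical):

```
DEVUSERS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = docrac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DEVUSERS.example.com)
    )
  )
```

Because the SCAN name typically resolves to multiple IP addresses, client-side load balancing is performed when the SCAN is resolved, and server-side load balancing is performed by the SCAN listener when it directs the connection to the least-loaded instance offering the service.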
See Also:
■ "Verifying Oracle Net Supports Newly Created Services" on
page 7-16
■ Oracle Database 2 Day DBA
Note: If you did not use DBCA to create your database, or if you are
using listener ports other than the default of 1521, then you must
configure the LOCAL_LISTENER and REMOTE_LISTENER database
initialization parameters for your cluster database to point to
SCAN:port.
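For example, if your SCAN is docrac-scan.example.com and your listeners use port 1522 rather than 1521, the parameters might be set with statements similar to the following sketch (the SCAN name, port, node VIP, and instance name are all hypothetical; LOCAL_LISTENER is set per instance):

```sql
ALTER SYSTEM SET remote_listener = 'docrac-scan.example.com:1522' SCOPE=BOTH SID='*';
ALTER SYSTEM SET local_listener =
  '(ADDRESS=(PROTOCOL=TCP)(HOST=racnode1-vip)(PORT=1522))' SCOPE=BOTH SID='orcl1';
```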
Oracle Database 11g introduces an additional flag in the load balancing advisory event, called the affinity hint. The affinity hint is set automatically when the load balancing advisory is enabled by setting a goal on the service. This flag requests temporary affinity that lasts for the duration of a web session. Web conversations often connect and disconnect many times during a session, and each of these connections may access the same or similar data, for example, a shopping cart or a Siebel application screen. Affinity can improve buffer cache efficiency, which lowers CPU usage and transaction latency. The affinity hint indicates whether affinity is active or inactive for a particular instance and service combination. Different instances offering the same service can have different settings for the affinity hint.
Applications using Oracle Database 11g and UCP can take advantage of this affinity feature. If the affinity flag is set in the load balancing advisory event, then UCP creates an affinity context for the web session so that, when the session gets a connection from the pool, the pool always tries to return a connection to the instance the session connected to when it first acquired a connection. The instance chosen for the first connection is based on the current load balancing advisory information.
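For example, the load balancing advisory, and with it the affinity hint, is enabled for a service by setting a run-time service-level goal. The following SRVCTL sketch assumes a database named orcl and a service named webapp, where -B sets the run-time (load balancing advisory) goal and -j sets the connection load balancing goal:

```
$ srvctl modify service -d orcl -s webapp -B SERVICE_TIME -j SHORT
```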
Creating Services
To manage workloads, you can define services that you assign to a particular
application or to a subset of an application's operations. You can also use services to
manage the workload for different types of work. You can create a service using Oracle
Enterprise Manager Database Control.
To create a service:
1. On the Cluster Database Home page, click Availability.
The Availability page appears.
8. In the Service Properties section, select Short for Connection Load Balancing Goal
to distribute the connection workload based on elapsed time instead of the overall
number of connections. Otherwise, choose Long.
9. Select Enable Load Balancing Advisory under the sub-heading Notification
Properties to enable the Load Balancing Advisory for this service. Choose a
service-level goal of either Service Time or Throughput.
10. Select Enable Fast Application Notification under the heading Notification
Properties if this service is used by an Oracle Call Interface (OCI) or ODP.NET
application, and you want to enable FAN.
11. In the Service Threshold Levels section, you can optionally set the service-level
thresholds by entering a value in milliseconds for Warning and Critical thresholds
for the Elapsed Time and CPU Time metrics.
12. If you want to use a Resource Plan to control the resources used by this service,
then select the name of the consumer group from the Consumer Group Mapping
list in the Resource Management Properties section. For example, you might
choose the LOW_GROUP consumer group to give development users low priority to
database resources.
Note: You cannot change the consumer group name for a service on
the Edit Service page. This is because there may be several consumer
groups associated with a given service. However, the Edit Service
page contains a link to the Resource Consumer Group Mapping page,
where you can modify the consumer group mapping for the service.
13. If this service is used by a specific Oracle Scheduler job class, then you can specify
the mapping by selecting the name from the Job Scheduler Mapping list in the
Resource Management Properties.
14. Click OK to create the service.
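The same kind of service could be created from the command line with SRVCTL instead of Enterprise Manager. In this sketch the database is named orcl, the service devusers, and the preferred instances orcl1 and orcl2 (all hypothetical names):

```
$ srvctl add service -d orcl -s devusers -r orcl1,orcl2 -B SERVICE_TIME -j SHORT
$ srvctl start service -d orcl -s devusers
```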
See Also:
■ "About Workload Management" on page 7-1
■ "About Connection Load Balancing" on page 7-8
■ "About the Load Balancing Advisory" on page 7-7
■ "About Fast Application Notification (FAN)" on page 7-6
■ "Administering Services" on page 7-15
■ Oracle Database Administrator's Guide
Administering Services
You can create and administer services using Enterprise Manager. You can also use the
SRVCTL utility to perform most service management tasks.
The following sections describe how to manage services for your cluster database:
■ About Service Administration Using Enterprise Manager
■ Using the Cluster Managed Database Services Page
■ Verifying Oracle Net Supports Newly Created Services
See Also:
■ "Administering Services" on page 7-15
■ "About Oracle Services" on page 7-2
■ "Creating Services" on page 7-11
2. On the Availability subpage, under the Services heading, click Cluster Managed
Database Services.
The Cluster and Database Login page appears.
3. Enter credentials for the database and for the cluster that hosts the Oracle RAC
database, then click Continue.
The Cluster Managed Database Services page appears and displays the services
that are available on the cluster database instances.
See Also:
■ "About Service Administration Using Enterprise Manager" on
page 7-15
■ "About Oracle Services" on page 7-2
■ "Creating Services" on page 7-11
You should see a list for the new service, similar to the following:
Services Summary...
Service "DEVUSERS.example.com" has 2 instance(s).
Instance "sales1", status READY, has 1 handler(s) for this service...
Instance "sales2", status READY, has 1 handler(s) for this service...
The displayed name for your newly created service, for example
DEVUSERS.example.com, is the service name you use in your connection
strings.
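A Services Summary like the one shown above is typically produced by querying a listener with the Listener Control utility, for example (assuming the default name of the first SCAN listener):

```
$ lsnrctl services LISTENER_SCAN1
```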
2. Test the Oracle Net Services configuration by attempting to connect to the Oracle
RAC database using SQL*Plus and the SCAN. The connect identifier for easy
connect naming has the following format:
"[//]SCAN[:port]/service_name"
After you enter the password, you should see a message indicating you are
successfully connected to the Oracle RAC database. If you get an error message,
then examine the connect identifier and verify the user name, password, and
service name were typed in correctly and all the information is correct for your
Oracle RAC environment.
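For example, assuming a SCAN of docrac-scan.example.com, the default port of 1521, and the DEVUSERS.example.com service created earlier (all hypothetical names), the SQL*Plus connection attempt might look like this:

```
$ sqlplus system@"docrac-scan.example.com:1521/DEVUSERS.example.com"
```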
network timeout. To enable the timeout limit, set the sqlnet.ora parameter
SQLNET.OUTBOUND_CONNECT_TIMEOUT = x, where x represents the maximum
amount of time, in seconds, allowed for a client to establish an Oracle Net connection
to the database instance, for example, three seconds:
SQLNET.OUTBOUND_CONNECT_TIMEOUT = 3
This section describes how to configure FAN for application clients, and contains the
following topics:
■ Configuring JDBC Clients
■ Configuring OCI Clients
■ Configuring ODP.NET Clients
See Also:
■ "About Fast Application Notification (FAN)" on page 7-6
■ "About Oracle Services" on page 7-2
See Also:
■ "About Workload Management" on page 7-1
■ "About Oracle Services" on page 7-2
■ Oracle Universal Connection Pool for JDBC Developer's Guide
The remote ONS subscription must contain every host that the client application
can use for failover.
3. Set the oracle.net.ns.SQLnetDef.TCP_CONNTIMEOUT_STR property to a
nonzero value on the data source. When this property is set, if the JDBC client
attempts to connect to a host that is unavailable, then the connection attempt is
bounded to the time specified for oracle.net.ns.SQLnetDef.TCP_
CONNTIMEOUT_STR. After the specified time has elapsed and a successful
connection has not been made, the client attempts to connect to the next host in the
address list. Setting this property to a value of three seconds is sufficient for most
installations.
4. Configure JDBC clients to use a connect descriptor that includes the SCAN and the
service name, and optionally the port that the SCAN listener listens on, for
example:
@//docrac.example.com:1521/orcl_JDBC
In the JDBC application, you would connect to the database using a connect string,
such as the following one:
pds.setURL("jdbc:oracle:thin:@//docrac.example.com:1521/orcl_JDBC")
If you are using a JDBC driver, then you must include the complete connect
descriptor in the URL because the JDBC driver does not use Oracle Net.
5. Make sure that both ucp.jar and ons.jar are in the application CLASSPATH.
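Steps 2 through 5 can be sketched as a single UCP data source configuration. The following Java fragment is illustrative only: it assumes ojdbc6.jar, ucp.jar, and ons.jar are on the CLASSPATH and uses hypothetical host, service, and credential names, so it will not compile without those Oracle libraries. Note that the constant oracle.net.ns.SQLnetDef.TCP_CONNTIMEOUT_STR resolves to the connection property name oracle.net.CONNECT_TIMEOUT, whose value is specified in milliseconds.

```java
import java.sql.Connection;
import java.util.Properties;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class FcfSketch {
    public static void main(String[] args) throws Exception {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//docrac.example.com:1521/orcl_JDBC");
        pds.setUser("hr");                 // hypothetical credentials
        pds.setPassword("hr_password");

        // Enable Fast Connection Failover and point UCP at the remote ONS daemons
        pds.setFastConnectionFailoverEnabled(true);
        pds.setONSConfiguration("nodes=racnode1:6200,racnode2:6200");

        // Bound each connection attempt so failover to the next address is not delayed
        Properties props = new Properties();
        props.setProperty("oracle.net.CONNECT_TIMEOUT", "3000"); // 3 seconds
        pds.setConnectionProperties(props);

        try (Connection conn = pds.getConnection()) {
            // work with the connection here
        }
    }
}
```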
See Also:
■ "About Fast Application Notification (FAN)" on page 7-6
■ "Configuring Clients for High Availability" on page 7-17
■ Oracle Database JDBC Developer's Guide for more information about
fast connection failover and configuring ONS
■ Oracle Universal Connection Pool for JDBC Developer's Guide for
more information
■ Oracle Database 2 Day + Java Developer's Guide for information
about creating a method to authenticate users
■ Oracle Real Application Clusters Administration and Deployment
Guide for information about configuring client failover
To set it within the connection properties argument, use code similar to the
following:
Properties props = new Properties();
props.put("ConnectionFailureNotification", "true");
Connection conn = DriverManager.getConnection(url, props);
2. Configure the transport mechanism by which the node down events are received.
If ONS is the selected transport mechanism, then use the SetONSConfiguration
property, as demonstrated in the following code, where racnode1 and racnode2
represent nodes in the cluster that have ONS running on them:
props.setONSConfiguration("nodes=racnode1:4200,racnode2:4200");
See Also:
■ "About Oracle RAC High Availability Framework" on page 7-5
■ "Configuring Clients for High Availability" on page 7-17
■ Oracle Database JDBC Developer's Guide for more information about
configuring ONS
■ Oracle Database 2 Day + Java Developer's Guide for information
about creating a method to authenticate users
3. Link the OCI client applications with thread library libthread or libpthread.
4. In your application, you must check if an event has occurred, using code similar to
the following example:
void evtcallback_fn(ha_ctx, eventhp)
...
printf("HA Event received.\n");
if (OCIHandleAlloc((dvoid *)envhp, (dvoid **)&errhp, (ub4) OCI_HTYPE_ERROR,
    (size_t) 0, (dvoid **) 0))
  return;
if (retcode = OCIAttrGet(eventhp, OCI_HTYPE_EVENT, (dvoid *)&srvhp, (ub4 *)0,
    OCI_ATTR_HA_SRVFIRST, errhp))
  checkerr(errhp, (sword)retcode);
else {
  printf("found first server handle.\n");
  /* get associated instance name */
  if (retcode = OCIAttrGet(srvhp, OCI_HTYPE_SERVER, (dvoid *)&instname,
      (ub4 *)&sizep, OCI_ATTR_INSTNAME, errhp))
    checkerr(errhp, (sword)retcode);
  else
    printf("instance name is %s.\n", instname);
5. Clients and applications can register a callback that is invoked whenever a high
availability event occurs, as shown in the following example:
/*Registering HA callback function */
if (checkerr(errhp, OCIAttrSet(envhp, (ub4) OCI_HTYPE_ENV,
(dvoid *)evtcallback_fn, (ub4) 0,
(ub4)OCI_ATTR_EVTCBK, errhp)))
{
printf("Failed to set register EVENT callback.\n");
return EX_FAILURE;
}
if (checkerr(errhp, OCIAttrSet(envhp, (ub4) OCI_HTYPE_ENV,
(dvoid *)evtctx, (ub4) 0,
(ub4)OCI_ATTR_EVTCTX, errhp)))
{
printf("Failed to set register EVENT callback context.\n");
return EX_FAILURE;
}
return EX_SUCCESS;
After registering an event callback and context, OCI calls the registered function
once for each high availability event.
See Also:
■ "About Fast Application Notification (FAN)" on page 7-6
■ "Configuring Clients for High Availability" on page 7-17
■ Oracle Call Interface Programmer's Guide for more information
about event notification and user-registered callbacks
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about configuring fast application
notification for OCI clients.
For Notification Properties, choose "Enable Fast Application Notification for OCI
and ODP.NET Applications". Set the Connection Load Balancing Goal to Long.
2. Enable Fast Connection Failover for ODP.NET connection pools by subscribing to
FAN high availability events. Do this by setting the HA Events connection string
attribute to true, either at connection time or in the data source definition. Note
that this works only if you are using connection pools (that is, the Pooling
attribute is set to true).
You can also enable Run-time Connection Load Balancing by setting the Load
Balancing connection string attribute to true.
Use code similar to the following, where username is the name of the database user
that the application uses to connect to the database, password is the password for
that database user, and the service name is odpserv:
// C#
using System;
using Oracle.DataAccess.Client;
class HAEventEnablingSample
{
  static void Main()
  {
    OracleConnection con = new OracleConnection();
    // Connect string must enable pooling and subscribe to HA events
    con.ConnectionString = "User Id=username;Password=password;" +
      "Data Source=odpserv;Pooling=true;" +
      "HA Events=true;Load Balancing=true";
    con.Open();
    // Carry out work against the database here.
    con.Close();
    // Dispose OracleConnection object
    con.Dispose();
  }
}
The username specified in this step must match the username used for the User Id
argument in the previous step.
See Also:
■ "About Fast Application Notification (FAN)" on page 7-6
■ "Configuring Clients for High Availability" on page 7-17
■ Oracle Data Provider for .NET Developer's Guide for more
information about event notification and user-registered callbacks
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about configuring fast application
notification for ODP.NET clients.
Troubleshooting
Performance tuning for an Oracle Real Application Clusters (Oracle RAC) database is
very similar to performance tuning for a single-instance database. Many of the tuning
tasks that you perform on single-instance Oracle databases can also improve
performance of your Oracle RAC database. This chapter focuses on the performance
tuning and monitoring tasks that are unique to Oracle RAC.
This chapter includes the following sections:
■ Monitoring Oracle RAC Database and Cluster Performance
■ Viewing Other Performance Related Charts
■ Viewing the Cluster Database Topology
■ Monitoring Oracle Clusterware
■ Troubleshooting Configuration Problems in Oracle RAC Environments
■ Monitoring and Tuning Oracle RAC: Oracle By Example Series
See Also:
■ Oracle Database 2 Day DBA for more information about basic
database tuning
■ Oracle Database 2 Day + Performance Tuning Guide for more
information about general performance tuning
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about diagnosing problems for
Oracle Real Application Clusters components
■ Oracle Clusterware Administration and Deployment Guide for
more information about diagnosing problems for Oracle
Clusterware components
access each individual database instance for details if you just want to see
inclusive, aggregated information.
■ View alert messages aggregated across all the instances with lists for the source of
each alert message. An alert message is an indicator that signifies that a particular
metric condition has been encountered. A metric is a unit of measurement used to
report the system's conditions.
■ Review issues that are affecting the entire cluster and those that are affecting
individual instances.
■ Monitor cluster cache coherency statistics to help you identify processing trends
and optimize performance for your Oracle RAC environment. Cache coherency
statistics measure how well the data in caches on multiple instances is
synchronized. If the data caches are completely synchronized with each other, then
reading a memory location from the cache on any instance returns the most recent
data written to that location from any cache on any instance.
■ Determine if any of the services for the cluster database are having availability
problems. A service is considered to be a problem service if it is not running on all
preferred instances, if its response time thresholds are not met, and so on. Clicking
the link on the Cluster Database Home page opens the Cluster Managed Database
Services page, where the service can be managed.
■ Review any outstanding Clusterware interconnect alerts.
Also note the following points about monitoring Oracle RAC environments:
■ Performance monitoring features, such as Automatic Workload Repository (AWR)
and Statspack, are Oracle RAC-aware.
■ You can use global dynamic performance views, or GV$ views, to view statistics
across instances. These views are based on the single-instance V$ views.
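For example, the following query is a simple sketch of using a GV$ view; the INST_ID column identifies which instance each row comes from:

```sql
SELECT inst_id, COUNT(*) AS session_count
  FROM gv$session
 GROUP BY inst_id
 ORDER BY inst_id;
```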
This section contains the following topics:
■ About Automatic Database Diagnostic Monitor and Oracle RAC Performance
■ Viewing ADDM for Oracle RAC Findings
■ Using the Cluster Database Performance Page
review the results of the ADDM analysis. An ADDM analysis is performed from the
top down, first identifying symptoms, then refining the analysis to reach the root
causes, and finally providing remedies for the problems.
For the clusterwide analysis, Enterprise Manager reports two types of findings:
■ Database findings: An issue that concerns a resource that is shared by all instances
in the cluster database, or an issue that affects multiple instances. An example of a
database finding is I/O contention on the disk system used for shared storage.
■ Instance findings: An issue that concerns the hardware or software that is
available for only one instance, or an issue that typically affects just a single
instance. Examples of instance findings are high CPU load or sub-optimal memory
allocation.
ADDM reports only the findings that are significant, or findings that take up a
significant amount of instance or database time. Instance time is the amount of time
spent using a resource due to a performance issue for a single instance and database
time is the sum of time spent using a resource due to a performance issue for all
instances of the database, excluding any Oracle Automatic Storage Management
(Oracle ASM) instances.
An instance finding can be reported as a database finding if it relates to a significant
amount of database time. For example, if one instance spends 900 minutes using the
CPU, and the sum of all time spent using the CPU for the cluster database is 1040
minutes, then this finding would be reported as a database finding because it takes up
a significant amount of database time.
A problem finding can be associated with a list of recommendations for reducing the
impact of the performance problem. Each recommendation has a benefit that is an
estimate of the portion of database time that can be saved if the recommendation is
implemented. A list of recommendations can contain various alternatives for solving
the same problem; you do not have to apply the recommendations.
Recommendations are composed of actions and rationales. You must apply all the
actions of a recommendation to gain the estimated benefit of that recommendation.
The rationales explain why the actions were recommended, and provide additional
information to implement the suggested recommendation.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ "About Workload Management"
■ Oracle Database 2 Day + Performance Tuning Guide for more
information about configuring and using AWR and ADDM
■ Oracle Database Performance Tuning Guide for more information
about Automatic Database Diagnostic Monitor
You can also view the ADDM findings per instance by viewing the Instances table
on the Cluster Database Home page.
When you select the number of ADDM Findings, the Automatic Database
Diagnostic Monitor (ADDM) page for the cluster database appears.
2. Review the results of the ADDM run.
7. View the available Recommendations for resolving the performance problem. Run
the SQL Tuning Advisor to tune the SQL statements that are causing the
performance findings.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ Oracle Database 2 Day + Performance Tuning Guide
■ Oracle Database 2 Day DBA for more information about tuning a
database and instance
The charts on the Performance page are described in the following sections:
■ About Global Cache Block Access Latency Chart
■ Viewing the Chart for Global Cache Block Access Latency
■ Viewing the Chart for Cluster Host Load Average
■ Viewing the Chart for Average Active Sessions
■ Viewing the Database Throughput Chart
from the orcl1 instance by using Cache Fusion rather than by reading the data block
from disk.
The Global Cache Block Access Latency chart shows data for two different types of
data block requests: current and consistent-read (CR) blocks. When you update data in
the database, Oracle Database must locate the most recent version of the data block
that contains the data, which is called the current block. If you perform a query, then
only data committed before the query began is visible to the query. Data blocks that
were changed after the start of the query are reconstructed from data in the undo
segments, and the reconstructed data is made available to the query in the form of a
consistent-read block.
The Global Cache Block Access Latency chart on the Cluster Database Performance
page shows the latency for each type of data block request, that is the elapsed time it
takes to locate and transfer consistent-read and current blocks between the buffer
caches.
If the Global Cache Block Access Latency chart shows high latencies (high elapsed
times), then this can be caused by any of the following:
■ A high number of requests caused by SQL statements that are not tuned.
■ A large number of processes in the queue waiting for the CPU, or scheduling
delays.
■ Slow, busy, or faulty interconnects. In these cases, check your network connection
for dropped packets, retransmittals, or cyclic redundancy check (CRC) errors.
Concurrent read and write activity on shared data in a cluster occurs frequently.
Depending on the service requirements, this activity does not usually cause
performance problems. However, when global cache requests do cause a performance
problem, a successful tuning strategy is to optimize the SQL plans and the schema to
improve the rate at which data blocks are found in the local buffer cache, and to
minimize I/O. If the latency for consistent-read and current block requests reaches
10 milliseconds, then your first step in resolving the problem should be to go
to the Cluster Cache Coherency page for more detailed information.
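If you prefer to examine the underlying statistics directly, a sketch such as the following queries GV$SYSSTAT for the global cache transfer times and block counts (statistic names as reported by Oracle Database 11g; the time statistics are commonly reported in hundredths of a second, so dividing time by blocks received approximates the average latency):

```sql
SELECT inst_id, name, value
  FROM gv$sysstat
 WHERE name IN ('gc cr block receive time',
                'gc current block receive time',
                'gc cr blocks received',
                'gc current blocks received')
 ORDER BY inst_id, name;
```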
If you must diagnose and fix problems that are causing a high number of wait
events in a specific category, then you can select an instance of interest and view the
wait events, as well as the SQL, sessions, services, modules, and actions that are
consuming the most database resources.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ "About Oracle Grid Infrastructure for a Cluster and Oracle RAC"
■ Oracle Database 2 Day + Performance Tuning Guide
■ Oracle Database 2 Day DBA for more information about tuning a
database and instance
For more information about the information on this page, refer to the Enterprise
Manager Help system.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ "About Oracle Grid Infrastructure for a Cluster and Oracle RAC"
■ Oracle Database 2 Day + Performance Tuning Guide
■ Oracle Database 2 Day DBA for more information about tuning a
database and instance
For more information about the information on this page, refer to the Enterprise
Manager Help system.
You can also obtain information at the instance level by clicking a legend to the right of
the chart to access the Top Sessions page. On the Top Session page you can view
real-time data showing the sessions that consume the greatest system resources.
For more information about the information on this page, refer to the Enterprise
Manager Help system.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ Oracle Database 2 Day + Performance Tuning Guide
■ Oracle Database 2 Day DBA for more information about tuning a
database and instance
3. (Optional) Click the portion of a chart representing the consumer or click the link
under the chart for that consumer to view instance-level information about that
consumer.
The page that appears shows the running instances that are serving the consumer.
4. (Optional) Expand the names in the Action or Module column to show data for
individual instances.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ Oracle Database 2 Day + Performance Tuning Guide
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ Oracle Database 2 Day + Performance Tuning Guide
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ Oracle Database 2 Day + Performance Tuning Guide
■ Oracle Database 2 Day DBA for more information about tuning a
database and instance
4. (Optional) Use the Switch Database Instance list to change the instance for which
the data is displayed in the chart.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ Oracle Database 2 Day + Performance Tuning Guide
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ Oracle Database Administrator's Guide
2. (Optional) Move the mouse cursor over any component in the topology diagram
to display information about that component in a popup box.
3. Select any component in the topology diagram to change the information
displayed in the Selection Details section.
4. (Optional) Click Legend at the bottom of the page, on the left-hand side, to display
the Topology Legend page.
This page describes the icons used in Cluster Topology and Cluster Database
Topology.
5. (Optional) Right-click the currently selected component to view the menu actions
available for that component.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance" on
page 8-1
■ "Viewing the Cluster Topology Page" on page 8-29
■ "About Oracle Grid Infrastructure for a Cluster and Oracle RAC"
on page 1-3
3. In the High Availability section, click the number next to Problem Services to
display the Cluster Home page.
Click the Database tab to return to the Cluster Database Home page.
4. Select Topology. Click a node in the graphical display to activate the controls.
Click the Interface component. Right-click the Interface component, then choose
View Details from the menu to display the Interconnects subpage for the cluster.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance" on
page 8-1
■ "Viewing the Cluster Topology Page" on page 8-29
■ "About the Cluster Interconnects Page" on page 8-27
4. In the Configuration section, use the View list to select which of the following
information is displayed for the available hosts in the cluster:
■ Operating Systems (including Hosts and OS Patches)
■ Hardware (including hardware configuration and hosts)
Click the links under Host or OS Patches for detailed information.
5. View the Diagnostic Summary section which contains the number of active
Interconnect alerts. Click the number of alerts to view the Interconnects subpage.
6. Investigate the Cluster Databases table to view the cluster databases associated
with this cluster, their availability, any alerts or policy violations on those
databases, their security compliance score, and the database software version.
7. View the Alerts section, which includes the following items:
■ Category list
Optionally choose a category from the list to view only alerts in that category
■ Critical
This is the number of metrics that have crossed critical thresholds plus the
number of other critical alerts, such as those caused by incidents (critical
errors).
■ Warning
This is the number of metrics that have crossed warning thresholds
■ Alerts table
The Alerts table provides information about any alerts that have been issued
along with the severity rating of each. Click the alert message in the Message
column for more information about the alert.
When an alert is triggered, the name of the metric for which the alert was
triggered is displayed in the Name column. The severity icon for the alert
(Warning or Critical) is displayed, along with the time the alert was triggered,
the value of the alert, and the time the metric's value was last checked.
8. View the date of the Last Security Evaluation and the Compliance score for the
cluster in the Security section.
The compliance score is a value between 0 and 100, where 100 indicates complete
compliance with the security policies. The compliance score for each target and policy
combination is influenced to a great extent by the severity of the violation and the
importance of the policy, and to a lesser extent by the percentage of violating rows
out of the total number of rows tested.
9. Review the status of any jobs submitted to the cluster within the last seven days in
the Job Activity section.
10. Determine if there are patches to be applied to Oracle Clusterware by reviewing
the Critical Patch Advisories for Oracle Homes section.
To view available patches, you must have first configured your My Oracle Support
(formerly OracleMetaLink) Credentials as discussed in "Verifying My Oracle
Support Credentials" on page 3-2.
11. View basic performance statistics for each host in the cluster in the Hosts table at
the bottom of the page.
Click any link in this table to view further details about that statistic.
12. Use the subtabs at the top of the page to view detailed information for
Performance, Targets, Interconnects, or Topology.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance" on
page 8-1
■ "Viewing the Cluster Topology Page" on page 8-29
■ "Obtaining the Patch" on page 10-1
■ "Verifying Operating System and Software Requirements" on
page 2-6
■ View the CPU, Memory, and Disk I/O charts for each host individually by clicking
the host name in the legend to the right of the chart.
The Cluster Performance page also contains a Hosts table. The Hosts table displays
summary information for the hosts for the cluster, their availability, any alerts on those
hosts, CPU and memory utilization percentage, and total I/O per second. You can
click a host name in the Hosts table to go to the performance page for that host.
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance" on
page 8-1
■ "Using the Cluster Database Performance Page" on page 8-6
Click a target name to go to the home page for that target. Click the links in the table to
get more information about that particular item, alert, or metric.
The Hosts table displays the hosts for the cluster, their availability, any alerts on those
hosts, CPU and memory utilization percentage, and total I/O per second.
On the Cluster Administration page you can manage Oracle Clusterware resources
and create and manage server pools. You can also use this page to configure Oracle
Database Quality of Service Management. This functionality is available starting with
Oracle Database 11g Release 2 (11.2.0.2).
If you click Manage Resources, then you go to the Resources page where you can start,
stop, or relocate the resources that are registered with Oracle Clusterware.
See Also:
■ "Viewing the Cluster Database Topology" on page 8-19
■ Oracle Clusterware Administration and Deployment Guide
The Private Interconnect Transfer Rate value shows a global view of the private
interconnect traffic, which is the estimated traffic on all the private networks in the
cluster. The traffic is calculated as the summary of the input rate of all private
interfaces known to the cluster. For example, if the traffic rate is high, then the values
in the Total I/O Rate column in the Interfaces by Hosts table for private interfaces are
also high. If the values in this column are high, then you should determine the cause of
the high network usage. You can click a number to access the Network Interface Total
I/O Rate page for historic statistics and metric values.
Using the Interfaces by Hosts table, you can drill down to the following pages:
■ Host Home
■ Hardware Details
■ Network Interface Total I/O Rate
See Also:
■ "Monitoring Oracle Clusterware" on page 8-21
■ "About Oracle Grid Infrastructure for a Cluster and Oracle RAC"
on page 1-3
■ "About Network Hardware Requirements" on page 2-4
See Also:
■ "Tools for Installing, Configuring, and Managing Oracle RAC"
■ "About Verifying the Oracle Clusterware Installation"
See Also:
■ "Tools for Installing, Configuring, and Managing Oracle RAC"
■ "Troubleshooting Configuration Problems in Oracle RAC
Environments"
■ "Monitoring Oracle Clusterware"
See Also:
■ Oracle Clusterware Administration and Deployment Guide
■ "Monitoring Oracle Clusterware"
The log files for the CSS daemon, cssd, can be found in the following directory:
CRS_home/log/hostname/cssd/
The log files for the EVM daemon, evmd, can be found in the following directory:
CRS_home/log/hostname/evmd/
The log files for the Oracle Cluster Registry (OCR) can be found in the following
directory:
CRS_home/log/hostname/client/
Each program that is part of the Oracle RAC high availability component has a
subdirectory assigned exclusively for that program. The name of the subdirectory
equals the name of the program.
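The directory layout described above can be sketched as a small helper that builds the log path for a given program. This is an illustrative sketch only; the Grid home path and host name below are the examples used in this guide, not values read from a real installation.

```shell
# Build the Oracle Clusterware log directory for one program, following the
# CRS_home/log/hostname/program layout described above.
crs_log_dir() {
  printf '%s/log/%s/%s\n' "$1" "$2" "$3"
}

# Example: the CSS daemon log directory on racnode1.
crs_log_dir /u01/app/11.2.0/grid racnode1 cssd
# prints: /u01/app/11.2.0/grid/log/racnode1/cssd
```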
3. Check the status of an individual Oracle Clusterware daemon using the following
syntax, where daemon is crsd, cssd, or evmd:
# crsctl check daemon
4. To list the status of all Oracle Clusterware resources running on any node in the
cluster, use the following command:
# crsctl status resource -t
This command lists the status of all registered Oracle Clusterware resources,
which includes the VIPs, listeners, databases, services, and Oracle ASM instances
and disk groups.
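The per-daemon check in Step 3 can be scripted as a loop over the three daemon names. The sketch below only prints each crsctl command rather than executing it, because on a real cluster you would run these as the root user with crsctl on the PATH.

```shell
# Print the crsctl check command for each Oracle Clusterware daemon.
# On a live cluster, run each printed command as root instead of echoing it.
for daemon in crsd cssd evmd; do
  echo "crsctl check $daemon"
done
```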
2. Use the following command to obtain the module names for a component, where
component_name is crs, evm, css, or the name of the component for which you
want to enable debugging:
# crsctl lsmodules component_name
For example, viewing the modules of the css component might return the
following results:
# crsctl lsmodules css
The following are the CSS modules ::
CSSD
COMMCRS
COMMNS
For example, to enable the lowest level of tracing for the CSSD module of the css
component, you would use the following command:
# crsctl debug log css CSSD:1
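If you want the same tracing level on every module of a component, you can loop over the module names returned by crsctl lsmodules. The sketch below echoes the commands for the three css modules listed above rather than running them; on a real cluster you would execute each printed command as root.

```shell
# Print a crsctl debug command enabling level-1 tracing for each css module
# (module names taken from the crsctl lsmodules css output shown above).
for module in CSSD COMMCRS COMMNS; do
  echo "crsctl debug log css ${module}:1"
done
```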
2. Obtain a list of the resources available for debugging by running the following
command:
# crsctl status resource
See Also:
■ Oracle Real Application Clusters Administration and Deployment
Guide
See Also:
■ "Troubleshooting Configuration Problems in Oracle RAC
Environments"
■ Oracle Clusterware Administration and Deployment Guide
See Also:
■ "Troubleshooting Configuration Problems in Oracle RAC
Environments"
■ Oracle Clusterware Administration and Deployment Guide
See Also:
■ "Troubleshooting Configuration Problems in Oracle RAC
Environments"
■ Oracle Clusterware Administration and Deployment Guide
See Also:
■ "Troubleshooting Configuration Problems in Oracle RAC
Environments"
■ Oracle Clusterware Administration and Deployment Guide
■ "Verifying Your Oracle RAC Database Installation"
3. To verify the connectivity among the nodes, specified by node_list, through the
available network interfaces from the local node or from any other cluster node,
use the comp nodecon command as shown in the following example:
cluvfy comp nodecon -n node_list -verbose
When you issue the nodecon command as shown in the previous example, it
instructs CVU to perform the following tasks:
■ Discover all the network interfaces that are available on the specified cluster
nodes.
■ Review the corresponding IP addresses and subnets for the interfaces.
■ Obtain the list of interfaces that are suitable for use as VIPs and the list of
interfaces that are suitable for use as private interconnects.
■ Verify the connectivity among all the nodes through those interfaces.
When you run the nodecon command in verbose mode, it identifies the mappings
between the interfaces, IP addresses, and subnets.
4. To verify the connectivity among the nodes through specific network interfaces,
use the comp nodecon command with the -i option and specify the interfaces to
be checked with the interface_list argument:
cluvfy comp nodecon -n node_list -i interface_list [-verbose]
For example, you can verify the connectivity among the nodes racnode1,
racnode2, and racnode3, through the specific network interface eth0 by
running the following command:
cluvfy comp nodecon -n racnode1,racnode2,racnode3 -i eth0 -verbose
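To check several interfaces in turn, you can wrap the command from Step 4 in a loop. This sketch only prints the cluvfy invocations; the node names match the guide's examples, and the eth1 interface is an assumption added for illustration.

```shell
# Print one cluvfy nodecon command per interface to be checked.
# racnode1-3 are the guide's example nodes; eth1 is a hypothetical second interface.
nodes=racnode1,racnode2,racnode3
for iface in eth0 eth1; do
  echo "cluvfy comp nodecon -n $nodes -i $iface -verbose"
done
```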
See Also:
■ "Troubleshooting Configuration Problems in Oracle RAC
Environments"
■ "Configuring the Network"
■ "Configuring Secure Shell"
■ Oracle Clusterware Administration and Deployment Guide
Enabling Tracing
CVU does not generate trace files unless you enable tracing. The CVU trace files are
created in the CRS_home/cv/log directory. Oracle RAC automatically rotates the log
files, and the most recently created log file has the name cvutrace.log.0. You
should remove unwanted log files or archive them to reclaim disk space, if needed.
See Also:
■ "Troubleshooting Configuration Problems in Oracle RAC
Environments"
■ Oracle Clusterware Administration and Deployment Guide
See Also:
■ "Monitoring Oracle RAC Database and Cluster Performance"
■ "Troubleshooting Configuration Problems in Oracle RAC
Environments"
■ Oracle Database 2 Day DBA
■ Oracle Database 2 Day + Performance Tuning Guide
2. Click the name of the instance for which you want to view the alert log.
The Cluster Database Instance Home page appears.
3. In the Diagnostic Summary section, click the link next to the heading Alert Log to
display the alert log entries containing ORA- errors.
The Alert Log Errors page appears.
4. (Optional) Click Alert Log Contents in the Related Links section to view all the
entries in the alert log.
On the View Alert Log Contents page, click Go to view the most recent entries, or
you can enter your own search criteria.
See Also:
■ Oracle Real Application Clusters Administration and Deployment
Guide
■ "Monitoring Oracle Clusterware"
From the menu on the lower left side of the screen, select Database Learning Library.
Perform a search using the following parameters:
■ Content Type: Demo
■ Functional Category: Grid
■ Product Suite: Oracle Database 11g (ODB11g)
Click a title to start a demonstration, for example "Use Global ADDM Analysis in a
RAC Environment".
This chapter describes how to add and remove nodes and instances in Oracle Real
Application Clusters (Oracle RAC) environments. You can use these methods when
configuring a new Oracle RAC environment, or when resizing an existing Oracle RAC
environment.
This chapter includes the following sections:
■ Preparing the New Node
■ Verifying the New Node Meets the Prerequisites for Installation
■ Extending the Oracle Grid Infrastructure Home to the New Node
■ Extending the Oracle RAC Home Directory
■ Adding the New Node to the Cluster using Enterprise Manager
■ Creating an Instance on the New Node
■ Deleting an Instance From the Cluster Database
■ Removing a Node From the Cluster
Note: In this chapter, you must perform each step in the order
shown.
See Also:
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about adding and removing nodes
from your cluster database
See Also:
■ "Preparing Your Cluster" on page 2-1
■ "Configuring the Network" on page 2-13
■ "About Verifying the Oracle Clusterware Installation" on
page 3-18
To extend the Oracle Grid Infrastructure for a cluster home to include the new
node:
1. Verify the new node has been properly prepared for an Oracle Clusterware
installation by running the following CLUVFY command on the racnode1 node:
cluvfy stage -pre nodeadd -n racnode3 -verbose
2. As the oracle user (owner of the Oracle Grid Infrastructure for a cluster software
installation) on racnode1, go to Grid_home/oui/bin and run the addNode.sh
script in silent mode:
If you are using Grid Naming Service (GNS):
./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"
When running this command, the curly braces ({ }) are required; if you omit them,
the command returns an error.
You can alternatively use a response file instead of placing all the arguments in the
command line. See Oracle Clusterware Administration and Deployment Guide for
more information on using response files.
3. When the script finishes, run the root.sh script as the root user on the new
node, racnode3, from the Oracle home directory on that node.
4. If you are not using Oracle Grid Naming Service (GNS), then you must add the
name and address for racnode3 to DNS.
You should now have Oracle Clusterware running on the new node. To verify the
installation of Oracle Clusterware on the new node, you can run the following
command on the newly configured node, racnode3:
$ cd /u01/app/11.2.0/grid/bin
$ ./cluvfy stage -post nodeadd -n racnode3 -verbose
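The node-addition flow above can be summarized as a short plan. The sketch below only prints the commands with notes on where and as whom to run each one; the node names and Grid home path are the examples used in this guide.

```shell
# Print the add-node command sequence described above; nothing is executed.
add_node_plan() {
  cat <<'EOF'
cluvfy stage -pre nodeadd -n racnode3 -verbose        # on racnode1, as oracle
cd /u01/app/11.2.0/grid/oui/bin                       # on racnode1, as oracle
./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"   # on racnode1, as oracle
/u01/app/11.2.0/grid/root.sh                          # on racnode3, as root
cluvfy stage -post nodeadd -n racnode3 -verbose       # on racnode3
EOF
}
add_node_plan
```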
Note: Avoid changing host names after you complete the Oracle
Clusterware installation, including adding or deleting domain
qualifications. Nodes with changed host names must be deleted from
the cluster and added back with the new name.
See Also:
■ "Completing the Oracle Clusterware Configuration"
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about adding and removing nodes
from your cluster database
3. When the script finishes, run the root.sh script as the root user on the new
node, racnode3, from the Oracle home directory on that node.
For policy-managed databases with Oracle Managed Files (OMF) enabled, no
further actions are needed.
For a policy-managed database, when you add a new node to the cluster, it is
placed in the Free pool by default. If you increase the cardinality of the database
server pool, then an Oracle RAC instance is added to the new node, racnode3,
and it is moved to the database server pool. No further action is necessary.
4. Add shared storage for the undo tablespace and redo log files.
If OMF is not enabled for your database, then you must manually add an undo
tablespace and redo logs.
5. If you have an administrator-managed database, then add a new instance on the
new node as described in "Creating an Instance on the New Node" on page 9-4.
If you followed the installation instructions in this guide, then your cluster
database is an administrator-managed database and stores the database files on
Oracle Automatic Storage Management (Oracle ASM) with OMF enabled.
After completing these steps, you should have an installed Oracle home on the new
node.
See Also:
■ "Verifying Your Oracle RAC Database Installation"
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about adding and removing nodes
from your cluster database
See Also:
■ "Completing the Oracle Clusterware Configuration"
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about adding and removing nodes
from your cluster database
Note: The steps described in this section require a license for the
Enterprise Manager Provisioning Management pack. Refer to the
Oracle Database Licensing Information for information about the
availability of these features on your system.
See Also:
■ "About Oracle Real Application Clusters"
■ "Extending the Oracle Grid Infrastructure Home to the New
Node"
■ "Extending the Oracle RAC Home Directory"
See Also:
■ "Creating an Instance on the New Node" on page 9-4
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about adding and removing nodes
from your cluster database
After the selected host has been validated, the Add Instance: Review page
appears.
6. Review the information, then click Submit Job to proceed.
A confirmation page appears.
7. Click View Job to check on the status of the submitted job.
The Job Run detail page appears.
8. Click your browser’s Refresh button until the job shows a status of Succeeded or
Failed.
If the job shows a status of Failed, then you can click the name of the step that
failed to view the reason for the failure.
9. Click the Database tab to return to the Cluster Database Home page.
See Also:
■ "Creating an Instance on the New Node"
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about adding and removing nodes
from your cluster database
Note: The steps described in this section require a license for the
Enterprise Manager Provisioning Management pack. Refer to the
Oracle Database Licensing Information for information about the
availability of these features on your system.
See Also:
■ "About Oracle Real Application Clusters"
■ "Adding the New Node to the Cluster using Enterprise Manager"
■ Oracle Real Application Clusters Administration and Deployment
Guide
See Also:
■ "Deleting an Instance From the Cluster Database"
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about adding and removing nodes
from your cluster database
After the host information has been validated, the Delete Instance: Review page
appears.
6. Review the information, and if correct, click Submit Job to continue. Otherwise,
click Back and correct the information.
A Confirmation page appears.
7. Click View Job to view the status of the node deletion job.
A Job Run detail page appears.
8. Click your browser’s Refresh button until the job shows a status of Succeeded or
Failed.
If the job shows a status of Failed, then you can click the name of the step that
failed to view the reason for the failure.
9. Click the Cluster tab to return to the Cluster Home page.
The number of hosts available in the cluster database is reduced by one.
See Also:
■ "Deleting an Instance From the Cluster Database"
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about adding and removing nodes
from your cluster database
See Also:
■ "Completing the Oracle Clusterware Configuration"
■ Oracle Real Application Clusters Administration and Deployment
Guide for more information about adding and removing nodes
from your cluster database
Patches
Oracle issues product fixes for its software, called patches. When you apply a patch
to your Oracle software installation, it updates the executable files, libraries, and object
files in the software home directory. The patch application can also update
configuration files and Oracle-supplied SQL schemas. Patches are applied by using
OPatch, a utility supplied by Oracle, or Enterprise Manager Grid Control.
A group of patches forms a patch set. When you apply a patch set, many different files
and utilities are modified. This results in a version change for your Oracle software, for
example, from Oracle Database 10.2.0.4.0 to Oracle Database 10.2.0.5.0. You use Oracle
Universal Installer (OUI) to apply a patch set.
This chapter describes how to manage Oracle software and apply patches in Oracle
Real Application Clusters (Oracle RAC) environments using the OPatch utility.
This chapter includes the following sections:
■ Obtaining the Patch
■ Preparing to Use OPatch
■ Applying Patches
■ Applying Patch Sets
■ Troubleshooting Patch Deployment
■ Upgrading the Oracle Software
See Also:
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX for more information about using OPatch and applying
patches to Oracle RAC
■ Oracle Database 2 Day DBA
2. At the top of the page, on the left side, select Patches & Updates.
3. If you know the patch number, then you can enter it into the field to the right of
the Patch Name or Number drop-down list. Next to the Platform drop-down list,
use the Select up to 5 drop-down list to choose the operating systems on which
you plan to apply the patch.
If you want to search for all available patches for your system, then select Product
or Family (Advanced Search) at the top of the page. Then supply the following
information:
■ Choose the products you want to patch (for example, Oracle Clusterware,
RDBMS Server, or an individual product such as Universal Installer)
■ Specify the software release for the products you selected
■ Specify the platform on which the software is installed
Click Search to look for available patches. The Patch Search Results page is
displayed.
4. On the Patch Search Results page, select the number of the patch you want to
download. A details page for that patch appears on your screen.
5. View the ReadMe file for the patch, or click Download to download the patch to
your local computer. If you download the patch to a computer that is not a node in
the cluster, then you must transfer the file using a binary protocol to a cluster
node.
You can also choose to add the patch to your current patch plan or create a new
patch plan. A patch plan is a collection of patches that you want to apply as a
group. To learn more about using patch plans, in the Add to Plan drop-down list
select Why use a Plan?
See Also:
■ "Verifying My Oracle Support Credentials" on page 3-2
■ "About Downloading and Installing RDBMS Patches" on
page 3-20
■ Oracle Database 2 Day DBA
See Also:
■ Oracle Database 2 Day DBA
2. Use the echo command to display the current setting of the ORACLE_HOME
environment variable.
echo $ORACLE_HOME
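Before running OPatch, it helps to fail fast when ORACLE_HOME is unset. The sketch below wraps the echo check above in a small guard function; the Oracle home path passed in the example is the one used elsewhere in this guide.

```shell
# Report whether an ORACLE_HOME value is set; an empty argument means unset.
check_oracle_home() {
  if [ -z "${1:-}" ]; then
    echo "ORACLE_HOME is not set"
  else
    echo "ORACLE_HOME=$1"
  fi
}

check_oracle_home ""
# prints: ORACLE_HOME is not set
check_oracle_home /u01/app/oracle/product/11.2.0/dbhome_1
# prints: ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
```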
See Also:
■ "Preparing to Use OPatch" on page 10-2
■ "Configuring the Operating System Environment" on page 3-3
See Also:
■ "Preparing to Use OPatch" on page 10-2
■ "Configuring the Operating System Environment" on page 3-3
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX
See Also:
■ "Preparing to Use OPatch" on page 10-2
■ Oracle Database 2 Day DBA
2. Use a shell command similar to the following to update the value of the PATH
environment variable, where /u01/app/oracle/product/11.2.0/dbhome_1
is the location of your Oracle home directory:
$ export PATH=$PATH:/u01/app/oracle/product/11.2.0/dbhome_1/OPatch
You can also modify the shell profile script for the current user so that this
variable is set every time you log in.
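A profile-friendly version of the export command above can first check whether the OPatch directory is already on the PATH, so repeated logins do not keep appending it. This is a sketch under the assumption that your Oracle home is the path used in this guide.

```shell
# Append the OPatch directory to PATH only if it is not already present.
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
case ":$PATH:" in
  *":$ORACLE_HOME/OPatch:"*) ;;                # already on PATH; nothing to do
  *) PATH=$PATH:$ORACLE_HOME/OPatch ;;
esac
export PATH
```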
See Also:
■ "Preparing to Use OPatch" on page 10-2
■ "Configuring the Operating System Environment" on page 3-3
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX
If the date is returned, then user equivalency between the source and destination
node has been configured.
3. If you see output similar to the following, then SSH user equivalency is not
enabled:
Enter passphrase for key '/home/oracle/.ssh/id_rsa':
Enable SSH user equivalency before continuing with the patching operation.
These commands start the ssh-agent on the local node, and load the RSA and
DSA keys into the current session’s memory so that you are not prompted to use
pass phrases when issuing SSH commands.
3. At the prompt, enter the pass phrase for each key that you generated when
configuring Secure Shell, for example:
[oracle@racnode1 .ssh]$ exec /usr/bin/ssh-agent $SHELL
[oracle@racnode1 .ssh]$ /usr/bin/ssh-add
Enter passphrase for /home/oracle/.ssh/id_rsa
Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)
Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)
4. To test if you have configured SSH correctly, run the following command. If you
have configured SSH correctly, then you are not prompted for a password or a
pass phrase.
[oracle@racnode1] $ ssh racnode2 date
Note: Do not close this command window until you have completed
the patch installation. If you must close the command window in
which you enabled SSH user equivalency before the patch installation
is complete, then repeat Step 1 to Step 4 before starting the patch
installation.
See Also:
■ "Preparing to Use OPatch" on page 10-2
■ "Configuring Operating System Users and Groups" on page 2-10
■ Oracle Grid Infrastructure Installation Guide for your specific
operating system for instructions on how to configure SSH
Applying Patches
Patching in an Oracle RAC environment is slightly different compared to patching a
single node. If OPatch detects a cluster, then it uses OUI to query the software
inventory to find the local node name and node list.
Before you install a patch, you must stop all the applications running from the
software directory that is being patched. In a cluster, you may have to shut down
additional applications, depending upon which software is being patched. Table 10–1
lists the applications to stop when patching Oracle software.
When you run the rootcrs.pl script with the -unlock flag, it stops the Oracle
Clusterware stack and unlocks the files in the Grid home so they can be modified.
2. Change user to the Oracle Grid Infrastructure for a cluster software owner and
apply the patch to the Grid home, using one of the patching methods described in
this section.
3. After you have finished modifying the Grid home, lock it again as the root user
using commands similar to the following:
cd /u01/app/11.2.0/grid/crs/install
perl rootcrs.pl -patch
The rootcrs.pl script with the -patch flag locks the Grid home again and
restarts the Oracle Clusterware stack.
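The unlock, patch, and relock sequence above can be summarized as a plan. This sketch only prints the steps; the Grid home path is the example used in this guide, and the middle step depends on the specific patch README.

```shell
# Print the Grid home unlock/patch/relock sequence; nothing is executed.
grid_patch_plan() {
  cat <<'EOF'
cd /u01/app/11.2.0/grid/crs/install
perl rootcrs.pl -unlock      # as root: stop the stack, unlock Grid home files
# ... apply the patch as the Grid software owner, per the patch README ...
perl rootcrs.pl -patch       # as root: relock the Grid home, restart the stack
EOF
}
grid_patch_plan
```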
2. If you are patching only the Oracle RAC home directory, then shut down all
Oracle RAC instances on all nodes in the cluster. To shut down all Oracle RAC
instances for a cluster database, enter the following command in a command
window, where CRS_home is the location of the Grid home directory and sales is
the name of the database:
$ CRS_home/bin/srvctl stop database -d sales
3. If you are patching the Grid home directory, then you must stop all the Oracle
RAC databases for the cluster and all single-instance databases in the cluster that
use the ASM installation being patched.
When patching the Grid home directory you can use a single command to stop the
Oracle Clusterware stack on all servers in the cluster. This command attempts to
gracefully stop resources managed by Oracle Clusterware while attempting to
stop the Oracle Clusterware stack. If any resources that Oracle Clusterware
manages are still running after you run the crsctl stop cluster command,
then the command fails. Use the -f option to unconditionally stop all resources
and stop the Oracle Clusterware stack.
Use a command similar to the following, where CRS_home is the home directory
of your Oracle Grid Infrastructure for a cluster installation:
$ CRS_home/bin/crsctl stop cluster -all
4. After you have stopped the database instances and Oracle Clusterware stack on
each node in the cluster, use the crsctl utility to verify that all cluster resources
have been stopped on each node.
$ Grid_home/bin/crsctl status resource -t
5. Set your current directory to the directory where the patch is located, for example:
$ cd Oracle_home/OPatch/4519934/4519934
8. If you applied the patch to the Grid home directory, then restart the Oracle
Clusterware stack on all nodes by issuing the following command as the root
user on any node, where Grid_home is the home directory of your Oracle Grid
Infrastructure for a cluster installation:
# Grid_home/bin/crsctl start cluster -all
9. After you have restarted the Oracle Clusterware stack on all nodes, use the
crsctl utility to verify that the cluster resources were restarted on each node.
$ Grid_home/bin/crsctl status resource -t
If any of the cluster resources did not restart, then use either the CRSCTL or
SRVCTL utility to restart them. For example, you can use commands similar to the
following to restart various cluster resources, where Grid_home is the home
directory of your Oracle Grid Infrastructure for a cluster installation and Oracle_
home is the home directory of your Oracle RAC database:
$ Oracle_home/bin/srvctl start instance -d sales -i "sales1"
# Grid_home/bin/crsctl start resource myResource -n docrac2
10. Run any post-patch scripts that are mentioned in the patch instructions, for
example:
$ sqlplus /nolog
SQL> connect sys/password@sales1 AS SYSDBA
SQL> @Oracle_home/rdbms/admin/catbundle.sql cpu apply
SQL> exit
See Also:
■ "Obtaining the Patch" on page 10-1
■ "Preparing to Use OPatch" on page 10-2
■ "Applying Patches" on page 10-5
■ "Applying Patch Sets" on page 10-13
■ "Troubleshooting Patch Deployment" on page 10-14
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX
■ Oracle Real Application Clusters Administration and Deployment
Guide for information about SRVCTL commands
■ Oracle Clusterware Administration and Deployment Guide for
information about CRSCTL commands
Rolling Patching
In rolling patching, one group of nodes is shut down, the patch is applied to those
nodes, and the nodes are brought back up. This is performed group by group,
separately, until all the nodes in the cluster are patched. This is the most efficient
means of applying an interim patch to an Oracle RAC or Oracle Grid Infrastructure for
a cluster installation. Because the nodes are patched in groups, the cluster database
experiences no downtime: at least one instance remains available at all times on a
different node.
While most patches can be applied in a rolling fashion, some patches cannot be
applied in this fashion. The README file for the patch indicates whether you can
apply the patch using the rolling patch method. If the patch cannot be applied using
the rolling patch method, then you must use either "Minimum Downtime Patching" on
page 10-11 or "All Node Patching" on page 10-6 to apply the patch.
2. Verify the patch can be applied in a rolling fashion, using the following command:
$ opatch query -is_rolling_patch [unzipped patch location]
on the local node, use the following command, where Oracle_home is the home
directory for your Oracle RAC installation:
$ Oracle_home/bin/emctl stop dbconsole
4. If you are patching only the Oracle RAC home directory, then shut down all
Oracle RAC instances in the group of nodes being patched. To shut down an
instance for an Oracle RAC database, enter a command similar to the following
example, where Oracle_home is the location of the Oracle RAC home directory,
sales is the name of the database, and sales1 is the name of the instance:
$ Oracle_home/bin/srvctl stop instance -d sales -i "sales1" -f
5. If you are patching the Grid home directory, then you must stop all Oracle RAC
database instances that are running on the group of nodes being patched.
Additionally, stop all single-instance databases and user applications that use the
ASM installation in the Grid home on the nodes being patched.
When patching the Grid home directory you can use a single command to stop the
Oracle Clusterware stack on specific servers in the cluster. This command attempts
to gracefully stop resources managed by Oracle Clusterware while attempting to
stop the Oracle Clusterware stack. If any resources that Oracle Clusterware
manages are still running after you run the crsctl stop cluster command,
then the command fails. Use the -f option to unconditionally stop all resources
and stop the Oracle Clusterware stack on the specified servers.
Use a command similar to the following, where Grid_home is the home directory
of your Oracle Grid Infrastructure for a cluster installation and docrac1 is one of
the nodes in the group of nodes being patched:
$ Grid_home/bin/crsctl stop cluster -n docrac1
6. After you have stopped the database instances and Oracle Clusterware stack on
specific nodes in the cluster, use the crsctl utility to verify that all cluster
resources have been stopped on those nodes.
$ Grid_home/bin/crsctl status resource -t
8. If you are patching nodes individually, then use the following command to
instruct OPatch to apply the patch to only the local node. If you run this command
from the directory where the patch is located, then you do not need to specify the
patch ID.
$ opatch apply -local
If you are patching nodes individually, then you can use a command similar to the
following to instruct OPatch to apply the patch to only the next node to be
patched. If you run this command from the directory where the patch is located,
then you do not need to specify the patch ID.
$ opatch apply -remote_nodes docrac2
If you are patching a group of nodes, then use a command similar to the following
to instruct OPatch to apply the patch to the group of nodes being patched:
$ opatch apply [-local_node docrac1] -remote_nodes docrac2,docrac3
9. After you have applied the patch to a node or group of nodes, then you can restart
Oracle Clusterware and Oracle RAC on those nodes.
a. If you applied the patch to only the Oracle home, then restart the instances
and database applications on the patched nodes. For example, you would use
commands similar to the following:
$ Oracle_home/bin/srvctl start instance -d sales -i "sales1"
$ Oracle_home/bin/emctl start dbconsole
b. If you applied the patch to the Grid home directory, then restart the Oracle
Clusterware stack on the patched nodes by issuing the following command as
the root user on one of those nodes, where Grid_home is the home directory of
your Oracle Grid Infrastructure for a cluster installation:
# Grid_home/bin/crsctl start cluster -n docrac1
After you have restarted Oracle Clusterware and Oracle ASM, you can then
restart any Oracle Database instances and associated applications on the
patched nodes.
10. After you have restarted the Oracle Clusterware stack on all nodes, use the
crsctl utility to verify that the cluster resources were restarted on each node.
$ Grid_home/bin/crsctl status resource -t
If any of the cluster resources did not restart, then use either the CRSCTL or
SRVCTL utility to restart them. For example, you can use commands similar to the
following to restart various cluster resources, where Grid_home is the home
directory of your Oracle Grid Infrastructure for a cluster installation and Oracle_
home is the home directory of your Oracle RAC database:
$ Oracle_home/bin/srvctl start instance -d sales -i "sales1"
# Grid_home/bin/crsctl start resource myResource -n docrac1
11. Repeat Step 3 through Step 10 for the next group of nodes.
If you have more than two groups of nodes to be patched, then repeat Step 3
through Step 10 for each group of nodes until all the nodes in the cluster have
been patched.
12. Run any post-patch scripts that are mentioned in the patch instructions, for
example:
$ sqlplus /nolog
SQL> connect sys/password@sales1 AS SYSDBA
SQL> @Oracle_home/rdbms/admin/catbundle.sql cpu apply
SQL> exit
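The per-group loop in Steps 3 through 11 can be summarized as a plan. The sketch below echoes the per-node commands for a hypothetical two-node rolling patch rather than running them; the node names, database name (sales), and instance names are illustrative.

```shell
# Print the rolling-patch command sequence for each node:instance pair.
rolling_plan() {
  for node_inst in docrac1:sales1 docrac2:sales2; do
    node=${node_inst%%:*}
    inst=${node_inst##*:}
    echo "srvctl stop instance -d sales -i $inst -f   # on $node"
    echo "opatch apply -local                          # on $node"
    echo "srvctl start instance -d sales -i $inst     # on $node"
  done
}
rolling_plan
```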
See Also:
■ "Obtaining the Patch" on page 10-1
■ "Preparing to Use OPatch" on page 10-2
■ "Applying Patches" on page 10-5
■ "Applying Patch Sets" on page 10-13
■ "Troubleshooting Patch Deployment" on page 10-14
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX
To apply a patch to your cluster database using the minimum downtime method:
1. Change to the directory where the unzipped patch is staged on disk, for example:
$ cd Oracle_home/OPatch/12419331/12419331
2. Stop all user applications that use the Oracle RAC home directory for the group of
nodes being patched. For example, to stop Enterprise Manager Database Control
on the local node, use the following command, where Oracle_home is the home
directory for your Oracle RAC installation:
$ Oracle_home/bin/emctl stop dbconsole
3. Shut down all Oracle RAC instances on the local node. To shut down an instance
for an Oracle RAC database, enter a command similar to the following example,
where Oracle_home is the home directory for your Oracle RAC database
installation, sales is the name of the database, and sales1 is the name of the
instance:
$ Oracle_home/bin/srvctl stop instance -d sales -i "sales1" -f
If you run the OPatch command from the directory where the patch is staged on
disk, then you do not need to specify the patch ID.
OPatch asks if you are ready to patch the local node. After you confirm that the
Oracle RAC instances on the local node have been shut down, OPatch applies the
patch to the Oracle home directory on the local node. You are then asked to select
the next nodes to be patched.
6. After you shut down the Oracle RAC instances on the other nodes in the cluster,
you can restart the Oracle RAC instance on the local node. Then, instruct OPatch
that you are ready to patch the remaining nodes.
7. After all the nodes have been patched, restart the Oracle RAC instances on the
other nodes in the cluster. The following command shows how to start the orcl2
instance for the Oracle RAC database named orcl:
$ Oracle_home/bin/srvctl start instance -d orcl -i "orcl2"
8. Verify that all the Oracle Clusterware resources were restarted on all the nodes in
the cluster, for example:
$ crsctl check cluster
If any of the cluster resources did not restart, then use either the CRSCTL or
SRVCTL utility to restart them. For example, you can use commands similar to the
following to restart various cluster resources, where Grid_home is the home
directory of your Oracle Grid Infrastructure for a cluster installation and
Oracle_home is the home directory of your Oracle RAC database:
$ Oracle_home/bin/srvctl start instance -d sales -i "sales1"
# Grid_home/bin/crsctl start resource myResource -n docrac1
9. Run any post-patch scripts that are mentioned in the patch instructions, for
example:
$ sqlplus /nolog
SQL> connect sys/password@sales1 AS SYSDBA
SQL> @Oracle_home/rdbms/admin/catbundle.sql cpu apply
SQL> exit
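Steps 2 through 7 above amount to a per-node stop/patch/restart cycle. The sketch below only prints the srvctl and opatch commands rather than executing them; the database name (sales) and instance list are hypothetical placeholders that vary by installation:

```shell
# Print the minimum downtime patching plan for a set of instances.
# This is a sketch: it echoes commands instead of running them, and
# the database/instance names passed in are hypothetical.
print_patch_plan() {
  db=$1; shift
  for inst in "$@"; do
    echo "srvctl stop instance -d $db -i $inst -f"   # shut down (step 3)
  done
  echo "opatch apply    # apply the patch to this Oracle home"
  for inst in "$@"; do
    echo "srvctl start instance -d $db -i $inst"     # restart (step 7)
  done
}

print_patch_plan sales sales1 sales2
```

In a real session you would run each printed command per node, confirming to OPatch between nodes as described above.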
See Also:
■ "Obtaining the Patch" on page 10-1
■ "Preparing to Use OPatch" on page 10-2
■ "Applying Patches" on page 10-5
■ "Applying Patch Sets" on page 10-13
■ "Troubleshooting Patch Deployment" on page 10-14
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX
See Also:
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX
■ "Preparing to Use OPatch" on page 10-2
■ Oracle Database 2 Day DBA
■ "Viewing Oracle RAC Database Alert Log Messages" on page 8-38
■ "About the Oracle Clusterware Alert Log" on page 8-30
See Also:
■ "Troubleshooting Patch Deployment" on page 10-14
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX for more information about Oracle product patching using
OPatch
OPatch also maintains an index of the commands it has processed, and the log
files associated with them, in the opatch_history.txt file located in the
Oracle_home/cfgtoollogs/opatch directory. A sample of the opatch_history.txt file
is as follows:
Date & Time : Tue Apr 26 23:00:55 PDT 2007
Oracle Home : /u01/app/oracle/product/11.2.0/dbhome_1/
OPatch Ver. : 11.2.0.0.0
Current Dir : /scratch/oui/OPatch
Command : lsinventory
Log File :
/u01/app/oracle/product/11.2.0/dbhome_1/cfgtoollogs/opatch/opatch-2007_Apr_26_
23-00-55-PDT_Tue.log
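Because each opatch_history.txt record is a set of "Field : value" lines, recent commands can be pulled out with standard text tools. A minimal sketch, using a sample file that mirrors the record layout shown above (the /tmp path is a placeholder):

```shell
# Build a sample file in the opatch_history.txt record format shown above,
# then extract the value of each "Command" field with awk.
cat > /tmp/opatch_history.txt <<'EOF'
Date & Time : Tue Apr 26 23:00:55 PDT 2007
Oracle Home : /u01/app/oracle/product/11.2.0/dbhome_1/
OPatch Ver. : 11.2.0.0.0
Current Dir : /scratch/oui/OPatch
Command : lsinventory
EOF

# Split each line on " : " and print the value of the Command field.
commands=$(awk -F' : ' '$1 == "Command" { print $2 }' /tmp/opatch_history.txt)
echo "$commands"
```

Run against a real Oracle_home/cfgtoollogs/opatch/opatch_history.txt, this lists every OPatch command recorded for that home.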
See Also:
■ "Troubleshooting Patch Deployment" on page 10-14
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX
a. Remove the patch shiphome directory and re-create it with the proper
structure (by extracting the files again).
b. Start the OPatch utility from the directory where the patch to be installed has
been unzipped and staged on disk.
c. Use the following command when starting OPatch:
opatch apply /Patch_Shiphome
where Patch_Shiphome is the location where the patch has been staged on
disk.
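The remedy in steps a through c, rebuilding the staging area and then pointing OPatch at it, can be sketched as follows. The staging path and patch ZIP name are hypothetical, and the OPatch commands are shown as comments since they depend on a real Oracle installation:

```shell
# Re-create the patch staging area with the proper structure, then
# apply from it. STAGE and the ZIP file name are placeholders.
STAGE=/tmp/patch_stage          # hypothetical Patch_Shiphome location
rm -rf "$STAGE"                 # step a: remove the old shiphome directory
mkdir -p "$STAGE"               # ...and re-create it

# unzip -d "$STAGE" p12419331_112030_Linux-x86-64.zip   # extract the files again
# cd "$STAGE" && opatch apply "$STAGE"                  # steps b and c

ls -d "$STAGE"
```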
See Also:
■ "Troubleshooting Patch Deployment" on page 10-14
■ Oracle Universal Installer and OPatch User's Guide for Windows and
UNIX
new Grid home. When performing the upgrade, you specify the location of the new
Grid home instead of selecting the existing software location.
When performing an out-of-place upgrade, the old and new versions of the software
are present on the nodes at the same time, each in a different home location, but only
one version of the software is active at any given time.
Oracle Database release 11.2.0.2 is a full patch set release. To upgrade to Oracle
Database release 11.2.0.2, you install the Oracle Grid Infrastructure for a cluster and
Oracle Database software into new Oracle home directories instead of applying the
patch set to the existing Oracle home directories. This is different from patch set
releases for previous Oracle Database releases, where the patch set was always
installed in place.
You can use Database Upgrade Assistant (DBUA) to upgrade an existing database to
the current release of Oracle Database. DBUA guides you through the upgrade
process and configures your database for the new release, automating the upgrade
and making appropriate recommendations for configuration options such as
tablespaces and online redo log files.
See Also:
■ Oracle Database 2 Day DBA for more information about using
DBUA to upgrade a database
■ Oracle Grid Infrastructure Installation Guide for more information on
performing software upgrades
■ Oracle Database Upgrade Guide
administrator-managed database
An administrator-managed database is a database created on nodes that are not part of
a server pool and are managed by the database or clusterware administrator.
cache coherency
The synchronization of data in multiple caches so that reading a memory location
through any cache returns the most recent data written to that location through any
other cache. Sometimes called cache consistency.
Cache Fusion
A diskless cache coherency mechanism in Oracle Real Application Clusters that
provides copies of blocks directly from a holding instance's memory cache to a
requesting instance's memory cache.
cluster
Multiple interconnected computers or servers that appear as if they are one server to
end users and applications.
cluster database
The generic term for an Oracle Real Application Clusters database.
Global Enqueue Service Monitor (LMON)
A background process that monitors the health of the cluster database environment
and registers and de-registers from CSS. See also OCSSD.
CRSD
A Linux or UNIX process that performs high availability recovery and management
operations, such as maintaining the OCR. It also manages application resources, runs
as the root user (or as a user in the admin group on Mac OS X-based systems), and
restarts automatically upon failure.
Free pool
A default server pool used in policy-based cluster and capacity management of Oracle
Clusterware resources. The free pool contains servers that are not assigned to any
server pool.
Grid home
The Oracle home directory for the Oracle Grid Infrastructure for a cluster software
installation, which includes Oracle Clusterware and Oracle ASM.
grid infrastructure
The software that provides the infrastructure for an enterprise grid architecture. Oracle
Database 11g release 2 (11.2) combines these infrastructure products into one software
bundle called Oracle Grid Infrastructure for a cluster. In an Oracle cluster, Oracle Grid
Infrastructure for a cluster includes Oracle Clusterware and Oracle Automatic Storage
Management (Oracle ASM). For a standalone Oracle Database server, Oracle Grid
Infrastructure includes Oracle Restart and Oracle ASM.
high availability
Systems with redundant components that provide consistent and uninterrupted
service, even following hardware or software failures. This involves some degree of
redundancy.
instance
For an Oracle RAC database, each node in a cluster usually has one instance of the
running Oracle software that references the database. When a database is started,
Oracle allocates a memory area called the System Global Area (SGA) and starts one or
more Oracle processes. This combination of the SGA and the Oracle processes is called
an instance. Each instance has a unique Oracle System Identifier (SID), instance name,
rollback segments, and thread ID.
instance name
Represents the name of the instance and is used to uniquely identify a specific instance
when clusters share common services names. The instance name is identified by the
INSTANCE_NAME parameter in the instance initialization file, initsid.ora. The
instance name equals the Oracle System Identifier (SID).
instance number
A number that associates extents of data blocks with particular instances. The instance
number enables you to start an instance and ensure that it uses the extents allocated to
it for inserts and updates. This ensures that an instance does not use space allocated
for other instances.
interconnect
The private network communication link that is used to synchronize the memory
cache of the nodes in the cluster.
network switch
A hardware device that connects computers within a network.
node
A node is a computer on which the Oracle Clusterware software is installed or will be
installed.
OCSSD
A Linux or UNIX process that manages the Cluster Synchronization Services (CSS)
daemon. Manages cluster node membership and runs as oracle user; failure of this
process results in cluster restart.
Oracle Clusterware
Software provided by Oracle to manage cluster database processing, including node
membership, group services, global resource management, and high availability
functions.
policy-managed database
A policy-managed database is created using a server pool. Oracle Clusterware
allocates and reassigns capacity based on policies you define, enabling faster resource
failover and dynamic capacity assignment.
raw device
A disk drive that does not yet have a file system set up. Raw devices are used for
Oracle Real Application Clusters because they enable the sharing of disks. See also raw
partition.
raw partition
A portion of a physical disk that is accessed at the lowest possible level. A raw
partition is created when an extended partition is created and logical partitions are
assigned to it without any formatting. Once formatting is complete, it is called a
cooked partition. See also raw device.
redo thread
The redo generated by a database instance.
rolling patching
In rolling patching, one node (or group of nodes) is shut down, the patch is applied,
and the node is brought back up again. This is repeated for each node in the cluster
until all the nodes in the Oracle Real Application Clusters environment are patched.
scalability
The ability to add additional nodes to Oracle Real Application Clusters applications
and achieve markedly improved scale-up and speed-up.
SCAN
A single name, or network alias, for the cluster. Oracle Database 11g database clients
use SCAN to connect to the database. SCAN can resolve to multiple IP addresses,
reflecting multiple listeners in the cluster handling public client connections.
server pool
A server pool is a logical division of nodes in a cluster into a group to support
policy-managed databases.
services
Entities that you can define in Oracle RAC databases that enable you to group
database workloads and route work to the optimal instances that are assigned to offer
the service.
shared everything
A database architecture in which all instances share access to all of the data.
singleton services
Services that run on only one instance at any one time. By defining the distributed
transaction processing (DTP) property of a service, you can force the service to be a
singleton service.
thread
Each Oracle instance has its own set of online redo log groups. These groups are called
a thread of online redo. In non-Oracle Real Application Clusters environments, each
database has only one thread that belongs to the instance accessing it. In Oracle Real
Application Clusters environments, each instance has a separate thread, that is, each
instance has its own online redo log. Each thread has its own current log member.
thread number
An identifier for the redo thread to be used by an instance, specified by the
INSTANCE_NUMBER initialization parameter. You can use any available redo thread
number but an instance cannot use the same redo thread number as another instance.
voting disk
A file that manages information about node membership.