Exadata Technical Whitepaper
September 2009
Exadata is a joint offering from Oracle and Sun Microsystems. Sun provides the
hardware technology used in the Database Machine and Exadata Storage Server. Oracle
provides the software that imparts database intelligence to the storage and the Database
Machine, and that software is tightly integrated with the Oracle Database and all its features. The Sun
servers combine the power of the latest generation of Intel Xeon processors with
Sun's system engineering expertise. These servers offer the density and
expandability needed to satisfy the most demanding datacenter applications. The Oracle and Sun
partnership makes possible the delivery of the Sun Oracle Database Machine and
Exadata Storage Server and the revolutionary capabilities they provide.
White Paper A Technical Overview of the Sun Oracle Exadata Storage Server and Database Machine
Exadata is built using wider pipes that provide extremely high bandwidth between the database
servers and the storage servers.
Exadata is database aware and can ship just the data required to satisfy SQL requests, resulting
in less data being sent between the database servers and the storage servers.
Exadata overcomes the mechanical limits of disk drive technology by automatically caching
frequently accessed data, delivering unprecedented levels of bandwidth and IOPS.
Each Exadata cell comes with 384 GB of Exadata Smart Flash Cache. This solid state storage
delivers dramatic performance advantages with Exadata storage. It provides a ten-fold
improvement in response time for reads over regular disk; a hundred-fold improvement in IOPS
for reads over regular disk; and is a less expensive, higher capacity alternative to memory. Overall,
it delivers a ten-fold performance increase for a blended average of read and write operations.
The Exadata Smart Flash Cache holds active data from the regular disks in the Exadata cell, but
it is not managed in a simple Least Recently Used (LRU) fashion. The Exadata Storage Server
Software, in cooperation with the Oracle Database, keeps track of data access patterns and knows
what data to cache and how to avoid polluting the cache. This functionality is all managed
automatically and does not require manual tuning. If specific tables or indexes are
known to be key to the performance of a database application, they can optionally be identified
and pinned in the cache.
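The caching policy can be pictured with a small sketch. This is an illustrative model only, not Oracle's actual algorithm: single-block (random) reads are admitted to an LRU cache, large scan reads bypass it so they cannot flush hot blocks, and objects that the administrator has pinned are cached even when scanned.

```python
# Illustrative sketch (not Oracle's actual algorithm): admit small random
# reads to an LRU cache, but let large scan I/O bypass it so scans do not
# flush hot blocks; explicitly pinned objects are always cached.
from collections import OrderedDict

class SmartishCache:
    def __init__(self, capacity_blocks, scan_threshold=8):
        self.capacity = capacity_blocks
        self.scan_threshold = scan_threshold  # reads this large bypass the cache
        self.lru = OrderedDict()              # (obj, block) keys in LRU order
        self.pinned = set()                   # objects pinned by the administrator

    def pin(self, obj):
        self.pinned.add(obj)

    def read(self, obj, blocks):
        hit = all((obj, b) in self.lru for b in blocks)
        # Large sequential reads of unpinned objects skip the cache entirely.
        if len(blocks) >= self.scan_threshold and obj not in self.pinned:
            return hit
        for b in blocks:
            key = (obj, b)
            self.lru.pop(key, None)
            self.lru[key] = None
            while len(self.lru) > self.capacity:
                self.lru.popitem(last=False)  # evict least recently used
        return hit

cache = SmartishCache(capacity_blocks=4)
cache.read("orders", [1])                 # small random read: cached
cache.read("big_table", range(100))       # full scan: bypasses the cache
print(cache.read("orders", [1]))          # still a hit -> True
```

With a plain LRU (no scan threshold), the full scan would have evicted the hot block and the last read would miss.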
The Sun Oracle Exadata Storage Server comes with either twelve 600 GB Serial Attached SCSI
(SAS) disks or twelve 2 TB Serial Advanced Technology Attachment (SATA) disks. SAS-based
Exadata Storage Servers provide up to 2 TB of uncompressed user data capacity, and up to 1.5
GB/second of raw data bandwidth. SATA-based Exadata Storage Servers provide up to 7 TB of
uncompressed user data capacity, and up to 0.85 GB/second of raw data bandwidth. When
stored in compressed format, the amount of user data and the amount of data bandwidth
delivered by each cell increases up to 10 times. User data capacity is computed after mirroring all
the disk space, and setting aside space for database structures like logs, undo, and temp space.
Actual user data varies by application.
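As a rough plausibility check, the user-data figures follow from halving the raw capacity for mirroring and then reserving space for logs, undo, and temp. The 40% set-aside below is our assumption, chosen only to roughly reproduce the quoted values; Oracle does not publish the exact fraction.

```python
# Back-of-envelope check of the quoted user-data capacities. The 40%
# set-aside for logs, undo, and temp space is our assumption, not an
# Oracle-published figure.
def user_data_tb(disks, disk_gb, reserved_fraction=0.4):
    raw_gb = disks * disk_gb
    mirrored_gb = raw_gb / 2                         # ASM mirroring halves capacity
    return mirrored_gb * (1 - reserved_fraction) / 1000  # -> TB

print(round(user_data_tb(12, 600), 1))   # -> 2.2 (paper quotes "up to 2 TB" for SAS)
print(round(user_data_tb(12, 2000), 1))  # -> 7.2 (paper quotes "up to 7 TB" for SATA)
```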
The performance that each cell delivers is extremely high due to the Exadata Smart Flash Cache.
The automated caching of the Flash cache enables each Exadata cell to deliver up to 3.6
GB/second bandwidth and 75,000 IOPS when accessing uncompressed data. When data is
stored in compressed format, the amount of user data capacity, the amount of data bandwidth
and IOPS achievable often increase up to ten times. This represents a significant improvement
over traditional storage devices used with the Oracle Database.
The performance specifications of the Exadata Storage Server are shown below.
Oracle Exadata storage uses a state of the art InfiniBand interconnect between the servers and
storage. An Exadata cell has dual port Quad Data Rate (QDR) InfiniBand connectivity for high
availability. Each InfiniBand link provides 40 gigabits per second of bandwidth - many times higher than
traditional storage or server networks. Further, Oracle's interconnect protocol uses direct data
placement (DMA - direct memory access) to ensure very low CPU overhead by directly moving
data from the wire to database buffers with no extra data copies being made. The InfiniBand
network has the flexibility of a LAN network, with the efficiency of a SAN. By using an
InfiniBand network, Oracle ensures that the network will not bottleneck performance. The same
InfiniBand network also provides a high performance cluster interconnect for the Oracle
Database Real Application Cluster (RAC) nodes.
In figure 2 below, a small Exadata storage based database environment is shown. Two Oracle
Databases, one RAC and one single instance, are sharing three Exadata cells. All the components
for this configuration - database servers, Exadata cells, InfiniBand switches, Ethernet switches,
and other support hardware - can be housed in, and take up less than half of, a typical 19-inch
rack.
[Figure 2: A single-instance database and a RAC database sharing three Exadata cells.]
The advantages of the solution are delivered without any modification to your application, and all
features of the Oracle Database are fully supported with Exadata. Exadata works equally well
with single-instance or Real Application Cluster deployments of the Oracle Database.
Functionality like Oracle Data Guard, Oracle Recovery Manager (RMAN), Oracle Streams, and
other database tools are administered the same, with or without Exadata. Users and database
administrators leverage the same tools and knowledge they are familiar with today because they
work just as they do with traditional non-Exadata storage. Both Exadata and non-Exadata
storage may be concurrently used for database storage to facilitate migration to, or from, Exadata
storage.
The nature of traditional storage products encourages inefficient deployments of storage for each
database in the IT infrastructure. The Exadata architecture ensures all the bandwidth and I/O
resources of the Exadata storage subsystem can be made available whenever, and to whichever,
database or class of work needs it. I/O bandwidth is metered out to the various classes of work,
or databases, sharing the Exadata server based on user defined policies and service level
agreements (SLAs). The Oracle Database Resource Manager (DBRM) has been enhanced for use
with Exadata storage to manage user-defined intra- and inter-database I/O resource usage to
ensure customer defined SLAs are met. The I/O resource management capabilities of Exadata
storage enable tailoring the I/O resources to the business priorities of the organization, and to
build a shared storage grid for the Oracle databases in the environment.
Four models of the Database Machine are offered: the Database Machine Full Rack, Half Rack,
Quarter Rack, and Eighth Rack. Depending on the size and purpose of the database to be
deployed, and the processing and I/O bandwidth required, there is a system available to meet
any need.
Each Database Machine runs the same software, is upgradeable and includes common hardware
components. Common to all Database Machines are:
- Exadata Storage Servers, either SAS or SATA.
- Industry standard Oracle Database 11g database servers, each with two quad-core Intel Xeon
E5540 processors running at 2.53 GHz, 72 GB RAM, four 146 GB SAS drives, a dual-port
InfiniBand Host Channel Adapter (HCA), four 1 Gb/second Ethernet ports, and
dual-redundant, hot-swappable power supplies.
- Sun Quad Data Rate (QDR) InfiniBand switches and cables to form a 40 Gb/second
InfiniBand fabric for database server to Exadata storage server communication and RAC
internode communication.
The ratio of components to each other has been chosen to maximize performance and ensure
system resiliency. The hardware composition of each model of Database Machine is depicted in
the following table.
Full Rack: 8 database servers, 14 Exadata Storage Servers, 3 InfiniBand switches. Multiple
Full Racks can be connected via the included InfiniBand fabric.
Half Rack: 4 database servers, 7 Exadata Storage Servers, 2 InfiniBand switches. Field
upgradeable from Half Rack to Full Rack.
Quarter Rack: 2 database servers, 3 Exadata Storage Servers, 2 InfiniBand switches. Field
upgradeable from Quarter Rack to Half Rack.
Eighth Rack: 1 database server, 1 Exadata Storage Server, 1 InfiniBand switch. Custom field
upgrade.
The performance and capacity characteristics of each model of Database Machine are depicted in
the following table.
In summary, the Exadata products address the key dimensions of database I/O that can hamper
performance.
More pipes: Exadata is based on a massively parallel architecture which provides more pipes to
deliver more data faster between the database servers and storage servers. As Exadata servers
are added to the database configuration, bandwidth scales linearly.
Wider pipes: InfiniBand is 8 times faster than Fibre Channel. Exadata is built using wider
InfiniBand pipes that provide extremely high bandwidth between the database servers and
storage servers.
More IOPS: With the intelligent and automatic use and management of the Exadata Smart Flash
Cache to avoid physical I/O, effective IOPS scale to handle the largest, most demanding
applications.
Smart software: With Smart Scan processing, less data needs to be shipped through the
pipes by performing data processing in storage. Exadata is database aware and can ship just the
data required to satisfy SQL requests resulting in less data being sent between the database
servers and the storage servers.
Exadata Architecture
The hardware environment for a typical Exadata based storage grid was shown in Figure 2. Each
Exadata cell is a self-contained server which houses disk storage and runs the Exadata software
provided by Oracle. Databases are deployed across Exadata cells, and multiple databases can
share Exadata cells. The database and Exadata cells communicate via a high-speed InfiniBand
interface.
The collection of Exadata cells shared between a set of databases is referred to as an Exadata
Realm. The set of three cells in figure 2 is an example of a realm. Realms ensure the isolation,
and hence protection, across a given set of databases. Mechanisms are provided to move disks
and whole cells between realms in a controlled and safe manner.
The architecture of the Exadata solution includes components on the database server and in the
Exadata cell. The overall architecture is shown below.
[Figure: Exadata architecture showing a single-instance database and a RAC database. Each DB
server runs a DB instance with DBRM, managed through Enterprise Manager.]
Within each Exadata cell, allocations are enforced by the I/O Resource Manager (IORM). The
Exadata cell software ensures that I/O resources are managed and properly allocated within, and
between, databases. Overall, DBRM ensures each database receives its specified amount of I/O
resources and user defined SLAs are met.
Exadata Software
Like any storage device, the Exadata server is a computer with CPUs, memory, a bus, disks,
NICs, and the other components normally found in a server. It also runs an operating system
(OS), which in the case of Exadata is Oracle Enterprise Linux (OEL) 5.3. The Exadata Storage
Server Software resident in the Exadata cell runs under OEL. OEL is accessible in a restricted
mode to administer and manage the Exadata cell.
CELLSRV (Cell Services) is the primary component of the Exadata software running in the cell
and provides the majority of Exadata storage services. CELLSRV is multi-threaded software that
communicates with the database instance on the database server, and serves blocks to databases
based on the iDB protocol. It provides the advanced SQL offload capabilities, serves Oracle
blocks when SQL offload processing is not possible, and implements the DBRM I/O resource
management functionality to meter out I/O bandwidth to the various databases and consumer
groups issuing I/O.
Two other components of Oracle software running in the cell are the Management Server (MS)
and Restart Server (RS). The MS is the primary interface to administer, manage and query the
status of the Exadata cell. It works in cooperation with the Exadata cell command line interface
(CLI) and EM Exadata plug-in, and provides standalone Exadata cell management and
configuration. For example, from the cell, CLI commands are issued to configure storage, query
I/O statistics and restart the cell. Also supplied is a distributed CLI so commands can be sent to
multiple cells to ease management across cells. Restart Server (RS) ensures the ongoing
functioning of the Exadata software and services. It is used to update the Exadata software. It
also ensures storage services are started and running, and services are restarted when required.
1. The client issues a SELECT statement with a predicate to filter and return only rows of
interest.
2. The database kernel maps this request to the file and extents containing the table being
scanned.
3. The database kernel issues the I/O to read the blocks.
4. All the blocks of the table being queried are read into memory.
5. Then SQL processing is done against the raw blocks, searching for the rows that satisfy the
predicate.
6. Lastly, the rows are returned to the client.
As is often the case with large queries, the predicate filters out most of the rows read. Yet all
the blocks from the table need to be read, transferred across the storage network and copied into
memory. Many more rows are read into memory than required to complete the requested SQL
operation. This generates a large number of data transfers which consume bandwidth and impact
application throughput and response time.
Integrating database functionality within the storage layer of the database stack allows queries,
and other database operations, to be executed much more efficiently. Implementing database
functionality as close to the hardware as possible, in the case of Exadata at the disk level, can
dramatically speed database operations and increase system throughput.
With Exadata storage, database operations are handled much more efficiently. Queries that
perform table scans can be processed within Exadata with only the required subset of data
returned to the database server. Row filtering, column filtering and some join processing (among
other functions) are performed within the Exadata storage cells. When this takes place only the
relevant and required data is returned to the database server.
Figure 6 below illustrates how a table scan operates with Exadata storage.
1. The client issues a SELECT statement with a predicate to filter and return only rows of
interest.
2. The database kernel determines that Exadata storage is available, constructs an iDB command
representing the SQL command issued, and sends it to the Exadata storage.
3. The CELLSRV component of the Exadata software scans the data blocks to identify those
rows and columns that satisfy the SQL issued.
4. Only the rows satisfying the predicate and the requested columns are read into memory.
5. The database kernel consolidates the result sets from across the Exadata cells.
6. Lastly, the rows are returned to the client.
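The difference between the two scan paths can be sketched in a few lines. This is a toy model, with an invented table and Python standing in for the real block protocol; it simply counts how much data crosses the wire in each case.

```python
# Toy comparison of the two scan paths: a traditional scan ships every
# block to the database server; a smart scan filters rows and projects
# columns inside the storage cell first. Table and sizes are invented.
rows = [{"id": i, "hire_date": 1990 + i % 30, "resume_blob": "x" * 200}
        for i in range(1000)]

def traditional_scan(table):
    shipped = table[:]                    # every row, every column crosses the wire
    matches = [r["id"] for r in shipped if r["hire_date"] > 2015]
    return matches, sum(len(str(r)) for r in shipped)

def smart_scan(table):
    # Predicate and column filtering happen inside the cell (CELLSRV);
    # only matching rows, and only the requested column, are shipped.
    shipped = [{"id": r["id"]} for r in table if r["hire_date"] > 2015]
    return [r["id"] for r in shipped], sum(len(str(r)) for r in shipped)

res_a, bytes_a = traditional_scan(rows)
res_b, bytes_b = smart_scan(rows)
assert res_a == res_b                     # identical answers...
print(f"traditional shipped ~{bytes_a} chars, smart shipped ~{bytes_b}")
```

The result sets are identical; only the volume of data moved between storage and server differs.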
Smart scans are transparent to the application and no application or SQL changes are required.
The SQL EXPLAIN PLAN shows when Exadata smart scan is used. Returned data is fully
consistent and transactional and rigorously adheres to the Oracle Database consistent read
functionality and behavior. If a cell dies during a smart scan, the uncompleted portions of the
smart scan are transparently routed to another cell for completion. Smart scans properly handle
the complex internal mechanisms of the Oracle Database including: uncommitted data and
locked rows, chained rows, compressed tables, national language processing, date arithmetic,
regular expression searches, materialized views and partitioned tables.
The Oracle Database and Exadata server cooperatively execute various SQL statements. Moving
SQL processing off the database server frees server CPU cycles and eliminates a massive amount
of bandwidth consumption which is then available to better service other requests. SQL
operations run faster, and more of them can run concurrently because of less contention for the
I/O bandwidth. We will now look at the various SQL operations that benefit from the use of
Exadata.
Exadata enables predicate filtering for table scans. Only the rows requested are returned to the
database server rather than all rows in a table. For example, when the following SQL is issued
only rows where the employee's hire date is after the specified date are sent from Exadata to the
database instance.
SELECT * FROM employee_table WHERE hire_date > '01-JAN-2003';
This ability to return only relevant rows to the server will greatly improve database performance.
This performance enhancement also applies as queries become more complicated, so the same
benefits also apply to complex queries, including those with subqueries.
Exadata provides column filtering, also called column projection, for table scans. Only the
columns requested are returned to the database server rather than all columns in a table. For
example, when the following SQL is issued, only the employee_name and employee_number
columns are returned from Exadata to the database kernel.
SELECT employee_name, employee_number FROM employee_table;
For tables with many columns, or columns containing LOBs (Large Objects), the I/O bandwidth
saved can be very large. When used together, predicate and column filtering dramatically
improves performance and reduces I/O bandwidth consumption. In addition, column filtering
also applies to indexes, allowing for even faster query performance.
Exadata performs joins between large tables and small lookup tables, a very common scenario
for data warehouses with star schemas. This is implemented using Bloom Filters, which are a
very efficient probabilistic method to determine whether a row is a member of the desired result
set.
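A minimal Bloom filter shows the mechanism: the filter is built from the small table's join keys and answers either "definitely not a match" or "possibly a match", so the cell can discard most non-matching fact rows with no false negatives. The sketch below is ours; Exadata's actual filter sizing and hashing are internal.

```python
# Minimal Bloom filter sketch of the join offload: the database builds a
# filter from the small lookup table's join keys and ships it to the
# cell, which discards fact rows that cannot possibly match.
import hashlib

class BloomFilter:
    def __init__(self, nbits=1024, nhashes=3):
        self.nbits, self.nhashes = nbits, nhashes
        self.bits = 0

    def _positions(self, key):
        for i in range(self.nhashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))

# Build the filter from the small dimension table's join keys...
lookup_keys = [f"store_{i}" for i in range(50)]
bf = BloomFilter()
for k in lookup_keys:
    bf.add(k)

# ...then, in the cell, prune fact rows before shipping them.
fact_rows = [f"store_{i}" for i in range(500)]
survivors = [k for k in fact_rows if bf.might_contain(k)]
assert all(k in survivors for k in lookup_keys)  # no false negatives
```

False positives are possible but rare; the database server re-checks the surviving rows, so correctness never depends on the filter.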
With Oracle Database 11g Release 2, several new powerful Smart Scan and offload capabilities
are provided with Exadata storage. These include: Storage Indexing technology, Smart Scan
offload of new Hybrid Columnar Compressed Tables, Smart Scan offload of encrypted
tablespaces and columns, and offload of data mining model scoring.
Storage Indexing
Storage Indexes are a very powerful capability provided in Exadata storage that helps avoid I/O
operations. The Exadata Storage Server Software creates and maintains a Storage Index in
Exadata memory. The Storage Index keeps track of minimum and maximum values of columns
for tables stored on that cell. When a query specifies a WHERE clause, but before any I/O is
done, the Exadata software examines the Storage Index to determine if rows with the specified
column value exist in the cell by comparing the column value to the minimum and maximum
values maintained in the Storage Index. If the column value is outside the minimum and
maximum range, scan I/O for that query is avoided. Many SQL operations will run dramatically
faster because large numbers of I/O operations are automatically replaced by a few in-memory
lookups. To minimize operational overhead, Storage Indexes are created and maintained
transparently and automatically by the Exadata Storage Server Software.
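The pruning logic can be sketched as follows, with invented region sizes standing in for Exadata's internal structures.

```python
# Sketch of the Storage Index idea: keep a (min, max) summary per storage
# region and skip any region whose range cannot contain the predicate
# value. Region size here is invented; this is not the actual structure.
REGION = 4  # rows per region in this toy model

def build_storage_index(values):
    return [(min(values[i:i + REGION]), max(values[i:i + REGION]))
            for i in range(0, len(values), REGION)]

def regions_to_scan(index, target):
    return [i for i, (lo, hi) in enumerate(index) if lo <= target <= hi]

# Data loaded in roughly sorted batches prunes especially well.
col = [1, 2, 2, 3, 10, 11, 12, 13, 20, 21, 22, 23]
idx = build_storage_index(col)
print(regions_to_scan(idx, 11))  # -> [1]: only one of three regions is read
```

A value outside every region's range (say 99) prunes all I/O for that scan, which is the best case the text describes.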
Another new feature of Oracle Database 11g Release 2 is Hybrid Columnar Compressed Tables.
These new tables offer a high degree of compression for data that is bulk loaded and queried.
Smart Scan processing of Hybrid Columnar Compressed Tables is provided and column
projection and filtering are performed within Exadata. In addition, the decompression of the data
is offloaded to Exadata eliminating CPU overhead on the database servers. Given the typical ten-
fold compression of Hybrid Columnar Compressed Tables, this effectively increases the I/O rate
ten-fold compared to uncompressed data.
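The claim is simple arithmetic: if each block holds ten times the logical data, a cell scanning at its raw disk rate delivers ten times the effective data rate. Using the SAS figure quoted earlier (our calculation, not an Oracle benchmark):

```python
# The arithmetic behind the "ten-fold" claim: scanning compressed blocks
# moves fewer physical bytes per logical byte of data.
raw_scan_gb_s = 1.5        # per-cell raw disk bandwidth quoted earlier for SAS
compression_ratio = 10     # typical Hybrid Columnar Compression ratio
effective_gb_s = raw_scan_gb_s * compression_ratio
print(effective_gb_s)      # -> 15.0 GB/second of logical data per cell
```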
New in Exadata is the Smart Scan offload processing of Encrypted Tablespaces (TSE) and
Encrypted Columns (TDE). While the prior release of Exadata fully supported the use of TSE
and TDE on Exadata it did not benefit from Exadata offload processing. This enhancement
increases performance when accessing confidential data.
Another new function offloaded to Exadata is Data Mining model scoring. This makes Exadata
and the Database Machine an even better and more performant platform for deploying and
analyzing data warehouses. All data mining scoring functions (e.g., prediction_probability)
are offloaded to Exadata for processing. This will not only speed warehouse analysis but reduce
database server CPU consumption and the I/O load between the database server and Exadata
storage.
Two other database operations that are offloaded to Exadata are incremental database backups
and tablespace creation. The speed and efficiency of incremental database backups has been
significantly enhanced with Exadata. The granularity of change tracking in the database is much
finer when Exadata storage is used. Changes are tracked at the individual Oracle block level with
Exadata rather than at the level of a large group of blocks. This results in less I/O bandwidth
being consumed for backups and faster running backups.
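A small model shows why granularity matters: with coarse tracking, one changed block forces its entire tracked group into the backup, while block-level tracking copies only what changed. The group size and change pattern below are invented for illustration.

```python
# Sketch of change-tracking granularity for incremental backups: any
# tracked unit containing a changed block must be backed up in full.
# Sizes and the change pattern are made up for illustration.
BLOCK = 8 * 1024             # an 8 KB database block
changed_blocks = {7, 1000, 50000}

def backup_bytes(changed, granularity_blocks):
    units = {b // granularity_blocks for b in changed}
    return len(units) * granularity_blocks * BLOCK

coarse = backup_bytes(changed_blocks, 4096)  # track 4096-block groups
fine = backup_bytes(changed_blocks, 1)       # track individual blocks
print(coarse // fine)                        # coarse backups copy far more data
```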
With Exadata, the create file operation is also executed much more efficiently. For example, when
issuing a Create Tablespace command, instead of operating synchronously with each block of the
new tablespace being formatted in server memory and written to storage, an iDB command is
sent to Exadata instructing it to create the tablespace and format the blocks. Host memory usage
is reduced and I/O associated with the creation and formatting of the tablespace blocks is
offloaded. The I/O bandwidth saved with these operations means more bandwidth is available
for other business critical work.
Consumer groups can be defined based on attributes such as the username, client program name, function, or length of time the query has been running.
Once these consumer groups are defined, the user can set a hierarchy of which consumer group
gets precedence in I/O resources and how much of the I/O resource is given to each consumer
group. This hierarchy determining I/O resource prioritization can be applied simultaneously to
both intra-database operations (i.e. operations occurring within a database) and inter-database
operations (i.e. operations occurring among various databases).
When Exadata storage is shared between multiple databases you can also prioritize the I/O
resources allocated to each database, preventing one database from monopolizing disk resources
and bandwidth, to ensure user defined SLAs are met. For example, you may have two databases
sharing Exadata storage as depicted below.
[Figure: Database A (single-instance) and Database B (RAC) sharing Exadata storage.]
Business objectives dictate that each of these databases has a relative value and importance to the
organization. It is decided that database A should receive 33% of the total I/O resources
available and that database B should receive 67% of the total I/O resources. To ensure the
different users and tasks within each database are allocated the correct relative amount of I/O
resources, various consumer groups are defined.
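The two-level allocation can be sketched as inter-database percentages multiplied by intra-database consumer-group percentages. Only the 33/67 split comes from the example above; the consumer groups and their internal splits below are invented for illustration.

```python
# Sketch of two-level I/O allocation: inter-database shares (33/67, from
# the example) times intra-database consumer-group shares (invented)
# give each group's slice of the cell's I/O.
inter_db = {"A": 0.33, "B": 0.67}
intra_db = {
    "A": {"reporting": 0.60, "batch": 0.40},
    "B": {"oltp": 0.80, "maintenance": 0.20},
}

shares = {(db, grp): inter_db[db] * pct
          for db, groups in intra_db.items()
          for grp, pct in groups.items()}

for (db, grp), share in sorted(shares.items()):
    print(f"{db}/{grp}: {share:.0%} of cell I/O")
assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares cover all I/O
```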
[Figure: Customer-reported Exadata speedups. One chart (scale 0-80x) shows a 28x average
speedup for operations including Tablespace Creation, Index Creation, a Handset to Customer
Mapping Report, and a CRM Customer Discount Report. A second chart (scale 0-50x) shows a
16x average speedup for retail workloads including Merchandising Level 1 Detail (Period Ago,
Current - 52 weeks, and by Week), Supply Chain Vendor - Year - Item Movement, Materialized
Views Rebuild, Date to Date Movement Comparison - 53 weeks, Prompt 04 Clone for ACL audit,
Sales and Customer Counts, Gift Card Activations, and a Recall Query.]
As discussed earlier, the Exadata cell is a server that runs Oracle Enterprise Linux as well as
the Oracle provided Exadata software. When first started, the cell boots up like any other
computer into Exadata storage serving mode. The first two disk drives have a small Logical Unit
Number (LUN) slice called the System Area, approximately 13 GB in size, reserved for the OEL
operating system, Exadata software, and configuration metadata. The System Area contains
Oracle Database 11g Automatic Diagnostic Repository (ADR) data, and other metadata about
the Exadata cell. The administrator does not have to manage the System Area LUN, as it is
automatically created. Its contents are automatically mirrored across the physical disks to protect
against drive failures, and to allow hot disk swapping. The remaining portion of these two disk
drives is available for user data.
Automatic Storage Management (ASM) is used to manage the storage in the Exadata cell. ASM
volume management, striping, and data protection services make it the optimum choice for
volume management. ASM provides data protection against drive and cell failures, the best
possible performance, and extremely flexible configuration and reconfiguration options.
A Cell Disk is the virtual representation of the physical disk, minus the System Area LUN (if
present), and is one of the key disk objects the administrator manages within an Exadata cell. A
Cell Disk is represented by a single LUN, which is created and managed automatically by the
Exadata software when the physical disk is discovered.
Cell Disks can be further virtualized into one or more Grid Disks. Grid Disks are the disk entity
assigned to ASM, as ASM disks, to manage on behalf of the database for user data. The simplest
case is when a single Grid Disk takes up the entire Cell Disk. But it is also possible to partition a
Cell Disk into multiple Grid Disk slices. Placing multiple Grid Disks on a Cell Disk allows the
administrator to segregate the storage into pools with different performance or availability
requirements. Grid Disk slices can be used to allocate hot, warm and cold regions of a
Cell Disk, or to separate databases sharing Exadata disks. For example a Cell Disk could be
partitioned such that one Grid Disk resides on the higher performing portion of the physical disk
and is configured to be triple mirrored, while a second Grid Disk resides on the lower
performing portion of the disk and is used for archive or backup data, without any mirroring. An
Information Lifecycle Management (ILM) strategy could be implemented using Grid Disk
functionality.
[Figure: A physical disk is presented as a Cell Disk, which can be partitioned into multiple
Grid Disks.]
The following example illustrates the relationship of Cell Disks to Grid Disks in a more
comprehensive Exadata storage grid.
Once the Cell Disks and Grid Disks are configured, ASM disk groups are defined across the
Exadata configuration. Two ASM disk groups are defined: one across the hot grid disks, and a
second across the cold grid disks. All of the hot grid disks are placed into one ASM disk
group and all of the cold grid disks are placed in a separate disk group. When the data is loaded
into the database, ASM will evenly distribute the data and I/O within disk groups. ASM
mirroring can be activated for these disk groups to protect against disk failures for both, either,
or neither of the disk groups. Mirroring can be turned on or off independently for each of the
disk groups.
[Figure: Hot and cold ASM disk groups spanning the hot and cold grid disks of two Exadata
cells.]
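ASM's even data placement within a disk group can be modeled as simple striping of extents across the group's grid disks. This is a cartoon of the real allocator, which also mirrors each extent into a separate failure group.

```python
# Sketch of ASM's even placement within a disk group: extents are spread
# across all grid disks so I/O load is balanced. Real ASM also mirrors
# extents into different failure groups (different cells).
def place_extents(n_extents, disks):
    placement = {d: [] for d in disks}
    for e in range(n_extents):
        placement[disks[e % len(disks)]].append(e)  # round-robin stripe
    return placement

hot_group = ["cell1_hot", "cell2_hot", "cell3_hot"]  # hypothetical grid disks
layout = place_extents(12, hot_group)
print([len(v) for v in layout.values()])  # -> [4, 4, 4]: evenly spread
```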
Lastly, to protect against the failure of an entire Exadata cell, ASM failure groups are defined.
Failure groups ensure that mirrored ASM extents are placed on different Exadata cells.
[Figure: An ASM disk group spanning two Exadata cells, with failure groups placing mirrored
extents on different cells.]
Exadata storage can be used in addition to the storage arrays and products traditionally used to
store the Oracle database. A single database can be partially stored on Exadata storage and
partially on traditional storage devices. Tablespaces can reside on Exadata storage, non-Exadata
storage, or a combination of the two; this is transparent to database operations and applications.
But to benefit from the Smart Scan capability of Exadata storage, the entire tablespace must
reside on Exadata storage. This co-residence and co-existence is a key feature to enable online
migration to Exadata storage.
An online non-disruptive migration to Exadata storage can be done for an existing database if
the existing database is deployed on ASM and is using ASM redundancy. The steps to
accomplish this are:
1. Add an Exadata grid disk to the existing ASM disk group.
2. ASM then automatically rebalances the data within the disk group moving a proportional
amount of data to the newly added Exadata grid disk.
3. Then a non-Exadata disk is dropped from the ASM disk group. ASM then rebalances,
migrating the data from the non-Exadata disk to the other disks in the disk group.
4. The above is repeated until the entire database has been migrated onto Exadata storage.
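The four steps above can be simulated in a few lines. "Rebalance" is modeled here as re-striping extents evenly across the current disks, which is only a cartoon of ASM's actual extent movement; disk names are hypothetical.

```python
# Simulation of the migration steps: add an Exadata grid disk, rebalance,
# drop a legacy disk, rebalance, repeat. Disk names are invented, and
# "rebalance" is modeled as even re-striping of extents.
def rebalance(extents, disks):
    layout = {d: [] for d in disks}
    for i, e in enumerate(extents):
        layout[disks[i % len(disks)]].append(e)
    return layout

extents = list(range(12))
disks = ["legacy1", "legacy2"]

for new_disk, old_disk in [("exadata1", "legacy1"), ("exadata2", "legacy2")]:
    disks.append(new_disk)        # 1. add an Exadata grid disk to the group
    rebalance(extents, disks)     # 2. ASM rebalances data onto it
    disks.remove(old_disk)        # 3. drop a non-Exadata disk...
    rebalance(extents, disks)     #    ...and ASM rebalances again
                                  # 4. repeat until fully migrated

print(disks)  # -> ['exadata1', 'exadata2']: the data now lives entirely on Exadata
```

Because the database stays open throughout each rebalance, this is the online, non-disruptive path the text describes.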
In addition, migration can be done using Oracle Recovery Manager (RMAN) to backup from
traditional storage and restore the data onto Exadata. Oracle Data Guard can also be used to
facilitate a migration. This is done by first creating a standby database based on Exadata storage.
The standby can use Exadata storage while the production database remains on traditional
storage. By executing a fast switchover, taking just seconds, you can transform the standby
database into the production database. All these approaches provide a built-in safety net as you
can undo the migration very gracefully if unforeseen issues arise.
Exadata has been designed to incorporate the same standard of high availability (HA) customers
have come to expect from Oracle products. With Exadata, all database features and tools work
just as they do with traditional non-Exadata storage. Users and database administrators will use
familiar tools and be able to leverage their existing Oracle Database knowledge and procedures.
With the Exadata architecture, all single points of failure are eliminated. Familiar features such as
mirroring, fault isolation, and protection against drive and cell failure have been incorporated
into Exadata to ensure continual availability and protection of data. Other features to ensure high
availability within the Exadata server are described below.
Data Guard
Oracle Data Guard is the software feature of Oracle Database that creates, maintains, and
monitors one or more standby databases to protect your database from failures, disasters, errors,
and corruptions. Data Guard works unmodified with Exadata and can be used for both
production and standby databases. By using Active Data Guard with Exadata storage, queries
and reports can be offloaded from the production database to an extremely fast standby database
and ensure that critical work on the production database is not impacted while still providing
disaster protection.
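Enabling real-time query on a physical standby is a short sequence. This sketch assumes an Oracle Database 11g physical standby with managed recovery running:

```sql
-- Pause redo apply, open the standby read-only, then resume apply;
-- the standby now serves queries while staying current (Active Data Guard)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
```

Reporting sessions connected to the standby then see committed production data with near-real-time currency.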
Flashback
Exadata leverages Oracle Flashback Technology to provide a set of features to view and restore
data back in time. The Flashback feature works in Exadata the same as it would in a non-Exadata
environment. The Flashback features offer the capability to query historical data, perform change
analysis, and perform self-service repair to recover from logical corruptions while the database is
online. In essence, with the built-in Oracle Flashback features, Exadata allows the user to have
snapshot-like capabilities and restore a database to a time before an error occurred.
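Two of these features can be illustrated briefly. The table name and time window below are hypothetical, chosen only to show the syntax; Flashback Table additionally requires row movement to be enabled:

```sql
-- Flashback Query: view historical data as of a point in the past
SELECT * FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);

-- Flashback Table: self-service repair of a logical error, online
ALTER TABLE orders ENABLE ROW MOVEMENT;
FLASHBACK TABLE orders TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
```

Both operations run while the database remains open, so users are not interrupted by the repair.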
Recovery Manager (RMAN)
Exadata works with Oracle Recovery Manager (RMAN), a command-line and Enterprise
Manager-based tool, to allow efficient Oracle database backup and recovery. All existing RMAN
scripts work unchanged in the Exadata environment. RMAN is designed to work intimately with
the server, providing block-level corruption detection during backup and restore. RMAN
optimizes performance and space consumption during backup with file multiplexing and backup
set compression, and integrates with Oracle Secure Backup (OSB) and third party media
management products for tape backup.
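Since existing RMAN scripts run unchanged on Exadata, a standard script such as the following sketch works as-is (RMAN command syntax, shown in a generic fence; the compressed-backupset option corresponds to the backup set compression mentioned above):

```sql
-- RMAN commands, run from an RMAN session connected as TARGET
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;

-- Restore and recovery are likewise unchanged
RESTORE DATABASE;
RECOVER DATABASE;
```

Block-level corruption detection happens automatically as RMAN reads the blocks during these operations.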
CONCLUSION
Businesses today increasingly need to leverage a unified database platform to enable the
deployment and consolidation of all their applications onto one common infrastructure. Whether
the workload is OLTP, data warehousing, or mixed, a common infrastructure delivers the
efficiencies and reusability the datacenter needs and makes in-house cloud computing a reality.
Building or using custom special-purpose systems for different applications is wasteful and
expensive. The need to process more data increases every day while corporations are also finding
their IT budgets being squeezed. Examining the total cost of ownership (TCO) for IT software
and hardware leads one to choose a high-performance common infrastructure for deployments of
all applications.
By incorporating Exadata and the Database Machine into the IT infrastructure, companies will:
• Accelerate database performance and be able to do much more in the same amount of time.
• Handle change and growth in scalable and incremental steps by consolidating deployments
onto a common infrastructure.
• Deliver mission-critical data availability and protection.
Exadata and the Database Machine provide this solution.
A Technical Overview of the Sun Oracle Exadata Storage Server and Database Machine
September 2009
Author: Ronald Weiss

Oracle Corporation
World Headquarters
500 Oracle Parkway
Redwood Shores, CA 94065
U.S.A.

Worldwide Inquiries:
Phone: +1.650.506.7000
Fax: +1.650.506.7200
oracle.com

Copyright © 2009, Oracle and/or its affiliates. All rights reserved. This document is provided for
information purposes only and the contents hereof are subject to change without notice. This
document is not warranted to be error-free, nor subject to any other warranties or conditions,
whether expressed orally or implied in law, including implied warranties and conditions of
merchantability or fitness for a particular purpose. We specifically disclaim any liability with
respect to this document and no contractual obligations are formed either directly or indirectly
by this document. This document may not be reproduced or transmitted in any form or by any
means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be
trademarks of their respective owners.