
Using AWR Report on Exadata
Exadata Performance Diagnostics with AWR

WHITE PAPER / SEPTEMBER 18, 2018


DISCLAIMER
The following is intended to outline our general product direction. It is intended for information
purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any
material, code, or functionality, and should not be relied upon in making purchasing decisions. The
development, release, and timing of any features or functionality described for Oracle’s products
remains at the sole discretion of Oracle.

2 WHITE PAPER / Using AWR Report on Exadata


Table of Contents
Introduction ................................................................................................... 5

AWR Overview .............................................................................................6

Performance and Scope ............................................................................................................... 6

Exadata Support in AWR ............................................................................. 6

Challenges and AWR Exadata Solutions ..................................................... 7

Noisy Neighbor .............................................................................................................................. 7

Uneven Workload on Cells or Disks .............................................................................................. 8

Configuration Differences ............................................................................................................ 10

High Load .................................................................................................................................... 11

Analyzing Exadata-specific AWR Data ......................................................12

Reviewing the Database Statistics .............................................................................................. 12

Exadata Configuration ................................................................................................................. 13

IO Distribution ............................................................................................................................. 14

Smart Scans ................................................................................................................................ 15

Flash Cache ................................................................................................................................ 15

IO Reasons ................................................................................................................................. 18

Top Databases ............................................................................................................................ 19

Analysis Summary....................................................................................................................... 20



Exadata Performance Data ........................................................................ 21

Conclusion .................................................................................................. 21

Reference ................................................................................................... 22



INTRODUCTION

Oracle Exadata is engineered to deliver dramatically better performance,
cost effectiveness, and availability for Oracle databases. Exadata features
a modern cloud-based architecture with scale-out high-performance
database servers, scale-out intelligent storage servers with state-of-the-art
PCI flash, and an ultra-fast InfiniBand internal fabric. Unique software
algorithms in Exadata implement database intelligence in storage,
compute, and InfiniBand networking to deliver higher performance and
capacity at lower costs than other platforms.
Exadata runs all types of database workloads including Online Transaction
Processing (OLTP), Data Warehousing (DW), In-Memory Analytics and
consolidation of mixed workloads. Exadata can be deployed on-premises
as the foundation for a private database cloud, or can be acquired using a
subscription model and deployed in the Oracle Public Cloud or Cloud at
Customer with all infrastructure managed by Oracle.
As customers around the world make Exadata the platform of choice for
enterprise database deployment, and consolidate an increasing number of
databases onto Exadata systems, monitoring the performance of these
databases from an Exadata system standpoint becomes more important
than ever. This paper outlines how the Oracle Database Automatic
Workload Repository (AWR) feature can be used in conjunction with
Exadata to monitor and analyze database performance characteristics from
an Exadata perspective.
The contents of this paper apply to all Exadata deployments – whether on-
premises, Public Cloud, or Cloud at Customer. Specifically, with Exadata
Cloud, since customers have complete administrative control over their
databases, the Exadata-specific AWR capabilities apply in the same
manner as when these databases are deployed on-premises.



AWR OVERVIEW
The Automatic Workload Repository (AWR) feature, introduced in Oracle Database 10g, is the most
widely used performance diagnostics tool for the Oracle database. The AWR feature collects,
processes, and maintains database performance statistics data for problem detection and self-tuning
purposes. This process of data collection is repeated on a regular time period and the results are
captured in an AWR snapshot. The delta values, calculated from the values captured by the AWR
snapshot, represent the changes for each statistic over the time period, and can be viewed through an
AWR report for further analysis. By default, the AWR snapshots are taken at hourly intervals, and the
snapshots are maintained for eight days. AWR reports can also be generated on-demand for specific
time intervals. Please refer to “Gathering Database Statistics” in Oracle Database Performance Tuning
Guide for more details on AWR.
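The delta mechanism described above can be sketched in a few lines. The statistic names and counter values below are illustrative, not taken from a real snapshot; actual AWR data lives in the DBA_HIST_* views:

```python
# Sketch of the delta calculation AWR performs between two snapshots.
# Database statistics are cumulative counters; an AWR report shows the
# change in each counter over the snapshot interval.

def snapshot_delta(begin_snap: dict, end_snap: dict) -> dict:
    """Return per-statistic deltas between two cumulative snapshots."""
    return {stat: end_snap[stat] - begin_snap.get(stat, 0)
            for stat in end_snap}

# Hypothetical cumulative counter values at two hourly snapshots
begin = {"physical read total IO requests": 1_200_000,
         "physical write total IO requests": 300_000}
end   = {"physical read total IO requests": 1_756_000,
         "physical write total IO requests": 412_000}

delta = snapshot_delta(begin, end)
print(delta["physical read total IO requests"])  # 556000
```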

Performance and Scope


When analyzing performance issues, it is important to understand the scope of the performance
problem, and to ensure that the data and the tools used match the scope of the problem.

For example, if an issue is localized to a small set of users or SQL statements, a SQL Monitor
report [1] or a SQL Details report will have data that is relevant to the scope of the problem. A SQL Monitor
report provides detailed statistics about a single execution of a SQL statement or a DB operation,
while SQL Details provides detailed statistics about multiple executions of a SQL statement within a
specified time frame.

If the performance issues are instance-wide or database-wide, then an AWR report will contain data
and statistics for the instance or the entire database. Active Session History, or ASH, which samples
active sessions, can be used for both instance-wide and database-wide issues, along with localized
issues, as it collects data across multiple dimensions, which can be used to filter the data.

EXADATA SUPPORT IN AWR


Exadata support in AWR was added in Oracle Database 12.1.0.2.0 and Exadata System Software
12.1.2.1.0. By including Exadata statistics in the AWR report, customers can now have visibility into
the storage tier through a unified report, without having to collect more data from the storage servers.
This is of particular interest to Exadata customers with Public Cloud and Cloud at Customer
environments as customers do not have access to the storage servers in these environments.

The Exadata statistics are only available in the HTML and Active-HTML formats of the AWR Instance
report, and the AWR Global report; the statistics are not available in the text format of the report. The
Exadata sections in the report are also constantly being enhanced, as new features are included in
new releases of Exadata software. Additionally, Exadata statistics are also available in AWR reports in
Enterprise Manager. The References section in this whitepaper provides a list of documents
describing how to manage Exadata with Enterprise Manager.

It is also important to note that with the addition of Exadata storage level statistics in the AWR report,
the performance tuning methodology does not change. Users should first look at DB time, and find the
top consumers of DB time in order to address performance issues. Only when it has been determined
that there may be IO issues should one start looking at the Exadata sections. The Exadata sections
are not meant to replace, but instead are meant to complement existing tools and methodologies.

[1] The use of AWR requires the Oracle Diagnostics Pack. The SQL Monitor report and the SQL Details report require the Oracle Diagnostics
and Tuning Pack.



CHALLENGES AND AWR EXADATA SOLUTIONS
A fairly common challenge for Oracle DBAs is to analyze and understand database
performance characteristics that are directly related to the underlying infrastructure, such as servers,
network, and storage. Optimal database performance depends on an optimal configuration of this
infrastructure; however, if this infrastructure is misconfigured or some components become faulty,
accurately diagnosing the resulting database performance issues and correlating them to the specific
component is not an easy task.

The value proposition of an engineered system such as Exadata is that statistics collected and
maintained on the Exadata storage servers are integrated directly and automatically into AWR. As will
be seen later in this paper, this makes the diagnosis process remarkably efficient compared to the
time and resources that would otherwise be spent if these databases were deployed on a generic
infrastructure. Oracle DBAs also benefit from the fact that Exadata-specific AWR content continues to
be enhanced as the core Exadata platform gains additional software and hardware capabilities.

The following sections outline specific scenarios where the Exadata-specific AWR capabilities may be
leveraged.

Noisy Neighbor
Exadata storage adds a new scope when analyzing performance issues. The storage subsystem may
be shared by multiple databases, and as such, the statistics that come from the storage layer are for
the entire system – i.e. they are not restricted to a single database or a single database instance.

In an Exadata system where several databases have been consolidated, it is important to identify the
databases that could be consuming a significant amount of the IO bandwidth on the system, and thus
affecting other databases on the system. This is commonly referred to as the noisy neighbor problem
in cloud deployments. Note however that it is strongly recommended to leverage Exadata’s built-in IO
Resource Management (IORM) capabilities such that IO requests within an Exadata Storage Server
can be prioritized and scheduled based on configured resource plans. Please refer to “Understanding
IO Resource Management” in Exadata System Software User’s Guide for more details on IORM.
To address this noisy neighbor issue, the AWR report includes a Top Databases section [2], ranked by
both IO requests and throughput. At AWR snapshot time, a subset of databases is captured into AWR – which
databases are captured is based on an internal metric that identifies the top N in each of the storage servers.
As seen in Figure 1, at report time, the data is aggregated in order to report the top databases
captured, by IO requests and IO throughput. In addition, the data is broken down by IOs on flash
devices and IOs on hard disks.
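The report-time aggregation can be illustrated with a short sketch. The cell names, database names, and values below are hypothetical:

```python
# Illustrative aggregation of per-cell "top database" IO statistics into a
# system-wide view, similar to what the AWR report does at report time.
from collections import defaultdict

per_cell = [
    {"cell": "cell01", "db": "DB0001", "io_requests": 5200, "mb": 410},
    {"cell": "cell01", "db": "DB0003", "io_requests": 9100, "mb": 120},
    {"cell": "cell02", "db": "DB0001", "io_requests": 4800, "mb": 395},
    {"cell": "cell02", "db": "DB0003", "io_requests": 8700, "mb": 115},
]

totals = defaultdict(lambda: {"io_requests": 0, "mb": 0})
for row in per_cell:
    totals[row["db"]]["io_requests"] += row["io_requests"]
    totals[row["db"]]["mb"] += row["mb"]

# Rank by IO requests, as in "Top Databases by IO Requests"
top = sorted(totals.items(), key=lambda kv: kv[1]["io_requests"], reverse=True)
print(top[0][0])  # DB0003 – the noisiest neighbor by requests
```

Note that the real report ranks only the databases captured at snapshot time, which is why it shows %Captured rather than %Total.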

[2] If security is a concern, there is a cell attribute, dbPerfDataSuppress, that can be used to suppress databases from appearing in
the v$cell_db view of other databases, and in the subsequent AWR views that capture the v$cell_db data. The IOs of databases
listed in dbPerfDataSuppress are included in “OTHER” when the view is queried from a different database. Please
refer to the Oracle Exadata System Software User’s Guide on listing, altering and describing cell attributes.



Figure 1. Example of Top Databases by IO requests. Notice the report shows %Captured instead of %Total, as not
all statistics for all databases are captured by AWR. This data is available aggregated for the entire system, as
shown above, and per cell.

Uneven Workload on Cells or Disks


Exadata is a parallel system, and the workload is expected to be evenly distributed across all storage
servers and disks. If a storage server or disk is performing more work compared to its peers, it has the
potential to cause performance problems. As is commonly the case for parallel systems, the system is
only as fast as its slowest component.

The Exadata report performs simple outlier analysis, using a number of metrics, to compare devices
against their peers. The devices are grouped and compared by device type and size, as different device
types do not have the same performance characteristics. For example, a flash device is expected to
perform very differently from a hard disk. Similarly, a 1.6TB flash device may not be able to sustain the
same amount of IO as a 6.4TB flash device.

The statistics used for outlier analysis include OS statistics, similar to iostat, which include IOPs,
throughput, %utilization, service time and queue time. It also includes cell server statistics, which
break down IOPs, throughput and latencies by the type of IO (read or write), and the size of the IO
(small or large).

In addition to outlier analysis, the Exadata AWR report identifies if a system has reached maximum
capacity. The maximum values used in the report are queried from the cell, and are consistent with
what is published in the Exadata data sheets. Since customer workloads will vary, these maximum
numbers that are reported are meant to be used as guidelines rather than hard rules.
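A minimal sketch of this kind of peer comparison follows. The 1.5x threshold, disk names, and IOPs values are illustrative assumptions, not the report's actual algorithm:

```python
# Sketch of simple outlier analysis: devices are grouped by type/size,
# and each device's IOPs is compared against its peer-group mean.
from statistics import mean

def find_outliers(iops_by_disk: dict, threshold: float = 1.5) -> list:
    """Flag disks doing more than `threshold` times the peer-group mean."""
    avg = mean(iops_by_disk.values())
    return [d for d, v in iops_by_disk.items() if v > threshold * avg]

hard_disks = {"disk_00": 420, "disk_01": 455, "disk_02": 440,
              "disk_03": 1310, "disk_04": 430}

print(find_outliers(hard_disks))   # ['disk_03']

# Capacity check against a published maximum (a guideline, not a hard rule)
MAX_HD_IOPS = 6408  # example figure, as in the text
total = sum(hard_disks.values())
print(total > MAX_HD_IOPS)         # False
```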




Figure 2a. Example of simple outlier analysis. There are no outliers in this example, but the report has identified that the
hard disks may be at maximum IOPs capacity, as indicated by the (*) and the dark red background. The stated maximum
for the system is 6,408 IOPs for hard disks, and the report is currently showing 9,355.83 IOPs.


Figure 2b. Example of simple outlier analysis. In this example, the report has identified that the hard disks are at
maximum capacity. It has also identified two disks that are performing more IOPs, compared to their peers.



Configuration Differences
Similar to uneven workload on storage servers or disks, configuration differences across the storage
servers could potentially contribute to performance issues. The configuration issues could be
differences in flash cache or flash log sizes, or differences in the number of cell disks or grid disks in
use.

As shown in Figure 3 and Figure 3a, the AWR report includes Exadata configuration information, and
identifies storage servers that are configured differently.

Figure 3. Example of configuration information captured. Storage Server Model will include the cell names. ‘All’
indicates identical configuration across the storage servers.



Figure 3a. Example of configuration information captured from a heterogeneous system. Storage Server Model
includes cell names per model. ‘All’ indicates identical configuration across the storage servers, and the cell names
are displayed when there are configuration differences, as seen in the Exadata Storage Server Version section.

High Load
Changes in performance can be caused by increased load on the system – either increased IO load or
increased CPU utilization on the storage servers. The increased IO load can be caused by
maintenance activities, such as backups, or by changes in user IO, due to increased user workload or
possible changes in execution plans.

On an Exadata system, there is additional information sent with each IO that includes the reason why
the database is performing the IO. With the IO reason, we can easily determine if the additional IO
load is caused by maintenance activity, or by increased database workload.

The reports also have visibility into the Exadata smart features, including smart scans, smart flash log,
and smart flash cache.



Figure 4. Example of IO Reasons by IO requests for each storage cell. This shows IO requests are caused by
typical database workload – redo log writes and buffer cache reads.

ANALYZING EXADATA-SPECIFIC AWR DATA


In order to get familiar with the AWR sections, we will walk through an example that could reflect a real
customer use case.

The scenario is that this customer had just deployed a database on four compute nodes of an Exadata
X5-2 full rack, and immediately started experiencing performance problems. The following sections
outline the diagnosis that could easily be performed by analyzing the Exadata-specific sections of the
report.

Reviewing the Database Statistics


The first check that was performed was validating the IO characteristics of the system. Figure 5 shows
the Top Timed Events from a single instance, but in this case all 4 instances looked fairly similar. The
wait events show that almost 75% of DB time was spent on ‘cell single block physical read’, with an
average wait time of 8.32ms. This read latency could imply that the data is being read from hard disk,
rather than flash cache.

Figure 5. Top 10 Foreground Events by Total Wait Time from the AWR report.



Upon reviewing the database statistics further, it was noticed that a relatively small amount of IO was
issued by this instance, at 154.6 IOPs (Figure 6). Across the 4 instances, this would only be about 600
total IOPs.

However, Figure 6 also shows that the optimized requests are almost 0. Optimized IOs on Exadata
include:

• IOs on smart flash cache

• IOs from smart scans, including IOs saved by storage index, and IOs saved by Columnar Cache

This database did not appear to be running a smart scan workload, so the lack of optimized IOs
further supports the theory that the IOs are not using flash cache.

Figure 6. IO Profile from the AWR report shows minimal IO from this instance.

Figure 7 shows the types of IOs that the database instance is issuing, and within this list, there is
nothing remarkable about the types of IOs that would prevent the database from using flash cache.
Most of the reads are Buffer Cache Reads, which are disk reads that are performed by the database in
order to populate the buffer cache. These reads should normally be satisfied from flash cache.

Figure 7. IOStat by Function summary from the AWR report shows normal database IO that should be cached in
flash cache.

Once an issue with IO performance has been verified, the analysis can transition to the Exadata
configuration sections and statistics.

Exadata Configuration
A review of the Exadata Configuration section shows that it is an X5-2 Full Rack with 14 storage
servers (Figure 8). The rest of the Configuration and Health sections did not identify any anomalies –
all storage servers were configured the same way, in the expected configuration. There were no alerts,
nor were any of the disks offline. Those sections have not been included in this paper for brevity.

Figure 8. Exadata Storage Server Model from the AWR report shows an X5-2 full rack.

IO Distribution
A review of the IOs on the storage servers shows relatively little IO occurring on the
storage servers (Figure 9). However, there is more IO on the hard disks, at 564.28/s per cell, than on
the flash devices, at 119.80/s per cell. This distribution is not typical in an Exadata environment, as
most of the IOs normally occur on flash.

The outlier sections report the IOs per device type, as different types of devices are expected to have
different performance characteristics. The format used to identify the device type is <F|H>/<size>,
where F is for Flash devices, and H for hard disks.


Figure 9. Cell Server IOPs from the AWR report.



Smart Scans
A review of the smart scan section, shown in Figure 10, was done next to determine if the disk reads
were due to smart scans. This was unlikely to be the case, as Figure 9 showed most of the IOs were
small reads (133.88 small reads/s), while smart scans would typically appear as large reads (Figure 9
shows only 0.15 large reads/s). A quick glance at the smart scan section in Figure 10 verified that the
IOs were not coming from smart scans, as only 1.5MB/s per cell of data was eligible for smart scans.

The Smart IO section also shows an overall picture of smart IO activity on the system, and gives an
idea of how well smart scans are performing, and whether or not the storage servers may be CPU
bound. Note however, when looking at specific smart scan issues for a small set of SQL statements,
the SQL Monitor report would be a better performance diagnostic tool to use.

Figure 10. Smart IO from the AWR report that shows smart scan information.

Flash Cache
The initial suspicion from looking at the database wait events was that the data was being read from
hard disk, and not from flash cache. If that were the case, the expectation would be to see a low Flash
Cache Hit%. Please refer to Table 1 for the definition and calculation of the Flash Cache Hit ratios.

In Figure 11, the Cell OLTP Hit% (as well as the Cell Scan Hit%) is very high at almost 100%. The
Database Flash Cache Hit%, however, is almost 0. So why do we have this discrepancy?

Figure 11. Flash Cache Savings from the AWR report.



STATISTIC                   DESCRIPTION

Database Flash Cache Hit%   The percentage of read requests from the database satisfied from flash
                            cache.

Cell OLTP Hit%              The percentage of OLTP read requests on the storage servers satisfied
                            from flash cache. This is calculated as:
                            100 * (flash cache read requests hit / (flash cache read requests hit + flash cache misses))

Cell Scan Hit%              The percentage of scan requests on the storage servers satisfied from
                            flash cache. This is calculated as:
                            100 * (flash cache scan read bytes / flash cache scan bytes attempted)

Table 1. Flash Cache Hit% definitions
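The ratios in Table 1 can be computed directly from the underlying counters. The sketch below uses illustrative counter values, and the database-level helper is a simplification of the AWR calculation:

```python
# Hit ratios per the Table 1 formulas; counter names mirror the formulas
# above, and all values are illustrative.

def cell_oltp_hit_pct(read_requests_hit: int, misses: int) -> float:
    return 100.0 * read_requests_hit / (read_requests_hit + misses)

def cell_scan_hit_pct(scan_read_bytes: int, scan_bytes_attempted: int) -> float:
    return 100.0 * scan_read_bytes / scan_bytes_attempted

# Cell-level ratios count only flash-cache-eligible IOs, so they can be
# near 100% even when little flash IO is happening...
print(round(cell_oltp_hit_pct(99_500, 500), 1))          # 99.5

# ...while a database-level ratio counts all database read requests,
# so ineligible IOs drag it toward 0.
def db_flash_cache_hit_pct(optimized_reads: int, total_reads: int) -> float:
    return 100.0 * optimized_reads / total_reads

print(round(db_flash_cache_hit_pct(1_000, 950_000), 2))  # 0.11
```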

The Cell Hit% is calculated based on reads that are eligible for caching in flash cache. Exadata has a
Smart Flash Cache that can distinguish between types of IO requests coming from the database, and
whether or not it is beneficial to cache the data in the flash cache. The Cell Hit% does not include
reads that retrieve data that would not benefit from caching in flash cache.

The Database Flash Cache Hit% however considers all IO requests coming from the database.

The discrepancy between the two would indicate that the reads from the database are not eligible for
flash cache.

A review of the distribution of small reads from the cells, as seen in Figure 12, shows the small reads
are almost evenly split between flash and hard disk, with 53.58% of the small reads on hard disk.


Figure 12. Single Block Reads in the AWR report indicates an OLTP workload.



The Flash Cache User Reads Per Second, in Figure 13, shows a very small amount of reads from
Flash Cache. This further supports the theory that the reads are not occurring on the flash cache,
because the IO requests are not eligible for flash cache.

Figure 13. Flash Cache User Reads from the AWR report.

Figure 14 shows the %Hit for the individual cells, and the flash cache behavior is similar across all the
cells. The storage cells are showing almost no reads from Flash Cache, but a high %Hit. This means
that the disk IOs are not caused by Flash Cache misses, but rather are IOs that are purposely
bypassing Flash Cache.

Figure 14. Flash Cache User Reads Efficiency from the AWR report

A review of Flash Cache User Writes, in Figure 15, also shows a minimal amount of writes
on the storage cells. This would seem to indicate that both the reads and the writes issued by
the database are somehow not eligible for flash cache.



Figure 15. Flash Cache User Writes from the AWR report

As a final check in the Flash Cache sections, the Flash Cache Internal Writes section is also reviewed.
These are population write requests: a flash cache miss would normally result in a population
write, so that subsequent requests for the same data hit the flash cache. In this case, there is also a
very small amount of population writes into flash cache.

Figure 16. Flash Cache Internal Writes from the AWR report.
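The population-write behavior described above can be modeled with a toy cache. This is only an illustration of the described behavior, not Exadata's actual caching algorithm:

```python
# Toy model of flash cache population: an eligible read that misses the
# cache triggers an internal "population write" so later reads hit; an
# ineligible read bypasses the cache entirely and populates nothing.

cache = set()
population_writes = 0

def read_block(block: str, eligible: bool) -> str:
    global population_writes
    if not eligible:
        return "disk read, cache bypassed"
    if block in cache:
        return "flash cache hit"
    cache.add(block)          # miss -> populate for future hits
    population_writes += 1
    return "disk read, populated into flash cache"

print(read_block("B1", eligible=True))    # disk read, populated into flash cache
print(read_block("B1", eligible=True))    # flash cache hit
print(read_block("B2", eligible=False))   # disk read, cache bypassed
print(population_writes)                  # 1
```

In the scenario analyzed here, the near-absence of population writes is consistent with the reads being ineligible for caching, rather than with a stream of ordinary cache misses.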

All the data from the Flash Cache sections indicate that there is very little flash cache activity, and that
the IOs are considered as not eligible for caching.

IO Reasons
The IO Reasons section tells us why the IOs are issued on the storage servers. The IOs in IO
Reasons include both reads and writes, as well as hard disk and flash.

In Figure 17, most of the IO reasons are normally cached in flash cache:



• Limit dirty buffer writes – writes issued by DBWR to limit the number of dirty buffers in the buffer
cache

• Data file reads to private memory – these may not always be cached in flash cache, as these are
large reads [3]. However, this is only 17% of the requests.

• Buffer cache reads – reads into the database buffer cache. These are regular user reads, and
should normally be cached in flash cache

• Redo log writes – writes to the redo logs. With smart flash log, these writes are done to both smart
flash log, and the redo logs.

The rest of the requests amount to only ~4%, and are either database control file reads or internal
IO – IOs issued by the storage servers themselves.

Figure 17. IO Reasons by Requests from the AWR report.

Top Databases
Reviewing the Top Databases, in Figure 18, verifies that most of the IO for database DB0003 is on
disk.

[3] Please refer to Oracle Exadata Database Machine System Overview, Appendix A: What’s New in Oracle Exadata Database
Machine, A.2.6 Faster Performance for Large Analytic Queries and Large Loads.



Figure 18. Top Databases by IO requests from the AWR report.

Analysis Summary
What the analysis has shown so far:

• Database is experiencing poor IO performance. 75% of DB time is on ‘cell single block physical
read’, and the average wait time is over 8ms.

• Flash Cache sections indicate very little activity on flash cache, and the IOs are most likely being
considered as not eligible for flash cache.

• IO reasons indicates fairly typical IOs that should normally be eligible for flash cache (with the
possible exception of reads to PGA).

• Top Databases confirms this database is performing most of its IO requests on hard disk, and not
on flash.

A review of all this data suggests that a configuration issue is the most likely cause of the IOs
bypassing the flash cache. Reviewing configuration data with the customer identified
an IORM plan that was inadvertently disabling the use of flash cache for the database. Once the IORM
plan was corrected, the performance issues were resolved.
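As an illustration of what such a misconfiguration can look like, an IORM database plan directive can turn flash cache off for a specific database. A directive along the following lines (the database names and plan shown here are hypothetical; see “Understanding IO Resource Management” in the Exadata System Software User’s Guide for the authoritative CellCLI syntax) would produce exactly the symptoms analyzed above, and setting flashcache=on for the database restores caching:

```
CellCLI> ALTER IORMPLAN dbplan=((name=DB0003, flashcache=off), -
                                (name=other, flashcache=on))
```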



EXADATA PERFORMANCE DATA
In addition to the AWR report, there is also a wealth of performance data available on Exadata, which
includes cell metrics and ExaWatcher.

Table 2 summarizes the available data, along with its relative characteristics:

AWR
• Widely available
• Usually sufficient
• Integrated with existing database tools
• Provides system-level view (all cells), and per-cell view
• Averaged over report interval (default 1 hour)
Available data: configuration information; OS statistics (iostat, etc.); cell server statistics; Exadata smart features; IO reasons; Top Databases

CELL METRICS [4]
• Per cell collection
• Includes cumulative and per-second rates (calculated every 1 minute)
• Retention 7 days
Available data: Exadata “smart” features, such as Flash Cache, Flash Log, IORM, Smart Scans, etc.

EXAWATCHER
• Per cell collection
• Every 5 seconds
• Retention 7 days
• Charting available with GetExaWatcherResults.sh
Available data: OS statistics; Exadata “smart” features, such as Flash Cache, Flash Log, IORM, Smart Scans, etc.

Table 2. Available performance data on Exadata.

CONCLUSION
AWR, the most widely used performance diagnostics tool on the Oracle database, now includes
Exadata statistics. The integration of Exadata statistics in the AWR report enables significantly better
and easier analysis of database performance issues than what would be possible if the databases
were deployed on a generic infrastructure.

[4] Please refer to the Exadata System Software User’s Guide, Monitoring and Tuning Oracle Exadata System Software, for more
information on cell metrics.



REFERENCE

1. Exadata Health and Resource Usage Monitoring

2. Exadata Health and Resource Utilization Monitoring - Exadata Database Machine KPIs

3. Exadata Health and Resource Utilization Monitoring - Adaptive Thresholds

4. Exadata Health and Resource Utilization Monitoring - System Baselining for Faster Problem
Resolution

5. Oracle Enterprise Manager for Exadata Cloud - Implementation, Management, and Monitoring Best
Practices

6. Enterprise Manager Oracle Exadata Database Machine Getting Started Guide



ORACLE CORPORATION

Worldwide Headquarters
500 Oracle Parkway, Redwood Shores, CA 94065 USA

Worldwide Inquiries
TELE + 1.650.506.7000 + 1.800.ORACLE1
FAX + 1.650.506.7200
oracle.com

CONNECT WITH US
Call +1.800.ORACLE1 or visit oracle.com. Outside North America, find your local office at oracle.com/contact.

blogs.oracle.com/oracle facebook.com/oracle twitter.com/oracle

Copyright © 2018, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are
subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed
orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any
liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be
reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or
registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks
of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0918
Using AWR Report on Exadata
September 2018
Author: Cecilia Grant, Ashish Ray
Contributing Authors: Curtis Dinkel
