Exam 1z0-060: Upgrade to Oracle Database 12c v2
Oracle Questions & Answers
A user issues a query on a table on one of the PDBs and receives the following error
Posted by seenagape on January 14, 2014
Your multitenant container database (CDB) contains two pluggable databases (PDBs), HR_PDB and
ACCOUNTS_PDB, both of which use the CDB tablespace. The temp file is called temp01.tmp.
A user issues a query on a table on one of the PDBs and receives the following error:
ERROR at line 1:
ORA-01565: error in identifying file /u01/app/oracle/oradata/CDB1/temp01.tmp
ORA-27037: unable to obtain file status
Identify two ways to rectify the error.
A.
Add a new temp file to the temporary tablespace and drop the temp file that produced the
error.
B.
Shut down the database instance, restore the temp01.tmp file from the backup, and then restart
the database.
C.
Take the temporary tablespace offline, recover the missing temp file by applying redo logs, and
then bring the temporary tablespace online.
D.
Shut down the database instance, restore and recover the temp file from the backup, and then
open the database with RESETLOGS.
E.
Shut down the database instance and then restart the CDB and PDBs.
Explanation:
* Because temp files cannot be backed up and because no redo is ever generated
for them, RMAN never restores or recovers temp files. RMAN does track the names of temp files,
but only so that it can automatically re-create them when needed.
* If you use RMAN in a Data Guard environment, then RMAN transparently converts primary
control files to standby control files and vice versa. RMAN automatically updates file names for
data files, online redo logs, standby redo logs, and temp files when you issue RESTORE and
RECOVER.
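Since RMAN never restores or recovers temp files, the quickest fix is to add a fresh temp file and drop the missing one (option A); restarting the instance also works, because Oracle re-creates missing temp files at startup. A sketch of the first approach, with illustrative file paths:

```sql
-- Add a replacement temp file to the TEMP tablespace (path is an example)
ALTER TABLESPACE temp
  ADD TEMPFILE '/u01/app/oracle/oradata/CDB1/temp02.tmp'
  SIZE 100M AUTOEXTEND ON;

-- Drop the reference to the lost temp file
ALTER DATABASE TEMPFILE '/u01/app/oracle/oradata/CDB1/temp01.tmp'
  DROP INCLUDING DATAFILES;
```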
The primary key constraint on the EMPLOYEES table is disabled during redefinition.
C.
VPD policies are copied from the original table to the new table during online redefinition.
D.
You must copy the VPD policies manually from the original table to the new table during online
redefinition.
Explanation:
C (not D): CONS_VPD_AUTO is used to indicate that VPD policies should be copied automatically.
* DBMS_RLS.ADD_POLICY
The DBMS_RLS package contains the fine-grained access control administrative interface, which
is used to implement Virtual Private Database (VPD). DBMS_RLS is available with the Enterprise
Edition only.
Note:
* CONS_USE_PK and CONS_USE_ROWID are constants used as input to the options_flag
parameter in both the START_REDEF_TABLE Procedure and CAN_REDEF_TABLE Procedure.
CONS_USE_ROWID is used to indicate that the redefinition should be done using rowids, while
CONS_USE_PK implies that the redefinition should be done using primary keys or pseudo-primary
keys (which are unique keys with all component columns having NOT NULL constraints).
* DBMS_REDEFINITION.START_REDEF_TABLE
To achieve online redefinition, incrementally maintainable local materialized views are used.
Their materialized view logs keep track of the changes to the master tables and are used by the
materialized views during refresh synchronization.
* START_REDEF_TABLE Procedure
Prior to calling this procedure, you must manually create an empty interim table (in the same
schema as the table to be redefined) with the desired attributes of the post-redefinition table, and
then call this procedure to initiate the redefinition.
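A minimal sketch of an online redefinition that copies VPD policies automatically; table and interim-table names are illustrative, and copy_vpd_opt with CONS_VPD_AUTO is the 12c option the explanation refers to:

```sql
BEGIN
  -- Verify the table can be redefined using its primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('HR', 'EMPLOYEES',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);

  -- Start redefinition; CONS_VPD_AUTO copies VPD policies to the interim table
  DBMS_REDEFINITION.START_REDEF_TABLE('HR', 'EMPLOYEES', 'EMPLOYEES_INT',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK,
    copy_vpd_opt => DBMS_REDEFINITION.CONS_VPD_AUTO);

  -- Finish the redefinition and swap the tables
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('HR', 'EMPLOYEES', 'EMPLOYEES_INT');
END;
/
```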
Which two statements are true about the use of the procedures listed in the
v$sysaux_occupants.move_procedure column?
Posted by seenagape on January 14, 2014
Which two statements are true about the use of the procedures listed in the
v$sysaux_occupants.move_procedure column?
A.
The procedure may be used for some components to relocate component data to the SYSAUX
tablespace from its current tablespace.
B.
The procedure may be used for some components to relocate component data from the
SYSAUX tablespace to another tablespace.
C.
All the components may be moved into SYSAUX tablespace.
D.
Service registration with the listener is performed by the process monitor (PMON) process of
each database instance.
D.
The listener.ora configuration file must be configured with one or more listening protocol
addresses to allow remote users to connect to a database instance.
E.
The listener.ora configuration file must be located in the ORACLE_HOME/network/admin
directory.
Explanation:
Supported services, that is, the services to which the listener forwards client
requests, can be configured in the listener.ora file, or this information can be dynamically registered
with the listener. This dynamic registration feature is called service registration. The registration is
performed by the PMON process (an instance background process) of each database instance
that has the necessary configuration in the database initialization parameter file. Dynamic service
registration does not require any configuration in the listener.ora file.
Incorrect:
Not B: Service registration reduces the need for the SID_LIST_listener_name parameter setting,
which specifies information about the databases served by the listener, in the listener.ora file.
Note:
* Oracle Net Listener is a separate process that runs on the database server computer. It receives
incoming client connection requests and manages the traffic of these requests to the database
server.
* A remote listener is a listener residing on one computer that redirects connections to a database
instance on another computer. Remote listeners are typically used in an Oracle Real Application
Clusters (Oracle RAC) environment. You can configure registration to remote listeners, such as in
the case of Oracle RAC, for dedicated server or shared server environments.
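Dynamic registration can be sketched as follows: PMON registers the instance's services with the listener named by LOCAL_LISTENER, with no listener.ora changes needed. The address shown is illustrative:

```sql
-- Point the instance at a non-default listener address; PMON then
-- registers the instance's services with that listener automatically
ALTER SYSTEM SET LOCAL_LISTENER = '(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1522))';

-- Force immediate re-registration instead of waiting for PMON's next cycle
ALTER SYSTEM REGISTER;
```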
In which three ways can you re-create the lost disk group and restore the data?
Posted by seenagape on January 14, 2014
You are administering a database stored in Automatic Storage Management (ASM). You use
RMAN to back up the database and the MD_BACKUP command to back up the ASM metadata
regularly. You lost the ASM disk group DG1 due to hardware failure.
In which three ways can you re-create the lost disk group and restore the data?
A.
Use the MD_RESTORE command to restore metadata for an existing disk group by passing
the existing disk group name as an input parameter and use RMAN to restore the data.
B.
Use the MKDG command to restore the disk group with the same configuration as the backed-up
disk group and data on the disk group.
C.
Use the MD_RESTORE command to restore the disk group with the changed disk group
specification, failure group specification, name, and other attributes and use RMAN to restore the
data.
D.
Use the MKDG command to restore the disk group with the same configuration as the backed-up
disk group name and same set of disks and failure group configuration, and use RMAN to
restore the data.
E.
Use the MD_RESTORE command to restore both the metadata and data for the failed disk
group.
F.
Use the MKDG command to add a new disk group DG1 with the same or different specifications
for failure group and other attributes and use RMAN to restore the data.
Explanation:
Note:
* The md_restore command allows you to restore a disk group from the metadata created by the
md_backup command.
/md_restore Command
Purpose
This command restores a disk group backup using various options that are described in this
section.
/ In restore mode, md_restore re-creates the disk group based on the backup file, with all
user-defined templates and the exact configuration of the backed-up disk group. There are several
options when restoring the disk group:
full - re-creates the disk group with the exact configuration
nodg - restores metadata into an existing disk group provided as an input parameter
newdg - allows the configuration to be changed (failure group, disk group name, and so on)
* The MD_BACKUP command creates a backup file containing metadata for one or more disk
groups. By default all the mounted disk groups are included in the backup file which is saved in the
current working directory. If the name of the backup file is not specified, ASM names the file
AMBR_BACKUP_INTERMEDIATE_FILE.
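The backup and restore cycle can be sketched with ASMCMD; the backup file name is illustrative:

```
# Back up metadata for disk group DG1 to a named file
ASMCMD> md_backup /tmp/dg1_backup.md -G DG1

# After the hardware is replaced, re-create DG1 from the metadata backup
ASMCMD> md_restore /tmp/dg1_backup.md --full -G DG1

# Then restore the data with RMAN
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
```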
What should you do before executing the commands to restore and recover the data file in
ACCOUNTS_PDB?
Posted by seenagape on January 14, 2014
Your multitenant container database, CDB1, is running in ARCHIVELOG mode and has two
pluggable databases, HR_PDB and ACCOUNTS_PDB. An RMAN backup exists for the database.
You issue the command to open ACCOUNTS_PDB and find that the USERDATA.DBF data file for
the default permanent tablespace USERDATA belonging to ACCOUNTS_PDB is corrupted.
What should you do before executing the commands to restore and recover the data file in
ACCOUNTS_PDB?
A.
Place CDB1 in the mount stage and then take the USERDATA tablespace offline in
ACCOUNTS_PDB.
B.
Place CDB1 in the mount stage and issue the ALTER PLUGGABLE DATABASE accounts_pdb
CLOSE IMMEDIATE command.
C.
Issue the ALTER PLUGGABLE DATABASE accounts_pdb RESTRICTED command.
D.
Which Oracle Database component is audited by default if the unified Auditing option is enabled?
Posted by seenagape on January 14, 2014
Which Oracle Database component is audited by default if the unified Auditing option is enabled?
A.
Oracle Data Pump
B.
Oracle Recovery Manager (RMAN)
C.
Oracle Label Security
D.
Oracle Database Vault
E.
Which option identifies the correct sequence to recover the SYSAUX tablespace?
Posted by seenagape on January 14, 2014
Your multitenant container database (CDB) containing three pluggable databases (PDBs) is running in
ARCHIVELOG mode. You find that the SYSAUX tablespace is corrupted in the root container.
The steps to recover the tablespace are as follows:
1. Mount the CDB.
2. Close all the PDBs.
3. Open the database.
4. Apply the archive redo logs.
5. Restore the data file.
6. Take the SYSAUX tablespace offline.
7. Place the SYSAUX tablespace online.
8. Open all the PDBs with RESETLOGS.
9. Open the database with RESETLOGS.
10. Execute the command SHUTDOWN ABORT.
Which option identifies the correct sequence to recover the SYSAUX tablespace?
A.
6, 5, 4, 7
B.
10, 1, 2, 5, 8
C.
10, 1, 2, 5, 4, 9, 8
D.
10, 1, 5, 8, 10
Explanation:
* Example:
While evaluating the 12c beta 3, it was not possible to recover when all PDB files were lost;
the PDB could not be closed because its SYSTEM data file was missing.
So the only option to recover was:
shutdown (10)
startup mount; (1)
restore pluggable database
recover pluggable database
alter database open;
alter pluggable database name open;
Oracle Support says you should be able to close the PDB and restore/recover the SYSTEM
tablespace of the PDB.
* Open the database with the RESETLOGS option after finishing recovery:
SQL> ALTER DATABASE OPEN RESETLOGS;
Which three are direct benefits of the multiprocess, multithreaded architecture of Oracle Database
12c when it is enabled?
Posted by seenagape on January 14, 2014
Which three are direct benefits of the multiprocess, multithreaded architecture of Oracle Database
12c when it is enabled?
A.
Reduced logical I/O
B.
Reduced virtual memory utilization
C.
Which three statements are true about this requirement?
Posted by seenagape on January 14, 2014
In order to exploit some new storage tiers that have been provisioned by a storage administrator,
the partitions of a large heap table must be moved to other tablespaces in your Oracle 12c
database.
Both local and global partitioned B-tree indexes are defined on the table.
A high volume of transactions access the table during the day and a medium volume of
transactions access it at night and during weekends.
Minimal disruption to availability is required.
Which three statements are true about this requirement?
A.
E.
Local indexes must be rebuilt manually after moving the partitions.
Explanation:
A: You can create and rebuild indexes online. Therefore, you can update base
tables at
the same time you are building or rebuilding indexes on that table. You can perform
DML operations while the index build is taking place, but DDL operations are not
allowed. Parallel execution is not supported when creating or rebuilding an index
online.
B:
Note:
* Transporting and Attaching Partitions for Data Warehousing Typical enterprise data
warehouses contain one or more large fact tables. These fact tables can be partitioned
by date, making the enterprise data warehouse a historical database. You can build
indexes to speed up star queries. Oracle recommends that you build local indexes for
such historically partitioned tables to avoid rebuilding global indexes every time you
drop the oldest partition from the historical database.
D: Moving (Rebuilding) Index-Organized Tables
Because index-organized tables are primarily stored in a B-tree index, you can
encounter fragmentation as a consequence of incremental updates. However, you can
use the ALTER TABLE ... MOVE statement to rebuild the index and reduce this
fragmentation.
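In 12c, a partition can be moved to a new tablespace while DML continues, with the indexes maintained in the same statement. A sketch, with illustrative table, partition, and tablespace names:

```sql
-- Move one partition to the new storage tier without blocking DML;
-- UPDATE INDEXES keeps both local and global indexes usable
ALTER TABLE sales MOVE PARTITION sales_q1_2014
  TABLESPACE tier2_ts
  ONLINE UPDATE INDEXES;
```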
Which three are true about the large pool for an Oracle database instance that supports shared
server connections?
Posted by seenagape on January 14, 2014
Which three are true about the large pool for an Oracle database instance that supports shared
server connections?
A.
B.
You notice that the performance of your production 24/7 Oracle database has significantly degraded.
Sometimes you are not able to connect to the instance because it hangs. You do not want to
restart the database instance.
How can you detect the cause of the degraded performance?
A.
Enable Memory Access Mode, which reads performance data from the SGA.
B.
Use emergency monitoring to fetch data directly from the SGA for analysis.
C.
Run Automatic Database Diagnostic Monitor (ADDM) to fetch information from the latest
Automatic Workload Repository (AWR) snapshots.
D.
Use Active Session History (ASH) data and hang analysis in regular performance monitoring.
E.
Run ADDM in diagnostic mode.
Explanation:
* In most cases, ADDM output should be the first place that a DBA looks when
notified of a performance problem.
* Performance degradation of the database occurs when your database was performing optimally
in the past, such as 6 months ago, but has gradually degraded to a point where it becomes
noticeable to the users. The Automatic Workload Repository (AWR) Compare Periods report
enables you to compare database performance between two periods of time.
While an AWR report shows AWR data between two snapshots (or two points in time), the AWR
Compare Periods report shows the difference between two periods (or two AWR reports with a
total of four snapshots). Using the AWR Compare Periods report helps you to identify detailed
performance attributes and configuration settings that differ between two time periods.
Reference: Resolving Performance Degradation Over Time
ASM disk groups with ASM disks consisting of Exadata Grid Disks.
B.
ASM disk groups with ASM disks consisting of LUNs on any Storage Area Network array
C.
ASM disk groups with ASM disks consisting of any zero-padded NFS-mounted files
D.
Database files stored in ZFS and accessed using conventional NFS mounts.
E.
Database files stored in ZFS and accessed using the Oracle Direct NFS feature
F.
Database files stored in any file system and accessed using the Oracle Direct NFS feature
G.
ASM disk groups with ASM disks consisting of LUNs on Pillar Axiom Storage arrays
Explanation:
HCC requires the use of Oracle Exadata storage (A), Pillar Axiom (G), or Sun ZFS
Storage Appliance (ZFSSA) storage.
Note:
* Hybrid Columnar Compression, initially only available on Exadata, has been extended to support
Pillar Axiom and Sun ZFS Storage Appliance (ZFSSA) storage when used with Oracle Database
Enterprise Edition 11.2.0.3 and above
* Oracle offers the ability to manage NFS using a feature called Oracle Direct NFS (dNFS). Oracle
Direct NFS implements NFS V3 protocol within the Oracle database kernel itself. Oracle Direct
NFS client overcomes many of the challenges associated with using NFS with the Oracle
Database with simple configuration, better performance than traditional NFS clients, and offers
consistent configuration across platforms.
How does real-time Automatic database Diagnostic Monitor (ADDM) check performance
degradation and provide solutions?
Posted by seenagape on January 14, 2014
In your multitenant container database (CDB) containing pluggable databases (PDBs), users
complain about performance degradation.
How does real-time Automatic database Diagnostic Monitor (ADDM) check performance
degradation and provide solutions?
A.
It collects data from the SGA and compares it with a preserved snapshot.
B.
The tnsping command executes successfully when tested with ORCL; however, from the same
OS user session, you are not able to connect to the database instance with the following
command:
SQL > CONNECT scott/tiger@orcl
What could be the reason for this?
A.
The listener is not running on the database node.
B.
The TNS_ADMIN environment variable is set to the wrong value.
C.
Examine the following steps of privilege analysis for checking and revoking excessive, unused
privileges granted to users:
1. Create a policy to capture the privileges used by a user for privilege analysis.
2. Generate a report with the data captured for a specified privilege capture.
3. Start analyzing the data captured by the policy.
4. Revoke the unused privileges.
5. Compare the used and unused privileges lists.
6. Stop analyzing the data.
Identify the correct sequence of steps.
A.
1, 3, 5, 6, 2, 4
B.
1, 3, 6, 2, 5, 4
C.
1, 3, 2, 5, 6, 4
D.
1, 3, 2, 5, 6, 4
E.
1, 3, 5, 2, 6, 4
Explanation:
1. Create a policy to capture the privilege used by a user for privilege analysis.
3. Start analyzing the data captured by the policy.
6. Stop analyzing the data.
2. Generate a report with the data captured for a specified privilege capture.
5. Compare the used and unused privileges lists.
4. Revoke the unused privileges.
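The sequence above maps onto the DBMS_PRIVILEGE_CAPTURE package roughly as follows; the policy and user names are illustrative:

```sql
BEGIN
  -- 1. Create a capture policy for one user's privilege use
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name      => 'scott_pol',
    type      => DBMS_PRIVILEGE_CAPTURE.G_CONTEXT,
    condition => 'SYS_CONTEXT(''USERENV'',''SESSION_USER'')=''SCOTT''');
  -- 3. Start capturing
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE('scott_pol');
END;
/

-- ... let the application run for a representative period, then:
BEGIN
  -- 6. Stop capturing
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE('scott_pol');
  -- 2. Generate the report data
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT('scott_pol');
END;
/

-- 5. Compare used vs. unused privileges, then 4. revoke the unused ones
SELECT * FROM DBA_USED_PRIVS   WHERE capture = 'scott_pol';
SELECT * FROM DBA_UNUSED_PRIVS WHERE capture = 'scott_pol';
```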
They are created only in the location specified by the LOG_ARCHIVE_DEST_1 parameter.
B.
They are created only in the Fast Recovery Area.
C.
They are created in the location specified by the LOG_ARCHIVE_DEST_1 parameter and in
the default location $ORACLE_HOME/dbs/arch.
D.
They are created in the location specified by the LOG_ARCHIVE_DEST_1 parameter and the
location specified by the DB_RECOVERY_FILE_DEST parameter.
Explanation:
You can choose to archive redo logs to a single destination or to multiple
destinations.
Destinations can be local (within the local file system or an Oracle Automatic Storage
Management (Oracle ASM) disk group) or remote (on a standby database). When you
archive to multiple destinations, a copy of each filled redo log file is written to each
destination. These redundant copies help ensure that archived logs are always
available in the event of a failure at one of the destinations.
To archive to only a single destination, specify that destination using the
LOG_ARCHIVE_DEST initialization parameter. To archive to multiple destinations, you can
choose to archive to two or more locations using the LOG_ARCHIVE_DEST_n initialization
parameters, or to archive only to a primary and secondary destination using the
LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST initialization parameters.
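For example, archiving to both a local directory and the Fast Recovery Area might be configured as follows; the path is illustrative:

```sql
-- Primary archive destination on the local file system
ALTER SYSTEM SET LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/app/oracle/arch';

-- Second destination: the Fast Recovery Area
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 = 'LOCATION=USE_DB_RECOVERY_FILE_DEST';
```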
Your multitenant container database (CDB) is running in ARCHIVELOG mode. You connect to the CDB using RMAN.
Examine the following command and its output:
Data files that belong to the root container and all the pluggable databases (PDBs)
C.
Data files that belong to only the root container and PDB$SEED
D.
Data files that belong to the root container and all the PDBs excluding PDB$SEED
Explanation:
Backing Up a Whole CDB
Backing up a whole CDB is similar to backing up a non-CDB. When you back up a whole CDB,
RMAN backs up the root, all the PDBs, and the archived redo logs. You can then recover either
the whole CDB, the root only, or one or more PDBs from the CDB backup.
Note:
* You can back up and recover a whole CDB, the root only, or one or more PDBs.
* Backing Up Archived Redo Logs with RMAN
Archived redo logs are the key to successful media recovery. Back them up regularly. You can
back up logs with BACKUP ARCHIVELOG, or back up logs while backing up datafiles and control
files by specifying BACKUP PLUS ARCHIVELOG.
You are administering a database stored in Automatic Storage Management (ASM). The files are
stored in the DATA disk group. You execute the following command:
SQL > ALTER DISKGROUP data ADD ALIAS '+data/prod/myfile.dbf' FOR '+data.231.45678';
What is the result?
A.
The file +data.231.54769 is physically relocated to +data/prod and renamed as myfile.dbf.
B.
The file +data.231.54769 is renamed as myfile.dbf, and copied to +data/prod.
C.
The file +data.231.54769 remains in the same location and a synonym myfile.dbf is created.
D.
The file myfile.dbf is created in +data/prod and the reference to +data.231.54769 in the data
dictionary removed.
Explanation:
ADD ALIAS
Use this clause to create an alias name for an Oracle ASM filename. The alias_name consists of
the full directory path and the alias itself.
Recommending the restructuring of SQL queries that are using bad plans
Explanation:
The SQL Tuning Advisor takes one or more SQL statements as an input and
invokes the Automatic Tuning Optimizer to perform SQL tuning on the statements. The output of
the SQL Tuning Advisor is in the form of an advice or recommendations, along with a rationale for
each recommendation and its expected benefit. The recommendation relates to collection of
statistics on objects (C), creation of new indexes, restructuring of the SQL statement (E), or
creation of a SQL profile (A). You can choose to accept the recommendation to complete the
tuning of the SQL statements.
None of the data definition language (DDL) statements are logged in the trace file.
B.
Only DDL commands that resulted in errors are logged in the alert log file.
C.
A new log.xml file that contains the DDL statements is created, and the DDL command details
are removed from the alert log file.
D.
Only DDL commands that resulted in the creation of new database files are logged.
Explanation:
ENABLE_DDL_LOGGING enables or disables the writing of a subset of data
definition language (DDL) statements to a DDL alert log.
The DDL log is a file that has the same format and basic behavior as the alert log, but it only
contains the DDL statements issued by the database. The DDL log is created only for the RDBMS
component and only if the ENABLE_DDL_LOGGING initialization parameter is set to true. When
this parameter is set to false, DDL statements are not included in any log.
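A minimal illustration of the parameter in use:

```sql
-- Turn on DDL logging; subsequent DDL statements are written to the DDL log
-- (a log.xml with the same format as the alert log) in addition to being executed
ALTER SYSTEM SET ENABLE_DDL_LOGGING = TRUE;

-- This statement is now recorded in the DDL log
CREATE TABLE ddl_demo (id NUMBER);
```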
Which three steps should you perform to recover the control file and make the database fully
operational?
Posted by seenagape on January 14, 2014
Your multitenant container database (CDB) contains three pluggable databases (PDBs). You find
that the control file is damaged. You plan to use RMAN to recover the control file. There are no
startup triggers associated with the PDBs.
Which three steps should you perform to recover the control file and make the database fully
operational?
A.
Mount the container database (CDB) and restore the control file from the control file
autobackup.
B.
Recover and open the CDB in NORMAL mode.
C.
Mount the CDB and then recover and open the database, with the RESETLOGS option.
D.
Start the database instance in the nomount stage and restore the control file from the control file
autobackup.
Explanation:
Step 1: F
Step 2: D
Step 3: C: If all copies of the current control file are lost or damaged, then you must restore and
mount a backup control file. You must then run the RECOVER command, even if no data files have
been restored, and open the database with the RESETLOGS option.
Note:
* RMAN and Oracle Enterprise Manager Cloud Control (Cloud Control) provide full support for
backup and recovery in a multitenant environment. You can back up and recover a whole
multitenant container database (CDB), root only, or one or more pluggable databases (PDBs).
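The recovery sequence described can be sketched in RMAN, assuming control file autobackup is configured:

```
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```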
Identify three valid options for adding a pluggable database (PDB) to an existing multitenant
container database (CDB).
Posted by seenagape on January 14, 2014
Identify three valid options for adding a pluggable database (PDB) to an existing multitenant
container database (CDB).
A.
Use the CREATE PLUGGABLE DATABASE statement to create a PDB using the files from the
SEED.
B.
Use the CREATE DATABASE . . . ENABLE PLUGGABLE DATABASE statement to provision a
PDB by copying file from the SEED.
C.
Use the DBMS_PDB package to plug an Oracle 12c non-CDB database into an existing CDB.
E.
Use the DBMS_PDB package to plug an Oracle 11g Release 2 (11.2.0.3.0) non-CDB database
into an existing CDB.
Explanation:
Use the CREATE PLUGGABLE DATABASE statement to create a pluggable
database (PDB).
This statement enables you to perform the following tasks:
* (A) Create a PDB by using the seed as a template
Use the create_pdb_from_seed clause to create a PDB by using the seed in the multitenant
container database (CDB) as a template. The files associated with the seed are copied to a new
location and the copied files are then associated with the new PDB.
* (C) Create a PDB by cloning an existing PDB
Use the create_pdb_clone clause to create a PDB by copying an existing PDB (the source PDB)
and then plugging the copy into the CDB. The files associated with the source PDB are copied to a
new location and the copied files are associated with the new PDB. This operation is called
cloning a PDB.
The source PDB can be plugged in or unplugged. If plugged in, then the source PDB can be in the
same CDB or in a remote CDB. If the source PDB is in a remote CDB, then a database link is
used to connect to the remote CDB and copy the files.
* Create a PDB by plugging an unplugged PDB or a non-CDB into a CDB
Use the create_pdb_from_xml clause to plug an unplugged PDB or a non-CDB into a CDB, using
an XML metadata file.
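Creating a PDB from the seed (option A) can be sketched as follows; the PDB name, admin user, and paths are illustrative:

```sql
-- Copy the seed files to a new location and register them as a new PDB
CREATE PLUGGABLE DATABASE sales_pdb
  ADMIN USER sales_adm IDENTIFIED BY password
  FILE_NAME_CONVERT = ('/u01/oradata/CDB1/pdbseed/',
                       '/u01/oradata/CDB1/sales_pdb/');

ALTER PLUGGABLE DATABASE sales_pdb OPEN;
```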
What must you do to receive recommendations about the efficient use of indexes and materialized
views to improve query performance?
Posted by seenagape on January 14, 2014
Your database supports a DSS workload that involves the execution of complex queries.
Currently, the library cache contains the ideal workload for analysis. You want to analyze some of
the queries for an application that are cached in the library cache.
What must you do to receive recommendations about the efficient use of indexes and materialized
views to improve query performance?
A.
Create a SQL Tuning Set (STS) that contains the queries cached in the library cache and run
the SQL Tuning Advisor (STA) on the workload captured in the STS.
B.
Run the Automatic Workload Repository Monitor (ADDM).
C.
Create an STS that contains the queries cached in the library cache and run the SQL
Performance Analyzer (SPA) on the workload captured in the STS.
D.
Create an STS that contains the queries cached in the library cache and run the SQL Access
Advisor on the workload captured in the STS.
Explanation:
* SQL Access Advisor is primarily responsible for making schema modification
recommendations, such as adding or dropping indexes and materialized views. SQL Tuning
Advisor makes other types of recommendations, such as creating SQL profiles and restructuring
SQL statements.
* The query optimizer can also help you tune SQL statements. By using SQL Tuning Advisor and
SQL Access Advisor, you can invoke the query optimizer in advisory mode to examine a SQL
statement or set of statements and determine how to improve their efficiency. SQL Tuning Advisor
and SQL Access Advisor can make various recommendations, such as creating SQL profiles,
restructuring SQL statements, creating additional indexes or materialized views, and refreshing
optimizer statistics.
Note:
* Decision support system (DSS) workload
* The library cache is a shared pool memory structure that stores executable SQL and PL/SQL
code. This cache contains the shared SQL and PL/SQL areas and control structures such as locks
and library cache handles.
Reference: Tuning SQL Statements
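Building the STS from the library cache (cursor cache) can be sketched as follows; the set name and parsing-schema filter are illustrative, and the subsequent SQL Access Advisor task setup via DBMS_ADVISOR is omitted:

```sql
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  -- Create an empty SQL Tuning Set
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'dss_sts');

  -- Select the application's statements currently in the cursor cache
  OPEN cur FOR
    SELECT VALUE(p)
    FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                 'parsing_schema_name = ''APP''')) p;

  -- Load them into the STS for the SQL Access Advisor to analyze
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name     => 'dss_sts',
                           populate_cursor => cur);
END;
/
```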
2, 1, 4, 3, 5
C.
1, 2, 3, 4, 5
D.
1, 2, 4, 5
Explanation:
* Evolving SQL Plan Baselines
2. Create an evolve task by using the DBMS_SPM.CREATE_EVOLVE_TASK function.
This function creates an advisor task to prepare the plan evolution of one or more plans for a
specified SQL statement. The input parameters can be a SQL handle, plan name or a list of plan
names, time limit, task name, and description.
1. Set the evolve task parameters.
SET_EVOLVE_TASK_PARAMETER
This function updates the value of an evolve task parameter. In this release, the only valid
parameter is TIME_LIMIT.
4. Execute the evolve task by using the DBMS_SPM.EXECUTE_EVOLVE_TASK function.
This function executes an evolution task. The input parameters can be the task name, execution
name, and execution description. If not specified, the advisor generates the name, which is
returned by the function.
3. Implement the recommendations by using the DBMS_SPM.IMPLEMENT_EVOLVE_TASK function.
This function implements all recommendations for an evolve task. Essentially, this function is
equivalent to using ACCEPT_SQL_PLAN_BASELINE for all recommended plans. Input
parameters include task name, plan name, owner name, and execution name.
5. Report the task outcome by using the DBMS_SPM.REPORT_EVOLVE_TASK function.
This function displays the results of an evolve task as a CLOB. Input parameters include the task
name and section of the report to include.
Reference: Oracle Database SQL Tuning Guide 12c, Managing SQL Plan Baselines
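The evolve flow above can be sketched as follows; the SQL handle is illustrative:

```sql
DECLARE
  tname VARCHAR2(128);
  ename VARCHAR2(128);
  cnt   PLS_INTEGER;
  rpt   CLOB;
BEGIN
  -- Create an evolve task for the plans of one SQL plan baseline
  tname := DBMS_SPM.CREATE_EVOLVE_TASK(sql_handle => 'SQL_123456789abcdef0');

  -- TIME_LIMIT is the only valid parameter in this release
  DBMS_SPM.SET_EVOLVE_TASK_PARAMETER(tname, 'TIME_LIMIT', 300);

  -- Execute the task, accept all recommended plans, and report the outcome
  ename := DBMS_SPM.EXECUTE_EVOLVE_TASK(task_name => tname);
  cnt   := DBMS_SPM.IMPLEMENT_EVOLVE_TASK(task_name => tname);
  rpt   := DBMS_SPM.REPORT_EVOLVE_TASK(task_name => tname);
END;
/
```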
Which option would you consider first to decrease the wait event immediately?
Posted by seenagape on January 14, 2014
In a recent Automatic Workload Repository (AWR) report for your database, you notice a high
number of buffer busy waits. The database consists of locally managed tablespaces with
freelist-managed segments.
On further investigation, you find that the buffer busy waits are caused by contention on data blocks.
Which option would you consider first to decrease the wait event immediately?
A.
Decreasing PCTUSED
B.
Decreasing PCTFREE
C.
Increasing the number of DBWn processes
D.
Which three statements are true about the effect of this command?
Posted by seenagape on January 14, 2014
Statistics collection is not done for the CUSTOMERS table when schema stats are gathered.
B.
Statistics collection is not done for the CUSTOMERS table when database stats are gathered.
C.
Any existing statistics for the CUSTOMERS table are still available to the optimizer at parse
time.
D.
Statistics gathered on the CUSTOMERS table when schema stats are gathered are stored as
pending statistics.
E.
Statistics gathered on the CUSTOMERS table when database stats are gathered are stored as
pending statistics.
Explanation:
* SET_TABLE_PREFS Procedure
This procedure is used to set the statistics preferences of the specified table in the specified
schema.
* Example:
Using Pending Statistics
Assume many modifications have been made to the employees table since the last time statistics
were gathered. To ensure that the cost-based optimizer is still picking the best plan, statistics
should be gathered once again; however, the user is concerned that new statistics will cause the
optimizer to choose bad plans when the current ones are acceptable. The user can do the
following:
EXEC DBMS_STATS.SET_TABLE_PREFS('hr', 'employees', 'PUBLISH', 'false');
By setting the employees table's publish preference to FALSE, any statistics gathered from now on
will not be automatically published. The newly gathered statistics will be marked as pending.
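The pending-statistics workflow can then be completed as follows; a sketch using the schema and table from the example above:

```sql
-- Gather statistics; because PUBLISH is false they are stored as pending
EXEC DBMS_STATS.GATHER_TABLE_STATS('hr', 'employees');

-- Test the pending statistics in this session before publishing them
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
-- ... run representative queries and check the execution plans ...

-- If the plans look good, make the pending statistics current
EXEC DBMS_STATS.PUBLISH_PENDING_STATS('hr', 'employees');
```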
Examine the following impdp command to import a database over the network from a pre-12c Oracle database
(source):
A.
The import operation must be performed by a user on the target database with the
DATAPUMP_IMP_FULL_DATABASE role, and the database link must connect to a user on the
source database with the DATAPUMP_EXP_FULL_DATABASE role.
B.
All the user-defined tablespaces must be in read-only mode on the source database.
C.
The export dump file must be created before starting the import on the target database.
D.
The source and target database must be running on the same platform with the same
endianness.
E.
The path of data files on the target database must be the same as that on the source database.
F.
The impdp operation must be performed by the same user that performed the expdp operation.
Explanation:
A, Not F: The DATAPUMP_EXP_FULL_DATABASE and
DATAPUMP_IMP_FULL_DATABASE roles allow privileged users to take full advantage of the
API. The Data Pump API will use these roles to determine whether privileged application roles
should be assigned to the processes comprising the job.
Note:
* The Data Pump Import utility is invoked using the impdp command.
Incorrect:
Not D, Not E: The source and target databases can have different hardware, operating systems,
character sets, and time zones.
Which two are true concerning a multitenant container database with three pluggable databases?
Posted by seenagape on January 14, 2014
Which two are true concerning a multitenant container database with three pluggable databases?
A.
All administration tasks must be done to a specific pluggable database.
B.
The pluggable databases increase patching time.
C.
Explanation:
The following list calls out the most compelling examples.
* High consolidation density. (E)
The many pluggable databases in a single multitenant container database share its memory and
background processes, letting you operate many more pluggable databases on a particular
platform than you can single databases that use the old architecture. This is the same benefit that
schema-based consolidation brings.
* Rapid provisioning and cloning using SQL.
* New paradigms for rapid patching and upgrades. (D, not B)
The investment of time and effort to patch one multitenant container database results in patching
all of its many pluggable databases. To patch a single pluggable database, you simply unplug/plug
to a multitenant container database at a different Oracle Database software version.
* (C, not A) Manage many databases as one.
By consolidating existing databases as pluggable databases, administrators can manage many
databases as one. For example, tasks like backup and disaster recovery are performed at the
multitenant container database level.
* Dynamic resource management between pluggable databases. In Oracle Database 12c,
Resource Manager is extended with specific functionality to control the competition for resources
between the pluggable databases within a multitenant container database.
Note:
* Oracle Multitenant is a new option for Oracle Database 12c Enterprise Edition that helps
customers reduce IT costs by simplifying consolidation, provisioning, upgrades, and more. It is
supported by a new architecture that allows a multitenant container database to hold many
pluggable databases. And it fully complements other options, including Oracle Real Application
Clusters and Oracle Active Data Guard. An existing database can be simply adopted, with no
change, as a pluggable database; and no changes are needed in the other tiers of the application.
Reference: 12c Oracle Multitenant
Examine the current value for the following parameters in your database instance:
SGA_MAX_SIZE = 1024M
SGA_TARGET = 700M
DB_8K_CACHE_SIZE = 124M
LOG_BUFFER = 200M
You issue the following command to increase the value of DB_8K_CACHE_SIZE:
SQL> ALTER SYSTEM SET DB_8K_CACHE_SIZE=140M;
Which statement is true?
A.
It fails because the DB_8K_CACHE_SIZE parameter cannot be changed dynamically.
B.
It succeeds only if memory is available from the autotuned components of the SGA.
C.
It fails because an increase in DB_8K_CACHE_SIZE cannot be accommodated within
SGA_TARGET.
D.
Which three statements are true concerning unplugging a pluggable database (PDB)?
Posted by seenagape on January 14, 2014
Which three statements are true concerning unplugging a pluggable database (PDB)?
A.
The PDB must be open in read only mode.
B.
C.
The unplugged PDB becomes a non-CDB.
D.
The unplugged PDB can be plugged into the same multitenant container database (CDB).
E.
Which three statements are true about using an invisible column in the PRODUCTS table?
Posted by seenagape on January 14, 2014
A.
The %ROWTYPE attribute declarations in PL/SQL to access a row will not display the invisible
column in the output.
B.
The DESCRIBE commands in SQL*Plus will not display the invisible column in the output.
C.
Explanation:
If you run multiple AUDIT statements on the same unified audit policy but specify
different EXCEPT users, then Oracle Database uses the last exception user list, not any of the
users from the preceding lists. This means the effect of the earlier AUDIT POLICY EXCEPT
statements is overridden by the latest AUDIT POLICY EXCEPT statement.
Note:
* The ORA_DATABASE_PARAMETER policy audits commonly used Oracle Database parameter
settings. By default, this policy is not enabled.
* You can use the keyword ALL to audit all actions. The following example shows how to audit all
actions on the HR.EMPLOYEES table, except actions by user pmulligan.
Example Auditing All Actions on a Table
CREATE AUDIT POLICY all_actions_on_hr_emp_pol
ACTIONS ALL ON HR.EMPLOYEES;
AUDIT POLICY all_actions_on_hr_emp_pol EXCEPT pmulligan;
Reference: Oracle Database Security Guide 12c, About Enabling Unified Audit Policies
On your Oracle 12c database, you invoked SQL*Loader to load data into the EMPLOYEES table
in the HR schema by issuing the following command:
$> sqlldr hr/hr@pdb table=employees
Which two statements are true regarding the command?
A.
It succeeds with default settings if the EMPLOYEES table belonging to HR is already defined in
the database.
B.
It fails because no SQL*Loader data file location is specified.
C.
It fails if the HR user does not have the CREATE ANY DIRECTORY privilege.
D.
It fails because no SQL*Loader control file location is specified.
Explanation:
Note:
* SQL*Loader is invoked when you specify the sqlldr command and, optionally, parameters that
establish session characteristics.
What must you do to activate the new default value for numeric full redaction?
Posted by seenagape on January 14, 2014
No comments
After implementing full Oracle Data Redaction, you change the default value for the NUMBER data type as follows:
After changing the value, you notice that FULL redaction continues to redact numeric data with
zero.
What must you do to activate the new default value for numeric full redaction?
A.
Re-enable redaction policies that use FULL data redaction.
B.
Re-create redaction policies that use FULL data redaction.
C.
Re-connect the sessions that access objects with redaction policies defined on them.
D.
Flush the shared pool.
E.
You must track all transactions that modify certain tables in the sales schema for at least three
years.
Automatic undo management is enabled for the database with a retention of one day.
Which two must you do to track the transactions?
A.
Enable supplemental logging for the database.
B.
Specify undo retention guarantee for the database.
C.
Create a Flashback Data Archive in the tablespace where the tables are stored.
D.
Enable Flashback Data Archiving for the tables that require tracking.
Explanation:
E: By default, flashback archiving is disabled for any table. You can enable
flashback archiving for a table if you have the FLASHBACK ARCHIVE object privilege on the
Flashback Data Archive that you want to use for that table.
D: Creating a Flashback Data Archive
/ Create a Flashback Data Archive with the CREATE FLASHBACK ARCHIVE statement,
specifying the following:
Name of the Flashback Data Archive
Name of the first tablespace of the Flashback Data Archive
(Optional) Maximum amount of space that the Flashback Data Archive can use in the first
tablespace
/ Create a Flashback Data Archive named fla2 that uses tablespace tbs2, whose data will be
retained for two years:
CREATE FLASHBACK ARCHIVE fla2 TABLESPACE tbs2 RETENTION 2 YEAR;
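Creating the archive is only half the task; tracking is then enabled per table. A minimal sketch, assuming a hypothetical SALES.ORDERS table and the fla2 archive from the example above:

```sql
-- Enable flashback archiving for the table (requires the FLASHBACK ARCHIVE
-- object privilege on fla2, as noted above).
ALTER TABLE sales.orders FLASHBACK ARCHIVE fla2;

-- Tracking can later be switched off again:
ALTER TABLE sales.orders NO FLASHBACK ARCHIVE;
```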
Which technique will move the table and indexes while maintaining the highest level of availability
to the application?
Posted by seenagape on January 14, 2014
You are the DBA supporting an Oracle 11g Release 2 database and wish to move a table
containing several DATE, CHAR, VARCHAR2, and NUMBER data types, and the table's indexes,
to another tablespace.
The table does not have a primary key and is used by an OLTP application.
Which technique will move the table and indexes while maintaining the highest level of availability
to the application?
A.
Oracle Data Pump.
B.
An ALTER TABLE MOVE to move the table and ALTER INDEX REBUILD to move the indexes.
C.
An ALTER TABLE MOVE to move the table and ALTER INDEX REBUILD ONLINE to move the
indexes.
D.
Online table redefinition.
E.
Edition-Based Table Redefinition.
Explanation:
* Oracle Database provides a mechanism to make table structure modifications
without significantly affecting the availability of the table. The mechanism is called online table
redefinition. Redefining tables online provides a substantial increase in availability compared to
traditional methods of redefining tables.
* To redefine a table online:
Choose the redefinition method: by key or by rowid
* By key: Select a primary key or pseudo-primary key to use for the redefinition. Pseudo-primary
keys are unique keys with all component columns having NOT NULL constraints. For this method,
the versions of the tables before and after redefinition should have the same primary key columns.
This is the preferred and default method of redefinition.
* By rowid: Use this method if no key is available. In this method, a hidden column named
M_ROW$$ is added to the post-redefined version of the table. It is recommended that this column
be dropped or marked as unused after the redefinition is complete. If COMPATIBLE is set to
10.2.0 or higher, the final phase of redefinition automatically sets this column unused. You can
then use the ALTER TABLE DROP UNUSED COLUMNS statement to drop it.
You cannot use this method on index-organized tables.
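For a table without a primary key, the by-rowid redefinition described above can be sketched with the DBMS_REDEFINITION package; the APP.ORDERS and ORDERS_INTERIM names are illustrative, and steps such as copying dependent indexes with COPY_TABLE_DEPENDENTS are omitted:

```sql
-- Check that the table qualifies for rowid-based redefinition.
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS', DBMS_REDEFINITION.CONS_USE_ROWID);

-- After creating an interim table of the same shape in the target tablespace:
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM', NULL, DBMS_REDEFINITION.CONS_USE_ROWID);

-- Optionally apply changes made during the copy, then complete the switch.
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');
```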
Note:
* When you rebuild an index, you use an existing index as the data source. Creating an index in
this manner enables you to change storage characteristics or move to a new tablespace.
Rebuilding an index based on an existing data source removes intra-block fragmentation.
Compared to dropping the index and using the CREATE INDEX statement, re-creating an existing
index offers better performance.
Incorrect:
Not E: Edition-based redefinition enables you to upgrade the database component of an
application while it is in use, thereby minimizing or eliminating down time.
To implement Automatic Memory Management (AMM), you set the following parameters:
When you try to start the database instance with these parameter settings, you receive the
following error message:
SQL > startup
ORA-00824: cannot set SGA_TARGET or MEMORY_TARGET due to existing internal settings,
see alert log for more information.
Identify the reason the instance failed to start.
A.
The PGA_AGGREGATE_TARGET parameter is set to zero.
B.
What are two benefits of installing Grid Infrastructure software for a stand-alone server before
installing and creating an Oracle database?
Posted by seenagape on January 14, 2014
What are two benefits of installing Grid Infrastructure software for a stand-alone server before
installing and creating an Oracle database?
A.
Effectively implements role separation
B.
D.
Multiple non-RAC CDB instances can mount the same PDB as long as they are on the same
server.
E.
Patches are always applied at the CDB level.
F.
A PDB can have a private undo tablespace.
Explanation:
Not A: Oracle Multitenant is a new option for Oracle Database 12c Enterprise
Edition that helps
customers reduce IT costs by simplifying consolidation, provisioning, upgrades, and more.
It is supported by a new architecture that allows a container database to hold many
pluggable databases. And it fully complements other options, including Oracle
Real Application Clusters and Oracle Active Data Guard. An existing database can be
simply adopted, with no change, as a pluggable database; and no changes are needed in
the other tiers of the application.
Not E: New paradigms for rapid patching and upgrades.
The investment of time and effort to patch one multitenant container database results in patching
all of its many pluggable databases. To patch a single pluggable database, you simply unplug/plug
to a multitenant container database at a different Oracle Database software version.
not F:
* Redo and undo go hand in hand, and so the CDB as a whole has a single undo tablespace per
RAC instance.
You notice a high number of waits for the db file scattered read and db file sequential read events
in the recent Automatic Database Diagnostic Monitor (ADDM) report. After further investigation,
you find that queries are performing too many full table scans and indexes are not being used
even though the filter columns are indexed.
Identify three possible reasons for this.
A.
Which three features work together to allow a SQL statement to have different cursors for the
same statement based on different selectivity ranges?
Posted by seenagape on January 14, 2014
Which three features work together to allow a SQL statement to have different cursors for the
same statement based on different selectivity ranges?
A.
You notice a performance change in your production Oracle 12c database. You want to know
which change caused this performance difference.
Which method or feature should you use?
A.
Compare Period ADDM report
B.
2, 3, 4, 1
C.
4, 1, 3, 2
D.
3, 2, 4, 1
Explanation:
Step 1 (2). Seed column usage
Oracle must observe a representative workload, in order to determine the appropriate column
groups. Using the new procedure DBMS_STATS.SEED_COL_USAGE, you tell Oracle how long it
should observe the workload.
Step 2: (3) You don't need to execute all of the queries in your workload during this window. You can
simply run explain plan for some of your longer running queries to ensure column group
information is recorded for these queries.
Step 3. (1) Create the column groups
At this point you can get Oracle to automatically create the column groups for each of the tables
based on the usage information captured during the monitoring window. You simply have to call
the DBMS_STATS.CREATE_EXTENDED_STATS function for each table.This function requires
just two arguments, the schema name and the table name. From then on, statistics will be
maintained for each column group whenever statistics are gathered on the table.
Note:
* DBMS_STATS.REPORT_COL_USAGE reports column usage information and records all the
SQL operations the database has processed for a given object.
* The Oracle SQL optimizer has always been ignorant of the implied relationships between data
columns within the same table. While the optimizer has traditionally analyzed the distribution of
values within a column, it does not collect value-based relationships between columns.
* Creating extended statistics. Here are the steps to create extended statistics for related table
columns with dbms_stats.create_extended_stats:
1. The first step is to create column histograms for the related columns.
2. Next, we run dbms_stats.create_extended_stats to relate the columns together.
Unlike a traditional procedure that is invoked via an execute (exec) statement, Oracle extended
statistics are created via a select statement.
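The two steps might look like this for a pair of correlated columns; the SH.CUSTOMERS table and column names are illustrative:

```sql
-- 1. Create histograms on the related columns.
EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'CUSTOMERS', method_opt => 'FOR ALL COLUMNS SIZE SKEWONLY');

-- 2. Create the column group; note that the function is invoked via SELECT.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SH', 'CUSTOMERS',
       '(CUST_STATE_PROVINCE, COUNTRY_ID)') AS extension_name
FROM   dual;
```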
Which three statements are true about Automatic Workload Repository (AWR)?
Posted by seenagape on January 14, 2014
Which three statements are true about Automatic Workload Repository (AWR)?
A.
All AWR tables belong to the SYSTEM schema.
B.
The snapshots collected by AWR are used by the self-tuning components in the database
D.
AWR computes time model statistics based on time usage for activities, which are displayed in
the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views.
E.
Which two tasks must you perform to add users with SYSBACKUP, SYSDG, and SYSKM privilege
to the password file?
Posted by seenagape on January 14, 2014
You upgraded your database from pre-12c to a multitenant container database (CDB) containing
pluggable databases (PDBs).
Examine the query and its output:
Which two tasks must you perform to add users with SYSBACKUP, SYSDG, and SYSKM privilege
to the password file?
A.
Assign the appropriate operating system groups to SYSBACKUP, SYSDG, SYSKM.
B.
Re-create the password file with SYSBACKUP, SYSDG, and SYSKM privilege, and FORCE
arguments set to Yes.
E.
Re-create the password file in the Oracle Database 12c format.
Explanation:
* orapwd
/ You can create a database password file using the password file creation utility, ORAPWD.
The syntax of the ORAPWD command is as follows:
orapwd FILE=filename [ENTRIES=numusers] [FORCE={y|n}] [ASM={y|n}]
[DBUNIQUENAME=dbname] [FORMAT={12|legacy}] [SYSBACKUP={y|n}] [SYSDG={y|n}]
[SYSKM={y|n}] [DELETE={y|n}] [INPUT_FILE=input-fname]
FORCE: whether to overwrite an existing file (optional).
* V$PWFILE_USERS
/ 12c: V$PWFILE_USERS lists all users in the password file, and indicates whether the user has
been granted the SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, and SYSKM privileges.
/ 10g: lists users who have been granted SYSDBA and SYSOPER privileges as derived from the
password file.
Column | Datatype | Description
USERNAME | VARCHAR2(30) | The name of the user that is contained in the password file
SYSDBA | VARCHAR2(5) | If TRUE, the user can connect with SYSDBA privileges
SYSOPER | VARCHAR2(5) | If TRUE, the user can connect with SYSOPER privileges
Incorrect:
Not E: The password file is already in the 12c format.
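In 12c the administrative privileges can simply be granted to a user, which records that user in the password file; the user names below are illustrative:

```sql
GRANT SYSBACKUP TO backup_admin;
GRANT SYSDG     TO dg_admin;
GRANT SYSKM     TO km_admin;

-- Verify the password file entries and their privileges.
SELECT username, sysbackup, sysdg, syskm FROM v$pwfile_users;
```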
How would you guarantee that the blocks for the table never age out?
Posted by seenagape on January 14, 2014
An application accesses a small lookup table frequently. You notice that the required data blocks
are getting aged out of the default buffer cache.
How would you guarantee that the blocks for the table never age out?
A.
Configure the KEEP buffer pool and alter the table with the corresponding storage clause.
B.
Increase the database buffer cache size.
C.
Configure the RECYCLE buffer pool and alter the table with the corresponding storage clause.
D.
Configure Automatic Shared Memory Management.
E.
Explanation:
Schema objects are referenced with varying usage patterns; therefore, their cache
behavior may be quite different. Multiple buffer pools enable you to address these
differences. You can use a KEEP buffer pool to maintain objects in the buffer cache
and a RECYCLE buffer pool to prevent objects from consuming unnecessary space in the
cache. When an object is allocated to a cache, all blocks from that object are placed in
that cache. Oracle maintains a DEFAULT buffer pool for objects that have
not been assigned to one of the buffer pools.
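Option A therefore involves two pieces, sketched below with an illustrative pool size and table name:

```sql
-- Carve out a KEEP pool from the SGA.
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 64M;

-- Direct the lookup table's blocks into the KEEP pool.
ALTER TABLE app.lookup_codes STORAGE (BUFFER_POOL KEEP);
```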
You connected using SQL*Plus to the root container of a multitenant container database (CDB) with
SYSDBA privilege.
The CDB has several pluggable databases (PDBs) open in read/write mode.
There are ongoing transactions in both the CDB and PDBs.
What happens after issuing the SHUTDOWN TRANSACTIONAL statement?
A.
The shutdown proceeds immediately.
The shutdown proceeds as soon as all transactions in the PDBs are either committed or rolled
back.
B.
The shutdown proceeds as soon as all transactions in the CDB are either committed or rolled
back.
C.
The shutdown proceeds as soon as all transactions in both the CDB and PDBs are either
committed or rolled back.
D.
The statement results in an error because there are open PDBs.
Explanation:
* SHUTDOWN [ABORT | IMMEDIATE | NORMAL | TRANSACTIONAL [LOCAL]]
Shuts down a currently running Oracle Database instance, optionally closing and dismounting a
database. If the current database is a pluggable database, only the pluggable database is closed.
The consolidated instance continues to run.
Shutdown commands that wait for current calls to complete or users to disconnect such as
SHUTDOWN NORMAL and SHUTDOWN TRANSACTIONAL have a time limit that the
SHUTDOWN command will wait. If all events blocking the shutdown have not occurred within the
time limit, the shutdown command cancels with the following message:
ORA-01013: user requested cancel of current operation
* If logged into a CDB, shutdown closes the CDB instance.
To shutdown a CDB or non CDB, you must be connected to the CDB or non CDB instance that
you want to close, and then enter
SHUTDOWN
Database closed.
Database dismounted.
Oracle instance shut down.
To shutdown a PDB, you must log into the PDB to issue the SHUTDOWN command.
SHUTDOWN
Pluggable Database closed.
Note:
* Prerequisites for PDB Shutdown
When the current container is a pluggable database (PDB), the SHUTDOWN command can only
be used if:
The current user has SYSDBA, SYSOPER, SYSBACKUP, or SYSDG system privilege.
The privilege is either commonly granted or locally granted in the PDB.
The current user exercises the privilege using AS SYSDBA, AS SYSOPER, AS SYSBACKUP, or
AS SYSDG at connect time.
To close a PDB, the PDB must be open.
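For example, from a root connection a single PDB (name illustrative) can be closed either by switching containers or directly:

```sql
-- Switch the session into the PDB, then shut it down.
ALTER SESSION SET CONTAINER = pdb1;
SHUTDOWN IMMEDIATE

-- Equivalent statement issued from the root:
-- ALTER PLUGGABLE DATABASE pdb1 CLOSE IMMEDIATE;
```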
You are planning the creation of a new multitenant container database (CDB) and want to store
the ROOT and SEED container data files in separate directories.
You plan to create the database using SQL statements.
Which three techniques can you use to achieve this?
A.
Use Oracle Managed Files (OMF).
B.
Specify the SEED FILE_NAME_CONVERT clause.
C.
Specify all files in the CREATE DATABASE statement without using Oracle Managed Files
(OMF).
Explanation:
* (C, E, not A) file_name_convert
Use this clause to determine how the database generates the names of files (such as data files
and wallet files) for the PDB.
For filename_pattern, specify a string found in names of files associated with the seed (when
creating a PDB by using the seed), associated with the source PDB (when cloning a PDB), or
listed in the XML file (when plugging a PDB into a CDB).
For replacement_filename_pattern, specify a replacement string.
Oracle Database will replace filename_pattern with replacement_filename_pattern when
generating the names of files associated with the new PDB.
File name patterns cannot match files or directories managed by Oracle Managed Files.
You can specify FILE_NAME_CONVERT = NONE, which is the same as omitting this clause. If
you omit this clause, then the database first attempts to use Oracle Managed Files to generate file
names. If you are not using Oracle Managed Files, then the database uses the
PDB_FILE_NAME_CONVERT initialization parameter to generate file names. If this parameter is
not set, then an error occurs.
Note:
* Oracle Database 12c Release 1 (12.1) introduces the multitenant architecture. This database
architecture has a multitenant container database (CDB) that includes a root container,
CDB$ROOT, a seed database, PDB$SEED, and multiple pluggable databases (PDBs).
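A skeleton of the SEED FILE_NAME_CONVERT technique (option B) is shown below; the paths are illustrative and most mandatory CREATE DATABASE clauses are omitted:

```sql
CREATE DATABASE cdb1
  ENABLE PLUGGABLE DATABASE
  SEED FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/',
                            '/u01/app/oracle/oradata/cdb1/seed/');
```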
Which technique should you use to minimize down time while plugging this non-CDB into the
CDB?
Posted by seenagape on January 14, 2014
You are about to plug a multi-terabyte non-CDB into an existing multitenant container database
(CDB).
The characteristics of the non-CDB are as follows:
Version: Oracle Database 11g Release 2 (11.2.0.2.0) 64-bit
Character set: AL32UTF8
National character set: AL16UTF16
O/S: Oracle Linux 6 64-bit
The characteristics of the CDB are as follows:
Version: Oracle Database 12c Release 1 64-bit
Character Set: AL32UTF8
National character set: AL16UTF16
Your database supports an online transaction processing (OLTP) application. The application is
undergoing some major schema changes, such as addition of new indexes and materialized
views. You want to check the impact of these changes on workload performance.
What should you use to achieve this?
A.
Database replay
B.
SQL Tuning Advisor
C.
SQL Access Advisor
D.
SQL Performance Analyzer
E.
Which four statements are true about this administrator establishing connections to root in a CDB
that has been opened in read only mode?
Posted by seenagape on January 14, 2014
An administrator account is granted the CREATE SESSION and SET CONTAINER system
privileges.
A multitenant container database (CDB) instance has the following parameter set:
THREADED_EXECUTION = FALSE
Which four statements are true about this administrator establishing connections to root in a CDB
that has been opened in read only mode?
A.
You can connect as a common user by using the CONNECT statement.
B.
You can connect as a local user by using the SET CONTAINER statement.
Explanation:
* The choice of threading model is dictated by the THREADED_EXECUTION initialization
parameter.
THREADED_EXECUTION=FALSE : The default value causes Oracle to run using the
multiprocess model.
THREADED_EXECUTION=TRUE : Oracle runs with the multithreaded model.
* OS Authentication is not supported with the multithreaded model.
* THREADED_EXECUTION
When this initialization parameter is set to TRUE, which enables the multithreaded Oracle model,
operating system authentication is not supported. Attempts to connect to the database using
operating system authentication (for example, CONNECT / AS SYSDBA or CONNECT / ) when
this initialization parameter is set to TRUE receive an ORA-01031: insufficient privileges error.
F: The new SET CONTAINER statement within a call back function:
The advantage of SET CONTAINER is that the pool does not have to create a new connection to
a PDB, if there is an existing connection to a different PDB. The pool can use the existing
connection, and through SET CONTAINER, can connect to the desired PDB. This can be done
using:
ALTER SESSION SET CONTAINER=<PDB Name>
This avoids the need to create a new connection from scratch.
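For example, an existing session can be repointed to another container without reconnecting (PDB name illustrative):

```sql
ALTER SESSION SET CONTAINER = pdb2;

-- In SQL*Plus, confirm the current container:
SHOW CON_NAME
```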
You issue the following command to import tables into the hr schema:
$> impdp hr/hr directory=dumpdir dumpfile=hr_new.dmp schemas=hr
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
Which statement is true?
A.
You notice a performance change in your production Oracle database and you want to know
which change has made this performance difference.
You generate the Compare Period Automatic Database Diagnostic Monitor (ADDM) report for
further investigation.
Which three findings would you get from the report?
A.
It detects any configuration change that caused a performance difference in both time periods.
B.
It identifies any workload change that caused a performance difference in both time periods.
C.
It detects the top wait events causing performance degradation.
D.
It shows the resource usage for CPU, memory, and I/O in both time periods.
E.
It shows the difference in the size of memory pools in both time periods.
F.
It gives information about statistics collection in both time periods.
Explanation:
Keyword: shows the difference.
* Full ADDM analysis across two AWR snapshot periods
Detects causes, measures effects, then correlates them
Causes: workload changes, configuration changes
Effects: regressed SQL, reached resource limits (CPU, I/O, memory, interconnect)
Makes actionable recommendations along with quantified impact
* Identify what changed
/ Configuration changes, workload changes
* Performance degradation of the database occurs when your database was performing optimally
in the past, such as 6 months ago, but has gradually degraded to a point where it becomes
noticeable to the users. The Automatic Workload Repository (AWR) Compare Periods report
enables you to compare database performance between two periods of time.
While an AWR report shows AWR data between two snapshots (or two points in time), the AWR
Compare Periods report shows the difference (delta) between two periods (or two AWR reports
with a total of four snapshots). Using the AWR Compare Periods report helps you to identify
detailed performance attributes and configuration settings that differ between two time periods.
Reference: Resolving Performance Degradation Over Time
After actual execution of the query, you notice that the hash join was done in the execution plan:
Identify the reason why the optimizer chose different execution plans.
Posted by seenagape on January 14, 2014
Examine the parameter for your database instance:
You generated the execution plan for the following query in the plan table and noticed that the
nested loop join was done. After actual execution of the query, you notice that the hash join was
done in the execution plan:
B.
The optimizer chose different plans because automatic dynamic sampling was enabled.
C.
The optimizer used re-optimization cardinality feedback for the query.
D.
The optimizer chose a different plan because extended statistics were created for the columns
used.
Explanation:
* optimizer_dynamic_sampling
OPTIMIZER_DYNAMIC_SAMPLING controls both when the database gathers dynamic statistics,
and the size of the sample that the optimizer uses to gather the statistics.
Range of values: 0 to 11
Which three statements are true about adaptive SQL plan management?
Posted by seenagape on January 14, 2014
Which three statements are true about adaptive SQL plan management?
A.
The non-accepted plans are automatically accepted and become usable by the optimizer if they
perform better than the existing accepted plans.
A.
SYSTEM
B.
SYSAUX
C.
EXAMPLE
D.
UNDO
E.
TEMP
F.
USERS
Explanation:
* A PDB would have its own SYSTEM, SYSAUX, and TEMP tablespaces. It can also contain
other user-created tablespaces.
* Oracle Database creates both the SYSTEM and SYSAUX tablespaces as part of every
database.
* tablespace_datafile_clauses
Use these clauses to specify attributes for all data files comprising the SYSTEM and SYSAUX
tablespaces in the seed PDB.
Incorrect:
Not D: a PDB cannot have an undo tablespace. Instead, it uses the undo tablespace belonging to
the CDB.
Note:
* Example:
CONN pdb_admin@pdb1
SELECT tablespace_name FROM dba_tablespaces;
TABLESPACE_NAME
SYSTEM
SYSAUX
TEMP
USERS
SQL>
Which two statements are true about variable extent size support for large ASM files?
Posted by seenagape on January 14, 2014
Which two statements are true about variable extent size support for large ASM files?
A.
What is the quickest way to recover the contents of the OCA.EXAM_RESULTS table to the OCP
schema?
Posted by seenagape on January 14, 2014
You executed a DROP USER CASCADE on an Oracle 11g release 1 database and immediately
realized that you forgot to copy the OCA.EXAM_RESULTS table to the OCP schema.
The RECYCLE_BIN was enabled before the DROP USER was executed and the OCP user has been
granted the FLASHBACK ANY TABLE system privilege.
What is the quickest way to recover the contents of the OCA.EXAM_RESULTS table to the OCP
schema?
A.
Execute FLASHBACK TABLE OCA.EXAM_RESULTS TO BEFORE DROP RENAME TO
OCP.EXAM_RESULTS; connected as SYSTEM.
B.
Recover the table using traditional Tablespace Point In Time Recovery.
C.
Recover the table using Automated Tablespace Point In Time Recovery.
D.
Recover the table using Database Point In Time Recovery.
E.
Explanation:
* From question: the OCP user has been granted the FLASHBACK ANY TABLE system privilege.
* Syntax
flashback_table::=
In your multitenant container database (CDB) containing pluggable databases (PDBs), the HR user
executes the following commands to create and grant privileges on a procedure:
CREATE OR REPLACE PROCEDURE create_test (v_emp_id NUMBER, v_ename
VARCHAR2, v_salary NUMBER, v_dept_id NUMBER)
IS
BEGIN
INSERT INTO hr.test VALUES (v_emp_id, v_ename, v_salary, v_dept_id);
END;
/
GRANT EXECUTE ON create_test TO john, jim, smith, king;
How can you prevent users having the EXECUTE privilege on the CREATE_TEST procedure from
inserting values into tables on w hich they do not have any privileges?
A.
Create the CREATE_TEST procedure with definer's rights.
B.
Grant the EXECUTE privilege to users with GRANT OPTION on the CREATE_TEST
procedure.
C.
What are two effects of not using the "ENABLE PLUGGABLE database" clause?
You created a new database using the CREATE DATABASE statement without specifying the
ENABLE PLUGGABLE DATABASE clause.
What are two effects of not using the ENABLE PLUGGABLE DATABASE clause?
A.
The database is created as a non-CDB and can never be plugged into a CDB.
D.
The database is created as a non-CDB but can be plugged into an existing CDB.
E.
The database is created as a non-CDB but will become a CDB whenever the first PDB is
plugged in.
Explanation:
What is the effect of specifying the "ENABLE PLUGGABLE DATABASE" clause in a "CREATE
DATABASE statement?
What is the effect of specifying the ENABLE PLUGGABLE DATABASE clause in a CREATE
DATABASE statement?
A.
It will create a multitenant container database (CDB) with only the root opened.
B.
It will create a CDB with root opened and seed read only.
C.
It will create a CDB with root and seed opened and one PDB mounted.
D.
It will create a CDB that must be plugged into an existing CDB.
E.
It will create a CDB with root opened and seed mounted.
Explanation:
* The CREATE DATABASE ENABLE PLUGGABLE DATABASE SQL statement
creates a new CDB. If you do not specify the ENABLE PLUGGABLE DATABASE clause, then the
newly created database is a non-CDB and can never contain PDBs.
Along with the root (CDB$ROOT), Oracle Database automatically creates a seed PDB
(PDB$SEED). The following graphic shows a newly created CDB:
* Creating a PDB
Rather than constructing the data dictionary tables that define an empty PDB from scratch,
and then populating its Obj$ and Dependency$ tables, the empty PDB is created when the CDB
is created. (Here, we use empty to mean containing no customer-created artifacts.) It is referred
to as the seed PDB and has the name PDB$Seed. Every CDB non-negotiably contains a
seed PDB; it is non-negotiably always open in read-only mode. This has no conceptual
significance; rather, it is just an optimization device. The create PDB operation is implemented
as a special case of the clone PDB operation.
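A skeletal CREATE DATABASE statement using the clause might look like the following; the database name and paths are placeholders, and a real statement would carry many more clauses (character set, log files, SYSAUX sizing, and so on):

```sql
CREATE DATABASE cdb1
  ENABLE PLUGGABLE DATABASE
  SEED FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/cdb1/',
                            '/u01/app/oracle/oradata/cdb1/seed/');
```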
supported.
* DB_FLASH_CACHE_FILE filename for the flash memory or disk group representing a collection
of flash memory.
Specifying this parameter without also specifying the DB_FLASH_CACHE_SIZE initialization
parameter is not allowed.
Which three initialization parameters are not controlled by Automatic Shared Memory Management
(ASMM)?
A.
LOG_BUFFER
B.
SORT_AREA_SIZE
C.
JAVA_POOL_SIZE
D.
STREAMS_POOL_SIZE
E.
DB_16K_CACHE_SIZE
F.
DB_KEEP_CACHE_SIZE
Explanation:
Manually Sized SGA Components that Use SGA_TARGET Space
SGA Component, Initialization Parameter
/ The log buffer
LOG_BUFFER
/ The keep and recycle buffer caches
DB_KEEP_CACHE_SIZE
DB_RECYCLE_CACHE_SIZE
/ Nonstandard block size buffer caches
DB_nK_CACHE_SIZE
Note:
* In addition to setting SGA_TARGET to a nonzero value, you must set to zero all initialization
parameters listed in the table below to enable full automatic tuning of the automatically sized SGA
components.
* Table, Automatically Sized SGA Components and Corresponding Parameters
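As an illustration (values are arbitrary), enabling ASMM while leaving the manually sized components under explicit control might look like:

```sql
-- A nonzero SGA_TARGET turns on Automatic Shared Memory Management.
ALTER SYSTEM SET SGA_TARGET = 2G SCOPE = BOTH;
-- Zeroing the automatically sized components lets ASMM tune them fully.
ALTER SYSTEM SET SHARED_POOL_SIZE = 0 SCOPE = BOTH;
ALTER SYSTEM SET DB_CACHE_SIZE = 0 SCOPE = BOTH;
-- Manually sized components (keep/recycle and nK caches) stay explicit
-- and are carved out of the SGA_TARGET space:
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 64M SCOPE = BOTH;
```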
Which three statements are true regarding the SQL*Loader operation performed using the control
file?
Examine the contents of the SQL*Loader control file:
Which three statements are true regarding the SQL*Loader operation performed using the control
file?
A.
An EMP table is created if a table does not exist; otherwise, the EMP table is appended with
the loaded data.
B.
The SQL*Loader data file myfile1.dat has the column names for the EMP table.
C.
The SQL*Loader operation fails because no record terminators are specified.
D.
Field names should be the first line in both the SQL*Loader data files.
E.
The SQL*Loader operation assumes that the file must be a stream record format file with the
normal carriage return string as the record terminator.
Explanation:
A: The APPEND keyword tells SQL*Loader to preserve any preexisting data in the
table. Other options allow you to delete preexisting data, or to fail with an error if the table is not
empty to begin with.
B (not D):
Note:
* SQL*Loader-00210: first data file is empty, cannot process the FIELD NAMES record
Cause: The data file listed in the next message was empty. Therefore, the FIELD NAMES FIRST
FILE directive could not be processed.
Action: Check the listed data file and fix it. Then retry the operation
E:
* A comma-separated values (CSV) (also sometimes called character-separated values, because
the separator character does not have to be a comma) file stores tabular data (numbers and text)
in plain-text form. Plain text means that the file is a sequence of characters, with no data that has
to be interpreted as binary numbers. A CSV file consists of any number of records,
separated by line breaks of some kind; each record consists of fields, separated by some other
character or string, most commonly a literal comma or tab. Usually, all records have an identical
sequence of fields.
* Fields with embedded commas must be quoted.
Example:
1997,Ford,E350,"Super, luxurious truck"
Note:
* SQL*Loader is a bulk loader utility used for moving data from external files into the Oracle
database.
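The control file referenced by the question is not reproduced above; a hypothetical control file exercising the features discussed (APPEND into an existing table, comma-separated fields) might look like:

```
-- hypothetical SQL*Loader control file; table and column names invented
LOAD DATA
INFILE 'myfile1.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ','
(emp_id, ename, sal, dept_id)
```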
In your multitenant container database (CDB) containing pluggable databases (PDBs), you granted
the CREATE TABLE privilege to the common user C##A_ADMIN in root and all PDBs. You
execute the following command from the root container:
SQL> REVOKE CREATE TABLE FROM C##A_ADMIN;
What is the result?
A.
It executes successfully and the CREATE TABLE privilege is revoked from C##A_ADMIN in
root only.
B.
It fails and reports an error because the CONTAINER=ALL clause is not used.
C.
It executes successfully and the CREATE TABLE privilege is revoked from C##A_ADMIN in
root and all PDBs.
D.
It fails and reports an error because the CONTAINER=CURRENT clause is not used.
E.
It executes successfully and the CREATE TABLE privilege is revoked from C##A_ADMIN in
all PDBs.
Explanation:
REVOKE ... FROM
If the current container is the root:
/ Specify CONTAINER = CURRENT to revoke a locally granted system privilege, object privilege,
or role from a common user or common role. The privilege or role is revoked from the user or role
only in the root. This clause does not revoke privileges granted with CONTAINER = ALL.
/ Specify CONTAINER = ALL to revoke a commonly granted system privilege, object privilege on a
common object, or role from a common user or common role. The privilege or role is revoked from
the user or role across the entire CDB. This clause can revoke only a privilege or role granted with
CONTAINER = ALL from the specified common user or common role. This clause does not revoke
privileges granted locally with CONTAINER = CURRENT. However, any locally granted privileges
that depend on the commonly granted privilege being revoked are also revoked.
If you omit this clause, then CONTAINER = CURRENT is the default.
Reference: Oracle Database SQL Language Reference 12c, Revoke
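The two container scopes can be sketched as follows (run from the root):

```sql
-- Revoke the commonly granted privilege across the entire CDB:
REVOKE CREATE TABLE FROM c##a_admin CONTAINER = ALL;

-- With the clause omitted, CONTAINER = CURRENT is assumed,
-- so the privilege is revoked in the root only:
REVOKE CREATE TABLE FROM c##a_admin;
```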
Which two statements are true concerning the Resource Manager plans for individual pluggable
databases (PDB plans) in a multitenant container database (CDB)?
Which two statements are true concerning the Resource Manager plans for individual pluggable
databases (PDB plans) in a multitenant container database (CDB)?
A.
If no PDB plan is enabled for a pluggable database, then all sessions for that PDB are treated
to an equal degree of the resource share of that PDB.
B.
In a PDB plan, subplans may be used with up to eight consumer groups.
C.
If a PDB plan is enabled for a pluggable database, then resources are allocated to consumer
groups across all PDBs in the CDB.
D.
If no PDB plan is enabled for a pluggable database, then the PDB share in the CDB plan is
dynamically calculated.
E.
If a PDB plan is enabled for a pluggable database, then resources are allocated to consumer
groups based on the shares provided to the PDB in the CDB plan and the shares provided to the
consumer groups in the PDB plan.
Explanation:
A: Setting a PDB resource plan is optional. If not specified, all sessions within the
PDB are treated equally.
*
In a non-CDB database, workloads within a database are managed with resource plans.
In a PDB, workloads are also managed with resource plans, also called PDB resource plans.
The functionality is similar except for the following differences:
/ Non-CDB Database
Multi-level resource plans
Up to 32 consumer groups
Subplans
/ PDB Database
Single-level resource plans only
Up to 8 consumer groups
(not B) No subplans
Incorrect: Not C
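A minimal single-level PDB plan, sketched with DBMS_RESOURCE_MANAGER (plan and consumer group names here are invented, and the non-default consumer group is assumed to already exist):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan => 'hr_pdb_plan', comment => 'sketch of a PDB plan');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'hr_pdb_plan', group_or_subplan => 'oltp_group',
    comment => 'OLTP sessions', mgmt_p1 => 70);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'hr_pdb_plan', group_or_subplan => 'OTHER_GROUPS',
    comment => 'everything else', mgmt_p1 => 30);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- Enable the plan inside the PDB:
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'hr_pdb_plan';
```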
You notice that the elapsed time for an important database Scheduler job is unacceptably long.
The job belongs to a Scheduler job class and window.
Which two actions would reduce the job's elapsed time?
A.
Increasing the priority of the job class to which the job belongs
B.
Increasing the job's relative priority within the job class to which it belongs
C.
Increasing the resource allocation for the consumer group mapped to the Scheduler job's job
class within the plan mapped to the Scheduler window
D.
Moving the job to an existing higher priority scheduler window with the same schedule and
duration
E.
Increasing the value of the JOB_QUEUE_PROCESSES parameter
F.
Increasing the priority of the scheduler window to which the job belongs
Explanation:
B: Job priorities are used only to prioritize among jobs in the same class.
Note: Group jobs for prioritization
Within the same job class, you can assign priority values of 1-5 to individual jobs
so that if two jobs in the class are scheduled to start at the same time, the one with
the higher priority takes precedence. This ensures that you do not have a less
important job preventing the timely completion of a more important one.
C: Set resource allocation for member jobs
Job classes provide the link between the Database Resource Manager and the
Scheduler, because each job class can specify a resource consumer group as an
attribute. Member jobs then belong to the specified consumer group and are
assigned resources according to settings in the current resource plan.
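For example, raising a job's relative priority within its class (option B) is a single attribute change; the job name here is hypothetical:

```sql
BEGIN
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'nightly_load_job',  -- hypothetical job name
    attribute => 'job_priority',
    value     => 1);                  -- 1 is the highest, 5 the lowest
END;
/
```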
Which two methods or commands would you use to accomplish this task?
You plan to migrate your database from a file system to Automatic Storage Management (ASM)
on the same platform.
Which two methods or commands would you use to accomplish this task?
A.
C.
Conventional Export and Import
D.
Which two statements are true about the outcome after running the script?
You run a script that completes successfully using SQL*Plus that performs these actions:
1. Creates a multitenant container database (CDB)
2. Plugs in three pluggable databases (PDBs)
3. Shuts down the CDB instance
4. Starts up the CDB instance using STARTUP OPEN READ WRITE
Which two statements are true about the outcome after running the script?
A.
The seed will be in mount state.
B.
Which two statements are true when a session logged in as SCOTT queries the SAL column in the
view and the table?
You execute the following piece of code with appropriate privileges:
User SCOTT has been granted the CREATE SESSION privilege and the MGR role.
Which two statements are true when a session logged in as SCOTT queries the SAL column in
the view and the table?
A.
Data is redacted for the EMP.SAL column only if the SCOTT session does not have the MGR
role set.
B.
Data is redacted for the EMP.SAL column only if the SCOTT session has the MGR role set.
C.
What happens to the sessions that are presently connected to the database Instance?
Your database is open and the LISTENER listener is running. You stopped the wrong listener
LISTENER by issuing the following command:
lsnrctl > STOP
What happens to the sessions that are presently connected to the database instance?
A.
They are able to perform only queries.
B.
Which three statements are true about using flashback database in a multitenant container
database (CDB)?
Which three statements are true about using flashback database in a multitenant container
database (CDB)?
A.
The root container can be flashed back without flashing back the pluggable databases (PDBs).
B.
To enable flashback database, the CDB must be mounted.
C.
Individual PDBs can be flashed back without flashing back the entire CDB.
D.
A CDB can be flashed back specifying the desired target point in time or an SCN, but not a
restore point.
Explanation:
C: * RMAN provides support for point-in-time recovery for one or more pluggable
databases (PDBs). The process of performing recovery is similar to that of DBPITR. You use the
RECOVER command to perform point-in-time recovery of one or more PDBs. However, to recover
PDBs, you must connect to the root as a user with SYSDBA or SYSBACKUP privilege
D: DB_FLASHBACK_RETENTION_TARGET specifies the upper limit (in minutes) on how far
back in time the database may be flashed back. How far back one can flashback a database
depends on how much flashback data Oracle has kept in the flash recovery area.
Range of values: 0 to 2^31 - 1
Note:
Reference: Oracle Database Backup and Recovery User's Guide 12c
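A rough RMAN sketch of the PDB point-in-time recovery described for C; the PDB name, SCN, and auxiliary path are placeholders, and the connection is to the root as a user with SYSDBA or SYSBACKUP:

```sql
RMAN> ALTER PLUGGABLE DATABASE hr_pdb CLOSE;
RMAN> RECOVER PLUGGABLE DATABASE hr_pdb UNTIL SCN 1428
        AUXILIARY DESTINATION '/u01/aux';
RMAN> ALTER PLUGGABLE DATABASE hr_pdb OPEN RESETLOGS;
```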
Fine-Grained Auditing (FGA) is enabled for the PRICE column in the PRODUCTS table for
SELECT statements only when a row with PRICE > 10000 is accessed.
B.
FGA is enabled for the PRODUCTS.PRICE column and an audit record is written whenever a
row with PRICE > 10000 is accessed.
C.
FGA is enabled for all DML operations by JIM on the PRODUCTS.PRICE column.
D.
FGA is enabled for the PRICE column of the PRODUCTS table and the SQL statement is
captured in the FGA audit trail.
Explanation:
DBMS_FGA.add_policy
* The DBMS_FGA package provides fine-grained security functions.
* ADD_POLICY Procedure
This procedure creates an audit policy using the supplied predicate as the audit condition.
Incorrect:
Not C: object_schema
The schema of the object to be audited. (If NULL, the current log-on user schema is assumed.)
Which statement is true about the audit record that is generated when auditing after the instance
restarts?
You execute the following commands to audit database activities:
SQL> ALTER SYSTEM SET AUDIT_TRAIL=DB, EXTENDED SCOPE=SPFILE;
SQL> AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE BY JOHN BY SESSION
WHENEVER SUCCESSFUL;
Which statement is true about the audit record that is generated when auditing after the instance
restarts?
A.
One audit record is created for every successful execution of a SELECT, INSERT, or DELETE
command on a table, and contains the SQL text for the SQL statements.
B.
One audit record is created for every successful execution of a SELECT, INSERT, or DELETE
command, and contains the execution plan for the SQL statements.
C.
One audit record is created for the whole session if JOHN successfully executes a SELECT,
INSERT, or DELETE command, and contains the execution plan for the SQL statements.
D.
One audit record is created for the whole session if JOHN successfully executes a SELECT
command, and contains the SQL text and bind variables used.
E.
One audit record is created for the whole session if JOHN successfully executes a SELECT,
INSERT, or DELETE command on a table, and contains the execution plan, SQL text, and bind
variables used.
Explanation:
Note:
* BY SESSION
In earlier releases, BY SESSION caused the database to write a single record for all SQL
statements or operations of the same type executed on the same schema objects in the same
session. Beginning with this release (11g) of Oracle Database, both BY SESSION and BY
ACCESS cause Oracle Database to write one audit record for each audited statement and
operation.
* BY ACCESS
Specify BY ACCESS if you want Oracle Database to write one record for each audited statement
and operation.
Note:
If you specify either a SQL statement shortcut or a system privilege that audits a data definition
language (DDL) statement, then the database always audits by access. In all other cases, the
database honors the BY SESSION or BY ACCESS specification.
* For each audited operation, Oracle Database produces an audit record containing this
information:
/ The user performing the operation
/ The type of operation
/ The object involved in the operation
/ The date and time of the operation
Reference: Oracle Database SQL Language Reference 12c
Which three statements are true about the ASM disk group compatibility attributes that are set for a
disk group?
You support Oracle Database 12c, Oracle Database 11g, and Oracle Database 10g on the same
server.
All databases of all versions use Automatic Storage Management (ASM).
Which three statements are true about the ASM disk group compatibility attributes that are set for
a disk group?
A.
The ASM compatibility attribute controls the format of the disk group metadata.
B.
RDBMS compatibility together with the database version determines whether a database
Instance can mount the ASM disk group.
C.
The RDBMS compatibility setting allows only databases set to the same version as the
compatibility value, to mount the ASM disk group.
D.
The ASM compatibility attribute determines some of the ASM features that may be used by the
Oracle disk group.
E.
The ADVM compatibility attribute determines the ACFS features that may be used by the
Oracle 10g database.
Explanation:
AD: The value for the disk group COMPATIBLE.ASM attribute determines the
minimum software version for an Oracle ASM instance that can use the disk group. This setting
also affects the format of the data structures for the Oracle ASM metadata on the disk.
B: The value for the disk group COMPATIBLE.RDBMS attribute determines the minimum
COMPATIBLE database initialization parameter setting for any database instance that is allowed
to use the disk group. Before advancing the COMPATIBLE.RDBMS attribute, ensure that the
values for the COMPATIBLE initialization parameter for all of the databases that access the disk
group are set to at least the value of the new setting for COMPATIBLE.RDBMS.
For example, if the COMPATIBLE initialization parameters of the databases are set to either 11.1
or 11.2, then COMPATIBLE.RDBMS can be set to any value between 10.1 and 11.1 inclusively.
Not E:
/The value for the disk group COMPATIBLE.ADVM attribute determines whether the disk group
can contain Oracle ASM volumes. The value must be set to 11.2 or higher. Before setting this
attribute, the COMPATIBLE.ASM value must be 11.2 or higher. Also, the Oracle ADVM volume
drivers must be loaded in the supported environment.
/ You can create an Oracle ASM Dynamic Volume Manager (Oracle ADVM) volume in a disk
group. The volume device associated with the dynamic volume can then be used to host an
Oracle ACFS file system.
The compatibility parameters COMPATIBLE.ASM and COMPATIBLE.ADVM must be set to 11.2
or higher for the disk group.
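Compatibility attributes are set per disk group and can only be advanced, never lowered; for example (the disk group name and version values are illustrative):

```sql
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.asm'   = '12.1';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.rdbms' = '11.2';
ALTER DISKGROUP data SET ATTRIBUTE 'compatible.advm'  = '12.1';
```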
Note:
* The disk group attributes that determine compatibility are COMPATIBLE.ASM,
COMPATIBLE.RDBMS. and COMPATIBLE.ADVM. The COMPATIBLE.ASM and
To enable the Database Smart Flash Cache, you configure the following parameters:
DB_FLASH_CACHE_FILE = /dev/flash_device_1, /dev/flash_device_2
DB_FLASH_CACHE_SIZE = 64G
What is the result when you start up the database instance?
A.
It results in an error because these parameter settings are invalid.
B.
It will permit the use of uppercase passwords for database users who have been granted the
SYSOPER role.
B.
It contains usernames and passwords of database users who are members of the OSOPER
operating system group.
C.
It contains usernames and passwords of database users who are members of the OSDBA
operating system group.
D.
It will permit the use of lowercase passwords for database users who have been granted the
SYSDBA role.
E.
It will not permit the use of mixed case passwords for the database users who have been
granted the SYSDBA role.
Explanation:
* You can create a password file using the password file creation utility, ORAPWD.
* Adding Users to a Password File
When you grant SYSDBA or SYSOPER privileges to a user, that user's name and privilege
information are added to the password file. If the server does not have an EXCLUSIVE password
file (that is, if the initialization parameter REMOTE_LOGIN_PASSWORDFILE is NONE or
SHARED, or the password file is missing), Oracle Database issues an error if you attempt to grant
these privileges.
A user's name remains in the password file only as long as that user has at least one of these two
privileges. If you revoke both of these privileges, Oracle Database removes the user from the
password file.
* The syntax of the ORAPWD command is as follows:
ORAPWD FILE=filename [ENTRIES=numusers]
[FORCE={Y|N}] [IGNORECASE={Y|N}] [NOSYSDBA={Y|N}]
* IGNORECASE
If this argument is set to y, passwords are case-insensitive. That is, case is ignored when
comparing the password that the user supplies during login with the password in the password file.
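For instance, a case-sensitive password file might be created like this; the file name follows the usual orapw<SID> convention and the path is illustrative:

```
$ orapwd FILE=$ORACLE_HOME/dbs/orapworcl ENTRIES=10 IGNORECASE=N
```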
B.
ALTER PLUGGABLE DATABASE OPEN ALL issued from a PDB
C.
ALTER PLUGGABLE DATABASE PDB OPEN issued from the seed
D.
ALTER DATABASE PDB OPEN issued from the root
E.
Which two recommendations should you make to speed up the rebalance operation if this type of
failure happens again?
You administer an online transaction processing (OLTP) system whose database is stored in
Automatic Storage Management (ASM) and whose disk groups use normal redundancy.
One of the ASM disks goes offline, and is then dropped because it was not brought online before
DISK_REPAIR_TIME elapsed.
When the disk is replaced and added back to the disk group, the ensuing rebalance operation is
too slow.
Which two recommendations should you make to speed up the rebalance operation if this type of
failure happens again?
A.
2. A user should not be able to create more than four simultaneous sessions.
3. User sessions must be terminated after 15 minutes of inactivity.
4. Users must be prompted to change their passwords every 15 days.
How would you accomplish these requirements?
A.
A senior DBA asked you to execute the following command to improve performance:
SQL> ALTER TABLE subscribe_log STORAGE (BUFFER_POOL recycle);
You checked the data in the SUBSCRIBE_LOG table and found that it is a large table containing
one million rows.
What could be a reason for this recommendation?
A.
The keep pool is not configured.
B.
Automatic Workarea Management is not configured.
C.
Automatic Shared Memory Management is not enabled.
D.
Which three tasks can be automatically performed by the Automatic Data Optimization feature of
Information Lifecycle Management (ILM)?
Which three tasks can be automatically performed by the Automatic Data Optimization feature of
Information Lifecycle Management (ILM)?
A.
Tracking the most recent read time for a table segment in a user tablespace
B.
Tracking the most recent write time for a table segment in a user tablespace
C.
Which two partitioned table maintenance operations support asynchronous Global Index
Maintenance in Oracle database 12c?
Which two memory areas that are part of PGA are stored in SGA instead, for shared server
connection?
You configure your database instance to support shared server connections.
Which two memory areas that are part of the PGA are stored in the SGA instead, for shared server
connections?
A.
User session data
B.
Stack space
C.
Note:
* System global area (SGA)
The SGA is a group of shared memory structures, known as SGA components, that contain data
and control information for one Oracle Database instance. The SGA is shared by all server and
background processes. Examples of data stored in the SGA include cached data blocks and
shared SQL areas.
* Program global area (PGA)
A PGA is a memory region that contains data and control information for a server process. It is
nonshared memory created by Oracle Database when a server process is started. Access to the
PGA is exclusive to the server process. There is one PGA for each server process. Background
processes also allocate their own PGAs. The total memory used by all individual PGAs is known
as the total instance PGA memory, and the collection of individual PGAs is referred to as the total
instance PGA, or just instance PGA. You use database initialization parameters to set the size of
the instance PGA, not individual PGAs.
Reference: Oracle Database Concepts 12c
Which two statements are true about Oracle Managed Files (OMF)?
The file system directories that are specified by OMF parameters are created automatically.
C.
OMF can be used with ASM disk groups, as well as with raw devices, for better file
management.
D.
OMF automatically creates unique file names for table spaces and control files.
E.
OMF may affect the location of the redo log files and archived log files.
Explanation:
B: Through initialization parameters, you specify the file system directory to be used
for a particular type of file. The database then ensures that a unique file, an Oracle-managed file,
is created and deleted when no longer needed.
D: The database internally uses standard file system interfaces to create and delete files as
needed for the following database structures:
Tablespaces
Redo log files
Control files
Archived logs
Block change tracking files
Flashback logs
RMAN backups
Note:
* Using Oracle-managed files simplifies the administration of an Oracle Database. Oracle-managed
files eliminate the need for you, the DBA, to directly manage the operating system files
that make up an Oracle Database. With Oracle-managed files, you specify file system directories
in which the database automatically creates, names, and manages files at the database object
level. For example, you need only specify that you want to create a tablespace; you do not need to
specify the name and path of the tablespace's datafile with the DATAFILE clause.
Reference: What Are Oracle-Managed Files?
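A short OMF sketch; the destination path is a placeholder, and an ASM disk group such as +DATA would work equally well:

```sql
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata/orcl';
-- No DATAFILE clause: the datafile is created, uniquely named,
-- and later deleted by the database itself.
CREATE TABLESPACE sales_ts;
```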
Which four actions are possible during an Online Data file Move operation?
Which four actions are possible during an Online Data file Move operation?
A.
Executing DML statements on objects stored in the data file being moved
Explanation:
Incorrect:
Not B: The online move data file operation may get aborted if the standby recovery process takes
the data file offline, shrinks the file, or drops the tablespace.
Not D: The online move data file operation cannot be executed on physical standby while standby
recovery is running in a mounted but not open instance.
Note:
You can move the location of an online data file from one physical file to another physical file while
the database is actively accessing the file. To do so, you use the SQL statement ALTER
DATABASE MOVE DATAFILE.
An operation performed with the ALTER DATABASE MOVE DATAFILE statement increases the
availability of the database because it does not require that the database be shut down to move
the location of an online data file. In releases prior to Oracle Database 12c Release 1 (12.1), you
could only move the location of an online data file if the database was down or not open, or by first
taking the file offline.
You can perform an online move data file operation independently on the primary and on the
standby (either physical or logical). The standby is not affected when a data file is moved on the
primary, and vice versa.
Reference: Oracle Data Guard Concepts and Administration 12c, Moving the Location of Online
Data Files
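The statement itself is a one-liner; the source path and destination here are placeholders:

```sql
ALTER DATABASE MOVE DATAFILE '/u01/oradata/cdb1/users01.dbf'
  TO '+DATA';
```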
Your multitenant container database (CDB) contains a pluggable database, HR_PDB. The default
permanent tablespace in HR_PDB is USERDATA. The container database (CDB) is open and you
connect RMAN.
You want to issue the following RMAN command:
RMAN> BACKUP TABLESPACE hr_pdb:userdata;
Which task should you perform before issuing the command?
A.
Place the root container in ARCHIVELOG mode.
B.
Take the user data tablespace offline.
C.
Place the root container in the nomount stage.
D.
Identify three scenarios in which you would recommend the use of SQL Performance Analyzer to
analyze impact on the performance of SQL statements.
Identify three scenarios in which you would recommend the use of SQL Performance Analyzer to
analyze impact on the performance of SQL statements.
A.
D.
Migration of database storage from non-ASM to ASM storage
E.
Which two statements are true about the RMAN validate database command?
Which two statements are true about the RMAN validate database command?
A.
You install a non-RAC Oracle Database. During installation, the Oracle Universal Installer (OUI)
prompts you to enter the path of the inventory directory and also to specify an operating system
group name.
Which statement is true?
A.
The ORACLE_BASE base parameter is not set.
B.
The installation is being performed by the root user.
C.
The operating system group that is specified should have the root user as its member.
D.
The operating system group that is specified must have permission to write to the inventory
directory.
Explanation:
Note:
Providing a UNIX Group Name
If you are installing a product on a UNIX system, the Installer will also prompt you to provide the
name of the group which should own the base directory.
You must choose a UNIX group name which will have permissions to update, install, and deinstall
Oracle software. Members of this group must have write permissions to the base directory chosen.
Only users who belong to this group are able to install or deinstall software on this machine.
You are required to migrate your 11.2.0.3 database as a pluggable database (PDB) to a
multitenant container database (CDB).
The following are the possible steps to accomplish this task:
1. Place all the user-defined tablespaces in read-only mode on the source database.
2. Upgrade the source database to a 12c version.
3. Create a new PDB in the target container database.
4. Perform a full transportable export on the source database with the VERSION parameter set to
12 using the expdp utility.
5. Copy the associated data files and export dump file to the desired location in the target
database.
6. Invoke the Data Pump import utility on the new PDB database as a user with the
DATAPUMP_IMP_FULL_DATABASE role and specify the full transportable import options.
7. Synchronize the PDB on the target container database by using the DBMS_PDS.SYNC_ODB
function.
Identify the correct order of the required steps.
A.
2, 1, 3, 4, 5, 6
B.
1, 3, 4, 5, 6, 7
C.
1, 4, 3, 5, 6, 7
D.
2, 1, 3, 4, 5, 6, 7
E.
1, 5, 6, 4, 3, 2
Explanation:
Step 0: (2) Upgrade the source database to 12c version.
Note:
Full Transportable Export/Import Support for Pluggable Databases
Full transportable export/import was designed with pluggable databases as a migration
destination.
You can use full transportable export/import to migrate from a non-CDB database into a PDB, from
one PDB to another PDB, or from a PDB to a non-CDB. Pluggable databases act exactly like
non-CDBs when importing and exporting both data and metadata.
The steps for migrating from a non-CDB into a pluggable database are as follows:
Step 1. (1) Set the user and application tablespaces in the source database to be READ ONLY
Step 2. (3) Create a new PDB in the destination CDB using the create pluggable database
command
Step 3. (5) Copy the tablespace data files to the destination
Step 4. (6) Using an account that has the DATAPUMP_IMP_FULL_DATABASE privilege, either:
Export from the source database using expdp with the FULL=Y TRANSPORTABLE=ALWAYS
options, and import into the target database using impdp, or
Import over a database link from the source to the target using impdp
Step 5. Perform post-migration validation or testing according to your normal practice
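As a sketch, the key commands for these steps look like the following. The directory object, dump file name, PDB name, and paths are illustrative assumptions, not values from the question:

```sql
-- 1. On the upgraded source: make each user-defined tablespace read-only
ALTER TABLESPACE users READ ONLY;

-- 3. In the target CDB root: create the new PDB
CREATE PLUGGABLE DATABASE hr_pdb ADMIN USER pdb_adm IDENTIFIED BY pwd
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/CDB1/pdbseed/',
                       '/u01/app/oracle/oradata/CDB1/hr_pdb/');

-- 4. Full transportable export on the source (run at the OS prompt):
--    expdp system FULL=Y TRANSPORTABLE=ALWAYS VERSION=12 \
--          DIRECTORY=dpump_dir DUMPFILE=hr_full.dmp

-- 6. After copying the data files and dump file, import into the PDB:
--    impdp system@hr_pdb FULL=Y DIRECTORY=dpump_dir DUMPFILE=hr_full.dmp \
--          TRANSPORT_DATAFILES='/u01/app/oracle/oradata/CDB1/hr_pdb/users01.dbf'
```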
Which two statements are true about the Oracle Direct Network File system (DNFS)?
A.
It utilizes the OS file system cache.
B.
A traditional NFS mount is not required when using Direct NFS.
C.
Oracle Disk Manager can manage NFS on its own, without using the operating system kernel NFS
driver.
D.
Direct NFS is available only in UNIX platforms.
E.
Direct NFS can load-balance I/O traffic across multiple network adapters.
Explanation:
E: Performance is improved by load balancing across multiple network interfaces (if
available).
Note:
* To enable Direct NFS Client, you must replace the standard Oracle Disk Manager (ODM) library
with one that supports Direct NFS Client.
Incorrect:
Not A: Direct NFS Client is capable of performing concurrent
direct I/O, which bypasses any operating system level caches and eliminates any
operating system write-ordering locks
Not B:
* To use Direct NFS Client, the NFS file systems must first be mounted and available
over regular NFS mounts.
Not D: Direct NFS is provided as part of the database kernel, and is thus available on all
supported database platforms, even those that don't support NFS natively, like Windows.
Note:
* Oracle Direct NFS (dNFS) is an optimized NFS (Network File System) client that provides faster
and more scalable access to NFS storage located on NAS storage devices (accessible over
TCP/IP). Direct NFS is built directly into the database kernel just like ASM which is mainly used
when using DAS or SAN storage.
* Oracle Direct NFS (dNFS) is an internal I/O layer that provides faster access to large NFS files
than traditional NFS clients.
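As the note above says, enabling Direct NFS Client means replacing the standard ODM library. One common way to do this on recent releases is to relink with the dnfs_on make target; the paths assume a standard $ORACLE_HOME layout:

```shell
# Switch the database to the Direct NFS ODM library
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on    # use dnfs_off to revert to the standard library
```

NFS file systems still need to be mounted and available over regular NFS mounts before the database can use them through dNFS.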
Which three statements are true about the process of automatic optimization by using cardinality
feedback?
A.
The optimizer automatically changes a plan during subsequent execution of a SQL statement if
there is a huge difference in optimizer estimates and execution statistics.
B.
The optimizer can re-optimize a query only once using cardinality feedback.
C.
The optimizer enables monitoring for cardinality feedback after the first execution of a query.
D.
The optimizer does not monitor cardinality feedback if dynamic sampling and multicolumn
statistics are enabled.
E.
After the optimizer identifies a query as a re-optimization candidate, statistics collected by the
collectors are submitted to the optimizer.
Explanation:
C: During the first execution of a SQL statement, an execution plan is generated as
usual.
D: if multi-column statistics are not present for the relevant combination of columns, the optimizer
can fall back on cardinality feedback.
(not B)* Cardinality feedback. This feature, enabled by default in 11.2, is intended to improve plans
for repeated executions.
optimizer_dynamic_sampling
optimizer_features_enable
* dynamic sampling or multi-column statistics allow the optimizer to more accurately estimate
selectivity of conjunctive predicates.
Note:
* OPTIMIZER_DYNAMIC_SAMPLING controls the level of dynamic sampling performed by the
optimizer.
Range of values: 0 to 10
* Cardinality feedback was introduced in Oracle Database 11gR2. The purpose of this feature is to
automatically improve plans for queries that are executed repeatedly, for which the optimizer does
not estimate cardinalities in the plan properly. The optimizer may misestimate cardinalities for a
variety of reasons, such as missing or inaccurate statistics, or complex predicates. Whatever the
reason for the misestimate, cardinality feedback may be able to help.
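One way to see cardinality feedback at work is to check which child cursors were built using it. This sketch queries V$SQL_SHARED_CURSOR, whose USE_FEEDBACK_STATS column exists in 11.2 and 12c:

```sql
-- List child cursors that were re-optimized using cardinality feedback
SELECT s.sql_id, s.child_number, c.use_feedback_stats
FROM   v$sql s
JOIN   v$sql_shared_cursor c
       ON s.sql_id = c.sql_id AND s.child_number = c.child_number
WHERE  c.use_feedback_stats = 'Y';
```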
Which three statements are true when the listener handles connection requests to an Oracle 12c
database instance with multithreaded architecture enabled in UNIX?
A.
The local listener may pass the request to an existing process which in turn will create a thread.
Explanation:
You are connected using SQL*Plus to a multitenant container database (CDB) with SYSDBA
privileges and execute the following sequence of statements:
What is the result of the last SET CONTAINER statement and why is it so?
A.
Examine the details of the Top 5 Timed Events in the following Automatic Workload Repository (AWR) report:
What are three possible causes for the latch-related wait events?
A.
The buffers are being read into the buffer cache, but some other session is changing the
buffers.
Explanation:
SYS, SYSTEM
B.
SCOTT
C.
Only for successful executions
D.
Only for failed executions
E.
SYS sessions, regardless of the roles that are set in the session
B.
SYSTEM sessions, regardless of the roles that are set in the session
C.
SCOTT sessions, only if the MGR role is set in the session
D.
What is the result of executing a TRUNCATE TABLE command on a table that has Flashback
Archiving enabled?
A.
D.
The rows in both the table and the archive are truncated.
Explanation:
* Using any of the following DDL statements on a table enabled for Flashback Data
Archive causes error ORA-55610:
ALTER TABLE statement that does any of the following:
Drops, renames, or modifies a column
Performs partition or subpartition operations
Converts a LONG column to a LOB column
Includes an UPGRADE TABLE clause, with or without an INCLUDING DATA clause
DROP TABLE statement
RENAME TABLE statement
TRUNCATE TABLE statement
* After flashback archiving is enabled for a table, you can disable it only if you either have the
FLASHBACK ARCHIVE ADMINISTER system privilege or you are logged on as SYSDBA. While
flashback archiving is enabled for a table, some DDL statements are not allowed on that table.
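As a sketch of the feature the question relies on, Flashback Archiving is enabled and disabled per table against a flashback archive; the archive, tablespace, and retention values below are illustrative:

```sql
-- Create a flashback archive and enable archiving on a table
CREATE FLASHBACK ARCHIVE fda1 TABLESPACE fda_ts RETENTION 1 YEAR;
ALTER TABLE employees FLASHBACK ARCHIVE fda1;

-- Disabling it requires FLASHBACK ARCHIVE ADMINISTER or a SYSDBA session
ALTER TABLE employees NO FLASHBACK ARCHIVE;
```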
You create a table with the PERIOD FOR clause to enable the use of the Temporal Validity
feature of Oracle Database 12c.
Examine the table definition:
Which three statements are true concerning the use of the Valid Time Temporal feature for the
EMPLOYEES table?
A.
The same statement may filter on both transaction time and valid temporal time by using the AS
OF TIMESTAMP and PERIOD FOR clauses.
C.
The valid time columns are not populated by the Oracle Server automatically.
D.
The valid time columns are visible by default when the table is described.
E.
Which three statements are true regarding the use of the Database Migration Assistant for
Unicode (DMU)?
A.
The DMU can report columns that are too long in the converted character set.
E.
The DMU can report columns that are not represented in the converted character set.
Explanation:
A: In certain situations, you may want to exclude selected columns or tables from
scanning or conversion steps of the migration process.
D: Exceed column limit
The cell data will not fit into a column after conversion.
E: Need conversion
The cell data needs to be converted, because its binary representation in the
target character set is different than the representation in the current character
set, but neither length limit issues nor invalid representation issues have been
found.
* Oracle Database Migration Assistant for Unicode (DMU) is a unique next-generation migration
tool providing an end-to-end solution for migrating your databases from legacy encodings to
Unicode.
Incorrect:
Not C: The release of Oracle Database must be 10.2.0.4, 10.2.0.5, 11.1.0.7, 11.2.0.1, or later.
Oracle Grid Infrastructure for a stand-alone server is installed on your production host before
installing the Oracle Database server. The database and listener are configured by using Oracle
Restart.
Examine the following command and its output:
$ crsctl config has
CRS-4622: Oracle High Availability Services auto start is enabled.
What does this imply?
A.
When you start an instance with SQL*Plus, dependent listeners and ASM disk groups
are automatically started.
B.
When a database instance is started by using the SRVCTL utility and listener startup fails, the
instance is still started.
C.
When a database is created by using SQL*Plus, it is automatically added to the Oracle Restart
configuration.
D.
When you create a database service by modifying the SERVICE_NAMES initialization
parameter, it is automatically added to the Oracle Restart configuration.
Explanation:
Previously (10g and earlier), in the case of Oracle RAC, the CRS took care of the
detection and restarts. If you didn't use RAC, then this was not an option for you. However, in this
version of Oracle, you do have that ability even if you do not use RAC. The functionality known
as Oracle Restart is available in Grid Infrastructure. An agent checks the availability of important
components such as database, listener, ASM, etc. and brings them up automatically if they are
down. The functionality is available out of the box and does not need additional programming
beyond basic configuration. The component that checks the availability and restarts the failed
components is called HAS (High Availability Service).
Here is how you check the availability of HAS itself (from the Grid Infrastructure home):
$ crsctl check has
CRS-4638: Oracle High Availability Services is online
Note:
* crsctl config has
Use the crsctl config has command to display the automatic startup configuration of the Oracle
High Availability Services stack on the server.
* The crsctl config has command returns output similar to the following:
CRS-4622: Oracle High Availability Services autostart is enabled.
Your multitenant container database (CDB) contains some pluggable databases (PDBs). You
execute the following command in the root container:
The C##A_ADMIN user will be able to use the TEMP_TS temporary tablespace only in root.
C.
The command will create a common user whose description is contained in the root and each
PDB.
D.
The schema for the common user C##A_ADMIN can be different in each container.
E.
The command will create a user in the root container only because the container clause is not
used.
Explanation:
* Example, Creating Common User in a CDB
This example creates the common user c##testcdb.
CREATE USER c##testcdb IDENTIFIED BY password
DEFAULT TABLESPACE cdb_tbs
QUOTA UNLIMITED ON cdb_tbs
CONTAINER = ALL;
A common user's user name must start with C## or c## and consist only of ASCII characters. The
specified tablespace must exist in the root and in all PDBs.
* CREATE USER with CONTAINER (optional) clause
/ CONTAINER = ALL
Creates a common user.
/ CONTAINER = CURRENT
Creates a local user in the current PDB.
* CREATE USER
* The following rules apply to the current container in a CDB:
The current container can be CDB$ROOT (root) only for common users. The current container
can be a particular PDB for both common users and local users.
The current container must be the root when a SQL statement includes CONTAINER = ALL.
You can include the CONTAINER clause in several SQL statements, such as the CREATE USER,
ALTER USER, CREATE ROLE, GRANT, REVOKE, and ALTER SYSTEM statements.
Only a common user with the commonly granted SET CONTAINER privilege can run a SQL
statement that includes CONTAINER = ALL.
A.
Block change tracking will sometimes reduce I/O performed during cumulative incremental
backups.
B.
The change tracking file must always be backed up when you perform a full database backup.
C.
Block change tracking will always reduce I/O performed during cumulative incremental
backups.
D.
More than one database block may be read by an incremental backup for a change made to a
single block.
E.
The incremental level 1 backup that immediately follows the enabling of block change tracking
will not read the change tracking file to discover changed blocks.
Explanation:
Note:
* An incremental level 0 backup backs up all blocks that have ever been in use in this database.
* In a cumulative level 1 backup, RMAN backs up all the blocks used since the most recent level 0
incremental backup.
* Oracle Block Change Tracking
Once enabled, this 10g feature records the blocks modified since the last backup and stores the log of
them in a block change tracking file using the CTWR (Change Tracking Writer) process. During backups
RMAN uses the log file to identify the specific blocks that must be backed up. This improves
RMAN's performance as it does not have to scan whole datafiles to detect changed blocks.
Logging of changed blocks is performed by the CTWR process, which is also responsible for
writing data to the block change tracking file.
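Block change tracking is enabled with a single ALTER DATABASE statement and its state can be checked in a dynamic view; the file path below is illustrative:

```sql
-- Enable block change tracking and verify its status
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/oradata/CDB1/bct.chg';

SELECT status, filename FROM v$block_change_tracking;
```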
You find this query being used in your Oracle 12c database:
Which method is used by the optimizer to limit the rows being returned?
A.
A filter is added to the table query dynamically using ROWNUM to limit the rows to 20 percent
of the total rows.
B.
All the rows are returned to the client or middle tier but only the first 20 percent are returned to
the screen or the application.
C.
A view is created during execution and a filter on the view limits the rows to 20 percent of the
total rows.
D.
A TOP-N query is created to limit the rows to 20 percent of the total rows.
Explanation:
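The query referenced above is not reproduced in this dump, but the 12c row-limiting clause it uses looks like this sketch (table and column names are illustrative). Internally, the optimizer transforms this into a Top-N query rather than filtering rows on the client side:

```sql
-- 12c row-limiting clause; rewritten by the optimizer as a Top-N query
SELECT employee_id, last_name, salary
FROM   employees
ORDER  BY salary DESC
FETCH FIRST 20 PERCENT ROWS ONLY;
```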
Which three resources might be prioritized between competing pluggable databases when
creating a multitenant container database plan (CDB plan) using Oracle Database Resource
Manager?
A.
CPU
E.
Exadata I/O
F.
Local file system I/O
Explanation:
C: parallel_server_limit
Maximum percentage of parallel execution servers that a PDB can use.
D: utilization_limit
Resource utilization limit for CPU.
You then closed the encryption wallet because you were advised that this is secure.
Later in the day, you attempt to create the EMPLOYEES table in the SECURESPACE tablespace
with the SALT option on the EMPLOYEE column.
Which is true about the result?
A.
It creates the table successfully but does not encrypt any inserted data in the EMPNAME
column because the wallet must be opened to encrypt columns with SALT.
B.
It generates an error when creating the table because the wallet is closed.
C.
It creates the table successfully, and encrypts any inserted data in the EMPNAME column
because the wallet needs to be open only for tablespace creation.
D.
It generates an error when creating the table, because the SALT option cannot be used with
encrypted tablespaces.
Explanation:
* The environment setup for tablespace encryption is the same as that for transparent data
encryption. Before attempting to create an encrypted tablespace, a wallet must be created to hold
the encryption key.
* Setting the tablespace master encryption key is a one-time activity. This creates the master
encryption key for tablespace encryption. This key is stored in an external security module (Oracle
wallet) and is used to encrypt the tablespace encryption keys.
* Before you can create an encrypted tablespace, the Oracle wallet containing the tablespace
master encryption key must be open. The wallet must also be open before you can access data in
an encrypted tablespace.
* Salt is a way to strengthen the security of encrypted data. It is a random string added to the data
before it is encrypted, causing repetition of text in the clear to appear different when encrypted.
Salt removes the one common method attackers use to steal data, namely, matching patterns of
encrypted text.
* SALT | NO SALT: By default, the database appends a random string, called salt, to the clear text
of the column before encrypting it. This default behavior imposes some limitations on encrypted
columns:
/ If you specify SALT during column encryption, then the database does not compress the data in
the encrypted column even if you specify table compression for the table. However, the database
does compress data in unencrypted columns and encrypted columns without the SALT parameter.
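Column encryption with SALT is declared in the column definition; a minimal sketch, assuming the table and column names from the question (the wallet must be open when rows are inserted):

```sql
-- Column-level TDE with the default SALT behavior
CREATE TABLE employees (
  empno   NUMBER,
  empname VARCHAR2(100) ENCRYPT SALT
) TABLESPACE securespace;
```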
Both the indexes are updated when a row is inserted, updated, or deleted in the ORDERS
table.
C.
Both the indexes are created; however, only ORD_CUSTOMERS_IX1 is used by the optimizer
for queries on the ORDERS table.
D.
The ORD_CUSTOMER_IX1 index is not used by the optimizer even when the
OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to true.
E.
Both the indexes are created and used by the optimizer for queries on the ORDERS table.
Explanation:
* Specify BITMAP to indicate that index is to be created with a bitmap for each distinct key, rather
than indexing each row separately. Bitmap indexes store the rowids associated with a key value
as a bitmap. Each bit in the bitmap corresponds to a possible rowid. If the bit is set, then it means
that the row with the corresponding rowid contains the key value. The internal representation of
bitmaps is best suited for applications with low levels of concurrent transactions, such as data
warehousing.
* VISIBLE | INVISIBLE Use this clause to specify whether the index is visible or invisible to the
optimizer. An invisible index is maintained by DML operations, but it is not used by the
optimizer during queries unless you explicitly set the parameter
OPTIMIZER_USE_INVISIBLE_INDEXES to TRUE at the session or system level.
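A sketch of the syntax involved, using names patterned on the question's ORDERS example:

```sql
-- Create a bitmap index that starts out invisible to the optimizer
CREATE BITMAP INDEX ord_customer_ix1 ON orders (customer_id) INVISIBLE;

-- Allow the optimizer to consider invisible indexes in this session
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- Visibility can be toggled at any time
ALTER INDEX ord_customer_ix1 VISIBLE;
```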
Which two statements are true when row archival management is enabled?
A warehouse fact table in your Oracle 12c Database is range-partitioned by month and accessed
frequently with queries that span multiple partitions.
The table has a local prefixed, range-partitioned index.
Some of these queries access very few rows in some partitions and all the rows in other partitions,
but these queries still perform a full scan for all accessed partitions.
This commonly occurs when the range of dates begins at the end of a month or ends close to the
start of a month.
You want an execution plan to be generated that uses indexed access when only a few rows are
accessed from a segment, while still allowing full scans for segments where many rows are
returned.
Which three methods could transparently help to achieve this result?
A.
Using a partial local index on the warehouse fact table month column with indexing disabled for
the table partitions that return most of their rows to the queries.
B.
Using a partial local index on the warehouse fact table month column with indexing disabled for
the table partitions that return a few rows to the queries.
C.
Using a partitioned view that does a UNION ALL query on the partitions of the warehouse fact
table, which retains the existing local partitioned column.
D.
Converting the partitioned table to a partitioned view that does a UNION ALL query on the
monthly tables, which retains the existing local partitioned column.
E.
Using a partial global index on the warehouse fact table month column with indexing disabled
for the table partitions that return most of their rows to the queries.
F.
Using a partial global index on the warehouse fact table month column with indexing disabled
for the table partitions that return a few rows to the queries.
Explanation:
Note:
* Oracle 12c now provides the ability to index a subset of partitions and to exclude the others.
Local and global indexes can now be created on a subset of the partitions of a table. Partial Global
indexes provide more flexibility in index creation for partitioned tables. For example, index
segments can be omitted for the most recent partitions to ensure maximum data ingest rates
without impacting the overall data model and access for the partitioned object.
Partial Global Indexes save space and improve performance during loads and queries. This
feature supports global indexes that include or index a certain subset of table partitions or
subpartitions, and exclude the others. This operation is supported using a default table indexing
property. When a table is created or altered, a default indexing property can be specified for the
table or its partitions.
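A sketch of the default table indexing property and a partial local index; the table, partition, and index names are illustrative assumptions:

```sql
-- Table-level default is INDEXING OFF; selected partitions opt back in
CREATE TABLE sales_fact (
  sale_month DATE,
  amount     NUMBER
)
INDEXING OFF
PARTITION BY RANGE (sale_month) (
  PARTITION p_2013_12 VALUES LESS THAN (DATE '2014-01-01'),
  PARTITION p_2014_01 VALUES LESS THAN (DATE '2014-02-01') INDEXING ON
);

-- INDEXING PARTIAL builds index segments only for INDEXING ON partitions
CREATE INDEX sales_month_ix ON sales_fact (sale_month)
  LOCAL INDEXING PARTIAL;
```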
You use the Segment Advisor to help determine objects for which space may be reclaimed.
Which three statements are true about the advice given by the Segment Advisor?
A.
It may advise the use of online table redefinition for tables in dictionary managed tablespaces.
B.
It may advise the use of segment shrink for tables in dictionary managed tablespaces if there are
no chained rows.
C.
It may advise the use of online table redefinition for tables in locally managed tablespaces.
D.
Explanation:
Unlike unusable indexes, an invisible index is maintained during DML statements.
Note:
* Oracle 11g allows indexes to be marked as invisible. Invisible indexes are maintained like any
other index, but they are ignored by the optimizer unless the
OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at the instance or session
level. Indexes can be created as invisible by using the INVISIBLE keyword, and their visibility can
be toggled using the ALTER INDEX command.
In your multitenant container database (CDB) containing some pluggable databases (PDBs), you
execute the following commands in the root container:
The C##ROLE1 role is created only in the root database because the CONTAINER clause is not
used.
C.
Privileges are granted to the C##A_ADMIN user only in the root database.
D.
Privileges are granted to the C##A_ADMIN user in the root database and all PDBs.
E.
The statement for granting a role to a user fails because the CONTAINER clause is not used.
Explanation:
* You can include the CONTAINER clause in several SQL statements, such as the
CREATE USER, ALTER USER, CREATE ROLE, GRANT, REVOKE, and ALTER SYSTEM
statements.
* CREATE ROLE with CONTAINER (optional) clause
/ CONTAINER = ALL
Creates a common role.
/ CONTAINER = CURRENT
Creates a local role in the current PDB.
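A sketch of a common role and the effect of the CONTAINER clause on grants, with names patterned on the question:

```sql
-- Common role, created in the root and in every PDB
CREATE ROLE c##role1 CONTAINER = ALL;

-- Privilege granted commonly: usable in all containers
GRANT CREATE SESSION TO c##role1 CONTAINER = ALL;

-- Without a CONTAINER clause, the grant is local to the current
-- container (the root, in this scenario)
GRANT c##role1 TO c##a_admin;
```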
Note:
* If you specify the SECTION SIZE parameter on the BACKUP command, then RMAN produces a
multisection backup. This is a backup of a single large file, produced by multiple channels in
parallel, each of which produces one backup piece. Each backup piece contains one file section of
the file being backed up.
* Some points to remember about multisection backups include:
Flashback is enabled for your multitenant container database (CDB), which contains two
pluggable databases (PDBs). A local user was accidentally dropped from one of the PDBs.
You want to flash back the PDB to the time before the local user was dropped. You connect to the
CDB and execute the following commands:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO TIME TO_DATE('08/20/12', 'MM/DD/YY');
Examine the following commands:
1. ALTER PLUGGABLE DATABASE ALL OPEN;
2. ALTER DATABASE OPEN;
3. ALTER DATABASE OPEN RESETLOGS;
Which command or commands should you execute next to allow updates to the flashed back
schema?
A.
Only 1
B.
Only 2
C.
Only 3
D.
3 and 1
E.
1 and 2
Explanation:
Example (see Step 2):
Step 1:
Run the RMAN FLASHBACK DATABASE command.
You can specify the target time by using a form of the command shown in the following examples:
FLASHBACK DATABASE TO SCN 46963;
FLASHBACK DATABASE
TO RESTORE POINT BEFORE_CHANGES;
FLASHBACK DATABASE TO TIME
TO_DATE('09/20/05','MM/DD/YY');
When the FLASHBACK DATABASE command completes, the database is left mounted and
recovered to the specified target time.
Step 2:
Make the database available for updates by opening the database with the RESETLOGS option. If
the database is currently open read-only, then execute the following commands in SQL*Plus:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN RESETLOGS;
All subsequent statements in the session will be treated as one database operation and will be
monitored.
Explanation:
C: Setting the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter
to DIAGNOSTIC+TUNING (default) enables monitoring of database operations. Real-Time SQL
Monitoring is a feature of the Oracle Database Tuning Pack.
Note:
* The DBMS_SQL_MONITOR package provides information about Real-time SQL Monitoring and
Real-time Database Operation Monitoring.
*(not B) BEGIN_OPERATION Function
starts a composite database operation in the current session.
/ (E) FORCE_TRACKING forces the composite database operation to be tracked when the
operation starts. You can also use the string variable Y.
/ (not A) NO_FORCE_TRACKING the operation will be tracked only when it has consumed at
least 5 seconds of CPU or I/O time. You can also use the string variable N.
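A sketch of starting and ending a monitored composite database operation with these DBMS_SQL_MONITOR calls; the operation name is an illustrative assumption:

```sql
-- Force tracking of a composite database operation from its start
DECLARE
  l_dbop_eid NUMBER;
BEGIN
  l_dbop_eid := DBMS_SQL_MONITOR.BEGIN_OPERATION(
                  dbop_name       => 'nightly_load',
                  forced_tracking => DBMS_SQL_MONITOR.FORCE_TRACKING);

  -- ... the statements belonging to the operation run here ...

  DBMS_SQL_MONITOR.END_OPERATION(
    dbop_name => 'nightly_load',
    dbop_eid  => l_dbop_eid);
END;
/
```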
Which three statements are true about the working of system privileges in a multitenant container
database (CDB) that has pluggable databases (PDBs)?
A.
System privileges apply only to the PDB in which they are used.
B.
Local users cannot use local system privileges on the schema of a common user.
C.
The granter of system privileges must possess the SET CONTAINER privilege.
D.
Common users connected to a PDB can exercise privileges across other PDBs.
E.
System privileges with the WITH GRANT OPTION CONTAINER = ALL clause must be granted to a common
user before the common user can grant privileges to other users.
Explanation:
A, Not D: In a CDB, PUBLIC is a common role. In a PDB, privileges granted locally
to PUBLIC enable all local and common users to exercise these privileges in this PDB only.
C: A user can only perform common operations on a common role, for example, granting
privileges commonly to the role, when the following criteria are met:
The user is a common user whose current container is root.
The user has the SET CONTAINER privilege granted commonly, which means that the privilege
applies in all containers.
The user has privilege controlling the ability to perform the specified operation, and this privilege
has been granted commonly
Incorrect:
Note:
* Every privilege and role granted to Oracle-supplied users and roles is granted commonly except
for system privileges granted to PUBLIC, which are granted locally.
You are about to plug a multi-terabyte non-CDB into an existing multitenant container database
(CDB) as a pluggable database (PDB).
The characteristics of the non-CDB are as follows:
Version: Oracle Database 12c Release 1 64-bit
Character set: WE8ISO8859P15
National character set: AL16UTF16
O/S: Oracle Linux 6 64-bit
The characteristics of the CDB are as follows:
Version: Oracle Database 12c Release 1 64-bit
Character set: AL32UTF8
O/S: Oracle Linux 6 64-bit
Which technique should you use to minimize down time while plugging this non-CDB into the
CDB?
A.
Transportable database
B.
Transportable tablespace
C.
Data Pump full export / import
D.
Your database has the SRV1 service configured for an application that runs on a middle-tier
application server. The application has multiple modules. You enable tracing at the service level
by executing the following command:
SQL> exec DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE('SRV1');
The possible outcomes and actions to aggregate the trace files are as follows:
1. The command fails because a module name is not specified.
2. A trace file is created for each session that is running the SRV1 service.
3. An aggregated trace file is created for all the sessions that are running the SRV1 service.
4. The trace files may be aggregated by using the trcsess utility.
5. The trace files may be aggregated by using the tkprof utility.
Identify the correct outcome and the step to aggregate the trace files.
A.
1
B.
2 and 4
C.
2 and 5
D.
3 and 4
E.
3 and 5
Explanation:
Tracing information is present in multiple trace files and you must use the trcsess
tool to collect it into a single file.
Incorrect:
Not 1: Parameter service_name
Name of the service for which tracing is enabled.
module_name
Name of the MODULE. An optional additional qualifier for the service.
Note:
* The procedure enables a trace for a given combination of Service, MODULE and ACTION name.
The specification is strictly hierarchical: Service Name or Service Name/MODULE, or Service
Name, MODULE, and ACTION name must be specified. Omitting a qualifier behaves like a wildcard, so that not specifying
an ACTION means all ACTIONs. Using the ALL_ACTIONS constant
achieves the same purpose.
* SERV_MOD_ACT_TRACE_ENABLE Procedure
This procedure will enable SQL tracing for a given combination of Service Name, MODULE and
ACTION globally unless an instance_name is specified.
* DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE(
service_name IN VARCHAR2,
module_name IN VARCHAR2 DEFAULT ANY_MODULE,
action_name IN VARCHAR2 DEFAULT ANY_ACTION,
waits IN BOOLEAN DEFAULT TRUE,
binds IN BOOLEAN DEFAULT FALSE,
instance_name IN VARCHAR2 DEFAULT NULL);
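The aggregation step can be sketched as follows; the trace directory, database name, and output file names are illustrative assumptions:

```shell
# Combine the per-session trace files for the SRV1 service into one file,
# then format the result with tkprof
cd $ORACLE_BASE/diag/rdbms/orcl/orcl/trace
trcsess output=srv1.trc service=SRV1 *.trc
tkprof srv1.trc srv1_report.txt sort=exeela
```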
C.
It fails and reports an error because the CONTAINER=ALL clause is not specified in the
command.
D.
It fails and reports an error because the CONTAINER=CURRENT clause is not specified in the
command.
E.
It executes successfully but neither the tablespace nor the data file is created.
Explanation:
There is interesting behavior in a 12.1.0.1 database when creating an undo tablespace in a PDB. With
the new multitenant architecture the undo tablespace resides at the CDB level and all PDBs share
the same UNDO tablespace.
When the current container is a PDB, an attempt to create an undo tablespace fails without
returning an error.
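A minimal illustration of that behavior (the PDB and file names are hypothetical):

```sql
-- Inside a PDB on 12.1.0.1:
ALTER SESSION SET CONTAINER = hr_pdb;

-- This reports success...
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/app/oracle/oradata/CDB1/hr_pdb/undotbs2.dbf' SIZE 100M;

-- ...yet no undo tablespace or data file is actually created in the PDB:
SELECT tablespace_name FROM dba_tablespaces WHERE contents = 'UNDO';
```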
1 comment
They instruct the maintenance job to collect missing statistics or perform dynamic sampling to
generate a more optimal plan.
C.
They are used to gather only missing statistics.
D.
They are created for a query expression where statistics are missing or the cardinality
estimates by the optimizer are incorrect.
E.
The database must be opened in restricted mode for the flashback to succeed.
E.
The database must be opened with the RESETLOGS option after the flashback is complete.
4 comments
F.
The database must be opened in read-only mode to check if the database has been flashed
back to the correct SCN.
Explanation:
B: The target database must be mounted with a current control file, that is, the
control file cannot be a backup or have been re-created.
D: You must OPEN RESETLOGS after running FLASHBACK DATABASE. If datafiles are not
flashed back because they are offline, then the RESETLOGS may fail with an error.
Note:
* RMAN uses flashback logs to undo changes to a point before the target time or SCN, and then
uses archived redo logs to recover the database forward to make it consistent. RMAN
automatically restores from backup any archived logs that are needed.
* SCN: System Change Number
* FLASHBACK DATABASE to One Hour Ago: Example
The following command flashes the database by 1/24 of a day, or one hour:
RMAN> FLASHBACK DATABASE TO TIMESTAMP (SYSDATE-1/24);
3 comments
Users who were using the old default tablespace will have their default tablespaces changed to
the MRKT tablespace.
D.
No more data files can be added to the tablespace.
E.
The relative file number of the tablespace is not stored in rowids for the table rows that are
stored in the MRKT tablespace.
Explanation:
Incorrect:
Not A: To create a bigfile tablespace, specify the BIGFILE keyword of the CREATE TABLESPACE
statement (CREATE BIGFILE TABLESPACE ). Oracle Database automatically creates a locally
managed tablespace with automatic segment space management.
You can specify SIZE in kilobytes (K), megabytes (M), gigabytes (G), or terabytes (T).
Not D: Although automatic segment space management is the default for all new permanent,
locally managed tablespaces, you can explicitly enable it with the SEGMENT SPACE
MANAGEMENT AUTO clause.
Decreasing the value of the IDLE_TIME resource limit in the default profile
Explanation:
An Oracle session is sniped when you set the IDLE_TIME parameter to disconnect
inactive sessions. (It is only like sniping on eBay in that a time is set for an action to occur.)
3 comments
Oracle has several ways to disconnect inactive or idle sessions, both from within SQL*Plus via
resources profiles (connect_time, idle_time), and with the SQL*net expire time parameter. Here
are two ways to disconnect an idle session:
Set the idle_time parameter in the user profile
Set the sqlnet.ora parameter expire_time
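A sketch of both approaches (profile and user names are hypothetical):

```sql
-- 1) Resource profile: snipe sessions idle for more than 30 minutes.
--    RESOURCE_LIMIT must be TRUE for profile limits to be enforced.
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
CREATE PROFILE idle_prof LIMIT IDLE_TIME 30;
ALTER USER scott PROFILE idle_prof;

-- 2) In sqlnet.ora on the server, probe clients every 10 minutes
--    (this detects dead client connections rather than merely idle ones):
--    SQLNET.EXPIRE_TIME = 10
```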
5 comments
You execute the following command to create a password file in the database server:
$ orapwd file=+DATA/PROD/orapwprod entries=5 ignorecase=N format=12
Which two statements are true about the password file?
A.
It records the usernames and passwords of users when granted the DBA role.
B.
It contains the usernames and passwords of users for whom auditing is enabled.
C.
Rebuilding an index using ALTER INDEX . . . REBUILD fails with an ORA-01578: ORACLE
data block corrupted (file # 14, block # 50) error.
Explanation:
The alert log is a chronological log of messages and errors, and includes the
following items:
*All internal errors (ORA-600), block corruption errors (ORA-1578), and deadlock errors (ORA-60)
that occur
* Administrative operations, such as CREATE, ALTER, and DROP statements and STARTUP,
SHUTDOWN, and ARCHIVELOG statements
* Messages and errors relating to the functions of shared server and dispatcher processes
* Errors occurring during the automatic refresh of a materialized view
* The values of all initialization parameters that had nondefault values at the time the database
and instance start
Note:
* The alert log file (also referred to as the ALERT.LOG) is a chronological log of messages and
errors written out by an Oracle Database. Typical messages found in this file include: database startup,
shutdown, log switches, space errors, etc. This file should constantly be monitored to detect
unexpected messages and corruptions.
No comments
Which three statements are true about Oracle Data Pump export and import operations?
Posted by seenagape on January 14, 2014
4 comments
Which three statements are true about Oracle Data Pump export and import operations?
A.
You can detach from a data pump export job and reattach later.
B.
Data pump uses parallel execution server processes to implement parallel import.
C.
Data pump import requires the import file to be in a directory owned by the oracle owner.
D.
The master table is the last object to be exported by the data pump.
E.
You can detach from a data pump import job and reattach later.
Explanation:
B: Data Pump can employ multiple worker processes, running in parallel, to
increase job performance.
D: For export jobs, the master table records the location of database objects within a dump file set.
/ Export builds and maintains the master table for the duration of the job. At the end of an export
job, the content of the master table is written to a file in the dump file set.
/ For import jobs, the master table is loaded from the dump file set and is used to control the
sequence of operations for locating objects that need to be imported into the target database.
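Because job state is kept in the master table, you can leave a job and reattach later. Running jobs can be listed from SQL first (the job name shown is hypothetical):

```sql
-- List Data Pump jobs that are candidates for reattaching:
SELECT owner_name, job_name, state
FROM   dba_datapump_jobs;

-- From the OS shell you could then reattach, for example:
--   $ expdp hr ATTACH=SYS_EXPORT_SCHEMA_01
```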
Which three statements are true about the users (other than sys) in the output?
Posted by seenagape on January 14, 2014
6 comments
Which three statements are true about the users (other than sys) in the output?
A.
The C##B_ADMIN user can perform all backup and recovery operations using RMAN only.
B.
The C##C_ADMIN user can perform the data guard operation with Data Guard Broker.
C.
The C##A_ADMIN user can perform wallet operations.
D.
The C##D_ADMIN user can perform backup and recovery operations for Automatic Storage
Management (ASM).
E.
The C##B_ADMIN user can perform all backup and recovery operations using RMAN or
SQL*Plus.
Explanation:
A: A user with the SYSBACKUP administrative privilege can perform backup and recovery operations.
B: SYSDG administrative privilege has ability to perform Data Guard operations (including startup
and shutdown) using Data Guard Broker or dgmgrl.
Incorrect:
Not C: SYSKM. SYSKM administrative privilege has ability to perform transparent data encryption
wallet operations.
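The three administrative privileges discussed are granted individually; a sketch using the user names from the question (assuming they are common users created in the root):

```sql
-- SYSBACKUP: backup and recovery operations (RMAN or SQL*Plus)
GRANT SYSBACKUP TO c##b_admin CONTAINER = ALL;
-- SYSDG: Data Guard operations, including startup and shutdown
GRANT SYSDG TO c##c_admin CONTAINER = ALL;
-- SYSKM: transparent data encryption wallet operations
GRANT SYSKM TO c##a_admin CONTAINER = ALL;
```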
Which two storage-tiering actions might be automated when using Information Lifecycle
Management (ILM) to automate data movement?
Posted by seenagape on January 14, 2014
No comments
In your database, the TBS_PERCENT_USED parameter is set to 60 and the TBS_PERCENT_FREE
parameter is set to 20.
Which two storage-tiering actions might be automated when using Information Lifecycle
Management (ILM) to automate data movement?
A.
The movement of all segments to a target tablespace with a higher degree of compression, on
a different storage tier, when the source tablespace exceeds TBS_PERCENT_USED
B.
The movement of some segments to a target tablespace with a higher degree of compression,
on a different storage tier, when the source tablespace exceeds TBS_PERCENT_USED
D.
Setting the target tablespace offline
E.
The movement of some blocks to a target tablespace with a lower degree of compression, on a
different storage tier, when the source tablespace exceeds TBS_PERCENT_USED
Explanation:
The value for TBS_PERCENT_USED specifies the percentage of the tablespace quota when a
tablespace is considered full. The value for TBS_PERCENT_FREE specifies the targeted free
percentage for the tablespace. When the percentage of the tablespace quota reaches the value of
TBS_PERCENT_USED, ADO begins to move data so that percent free of the tablespace quota
approaches the value of TBS_PERCENT_FREE. This action by ADO is a best effort and not a
guarantee.
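The thresholds and a tiering policy that would drive such movement might be sketched as follows (the table and tablespace names are hypothetical):

```sql
-- Set the ADO tablespace thresholds described above:
EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_USED, 60);
EXEC DBMS_ILM_ADMIN.CUSTOMIZE_ILM(DBMS_ILM_ADMIN.TBS_PERCENT_FREE, 20);

-- Move SALES segments to a lower-cost tablespace once the source
-- tablespace crosses TBS_PERCENT_USED; ADO moves data until free space
-- approaches TBS_PERCENT_FREE (best effort, as noted above):
ALTER TABLE sales ILM ADD POLICY TIER TO low_cost_ts;
```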
2 comments
The Oracle database automatically creates, deletes, and resizes flashback logs in the Fast
Recovery Area.
D.
Flashback Database can recover a database to the state that it was in before a reset logs
operation.
E.
Flashback Database can recover a data file that was dropped during the span of time of the
flashback.
F.
Flashback logs are used to restore to the blocks before images, and then the redo data may be
used to roll forward to the desired flashback time.
Explanation:
* Flashback Database uses its own logging mechanism, creating flashback logs and
storing them in the fast recovery area (C). You can only use Flashback Database if flashback logs
are available. To take advantage of this feature, you must set up your database in advance to
create flashback logs.
* To enable Flashback Database, you configure a fast recovery area and set a flashback retention
target. This retention target specifies how far back you can rewind a database with Flashback
Database.
From that time onwards, at regular intervals, the database copies images of each altered block in
every data file into the flashback logs. These block images can later be reused to reconstruct the
data file contents for any moment at which logs were captured. (F)
Incorrect:
Not E: You cannot use Flashback Database alone to retrieve a dropped data file. If you flash back
a database to a time when a dropped data file existed in the database, only the data file entry is
added to the control file. You can only recover the dropped data file by using RMAN to fully restore
and recover the data file.
Reference: Oracle Database Backup and Recovery Users Guide 12c R
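Flashback logging must be set up in advance, roughly as follows (sizes, paths, and the retention target are illustrative; the target is in minutes):

```sql
-- Configure a fast recovery area:
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 10G;
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST = '/u01/app/oracle/fra';

-- Retention target of 24 hours, then enable flashback logging:
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET = 1440;
ALTER DATABASE FLASHBACK ON;
```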
Which statement is true about Enterprise Manager (EM) express in Oracle Database 12c?
Posted by seenagape on January 14, 2014
Which statement is true about Enterprise Manager (EM) express in Oracle Database 12c?
A.
3 comments
5 comments
All DDL commands are logged in XML format in the alert directory under the Automatic
Diagnostic Repository (ADR) home.
Explanation:
* By default Oracle database does not log any DDL operations performed by any
user. The default settings for auditing only logs DML operations.
* Oracle 12c DDL Logging ENABLE_DDL_LOGGING
The first method is by using the enabling a DDL logging feature built into the database. By default
it is turned off and you can turn it on by setting the value of ENABLE_DDL_LOGGING initialization
parameter to true.
* We can turn it on using the following command. The parameter is dynamic and you can turn it
on/off on the go.
SQL> alter system set ENABLE_DDL_LOGGING=true;
System altered.
Elapsed: 00:00:00.05
SQL>
Once it is turned on, every DDL command will be logged in the alert log file and also the log.xml
file.
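To verify, one might check the parameter and issue some DDL (the table name is hypothetical):

```sql
SHOW PARAMETER enable_ddl_logging

CREATE TABLE ddl_demo (id NUMBER);
DROP TABLE ddl_demo;
-- Both statements are now recorded in the DDL log files under the
-- Automatic Diagnostic Repository (ADR) home.
```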
1 comment
3 comments
You are connected to a pluggable database (PDB) as a common user with DBA privileges.
The STATISTICS_LEVEL parameter is PDB_MODIFIABLE. You execute the following:
SQL> ALTER SYSTEM SET STATISTICS_LEVEL = ALL SID = '*' SCOPE = SPFILE;
Which is true about the result of this command?
A.
The STATISTICS_LEVEL parameter is set to all whenever this PDB is re-opened.
B.
The STATISTICS_LEVEL parameter is set to ALL whenever any PDB is reopened.
C.
The STATISTICS_LEVEL parameter is set to all whenever the multitenant container database
(CDB) is restarted.
D.
Nothing happens; because there is no SPFILE for each PDB, the statement is ignored.
Explanation:
Note:
* In a container architecture, the parameters for PDB will inherit from the root database. That
means if statistics_level=all in the root that will cascade to the PDB databases.
You can override this by using ALTER SYSTEM SET if that parameter is PDB modifiable; there is a new
column in V$SYSTEM_PARAMETER for the same.
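That column can be queried directly; for example:

```sql
-- ISPDB_MODIFIABLE shows whether a PDB may set the parameter locally:
SELECT name, ispdb_modifiable
FROM   v$system_parameter
WHERE  name = 'statistics_level';
```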
2 comments
EXECUTE privilege on the DBMS_FLASHBACK package must be granted to the user flashing
back the transaction.
D.
Supplemental logging must be enabled.
E.
Recycle bin must be enabled for the database.
F.
Block change tracking must be enabled for the database.
Explanation:
B: Specify the RETENTION GUARANTEE clause for the undo tablespace to ensure
that unexpired undo data is not discarded.
C: You must have the EXECUTE privilege on the DBMS_FLASHBACK package.
Note:
* Use Flashback Transaction to roll back a transaction and its dependent transactions while the
database remains online. This recovery operation uses undo data to create and run the
corresponding compensating transactions that return the affected data to its original state.
(Flashback Transaction is part of DBMS_FLASHBACK package.)
Reference: Oracle Database Advanced Application Developers Guide 11g, Using Oracle
Flashback Technology
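The two prerequisites (answers C and D) can be sketched as follows (the grantee name is hypothetical):

```sql
-- Supplemental logging; primary key logging is recommended for backout:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;

-- Privilege for the user performing the flashback:
GRANT EXECUTE ON DBMS_FLASHBACK TO app_admin;
```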
What happens if the CONTROLLER1 failure group becomes unavailable due to an error or for
maintenance?
Posted by seenagape on January 14, 2014
No comments
A database is stored in an Automatic Storage Management (ASM) disk group, DGROUP1, created with the following SQL:
There is enough free space in the disk group for mirroring to be done.
What happens if the CONTROLLER1 failure group becomes unavailable due to an error or for
maintenance?
A.
Transactions and queries accessing database objects contained in any tablespace stored in
DGROUP1 will fail.
B.
Mirroring of allocation units will be done to ASM disks in the CONTROLLER2 failure group until
the CONTROLLER1 failure group is brought back online.
C.
The data in the CONTROLLER1 failure group is copied to the CONTROLLER2 failure group and
rebalancing is initiated.
D.
ASM does not mirror any data until the CONTROLLER1 failure group is brought back online, and
newly allocated primary allocation units (AU) are stored in the CONTROLLER2 failure group, without
mirroring.
E.
Transactions accessing database objects contained in any tablespace stored in DGROUP1 will
fail but queries will succeed.
Explanation:
CREATE DISKGROUP NORMAL REDUNDANCY
* For Oracle ASM to mirror files, specify the redundancy level as NORMAL REDUNDANCY (2-way
mirroring by default for most file types) or HIGH REDUNDANCY (3-way mirroring for all files).
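A disk group like the one in the question might be created as follows (disk paths are hypothetical):

```sql
CREATE DISKGROUP dgroup1 NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/devices/c1d1', '/devices/c1d2'
  FAILGROUP controller2 DISK '/devices/c2d1', '/devices/c2d2';
-- With NORMAL redundancy each allocation unit is mirrored across the two
-- failure groups, so losing CONTROLLER1 leaves the copies in CONTROLLER2
-- available and the database stays up.
```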
1 comment
On your Oracle 12c database, you issue the following commands to create indexes:
SQL> CREATE INDEX oe.ord_customer_ix1 ON oe.orders (customers_id, sales_rep_id)
INVISIBLE;
SQL> CREATE BITMAP INDEX oe.ord_customer_ix2 ON oe.orders (customers_id, sales_rep_id);
Which two statements are correct?
A.
Both the indexes are created; however, only the ORD_CUSTOMER index is visible.
B.
The optimizer evaluates index access from both the indexes before deciding on which index to
use for query execution plan.
C.
Only the ORD_CUSTOMER_IX1 index is created.
D.
Only the ORD_CUSTOMER_IX2 index is created.
E.
Both the indexes are updated when a new row is inserted, updated, or deleted in the orders
table.
Explanation:
Oracle 11g introduced a feature called Invisible Indexes. An invisible index is invisible to the
optimizer by default. Using this feature we can test a new index without affecting the execution
plans of the existing SQL statements, or we can test the effect of dropping an index without dropping
it.
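Testing an invisible index as described might look like this:

```sql
-- The invisible index is still maintained on DML but ignored by the optimizer.
-- Consider it for the current session only while testing:
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;

-- Promote it permanently once testing succeeds:
ALTER INDEX oe.ord_customer_ix1 VISIBLE;
```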
Which two RMAN commands may be used to back up only the PDB1 pluggable database?
Posted by seenagape on January 14, 2014
Your multitenant container database has three pluggable databases (PDBs): PDB1, PDB2, and
PDB3.
Which two RMAN commands may be used to back up only the PDB1 pluggable database?
A.
1 comment
5 comments
1 comment
cause of performance degradation, you want to collect basic statistics such as the level of
parallelism, total database time, and the number of I/O requests for the ETL jobs.
How do you accomplish this?
A.
Examine the Active Session History (ASH) reports for the time period of the ETL or batch
reporting runs.
B.
Enable SQL tracing for the queries in the ETL and batch reporting queries and gather
diagnostic data from the trace file.
C.
Enable real-time SQL monitoring for ETL jobs and gather diagnostic data from the
V$SQL_MONITOR view.
D.