
Exam A

QUESTION 1
DRAG DROP

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named Sales that contains the following database tables: Customer, Order, and
Products. The Products table and the Order table are shown in the following diagram.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

Changes to the price of any product must be less than a 25 percent increase from the current price. The shipping
department must be notified about order and shipping details when an order is entered into the database.

You need to implement the appropriate table objects.

Which object should you use for each table? To answer, drag the appropriate objects to the correct tables.
Each object may be used once, more than once, or not at all. You may need to drag the split bar between
panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
The Products table needs a primary key constraint on the ProductID field.
The Orders table needs a foreign key constraint on the productID field, with a reference to the ProductID field
in the Products table.
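
As a minimal sketch of the two constraints (only the key columns named in the explanation are shown; any other columns in the real diagram are omitted, and the constraint names are assumptions):

```sql
-- Hypothetical sketch: constraint names and column types are assumptions.
CREATE TABLE Products (
    ProductID   int NOT NULL CONSTRAINT PK_Products PRIMARY KEY,
    ProductName varchar(100) NOT NULL
);

CREATE TABLE Orders (
    OrderID   int NOT NULL CONSTRAINT PK_Orders PRIMARY KEY,
    ProductID int NOT NULL
        CONSTRAINT FK_Orders_Products
        FOREIGN KEY REFERENCES Products (ProductID)  -- must match an existing product
);
```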

QUESTION 2
HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named Sales that contains the following database tables: Customer, Order, and
Products. The Products table and the Order table are shown in the following diagram.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

You need to implement a stored procedure that deletes a discontinued product from the Products table. You
identify the following requirements:

If an open order includes a discontinued product, the records for the product must not be deleted.
The stored procedure must return a custom error message if a product record cannot be deleted. The
message must identify the OrderID for the open order.

What should you do? To answer, select the appropriate Transact-SQL segments in the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Using TRY...CATCH in Transact-SQL

Errors in Transact-SQL code can be processed by using a TRY…CATCH construct.

TRY…CATCH can use the following error function to capture error information:
ERROR_MESSAGE() returns the complete text of the error message. The text includes the values supplied for
any substitutable parameters such as lengths, object names, or times.

References:
https://technet.microsoft.com/en-us/library/ms179296(v=sql.105).aspx
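
A minimal sketch of the pattern (the procedure name, severity, and the lookup of the open order are assumptions; in practice the CATCH block fires when a foreign key constraint blocks the delete):

```sql
-- Hypothetical sketch of the TRY...CATCH pattern described above.
CREATE PROCEDURE dbo.uspDeleteProduct
    @ProductID int
AS
BEGIN
    BEGIN TRY
        DELETE FROM dbo.Products WHERE ProductID = @ProductID;
    END TRY
    BEGIN CATCH
        -- ERROR_MESSAGE() would return the full text of the original error here.
        DECLARE @OrderID int =
            (SELECT TOP (1) OrderID FROM dbo.[Order] WHERE ProductID = @ProductID);
        -- %d is replaced by @OrderID; severity 16 marks a user-correctable error.
        RAISERROR ('Cannot delete product: open order %d references it.', 16, 1, @OrderID);
    END CATCH;
END;
```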

QUESTION 3
HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named Sales that contains the following database tables: Customer, Order, and
Products. The Products table and the Order table are shown in the following diagram.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

You need to create triggers that meet the following requirements:


Optimize the performance and data integrity of the tables.
Provide a custom error if a user attempts to create an order for a customer that does not exist.
In the Customers table, update the value for the last order placed.
Complete all actions as part of the original transaction.

In the table below, identify the trigger types that meet the requirements.
NOTE: Make only one selection in each column. Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
INSTEAD OF INSERT triggers can be defined on a view or table to replace the standard action of the INSERT
statement.
AFTER specifies that the DML trigger is fired only when all operations specified in the triggering SQL
statement have executed successfully.

References:
https://technet.microsoft.com/en-us/library/ms175089(v=sql.105).aspx
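
The two trigger types could be sketched as follows (column names beyond the scenario, such as LastOrderDate, are assumptions; both triggers run as part of the original transaction):

```sql
-- INSTEAD OF INSERT: replaces the insert so a custom error can be raised.
CREATE TRIGGER trgValidateOrderCustomer ON dbo.[Order]
INSTEAD OF INSERT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM inserted i
               WHERE NOT EXISTS (SELECT 1 FROM dbo.Customer c
                                 WHERE c.CustomerID = i.CustomerID))
        RAISERROR ('Cannot create an order for a customer that does not exist.', 16, 1);
    ELSE
        INSERT INTO dbo.[Order] SELECT * FROM inserted;
END;
GO
-- AFTER INSERT: fires only once the insert has succeeded.
CREATE TRIGGER trgUpdateLastOrder ON dbo.[Order]
AFTER INSERT
AS
    UPDATE c SET LastOrderDate = i.OrderDate
    FROM dbo.Customer c
    JOIN inserted i ON i.CustomerID = c.CustomerID;
```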

QUESTION 4
HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named Sales that contains the following database tables: Customer, Order, and
Products. The Products table and the Order table are shown in the following diagram.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

The Leads table must include the columns described in the following table.

The data types chosen must consume the least amount of storage possible.

You need to select the appropriate data types for the Leads table.

In the table below, identify the data type that must be used for each table column.

NOTE: Make only one selection in each column.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Bit is a Transact-SQL integer data type that can take a value of 1, 0, or NULL.
Smallint is a Transact-SQL integer data type that can take a value in the range from -32,768 to 32,767.
int, bigint, smallint, and tinyint (Transact-SQL)
Exact-number data types that use integer data.

References: https://msdn.microsoft.com/en-us/library/ms187745.aspx
https://msdn.microsoft.com/en-us/library/ms177603.aspx
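
A hedged sketch of the storage-minimizing choices (the column names are assumptions): smallint (2 bytes) comfortably covers ~20,000 rows, and bit is the smallest type for a flag:

```sql
-- Illustrative only: real column list depends on the requirements table (not shown).
CREATE TABLE Leads (
    LeadID      smallint NOT NULL,  -- 2 bytes; -32,768 to 32,767 covers ~20,000 rows
    IsConverted bit      NULL       -- holds 1, 0, or NULL; up to 8 bit columns share a byte
);
```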

QUESTION 5
HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named Sales that contains the following database tables: Customer, Order, and
Products. The Products table and the Order table are shown in the following diagram.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

You need to modify the database design to meet the following requirements:
Rows in the Orders table must always have a valid value for the ProductID column.
Rows in the Products table must not be deleted if they are part of any rows in the Orders table.
All rows in both tables must be unique.

In the table below, identify the constraint that must be configured for each table.

NOTE: Make only one selection in each column.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
A FOREIGN KEY in one table points to a PRIMARY KEY in another table. Here the foreign key constraint is
put on the ProductID column in the Orders table, and points to the ProductID column of the Products table.
With a PRIMARY KEY (or UNIQUE) constraint on ProductID we can ensure that the Products table contains only unique rows.

References: http://www.w3schools.com/sql/sql_foreignkey.asp
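
A sketch of the two constraints described above (the constraint names are assumptions):

```sql
-- Unique rows in Products: the primary key also guarantees uniqueness.
ALTER TABLE dbo.Products
    ADD CONSTRAINT PK_Products PRIMARY KEY (ProductID);

-- Orders must reference a valid ProductID, and referenced products cannot be deleted
-- (ON DELETE NO ACTION is the default behavior of a foreign key).
ALTER TABLE dbo.Orders WITH CHECK
    ADD CONSTRAINT FK_Orders_Products
    FOREIGN KEY (ProductID) REFERENCES dbo.Products (ProductID);
```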

QUESTION 6
DRAG DROP

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in the series.

You have a database named Sales that contains the following database tables: Customer, Order, and
Products. The Products table and the Order table are shown in the following diagram.

The Customer table includes a column that stores the date for the last order that the customer placed.
You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

You need to begin to modify the table design to adhere to third normal form.
Which column should you remove from each table? To answer, drag the appropriate column names to the
correct locations. Each column name may be used once, more than once, or not at all. You may need to drag
the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
In the Products table the SupplierName is dependent on the SupplierID, not on the ProductID.
In the Orders table the ProductName is dependent on the ProductID, not on the OrderID.

Note:
A table is in third normal form when the following conditions are met:
It is in second normal form.
All nonprimary fields are dependent on the primary key.

Second normal form states that a table must meet all the rules for first normal form and there must be no
partial dependencies of any of the columns on the primary key.

First normal form (1NF) sets the very basic rules for an organized database:
Define the data items required, because they become the columns in a table. Place related data items in a
table.
Ensure that there are no repeating groups of data.
Ensure that there is a primary key.

References: https://www.tutorialspoint.com/sql/third-normal-form.htm
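
The refactoring described above could be sketched like this (the Suppliers table and its columns are illustrative assumptions; SupplierName moves out of Products so it depends only on its own key):

```sql
-- Illustrative 3NF refactoring: SupplierName now depends only on SupplierID.
CREATE TABLE Suppliers (
    SupplierID   int NOT NULL PRIMARY KEY,
    SupplierName varchar(100) NOT NULL
);

-- Products keeps only SupplierID; the name is recovered with a join:
-- SELECT p.ProductID, s.SupplierName
-- FROM Products p JOIN Suppliers s ON s.SupplierID = p.SupplierID;
```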

QUESTION 7
HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience,
the scenario is repeated in each question. Each question presents a different goal and answer choices,
but the text of the scenario is exactly the same in each question in this series.

You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product,
and SalesPerson. The tables were created using the following Transact-SQL statements:

You must modify the ProductReview Table to meet the following requirements:
The table must reference the ProductID column in the Product table
Existing records in the ProductReview table must not be validated with the Product table.
Deleting records in the Product table must not be allowed if records are referenced by the ProductReview
table.
Changes to records in the Product table must propagate to the ProductReview table.

You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL
statements for these tables are not available.

You must modify the Orders table to meet the following requirements:
Create new rows in the table without granting INSERT permissions to the table.
Notify the sales person who places an order whether or not the order was completed.

You must add the following constraints to the SalesHistory table:


a constraint on the SaleID column that allows the field to be used as a record identifier
a constraint that uses the ProductID column to reference the Product column of the ProductTypes table
a constraint on the CategoryID column that allows one row with a null value in the column
a constraint that limits the SalePrice column to values greater than four

Finance department users must be able to retrieve data from the SalesHistory table for sales persons where
the value of the SalesYTD column is above a certain threshold.

You plan to create a memory-optimized table named SalesOrder. The table must meet the following
requirements:
The table must hold 10 million unique sales orders.
The table must use checkpoints to minimize I/O operations and must not use transaction logging.
Data loss is acceptable.

Performance for queries against the SalesOrder table that use Where clauses with exact equality operations
must be optimized.

You need to enable referential integrity for the ProductReview table.

How should you complete the relevant Transact-SQL statement? To answer, select the appropriate Transact-
SQL segments in the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: WITH NOCHECK


We should use WITH NOCHECK as existing records in the ProductReview table must not be validated with
the Product table.

Box 2: ON DELETE NO ACTION ON UPDATE CASCADE


Deletes should not be allowed, so we use ON DELETE NO ACTION.
Updates should propagate, so we use ON UPDATE CASCADE.

NO ACTION: the Database Engine raises an error, and the update action on the row in the parent table is
rolled back.
CASCADE: corresponding rows are updated in the referencing table when that row is updated in the parent
table.

Note: ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }


Specifies what action happens to rows in the table that is altered, if those rows have a referential relationship
and the referenced row is deleted from the parent table. The default is NO ACTION.

ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT }


Specifies what action happens to rows in the table altered when those rows have a referential relationship and
the referenced row is updated in the parent table. The default is NO ACTION.

Note: You must modify the ProductReview Table to meet the following requirements:

1. The table must reference the ProductID column in the Product table
2. Existing records in the ProductReview table must not be validated with the Product table.
3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview
table.
4. Changes to records in the Product table must propagate to the ProductReview table.

References: https://msdn.microsoft.com/en-us/library/ms190273.aspx
https://msdn.microsoft.com/en-us/library/ms188066.aspx
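
Put together, the completed statement could look like this (the constraint name is an assumption):

```sql
ALTER TABLE dbo.ProductReview WITH NOCHECK          -- skip validating existing rows
    ADD CONSTRAINT FK_ProductReview_Product
    FOREIGN KEY (ProductID) REFERENCES dbo.Product (ProductID)
    ON DELETE NO ACTION    -- block deletes of products that are still reviewed
    ON UPDATE CASCADE;     -- propagate ProductID changes to ProductReview
```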

QUESTION 8
HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product,
and SalesPerson. The tables were created using the following Transact-SQL statements:

You must modify the ProductReview Table to meet the following requirements:
The table must reference the ProductID column in the Product table
Existing records in the ProductReview table must not be validated with the Product table.
Deleting records in the Product table must not be allowed if records are referenced by the ProductReview
table.
Changes to records in the Product table must propagate to the ProductReview table.

You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL
statements for these tables are not available.

You must modify the Orders table to meet the following requirements:
Create new rows in the table without granting INSERT permissions to the table.
Notify the sales person who places an order whether or not the order was completed.

You must add the following constraints to the SalesHistory table:


a constraint on the SaleID column that allows the field to be used as a record identifier
a constraint that uses the ProductID column to reference the Product column of the ProductTypes table
a constraint on the CategoryID column that allows one row with a null value in the column
a constraint that limits the SalePrice column to values greater than four

Finance department users must be able to retrieve data from the SalesHistory table for sales persons where
the value of the SalesYTD column is above a certain threshold.

You plan to create a memory-optimized table named SalesOrder. The table must meet the following
requirements:
The table must hold 10 million unique sales orders.
The table must use checkpoints to minimize I/O operations and must not use transaction logging.
Data loss is acceptable.

Performance for queries against the SalesOrder table that use Where clauses with exact equality operations
must be optimized.

You need to create an object that allows finance users to be able to retrieve the required data. The object must
not have a negative performance impact.

How should you complete the Transact-SQL statements? To answer, select the appropriate Transact-SQL
segments in the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

A user-defined function can return a table, which can be produced by a SELECT statement.

From question: Finance department users must be able to retrieve data from the SalesHistory table for sales
persons where the value of the SalesYTD column is above a certain threshold.

Incorrect:
Not VIEW: The RETURN clause is not used when you create a view.

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-function-transact-sql?view=sql-server-2017
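
A minimal sketch of such an inline table-valued function (the function name, parameter, and the SalesPerson column names are assumptions, since the table definition is not reproduced here):

```sql
-- Hypothetical sketch: an inline TVF has no performance overhead of its own,
-- because the optimizer expands it like a view.
CREATE FUNCTION dbo.udfHighSales (@Threshold money)
RETURNS TABLE
AS
RETURN
    SELECT SalesPersonID, SalesYTD
    FROM dbo.SalesPerson
    WHERE SalesYTD > @Threshold;
-- Usage: SELECT * FROM dbo.udfHighSales(100000);
```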

QUESTION 9
Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product,
and SalesPerson. The tables were created using the following Transact-SQL statements:

You must modify the ProductReview Table to meet the following requirements:

1. The table must reference the ProductID column in the Product table
2. Existing records in the ProductReview table must not be validated with the Product table.
3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview
table.
4. Changes to records in the Product table must propagate to the ProductReview table.

You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL
statements for these tables are not available.

You must modify the Orders table to meet the following requirements:
1. Create new rows in the table without granting INSERT permissions to the table.
2. Notify the sales person who places an order whether or not the order was completed.

You must add the following constraints to the SalesHistory table:


- a constraint on the SaleID column that allows the field to be used as a record identifier
- a constraint that uses the ProductID column to reference the Product column of the ProductTypes table
- a constraint on the CategoryID column that allows one row with a null value in the column
- a constraint that limits the SalePrice column to values greater than four

Finance department users must be able to retrieve data from the SalesHistory table for sales persons where
the value of the SalesYTD column is above a certain threshold.

You plan to create a memory-optimized table named SalesOrder. The table must meet the following
requirements:

- The table must hold 10 million unique sales orders.


- The table must use checkpoints to minimize I/O operations and must not use transaction logging.
- Data loss is acceptable.

Performance for queries against the SalesOrder table that use Where clauses with exact equality operations
must be optimized.

You need to modify the design of the Orders table.

What should you create?

A. a stored procedure with the RETURN statement


B. a FOR UPDATE trigger
C. an AFTER UPDATE trigger
D. a user defined function

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:

Requirements: You must modify the Orders table to meet the following requirements:
1. Create new rows in the table without granting INSERT permissions to the table.
2. Notify the sales person who places an order whether or not the order was completed.

References: https://msdn.microsoft.com/en-us/library/ms186755.aspx

QUESTION 10
HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product,
and SalesPerson. The tables were created using the following Transact-SQL statements:

You must modify the ProductReview Table to meet the following requirements:

1. The table must reference the ProductID column in the Product table
2. Existing records in the ProductReview table must not be validated with the Product table.
3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview
table.
4. Changes to records in the Product table must propagate to the ProductReview table.

You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL
statements for these tables are not available.

You must modify the Orders table to meet the following requirements:
1. Create new rows in the table without granting INSERT permissions to the table.
2. Notify the sales person who places an order whether or not the order was completed.

You must add the following constraints to the SalesHistory table:


- a constraint on the SaleID column that allows the field to be used as a record identifier
- a constraint that uses the ProductID column to reference the Product column of the ProductTypes table
- a constraint on the CategoryID column that allows one row with a null value in the column
- a constraint that limits the SalePrice column to values greater than four

Finance department users must be able to retrieve data from the SalesHistory table for sales persons where
the value of the SalesYTD column is above a certain threshold.

You plan to create a memory-optimized table named SalesOrder. The table must meet the following
requirements:

- The table must hold 10 million unique sales orders.


- The table must use checkpoints to minimize I/O operations and must not use transaction logging.
- Data loss is acceptable.

Performance for queries against the SalesOrder table that use Where clauses with exact equality operations
must be optimized.

You need to update the SalesHistory table.

How should you complete the Transact-SQL statement? To answer, select the appropriate Transact-SQL
segments in the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Box 1: PRIMARY KEY
SaleID must be the primary key, as a constraint on the SaleID column that allows the field to be used as a
record identifier is required.

Box 2: CHECK
A constraint that limits the SalePrice column to values greater than four.

Box 3: UNIQUE
A constraint on the CategoryID column that allows one row with a null value in the column.

Box 4: FOREIGN KEY
A foreign key constraint must be put on the ProductID column referencing the ProductTypes table, as a constraint that
uses the ProductID column to reference the Product column of the ProductTypes table is required.

Note: Requirements are:
You must add the following constraints to the SalesHistory table:
- a constraint on the SaleID column that allows the field to be used as a record identifier
- a constraint that uses the ProductID column to reference the Product column of the ProductTypes table
- a constraint on the CategoryID column that allows one row with a null value in the column
- a constraint that limits the SalePrice column to values greater than four
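
The four constraints could be sketched in one statement (the constraint names are assumptions; a SQL Server UNIQUE constraint permits exactly one NULL, matching the CategoryID requirement):

```sql
-- Illustrative sketch only; the SalesHistory column types are not shown in this dump.
ALTER TABLE dbo.SalesHistory
    ADD CONSTRAINT PK_SalesHistory PRIMARY KEY (SaleID),      -- record identifier
        CONSTRAINT CK_SalePrice CHECK (SalePrice > 4),        -- values greater than four
        CONSTRAINT UQ_CategoryID UNIQUE (CategoryID),         -- allows a single NULL row
        CONSTRAINT FK_SalesHistory_ProductTypes
            FOREIGN KEY (ProductID) REFERENCES dbo.ProductTypes (Product);
```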

QUESTION 11
HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product,
and SalesPerson. The tables were created using the following Transact-SQL statements:

You must modify the ProductReview Table to meet the following requirements:

1. The table must reference the ProductID column in the Product table
2. Existing records in the ProductReview table must not be validated with the Product table.

3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview
table.
4. Changes to records in the Product table must propagate to the ProductReview table.

You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL
statements for these tables are not available.

You must modify the Orders table to meet the following requirements:
1. Create new rows in the table without granting INSERT permissions to the table.
2. Notify the sales person who places an order whether or not the order was completed.

You must add the following constraints to the SalesHistory table:


- a constraint on the SaleID column that allows the field to be used as a record identifier
- a constraint that uses the ProductID column to reference the Product column of the ProductTypes table
- a constraint on the CategoryID column that allows one row with a null value in the column
- a constraint that limits the SalePrice column to values greater than four

Finance department users must be able to retrieve data from the SalesHistory table for sales persons where
the value of the SalesYTD column is above a certain threshold.

You plan to create a memory-optimized table named SalesOrder. The table must meet the following
requirements:

- The table must hold 10 million unique sales orders.


- The table must use checkpoints to minimize I/O operations and must not use transaction logging.
- Data loss is acceptable.

Performance for queries against the SalesOrder table that use Where clauses with exact equality operations
must be optimized.

You need to create the SalesOrder table.

How should you complete the table definition? To answer, select the appropriate Transact-SQL segments in
the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Box 1: NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000)
A hash index is preferable over a nonclustered index when queries test the indexed columns by use of a
WHERE clause with an exact equality on all index key columns. We should use a bucket count of 10 million.

Box 2: SCHEMA_ONLY
Durability: The value of SCHEMA_AND_DATA indicates that the table is durable, meaning that changes are
persisted on disk and survive restart or failover. SCHEMA_AND_DATA is the default value.
The value of SCHEMA_ONLY indicates that the table is non-durable. The table schema is persisted but any
data updates are not persisted upon a restart or failover of the database. DURABILITY=SCHEMA_ONLY is
only allowed with MEMORY_OPTIMIZED=ON.

References:
https://msdn.microsoft.com/en-us/library/mt670614.aspx
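
The completed definition could be sketched as follows (the column list is an assumption, since the answer-area image is not reproduced; the index and durability options follow the explanation above):

```sql
-- Illustrative sketch of the memory-optimized SalesOrder table.
CREATE TABLE dbo.SalesOrder (
    SalesOrderID int NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 10000000),
    OrderDate    datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_ONLY);  -- no transaction logging; data loss is acceptable
```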

QUESTION 12
HOTSPOT

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product,
and SalesPerson. The tables were created using the following Transact-SQL statements:

You must modify the ProductReview Table to meet the following requirements:

1. The table must reference the ProductID column in the Product table
2. Existing records in the ProductReview table must not be validated with the Product table.
3. Deleting records in the Product table must not be allowed if records are referenced by the ProductReview
table.
4. Changes to records in the Product table must propagate to the ProductReview table.

You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL
statements for these tables are not available.

You must modify the Orders table to meet the following requirements:
1. Create new rows in the table without granting INSERT permissions to the table.
2. Notify the sales person who places an order whether or not the order was completed.

You must add the following constraints to the SalesHistory table:


- a constraint on the SaleID column that allows the field to be used as a record identifier
- a constraint that uses the ProductID column to reference the Product column of the ProductTypes table
- a constraint on the CategoryID column that allows one row with a null value in the column
- a constraint that limits the SalePrice column to values greater than four

Finance department users must be able to retrieve data from the SalesHistory table for sales persons where
the value of the SalesYTD column is above a certain threshold.

You plan to create a memory-optimized table named SalesOrder. The table must meet the following
requirements:

- The table must hold 10 million unique sales orders.


- The table must use checkpoints to minimize I/O operations and must not use transaction logging.
- Data loss is acceptable.

Performance for queries against the SalesOrder table that use Where clauses with exact equality operations
must be optimized.

You need to create a stored procedure named spDeleteCategory to delete records in the database. The stored
procedure must meet the following requirements:

1. Delete records in both the BlogEntry and BlogCategory tables where CategoryId equals parameter
@CategoryId.
2. Avoid locking the entire table when deleting records from the BlogCategory table.
3. If an error occurs during a delete operation on either table, all changes must be rolled back, otherwise all
changes should be committed.

How should you complete the procedure? To answer, select the appropriate Transact-SQL segments in the
answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: SET TRANSACTION ISOLATION LEVEL READ COMMITTED


You can minimize locking contention while protecting transactions from dirty reads of uncommitted data
modifications by using either of the following:
* The READ COMMITTED isolation level with the READ_COMMITTED_SNAPSHOT database option set ON.
* The SNAPSHOT isolation level.
Because the ROWLOCK table hint is used here, READ COMMITTED is the appropriate isolation level.

Box 2: ROWLOCK
Requirement: Avoid locking the entire table when deleting records from the BlogCategory table
ROWLOCK specifies that row locks are taken when page or table locks are ordinarily taken. When specified in
transactions operating at the SNAPSHOT isolation level, row locks are not taken unless ROWLOCK is
combined with other table hints that require locks, such as UPDLOCK and HOLDLOCK.

Box 3: COMMIT
Box 4: ROLLBACK
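
Assembled from the boxes above, the completed procedure might look like the following sketch (the schema prefix and exact parameter list are assumptions; the table names come from the scenario):

```sql
CREATE PROCEDURE spDeleteCategory
    @CategoryId INT
AS
BEGIN
    -- Box 1: READ COMMITTED protects against dirty reads with minimal contention
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    BEGIN TRY
        BEGIN TRANSACTION;
        DELETE FROM dbo.BlogEntry
        WHERE CategoryId = @CategoryId;
        -- Box 2: ROWLOCK avoids locking the entire BlogCategory table
        DELETE FROM dbo.BlogCategory WITH (ROWLOCK)
        WHERE CategoryId = @CategoryId;
        COMMIT TRANSACTION;        -- Box 3: both deletes succeeded
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- Box 4: undo changes to both tables on error
    END CATCH
END;
```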

QUESTION 13
DRAG DROP

You are analyzing the performance of a database environment.
Applications that access the database are experiencing locks that are held for a large amount of time. You are
experiencing isolation phenomena such as dirty, nonrepeatable and phantom reads.

You need to identify the impact of specific transaction isolation levels on the concurrency and consistency of
data.
What are the consistency and concurrency implications of each transaction isolation level? To answer, drag
the appropriate isolation levels to the correct locations. Each isolation level may be used once, more than
once, or not at all. You may need to drag the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Read Uncommitted (aka dirty read): A transaction T1 executing under this isolation level can access data
changed by concurrent transaction(s).
Pros: No read locks needed to read data (i.e. no reader/writer blocking). Note, T1 still takes transaction
duration locks for any data modified.
Cons: Data is not guaranteed to be transactionally consistent.

Read Committed: A transaction T1 executing under this isolation level can only access committed data.
Pros: Good compromise between concurrency and consistency.
Cons: Locking and blocking. The data can change when accessed multiple times within the same transaction.

Repeatable Read: A transaction T1 executing under this isolation level can only access committed data with
an additional guarantee that any data read cannot change (i.e. it is repeatable) for the duration of the
transaction.
Pros: Higher data consistency.
Cons: Locking and blocking. The S locks are held for the duration of the transaction that can lower the
concurrency. It does not protect against phantom rows.

Serializable: A transaction T1 executing under this isolation level provides the highest data consistency
including elimination of phantoms but at the cost of reduced concurrency. It prevents phantoms by taking a
range lock or table level lock if range lock can’t be acquired (i.e. no index on the predicate column) for the
duration of the transaction.
Pros: Full data consistency including phantom protection.
Cons: Locking and blocking. The S locks are held for the duration of the transaction that can lower the
concurrency.
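
For reference, the isolation level is set per session before the transaction begins. A minimal sketch illustrating the repeatable-read guarantee (the table and predicate are hypothetical):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    -- The S locks taken here are held until COMMIT, so rereading the same
    -- rows returns the same values; phantom rows can still appear, though.
    SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42;
    SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = 42;
COMMIT TRANSACTION;
```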

References:
https://blogs.msdn.microsoft.com/sqlcat/2011/02/20/concurrency-series-basics-of-transaction-isolation-levels/

QUESTION 14
DRAG DROP

You are evaluating the performance of a database environment.


You must avoid unnecessary locks and ensure that lost updates do not occur.
You need to choose the transaction isolation level for each data scenario.
Which isolation level should you use for each scenario? To answer, drag the appropriate isolation levels to the
correct scenarios. Each isolation may be used once, more than once, or not at all. You may need to drag the
split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Box 1: Read committed
Read Committed: A transaction T1 executing under this isolation level can only access committed data.
Pros: Good compromise between concurrency and consistency.
Cons: Locking and blocking. The data can change when accessed multiple times within the same transaction.

Box 2: Read Uncommitted


Read Uncommitted (aka dirty read): A transaction T1 executing under this isolation level can access data
changed by concurrent transaction(s).
Pros: No read locks needed to read data (i.e. no reader/writer blocking). Note, T1 still takes transaction
duration locks for any data modified.
Cons: Data is not guaranteed to be transactionally consistent.

Box 3: Serializable
Serializable: A transaction T1 executing under this isolation level provides the highest data consistency
including elimination of phantoms but at the cost of reduced concurrency. It prevents phantoms by taking a
range lock or table level lock if range lock can’t be acquired (i.e. no index on the predicate column) for the
duration of the transaction.
Pros: Full data consistency including phantom protection.
Cons: Locking and blocking. The S locks are held for the duration of the transaction that can lower the
concurrency.

References:
https://blogs.msdn.microsoft.com/sqlcat/2011/02/20/concurrency-series-basics-of-transaction-isolation-levels/

QUESTION 15
DRAG DROP

You have two database tables. Table1 is a partitioned table and Table2 is a nonpartitioned table.
Users report that queries take a long time to complete. You monitor queries by using Microsoft SQL Server
Profiler. You observe lock escalation for Table1 and Table2.
You need to allow escalation of Table1 locks to the partition level and prevent all lock escalation for Table2.
Which Transact-SQL statement should you run for each table? To answer, drag the appropriate Transact-SQL
statements to the correct tables. Each command may be used once, more than once, or not at all. You may
need to drag the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:
Since SQL Server 2008 you can also control how SQL Server performs the Lock Escalation – through the
ALTER TABLE statement and the property LOCK_ESCALATION. There are 3 different options available:

TABLE
AUTO
DISABLE

Box 1: Table1, Auto


The default option is TABLE, means that SQL Server *always* performs the Lock Escalation to the table level
–even when the table is partitioned. If you have your table partitioned, and you want to have a Partition Level
Lock Escalation (because you have tested your data access pattern, and you don’t cause deadlocks with it),
then you can change the option to AUTO. AUTO means that the Lock Escalation is performed to the partition
level, if the table is partitioned, and otherwise to the table level.

Box 2: Table 2, DISABLE

With the option DISABLE you can completely disable the Lock Escalation for that specific table.

For partitioned tables, use the LOCK_ESCALATION option of ALTER TABLE to escalate locks to the HoBT
level instead of the table or to disable lock escalation.
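
The two settings described can be applied with ALTER TABLE; a sketch using the scenario's table names (the schema prefix is an assumption):

```sql
-- Table1 is partitioned: AUTO escalates to the partition (HoBT) level
ALTER TABLE dbo.Table1 SET (LOCK_ESCALATION = AUTO);

-- Table2: DISABLE prevents escalation to a table lock
ALTER TABLE dbo.Table2 SET (LOCK_ESCALATION = DISABLE);
```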

References:
http://www.sqlpassion.at/archive/2014/02/25/lock-escalations/

QUESTION 16
DRAG DROP

You have a database that contains three encrypted stored procedures named dbo.Proc1, dbo.Proc2 and
dbo.Proc3. The stored procedures include INSERT, UPDATE, DELETE and BACKUP DATABASE statements.

You have the following requirements:

- You must run all the stored procedures within the same transaction.
- You must automatically start a transaction when stored procedures include DML statements.
- You must not automatically start a transaction when stored procedures include DDL statements.

You need to run all three stored procedures.

Which four Transact-SQL segments should you use to develop the solution? To answer, move the appropriate
Transact-SQL segments to the answer area and arrange them in the correct order.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Note:
Implicit transaction mode remains in effect until the connection executes a SET IMPLICIT_TRANSACTIONS
OFF statement, which returns the connection to autocommit mode. In autocommit mode, all individual
statements are committed if they complete successfully.

When a connection is in implicit transaction mode and the connection is not currently in a transaction,
executing any of the following statements starts a transaction:
ALTER TABLE (DDL)
FETCH
REVOKE
BEGIN TRANSACTION
GRANT
SELECT
CREATE (DDL)
INSERT
TRUNCATE TABLE
DELETE (DML)
OPEN
UPDATE (DML)
DROP (DDL)

Note 2: XACT_STATE returns the following values.


1 The current request has an active user transaction. The request can perform any actions, including writing
data and committing the transaction. The transaction is committable.
-1 The current request has an active user transaction, but an error has occurred that has caused the
transaction to be classified as an uncommittable transaction. The transaction is uncommittable and should be
rolled back.
0 There is no active user transaction for the current request. A commit or rollback operation would generate an
error.
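
A sketch of how the four segments might fit together under these rules (it assumes the procedures contain the DML and BACKUP statements described in the scenario):

```sql
SET IMPLICIT_TRANSACTIONS ON;  -- DML (INSERT/UPDATE/DELETE) will start a transaction

EXEC dbo.Proc1;
EXEC dbo.Proc2;
EXEC dbo.Proc3;                -- BACKUP DATABASE does not start an implicit transaction

IF XACT_STATE() = 1
    COMMIT TRANSACTION;        -- active and committable
ELSE IF XACT_STATE() = -1
    ROLLBACK TRANSACTION;      -- active but uncommittable: must roll back
```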

References:
https://technet.microsoft.com/en-us/library/ms187807(v=sql.105).aspx
https://technet.microsoft.com/en-us/library/ms189797(v=sql.110).aspx

QUESTION 17
HOTSPOT

You are profiling a frequently used database table named UserEvents. The READ_COMMITTED_SNAPSHOT
database option is set to OFF.

In the trace results, you observe that lock escalation occurred for one stored procedure even though the
number of locks in the database did not exceed memory or configuration thresholds. Events details are
provided in the following table:

You need to modify the uspDeleteEvents stored procedure to avoid lock escalation.
How should you modify the stored procedure? To answer, select the appropriate Transact-SQL segments in
the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Delete up to 4000 rows at a time. Keep doing it until all rows have been deleted.
Note that @@ROWCOUNT returns the number of rows affected by the last statement.
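
The batching pattern described above can be sketched as follows; the filter predicate is an assumption, since the procedure body appears only in the exhibit:

```sql
DECLARE @RowsDeleted INT = 1;
WHILE @RowsDeleted > 0
BEGIN
    -- Staying well under the ~5,000-lock escalation threshold per statement
    DELETE TOP (4000) FROM dbo.UserEvents
    WHERE EventDate < DATEADD(MONTH, -6, GETDATE());

    SET @RowsDeleted = @@ROWCOUNT;  -- rows affected by the last DELETE
END;
```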

References:
https://msdn.microsoft.com/en-us/library/ms187316.aspx

QUESTION 18
You have a database that is experiencing deadlock issues when users run queries.
You need to ensure that all deadlocks are recorded in XML format.
What should you do?

A. Create a Microsoft SQL Server Integration Services package that uses sys.dm_tran_locks.
B. Enable trace flag 1224 by using the Database Consistency Checker (DBCC).
C. Enable trace flag 1222 in the startup options for Microsoft SQL Server.
D. Use the Microsoft SQL Server Profiler Lock:Deadlock event class.

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
Explanation:
When deadlocks occur, trace flag 1204 and trace flag 1222 return information that is captured in the SQL
Server error log. Trace flag 1204 reports deadlock information formatted by each node involved in the
deadlock. Trace flag 1222 formats deadlock information, first by processes and then by resources.

The output format for Trace Flag 1222 only returns information in an XML-like format.
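
Adding -T1222 to the SQL Server startup parameters is the persistent way to enable the flag; it can also be turned on globally at runtime:

```sql
DBCC TRACEON (1222, -1);   -- -1 enables the flag globally, not just for this session
DBCC TRACESTATUS (1222);   -- confirm the flag is active
```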

References:
https://technet.microsoft.com/en-us/library/ms178104(v=sql.105).aspx

QUESTION 19
You are developing an application that connects to a database.
The application runs the following jobs:

The READ_COMMITTED_SNAPSHOT database option is set to OFF, and autocommit is set to ON. Within
the stored procedures, no explicit transactions are defined.
If JobB starts before JobA, it can finish in seconds. If JobA starts first, JobB takes a long time to complete.
You need to use Microsoft SQL Server Profiler to determine whether the blocking that you observe in JobB is
caused by locks acquired by JobA.

Which trace event class in the Locks event category should you use?

A. LockAcquired
B. LockCancel
C. LockDeadlock
D. LockEscalation

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The Lock:Acquired event class indicates that acquisition of a lock on a resource, such as a data page, has been
achieved.
The Lock:Acquired and Lock:Released event classes can be used to monitor when objects are being locked,
the type of locks taken, and for how long the locks were retained. Locks retained for long periods of time may
cause contention issues and should be investigated.

QUESTION 20
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a database named dbReporting. Users run a large number of read-only ad hoc queries against
the database. The application and all ad hoc queries use default database transaction isolation levels. You set
the value of the READ_COMMITTED_SNAPSHOT database option to ON.

You have an application that updates 10 tables sequentially and modifies a large volume of records in a single
transaction. The updates are isolated from each other.

Users report an error which indicates that the version store is full.

You need to reduce the number of occurrences of the error.

Solution: You increase the maximum database size for the tempdb database.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
References:
https://docs.microsoft.com/en-us/sql/relational-databases/databases/tempdb-database?view=sql-server-2017
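
The version store used by READ_COMMITTED_SNAPSHOT lives in tempdb, so the fix is applied there; a hedged sketch (the logical file name and sizes are assumptions):

```sql
-- tempdev is the default logical name of the primary tempdb data file
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 10240MB, FILEGROWTH = 512MB, MAXSIZE = UNLIMITED);
```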

QUESTION 21
Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named DB1 that contains the following tables: Customer, CustomerToAccountBridge,
and CustomerDetails. The three tables are part of the Sales schema. The database also contains a schema
named Website. You create the Customer table by running the following Transact-SQL statement:

The value of the CustomerStatus column is equal to one for active customers. The value of the
Account1Status and Account2Status columns are equal to one for active accounts. The following table
displays selected columns and rows from the Customer table.

You plan to create a view named Website.Customer and a view named Sales.FemaleCustomers.
Website.Customer must meet the following requirements:
1. Allow users access to the CustomerName and CustomerNumber columns for active customers.
2. Allow changes to the columns that the view references. Modified data must be visible through the view.
3. Prevent the view from being published as part of Microsoft SQL Server replication.
Sales.Female.Customers must meet the following requirements:
1. Allow users access to the CustomerName, Address, City, State and PostalCode columns.
2. Prevent changes to the columns that the view references.
3. Only allow updates through the views that adhere to the view filter.

You have the following stored procedures: spDeleteCustAcctRelationship and spUpdateCustomerSummary.


The spUpdateCustomerSummary stored procedure was created by running the following Transact-SQL
statement:

You run the spUpdateCustomerSummary stored procedure to make changes to customer account summaries.
Other stored procedures call the spDeleteCustAcctRelationship to delete records from the
CustomerToAccountBridge table.

You must update the design of the Customer table to meet the following requirements.
1. You must be able to store up to 50 accounts for each customer.
2. Users must be able to retrieve customer information by supplying an account number.
3. Users must be able to retrieve an account number by supplying customer information.
You need to implement the design changes while minimizing data redundancy.
What should you do?

A. Split the table into three separate tables. Include the AccountNumber and CustomerID columns in the first
table. Include the CustomerName and Gender columns in the second table. Include the AccountStatus
column in the third table.
B. Split the table into two separate tables. Include AccountNumber, CustomerID, CustomerName and Gender
columns in the first table. Include the AccountNumber and AccountStatus columns in the second table.
C. Split the table into two separate tables, Include the CustomerID and AccountNumber columns in the first
table. Include the AccountNumber, AccountStatus, CustomerName and Gender columns in the second
table.
D. Split the table into two separate tables, Include the CustomerID, CustomerName and Gender columns in
the first table. Include AccountNumber, AccountStatus and CustomerID columns in the second table.

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Two tables are enough. CustomerID must be in both tables.

QUESTION 22
DRAG DROP

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named DB1 that contains the following tables: Customer, CustomerToAccountBridge,
and CustomerDetails. The three tables are part of the Sales schema. The database also contains a schema
named Website. You create the Customer table by running the following Transact-SQL statement:

The value of the CustomerStatus column is equal to one for active customers. The value of the
Account1Status and Account2Status columns are equal to one for active accounts. The following table
displays selected columns and rows from the Customer table.

You plan to create a view named Website.Customer and a view named Sales.FemaleCustomers.
Website.Customer must meet the following requirements:
1. Allow users access to the CustomerName and CustomerNumber columns for active customers.
2. Allow changes to the columns that the view references. Modified data must be visible through the view.
3. Prevent the view from being published as part of Microsoft SQL Server replication.
Sales.Female.Customers must meet the following requirements:
1. Allow users access to the CustomerName, Address, City, State and PostalCode columns.
2. Prevent changes to the columns that the view references.
3. Only allow updates through the views that adhere to the view filter.

You have the following stored procedures: spDeleteCustAcctRelationship and spUpdateCustomerSummary.


The spUpdateCustomerSummary stored procedure was created by running the following Transact-SQL
statement:

The following table displays selected columns and rows from the Customer table. The value of the
CustomerStatus column is equal to one for active customers. The value of the Account1Status and
Account2Status columns are equal to one for active accounts.
You run the spUpdateCustomerSummary stored procedure to make changes to customer account summaries.
Other stored procedures call the spDeleteCustAcctRelationship to delete records from the
CustomerToAccountBridge table.

Users report that the following SELECT statement statement takes a long time to complete:

You need to create an index that meets the following requirements:


1. Improve the performance of the SELECT statement.
2. Requires the minimum index key size.
3. Only contains active customers.
4. Makes no change to the existing primary key.
5. Contains all of the columns required by the SELECT statement.

Which three Transact_SQL segments should you use to develop the solution? To answer, move the
appropriate code blocks from the list of code blocks to the answer area and arrange them in the correct order.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Box 1: Clustered Index
With the same size of keys, the nonclustered indexes need more space than clustered indexes.

Box 2, Box 3:
Include the CustomerStatus column in the index, and filter to only the rows where CustomerStatus is equal to 1
(the active customers).
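
A filtered index covering the requirements might be sketched as follows (note that a filtered index must be nonclustered in SQL Server; the key and included columns are assumptions, since the SELECT statement appears only in the exhibit):

```sql
CREATE NONCLUSTERED INDEX IX_Customer_Active
ON Sales.Customer (CustomerName)     -- minimal key size
INCLUDE (CustomerNumber)             -- cover the SELECT without widening the key
WHERE CustomerStatus = 1;            -- only active customers
```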

References:
http://www.sqlserverlogexplorer.com/overview-of-cluster-and-noncluster-index/

QUESTION 23
DRAG DROP

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named DB1 that contains the following tables: Customer, CustomerToAccountBridge,
and CustomerDetails. The three tables are part of the Sales schema. The database also contains a schema
named Website. You create the Customer table by running the following Transact-SQL statement:

The value of the CustomerStatus column is equal to one for active customers. The value of the
Account1Status and Account2Status columns are equal to one for active accounts. The following table
displays selected columns and rows from the Customer table.

You plan to create a view named Website.Customer and a view named Sales.FemaleCustomers.
Website.Customer must meet the following requirements:
1. Allow users access to the CustomerName and CustomerNumber columns for active customers.
2. Allow changes to the columns that the view references. Modified data must be visible through the view.
3. Prevent the view from being published as part of Microsoft SQL Server replication.

Sales.Female.Customers must meet the following requirements:


1. Allow users access to the CustomerName, Address, City, State and PostalCode columns.
2. Prevent changes to the columns that the view references.
3. Only allow updates through the views that adhere to the view filter.

You have the following stored procedures: spDeleteCustAcctRelationship and spUpdateCustomerSummary.


The spUpdateCustomerSummary stored procedure was created by running the following Transact-SQL
statement:

You run the spUpdateCustomerSummary stored procedure to make changes to customer account summaries.
Other stored procedures call the spDeleteCustAcctRelationship to delete records from the
CustomerToAccountBridge table.

When a procedure calls spDeleteCustAcctRelationship, if the calling stored procedure has already started an
active transaction, all the deletions made by the spDeleteCustAcctRelationship stored procedure must be
committed by the caller; otherwise changes must be committed within the spDeleteCustAcctRelationship
stored procedure.
If any error occurs during the delete operation, only the deletes made by the spDeleteCustAcctRelationship
stored procedure must be rolled back and the status must be updated.

You need to complete the stored procedure to ensure all the requirements are met.

How should you complete the procedure? To answer, drag the Transact-SQL segments to the correct location.
Each transact-SQL segment may be used once, more than once or not at all. You may need to drag the split
bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Savepoints offer a mechanism to roll back portions of transactions. You create a savepoint using the SAVE
TRANSACTION savepoint_name statement. Later, you execute a ROLLBACK TRANSACTION
savepoint_name statement to roll back to the savepoint instead of rolling back to the start of the transaction.
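
Applied to spDeleteCustAcctRelationship, the savepoint pattern might look roughly like this (the parameter and predicate are assumptions; @@TRANCOUNT decides whether the caller already owns a transaction):

```sql
CREATE PROCEDURE spDeleteCustAcctRelationship
    @CustomerId INT
AS
BEGIN
    DECLARE @StartedHere BIT = 0;
    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRANSACTION;            -- no caller transaction: this procedure owns it
        SET @StartedHere = 1;
    END
    ELSE
        SAVE TRANSACTION DeleteSave;  -- caller owns it: mark a rollback point

    BEGIN TRY
        DELETE FROM Sales.CustomerToAccountBridge
        WHERE CustomerId = @CustomerId;

        IF @StartedHere = 1
            COMMIT TRANSACTION;       -- otherwise the caller commits
    END TRY
    BEGIN CATCH
        IF @StartedHere = 1
            ROLLBACK TRANSACTION;
        ELSE
            ROLLBACK TRANSACTION DeleteSave;  -- undo only this procedure's deletes
    END CATCH
END;
```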

References:
https://technet.microsoft.com/en-us/library/ms178157(v=sql.105).aspx

QUESTION 24
DRAG DROP

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named DB1 that contains the following tables: Customer, CustomerToAccountBridge,
and CustomerDetails. The three tables are part of the Sales schema. The database also contains a schema
named Website. You create the Customer table by running the following Transact-SQL statement:

The value of the CustomerStatus column is equal to one for active customers. The value of the
Account1Status and Account2Status columns are equal to one for active accounts. The following table
displays selected columns and rows from the Customer table.

You plan to create a view named Website.Customer and a view named Sales.FemaleCustomers.
Website.Customer must meet the following requirements:
1. Allow users access to the CustomerName and CustomerNumber columns for active customers.
2. Allow changes to the columns that the view references. Modified data must be visible through the view.
3. Prevent the view from being published as part of Microsoft SQL Server replication.
Sales.Female.Customers must meet the following requirements:
1. Allow users access to the CustomerName, Address, City, State and PostalCode columns.
2. Prevent changes to the columns that the view references.
3. Only allow updates through the views that adhere to the view filter.

You have the following stored procedures: spDeleteCustAcctRelationship and spUpdateCustomerSummary.


The spUpdateCustomerSummary stored procedure was created by running the following Transact-SQL
statement:

You run the spUpdateCustomerSummary stored procedure to make changes to customer account summaries.
Other stored procedures call the spDeleteCustAcctRelationship to delete records from the
CustomerToAccountBridge table.

You need to create Website Customer.


How should you complete the view definition? To answer, drag the appropriate Transact-SQL segments to the
correct locations, Each Transact-SQL segment may be used once, more than once or not at all. You may need
to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Box 1: WITH ENCRYPTION
Using WITH ENCRYPTION prevents the view from being published as part of SQL Server replication.

Box 2: WITH CHECK OPTION


CHECK OPTION forces all data modification statements executed against the view to follow the criteria set
within select_statement. When a row is modified through a view, the WITH CHECK OPTION makes sure the
data remains visible through the view after the modification is committed.

Note: Website.Customer must meet the following requirements:


1. Allow users access to the CustomerName and CustomerNumber columns for active customers.
2. Allow changes to the columns that the view references. Modified data must be visible through the view.
3. Prevent the view from being published as part of Microsoft SQL Server replication.
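
Combining the two options, the view definition would look roughly like the following sketch (the WHERE predicate is an assumption based on the active-customer requirement):

```sql
CREATE VIEW Website.Customer
WITH ENCRYPTION             -- prevents the view from being published via replication
AS
SELECT CustomerName, CustomerNumber
FROM Sales.Customer
WHERE CustomerStatus = 1    -- active customers only
WITH CHECK OPTION;          -- modifications must remain visible through the view
```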

Incorrect Answers:
SCHEMABINDING binds the view to the schema of the underlying table or tables. When SCHEMABINDING is
specified, the base table or tables cannot be modified in a way that would affect the view definition.

VIEW_METADATA specifies that the instance of SQL Server will return to the DB-Library, ODBC, and OLE DB
APIs the metadata information about the view, instead of the base table or tables, when browse-mode
metadata is being requested for a query that references the view.

References:
https://msdn.microsoft.com/en-us/library/ms187956.aspx

QUESTION 25
DRAG DROP

Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named DB1 that contains the following tables: Customer, CustomerToAccountBridge,
and CustomerDetails. The three tables are part of the Sales schema. The database also contains a schema
named Website. You create the Customer table by running the following Transact-SQL statement:

The value of the CustomerStatus column is equal to one for active customers. The value of the
Account1Status and Account2Status columns are equal to one for active accounts. The following table
displays selected columns and rows from the Customer table.

You plan to create a view named Website.Customer and a view named Sales.FemaleCustomers.
Website.Customer must meet the following requirements:
1. Allow users access to the CustomerName and CustomerNumber columns for active customers.
2. Allow changes to the columns that the view references. Modified data must be visible through the view.
3. Prevent the view from being published as part of Microsoft SQL Server replication.

Sales.Female.Customers must meet the following requirements:


1. Allow users access to the CustomerName, Address, City, State and PostalCode columns.
2. Prevent changes to the columns that the view references.
3. Only allow updates through the views that adhere to the view filter.

You have the following stored procedures: spDeleteCustAcctRelationship and spUpdateCustomerSummary.

The spUpdateCustomerSummary stored procedure was created by running the following Transact-SQL
statement:

You run the spUpdateCustomerSummary stored procedure to make changes to customer account summaries.
Other stored procedures call the spDeleteCustAcctRelationship to delete records from the
CustomerToAccountBridge table.

You need to create Sales.FemaleCustomers.


How should you complete the view definition? To answer, drag the appropriate Transact-SQL segments to the
correct locations. Each Transact_SQL segment may be used once, more than once or not at all. You may
need to drag the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Box 1: WITH SCHEMABINDING
SCHEMABINDING binds the view to the schema of the underlying table or tables. When SCHEMABINDING is
specified, the base table or tables cannot be modified in a way that would affect the view definition.

Box 2: WITH CHECK OPTION
CHECK OPTION forces all data modification statements executed against the view to follow the criteria set
within select_statement. When a row is modified through a view, the WITH CHECK OPTION makes sure the
data remains visible through the view after the modification is committed.

Note: Sales.Female.Customers must meet the following requirements:


1. Allow users access to the CustomerName, Address, City, State and PostalCode columns.
2. Prevent changes to the columns that the view references.
3. Only allow updates through the views that adhere to the view filter.
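
A corresponding sketch of the view definition (the Gender predicate is an assumption implied by the view's name):

```sql
CREATE VIEW Sales.FemaleCustomers
WITH SCHEMABINDING           -- referenced columns cannot be altered while the view exists
AS
SELECT CustomerName, Address, City, State, PostalCode
FROM Sales.Customer          -- SCHEMABINDING requires two-part object names
WHERE Gender = 'F'
WITH CHECK OPTION;           -- updates must satisfy the view filter
```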

References:
https://msdn.microsoft.com/en-us/library/ms187956.aspx

QUESTION 26
Note: This question is part of a series of questions that use the same scenario. For your convenience, the
scenario is repeated in each question. Each question presents a different goal and answer choices, but the
text of the scenario is exactly the same in each question in this series.

You have a database named DB1 that contains the following tables: Customer, CustomerToAccountBridge,
and CustomerDetails. The three tables are part of the Sales schema. The database also contains a schema
named Website. You create the Customer table by running the following Transact-SQL statement:

The value of the CustomerStatus column is equal to one for active customers. The value of the
Account1Status and Account2Status columns are equal to one for active accounts. The following table
displays selected columns and rows from the Customer table.

You plan to create a view named Website.Customer and a view named Sales.FemaleCustomers.
Website.Customer must meet the following requirements:
Allow users access to the CustomerName and CustomerNumber columns for active customers.
Allow changes to the columns that the view references. Modified data must be visible through the view.
Prevent the view from being published as part of Microsoft SQL Server replication.
Sales.FemaleCustomers must meet the following requirements:
Allow users access to the CustomerName, Address, City, State and PostalCode columns.
Prevent changes to the columns that the view references.
Only allow updates through the views that adhere to the view filter.

You have the following stored procedures: spDeleteCustAcctRelationship and spUpdateCustomerSummary.


The spUpdateCustomerSummary stored procedure was created by running the following Transact-SQL
statement:

You run the spUpdateCustomerSummary stored procedure to make changes to customer account
summaries. Other stored procedures call the spDeleteCustAcctRelationship to delete records from the
CustomerToAccountBridge table.

When you start spUpdateCustomerSummary, there are no active transactions. The procedure fails at the
second update statement due to a CHECK constraint violation on the TotalDepositAccountCount column.

What is the impact of the stored procedure on the CustomerDetails table?

A. The value of the TotalAccountCount column decreased.


B. The value of the TotalDepositAccountCount column is not changed.
C. The statement that modifies TotalDepositAccountCount is excluded from the transaction.
D. The value of the TotalAccountCount column is not changed.
E. The value of the TotalDepositAccountCount column is increased.
F. The statement that modifies TotalAccountCount column is excluded from the transaction.
G. The value of the TotalDepositAccountCount column is decreased.

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:

QUESTION 27
Note: This question is part of a series of questions that use the same answer choices. An answer choice may
be correct for more than one question on the series. Each question is independent of the other questions in
this series. Information and details provided in a question apply only to that question.

You work on an OLTP database that has no memory-optimized file group defined.
You have a table named tblTransaction that is persisted on disk and contains the information described in the
following table:

Users report that the following query takes a long time to complete.

You need to create an index that:


- improves the query performance
- does not impact the existing index
- minimizes storage size of the table (inclusive of index pages).

What should you do?

A. Create a clustered index on the table.


B. Create a nonclustered index on the table.
C. Create a nonclustered filtered index on the table.
D. Create a clustered columnstore index on the table.
E. Create a nonclustered columnstore index on the table.
F. Create a hash index on the table.

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:

Explanation:
A filtered index is an optimized nonclustered index, especially suited to cover queries that select from a well-
defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered
index can improve query performance, reduce index maintenance costs, and reduce index storage costs
compared with full-table indexes.
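As a sketch, a filtered index for a query that touches only a well-defined subset of rows might look like this (the column names are assumptions; the actual tblTransaction definition appears only in the exhibit):

CREATE NONCLUSTERED INDEX IX_tblTransaction_Unprocessed
ON dbo.tblTransaction (TransactionDate)
WHERE IsProcessed = 0;   -- predicate column assumed

Only rows satisfying the WHERE predicate are stored in the index, which is why it minimizes storage while still improving the reported query, and it leaves the existing index untouched.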

QUESTION 28
Note: This question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
questions in this series. Information and details provided in a question apply only to that question.

You have a database named DB1. There is no memory-optimized filegroup in the database.
You run the following query:

The following image displays the execution plan the query optimizer generates for this query:

Users frequently run the same query with different values for the local variable @lastName. The table named
Person is persisted on disk.
You need to create an index on the Person.Person table that meets the following requirements:
1. All users must be able to benefit from the index.
2. FirstName must be added to the index as an included column.

What should you do?

A. Create a clustered index on the table.


B. Create a nonclustered index on the table.
C. Create a nonclustered filtered index on the table.
D. Create a clustered columnstore index on the table.
E. Create a nonclustered columnstore index on the table.
F. Create a hash index on the table.

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
By including nonkey columns, you can create nonclustered indexes that cover more queries. This is because
the nonkey columns have the following benefits:
They can be data types not allowed as index key columns.
They are not considered by the Database Engine when calculating the number of index key columns or index
key size.
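For this scenario, the index would key on LastName and carry FirstName as a nonkey included column, roughly:

CREATE NONCLUSTERED INDEX IX_Person_LastName
ON Person.Person (LastName)
INCLUDE (FirstName);

Because the index is not filtered, it benefits all users regardless of the value passed in the @lastName local variable.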
Reference: https://docs.microsoft.com/en-us/sql/relational-databases/indexes/create-indexes-with-included-
columns?view=sql-server-2017

QUESTION 29
Note: The question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
question in the series. Information and details provided in a question apply only to that question.

You have a reporting database that includes a non-partitioned fact table named Fact_Sales. The table is
persisted on disk.
Users report that their queries take a long time to complete. The system administrator reports that the table
takes too much space in the database. You observe that there are no indexes defined on the table, and many
columns have repeating values.
You need to create the most efficient index on the table, minimize disk storage and improve reporting query
performance.

What should you do?

A. Create a clustered index on the table.


B. Create a nonclustered index on the table.
C. Create a nonclustered filtered index on the table.
D. Create a clustered columnstore index on the table.
E. Create a nonclustered columnstore index on the table.
F. Create a hash index on the table.

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The columnstore index is the standard for storing and querying large data warehousing fact tables. It uses
column-based data storage and query processing to achieve up to 10x query performance gains in your data
warehouse over traditional row-oriented storage, and up to 10x data compression over the uncompressed data
size.
A clustered columnstore index is the physical storage for the entire table.
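Creating the index is a single statement; the clustered columnstore index replaces the table's existing rowstore (here a heap, since no indexes are defined) as its physical storage:

CREATE CLUSTERED COLUMNSTORE INDEX CCI_Fact_Sales
ON dbo.Fact_Sales;

The repeating column values compress very well in columnstore segments, which addresses both the storage and the reporting-performance complaints.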

QUESTION 30
Note: The question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
question in the series. Information and details provided in a question apply only to that question.

You have a database named DB1. The database does not use a memory-optimized filegroup. The database
contains a table named Table1. The table must support the following workloads:

You need to add the most efficient index to support the new OLTP workload, while not deteriorating the
existing Reporting query performance.

What should you do?

A. Create a clustered index on the table.


B. Create a nonclustered index on the table.
C. Create a nonclustered filtered index on the table.
D. Create a clustered columnstore index on the table.
E. Create a nonclustered columnstore index on the table.
F. Create a hash index on the table.

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
Explanation:
A filtered index is an optimized nonclustered index, especially suited to cover queries that select from a well-
defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered
index can improve query performance, reduce index maintenance costs, and reduce index storage costs
compared with full-table indexes.

References: https://technet.microsoft.com/en-us/library/cc280372(v=sql.105).aspx

QUESTION 31
Note: The question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
question in the series. Information and details provided in a question apply only to that question.

You have a database named DB1. The database does not have a memory optimized filegroup.
You create a table by running the following Transact-SQL statement:

The table is currently used for OLTP workloads. The analytics user group needs to perform real-time
operational analytics that scan most of the records in the table to aggregate on a number of columns.

You need to add the most efficient index to support the analytics workload without changing the OLTP
application.

What should you do?

A. Create a clustered index on the table.


B. Create a nonclustered index on the table.
C. Create a nonclustered filtered index on the table.
D. Create a clustered columnstore index on the table.
E. Create a nonclustered columnstore index on the table.
F. Create a hash index on the table.

Correct Answer: E
Section: (none)
Explanation

Explanation/Reference:
Explanation:
A nonclustered columnstore index enables real-time operational analytics in which the OLTP workload uses
the underlying clustered index, while analytics run concurrently on the columnstore index.

Columnstore indexes can achieve up to 100x better performance on analytics and data warehousing workloads
and up to 10x better data compression than traditional rowstore indexes. These recommendations will help
your queries achieve the very fast query performance that columnstore indexes are designed to provide.
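A minimal sketch (the column list is an assumption, since the table definition is shown only in the exhibit); the OLTP application keeps using the existing rowstore indexes while analytics queries scan the columnstore copy:

CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Table1_Analytics
ON dbo.Table1 (OrderDate, Quantity, UnitPrice);   -- aggregated columns assumed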

References: https://msdn.microsoft.com/en-us/library/gg492088.aspx

QUESTION 32
DRAG DROP

You are analyzing the performance of a database environment.


You suspect there are several missing indexes in the current database.
You need to return a prioritized list of the missing indexes on the current database.

How should you complete the Transact-SQL statement? To answer, drag the appropriate Transact-SQL
segments to the correct locations. Each Transact-SQL segment may be used once, more than once or not at
all. You may need to drag the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Box 1: sys.dm_db_missing_index_group_stats
Box 2: group_handle
Example: The following query determines which missing indexes comprise a particular missing index group,
and displays their column details. For the sake of this example, the missing index group handle is 24.
SELECT migs.group_handle, mid.*
FROM sys.dm_db_missing_index_group_stats AS migs
INNER JOIN sys.dm_db_missing_index_groups AS mig
ON (migs.group_handle = mig.index_group_handle)
INNER JOIN sys.dm_db_missing_index_details AS mid
ON (mig.index_handle = mid.index_handle)
WHERE migs.group_handle = 24;

Box 3: sys.dm_db_missing_index_group_stats
The sys.dm_db_missing_index_group_stats view includes the required columns for the subquery:
avg_total_user_cost and avg_user_impact.
Example: Find the 10 missing indexes with the highest anticipated improvement for user queries
The following query determines which 10 missing indexes would produce the highest anticipated cumulative
improvement, in descending order, for user queries.
SELECT TOP 10 *
FROM sys.dm_db_missing_index_group_stats
ORDER BY avg_total_user_cost * avg_user_impact * (user_seeks + user_scans) DESC;

QUESTION 33
DRAG DROP

You are monitoring a Microsoft Azure SQL Database.


The database is experiencing high CPU consumption.
You need to determine which query uses the most cumulative CPU.

How should you complete the Transact-SQL statement? To answer, drag the appropriate Transact-SQL
segments to the correct locations. Each Transact-SQL segment may be used once, more than once or not at
all. You may need to drag the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Box 1: sys.dm_exec_query_stats
sys.dm_exec_query_stats returns aggregate performance statistics for cached query plans in SQL Server.

Box 2: highest_cpu_queries.total_worker_time DESC


Sort on total_worker_time column

Example: The following example returns information about the top five queries ranked by average CPU time.
This example aggregates the queries according to their query hash so that logically equivalent queries are
grouped by their cumulative resource consumption.
USE AdventureWorks2012;
GO
SELECT TOP 5 query_stats.query_hash AS "Query Hash",
SUM(query_stats.total_worker_time) / SUM(query_stats.execution_count) AS "Avg CPU Time",
MIN(query_stats.statement_text) AS "Statement Text"
FROM
(SELECT QS.*,
SUBSTRING(ST.text, (QS.statement_start_offset/2) + 1,
((CASE statement_end_offset

WHEN -1 THEN DATALENGTH(ST.text)
ELSE QS.statement_end_offset END
- QS.statement_start_offset)/2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS QS
CROSS APPLY sys.dm_exec_sql_text(QS.sql_handle) AS ST) AS query_stats
GROUP BY query_stats.query_hash
ORDER BY 2 DESC;

References: https://msdn.microsoft.com/en-us/library/ms189741.aspx

QUESTION 34
DRAG DROP

You are analyzing the memory usage of a Microsoft SQL Server instance.
You need to obtain the information described on the following table.

Which performance counter should you use for each requirement? To answer, drag the appropriate
performance counters to the correct requirements. Each performance counter may be used once, more than
once or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Requirement 1: SQL Server: Memory Manager: Total Server Memory (KB)
This counter specifies the amount of memory the server has committed using the memory manager.

Requirement 2: SQL Server: Memory Manager: Granted Workspace Memory (KB)


Specifies the total amount of memory currently granted to executing processes, such as hash, sort, bulk copy,
and index creation operations.

Requirement 3: Process: Working Set


Each time a process is created, it reserves the minimum working set size for the process. The virtual memory
manager attempts to keep enough memory for the minimum working set resident when the process is active,
but keeps no more than the maximum size.

References:
https://msdn.microsoft.com/en-us/library/ms190924.aspx
https://blogs.technet.microsoft.com/askperf/2007/05/18/sql-and-the-working-set/

QUESTION 35
You use Microsoft SQL Server Profiler to evaluate a query named Query1. The Profiler report indicates the
following issues:
At each level of the query plan, a low total number of rows are processed.
The query uses many operations. This results in a high overall cost for the query.

You need to identify the information that will be useful for the optimizer.

What should you do?

A. Start a SQL Server Profiler trace for the event class Auto Stats in the Performance event category.
B. Create one Extended Events session with the sqlserver.missing_column_statistics event added.
C. Start a SQL Server Profiler trace for the event class Soft Warnings in the Errors and Warnings event
category.
D. Create one Extended Events session with the sqlserver.missing_join_predicate event added.

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The Missing Join Predicate event class indicates that a query is being executed that has no join predicate.

This could result in a long-running query.
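A minimal Extended Events session for this event could be sketched as follows (the session name and the ring_buffer target are assumptions):

CREATE EVENT SESSION MissingJoinPredicate ON SERVER
ADD EVENT sqlserver.missing_join_predicate
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION MissingJoinPredicate ON SERVER STATE = START;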

QUESTION 36
You are experiencing performance issues with the database server.
You need to evaluate schema locking issues, plan cache memory pressure points, and backup I/O problems.

What should you create?

A. a System Monitor report


B. a sys.dm_exec_query_stats dynamic management view query
C. a sys.dm_exec_session_wait_stats dynamic management view query
D. an Activity Monitor session in Microsoft SQL Management Studio.

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
Explanation:
sys.dm_exec_session_wait_stats returns information about all the waits encountered by threads that executed
for each session. You can use this view to diagnose performance issues with the SQL Server session and also
with specific queries and batches.

Note: SQL Server wait stats are, at their highest conceptual level, grouped into two broad categories: signal
waits and resource waits. A signal wait is accumulated by processes running on SQL Server which are waiting
for a CPU to become available (so called because the process has “signaled” that it is ready for processing). A
resource wait is accumulated by processes running on SQL Server which are waiting for a specific resource to
become available, such as waiting for the release of a lock on a specific record.
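A typical diagnostic query against this DMV aggregates wait times per wait type, for example:

SELECT wait_type, SUM(wait_time_ms) AS total_wait_ms
FROM sys.dm_exec_session_wait_stats
GROUP BY wait_type
ORDER BY total_wait_ms DESC;

High totals for schema lock, memory, or I/O wait types point at the locking, plan cache pressure, and backup I/O problems the question describes.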

QUESTION 37
HOTSPOT

You are maintaining statistics for a database table named tblTransaction. The table contains more than 10
million records.
You need to create a stored procedure that meets the following requirements:
On weekdays, update statistics for a sample of the total number of records in the table.
On weekends, update statistics by sampling all rows in the table.
A maintenance task will call this stored procedure daily.
How should you complete the stored procedure? To answer, select the appropriate Transact-SQL segments in
the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: UPDATE STATISTICS
Box 2: SAMPLE 20 PERCENT
UPDATE STATISTICS tablename SAMPLE number { PERCENT | ROWS }
Specifies the approximate percentage or number of rows in the table or indexed view for the query optimizer to
use when it updates statistics. For PERCENT, number can be from 0 through 100 and for ROWS, number can
be from 0 to the total number of rows.

Box 3: UPDATE STATISTICS


Box 4: WITH FULLSCAN
FULLSCAN computes statistics by scanning all rows in the table or indexed view. FULLSCAN and SAMPLE
100 PERCENT have the same results. FULLSCAN cannot be used with the SAMPLE option.
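Assembled, the boxes produce a procedure along these lines (the procedure name and sample percentage are assumptions):

CREATE PROCEDURE dbo.spUpdateTransactionStatistics
AS
BEGIN
    -- DATENAME output depends on the session language setting
    IF DATENAME(WEEKDAY, GETDATE()) IN (N'Saturday', N'Sunday')
        UPDATE STATISTICS dbo.tblTransaction WITH FULLSCAN;
    ELSE
        UPDATE STATISTICS dbo.tblTransaction WITH SAMPLE 20 PERCENT;
END;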

References: https://msdn.microsoft.com/en-us/library/ms187348.aspx

QUESTION 38
DRAG DROP

You have a database named MyDatabase. You must monitor all the execution plans in XML format by using
Microsoft SQL Trace. The trace must meet the following requirements:

Capture execution plans only for queries that run the MyDatabase database.
Filter out plans with event duration of less than or equal to 100 microseconds.
Save trace results to a disk on the server.

You need to create the trace.

In which order should you arrange the Transact-SQL segments to develop the solution? To answer, move all
Transact-SQL segments to the answer area and arrange them in the correct order.

NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders
you select.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
The following system stored procedures are used to define and manage traces:
* sp_trace_create is used to define a trace and specify an output file location as well as other options that I’ll
cover in the coming pages. This stored procedure returns a handle to the created trace, in the form of an
integer trace ID.
* sp_trace_setevent is used to add event/column combinations to traces based on the trace ID, as well as
to remove them, if necessary, from traces in which they have already been defined.
* sp_trace_setfilter is used to define event filters based on trace columns.
* sp_trace_setstatus is called to turn on a trace, to stop a trace, and to delete a trace definition once you’re
done with it. Traces can be started and stopped multiple times over their lifespan.
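Using those procedures, a trace meeting the stated requirements could be sketched like this (the file path is an assumption; the event and column IDs shown — Showplan XML event 122, TextData column 1, Duration column 13, DatabaseName column 35 — should be verified against sys.trace_events and sys.trace_columns):

DECLARE @TraceID int;
DECLARE @MinDuration bigint = 100;  -- Duration column is bigint

-- 1. Define the trace and its output file
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\MyDatabasePlans';

-- 2. Capture the Showplan XML event with its TextData column
EXEC sp_trace_setevent @TraceID, 122, 1, 1;

-- 3. Filter to MyDatabase and to durations greater than 100 microseconds
EXEC sp_trace_setfilter @TraceID, 35, 0, 6, N'MyDatabase';
EXEC sp_trace_setfilter @TraceID, 13, 0, 2, @MinDuration;

-- 4. Start the trace
EXEC sp_trace_setstatus @TraceID, 1;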

References: https://msdn.microsoft.com/en-us/library/cc293613.aspx

QUESTION 39
HOTSPOT

You are analyzing the performance of a database environment.


You need to find all unused indexes in the current database.
How should you complete the Transact-SQL statement? To answer, select the appropriate Transact-SQL
segments in the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Example: Following query helps you to find all unused indexes within database using
sys.dm_db_index_usage_stats DMV.

-- Ensure a USE statement has been executed first.


SELECT u.*
FROM [sys].[indexes] i
INNER JOIN [sys].[objects] o ON (i.OBJECT_ID = o.OBJECT_ID)
LEFT JOIN [sys].[dm_db_index_usage_stats] u ON (i.OBJECT_ID = u.OBJECT_ID)
AND i.[index_id] = u.[index_id]
AND u.[database_id] = DB_ID() --returning the database ID of the current database
WHERE o.[type] <> 'S' --shouldn't be a system base table

AND i.[type_desc] <> 'HEAP'
AND i.[name] NOT LIKE 'PK_%'
AND u.[user_seeks] + u.[user_scans] + u.[user_lookups] = 0
AND u.[last_system_scan] IS NOT NULL
ORDER BY 1 ASC

References:
https://basitaalishan.com/2012/06/15/find-unused-indexes-using-sys-dm_db_index_usage_stats/

QUESTION 40
HOTSPOT

You are reviewing the execution plans in the query plan cache. You observe the following:
- There are a large number of single use plans.
- There are a large number of simple execution plans that use multiple CPU cores.
You need to configure the server to optimize query plan execution.

Which two setting should you modify on the properties page for the Microsoft SQL Server instance? To
answer, select the appropriate settings in the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
* Optimize for ad hoc workloads
The optimize for ad hoc workloads option is used to improve the efficiency of the plan cache for workloads that
contain many single use ad hoc batches. When this option is set to 1, the Database Engine stores a small
compiled plan stub in the plan cache when a batch is compiled for the first time, instead of the full compiled
plan. This helps to relieve memory pressure by not allowing the plan cache to become filled with compiled
plans that are not reused.

* Cost Threshold for Parallelism


Use the cost threshold for parallelism option to specify the threshold at which Microsoft SQL Server creates
and runs parallel plans for queries. SQL Server creates and runs a parallel plan for a query only when the
estimated cost to run a serial plan for the same query is higher than the value set in cost threshold for
parallelism. The cost refers to an estimated elapsed time in seconds required to run the serial plan on a
specific hardware configuration.

5 means 5 seconds, but it is 5 seconds on a machine internal to Microsoft from some time in the 1990s.
There's no way to relate it to execution time on your current machine, so we treat it as a pure number now.
Raising it to 50 is a common suggestion nowadays, so that more of your simpler queries run on a single
thread.

QUESTION 41
Note: this question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
questions in the series. Information and details provided in a question apply only to that question.

You are developing an application to track customer sales.


You need to create a database object that meets the following requirements:

- Return a value of 0 if data inserted successfully into the Customers table.
- Return a value of 1 if data is not inserted successfully into the Customers table.
- Support logic that is written by using managed code.

What should you create?

A. extended procedure
B. CLR procedure
C. user-defined procedure
D. DML trigger
E. DDL trigger
F. scalar-valued function
G. table-valued function

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
A DML trigger is a special type of stored procedure that automatically takes effect when a data manipulation
language (DML) event takes place that affects the table or view defined in the trigger. DML events include
INSERT, UPDATE, or DELETE statements. DML triggers can be used to enforce business rules and data
integrity, query other tables, and include complex Transact-SQL statements.

A CLR trigger can be a DML or DDL trigger, and can be either an AFTER or INSTEAD OF trigger. Instead of
executing a Transact-SQL stored procedure, a CLR trigger executes one or more methods written in managed
code that are members of an assembly created in the .NET Framework and uploaded in SQL Server.

References: https://msdn.microsoft.com/en-us/library/ms178110.aspx

QUESTION 42
Note: this question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
questions in the series. Information and details provided in a question apply only to that question.

You are developing an application to track customer sales.


You need to create a database object that meets the following requirements:
- Return a value of 0 if data inserted successfully into the Customers table.
- Return a value of 1 if data is not inserted successfully into the Customers table.
- Support TRY…CATCH error handling
- Be written by using Transact-SQL statements.

What should you create?

A. extended procedure
B. CLR procedure
C. user-defined procedure
D. DML trigger
E. scalar-valued function
F. table-valued function

Correct Answer: D
Section: (none)

Explanation

Explanation/Reference:
Explanation:
A DML trigger is a special type of stored procedure that automatically takes effect when a data manipulation
language (DML) event takes place that affects the table or view defined in the trigger. DML events include
INSERT, UPDATE, or DELETE statements. DML triggers can be used to enforce business rules and data
integrity, query other tables, and include complex Transact-SQL statements.
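Regardless of which object type wraps it, the return-code and TRY…CATCH requirements map to a Transact-SQL pattern like the following (shown here in a stored procedure; table, column, and parameter names are assumptions):

CREATE PROCEDURE dbo.spInsertCustomer
    @CustomerName nvarchar(100)
AS
BEGIN
    BEGIN TRY
        INSERT INTO dbo.Customers (CustomerName)
        VALUES (@CustomerName);
        RETURN 0;  -- data inserted successfully
    END TRY
    BEGIN CATCH
        RETURN 1;  -- insert failed
    END CATCH
END;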

References: https://msdn.microsoft.com/en-us/library/ms178110.aspx

QUESTION 43
Note: this question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
questions in the series. Information and details provided in a question apply only to that question.

You are developing an application to track customer sales.


You need to create a database object that meets the following requirements:
Launch when table data is modified.
Evaluate the state of a table before and after a data modification and take actions based on the difference.
Prevent malicious or incorrect table data operations.
Prevent changes that violate referential integrity by cancelling the attempted data modification.
Run managed code packaged in an assembly that is created in the Microsoft .NET Framework and loaded
into Microsoft SQL Server.

What should you create?

A. extended procedure
B. CLR procedure
C. user-defined procedure
D. DML trigger
E. scalar-valued function
F. table-valued function
G. DDL trigger

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
You can create a database object inside SQL Server that is programmed in an assembly created in the
Microsoft .NET Framework common language runtime (CLR). Database objects that can leverage the rich
programming model provided by the CLR include DML triggers, DDL triggers, stored procedures, functions,
aggregate functions, and types.

Creating a CLR trigger (DML or DDL) in SQL Server involves the following steps:
Define the trigger as a class in a .NET Framework-supported language. For more information about how to
program triggers in the CLR, see CLR Triggers. Then, compile the class to build an assembly in the .NET
Framework using the appropriate language compiler.
Register the assembly in SQL Server using the CREATE ASSEMBLY statement. For more information about
assemblies in SQL Server, see Assemblies (Database Engine).
Create the trigger that references the registered assembly.
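The registration steps above correspond to statements like the following (the assembly path, class, and method names are assumptions):

CREATE ASSEMBLY SalesTriggers
FROM N'C:\Assemblies\SalesTriggers.dll'
WITH PERMISSION_SET = SAFE;
GO

CREATE TRIGGER trgValidateSale
ON dbo.Sales
AFTER INSERT, UPDATE
AS EXTERNAL NAME SalesTriggers.[SalesTriggers.Triggers].ValidateSale;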

References: https://msdn.microsoft.com/en-us/library/ms179562.aspx

QUESTION 44
Note: this question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
questions in the series. Information and details provided in a question apply only to that question.

You are developing an application to track customer sales.


You need to return the sum of orders that have been finalized, given a specified order identifier. This value will
be used in other Transact-SQL statements.
You need to create a database object.

What should you create?

A. extended procedure
B. CLR procedure
C. user-defined procedure
D. DML trigger
E. scalar-valued function
F. table-valued function

Correct Answer: E
Section: (none)
Explanation

Explanation/Reference:
Explanation:
User-defined scalar functions return a single data value of the type defined in the RETURNS clause.
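A sketch of such a scalar function (table and column names are assumptions):

CREATE FUNCTION dbo.fnFinalizedOrderTotal (@OrderID int)
RETURNS money
AS
BEGIN
    RETURN
    (
        SELECT SUM(LineTotal)
        FROM dbo.OrderLines
        WHERE OrderID = @OrderID
          AND IsFinalized = 1
    );
END;

Because it returns a single value, it can be used inline in other Transact-SQL statements, e.g. SELECT dbo.fnFinalizedOrderTotal(@OrderID);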

References: https://technet.microsoft.com/en-us/library/ms177499(v=sql.105).aspx

QUESTION 45
Note: this question is part of a series of questions that use the same or similar answer choices. An
answer choice may be correct for more than one question in the series. Each question is independent
of the other questions in the series. Information and details provided in a question apply only to that
question.

You are developing an application to track customer sales.


You need to create an object that meets the following requirements:
- Run managed code packaged in an assembly that was created in the Microsoft .NET Framework and
uploaded in Microsoft SQL Server.
- Run within a transaction and roll back if a failure occurs.
- Run when a table is created or modified.

What should you create?

A. extended procedure
B. CLR procedure
C. user-defined procedure
D. DML trigger
E. scalar-valued function
F. table-valued function
G. DDL trigger

Correct Answer: B
Section: (none)

Explanation

Explanation/Reference:
Explanation:
The common language runtime (CLR) is the heart of the Microsoft .NET Framework and provides the execution
environment for all .NET Framework code. Code that runs within the CLR is referred to as managed code.
With the CLR hosted in Microsoft SQL Server (called CLR integration), you can author stored procedures,
triggers, user-defined functions, user-defined types, and user-defined aggregates in managed code. Because
managed code compiles to native code prior to execution, you can achieve significant performance increases
in some scenarios.
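A minimal sketch of wiring managed code into SQL Server to meet the stated requirements (the assembly path, class, and method names are assumptions):

```sql
-- Register an assembly built in the .NET Framework, then expose a
-- CLR DDL trigger that fires when a table is created or altered.
CREATE ASSEMBLY SalesClr
FROM 'C:\Assemblies\SalesClr.dll'   -- assumed path
WITH PERMISSION_SET = SAFE;
GO

CREATE TRIGGER ddl_TableChanges
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE
AS EXTERNAL NAME SalesClr.[SalesClr.Triggers].OnTableChanged;  -- assumed type/method
GO
```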

QUESTION 46
You have a view that includes an aggregate.

You must be able to change the values of columns in the view. The changes must be reflected in the tables
that the view uses.

You need to ensure that you can update the view.

What should you create?

A. table-valued function
B. a schema-bound view
C. a partitioned view
D. a DML trigger

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
When you create a view, you must obey the following conditions in order to make the view updateable:
The columns being modified in the view have to directly reference the underlying table columns. As a
consequence, the view’s columns being modified cannot be the result of an aggregate function or a
computed column.

The only way to make data changes on a non-updateable view is by using INSTEAD OF DML triggers. This
way you can use procedural code to overcome the limitation.
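A sketch of the INSTEAD OF approach (the view, table, and column names are assumptions):

```sql
-- vSalesTotals aggregates dbo.Sales, so direct UPDATEs against it fail.
-- An INSTEAD OF trigger intercepts the UPDATE and applies the change
-- to the base table instead.
CREATE TRIGGER trg_vSalesTotals_Update
ON dbo.vSalesTotals
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Procedural code decides how the change against the aggregated view
    -- maps back to the underlying rows; a simple illustration:
    UPDATE s
    SET s.Region = i.Region
    FROM dbo.Sales AS s
    JOIN inserted AS i ON i.RegionID = s.RegionID;
END;
GO
```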

Reference:
https://www.mssqltips.com/sqlservertip/5984/sql-server-trigger-on-view-example/

QUESTION 47
DRAG DROP
You are creating a stored procedure which will insert data into the table shown in the Database schema
exhibit. (Click the exhibit button.)

You need to insert a new customer record into the tables as a single unit of work.
Which five Transact-SQL segments should you use to develop the solution? To answer, move the appropriate
Transact-SQL segments to the answer area and arrange them in the correct order.

NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders
you select.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
The entities on the one side of the 1-many relations must be added before the entities on the many side.
We must insert new rows into the BusinessEntity and AddressType tables before we insert the corresponding
rows into the BusinessEntityContact and BusinessEntityAddress tables.

QUESTION 48
Note: This question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
questions in this series. Information and details provided in a question apply only to that question.

You are a database developer for a company. The company has a server that has multiple physical disks. The
disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many
SQL jobs that run during off-peak hours.
You must monitor and optimize the SQL Server to maximize throughput, response time, and overall SQL
performance.
You need to identify previous situations where a modification has prevented queries from selecting data in
tables.

What should you do?

A. Create a sys.dm_os_waiting_tasks query.


B. Create a sys.dm_exec_sessions query.
C. Create a Performance Monitor Data Collector Set.
D. Create a sys.dm_os_memory_objects query.
E. Create a sp_configure ‘max server memory’ query.
F. Create a SQL Profiler trace.
G. Create a sys.dm_os_wait_stats query.
H. Create an Extended Event.

Correct Answer: G
Section: (none)
Explanation

Explanation/Reference:
Explanation:
sys.dm_os_wait_stats returns information about all the waits encountered by threads that executed. You can
use this aggregated view to diagnose performance issues with SQL Server and also with specific queries and
batches.
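For example, a query along these lines surfaces the lock waits that have historically blocked queries from selecting data (the LCK% filter is one reasonable choice, not the only one):

```sql
-- Top lock-related waits since the last restart (or since the stats
-- were cleared), ordered by total time spent waiting.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK%'   -- lock waits: modifications blocking reads
ORDER BY wait_time_ms DESC;
```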

QUESTION 49
Note: This question is part of a series of questions that use the same or similar answer choices. An
answer choice may be correct for more than one question in the series. Each question is independent of
the other questions in this series. Information and details provided in a question apply only to that
question.

You are a database developer for a company. The company has a server that has multiple physical disks. The
disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many
SQL jobs that run during off-peak hours.

You observe that many deadlocks appear to be happening during specific times of the day.

You need to monitor the SQL environment and capture the information about the processes that are causing
the deadlocks. Captured information must be viewable as the queries are running.

What should you do?

A. Create a sys.dm_os_waiting_tasks query.


B. Create a sys.dm_exec_sessions query.
C. Create a Performance Monitor Data Collector Set.
D. Create a sys.dm_os_memory_objects query.
E. Create a sp_configure ‘max server memory’ query.
F. Create a SQL Profiler trace.
G. Create a sys.dm_os_wait_stats query.
H. Create an Extended Event.

Correct Answer: F
Section: (none)
Explanation

Explanation/Reference:
Explanation:
To view deadlock information, the Database Engine provides monitoring tools in the form of two trace flags,
and the deadlock graph event in SQL Server Profiler.

Trace Flag 1204 and Trace Flag 1222


When deadlocks occur, trace flag 1204 and trace flag 1222 return information that is captured in the SQL
Server error log. Trace flag 1204 reports deadlock information formatted by each node involved in the
deadlock. Trace flag 1222 formats deadlock information, first by processes and then by resources. It is
possible to enable both trace flags to obtain two representations of the same deadlock event.
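For example, the trace flags can be enabled server-wide so that deadlock details land in the error log (a sketch; requires sysadmin rights):

```sql
-- Enable both deadlock trace flags globally (-1) until the next restart.
DBCC TRACEON (1204, 1222, -1);

-- Verify which trace flags are currently active.
DBCC TRACESTATUS (-1);
```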

References:https://technet.microsoft.com/en-us/library/ms178104(v=sql.105).aspx

QUESTION 50
Note: This question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
questions in this series. Information and details provided in a question apply only to that question.

You are a database developer for a company. The company has a server that has multiple physical disks. The
disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many
SQL jobs that run during off-peak hours.

You must monitor the SQL Server instances in real time and optimize the server to maximize throughput,
response time, and overall SQL performance.

What should you do?

A. Create a sys.dm_os_waiting_tasks query.
B. Create a sys.dm_exec_sessions query.
C. Create a Performance Monitor Data Collector Set.
D. Create a sys.dm_os_memory_objects query.
E. Create a sp_configure ‘max server memory’ query.
F. Create a SQL Profiler trace.
G. Create a sys.dm_os_wait_stats query.
H. Create an Extended Event.

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
sys.dm_exec_sessions returns one row per authenticated session on SQL Server. sys.dm_exec_sessions is a
server-scope view that shows information about all active user connections and internal tasks. This information
includes client version, client program name, client login time, login user, current session settings, and more.
Use sys.dm_exec_sessions to first view the current system load and to identify a session of interest, and then
learn more information about that session by using other dynamic management views or dynamic
management functions.

Examples of use include finding long-running cursors, and finding idle sessions that have open transactions.
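A typical query against this view might look like the following (the filter values are illustrative):

```sql
-- Active user sessions with open transactions, oldest logins first.
SELECT session_id,
       login_name,
       [program_name],
       [status],
       open_transaction_count,
       login_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
  AND open_transaction_count > 0
ORDER BY login_time;
```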

QUESTION 51
Note: This question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
questions in this series. Information and details provided in a question apply only to that question.

You are a database developer for a company. The company has a server that has multiple physical disks. The
disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many
SQL jobs that run during off-peak hours.

You must monitor the SQL Server instances in real time and optimize the server to maximize throughput,
response time, and overall SQL performance.

You need to ensure that the performance of each instance is consistent for the same queries and query plans.

What should you do?

A. Create a sys.dm_os_waiting_tasks query.


B. Create a sys.dm_exec_sessions query.
C. Create a Performance Monitor Data Collector Set.
D. Create a sys.dm_os_memory_objects query.
E. Create a sp_configure ‘max server memory’ query.
F. Create a SQL Profiler trace.
G. Create a sys.dm_os_wait_stats query.
H. Create an Extended Event.

Correct Answer: H
Section: (none)
Explanation

Explanation/Reference:
Explanation:

Advanced Viewing of Target Data from Extended Events in SQL Server

When your event session is currently active, you might want to watch the event data in real time, as it is
received by the target.
Management > Extended Events > Sessions > [your-session] > Watch Live Data.

The query_post_execution_showplan extended event enables you to see the actual query plan in the SQL
Server Management Studio (SSMS) UI. When the Details pane is visible, you can see a graph of the query
plan on the Query Plan tab. By hovering over a node on the query plan, you can see a list of property names
and their values for the node.
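A sketch of such an event session (the session name is an assumption):

```sql
-- Capture actual query plans; watch them live via
-- Management > Extended Events > Sessions > QueryPlans > Watch Live Data.
CREATE EVENT SESSION QueryPlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
ADD TARGET package0.ring_buffer;
GO

ALTER EVENT SESSION QueryPlans ON SERVER STATE = START;
```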

References:
https://msdn.microsoft.com/en-us/library/mt752502.aspx

QUESTION 52
Note: This question is part of a series of questions that use the same or similar answer choices. An
answer choice may be correct for more than one question in the series. Each question is independent of
the other questions in this series. Information and details provided in a question apply only to that
question.

You are a database developer for a company. The company has a server that has multiple physical disks. The
disks are not part of a RAID array. The server hosts three SQL Server instances. There are many SQL jobs
that run during off-peak hours.

You must monitor the SQL Server instances in real time and optimize the server to maximize throughput,
response time, and overall SQL performance.

You need to collect query performance data while minimizing the performance impact on the SQL Server.

What should you do?

A. Create a sys.dm_os_waiting_tasks query.

B. Create a sys.dm_exec_sessions query.
C. Create a Performance Monitor Data Collector Set.
D. Create a sys.dm_os_memory_objects query.
E. Create a sp_configure ‘max server memory’ query.
F. Create a SQL Profiler trace.
G. Create a sys.dm_os_wait_stats query.
H. Create an Extended Event.

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
Explanation:
SQL Server Data Collector is a feature for performance monitoring and tuning available in SQL Server
Management Studio.

Integration Services packages transform and load the collected data into the Microsoft Data Warehouse
database.

Collection sets are defined and deployed on a server instance and can be run independently of each other.
Each collection set can be applied to a target that matches the target types of all the collector types that are
part of a collection set. The collection set is run by a SQL Server Agent job or jobs, and data is uploaded to the
management data warehouse on a predefined schedule.

Predefined data collection sets include:


The Query Statistics data collection set collects information about query statistics, activity, execution plans
and text on the SQL Server instance. It does not store all executed statements, only the 10 worst performing
ones.
Disk Usage data collection set collects information about disk space used by both data and log files for all
databases on the SQL Server instance, growth trends, and average day growth.
Etc.

References:
http://www.sqlshack.com/sql-server-performance-monitoring-data-collector/

QUESTION 53
Note: This question is part of a series of questions that use the same or similar answer choices. An
answer choice may be correct for more than one question in the series. Each question is independent of
the other questions in this series. Information and details provided in a question apply only to that
question.

You are a database developer for a company. The company has a server that has multiple physical disks. The
disks are not part of a RAID array. The server hosts three SQL Server instances. There are many SQL jobs
that run during off-peak hours.

You must monitor the SQL Server instances in real time and optimize the server to maximize throughput,
response time, and overall SQL performance.

You need to create a baseline set of metrics to report how the computer running SQL Server operates under
normal load. The baseline must include the resource usage associated with the server processes.

What should you do?

A. Create a sys.dm_os_waiting_tasks query.


B. Create a sys.dm_exec_sessions query.

C. Create a Performance Monitor Data Collector Set.
D. Create a sys.dm_os_memory_objects query.
E. Create a sp_configure ‘max server memory’ query.
F. Create a SQL Profiler trace.
G. Create a sys.dm_os_wait_stats query.
H. Create an Extended Event.

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
sys.dm_os_memory_objects returns memory objects that are currently allocated by SQL Server. You can
use sys.dm_os_memory_objects to analyze memory use and to identify possible memory leaks.

Example: The following example returns the amount of memory allocated by each memory object type.
SELECT SUM(pages_in_bytes) AS 'Bytes Used', type
FROM sys.dm_os_memory_objects
GROUP BY type
ORDER BY 'Bytes Used' DESC;
GO

QUESTION 54
HOTSPOT

You have a database named Sales.

You need to create a table named Customer that includes the columns described in the following table:

How should you complete the Transact SQL statement? To answer, select the appropriate Transact-SQL
segments in the answer area.

Hot Area:

Correct Answer:

Section: (none)

Explanation

Explanation/Reference:
Box 1: MASKED WITH (FUNCTION = 'default()')
The Default masking method provides full masking according to the data types of the designated fields.
Example column definition syntax: Phone# varchar(12) MASKED WITH (FUNCTION = 'default()') NULL

Box 2: MASKED WITH (FUNCTION = 'partial(3,"XXXXXX",0)')

The Custom String masking method exposes the first and last letters and adds a custom padding string in the
middle: prefix,[padding],suffix.

Example: PhoneNumber varchar(10) MASKED WITH (FUNCTION = 'partial(5,"XXXXXXX",0)')

Box 3: MASKED WITH (FUNCTION = 'email()')

The Email masking method exposes the first letter of an email address and the constant suffix ".com", in
the form of an email address, such as [email protected].

Example definition syntax: Email varchar(100) MASKED WITH (FUNCTION = 'email()') NULL

References:
https://msdn.microsoft.com/en-us/library/mt130841.aspx
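Put together, a Customer table using these three masks might look like the following sketch (the column list is an assumption based on the boxes above, since the exhibit is not reproduced here):

```sql
CREATE TABLE dbo.Customer
(
    CustomerID  INT IDENTITY PRIMARY KEY,
    PhoneNumber VARCHAR(12)  MASKED WITH (FUNCTION = 'default()') NULL,
    CreditCard  VARCHAR(19)  MASKED WITH (FUNCTION = 'partial(3,"XXXXXX",0)') NULL,
    Email       VARCHAR(100) MASKED WITH (FUNCTION = 'email()') NULL
);
```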

QUESTION 55
HOTSPOT

You are developing an app that allows users to query historical company financial data. You are reviewing
email messages from the various stakeholders for a project.

The message from the security officer is shown in the Security Officer Email exhibit below.

TO: Database developer


From: Security Officer
Subject: SQL object requirements

We need to simplify the security settings for the SQL objects. Having to assign permissions on every object in
SQL is tedious and leads to problems. Documentation is also much more difficult when we have to assign
permissions at multiple levels. We need to assign the required permissions at one object, even though that
object may obtain data from other objects.

The message from the sales manager is shown in the Sales Manager Email exhibit below.

TO: Database developer


From: Sales Manager
Subject: Needed SQL objects

When creating objects for our use, they need to be flexible. We will be changing the base infrastructure
frequently. We need components in SQL that will provide backward compatibility to our front end applications
as the environments change so that we do not need to modify the front end applications. We need objects that
can provide a filtered set of the data. The data may be coming from multiple tables and we need an object that
can provide access to all of the data through a single object reference.

This is an example of the types of data we need to be able to query without having to change
the front end applications.

The message from the web developer is shown in the Web Developer Email exhibit below.

TO: Database developer


From: Web Developer
Subject: SQL Object component

Whatever you will be configuring to provide access to data in SQL, it needs to connect using the items
referenced in this interface. We have been using this for a long time, and we cannot change this front end
easily. Whatever objects are going to be used in SQL, they must work using the object types this interface
references.

You need to create one or more objects that meet the needs of the security officer, the sales manager and the
web developer.
For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
* Stored procedure: Yes
A stored procedure to implement the following:
Whatever you will be configuring to provide access to data in SQL, it needs to connect using the items
referenced in this interface. We have been using this for a long time, and we cannot change this front end
easily. Whatever objects are going to be used in SQL, they must work using the object types this interface
references.

* Trigger: No
No requirements are related to actions taken when changing the data.

* View: Yes
Because: We need objects that can provide a filtered set of the data. The data may be coming from multiple
tables and we need an object that can provide access to all of the data through a single object reference.

QUESTION 56
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

You have a database that contains a table named Employees. The table stores information about the
employees of your company.
You need to implement the following auditing rules for the Employees table:
- Record any changes that are made to the data in the Employees table.
- Customize the data that is recorded by the audit operations.

Solution: You implement a user-defined function on the Employees table.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
SQL Server 2016 provides two features that track changes to data in a database: change data capture and
change tracking. These features enable applications to determine the DML changes (insert, update, and delete
operations) that were made to user tables in a database.

Change data is made available to change data capture consumers through table-valued functions (TVFs).

References:https://msdn.microsoft.com/en-us/library/cc645858.aspx
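As a sketch, change data capture is enabled at the database and then at the table level, after which the generated table-valued functions expose the recorded changes (parameter values shown are the defaults):

```sql
-- Enable CDC for the database, then for the Employees table.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Employees',
     @role_name     = NULL;

-- Changes are then read through the generated table-valued function:
-- SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Employees(@from_lsn, @to_lsn, N'all');
```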

QUESTION 57
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

You have a database that contains a table named Employees. The table stores information about the
employees of your company.
You need to implement the following auditing rules for the Employees table:
- Record any changes that are made to the data in the Employees table.
- Customize the data that is recorded by the audit operations.

Solution: You implement a check constraint on the Employees table.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Check constraints cannot be used to track changes in a table.

References:https://msdn.microsoft.com/en-us/library/bb933994.aspx

QUESTION 58
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

You have a database that contains a table named Employees. The table stores information about the
employees of your company.
You need to implement the following auditing rules for the Employees table:
- Record any changes that are made to the data in the Employees table.
- Customize the data that is recorded by the audit operations.

Solution: You implement a stored procedure on the Employees table.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
We should use table-valued functions, not procedures, to customize the recorded change data.

References: https://msdn.microsoft.com/en-us/library/cc645858.aspx

QUESTION 59
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

Your company has employees in different regions around the world.


You need to create a database table that stores the following employee attendance information:
- Employee ID
- date and time employee checked in to work
- date and time employee checked out of work

Date and time information must be time zone aware and must not store fractional seconds.

Solution: You run the following Transact-SQL statement:

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Datetimeoffset, not datetimeofset, defines a date that is combined with a time of a day that has time zone
awareness and is based on a 24-hour clock.
Syntax is: datetimeoffset [ (fractional seconds precision) ]
When "datetimeoffset" is used without a precision, the fractional seconds precision defaults to 7.

References:https://msdn.microsoft.com/en-us/library/bb630289.aspx

QUESTION 60
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

Your company has employees in different regions around the world.

You need to create a database table that stores the following employee attendance information:
Employee ID
date and time employee checked in to work
date and time employee checked out of work

Date and time information must be time zone aware and must not store fractional seconds.

Solution: You run the following Transact-SQL statement:

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Datetime2 stores fractional seconds.
Datetime2 defines a date that is combined with a time of day that is based on a 24-hour clock. datetime2 can be
considered as an extension of the existing datetime type that has a larger date range, a larger default
fractional precision, and optional user-specified precision.

References: https://docs.microsoft.com/en-us/sql/t-sql/data-types/datetime2-transact-sql?view=sql-server-2017
https://msdn.microsoft.com/en-us/library/bb677335.aspx

QUESTION 61
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

Your company has employees in different regions around the world.


You need to create a database table that stores the following employee attendance information:
Employee ID
date and time employee checked in to work
date and time employee checked out of work

Date and time information must be time zone aware and must not store fractional seconds.

Solution: You run the following Transact-SQL statement:

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Datetimeoffset defines a date that is combined with a time of a day that has time zone awareness and is based
on a 24-hour clock.
Syntax is: datetimeoffset [ (fractional seconds precision) ]
For the use "datetimeoffset(0)", the fractional seconds precision is 0, which is required here.

References:https://msdn.microsoft.com/en-us/library/bb630289.aspx
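The working solution presumably resembles the following sketch (the table and column names are assumptions based on the scenario, since the exhibit is not reproduced):

```sql
-- datetimeoffset(0): time zone aware, no fractional seconds stored.
CREATE TABLE dbo.EmployeeAttendance
(
    EmployeeID INT               NOT NULL,
    CheckIn    DATETIMEOFFSET(0) NOT NULL,
    CheckOut   DATETIMEOFFSET(0) NULL
);
```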

QUESTION 62
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

The Account table was created using the following Transact-SQL statement:

There are more than 1 billion records in the Account table. The Account Number column uniquely identifies
each account. The ProductCode column has 100 different values. The values are evenly distributed in the
table. Table statistics are refreshed and up to date.

You frequently run the following Transact-SQL SELECT statements:

You must avoid table scans when you run the queries.
You need to create one or more indexes for the table.
Solution: You run the following Transact-SQL statement:

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Create a clustered index on the AccountNumber column as it is unique, not a nonclustered one.

References:https://msdn.microsoft.com/en-us/library/ms190457.aspx

QUESTION 63
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

The Account table was created using the following Transact-SQL statement:

There are more than 1 billion records in the Account table. The Account Number column uniquely identifies
each account. The ProductCode column has 100 different values. The values are evenly distributed in the
table. Table statistics are refreshed and up to date.

You frequently run the following Transact-SQL SELECT statements:

You must avoid table scans when you run the queries.
You need to create one or more indexes for the table.
Solution: You run the following Transact-SQL statement:

CREATE CLUSTERED INDEX PK_Account ON Account(ProductCode);

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
We need an index on the AccountNumber column as well.

References:https://msdn.microsoft.com/en-us/library/ms190457.aspx

QUESTION 64
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

The Account table was created using the following Transact-SQL statement:

There are more than 1 billion records in the Account table. The Account Number column uniquely identifies
each account. The ProductCode column has 100 different values. The values are evenly distributed in the
table. Table statistics are refreshed and up to date.

You frequently run the following Transact-SQL SELECT statements:

You must avoid table scans when you run the queries.
You need to create one or more indexes for the table.
Solution: You run the following Transact-SQL statements:

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Create a clustered index on the AccountNumber column as it is unique.
Create a nonclustered index that includes the ProductCode column.

References:https://msdn.microsoft.com/en-us/library/ms190457.aspx
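The statements in the working solution are presumably along these lines (index names are assumptions):

```sql
-- Unique clustered index on the identifying column.
CREATE UNIQUE CLUSTERED INDEX PK_Account
    ON dbo.Account (AccountNumber);

-- Nonclustered index so queries filtering on ProductCode avoid a table scan.
CREATE NONCLUSTERED INDEX IX_Account_ProductCode
    ON dbo.Account (ProductCode);
```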

QUESTION 65
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You need to create a stored procedure that updates the Customer, CustomerInfo, OrderHeader, and
OrderDetails tables in order.

You need to ensure that the stored procedure:

Runs within a single transaction.


Commits updates to the Customer and CustomerInfo tables regardless of the status of updates to the
OrderHeader and OrderDetail tables.
Commits changes to all four tables when updates to all four tables are successful.

Solution: You create a stored procedure that includes the following Transact-SQL segment:

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
All four tables are updated in a single transaction.

Need to handle the case where the first two updates (Customer, CustomerInfo) are successful, but either the
3rd or the 4th (OrderHeader, OrderDetail) fails. Can add a variable in the BEGIN TRY block, and test the
variable in the BEGIN CATCH block.

References:
https://docs.microsoft.com/en-us/sql/t-sql/language-elements/begin-transaction-transact-sql

QUESTION 66
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You need to create a stored procedure that updates the Customer, CustomerInfo, OrderHeader, and
OrderDetails tables in order.

You need to ensure that the stored procedure:


Runs within a single transaction.
Commits updates to the Customer and CustomerInfo tables regardless of the status of updates to the
OrderHeader and OrderDetail tables.
Commits changes to all four tables when updates to all four tables are successful.

Solution: You create a stored procedure that includes the following Transact-SQL segment:

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Need to handle the case where the first two updates (Customer, CustomerInfo) are successful, but either the
3rd or the 4th (OrderHeader, OrderDetail) fails. We add the @CustomerComplete variable in the BEGIN TRY
block, and test it in the BEGIN CATCH block.

Note: XACT_STATE indicates whether the request has an active user transaction, and whether the transaction
is capable of being committed.
XACT_STATE =1: the current request has an active user transaction. The request can perform any actions,
including writing data and committing the transaction.

References:
https://docs.microsoft.com/en-us/sql/t-sql/functions/xact-state-transact-sql
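A sketch of the pattern (the updated columns are hypothetical; the real exhibit's statements may differ):

```sql
DECLARE @CustomerComplete BIT = 0;

BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.Customer     SET ModifiedDate = SYSDATETIME();  -- hypothetical columns
    UPDATE dbo.CustomerInfo SET ModifiedDate = SYSDATETIME();
    SET @CustomerComplete = 1;   -- the first two updates succeeded

    UPDATE dbo.OrderHeader  SET ModifiedDate = SYSDATETIME();
    UPDATE dbo.OrderDetails SET ModifiedDate = SYSDATETIME();

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF XACT_STATE() = 1 AND @CustomerComplete = 1
        COMMIT TRANSACTION;      -- keep the Customer/CustomerInfo changes
    ELSE IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
END CATCH;
```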

QUESTION 67
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

You have a database that contains a table named Employees. The table stores information about the
employees of your company.

You need to implement and enforce the following business rules:
Limit the values that are accepted by the Salary column.
Prevent salaries less than $15,000 and greater than $300,000 from being entered.
Determine valid values by using logical expressions.
Do not validate data integrity when running DELETE statements.

Solution: You implement a check constraint on the table.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:

References: https://en.wikipedia.org/wiki/Check_constraint
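
As a sketch, the check constraint could be written as follows; the Employees table comes from the question, while the constraint name and exact column name are assumptions:

```sql
-- Enforce the salary range with a logical expression.
-- Check constraints are evaluated on INSERT and UPDATE, never on DELETE.
ALTER TABLE dbo.Employees WITH CHECK
ADD CONSTRAINT CK_Employees_Salary
    CHECK (Salary >= 15000 AND Salary <= 300000);
```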

QUESTION 68
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

You have a database that contains a table named Employees. The table stores information about the
employees of your company.

You need to implement and enforce the following business rules:


Limit the values that are accepted by the Salary column.
Prevent salaries less than $15,000 or greater than $300,000 from being entered.
Determine valid values by using logical expressions.
Do not validate data integrity when running DELETE statements.

Solution: You implement a FOR UPDATE trigger on the table.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:

References: http://stackoverflow.com/questions/16081582/difference-between-for-update-of-and-for-update

QUESTION 69
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

You have a database that contains a table named Employees. The table stores information about the
employees of your company.

You need to implement and enforce the following business rules:


Limit the values that are accepted by the Salary column.
Prevent salaries less than $15,000 or greater than $300,000 from being entered.
Determine valid values by using logical expressions.
Do not validate data integrity when running DELETE statements.

Solution: You implement cascading referential integrity constraints on the table.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Cascading referential integrity constraints define what happens to rows in a referencing table when the
referenced row is updated or deleted; they cannot restrict the range of values accepted by the Salary column.
A check constraint is required to enforce these rules.

References: https://technet.microsoft.com/en-us/library/ms186973(v=sql.105).aspx

QUESTION 70
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

The Account table was created by using the following Transact-SQL statement:

There are more than 1 billion records in the Account table. The Account Number column uniquely identifies
each account. The ProductCode column has 100 different values. The values are evenly distributed in the
table. Table statistics are refreshed and up to date.

You frequently run the following Transact-SQL SELECT statements:

You must avoid table scans when you run the queries.

You need to create one or more indexes for the table.

Solution: You run the following Transact-SQL statement:

CREATE NONCLUSTERED INDEX IX_Account_ProductCode ON Account(ProductCode);

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:

References: https://msdn.microsoft.com/en-za/library/ms189280.aspx

QUESTION 71
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

You are developing a new application that uses a stored procedure. The stored procedure inserts thousands of
records as a single batch into the Employees table.

Users report that the application response time has worsened since the stored procedure was updated. You
examine disk-related performance counters for the Microsoft SQL Server instance and observe several high
values that indicate a disk performance issue. You examine wait statistics and observe an unusually high
WRITELOG value.

You need to improve the application response time.

Solution: You update the application to use implicit transactions when connecting to the database.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:

References: http://sqltouch.blogspot.co.za/2013/05/writelog-waittype-implicit-vs-explicit.html
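
For context, WRITELOG waits accumulate while sessions wait for transaction-log flushes. One common mitigation, sketched below under the assumption that the rows arrive from a staging table (a hypothetical name), is to commit the whole batch in a single explicit transaction so the log is flushed once per batch rather than once per autocommitted statement:

```sql
-- One explicit transaction around the batch: the log is flushed once at
-- COMMIT instead of once per autocommitted INSERT.
BEGIN TRANSACTION;

INSERT INTO dbo.Employees (EmployeeName)
SELECT EmployeeName
FROM   #StagingEmployees;   -- hypothetical staging source

COMMIT TRANSACTION;
```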

QUESTION 72
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

You are developing a new application that uses a stored procedure. The stored procedure inserts thousands of
records as a single batch into the Employees table.

Users report that the application response time has worsened since the stored procedure was updated. You
examine disk-related performance counters for the Microsoft SQL Server instance and observe several high
values that indicate a disk performance issue. You examine wait statistics and observe an unusually high
WRITELOG value.

You need to improve the application response time.

Solution: You add a unique clustered index to the Employees table.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:

References: https://msdn.microsoft.com/en-us/library/ms190457.aspx

QUESTION 73
Note: This question is part of a series of questions that present the same scenario. Each question in the series
contains a unique solution. Determine whether the solution meets the stated goals.

You are developing a new application that uses a stored procedure. The stored procedure inserts thousands of
records as a single batch into the Employees table.

Users report that the application response time has worsened since the stored procedure was updated. You
examine disk-related performance counters for the Microsoft SQL Server instance and observe several high
values that indicate a disk performance issue. You examine wait statistics and observe an unusually high
WRITELOG value.

You need to improve the application response time.

Solution: You replace the stored procedure with a user-defined function.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:

References: https://msdn.microsoft.com/en-us/library/ms345075.aspx

QUESTION 74
Note: This question is part of a series of questions that use the same answer choices. An answer choice may
be correct for more than one question in the series. Each question is independent of the other questions in this
series. Information and details provided in a question apply only to that question.

You are developing an application to track customer sales. You create tables to support the application. You
need to create a database object that meets the following data entry requirements:

What should you create?

A. extended procedure
B. CLR procedure
C. user-defined procedure
D. DML trigger
E. DDL trigger
F. scalar-valued function
G. table-valued function

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
Explanation:

References: https://msdn.microsoft.com/en-us/library/ms345075.aspx

QUESTION 75
DRAG DROP

You have a Microsoft Azure SQL Database named MyDb that uses server version V12.

You plan to use Query Performance Insight to troubleshoot performance problems. The database query store
is not enabled.

You need to enable the database query store to meet the following requirements for the database:
Statistics must be aggregated every 15 minutes.
Query stores must use no more than 1,024 megabytes (MB) of storage.
Query information must be retained for at least 15 days.
Queries must be captured based on resource consumption.

You connect to the database by using SQL Server Management Studio.

How should you complete the Transact-SQL statements? To answer, drag the appropriate Transact-SQL
segments to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at
all. You may need to drag the split bar between panes or scroll to view content.

NOTE: More than one combination of answer choices is correct. You will receive credit for any of the correct
combinations you select. Each correct selection is worth one point.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

or

Both answers are correct.

References: https://msdn.microsoft.com/en-us/library/mt604821.aspx
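
The four requirements map onto Query Store options roughly as follows. This is a sketch of one correct combination; the drag-drop answer images are not reproduced in this dump:

```sql
ALTER DATABASE MyDb
SET QUERY_STORE = ON
    (
        INTERVAL_LENGTH_MINUTES = 15,    -- aggregate statistics every 15 minutes
        MAX_STORAGE_SIZE_MB     = 1024,  -- use no more than 1,024 MB of storage
        CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 15),  -- retain at least 15 days
        QUERY_CAPTURE_MODE      = AUTO   -- capture queries based on resource consumption
    );
```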

QUESTION 76
You are experiencing performance issues with the database server.
You need to evaluate schema locking issues, plan cache memory pressure points, and backup I/O problems.

What should you create?

A. a System Monitor report


B. a sys.dm_tran_database_transaction dynamic management view query
C. an Extended Events session that uses Query Editor
D. an Activity Monitor session in Microsoft SQL Server Management Studio

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:

References: https://msdn.microsoft.com/en-us/library/hh212951.aspx

QUESTION 77
HOTSPOT

You create a database table named FactSales by running the following Transact-SQL statements:

You must optimize the indexes without making changes to the ix_FactSales_EmployeeKey index.

You need to implement a columnstore index for the table.

How should you complete the Transact-SQL code? To answer, select the appropriate Transact-SQL segments
in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:

QUESTION 78
DRAG DROP

You have a trigger named CheckTriggerCreation that runs when a user attempts to create a trigger. The
CheckTriggerCreation trigger was created with the ENCRYPTION option and additional proprietary business
logic.

You need to prevent users from running the ALTER and DROP statements or the sp_tableoption stored
procedure.

Which three Transact-SQL segments should you use to develop the solution? To answer, move the
appropriate Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange
them in the correct order.

Select and Place:

Correct Answer:

Section: (none)

Explanation

Explanation/Reference:

QUESTION 79
DRAG DROP

You need to build a function that meets the following requirements:


Returns multiple rows
Optimizes the performance of the query within the function

How should you complete the Transact-SQL statement? To answer, drag the appropriate Transact-SQL
statements to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at
all. You may need to drag the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
References: https://technet.microsoft.com/en-us/library/ms189294(v=sql.105).aspx
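
An inline table-valued function satisfies both requirements: it returns a rowset, and because its body is a single SELECT the optimizer can expand it into the calling query. A sketch with assumed object and column names:

```sql
-- Inline TVF: RETURNS TABLE with a single RETURN (SELECT ...).
CREATE FUNCTION dbo.fnGetCustomerOrders (@CustomerID int)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderID, OrderDate, TotalDue   -- assumed columns
    FROM   dbo.OrderHeader                -- assumed table
    WHERE  CustomerID = @CustomerID
);
```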

QUESTION 80
DRAG DROP

You must create two staging database tables. The tables have the following requirements:

You need to select the correct storage mechanism for each table.

Which storage mechanism should you use? To answer, drag the appropriate table types to the correct tables.
Each table type may be used once, more than once, or not at all. You may need to drag the split bar between
panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:

QUESTION 81
Note: This question is part of a series of questions that use the same or similar answer choices. An answer
choice may be correct for more than one question in the series. Each question is independent of the other
questions in this series. Information and details provided in a question apply only to that question.

You have a database named DB1. There is no memory-optimized filegroup in the database.

You have a table and a stored procedure that were created by running the following Transact-SQL statements:

The Employee table is persisted on disk. You add 2,000 records to the Employee table.

You need to create an index that meets the following requirements:


Optimizes the performance of the stored procedure.
Covers all the columns required from the Employee table.
Uses FirstName and LastName as included columns.
Minimizes index storage size and index key size.

What should you do?

A. Create a clustered index on the table.


B. Create a nonclustered index on the table.
C. Create a nonclustered filtered index on the table.
D. Create a clustered columnstore index on the table.
E. Create a nonclustered columnstore index on the table.
F. Create a hash index on the table.

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:

References: https://technet.microsoft.com/en-us/library/jj835095(v=sql.110).aspx
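
A sketch of such an index follows. The key column is an assumption, since the stored procedure's predicate is in an image that this dump does not reproduce:

```sql
-- Nonclustered index that covers the query: the assumed predicate column is
-- the key, while FirstName/LastName are stored at the leaf level only,
-- keeping both the index key and the index storage size small.
CREATE NONCLUSTERED INDEX IX_Employee_JobTitle
ON dbo.Employee (JobTitle)          -- assumed predicate column
INCLUDE (FirstName, LastName);
```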

QUESTION 82
Case Study

Background

You have a database named HR1 that includes a table named Employee.

You have several read-only, historical reports that contain regularly changing totals. The reports use multiple
queries to estimate payroll expenses. The queries run concurrently. Users report that the payroll estimate
reports do not always run. You must monitor the database to identify issues that prevent the reports from
running.

You plan to deploy the application to a database server that supports other applications. You must minimize
the amount of storage that the database requires.

Employee Table

You use the following Transact-SQL statements to create, configure, and populate the Employee table:

Application

You have an application that updates the Employees table. The application calls the following stored
procedures simultaneously and asynchronously:
UspA: This stored procedure updates only the EmployeeStatus column.
UspB: This stored procedure updates only the EmployeePayRate column.

The application uses views to control access to data. Views must meet the following requirements:
Allow users access to all columns in the tables that the view accesses.
Restrict updates to only the rows that the view returns.

Exhibit

Both of the stored procedures experience blocking issues. UspB must not abort if UspA commits changes to a
row before UspB commits changes to the same row. UspA must not abort if UspB commits changes to a row
before UspA commits changes to the same row.

You need to specify the transaction isolation levels to enable row versioning.

How should you complete the Transact-SQL statements? To answer, drag the Transact-SQL segments to the
correct locations. Each Transact-SQL segment may be used once, more than once, or not at all. You may
need to drag the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
References: https://technet.microsoft.com/en-us/library/ms175095(v=sql.105).aspx
https://technet.microsoft.com/en-us/library/ms173763(v=sql.105).aspx

QUESTION 83
HOTSPOT

Background

You have a database named HR1 that includes a table named Employee.

You have several read-only, historical reports that contain regularly changing totals. The reports use multiple
queries to estimate payroll expenses. The queries run concurrently. Users report that the payroll estimate
reports do not always run. You must monitor the database to identify issues that prevent the reports from
running.

You plan to deploy the application to a database server that supports other applications. You must minimize
the amount of storage that the database requires.

Employee Table

You use the following Transact-SQL statements to create, configure, and populate the Employee table:

Application

You have an application that updates the Employees table. The application calls the following stored
procedures simultaneously and asynchronously:
UspA: This stored procedure updates only the EmployeeStatus column.
UspB: This stored procedure updates only the EmployeePayRate column.

The application uses views to control access to data. Views must meet the following requirements:
Allow users access to all columns in the tables that the view accesses.
Restrict updates to only the rows that the view returns.

Exhibit

Users must only be able to modify data in the Employee table by using the vwEmployee view. You must
prevent users from viewing the view definition in catalog views.

You need to identify the view attribute to use when creating vwEmployee.

In the table below, identify the attributes that you must use.

NOTE: Make only one selection in each column.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
References: https://msdn.microsoft.com/en-us/library/ms187956.aspx
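
The two requirements map to the WITH ENCRYPTION attribute (hides the definition from catalog views) and WITH CHECK OPTION (restricts modifications to rows the view returns). A sketch, with an assumed column list and filter:

```sql
CREATE VIEW dbo.vwEmployee
WITH ENCRYPTION              -- definition is not visible in catalog views
AS
SELECT EmployeeID, EmployeeStatus, EmployeePayRate   -- assumed column list
FROM   dbo.Employee
WHERE  EmployeeStatus = 1                            -- assumed filter
WITH CHECK OPTION;           -- updates must keep rows visible through the view
```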

QUESTION 84
Background

You have a database named HR1 that includes a table named Employee.

You have several read-only, historical reports that contain regularly changing totals. The reports use multiple
queries to estimate payroll expenses. The queries run concurrently. Users report that the payroll estimate
reports do not always run. You must monitor the database to identify issues that prevent the reports from
running.

You plan to deploy the application to a database server that supports other applications. You must minimize
the amount of storage that the database requires.

Employee Table

You use the following Transact-SQL statements to create, configure, and populate the Employee table:

Application

You have an application that updates the Employees table. The application calls the following stored
procedures simultaneously and asynchronously:
UspA: This stored procedure updates only the EmployeeStatus column.
UspB: This stored procedure updates only the EmployeePayRate column.

The application uses views to control access to data. Views must meet the following requirements:
Allow users access to all columns in the tables that the view accesses.
Restrict updates to only the rows that the view returns.

Exhibit

You are analyzing the performance of the database environment. You discover that locks are held for a long
period of time as the reports are generated.

You need to generate the reports more quickly. The database must not use additional resources.

What should you do?

A. Update the transaction level of the report query session to READPAST.


B. Modify the report queries to use the UNION statement to combine the results of two or more queries.
C. Set the READ_COMMITTED_SNAPSHOT database option to ON.
D. Update the transaction level of the report query session to READ UNCOMMITTED.
E. Apply a nonclustered index to all tables used in the report queries.

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:

Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other
transactions from modifying data read by the current transaction. This is the least restrictive of the isolation
levels.

References:
https://technet.microsoft.com/en-us/library/ms173763(v=sql.105).aspx
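
The report session would set the level before running its queries; for example (the report query itself is a placeholder):

```sql
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Report queries now take no shared locks and are not blocked by writers,
-- at the cost of possibly reading uncommitted (dirty) data.
SELECT SUM(EmployeePayRate) AS EstimatedPayroll   -- placeholder report query
FROM   dbo.Employee;
```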

QUESTION 85
HOTSPOT

Background

You have a database named HR1 that includes a table named Employee.

You have several read-only, historical reports that contain regularly changing totals. The reports use multiple
queries to estimate payroll expenses. The queries run concurrently. Users report that the payroll estimate
reports do not always run. You must monitor the database to identify issues that prevent the reports from
running.

You plan to deploy the application to a database server that supports other applications. You must minimize
the amount of storage that the database requires.

Employee Table

You use the following Transact-SQL statements to create, configure, and populate the Employee table:

Application

You have an application that updates the Employees table. The application calls the following stored
procedures simultaneously and asynchronously:
UspA: This stored procedure updates only the EmployeeStatus column.
UspB: This stored procedure updates only the EmployeePayRate column.

The application uses views to control access to data. Views must meet the following requirements:
Allow users access to all columns in the tables that the view accesses.
Restrict updates to only the rows that the view returns.

Exhibit

You view the Deadlock Graph as shown in the exhibit. (Click the Exhibit tab.)

Use the drop-down menus to select the answer choice that answers each question based on the information
presented in the graphic.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
References: https://msdn.microsoft.com/en-us/library/ms186736.aspx

QUESTION 86
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a database that is 130 GB and contains 500 million rows of data.

Granular transactions and mass batch data imports change the database frequently throughout the day.
Microsoft SQL Server Reporting Services (SSRS) uses the database to generate various reports by using
several filters.

You discover that some reports time out before they complete.

You need to reduce the likelihood that the reports will time out.

Solution: You partition the largest tables.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:

QUESTION 87
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a database that is 130 GB and contains 500 million rows of data.

Granular transactions and mass batch data imports change the database frequently throughout the day.
Microsoft SQL Server Reporting Services (SSRS) uses the database to generate various reports by using
several filters.

You discover that some reports time out before they complete.

You need to reduce the likelihood that the reports will time out.

Solution: You change the transaction log file size to expand dynamically in increments of 200 MB.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:

QUESTION 88
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a database that is 130 GB and contains 500 million rows of data.

Granular transactions and mass batch data imports change the database frequently throughout the day.
Microsoft SQL Server Reporting Services (SSRS) uses the database to generate various reports by using
several filters.

You discover that some reports time out before they complete.

You need to reduce the likelihood that the reports will time out.

Solution: You create a file group for the indexes and a file group for the data files. You store the files for each
file group on separate disks.

Does this meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Consider creating two additional filegroups: Tables and Indexes. It is best not to place user objects in the
PRIMARY filegroup, because that is where SQL Server stores its system data and the metadata about your objects.
Create the table and its clustered index (which contains the table's data) on [Tables], and all nonclustered
indexes on [Indexes].
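
A sketch of that layout; the database name, file paths, and object names are all assumptions:

```sql
ALTER DATABASE ReportsDB ADD FILEGROUP [Tables];
ALTER DATABASE ReportsDB ADD FILEGROUP [Indexes];

-- Place each filegroup's file on a separate physical disk.
ALTER DATABASE ReportsDB ADD FILE
    (NAME = N'ReportsDB_Tables',  FILENAME = N'D:\Data\ReportsDB_Tables.ndf')
    TO FILEGROUP [Tables];
ALTER DATABASE ReportsDB ADD FILE
    (NAME = N'ReportsDB_Indexes', FILENAME = N'E:\Data\ReportsDB_Indexes.ndf')
    TO FILEGROUP [Indexes];

-- The table and its clustered index (the data) go to [Tables];
-- nonclustered indexes go to [Indexes].
CREATE TABLE dbo.SalesFact
(
    SaleID int NOT NULL,
    Amount money NOT NULL,
    CONSTRAINT PK_SalesFact PRIMARY KEY CLUSTERED (SaleID)
) ON [Tables];

CREATE NONCLUSTERED INDEX IX_SalesFact_Amount
ON dbo.SalesFact (Amount) ON [Indexes];
```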

QUESTION 89
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a 3-TB database. The database server has 64 CPU cores.

You plan to migrate the database to Microsoft Azure SQL Database.

You need to select the service tier for the Azure SQL database. The solution must meet or exceed the current
processing capacity.

Solution: You select the Premium service tier.

Does this meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Premium service is required for 3 TB of storage.

Single database DTU and storage limits

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-dtu

QUESTION 90
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a 3-TB database. The database server has 64 CPU cores.

You plan to migrate the database to Microsoft Azure SQL Database.

You need to select the service tier for the Azure SQL database. The solution must meet or exceed the current
processing capacity.

Solution: You select the Standard service tier.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Premium service is required for 3 TB of storage.

Single database DTU and storage limits

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-dtu

QUESTION 91
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a 3-TB database. The database server has 64 CPU cores.

You plan to migrate the database to Microsoft Azure SQL Database.

You need to select the service tier for the Azure SQL database. The solution must meet or exceed the current
processing capacity.

Solution: You select the Basic service tier.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Premium service is required for 3 TB of storage.

Single database DTU and storage limits

References: https://docs.microsoft.com/en-us/azure/sql-database/sql-database-service-tiers-dtu

QUESTION 92
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a table that has a clustered index and a nonclustered index. The indexes use different columns from
the table. You have a query named Query1 that uses the nonclustered index.

Users report that Query1 takes a long time to report results. You run Query1 and review the following statistics
for an index seek operation:

You need to resolve the performance issue.

Solution: You drop the nonclustered index.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The Actual Number of Rows is 3,571,454, while the Estimated Number of Rows is 0.
This indicates that the statistics are stale and need to be updated.

QUESTION 93
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a table that has a clustered index and a nonclustered index. The indexes use different columns from
the table. You have a query named Query1 that uses the nonclustered index.

Users report that Query1 takes a long time to report results. You run Query1 and review the following statistics
for an index seek operation:

You need to resolve the performance issue.

Solution: You defragment both indexes.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The Actual Number of Rows is 3,571,454, while the Estimated Number of Rows is 0.
This indicates that the statistics are stale and need to be updated.

QUESTION 94
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a table that has a clustered index and a nonclustered index. The indexes use different columns from
the table. You have a query named Query1 that uses the nonclustered index.

Users report that Query1 takes a long time to report results. You run Query1 and review the following statistics
for an index seek operation:

You need to resolve the performance issue.

Solution: You update statistics for the nonclustered index.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The Actual Number of Rows is 3,571,454, while the Estimated Number of Rows is 0.
This indicates that the statistics are stale and need to be updated.
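
The fix can be applied directly; the table and index names below are placeholders, since the question does not name them:

```sql
-- Refresh the stale statistics on the nonclustered index with a full scan.
UPDATE STATISTICS dbo.Table1 IX_Table1 WITH FULLSCAN;
```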

QUESTION 95
You have a reporting application that uses a table named Table1. You deploy a new batch update process to
perform updates to Table1.

The environment is configured with the following properties:

The database is configured with the default isolation setting.


The application and process use the default transaction handling.

You observe the application cannot access any rows that are in use by the process.

You have the following requirements:

Ensure the application is not blocked by the process.


Ensure the application has a consistent view of the data.
Ensure the application does not read dirty data.

You need to resolve the issue and meet the requirements with the least amount of administrative effort.

What should you do?

A. Enable the database for the ALLOW_SNAPSHOT_ISOLATION isolation level. Modify the application for the
SERIALIZABLE isolation level.
B. Enable the database for the READ_COMMITTED_SNAPSHOT isolation level.
C. Enable the application for the WITH (NOLOCK) hint.
D. Enable the database for the ALLOW_SNAPSHOT_ISOLATION isolation level. Modify the application and the
update process for the SNAPSHOT isolation level.

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Snapshot isolation must be enabled by setting the ALLOW_SNAPSHOT_ISOLATION ON database option
before it is used in transactions. This activates the mechanism for storing row versions in the temporary
database (tempdb).

READ COMMITTED is the default isolation level for SQL Server. It prevents dirty reads by specifying that
statements cannot read data values that have been modified but not yet committed by other transactions.
Other transactions can still modify, insert, or delete data between executions of individual statements within the
current transaction, resulting in non-repeatable reads, or "phantom" data.
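As a sketch, enabling the option requires only a single database-level statement and no application code changes (the database name here is an assumption):

ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

Afterward, statements running under the default READ COMMITTED level automatically use row versioning instead of shared locks, so readers are not blocked by the update process.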

Incorrect Answers:
A: SERIALIZABLE is the most restrictive isolation level, because it locks entire ranges of keys and holds the
locks until the transaction is complete. It encompasses REPEATABLE READ and adds the restriction that
other transactions cannot insert new rows into ranges that have been read by the transaction until the
transaction is complete.

References: https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-
server

QUESTION 96
DRAG DROP

You have two databases with the following settings:

You run the following Transact-SQL statements:

You need to select data from DiskTable and insert the data into MemTable. You must complete the insertion
operation into MemTable as an explicit transaction without immediate durability.

Which four Transact-SQL segments should you use? To answer, move the appropriate Transact-SQL
segments from the list of Transact-SQL segments to the answer area and arrange them in the correct order.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: BEGIN TRANSACTION

Box 2: UPDATE … #Disktable

Box 3: IF… SELECT INTO …#Disktable

Box 4: .. DELAYED_DURABILITY = ON

The COMMIT syntax is extended so you can force delayed transaction durability. If DELAYED_DURABILITY is
DISABLED or FORCED at the database level (see above) this COMMIT option is ignored.

Syntax:
COMMIT [ { TRAN | TRANSACTION } ] [ transaction_name | @tran_name_variable ] ] [ WITH
( DELAYED_DURABILITY = { OFF | ON } ) ]
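A minimal sketch of the complete pattern, assuming the column lists of the two tables are compatible:

BEGIN TRANSACTION;
INSERT INTO dbo.MemTable
SELECT * FROM dbo.DiskTable;
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);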

References: https://docs.microsoft.com/en-us/sql/relational-databases/logs/control-transaction-durability?
view=sql-server-2017

QUESTION 97
You run the following Transact-SQL statement:

Customer records may be inserted individually or in bulk from an application.

You observe that the application attempts to insert duplicate records.

You must ensure that duplicate records are not inserted and bulk insert operations continue without
notifications.

Which Transact-SQL statement should you run?

A. CREATE UNIQUE NONCLUSTERED INDEX IX_Customer_Code ON Customer (Code) WITH
(ONLINE = OFF)
B. CREATE UNIQUE INDEX IX_CUSTOMER_Code ON Customer (Code) WITH (IGNORE_DUP_KEY =
ON)
C. CREATE UNIQUE INDEX IX_Customer_Code ON Customer (Code) WITH (IGNORE_DUP_KEY =
OFF)
D. CREATE UNIQUE NONCLUSTERED INDEX IX_Customer_Code ON Customer (Code)
E. CREATE UNIQUE NONCLUSTERED INDEX IX_Customer_Code ON Customer (Code) WITH
(ONLINE = ON)

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
IGNORE_DUP_KEY = { ON | OFF } specifies the error response when an insert operation attempts to insert
duplicate key values into a unique index. The IGNORE_DUP_KEY option applies only to insert operations after
the index is created or rebuilt. The option has no effect when executing CREATE INDEX, ALTER INDEX, or
UPDATE. The default is OFF.
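To illustrate the behavior once the index from the correct answer is in place (assuming, for this sketch, that Code is the only required column in Customer):

CREATE UNIQUE INDEX IX_CUSTOMER_Code ON Customer (Code) WITH (IGNORE_DUP_KEY = ON);

INSERT INTO Customer (Code) VALUES ('C1'), ('C1');
/* One row is inserted; the duplicate is discarded with the warning
   "Duplicate key was ignored." instead of terminating the bulk insert. */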

Incorrect Answers:
ONLINE = { ON | OFF } specifies whether underlying tables and associated indexes are available for queries
and data modification during the index operation. The default is OFF.

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-index-transact-sql?view=sql-server-
2017

QUESTION 98
DRAG DROP

You manage a database with tables named Invoice and InvoiceDetails. Each invoice may have multiple
InvoiceDetails records.

Users update the InvoiceDetails table by using a .NET web application. The application retrieves records from
both tables and updates the tables by running an inline update statement.

Users experience slow performance when updating records in the application. The solution must meet the
following requirements:

Must use a stored procedure.


Must not use inline update statements
Must use a table-valued parameter.
Must call the stored procedure to update all records.

You need to optimize performance.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of
actions to the answer area and arrange them in the correct order.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: Create a user-defined table type…


Table-valued parameters are declared by using user-defined table types. You can use table-valued
parameters to send multiple rows of data to a Transact-SQL statement or a routine, such as a stored
procedure or function, without creating a temporary table or many parameters.

Box 2: ..read-only input parameter.

Table-valued parameters must be passed as input READONLY parameters to Transact-SQL routines.

Box 3:
Example
The following example uses Transact-SQL and shows you how to create a table-valued parameter type,
declare a variable to reference it, fill the parameter list, and then pass the values to a stored procedure.

USE AdventureWorks2012;
/* Create a table type. */
CREATE TYPE LocationTableType AS TABLE
( LocationName VARCHAR(50)
, CostRate INT );
GO

/* Create a procedure to receive data for the table-valued parameter. */


CREATE PROCEDURE dbo.usp_InsertProductionLocation
@TVP LocationTableType READONLY
/* ... */

/* Declare a variable that references the type. */


DECLARE @LocationTVP AS LocationTableType;

/* Add data to the table variable. */


INSERT INTO @LocationTVP (LocationName, CostRate)
SELECT Name, 0.00
FROM AdventureWorks2012.Person.StateProvince;

/* Pass the table variable data to a stored procedure. */


EXEC usp_InsertProductionLocation @LocationTVP;
GO

References: https://docs.microsoft.com/en-us/sql/relational-databases/tables/use-table-valued-parameters-
database-engine?view=sql-server-2017

QUESTION 99
You suspect deadlocks on a database.

Which two trace flags in the Microsoft SQL Server error log should you locate? Each correct answer presents
part of the solution.

NOTE: Each correct selection is worth one point.

A. 1204
B. 1211
C. 1222
D. 2528
E. 3205

Correct Answer: AC
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Trace flag 1204 returns the resources and types of locks participating in a deadlock and also the current
command affected.
Trace flag 1222 returns the resources and types of locks that are participating in a deadlock and also the
current command affected, in an XML format that does not comply with any XSD schema.
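The flags can be enabled globally (the -1 argument) so that deadlock information is written to the error log:

DBCC TRACEON (1204, 1222, -1);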

References: https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-
transact-sql?view=sql-server-2017

QUESTION 100
You have the following stored procedure that is called by other stored procedures and applications:

You need to modify the stored procedure to meet the following requirements:

Always return a value to the caller.


Return 0 if @Status is NULL.
Callers must be able to use @Status as a variable.

Which two actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Replace NULL values with 0. Add a PRINT statement to return @Status.


B. Add a RETURN statement.
C. Replace NULL values with 0. Add an output parameter to return @Status.
D. Replace NULL values with 0. Add a SELECT statement to return @Status.
E. Add a PRINT statement.
F. Add a SELECT statement to return @Status.
G. Add an output parameter to return @Status.

Correct Answer: BC
Section: (none)
Explanation

Explanation/Reference:
Explanation:
There are three ways of returning data from a procedure to a calling program: result sets, output parameters,
and return codes.
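A minimal sketch of the modified procedure (the procedure name and body are assumptions, since the original definition is not shown):

CREATE OR ALTER PROCEDURE dbo.uspGetStatus
    @Status int OUTPUT
AS
BEGIN
    SET @Status = ISNULL(@Status, 0); /* Return 0 if @Status is NULL */
    RETURN @Status;                   /* Always return a value to the caller */
END;

The OUTPUT parameter lets callers capture @Status into a variable, and the RETURN statement guarantees a value is always returned.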

References: https://docs.microsoft.com/en-us/sql/relational-databases/stored-procedures/return-data-from-a-
stored-procedure?view=sql-server-2017

QUESTION 101
DRAG DROP

You need to implement triggers to automate responses to the following events:

SQL Server logons
Database schema changes
Database updates

Which trigger types should you use? To answer, drag the appropriate trigger types to the appropriate
scenarios. Each trigger type may be used once, more than once, or not at all. You may need to drag the split
bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: LOGON
Logon triggers fire stored procedures in response to a LOGON event. This event is raised when a user session
is established with an instance of SQL Server.

Box 2: INSTEAD OF INSERT


An "INSTEAD of trigger" is executed instead of the original operation, and not combining with the operation.
INSTEAD OF triggers override the standard actions of the triggering statement. It can be used to bypass the
statement and execute a whole different statement, or just help us check and examine the data before the
action is done.

Box 3: DDL
DDL statements (CREATE or ALTER primarily) issued either by a particular schema/user or by any schema/
user in the database

Note:
You can write triggers that fire whenever one of the following operations occurs:

DML statements (INSERT, UPDATE, DELETE) on a particular table or view, issued by any user
DDL statements (CREATE or ALTER primarily) issued either by a particular schema/user or by any
schema/user in the database
Database events, such as logon/logoff, errors, or startup/shutdown, also issued either by a particular
schema/user or by any schema/user in the database

References:
https://docs.oracle.com/cd/B19306_01/server.102/b14220/triggers.htm
https://social.technet.microsoft.com/wiki/contents/articles/28152.t-sql-instead-of-triggers.aspx
https://docs.microsoft.com/en-us/sql/relational-databases/triggers/logon-triggers?view=sql-server-2017

QUESTION 102
HOTSPOT

You have a table named Person.Address that includes the following columns:

AddressID
AddressLine1
AddressLine2
City
StateProvinceID
PostalCode
RowGuid
ModifiedDate

You need to create a nonclustered index on PostalCode named IX_Address_PostalCode that uses the
following included columns:

AddressLine1
AddressLine2
City
StateProvinceID

How should you complete the Transact-SQL statement? To answer, select the appropriate Transact-SQL
segments in the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: INDEX

Box 2: ON

Box 3: INCLUDE
INCLUDE (column [ ,... n ] ) specifies the non-key columns to be added to the leaf level of the nonclustered
index. The nonclustered index can be unique or non-unique.
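Putting the pieces together, the completed statement should resemble:

CREATE NONCLUSTERED INDEX IX_Address_PostalCode
ON Person.Address (PostalCode)
INCLUDE (AddressLine1, AddressLine2, City, StateProvinceID);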

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-index-transact-sql?view=sql-server-
2017

QUESTION 103
HOTSPOT

You are developing queries and stored procedures to support a line-of-business application.

You need to use the appropriate isolation level based on the scenario.

Which isolation levels should you implement? To answer, select the appropriate isolation level for each
scenario in the answer area. Each isolation level may be used only once.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:

Explanation:

Box 1: READ UNCOMMITTED


Transactions running at the READ UNCOMMITTED level do not issue shared locks to prevent other
transactions from modifying data read by the current transaction. READ UNCOMMITTED transactions are also
not blocked by exclusive locks that would prevent the current transaction from reading rows that have been
modified but not committed by other transactions. When this option is set, it is possible to read uncommitted
modifications, which are called dirty reads. Values in the data can be changed and rows can appear or
disappear in the data set before the end of the transaction.

Box 2: READ COMMITTED


READ COMMITTED specifies that statements cannot read data that has been modified but not committed by
other transactions.
SERIALIZABLE specifies that statements cannot read data that has been modified but not yet committed by
other transactions.

Box 3: REPEATABLE READ


REPEATABLE READ specifies that statements cannot read data that has been modified but not yet committed
by other transactions and that no other transactions can modify data that has been read by the current
transaction until the current transaction completes.

Box 4: SNAPSHOT
SNAPSHOT specifies that data read by any statement in a transaction is the transactionally consistent version
of the data that existed at the start of the transaction.
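Each level is set per session before the transaction starts, for example:

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    /* Rows read here cannot be modified by other transactions
       until this transaction completes. */
COMMIT TRANSACTION;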

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/set-transaction-isolation-level-transact-sql?
view=sql-server-2017

QUESTION 104
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a database named dbReporting. Users run a large number of read-only ad hoc queries against
the database. The application and all ad hoc queries use default database transaction isolation levels. You set
the value of the READ_COMMITTED_SNAPSHOT database option to ON.

You have an application that updates 10 tables sequentially and modifies a large volume of records in a single
transaction. The updates are isolated from each other.

Users report an error that indicates that the version store is full.

You need to reduce the number of occurrences of the error.

Solution: You increase the auto growth rate for the dbReporting database.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:

QUESTION 105

HOTSPOT

You have a database that contains both disk-based and memory-optimized tables.

You need to create two modules. The modules must meet the requirements described in the following table.

Which programming object should you use for each module? To answer, select the appropriate object types in
the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:
Module 1: Interpreted stored procedure
An interpreted stored procedure can access both disk-based and memory-optimized tables.

Returning Data by Using OUTPUT Parameters


If you specify the OUTPUT keyword for a parameter in the procedure definition, the stored procedure can
return the current value of the parameter to the calling program when the stored procedure exits.

SQL Server stored procedures, views and functions are able to use the WITH ENCRYPTION option to
disguise the contents of a particular procedure or function from discovery.

Module 2: Natively compiled stored procedure


Natively compiled stored procedures are Transact-SQL stored procedures compiled to native code that access
memory-optimized tables. Natively compiled stored procedures allow for efficient execution of the queries and
business logic in the stored procedure.

In-Memory OLTP introduces the concept of native compilation. SQL Server can natively compile stored
procedures that access memory-optimized tables. SQL Server is also able to natively compile memory-
optimized tables. Native compilation allows faster data access and more efficient query execution than
interpreted (traditional) Transact-SQL. Native compilation of tables and stored procedures produce DLLs.
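A minimal sketch of the natively compiled shape (all object names here are assumptions):

CREATE PROCEDURE dbo.uspSample
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT OrderID FROM dbo.SalesOrder; /* memory-optimized table */
END;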

References:
https://technet.microsoft.com/en-us/library/ms187004(v=sql.105).aspx

https://msdn.microsoft.com/en-us/library/dn249342.aspx

https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/natively-compiled-stored-
procedures?view=sql-server-2017

QUESTION 106
You are creating the following two stored procedures:

A natively-compiled stored procedure


An interpreted stored procedure that accesses both disk-based and memory-optimized tables

Both stored procedures run within transactions.

You need to ensure that cross-container transactions are possible.

Which setting or option should you use?

A. the SET TRANSACTION_READ_COMMITTED isolation level for the connection


B. the SERIALIZABLE table hint on disk-based tables
C. the SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON option for the database
D. the SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=OFF option for the database

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Provide a supported isolation level for the memory-optimized table using a table hint, such as WITH
(SNAPSHOT). The need for the WITH (SNAPSHOT) hint can be avoided through the use of the database
option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT. When this option is set to ON, access to a
memory-optimized table under a lower isolation level is automatically elevated to SNAPSHOT isolation.
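Enabling the option is a single database-level statement:

ALTER DATABASE CURRENT
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;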

Incorrect Answers:
B: Accessing memory optimized tables using the READ COMMITTED isolation level is supported only for
autocommit transactions. It is not supported for explicit or implicit transactions.

References: https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/transactions-with-
memory-optimized-tables?view=sql-server-2017

QUESTION 107
You are developing a database reporting solution for a table that contains 900 million rows and is 103 GB.

The table is updated thousands of times a day, but data is not deleted.

The SELECT statements vary in the number of columns used and the amount of rows retrieved.

You need to reduce the amount of time it takes to retrieve data from the table. The solution must prevent data
duplication.

Which indexing strategy should you use?

A. a nonclustered index for each column in the table


B. a clustered columnstore index for the table
C. a hash index for the table
D. a clustered index for the table and nonclustered indexes for nonkey columns

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Columnstore indexes are the standard for storing and querying large data warehousing fact tables. It uses
column-based data storage and query processing to achieve up to 10x query performance gains in your data
warehouse over traditional row-oriented storage.

A clustered columnstore index is the physical storage for the entire table.

Generally, you should define the clustered index key with as few columns as possible.

A nonclustered index contains the index key values and row locators that point to the storage location of the
table data. You can create multiple nonclustered indexes on a table or indexed view. Generally, nonclustered
indexes should be designed to improve the performance of frequently used queries that are not covered by the
clustered index.
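As a sketch, a clustered columnstore index is created directly on the table with no column list (the table name is an assumption):

CREATE CLUSTERED COLUMNSTORE INDEX CCI_Reporting ON dbo.ReportingTable;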

References: https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview?
view=sql-server-2017

QUESTION 108
HOTSPOT

You plan to create a stored procedure that uses a table-valued parameter as an input parameter. The table-valued
parameter may hold between 1 and 10,000 rows when you run the stored procedure.

The stored procedure will use the rows within the table-valued parameter to filter the rows that will be returned
by the SELECT statement.

You need to create the stored procedure and ensure that it runs as quickly as possible.

How should you complete the procedure? To answer, select the appropriate Transact-SQL segments in the
answer area.

NOTE: Each correct answer selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-procedure-transact-sql?view=sql-server-2017

QUESTION 109
HOTSPOT

You use Query Store to optimize a query in a database.

The query has two execution plans:

Plan 2 is shown in the Plan 2 Execution Plan exhibit. (Click the Plan 2 Execution Plan tab.)
Plan 10 is shown in the Plan 10 Execution Plan exhibit. (Click the Plan 10 Execution Plan tab.)

You create an index at 22:24 based on the missing index suggestion in Plan 2.

The average duration statistics for the query is shown in the Tracked Queries exhibit. (Click the Exhibit button.)

You need to analyze the operators in the two execution plans.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Plan 2 Execution Plan

Plan 10 Execution Plan

Tracked Queries

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

QUESTION 110
DRAG DROP

You maintain a Microsoft Azure SQL Database instance.

You grant User1 the SELECT and EXECUTE permissions for all objects in the dbo schema.

You must create a stored procedure that allows User1 to view the following information:

details for each connection to the database


a list of all active user connections and internal tasks

You need to create the stored procedure for User1 and ensure that User1 can run the stored procedure
without any error.

How should you complete the Transact-SQL statements? To answer, drag the appropriate Transact-SQL
segments to the correct locations. Each Transact-SQL segment may be used once, more than once, or not at
all. You may need to drag the split bar between panes or scroll to view content.

NOTE: More than one combination of answer choices is correct. You will receive credit for any of the correct
combinations you select.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: Sys.dm_exec_connections
Sys.dm_exec_connections returns information about the connections established to this instance of SQL
Server and the details of each connection. Returns server wide connection information for SQL Server.
Returns current database connection information for SQL Database.

Box 2: sys.dm_exec_sessions
sys.dm_exec_sessions returns one row per authenticated session on SQL Server. sys.dm_exec_sessions is a
server-scope view that shows information about all active user connections and internal tasks.

Box 3: GRANT VIEW DATABASE STATE To User1


SQL Database: Requires VIEW DATABASE STATE to see all connections to the current database. VIEW
DATABASE STATE cannot be granted in the master database.
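A sketch of the resulting objects (the procedure name is an assumption):

CREATE PROCEDURE dbo.uspConnectionInfo
AS
SELECT * FROM sys.dm_exec_connections;
SELECT * FROM sys.dm_exec_sessions;
GO
GRANT VIEW DATABASE STATE TO User1;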

Incorrect Answers:
Sys.dm_exec_requests returns information about each request that is executing within SQL Server.

References:
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-
sessions-transact-sql?view=sql-server-2017

QUESTION 111
DRAG DROP

Case study

Background

You have a database named HR1 that includes a table named Employee.

You have several read-only, historical reports that contain regularly changing totals. The reports use multiple
queries to estimate payroll expenses. The queries run concurrently. Users report that the payroll estimate
reports do not always run. You must monitor the database to identify issues that prevent the reports from
running.

You plan to deploy the application to a database server that supports other applications. You must minimize
the amount of storage that the database requires.

Employee Table

You use the following Transact-SQL statements to create, configure, and populate the Employee table:

Application

You have an application that updates the Employees table. The application calls the following stored
procedures simultaneously and asynchronously:
UspA: This stored procedure updates only the EmployeeStatus column.
UspB: This stored procedure updates only the EmployeePayRate column.

The application uses views to control access to data. Views must meet the following requirements:
Allow users access to all columns in the tables that the view accesses.
Restrict updates to only the rows that the view returns.

You observe that the four indexes require a large amount of disk space. You must reduce the amount of disk
space that the indexes are using.

You need to create a single index to replace the existing indexes.

Which three Transact-SQL segments should you use to develop the solution? To answer, move the
appropriate Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange
them in the correct order.

NOTE: More than one order of answer choices is correct. You will receive credit for any of the correct orders
you select.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Scenario: You observe that the four indexes require a large amount of disk space. You must reduce the
amount of disk space that the indexes are using.
Current indexes:

QUESTION 112
HOTSPOT

A database administrator needs to perform specific actions for objects in the database.

You need to ensure the administrator can perform the tasks using the principle of least privilege.

Which schema permissions should you assign? To answer, select the appropriate permissions for each task in
the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-schema-transact-sql?view=sql-server-
2017#permissions

https://docs.microsoft.com/en-us/sql/relational-databases/collations/set-or-change-the-database-collation?
view=sql-server-2017

https://docs.microsoft.com/en-us/sql/t-sql/statements/drop-schema-transact-sql?view=sql-server-2017

https://docs.microsoft.com/en-us/sql/relational-databases/tables/rename-tables-database-engine?view=sql-
server-2017

https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql?view=sql-server-
2017#permissions

QUESTION 113
You use Microsoft SQL Server Profiler to evaluate a query named Query1. The Profiler report indicates the
following issues:
At each level of the query plan, a low total number of rows is processed.
The query uses many operations. This results in a high overall cost for the query.

You need to identify the information that will be useful for the optimizer.

What should you do?

A. Start a SQL Server Profiler trace for the event class Performance statistics in the Performance event
category.
B. Create one Extended Events session with the sqlserver.missing_column_statistics event added.
C. Start a SQL Server Profiler trace for the event class Soft Warnings in the Errors and Warnings event
category.
D. Create one Extended Events session with the sqlserver.error_reported event added.

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The Performance Statistics event class can be used to monitor the performance of queries, stored procedures,
and triggers that are executing. Each of the six event subclasses indicates an event in the lifetime of queries,
stored procedures, and triggers within the system. Using the combination of these event subclasses and the
associated sys.dm_exec_query_stats, sys.dm_exec_procedure_stats and sys.dm_exec_trigger_stats dynamic
management views, you can reconstitute the performance history of any given query, stored procedure, or
trigger.

References: https://docs.microsoft.com/en-us/sql/relational-databases/event-classes/performance-statistics-
event-class?view=sql-server-2017

QUESTION 114
You have an existing Microsoft SQL Trace script. You plan to convert the script to an Extended Events
session.

You need to collect the trace ID and other required information.

Which system table should you use?

A. dbo.sysnotifications
B. dbo.syssessions
C. trace_xe_action_map
D. sysdbmaintplans

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
References:
https://docs.microsoft.com/en-us/sql/relational-databases/extended-events/convert-an-existing-sql-trace-script-
to-an-extended-events-session?view=sql-server-2017

QUESTION 115
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some questions sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You have a database that is 130 GB and contains 500 million rows of data.

Granular transactions and mass batch data imports change the database frequently throughout the day.
Microsoft SQL Server Reporting Services (SSRS) uses the database to generate various reports by using
several filters.

You discover that some reports time out before they complete.

You need to reduce the likelihood that the reports will time out.

Solution: You increase the number of log files for the database. You store the log files across multiple disks.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Instead, create a file group for the indexes and a file group for the data files.

QUESTION 116
Note: This question is part of a series of questions that use the same scenario. For your convenience,
the scenario is repeated in each question. Each question presents a different goal and answer choices,
but the text of the scenario is exactly the same in each question in this series.

You have a database that contains the following tables: BlogCategory, BlogEntry, ProductReview, Product,
and SalesPerson. The tables were created using the following Transact SQL statements:

You must modify the ProductReview Table to meet the following requirements:
The table must reference the ProductID column in the Product table
Existing records in the ProductReview table must not be validated with the Product table.
Deleting records in the Product table must not be allowed if records are referenced by the ProductReview
table.
Changes to records in the Product table must propagate to the ProductReview table.

You also have the following database tables: Order, ProductTypes, and SalesHistory. The Transact-SQL
statements for these tables are not available.

You must modify the Orders table to meet the following requirements:
Create new rows in the table without granting INSERT permissions to the table.
Notify the sales person who places an order whether or not the order was completed.

You must add the following constraints to the SalesHistory table:


a constraint on the SaleID column that allows the field to be used as a record identifier
a constraint that uses the ProductID column to reference the ProductID column of the ProductTypes table
a constraint on the CategoryID column that allows one row with a null value in the column
a constraint that limits the SalePrice column to values greater than four

Finance department users must be able to retrieve data from the SalesHistory table for sales persons where
the value of the SalesYTD column is above a certain threshold.

You plan to create a memory-optimized table named SalesOrder. The table must meet the following
requirements:
The table must hold 10 million unique sales orders.
The table must use checkpoints to minimize I/O operations and must not use transaction logging.
Data loss is acceptable.

Performance for queries against the SalesOrder table that use WHERE clauses with exact equality operations
must be optimized.

You need to modify the environment to meet the requirements for the Orders table.

What should you create?

A. an AFTER UPDATE trigger


B. a user-defined function
C. a stored procedure with output parameters
D. an INSTEAD OF INSERT trigger

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
From Question: You must modify the Orders table to meet the following requirements:
Create new rows in the table without granting INSERT permissions to the table.
Notify the sales person who places an order whether or not the order was completed.

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-trigger-transact-sql?view=sql-server-2017
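A sketch of answer D against the scenario (the Orders column list and the notification step are assumptions, since the table's DDL is not given):

```sql
CREATE TRIGGER trg_Orders_InsteadOfInsert
ON dbo.Orders
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Perform the actual insert on behalf of the caller
    INSERT INTO dbo.Orders (OrderID, SalesPersonID, OrderDate)
    SELECT OrderID, SalesPersonID, OrderDate
    FROM inserted;
    -- Notify the sales person here (e.g. via Database Mail
    -- or a Service Broker message) whether the order completed
END;
```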

QUESTION 117
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a table that has a clustered index and a nonclustered index. The indexes use different columns from
the table. You have a query named Query1 that uses the nonclustered index.

Users report that Query1 takes a long time to report results. You run Query1 and review the following
statistics for an index seek operation:

You need to resolve the performance issue.

Solution: You rebuild the clustered index.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The query uses the nonclustered index, so improving the clustered index will not help.
We should update statistics for the nonclustered index.
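For example (the table and index names are hypothetical):

```sql
-- Refresh the statistics that back the nonclustered index used by Query1
UPDATE STATISTICS dbo.SalesOrders IX_SalesOrders_CustomerID WITH FULLSCAN;
```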

QUESTION 118
HOTSPOT

You are performing a code review for Transact-SQL statements.

Which action does the constraint in each statement perform? To answer, select the appropriate options in the
answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)

Explanation

Explanation/Reference:

QUESTION 119
You need to create a view that can be indexed.

You write the following statement.

What should you add at line 02?

A. with check_option
B. with recompile
C. with view_metadata
D. with schemabinding

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The following steps are required to create an indexed view and are critical to the successful implementation of
the indexed view:

1. Verify the SET options are correct for all existing tables that will be referenced in the view.
2. Verify that the SET options for the session are set correctly before you create any tables and the view.
3. Verify that the view definition is deterministic.
4. Create the view by using the WITH SCHEMABINDING option.
5. Create the unique clustered index on the view.
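Steps 4 and 5 can be sketched as follows (the view, table, and column names are hypothetical; ISNULL makes the SUM expression non-nullable, and COUNT_BIG(*) is required because the view uses GROUP BY):

```sql
CREATE VIEW dbo.vw_OrderTotals
WITH SCHEMABINDING
AS
SELECT OrderID,
       COUNT_BIG(*) AS LineCount,
       SUM(ISNULL(Quantity * UnitPrice, 0)) AS OrderTotal
FROM dbo.OrderLines
GROUP BY OrderID;
GO
-- The unique clustered index materializes the view
CREATE UNIQUE CLUSTERED INDEX IX_vw_OrderTotals
    ON dbo.vw_OrderTotals (OrderID);
```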

References: https://docs.microsoft.com/en-us/sql/relational-databases/views/create-indexed-views?view=sql-server-2017

QUESTION 120
You have a nonpartitioned table that has a single dimension. The table is named dim.Products.Projections.

The table is queried frequently by several line-of-business applications. The data is updated frequently
throughout the day by two processes.

Users report that when they query data from dim.Products.Projections, the responses are slower than
expected. The issue occurs when a large number of rows are being updated.

You need to prevent the updates from slowing down the queries.

What should you do?

A. Use the NOLOCK option.
B. Execute the DBCC UPDATEUSAGE statement.
C. Use the max worker threads option.
D. Use a table-valued parameter.
E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The NOLOCK hint allows SQL Server to read data from tables by ignoring any locks, and therefore without being blocked by other processes.
This can improve query performance, but also introduces the possibility of dirty reads.
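For example (the table and column names are hypothetical):

```sql
-- Reads without requesting shared locks; may return uncommitted (dirty) rows
SELECT ProductID, ProjectedQty
FROM dbo.Projections WITH (NOLOCK)
WHERE ProductID = 42;
```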

References: https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/

QUESTION 121
Your company runs end-of-the-month accounting reports. While the reports run, other financial records are
updated in the database.

Users report that the reports take longer than expected to run.

You need to reduce the amount of time it takes for the reports to run. The reports must show committed data
only.

What should you do?

A. Use the NOLOCK option.


B. Execute the DBCC UPDATEUSAGE statement.
C. Use the max worker threads option.
D. Use a table-valued parameter.
E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.
F. Set SET XACT_ABORT to ON.
G. Execute the ALTER TABLE T1 SET (LOCK_ESCALATION = AUTO); statement.
H. Use the OUTPUT parameters.

Correct Answer: E
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Snapshot isolation enhances concurrency for OLTP applications.

Once snapshot isolation is enabled, updated row versions for each transaction are maintained in tempdb. A
unique transaction sequence number identifies each transaction, and these unique numbers are recorded for
each row version. The transaction works with the most recent row versions having a sequence number before
the sequence number of the transaction. Newer row versions created after the transaction has begun are
ignored by the transaction.
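A sketch of how the reports would use it (the database, table, and column names are hypothetical):

```sql
-- One-time database setting that enables row versioning
ALTER DATABASE FinanceDB SET ALLOW_SNAPSHOT_ISOLATION ON;
GO
-- Each report session then opts in per transaction
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT AccountID, Balance
FROM dbo.Ledger;   -- reads committed row versions; does not block writers
COMMIT TRANSACTION;
```

Because the reports see only committed row versions, they meet the "committed data only" requirement without blocking the concurrent updates.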

References: https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-server

QUESTION 122
You have several real-time applications that constantly update data in a database. The applications run more
than 400 transactions per second that insert and update new metrics from sensors.

A new web dashboard is released to present the data from the sensors. Engineers report that the applications
take longer than expected to commit updates.

You need to change the dashboard queries to improve concurrency and to support reading uncommitted data.

What should you do?

A. Use the NOLOCK option.


B. Execute the DBCC UPDATEUSAGE statement.
C. Use the max worker threads option.
D. Use a table-valued parameter.
E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.
F. Set SET XACT_ABORT to ON.
G. Execute the ALTER TABLE T1 SET (LOCK_ESCALATION = AUTO); statement.
H. Use the OUTPUT parameters.

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The NOLOCK hint allows SQL Server to read data from tables by ignoring any locks, and therefore without being blocked by other processes.
This can improve query performance, but also introduces the possibility of dirty reads.

Incorrect Answers:
F: When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction
is terminated and rolled back.

G: DISABLE, not AUTO, would be better.


There are two more lock escalation modes: AUTO and DISABLE.
The AUTO mode enables lock escalation for partitioned tables only for the locked partition. For non-partitioned
tables it works like TABLE.
The DISABLE mode removes the lock escalation capability for the table and that is important when
concurrency issues are more important than memory needs for specific tables.

Note: SQL Server's locking mechanism uses memory resources to maintain locks. In situations where the
number of row or page locks increases to a level that decreases the server's memory resources to a minimal
level, SQL Server's locking strategy converts these locks to entire table locks, thus freeing memory held by the
many single row or page locks to one table lock. This process is called lock escalation, which frees memory,
but reduces table concurrency.

References: https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/

QUESTION 123
You have multiple stored procedures inside a transaction.

You need to ensure that all the data modified by the transaction is rolled back if a stored procedure causes a deadlock or times out.

What should you do?

A. Use the NOLOCK option.


B. Execute the DBCC UPDATEUSAGE statement.
C. Use the max worker threads option.
D. Use a table-valued parameter.
E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.
F. Set SET XACT_ABORT to ON.
G. Execute the ALTER TABLE T1 SET (LOCK_ESCALATION = AUTO); statement.
H. Use the OUTPUT parameters.

Correct Answer: F
Section: (none)
Explanation

Explanation/Reference:
Explanation:
When SET XACT_ABORT is ON, if a Transact-SQL statement raises a run-time error, the entire transaction is
terminated and rolled back.
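A sketch of the setting in use (the procedure names are hypothetical):

```sql
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    EXEC dbo.usp_Step1;
    EXEC dbo.usp_Step2;
-- Any run-time error in either procedure (including a deadlock victim error
-- or a lock timeout raised as an error) terminates and rolls back the
-- entire transaction automatically
COMMIT TRANSACTION;
```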

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/set-xact-abort-transact-sql?view=sql-server-2017

QUESTION 124
DRAG DROP

You manage a database that includes the tables shown in the exhibit. (Click the Exhibit button.)

You plan to create a DML trigger that reads the value of the LineTotal column for each row in the
PurchaseOrderDetail table. The trigger must add the value obtained to the value in the SubTotal column of
the PurchaseOrderHeader table.

You need to organize the list to form the appropriate Transact-SQL statement.

Which five Transact-SQL segments should you use to develop the solution? To answer, move the appropriate
Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange them in the
correct order.

Table diagram

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: Create TRIGGER NewPODetail

Box 2: ON PurchaseOrderDetail

Box 3: AFTER INSERT AS


Basic syntax:
CREATE TRIGGER trigger_name
ON { table | view }
{ FOR | AFTER | INSTEAD OF }
{ [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
AS
{ sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME <method specifier [ ; ] > }

Box 4: UPDATE PurchaseOrderHeader
SET SubTotal = SubTotal + LineTotal
FROM inserted

Box 5: WHERE PurchaseOrderHeader.PurchaseOrderID = Inserted.PurchaseOrderID

The trigger must read the value of the LineTotal column for each row in the PurchaseOrderDetail table. The
trigger must add the value obtained to the value in the SubTotal column of the PurchaseOrderHeader table.
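Assembled in order, the five segments form a statement along these lines (reconstructed from the boxes above):

```sql
CREATE TRIGGER NewPODetail
ON PurchaseOrderDetail
AFTER INSERT AS
UPDATE PurchaseOrderHeader
SET SubTotal = SubTotal + LineTotal
FROM inserted
WHERE PurchaseOrderHeader.PurchaseOrderID = inserted.PurchaseOrderID;
```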

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-trigger-transact-sql?view=sql-server-2017

QUESTION 125
You run the following Transact-SQL statements:

Records must only be added to the Orders table by using the view. If a customer name does not exist, then a
new customer name must be created.

You need to ensure that you can insert rows into the Orders table by using the view.

A. Add the CustomerID column from the Orders table and the WITH CHECK OPTION statement to the view.
B. Create an INSTEAD OF trigger on the view.
C. Add the WITH SCHEMABINDING statement to the view statement and create a clustered index on the view.
D. Remove the subquery from the view, add the WITH SCHEMABINDING statement, and add a trigger to the
Orders table to perform the required logic.

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The WITH CHECK OPTION clause forces all data-modification statements executed against the view to
adhere to the criteria set within the WHERE clause of the SELECT statement defining the view. Rows cannot
be modified in a way that causes them to vanish from the view.
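A sketch of answer A (the view definition and filter are assumptions, since the original statements are not shown):

```sql
CREATE OR ALTER VIEW dbo.vw_Orders
AS
SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE CustomerID IS NOT NULL
WITH CHECK OPTION;  -- inserts/updates through the view must satisfy the WHERE clause
```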

References: http://www.informit.com/articles/article.aspx?p=130855&seqNum=4

QUESTION 126
You run the following Transact-SQL statement:

There are multiple unique OrderID values. Most of the UnitPrice values for the same OrderID are different.

You need to create a single index seek query that does not use the following operators:

Nested loop
Sort
Key lookup

Which Transact-SQL statement should you run?

A. CREATE INDEX IX_OrderLines_1 ON OrderLines (OrderID, UnitPrice) INCLUDE


(Description, Quantity)
B. CREATE INDEX IX_OrderLines_1 ON OrderLines (OrderID, UnitPrice) INCLUDE
(Quantity)
C. CREATE INDEX IX_OrderLines_1 ON OrderLines (OrderID, UnitPrice, Quantity)
D. CREATE INDEX IX_OrderLines_1 ON OrderLines (UnitPrice, OrderID) INCLUDE
(Description, Quantity)

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
An index with nonkey columns can significantly improve query performance when all columns in the query are
included in the index either as key or nonkey columns. Performance gains are achieved because the query
optimizer can locate all the column values within the index; table or clustered index data is not accessed
resulting in fewer disk I/O operations.

Note: All data types except text, ntext, and image can be used as nonkey columns.

Incorrect Answers:
C: Redesign nonclustered indexes with a large index key size so that only columns used for searching and
lookups are key columns.

D: The most selective column should be placed first in the index key.

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-index-transact-sql?view=sql-server-2017

QUESTION 127
You are developing an ETL process to cleanse and consolidate incoming data. The ETL process will use a
reference table to identify which data must be cleansed in the target table. The server that hosts the table
restarts daily.

You need to minimize the amount of time it takes to execute the query and the amount of time it takes to
populate the reference table.

What should you do?

A. Convert the target table to a memory-optimized table. Create a natively compiled stored procedure to
cleanse and consolidate the data.
B. Convert the reference table to a memory-optimized table. Set the DURABILITY option to
SCHEMA_AND_DATA.
C. Create a native compiled stored procedure to implement the ETL process for both tables.
D. Convert the reference table to a memory-optimized table. Set the DURABILITY option to SCHEMA_ONLY.

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
If you use temporary tables, table variables, or table-valued parameters, consider converting them to memory-optimized tables and table variables to improve performance.

In-Memory OLTP provides the following objects that can be used for memory-optimizing temp tables and table
variables:
Memory-optimized tables
Durability = SCHEMA_ONLY
Memory-optimized table variables
Must be declared in two steps (rather than inline):
CREATE TYPE my_type AS TABLE ...; , then
DECLARE @mytablevariable my_type;.
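A sketch of answer D (the table and column names are hypothetical):

```sql
-- Non-durable memory-optimized reference table: the schema survives the daily
-- restart, the data does not, and no transaction log I/O is generated
CREATE TABLE dbo.CleansingReference
(
    ReferenceID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Pattern NVARCHAR(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```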

References: https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/faster-temp-table-and-table-variable-by-using-memory-optimization?view=sql-server-2017

QUESTION 128
You are designing a stored procedure for a database named DB1.

The following requirements must be met during the entire execution of the stored procedure:

The stored procedure must only read changes that are persisted to the database.
SELECT statements within the stored procedure should only show changes to the data that are made by
the stored procedure.

You need to configure the transaction isolation level for the stored procedure.

Which Transact-SQL statement or statements should you run?

A. SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
ALTER DATABASE DB1 SET READ_COMMITTED_SNAPSHOT ON
B. SET TRANSACTION ISOLATION LEVEL READ COMMITTED
ALTER DATABASE DB1 SET READ_COMMITTED_SNAPSHOT OFF
C. SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
D. SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
ALTER DATABASE DB1 SET READ_COMMITTED_SNAPSHOT OFF

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
READ COMMITTED specifies that statements cannot read data that has been modified but not committed by
other transactions. This prevents dirty reads. Data can be changed by other transactions between individual
statements within the current transaction, resulting in nonrepeatable reads or phantom data. This option is the
SQL Server default.

Incorrect Answers:
A, D: READ UNCOMMITTED specifies that statements can read rows that have been modified by other
transactions but not yet committed.

References: https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-server

QUESTION 129
HOTSPOT

You are developing a stored procedure with the following requirements:

Accepts an integer as input and inserts the value into a table.


Ensures new transactions are committed as part of the outer transactions.
Preserves existing transactions if the transaction in the procedure fails.
If the transaction in the procedure fails, rollback the transaction.

How should you complete the procedure? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: @TranCount> 0
Procedure called when there is an active transaction. Create a savepoint to be able to roll back only the work
done in the procedure if there is an error.

Box 2: @TranCount = 0
@TranCount = 0 means no transaction was started before the procedure was called. The procedure must commit the transaction it started.

Box 3: XACT_STATE() <> -1
If the transaction is still valid, just roll back to the savepoint set at the start of the stored procedure.
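The three boxes correspond to the standard savepoint pattern documented for SAVE TRANSACTION; a sketch (the procedure, table, and savepoint names are hypothetical):

```sql
CREATE PROCEDURE dbo.usp_InsertValue @Value INT
AS
BEGIN
    DECLARE @TranCount INT = @@TRANCOUNT;
    IF @TranCount > 0
        SAVE TRANSACTION ProcSave;  -- inside a caller's transaction: mark a savepoint
    ELSE
        BEGIN TRANSACTION;
    BEGIN TRY
        INSERT INTO dbo.Numbers (Value) VALUES (@Value);
        IF @TranCount = 0
            COMMIT TRANSACTION;     -- the procedure started it, so it commits it
    END TRY
    BEGIN CATCH
        IF @TranCount = 0
            ROLLBACK TRANSACTION;            -- procedure owns the transaction
        ELSE IF XACT_STATE() <> -1
            ROLLBACK TRANSACTION ProcSave;   -- undo only this procedure's work
        THROW;
    END CATCH
END;
```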

References: https://docs.microsoft.com/en-us/sql/t-sql/language-elements/save-transaction-transact-sql?view=sql-server-2017

QUESTION 130
You are designing a solution for a company that operates retail stores. Each store has a database that tracks
sales transactions. You create a summary table in the database at the corporate office. You plan to use the
table to record the quantity of each product sold at each store on each day. Managers will use this data to
identify reorder levels for products.

Every evening, stores must transmit sales data to the corporate office. The data must be inserted into the
summary table that includes the StoreID, ProductID, Qtysold, Totprodsales, and Datesold columns.

You need to prevent duplicate rows in the summary table. Each row must uniquely identify the store that sold
the product and the total amount sold for that store on a specific date.

What should you include in your solution?

A. Create a unique constraint.


B. Create a foreign key constraint to the StoreID column in each of the store tables.
C. Create a rule and bind it to the StoreID column.
D. Create a check constraint.
E. Create a table-valued user-defined function.

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
You can use UNIQUE constraints to make sure that no duplicate values are entered in specific columns that
do not participate in a primary key. Although both a UNIQUE constraint and a PRIMARY KEY constraint
enforce uniqueness, use a UNIQUE constraint instead of a PRIMARY KEY constraint when you want to
enforce the uniqueness of a column, or combination of columns, that is not the primary key.
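Against the scenario, the constraint would cover the columns that identify a store, product, and date combination (the summary table name is hypothetical):

```sql
ALTER TABLE dbo.SalesSummary
ADD CONSTRAINT UQ_SalesSummary_Store_Product_Date
    UNIQUE (StoreID, ProductID, Datesold);
```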

Incorrect Answers:
D: CHECK constraints enforce domain integrity by limiting the values that are accepted by one or more
columns.

Reference:
https://docs.microsoft.com/en-us/sql/relational-databases/tables/unique-constraints-and-check-constraints?view=sql-server-2017

QUESTION 131
You have the following stored procedure:

The Numbers table becomes unavailable when you run the stored procedure. The stored procedure obtains an
exclusive lock on the table and does not release the lock.

What are two possible ways to resolve the issue? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. Remove the implicit transaction and the SET ANSI_DEFAULTS ON statement.


B. Set the ANSI_DEFAULT statement to OFF and add a COMMIT TRANSACTION statement after the INSERT
statement.
C. Add a COMMIT TRANSACTION statement after the INSERT statement.
D. Remove the SET ANSI_DEFAULTS ON statement.

Correct Answer: CD
Section: (none)
Explanation

Explanation/Reference:
Explanation:
SET ANSI_DEFAULTS is a session-level setting. When enabled (ON), this option enables SET IMPLICIT_TRANSACTIONS (and some other options).
The SET IMPLICIT_TRANSACTIONS, when ON, the system is in implicit transaction mode.
This means that if @@TRANCOUNT = 0, any of the following Transact-SQL statements begins a new
transaction. It is equivalent to an unseen BEGIN TRANSACTION being executed first: ALTER TABLE,
FETCH, REVOKE, BEGIN TRANSACTION, GRANT, SELECT, CREATE, INSERT, TRUNCATE TABLE,
DELETE, OPEN, UPDATE, DROP.
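Option C in context (the procedure body is reconstructed; the table and column names are assumptions):

```sql
SET ANSI_DEFAULTS ON;   -- implies IMPLICIT_TRANSACTIONS ON
-- The INSERT implicitly begins a transaction and takes an exclusive lock...
INSERT INTO dbo.Numbers (Value) VALUES (1);
-- ...so an explicit COMMIT is needed to end the transaction and release the lock
COMMIT TRANSACTION;
```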

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/set-implicit-transactions-transact-sql?view=sql-server-2017

QUESTION 132
HOTSPOT

You have a Microsoft SQL Server database that has a table named Sales. The table is used for retrieving data
and is updated during non-business hours.

You run the following Transact-SQL statement:

You analyze the execution plan for the statement. (Click the Exhibit tab).

Execution Plan

You need to add an index that optimizes performance.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1:
Specifying the order in which key values are stored in an index is useful when queries referencing the table
have ORDER BY clauses that specify different directions for the key column or columns in that index. In these
cases, the index can remove the need for a SORT operator in the query plan; therefore, this makes the query
more efficient.

The Database Engine can move equally efficiently in either direction. An index defined as (RejectedQty DESC,
ProductID ASC) can still be used for a query in which the sort direction of the columns in the ORDER BY
clause are reversed. For example, a query with the ORDER BY clause ORDER BY RejectedQty ASC,
ProductID DESC can use the index.

Box 2:
You can include nonkey columns in a nonclustered index to avoid exceeding the current index size limitations
of a maximum of 16 key columns and a maximum index key size of 900 bytes. The Database Engine does not
consider nonkey columns when calculating the number of index key columns or index key size.
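An index matching the pattern described above would look like this (the included columns are hypothetical):

```sql
CREATE NONCLUSTERED INDEX IX_Sales_RejectedQty
    ON dbo.Sales (RejectedQty DESC, ProductID ASC)
    INCLUDE (OrderQty, UnitPrice);
```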

References: https://technet.microsoft.com/en-us/library/jj835095(v=sql.110).aspx

QUESTION 133
You have a database that users query frequently.

The users report that during peak business hours, the queries take longer than expected to execute.

A junior database administrator uses Microsoft SQL Server Profiler on the database server to trace the session
activities.

While performing the trace, the performance of the database server worsens, and the server crashes.

You need to recommend a solution to collect the query run times. The solution must minimize the impact on
the resources of the database server.

What should you recommend?

A. Increase the free space on the system drive of the database server, and then use SQL Server Profiler on
the server to trace the session activities.
B. Collect session activity data by using SQL Server Extended Events.
C. Clean up tempdb, and then use SQL Server Profiler on the database server to trace the session activities.
D. Collect performance data by using a Data Collector Set (DCS) in Performance Monitor.

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
SQL Server Extended Events has a highly scalable and highly configurable architecture that allows users to
collect as much or as little information as is necessary to troubleshoot or identify a performance problem.
Extended Events is a lightweight performance monitoring system that uses very few performance resources.
Extended Events provides two graphical user interfaces (New Session Wizard and New Session) to create,
modify, display, and analyze your session data.
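A minimal session that captures query run times can also be created in Transact-SQL (the session name and file path are hypothetical):

```sql
CREATE EVENT SESSION [QueryRunTimes] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text, sqlserver.session_id))
ADD TARGET package0.event_file
    (SET filename = N'C:\XEvents\QueryRunTimes.xel');
GO
ALTER EVENT SESSION [QueryRunTimes] ON SERVER STATE = START;
```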

References: https://docs.microsoft.com/en-us/sql/relational-databases/extended-events/extended-events?view=sql-server-2017

QUESTION 134
HOTSPOT

You have a Microsoft Azure SQL database.

Some queries take a long time to execute.

You need to identify whether any of the long-running queries prevent users from updating data in the
database.

Which query should you execute? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: sys.dm_tran_locks
sys.dm_tran_locks returns information about currently active lock manager resources in SQL Server 2017.
Each row represents a currently active request to the lock manager for a lock that has been granted or is
waiting to be granted.

Box 2: sys.dm_os_waiting_tasks
sys.dm_tran_locks.lock_owner_address is the memory address of the internal data structure that is used to track this request. This column can be joined with the resource_address column in sys.dm_os_waiting_tasks.
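The two DMVs join on those columns, for example:

```sql
-- Which sessions are waiting on locks, and which session blocks them
SELECT wt.session_id,
       wt.blocking_session_id,
       wt.wait_duration_ms,
       tl.resource_type,
       tl.request_mode
FROM sys.dm_tran_locks AS tl
JOIN sys.dm_os_waiting_tasks AS wt
    ON tl.lock_owner_address = wt.resource_address;
```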

References: https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-tran-locks-transact-sql?view=sql-server-2017

QUESTION 135
You have a relational data warehouse that contains 1 TB of data.

You have a stored procedure named usp_sp1 that generates an HTML fragment. The HTML fragment
contains color and font style.

You need to return the HTML fragment.

What should you do?

A. Use the NOLOCK option.


B. Execute the DBCC UPDATEUSAGE statement.
C. Use the max worker threads option.
D. Use a table-valued parameter.
E. Set SET ALLOW_SNAPSHOT_ISOLATION to ON.
F. Set SET XACT_ABORT to ON.
G. Execute the ALTER TABLE T1 SET (LOCK_ESCALATION = AUTO); statement.
H. Use the OUTPUT parameters.

Correct Answer: H
Section: (none)
Explanation

Explanation/Reference:
Explanation:
A stored procedure can return data to the calling application through one or more OUTPUT parameters: the procedure assigns values to the parameters, and the caller reads them after execution.
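A sketch of the OUTPUT-parameter approach (the procedure body and fragment are hypothetical):

```sql
CREATE OR ALTER PROCEDURE dbo.usp_sp1
    @HtmlFragment NVARCHAR(MAX) OUTPUT
AS
BEGIN
    SET @HtmlFragment =
        N'<span style="color:red;font-weight:bold">Total sales</span>';
END;
GO
DECLARE @Html NVARCHAR(MAX);
EXEC dbo.usp_sp1 @HtmlFragment = @Html OUTPUT;
SELECT @Html AS HtmlFragment;
```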

References: https://docs.microsoft.com/en-us/sql/connect/jdbc/using-a-stored-procedure-with-output-parameters?view=sql-server-2017

QUESTION 136
Note: This question is part of a series of questions that use the same or similar answer choices. As
answer choice may be correct for more than one question in the series. Each question is independent
of the other questions in this series.
Information and details provided in a question apply only to that question.

You have a Microsoft SQL Server database named DB1 that contains the following tables:

You frequently run the following queries:

There are no foreign key relationships between TBL1 and TBL2.

You need to minimize the amount of time required for the two queries to return records from the tables.

What should you do?

A. Create clustered indexes on TBL1 and TBL2.


B. Create a clustered index on TBL1.Create a nonclustered index on TBL2 and add the most frequently
queried column as included columns.
C. Create a nonclustered index on TBL2 only.
D. Create UNIQUE constraints on both TBL1 and TBL2. Create a partitioned view that combines columns
from TBL1 and TBL2.
E. Drop existing indexes on TBL1 and then create a clustered columnstore index. Create a nonclustered
columnstore index on TBL1.Create a nonclustered index on TBL2.
F. Drop existing indexes on TBL1 and then create a clustered columnstore index. Create a nonclustered
columnstore index on TBL1.Make no changes to TBL2.
G. Create CHECK constraints on both TBL1 and TBL2. Create a partitioned view that combines columns
from TBL1 and TBL2.
H. Create an indexed view that combines columns from TBL1 and TBL2.

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:

QUESTION 137
Note: This question is part of a series of questions that use the same or similar answer choices. As
answer choice may be correct for more than one question in the series. Each question is independent
of the other questions in this series.
Information and details provided in a question apply only to that question.

You have a Microsoft SQL Server database named DB1 that contains the following tables:

There are no foreign key relationships between TBL1 and TBL2.

You need to minimize the amount of time required for queries that use data from TBL1 and TBL2 to return
data.

What should you do?

A. Create clustered indexes on TBL1 and TBL2.


B. Create a clustered index on TBL1.Create a nonclustered index on TBL2 and add the most frequently
queried column as included columns.
C. Create a nonclustered index on TBL2 only.
D. Create UNIQUE constraints on both TBL1 and TBL2. Create a partitioned view that combines columns
from TBL1 and TBL2.
E. Drop existing indexes on TBL1 and then create a clustered columnstore index. Create a nonclustered
columnstore index on TBL1.Create a nonclustered index on TBL2.
F. Drop existing indexes on TBL1 and then create a clustered columnstore index. Create a nonclustered
columnstore index on TBL1.Make no changes to TBL2.
G. Create CHECK constraints on both TBL1 and TBL2. Create a partitioned view that combines columns
from TBL1 and TBL2.
H. Create an indexed view that combines columns from TBL1 and TBL2.

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
References: http://www.sqlservergeeks.com/sql-server-indexing-for-aggregates-in-sql-server/

QUESTION 138
Note: This question is part of a series of questions that use the same or similar answer choices. As
answer choice may be correct for more than one question in the series. Each question is independent
of the other questions in this series.
Information and details provided in a question apply only to that question.

You have a Microsoft SQL Server database named DB1 that contains the following tables:

Users frequently run the following query:

Users report that the query takes a long time to return results.

You need to minimize the amount of time required for the query to return data.

What should you do?

A. Create clustered indexes on TBL1 and TBL2.
B. Create a clustered index on TBL1. Create a nonclustered index on TBL2 and add the most frequently queried columns as included columns.
C. Create a nonclustered index on TBL2 only.
D. Create UNIQUE constraints on both TBL1 and TBL2. Create a partitioned view that combines columns from TBL1 and TBL2.
E. Drop existing indexes on TBL1 and then create a clustered columnstore index. Create a nonclustered columnstore index on TBL1. Create a nonclustered index on TBL2.
F. Drop existing indexes on TBL1 and then create a clustered columnstore index. Create a nonclustered columnstore index on TBL1. Make no changes to TBL2.
G. Create CHECK constraints on both TBL1 and TBL2. Create a partitioned view that combines columns
from TBL1 and TBL2.
H. Create an indexed view that combines columns from TBL1 and TBL2.

Correct Answer: H
Section: (none)
Explanation

Explanation/Reference:

QUESTION 139
Note: This question is part of a series of questions that use the same or similar answer choices. An
answer choice may be correct for more than one question in the series. Each question is independent
of the other questions in this series.
Information and details provided in a question apply only to that question.

You have a Microsoft SQL Server database named DB1 that contains the following tables:

Users frequently run the following query. The users report that the query takes a long time to return results.

You need to minimize the amount of time required for the query to return data.

What should you do?

A. Create clustered indexes on TBL1 and TBL2.


B. Create a clustered index on TBL1. Create a nonclustered index on TBL2 and add the most frequently queried columns as included columns.
C. Create a nonclustered index on TBL2 only.
D. Create UNIQUE constraints on both TBL1 and TBL2. Create a partitioned view that combines columns from TBL1 and TBL2.
E. Drop existing indexes on TBL1 and then create a clustered columnstore index. Create a nonclustered columnstore index on TBL1. Create a nonclustered index on TBL2.
F. Drop existing indexes on TBL1 and then create a clustered columnstore index. Create a nonclustered columnstore index on TBL1. Make no changes to TBL2.
G. Create CHECK constraints on both TBL1 and TBL2. Create a partitioned view that combines columns
from TBL1 and TBL2.
H. Create an indexed view that combines columns from TBL1 and TBL2.

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
A partitioned view is a view defined by a UNION ALL of member tables structured in the same way, but stored
separately as multiple tables in either the same instance of SQL Server or in a group of autonomous instances
of SQL Server servers, called federated database servers.

References: https://docs.microsoft.com/en-us/sql/t-sql/statements/create-view-transact-sql?view=sql-server-2017#partitioned-views
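A minimal partitioned-view sketch, assuming two member tables split by year (the table names and the partitioning column are hypothetical, not from the exhibit):

```sql
-- Each member table constrains its slice of the partitioning column
CREATE TABLE dbo.Sales2016 (
    SaleYear int   NOT NULL CHECK (SaleYear = 2016),
    Amount   money NOT NULL
);
CREATE TABLE dbo.Sales2017 (
    SaleYear int   NOT NULL CHECK (SaleYear = 2017),
    Amount   money NOT NULL
);
GO
-- The view is the UNION ALL of the identically structured members;
-- the CHECK constraints let the optimizer skip irrelevant members
CREATE VIEW dbo.AllSales
AS
SELECT SaleYear, Amount FROM dbo.Sales2016
UNION ALL
SELECT SaleYear, Amount FROM dbo.Sales2017;
```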

QUESTION 140
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You need to create a stored procedure that updates the Customer, CustomerInfo, OrderHeader, and
OrderDetail tables in order.

You need to ensure that the stored procedure:

Runs within a single transaction.


Commits updates to the Customer and CustomerInfo tables regardless of the status of updates to the
OrderHeader and OrderDetail tables.
Commits changes to all four tables when updates to all four tables are successful.

Solution: You create a stored procedure that includes the following Transact-SQL code:

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Need to handle the case where the first two updates (Customer, CustomerInfo) are successful, but either the
third or the fourth update (OrderHeader, OrderDetail) fails. One approach is to set a variable in the BEGIN
TRY block and test that variable in the BEGIN CATCH block.

Note: XACT_STATE indicates whether the request has an active user transaction, and whether the transaction
is capable of being committed.
XACT_STATE =1: the current request has an active user transaction. The request can perform any actions,
including writing data and committing the transaction.

References:
https://docs.microsoft.com/en-us/sql/t-sql/functions/xact-state-transact-sql
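One way to meet the stated goals is sketched below with a savepoint; the UPDATE bodies and the ModifiedDate column are placeholders, not the actual statements from the exhibit:

```sql
BEGIN TRANSACTION;
BEGIN TRY
    UPDATE dbo.Customer     SET ModifiedDate = SYSDATETIME();  -- placeholder update
    UPDATE dbo.CustomerInfo SET ModifiedDate = SYSDATETIME();  -- placeholder update
    SAVE TRANSACTION CustomerDone;  -- savepoint after the first two updates
    UPDATE dbo.OrderHeader  SET ModifiedDate = SYSDATETIME();  -- placeholder update
    UPDATE dbo.OrderDetail  SET ModifiedDate = SYSDATETIME();  -- placeholder update
END TRY
BEGIN CATCH
    -- Roll back only to the savepoint, keeping the customer updates
    IF XACT_STATE() = 1
        ROLLBACK TRANSACTION CustomerDone;
END CATCH;
IF XACT_STATE() = 1
    COMMIT TRANSACTION;  -- commits whatever work survived
```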

QUESTION 141
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a database that contains a table named Employees. The table stores information about the
employees of your company.

You need to implement and enforce the following business rules:

Limit the values that are accepted by the Salary column.


Prevent salaries less than $15,000 or greater than $300,000 from being entered.
Determine valid values by using logical expressions.
Do not validate data integrity when running DELETE statements.

Solution: You implement a foreign key on the table.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
A check constraint is needed.

References:
https://en.wikipedia.org/wiki/Check_constraint
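A CHECK constraint implementing the stated rules might look like this (assuming a numeric Salary column on the Employees table):

```sql
-- A logical expression limits Salary to the allowed range;
-- CHECK constraints are not evaluated by DELETE statements
ALTER TABLE dbo.Employees
ADD CONSTRAINT CK_Employees_Salary
    CHECK (Salary >= 15000 AND Salary <= 300000);
```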

QUESTION 142
DRAG DROP

You must troubleshoot performance issues that users report. You identify the following representative user
sessions:

You need to configure the appropriate Extended Events.

Which target should you use for each session? To answer, drag the appropriate Extended Event targets to the
correct sessions. Each Extended Event target may be used once, or not at all. You may need to drag the split
bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Session 1: Event Files
The event file target is used to write event session output from complete memory buffers to disk. This is an
asynchronous target.

Session 2: Histogram
The histogram target is used to count the number of times that a specified event occurs, based on a specified
event column or action. This is an asynchronous target.

Incorrect Answers:
ETW:
The Extended Events infrastructure supports the correlation of data from SQL Server, and under certain
conditions, the correlation of data from the operating system and database applications. In the latter case,
Extended Events output must be directed to Event Tracing for Windows (ETW) to correlate the event data with
operating system or application event data.

Event Counter:
An event counter counts all specified events that occur during an Extended Events session. Use it to obtain
information about workload characteristics without adding the overhead of full event collection. This is a
synchronous target.

References:
https://docs.microsoft.com/en-us/sql/database-engine/sql-server-extended-events-targets

QUESTION 143
You have a view that includes an aggregate.

You must be able to change the values of columns in the view. The changes must be reflected in the tables
that the view uses.

You need to ensure that you can update the view.

What should you create?

A. a nonclustered index
B. a schema-bound view
C. a stored procedure
D. an INSTEAD OF trigger

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
When you create a view, you must obey the following conditions in order to make the view updateable:
The columns being modified in the view have to directly reference the underlying table columns. As a
consequence, the view’s columns being modified cannot be the result of an aggregate function or a
computed column.

The only way to make data changes on a non-updateable view is by using INSTEAD OF DML triggers. This
way you can use procedural code to overcome the limitation.

Reference:
https://www.mssqltips.com/sqlservertip/5984/sql-server-trigger-on-view-example/
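A sketch of an INSTEAD OF UPDATE trigger on a view; the view, base table, and column names below are hypothetical:

```sql
CREATE TRIGGER dbo.trg_vSales_Update
ON dbo.vSales                -- hypothetical view that contains an aggregate
INSTEAD OF UPDATE
AS
BEGIN
    -- Translate the attempted view update into an update
    -- of the underlying base table
    UPDATE s
    SET    s.Amount = i.Amount
    FROM   dbo.Sales AS s
    INNER JOIN inserted AS i
            ON i.SaleID = s.SaleID;
END;
```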

QUESTION 144
HOTSPOT

You have a database named Sales.

You need to create a table named Customer that includes the columns described in the following table:

How should you complete the Transact SQL statement? To answer, select the appropriate Transact-SQL
segments in the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: random
Random is a random masking function for use on any numeric type to mask the original value with a random
value within a specified range.

Box 2: partial
Partial is a custom string masking function that exposes the first and last characters and adds a custom
padding string in the middle.

Box 3: email
Email is a masking method that exposes the first letter of an email address and the constant suffix ".com", in
the form of an email address, for example aXXX@XXXX.com.

References:
https://docs.microsoft.com/en-us/sql/relational-databases/security/dynamic-data-masking
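Putting the three masking functions together, the CREATE TABLE statement might look like this (the column names and parameter values are assumptions about the exhibit):

```sql
CREATE TABLE dbo.Customer (
    CustomerID   int IDENTITY(1,1) PRIMARY KEY,
    CreditLimit  int
        MASKED WITH (FUNCTION = 'random(1, 100)'),          -- random value in range
    PhoneNumber  varchar(20)
        MASKED WITH (FUNCTION = 'partial(0, "XXX-XXX-", 4)'), -- custom padding, last 4 shown
    EmailAddress varchar(100)
        MASKED WITH (FUNCTION = 'email()')                  -- first letter + .com suffix
);
```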

QUESTION 145
HOTSPOT

You have tables with the following requirements:

Some records in the table named OrderLines do not exist in the table named Order.
The column named OrderNumber must be a unique value in the Order table.
The OrderNumber column on the OrderLines table allows null values.
The OrderNumber column on the OrderLines table must be validated with the Order table.
Must not allow a new record in the OrderLines table that does not match a record in the Order table.

How should you complete the statements? To answer, select the appropriate transact-SQL segments from the
drop-down menus in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: PRIMARY KEY

Box 2: NOCHECK
NOCHECK is needed because some records in the OrderLines table do not exist in the Order table.

Box 3: FOREIGN KEY


FOREIGN KEY allows inserting NULL values if there is no NOT NULL constraint defined on this key, but the
PRIMARY KEY does not accept NULLs.

References:
https://www.sqlshack.com/commonly-used-sql-server-constraints-foreign-key-check-default/
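The requirements above can be met with a PRIMARY KEY on Order and a FOREIGN KEY added WITH NOCHECK on OrderLines, for example:

```sql
ALTER TABLE dbo.[Order]
ADD CONSTRAINT PK_Order_OrderNumber PRIMARY KEY (OrderNumber);

-- WITH NOCHECK skips validation of the existing OrderLines rows
-- (some of which have no matching Order row), but every NEW row
-- is still validated against the Order table; NULLs remain allowed
ALTER TABLE dbo.OrderLines WITH NOCHECK
ADD CONSTRAINT FK_OrderLines_Order
    FOREIGN KEY (OrderNumber) REFERENCES dbo.[Order] (OrderNumber);
```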

QUESTION 146
You are optimizing the performance of a batch update process. You have tables and indexes that were
created by running the following Transact-SQL statements:

The following query runs nightly to update the isCreditValidated field:

You review the database and make the following observations:

Most of the IsCreditValidated values in the Invoices table are set to a value of 1.
There are many unique InvoiceDate values.
The CreditValidation table does not have an index.
Statistics for the index IX_invoices_CustomerID_Filter_IsCreditValidated indicate there are no individual
seeks but multiple individual updates.

You need to ensure that any indexes added can be used by the update query. If the
IX_invoices_CustomerId_Filter_IsCreditValidated index cannot be used by the query, it must be removed.
Otherwise, the query must be modified to use the index.

Which three actions should you perform? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Add a filtered nonclustered index to Invoices on InvoiceDate that selects where IsCreditNote = 1 and
IsCreditValidated = 0.
B. Rewrite the update query so that the condition for IsCreditValidated = 0 precedes the condition for
IsCreditNote = 1.
C. Create a nonclustered index for invoices in IsCreditValidated, InvoiceDate with an include statement using
IsCreditNote and CustomerID.
D. Add a nonclustered index for CreditValidation on CustomerID.
E. Drop the IX_invoices_CustomerId_Filter_IsCreditValidatedIndex.

Correct Answer: ABE


Section: (none)
Explanation

Explanation/Reference:
Explanation:
A filtered index is an optimized nonclustered index especially suited to cover queries that select from a well-
defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered
index can improve query performance as well as reduce index maintenance and storage costs compared with
full-table indexes.

References:
https://docs.microsoft.com/en-us/sql/relational-databases/indexes/create-filtered-indexes
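The filtered index from answer A could be created along these lines, matching the filter used by the nightly update (the index name is illustrative):

```sql
-- Indexes only the small subset of rows the update query touches:
-- unvalidated credit notes
CREATE NONCLUSTERED INDEX IX_Invoices_InvoiceDate_Unvalidated
    ON dbo.Invoices (InvoiceDate)
    WHERE IsCreditNote = 1 AND IsCreditValidated = 0;
```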

QUESTION 147
Note: This question is part of a series of questions that use the same or similar answer choices. An
answer choice may be correct for more than one question in the series. Each question is independent
of the other questions in this series.

Information and details provided in a question apply only to that question.

You have a Microsoft SQL Server database named DB1 that contains the following tables:

There are no foreign key relationships between TBL1 and TBL2.

You need to create a query that includes data from both tables and minimizes the amount of time required for
the query to return data.

What should you do?

A. Create clustered indexes on TBL1 and TBL2.


B. Create a clustered index on TBL1.
Create a nonclustered index on TBL2 and add the most frequently queried column as included columns.
C. Create a nonclustered index on TBL2 only.
D. Create UNIQUE constraints on both TBL1 and TBL2.
Create a partitioned view that combines columns from TBL1 and TBL2.
E. Drop existing indexes on TBL1 and then create a clustered columnstore index.
Create a nonclustered columnstore index on TBL1.
Create a nonclustered index on TBL2.
F. Drop existing indexes on TBL1 and then create a clustered columnstore index.
Create a nonclustered columnstore index on TBL1.
Make no changes to TBL2.
G. Create CHECK constraints on both TBL1 and TBL2.
Create a partitioned view that combines columns from TBL1 and TBL2.
H. Create an indexed view that combines columns from TBL1 and TBL2.

Correct Answer: G
Section: (none)
Explanation

Explanation/Reference:
Explanation:
A partitioned view is a view defined by a UNION ALL of member tables structured in the same way, but stored
separately as multiple tables in either the same instance of SQL Server or in a group of autonomous instances
of SQL Server servers, called federated database servers.

Conditions for Creating Partitioned Views Include:


The select list
All columns in the member tables should be selected in the column list of the view definition.
The columns in the same ordinal position of each select list should be of the same type, including
collations. It is not sufficient for the columns to be implicitly convertible types, as is generally the case for
UNION.

Also, at least one column (for example <col>) must appear in all the select lists in the same ordinal position.
This <col> should be defined in a way that the member tables T1, ..., Tn have CHECK constraints C1, ..., Cn
defined on <col>, respectively.

References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-view-transact-sql

QUESTION 148
Note: This question is part of a series of questions that use the same or similar answer choices. An
answer choice may be correct for more than one question in the series. Each question is independent
of the other questions in this series.

Information and details provided in a question apply only to that question.

You have a Microsoft SQL Server database named DB1 that contains the following tables:

There are no foreign key relationships between TBL1 and TBL2.

You need to minimize the amount of time required for queries that use data from TBL1 and TBL2 to return data.

What should you do?

A. Create clustered indexes on TBL1 and TBL2.


B. Create a clustered index on TBL1.
Create a nonclustered index on TBL2 and add the most frequently queried column as included columns.
C. Create a nonclustered index on TBL2 only.
D. Create UNIQUE constraints on both TBL1 and TBL2.
Create a partitioned view that combines columns from TBL1 and TBL2.
E. Drop existing indexes on TBL1 and then create a clustered columnstore index.
Create a nonclustered columnstore index on TBL1.
Create a nonclustered index on TBL2.
F. Drop existing indexes on TBL1 and then create a clustered columnstore index.
Create a nonclustered columnstore index on TBL1.
Make no changes to TBL2.
G. Create CHECK constraints on both TBL1 and TBL2.
Create a partitioned view that combines columns from TBL1 and TBL2.
H. Create an indexed view that combines columns from TBL1 and TBL2.

Correct Answer: G
Section: (none)
Explanation

Explanation/Reference:
Explanation:
A partitioned view is a view defined by a UNION ALL of member tables structured in the same way, but stored
separately as multiple tables in either the same instance of SQL Server or in a group of autonomous instances
of SQL Server servers, called federated database servers.

Conditions for Creating Partitioned Views Include:


The select list
All columns in the member tables should be selected in the column list of the view definition.
The columns in the same ordinal position of each select list should be of the same type, including collations. It
is not sufficient for the columns to be implicitly convertible types, as is generally the case for UNION.

Also, at least one column (for example <col>) must appear in all the select lists in the same ordinal position.
This <col> should be defined in a way that the member tables T1, ..., Tn have CHECK constraints C1, ..., Cn
defined on <col>, respectively.

References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-view-transact-sql

QUESTION 149
You have multiple queries that take a long time to complete.

You need to identify the cause by using detailed information about the Transact-SQL statements in the
queries. The Transact-SQL statements must not run as part of the analysis.

Which Transact-SQL statement should you run?

A. SET STATISTICS PROFILE OFF


B. SET SHOWPLAN_TEXT OFF
C. SET SHOWPLAN_ALL ON
D. SET STATISTICS PROFILE ON

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
Explanation:
SET SHOWPLAN_ALL ON causes Microsoft SQL Server not to execute Transact-SQL statements. Instead,
SQL Server returns detailed information about how the statements are executed and provides estimates of the
resource requirements for the statements.

Incorrect Answers:
D: When STATISTICS PROFILE is ON, each executed query returns its regular result set, followed by an
additional result set that shows a profile of the query execution.
Displays the profile information for a statement. STATISTICS PROFILE works for ad hoc queries, views, and
stored procedures.

References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/set-showplan-all-transact-sql
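A typical usage pattern is sketched below; the SELECT is a placeholder statement, not one of the slow queries from the question:

```sql
SET SHOWPLAN_ALL ON;
GO
-- Compiled but NOT executed: SQL Server returns the estimated
-- plan rows (operators, costs, row estimates) instead of data
SELECT * FROM dbo.Orders WHERE CustomerID = 42;
GO
SET SHOWPLAN_ALL OFF;
GO
```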

QUESTION 150
DRAG DROP

You have a memory-optimized table named Customer. The table is accessed by a stored procedure named
ManageCustomer.

The database was created in Microsoft SQL Server 2014. A backup and restore operation was used to move
the database to SQL Server 2016.

You have performance issues with the stored procedure.

You need to resolve the performance issues and ensure the table statistics are updated automatically.

Which three Transact-SQL segments should you use to develop the solution? To answer, move the
appropriate Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange
them in the correct order.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Step 1: ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL


Sets certain database behaviors to be compatible with the specified version of SQL Server.
Syntax:
ALTER DATABASE database_name
SET COMPATIBILITY_LEVEL = { 150 | 140 | 130 | 120 | 110 | 100 | 90 }

Step 2: UPDATE STATISTICS Customer


UPDATE STATISTICS updates query optimization statistics on a table or indexed view.

Step 3: EXEC sp_recompile N'ManageCustomer'

sp_recompile causes stored procedures, triggers, and user-defined functions to be recompiled the next time
that they are run. It does this by dropping the existing plan from the procedure cache forcing a new plan to be
created the next time that the procedure or trigger is run.

References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql-compatibility-level

https://docs.microsoft.com/en-us/sql/t-sql/statements/update-statistics-transact-sql
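The three steps can be combined into a script along these lines (130 is the SQL Server 2016 compatibility level, which automatic statistics updates on memory-optimized tables require; the schema prefix is an assumption):

```sql
-- Step 1: raise the database compatibility level to SQL Server 2016
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 130;
GO
-- Step 2: refresh statistics on the memory-optimized table
UPDATE STATISTICS dbo.Customer;
GO
-- Step 3: force a fresh plan for the stored procedure on its next run
EXEC sp_recompile N'dbo.ManageCustomer';
```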

QUESTION 151
DRAG DROP

You create tables by using the following Transact-SQL statements:

Each customer may have multiple addresses but only one is the primary address.

You must plan a solution that meets the following requirements:

Return both customers and address information.


Return only the primary address of the customer.
Allow updates of all customer information and address details with the exception of the identity columns
and the IsActive column.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of
actions to the answer area and arrange them in the correct order.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Step 1:
Views can be created in SQL Server WITH CHECK OPTION. WITH CHECK OPTION makes sure that all
INSERT and UPDATE statements executed against the view meet the restrictions in the WHERE clause, and
that the modified data in the view remains visible after INSERT and UPDATE statements.

Step 2-3:
INSTEAD OF triggers can be created on a view to make a view updatable.

References:
http://zarez.net/?p=3002

https://docs.microsoft.com/en-us/sql/t-sql/statements/create-view-transact-sql?view=sql-server-2017
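A sketch of the view from Step 1; the table and column names are assumptions about the CREATE TABLE statements in the exhibit:

```sql
CREATE VIEW dbo.vCustomerPrimaryAddress
AS
SELECT c.CustomerID, c.CustomerName,
       a.AddressLine1, a.City
FROM dbo.Customer AS c
INNER JOIN dbo.Address AS a
        ON a.CustomerID = c.CustomerID
WHERE a.IsPrimary = 1          -- return only the primary address
WITH CHECK OPTION;             -- updates must keep rows visible in the view
```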

QUESTION 152
You have a database with multiple tables. You must insert data into the tables by using views.

Which two of the following statements best describe an updatable view?

NOTE: Each correct selection is worth one point.

A. The view must not include the primary key of the table.
B. The view may include GROUP BY or HAVING statements.
C. The view may include a composite field.
D. Each view must reference columns from one table.
E. The view must not include subqueries.

Correct Answer: BD
Section: (none)
Explanation

Explanation/Reference:
Explanation:
B: The columns being modified must not be affected by GROUP BY, HAVING, or DISTINCT clauses.

D: Any modifications, including UPDATE, INSERT, and DELETE statements, must reference columns from
only one base table.

Incorrect Answers:
A: Primary keys are allowed.

C: The columns cannot be derived in any other way, such as through the following:
A computation. The column cannot be computed from an expression that uses other columns.

E: The restrictions apply to any subqueries in the FROM clause of the view, just as they apply to the view itself.

References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-view-transact-sql?view=sql-server-2017
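A minimal illustration of rule D, that modifications through a view must touch columns of only one base table (all names here are hypothetical):

```sql
CREATE VIEW dbo.vActiveCustomers
AS
SELECT CustomerID, CustomerName, Email
FROM dbo.Customer
WHERE IsActive = 1;
GO
-- Updatable: the statement references columns of dbo.Customer only
UPDATE dbo.vActiveCustomers
SET Email = 'info@contoso.com'
WHERE CustomerID = 1;
```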

QUESTION 153
You manage a database that supports an Internet of Things (IoT) solution. The database records metrics from
over 100 million devices every minute. The database requires 99.995% uptime.

The database uses a table named Checkins that is 100 gigabytes (GB) in size. The Checkins table stores
metrics from the devices. The database also has a table named Archive that stores four terabytes (TB) of data.
You use stored procedures for all access to the tables.
You observe that the wait type PAGELATCH_IO causes large amounts of blocking.
You need to resolve the blocking issues while minimizing downtime for the database.

Which two actions should you perform? Each correct answer presents part of the solution.

A. Convert all stored procedures that access the Checkins table to natively compiled procedures.
B. Convert the Checkins table to an In-Memory OLTP table.
C. Convert all tables to clustered columnstore indexes.
D. Convert the Checkins table to a clustered columnstore index.

Correct Answer: AB
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Natively compiled stored procedures are Transact-SQL stored procedures compiled to native code that access
memory-optimized tables. Natively compiled stored procedures allow for efficient execution of the queries and
business logic in the stored procedure.

SQL Server In-Memory OLTP helps improve performance of OLTP applications through efficient, memory-optimized
data access, native compilation of business logic, and lock- and latch-free algorithms. The In-Memory
OLTP feature includes memory-optimized tables and table types, as well as native compilation of
Transact-SQL stored procedures for efficient access to these tables.

References:
https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/natively-compiled-stored-procedures

https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/memory-optimized-tables
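A sketch of the two recommended actions; the column list for Checkins and the bucket count are assumptions:

```sql
-- Answer B: In-Memory OLTP version of the hot table
CREATE TABLE dbo.Checkins (
    CheckinID  bigint IDENTITY NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 16777216),
    DeviceID   int       NOT NULL,
    MetricTime datetime2 NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
-- Answer A: natively compiled procedure that accesses it
CREATE PROCEDURE dbo.InsertCheckin
    @DeviceID int, @MetricTime datetime2
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.Checkins (DeviceID, MetricTime)
    VALUES (@DeviceID, @MetricTime);
END;
```

Memory-optimized tables take page latches out of the picture entirely, which addresses the PAGELATCH_IO waits.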

QUESTION 154
DRAG DROP

You have a database that contains three encrypted store procedures named dbo.Proc1, dbo.Proc2 and
dbo.Proc3. The stored procedures include INSERT, UPDATE, and DELETE statements.

You have the following requirements:

You must run all the stored procedures within the same transaction.
You must automatically start a transaction when stored procedures include DML statements.
You must not automatically start a transaction when stored procedures include DDL statements.

You need to run all three stored procedures.

Which three Transact-SQL segments should you use to develop the solution? To answer, move the
appropriate Transact-SQL segments to the answer area and arrange then in the correct order.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Step 1: SET IMPLICIT_TRANSACTIONS ON BEGIN TRAN


When ON, the system is in implicit transaction mode. This means that if @@TRANCOUNT = 0, any of a
defined set of Transact-SQL statements (such as INSERT, UPDATE, and DELETE) begins a new transaction.
It is equivalent to an unseen BEGIN TRANSACTION being executed first.

Step 2:
Run the procedure within the same transaction

Step 3:
Commit the transaction

References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/set-implicit-transactions-transact-sql
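Putting the pieces together (the procedure names come from the question; the sketch assumes the first procedure issues a DML statement, which implicitly opens the transaction):

```sql
SET IMPLICIT_TRANSACTIONS ON;
GO
-- The first INSERT/UPDATE/DELETE inside dbo.Proc1 starts a
-- transaction implicitly; all three procedures then share it
EXEC dbo.Proc1;
EXEC dbo.Proc2;
EXEC dbo.Proc3;
COMMIT TRANSACTION;
GO
SET IMPLICIT_TRANSACTIONS OFF;
```

Because DDL statements are not run here, no transaction is started for them, satisfying the third requirement.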

QUESTION 155
HOTSPOT

You are designing a data warehouse that will be clustered across four servers. Several of the tables in the
data warehouse contain transient data that you can rebuild as needed.

The schema for the data changes periodically. You must minimize the work required to make schema changes
and deploy those changes across the server farm. Administrators must be able to make the following schema
changes:

Adding columns to tables


Removing columns from tables
Adding tables
Removing tables

Changes to the transient data are done as singleton operations. You must make the data available on all the
servers in the server farm. Data movement between servers must occur in real time or near real time.

You must be able to run SELECT statements against the data from any server at any time.

You need to configure the environment.

Which technologies should you implement? To answer, select the appropriate technologies in the answer
area.

NOTE: Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: Durable in-Memory OLTP


Databases containing memory-optimized tables, with or without native compiled stored procedures, are fully
supported with Always On Availability Groups. There is no difference in the configuration and support for
databases which contain In-Memory OLTP objects as compared to those without.

Box 2: Always on Availability groups


The Always On availability groups feature is a high-availability and disaster-recovery solution that provides an
enterprise-level alternative to database mirroring. Always On availability groups maximizes the availability of a
set of user databases for an enterprise. An availability group supports a failover environment for a discrete set
of user databases, known as availability databases, that fail over together. An availability group supports a set
of read-write primary databases and one to eight sets of corresponding secondary databases. Optionally,
secondary databases can be made available for read-only access and/or some backup operations.

An availability group fails over at the level of an availability replica.

References:
https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/high-availability-support-for-in-memory-oltp-databases

https://docs.microsoft.com/en-us/sql/database-engine/availability-groups/windows/always-on-availability-groups-sql-server

QUESTION 156
HOTSPOT

This is a case study. Case studies are not timed separately. You can use as much exam time as you would
like to complete each case. However, there may be additional case studies and sections on this exam. You
must manage your time to ensure that you are able to complete all questions included on this exam in the time
provided.

To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case
study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot
return to this section.

To start the case study


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information
such as business requirements, existing environment, and problem statements. If the case study has an All
Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Background

You have a database named Sales.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

Tables

You need to modify the database design to meet the following requirements:

Rows in the Orders table must always have a valid value for the ProductID column.
Rows in the Products table must not be deleted if they are part of any rows in the Orders table.
All rows in both tables must be unique.

In the table below, identify the constraint that must be configured for each table.

NOTE: Make only one selection in each column.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

QUESTION 157
DRAG DROP

This is a case study. Case studies are not timed separately. You can use as much exam time as you would
like to complete each case. However, there may be additional case studies and sections on this exam. You
must manage your time to ensure that you are able to complete all questions included on this exam in the time
provided.

To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case
study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot
return to this section.

To start the case study


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information
such as business requirements, existing environment, and problem statements. If the case study has an All
Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Background

You have a database named Sales.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

Tables

You need to begin to modify the table design to adhere to third normal form.

Which column should you remove for each table? To answer, drag the appropriate column names to the
correct locations. Each column name may be used once, more than once, or not at all. You may need to drag
the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: SupplierName
SupplierName should be moved to a separate Supplier table.

Box 2: ProductName
ProductName is already defined in the Products table.

QUESTION 158
HOTSPOT

This is a case study. Case studies are not timed separately. You can use as much exam time as you would
like to complete each case. However, there may be additional case studies and sections on this exam. You
must manage your time to ensure that you are able to complete all questions included on this exam in the time
provided.

To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case
study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot
return to this section.

To start the case study


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information
such as business requirements, existing environment, and problem statements. If the case study has an All
Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Background

You have a database named Sales.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

Tables

The Leads table must include the columns described in the following table.

The data types chosen must consume the least amount of storage possible.

You need to select the appropriate data types for the Leads table.

In the table below, identify the data type that must be used for each table column.

NOTE: Make only one selection in each column.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

LeadID: smallint

IsCustomer: bit
Bit is an integer data type that can take a value of 1, 0, or NULL.
The SQL Server Database Engine optimizes storage of bit columns. If there are 8 or fewer bit columns in a
table, the columns are stored as 1 byte. If there are from 9 up to 16 bit columns, the columns are stored as 2
bytes, and so on.

References:
https://docs.microsoft.com/en-us/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql

https://docs.microsoft.com/en-us/sql/t-sql/data-types/bit-transact-sql
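Under the constraints above, a minimal sketch of the Leads table might look like the following. Only LeadID and IsCustomer are named in the scenario; everything else here, including the table schema, is an assumption for illustration.

```sql
-- Hypothetical sketch of the Leads table. smallint (2 bytes, max 32,767)
-- comfortably holds ~20,000 rows; bit columns are packed 8 to a byte.
CREATE TABLE dbo.Leads (
    LeadID     smallint NOT NULL PRIMARY KEY,
    IsCustomer bit      NOT NULL
);
```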

QUESTION 159
HOTSPOT

This is a case study. Case studies are not timed separately. You can use as much exam time as you would
like to complete each case. However, there may be additional case studies and sections on this exam. You
must manage your time to ensure that you are able to complete all questions included on this exam in the time
provided.

To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case
study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot
return to this section.

To start the case study


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information
such as business requirements, existing environment, and problem statements. If the case study has an All
Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Background

You have a database named Sales.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

Tables

You need to create triggers on the Orders table that meet the following requirements:

Optimize the performance and data integrity of the tables.


Display a custom error if a user attempts to create an order for a customer that does not exist.
In the Customers table, update the value for the LastOrderPlaced field.
Complete all actions as part of the original transaction.

In the table below, identify the trigger types that meet the requirements.

NOTE: Make only one selection in each column. Each correct selection is worth one point.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Display Custom Error: INSTEAD OF INSERT trigger


Before the insert, the trigger checks whether the customer exists.

Update Customer table: AFTER UPDATE trigger

References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/create-trigger-transact-sql
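A minimal sketch of the INSTEAD OF INSERT trigger described above might look like the following. The table and column names (Orders, Customer, CustomerID, OrderID, ProductID) are assumptions based on the scenario, not definitions from it.

```sql
-- Hedged sketch: reject orders for nonexistent customers with a custom
-- error, otherwise perform the original insert in the same transaction.
CREATE TRIGGER dbo.trg_Orders_InsteadOfInsert
ON dbo.Orders
INSTEAD OF INSERT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM inserted AS i
               WHERE NOT EXISTS (SELECT 1 FROM dbo.Customer AS c
                                 WHERE c.CustomerID = i.CustomerID))
    BEGIN
        RAISERROR (N'Cannot create an order for a customer that does not exist.', 16, 1);
        ROLLBACK TRANSACTION;
        RETURN;
    END;

    -- All customers exist: carry out the insert that was intercepted.
    INSERT INTO dbo.Orders (OrderID, CustomerID, ProductID)
    SELECT OrderID, CustomerID, ProductID
    FROM inserted;
END;
```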

QUESTION 160
HOTSPOT

This is a case study. Case studies are not timed separately. You can use as much exam time as you would
like to complete each case. However, there may be additional case studies and sections on this exam. You
must manage your time to ensure that you are able to complete all questions included on this exam in the time
provided.

To answer the questions included in a case study, you will need to reference information that is provided in the
case study. Case studies might contain exhibits and other resources that provide more information about the
scenario that is described in the case study. Each question is independent of the other questions in this case
study.

At the end of this case study, a review screen will appear. This screen allows you to review your answers and to
make changes before you move to the next section of the exam. After you begin a new section, you cannot
return to this section.

To start the case study


To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore
the content of the case study before you answer the questions. Clicking these buttons displays information
such as business requirements, existing environment, and problem statements. If the case study has an All
Information tab, note that the information displayed is identical to the information displayed on the subsequent
tabs. When you are ready to answer a question, click the Question button to return to the question.

Background

You have a database named Sales.

The Customer table includes a column that stores the date for the last order that the customer placed.

You plan to create a table named Leads. The Leads table is expected to contain approximately 20,000
records. Storage requirements for the Leads table must be minimized.

Tables

You need to implement a stored procedure that deletes a discontinued product from the Products table. You
identify the following requirements:

If an open order includes a discontinued product, the records for the product must not be deleted.
The stored procedure must display a custom error message if a product record cannot be deleted. The
message must identify the OrderID for the open order.

What should you do? To answer, select the appropriate Transact-SQL segments in the answer area.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: Try/Catch

A TRY...CATCH construct catches all execution errors that have a severity higher than 10 that do not close the
database connection.

Box 2: ERROR_MESSAGE()
ERROR_MESSAGE() returns the complete text of the error message. The text includes the values supplied for
any substitutable parameters, such as lengths, object names, or times.

References:
https://docs.microsoft.com/en-us/sql/t-sql/language-elements/try-catch-transact-sql
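Putting the two boxes together, the stored procedure might be sketched as follows. Object and column names are assumptions; in practice a foreign key from Orders to Products would raise the error that the CATCH block traps.

```sql
-- Hedged sketch: the DELETE fails if an open order references the product
-- (assuming a FK constraint); the CATCH block builds a custom message that
-- includes the offending OrderID and the text from ERROR_MESSAGE().
CREATE PROCEDURE dbo.DeleteDiscontinuedProduct
    @ProductID int
AS
BEGIN
    BEGIN TRY
        DELETE FROM dbo.Products WHERE ProductID = @ProductID;
    END TRY
    BEGIN CATCH
        DECLARE @OrderID int =
            (SELECT TOP (1) OrderID FROM dbo.Orders WHERE ProductID = @ProductID);
        DECLARE @msg nvarchar(400) =
            CONCAT(N'Product cannot be deleted. It is part of open order ',
                   @OrderID, N'. ', ERROR_MESSAGE());
        RAISERROR (@msg, 16, 1);
    END CATCH;
END;
```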

QUESTION 161
You have a table that stores transactions partitioned by year. Users occasionally experience performance
issues when they access the table. The table is locked exclusively when the records are updated. You need to
prevent exclusive locks on the table and maintain data integrity.
What should you do?

A. Set the LOCK_ESCALATION property to DISABLE
B. Set the DATA_COMPRESSION property to ROW at the partition level
C. Set the LOCK_ESCALATION property to AUTO
D. Set the LOCK_ESCALATION property to TABLE
E. Set the DATA_COMPRESSION property to PAGE at the partition level

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
Explanation:
The default lock escalation mode is called TABLE; it implements SQL Server's lock escalation on all types of
tables, whether partitioned or not.

There are two more lock escalation modes: AUTO and DISABLE.
The AUTO mode enables lock escalation for partitioned tables only for the locked partition. For non-
partitioned tables it works like TABLE.
The DISABLE mode removes the lock escalation capability for the table and that is important when
concurrency issues are more important than memory needs for specific tables.

References:
https://www.mssqltips.com/sqlservertip/4359/altering-lock-escalation-for-sql-server-tables/
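The AUTO mode described above is set per table. As a sketch (the table name is a placeholder for the partitioned transactions table in the question):

```sql
-- With AUTO, escalation on a partitioned table stops at the partition
-- (heap or B-tree) level rather than locking the whole table.
ALTER TABLE dbo.Transactions
SET (LOCK_ESCALATION = AUTO);
```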

QUESTION 162
HOTSPOT

You have a database named Database1.

Users report that they experience deadlock issues. You run the sp_readerrorlog stored procedure. You view
the output from the Process List section as shown in the Process List exhibit. (Click the Process List tab.)

You view the contents of the Resource List section as shown in the Resource List exhibit. (Click the Resource
List tab.)

You view deadlock information as shown in the Deadlock List exhibit. (Click the Deadlock List tab.)

You need to identify the causes of the deadlock.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Hot Area:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box 1: No

The following three lock types are used for row- and page-level locking:

Shared (S)
Exclusive (X)
Update (U)

An exclusive lock reserves a page or row, but not a whole table, for the exclusive use of a single transaction.

Note: Read committed specifies that statements cannot read data that has been modified but not committed by
other transactions.

Box 2: No
Taskpriority is 0, which is the default priority. The transaction could be chosen as the victim if other
transactions have a priority higher than 0.

Box 3: Yes


References:
https://www.sqlshack.com/what-is-a-sql-server-deadlock/

https://logicalread.com/sql-server-lock-modes-mc03/#.XULCdm8zaUk

QUESTION 163
Note: This question is part of a series of questions that use the same or similar answer choices. An
Answer choice may be correct for more than one question in the series. Each question independent of
the other questions in this series. Information and details provided in a question apply only to that
question.

You are a database developer for a company. The company has a server that has multiple physical disks. The
disks are not part of a RAID array. The server hosts three Microsoft SQL Server instances. There are many
SQL jobs that run during off-peak hours.

You must monitor and optimize the SQL Server to maximize throughput, response time, and overall SQL
performance.

You need to examine delays in executed threads, including errors with queries and batches.

What should you do?

A. Create a sys.dm_os_waiting_tasks query.


B. Create a sys.dm_exec_sessions query.
C. Create a Performance Monitor Data Collector Set.
D. Create a sys.dm_os_memory_objects query.
E. Create a sp_configure 'max server memory' query.
F. Create a SQL Profiler trace.
G. Create a sys.dm_os_wait_stats query.
H. Create an Extended Event.

Correct Answer: G
Section: (none)
Explanation

Explanation/Reference:
Explanation:
sys.dm_os_wait_stats returns information about all the waits encountered by threads that executed. You can
use this aggregated view to diagnose performance issues with SQL Server and also with specific queries and
batches.

Incorrect Answers:
A: sys.dm_os_waiting_tasks returns information about the wait queue of tasks that are waiting on some
resource.

References:
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-os-
wait-stats-transact-sql
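A typical diagnostic query against this DMV might look like the following sketch. The view aggregates waits since the instance last restarted (or since the statistics were cleared).

```sql
-- Top wait types by total wait time; signal_wait_time_ms indicates time
-- spent waiting for CPU after the resource became available.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```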

QUESTION 164
You are designing a solution for a company that operates retail stores. Each store has a database that tracks
sales transactions. You create a summary table in the database at the corporate office. You plan to use the
table to record the quantity of each product sold at each store on each day. Managers will use this data to
identify reorder levels for products.

Every evening, stores must transmit sales data to the corporate office. The data must be inserted into the
summary table that includes the StoreID, ProductID, Qtysold, Totprodsales, and Datesold columns.

You need to prevent duplicate rows in the summary table. Each row must uniquely identify the store that sold
the product and the total amount sold for that store on a specific date.

What should you include in your solution?

A. Create a primary key constraint.


B. Create a rule and bind it to the StoreId column.
C. Create a rule and bind it to the ProductID column.
D. Create a check constraint.

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Reference:
https://docs.microsoft.com/en-us/sql/relational-databases/tables/unique-constraints-and-check-constraints?
view=sql-server-2017
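A composite primary key over the store, product, and date columns enforces the uniqueness requirement. The table name below is an assumption; the column names come from the question.

```sql
-- Hedged sketch: the composite PK rejects duplicate (store, product, date)
-- rows, so each row uniquely identifies one store's sales of one product
-- on one day.
CREATE TABLE dbo.SalesSummary (
    StoreID      int   NOT NULL,
    ProductID    int   NOT NULL,
    Qtysold      int   NOT NULL,
    Totprodsales money NOT NULL,
    Datesold     date  NOT NULL,
    CONSTRAINT PK_SalesSummary PRIMARY KEY (StoreID, ProductID, Datesold)
);
```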

QUESTION 165
You have a data warehouse fact table that has a clustered columnstore index.

You have multiple CSV files that contain a total of 3 million rows of data.

You need to upload the data to the fact table. The solution must avoid the delta rowgroup when you import the
data.

Which solution will achieve the goal in the least amount of time?

A. Load the source data to a staging table. Load the data to the fact table by using the INSERT...SELECT
statement and specify the TABLOCK option on the staging table
B. Create a Microsoft SQL Server Integration Services (SSIS) package. Use multiple data flow tasks to load
the data in parallel.
C. Load the source data to the fact table by running bcp.exe and specifying the -h "TABLOCK" option
D. Load the source data to the fact table by using the BULK INSERT statement and specify the TABLOCK
option

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
If you are loading data only to stage it before running more transformations, loading it into a heap table will
be much faster than loading it into a clustered columnstore table. In addition, loading data into a temporary
table will also be much faster than loading it into a table in permanent storage.

A common pattern for data load is to load the data into a staging table, do some transformation and then load it
into the target table using the following command

INSERT INTO <columnstore index>
SELECT <list of columns> FROM <Staging Table>
This command loads the data into the columnstore index in similar ways to BCP or Bulk Insert but in a single
batch. If the number of rows in the staging table < 102400, the rows are loaded into a delta rowgroup
otherwise the rows are directly loaded into compressed rowgroup. One key limitation was that this INSERT
operation was single threaded. To load data in parallel, you could create multiple staging table or issue
INSERT/SELECT with non-overlapping ranges of rows from the staging table. This limitation goes away with
SQL Server 2016 (13.x). The command below loads the data from staging table in parallel but you will need to
specify TABLOCK.

References:
https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-data-loading-
guidance?view=sql-server-2017#plan-bulk-load-sizes-to-minimize-delta-rowgroups
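The parallel load described above can be sketched as follows. FactSales and StagingSales are hypothetical names standing in for the fact and staging tables.

```sql
-- With SQL Server 2016+ and TABLOCK, this INSERT...SELECT runs in
-- parallel; batches of 102,400+ rows compress directly into rowgroups,
-- bypassing the delta rowgroup.
INSERT INTO dbo.FactSales WITH (TABLOCK)
SELECT StoreID, ProductID, Qtysold, Totprodsales, Datesold
FROM dbo.StagingSales;
```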

QUESTION 166
DRAG DROP

You are a database developer for a company that delivers produce and other refrigerated goods to grocery
stores. You capture the food storage temperature for delivery vehicles by using Internet of Things (IoT)
devices. You store the temperature data in a database table named VehicleTemperatures. The
ChillerSensorNumber column stores the identifier for the IoT devices.

You need to create an indexed view that meets the following requirements:

Persists the data on disk to reduce the amount of I/O


Provides the number of ChillerSensorNumber items
Creates only a set of summary rows

How should you complete the view definition? To answer, drag the appropriate Transact-SQL segments to the
correct locations. Each Transact-SQL segment may be used once, more than once, or not at all. You may
need to drag the split bar between panes or scroll to view content.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Box1: WITH SCHEMABINDING


Persists the data on disk to reduce the amount of I/O.

Box 2: COUNT_BIG(ChillerSensorNumber)
Provides the number of ChillerSensorNumber items

The function COUNT_BIG returns the number of items found in a group. COUNT_BIG operates like the
COUNT function. These functions differ only in the data types of their return values. COUNT_BIG always
returns a bigint data type value. COUNT always returns an int data type value.

Box 3: GROUP BY ChillerSensorNumber


Creates only a set of summary rows

References:
https://docs.microsoft.com/en-us/sql/t-sql/functions/count-big-transact-sql
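Assembling the three boxes, the view might be defined as follows. The view and index names are assumptions; note that an indexed view using GROUP BY must contain COUNT_BIG(*), and it is the unique clustered index that persists the summary rows to disk.

```sql
-- Hedged sketch of the indexed view over VehicleTemperatures.
CREATE VIEW dbo.vw_ChillerSummary
WITH SCHEMABINDING
AS
SELECT ChillerSensorNumber,
       COUNT_BIG(*) AS SensorReadings   -- number of rows per sensor
FROM dbo.VehicleTemperatures
GROUP BY ChillerSensorNumber;
GO
-- Materializes the view's result set on disk, reducing I/O at query time.
CREATE UNIQUE CLUSTERED INDEX IX_vw_ChillerSummary
ON dbo.vw_ChillerSummary (ChillerSensorNumber);
```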

QUESTION 167
You have a Microsoft Azure SQL Database. You enable Query Store for the database and configure the store
to use the following settings:

SIZE_BASED_CLEANUP_MODE = OFF
STALE_QUERY_THRESHOLD_DAYS = 60
MAX_STORAGE_SIZE_MB = 100
QUERY_CAPTURE_MODE = ALL

You use Azure Query Performance Insight to review queries. You observe that new queries are not displayed
after 15 days and that the Query Store is set to read-only mode.

If the Query Store runs low on data space, the store must prioritize queries that run regularly or queries that
consume significant resources.

You must set the Query Store to read_write mode and determine the performance of queries from the past 60
days.

Which three actions should you perform? Each correct step presents part of the solution.

NOTE: Each correct selection is worth one point.

A. Set the value of the CLEANUP_POLICY setting to (STALE_QUERY_THRESHOLD_DAYS = 75)


B. Set the value of the QUERY_CAPTURE_MODE setting to AUTO
C. Increase the value for the MAX_STORAGE_SIZE_MB setting
D. Set the value of the SIZE_BASED_CLEANUP_MODE setting to AUTO
E. In the Azure portal, navigate to Query Performance Insight. Use the Custom tab to select a period of 2
months.

Correct Answer: BCD


Section: (none)
Explanation

Explanation/Reference:
Explanation:
B: Capture mode:
All – Captures all queries. This is the default option.
Auto – Infrequent queries and queries with insignificant cost are ignored. (Ad hoc recommended)
None – Query Store stops capturing new queries.

C: Max Size (MB): Specifies the limit for the data space that Query Store can consume within the database.
This is the most important setting that directly affects operation mode of the Query Store.

While Query Store collects queries, execution plans and statistics, its size in the database grows until this limit
is reached. When that happens, Query Store automatically changes the operation mode to read-only and
stops collecting new data. You should monitor this closely to make sure you have sized the store appropriately
to contain the full history you’d like to retain.

D: Size Based Cleanup Mode: Specifies whether automatic data cleanup will take place when Query Store
data size approaches the limit.

It is strongly recommended to activate size-based cleanup to makes sure that Query Store always runs in
read-write mode and collects the latest data.

References:
https://docs.microsoft.com/en-us/sql/relational-databases/performance/best-practice-with-the-query-store
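The three selected actions can be applied in one statement. This is a sketch, not the exam's exact syntax; the new storage size (1024 MB) is an assumed value standing in for "increase from 100".

```sql
-- Hedged sketch: restore read_write mode, enlarge the store, enable
-- size-based cleanup, and prioritize regularly run / expensive queries.
ALTER DATABASE CURRENT
SET QUERY_STORE (
    OPERATION_MODE = READ_WRITE,
    MAX_STORAGE_SIZE_MB = 1024,
    SIZE_BASED_CLEANUP_MODE = AUTO,
    QUERY_CAPTURE_MODE = AUTO
);
```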

QUESTION 168
DRAG DROP

You are tuning a database named MyDatabase.

You need to create an Extended Events session to capture execution plans for queries that run for at least 10
minutes. The following requirements must be met:

The target must write complete buffers to disk asynchronously.


The system must retain a maximum of 10 files
Each session must allocate no more than 10 megabytes (MB) of memory for event buffering.

Which four Transact-SQL segments should you use to develop the solution? To answer, move the appropriate
Transact-SQL segments from the list of Transact-SQL segments to the answer area and arrange them in the
correct order.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Explanation:

Step 1: CREATE EVENT SESSION SubOptimalExecPlans ON SERVER

Step 2: ADD EVENT sqlserver.query_pre_execution_showplan

Incorrect Answers:
query_post_execution_showplan

Step 3: ADD TARGET package0.asynchronous_file_target (... max_rollover_files=10)


The target must write complete buffers to disk asynchronously.
The system must retain a maximum of 10 files

Step 4: WITH (MAX_MEMORY = 10 MB) ;


Each session must allocate no more than 10 megabytes (MB) of memory for event buffering.

Example:
CREATE EVENT SESSION test_session
ON SERVER
ADD EVENT sqlos.async_io_requested,
ADD EVENT sqlserver.lock_acquired
ADD TARGET package0.etw_classic_sync_target
(SET default_etw_session_logfile_path = N'C:\demo\traces\sqletw.etl' )
WITH (MAX_MEMORY=4MB, MAX_EVENT_SIZE=4MB);

References:
https://www.sqlservercentral.com/steps/stairway-to-sql-server-extended-events-level-4-extended-events-
engine-essential-concepts

QUESTION 169
Database users report that SELECT statements take a long time to return results. You run the following
Transact-SQL statement:

You need to create one nonclustered covering index that contains all of the columns in the above table. You
must minimize index key size.

Which Transact-SQL statement should you run?

A. CREATE NONCLUSTERED INDEX IX_User ON Users (CountryCode, UserName);


B. CREATE NONCLUSTERED INDEX IX_User ON Users (CountryCode, UserStatus) INCLUDE
(UserName);
C. CREATE NONCLUSTERED INDEX IX_User ON Users (CountryCode, UserStatus, UserName);
D. CREATE NONCLUSTERED INDEX IX_User ON Users (UserStatus, CountryCode) INCLUDE
(UserName);

Correct Answer: D
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Use UserStatus as the first column in the index, as it is an equality column in the query predicate.

Incorrect Answers:
A: UserStatus is not included.

References:
https://docs.microsoft.com/en-us/sql/relational-databases/indexes/create-indexes-with-included-columns

QUESTION 170
You are experiencing performance issues with the database server.

You need to evaluate schema locking issues, plan cache memory pressure points, and backup I/O problems.

What should you create?

A. a System Monitor report


B. a sys.dm_tran_database_transaction dynamic management view query
C. an Extended Events session that uses Query Editor
D. a Microsoft SQL Profiler trace

Correct Answer: C
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Extended Events are considered the best approach by SQL Server purists. You can configure an Extended
Events session to find locking issues in SQL Server.

Incorrect Answers:
D: SQL Trace and SQL Server Profiler are deprecated. The Microsoft.SqlServer.Management.Trace
namespace that contains the Microsoft SQL Server Trace and Replay objects are also deprecated.

This feature is in maintenance mode and may be removed in a future version of Microsoft SQL Server. Avoid
using this feature in new development work, and plan to modify applications that currently use this feature.

Use Extended Events instead.

References:
https://www.mssqltips.com/sqlservertip/5752/configuring-extended-events-to-find-locking-issues-in-sql-server/

QUESTION 171
You have multiple queries that take a long time to complete.

You need to identify the cause by using detailed information about the Transact-SQL statements in the
queries. The Transact-SQL statements must not run as part of the analysis.

Which Transact-SQL statement should you run?

A. SET STATISTICS IO ON
B. SET SHOWPLAN_TEXT ON
C. SET STATISTICS XML ON
D. SET STATISTICS PROFILE ON

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
SET SHOWPLAN_TEXT ON causes Microsoft SQL Server not to execute Transact-SQL statements. Instead,
SQL Server returns detailed information about how the statements are executed.

Incorrect Answers:
C: SET STATISTICS XML ON causes Microsoft SQL Server to execute Transact-SQL statements and
generate detailed information about how the statements were executed in the form of a well-defined XML
document.

D: When STATISTICS PROFILE is ON, each executed query returns its regular result set, followed by an
additional result set that shows a profile of the query execution.

The additional result set contains the SHOWPLAN_ALL columns for the query and these additional columns.

Column name, Description


Rows: Actual number of rows produced by each operator
Executes: Number of times the operator has been executed

References:
https://docs.microsoft.com/en-us/sql/t-sql/statements/set-showplan-text-transact-sql
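Usage follows a simple on/off pattern; the statement between the two settings is compiled but never executed. The table name below is illustrative.

```sql
SET SHOWPLAN_TEXT ON;
GO
-- Not executed: SQL Server returns the estimated plan text instead.
SELECT OrderID, ProductID
FROM dbo.Orders
WHERE OrderID = 1;
GO
SET SHOWPLAN_TEXT OFF;
GO
```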

QUESTION 172
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You need to identify the indexes that are not being used so that you can remove them from the database.

Solution: You run the sys.dm_db_index_operational_stats dynamic management view.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:

QUESTION 173
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You need to identify the indexes that are not being used so that you can remove them from the database.

Solution: You run the sys.dm_db_index_usage_stats dynamic management view.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
References:
https://www.mssqltips.com/sqlservertutorial/256/discovering-unused-indexes/

https://www.sqlshack.com/how-to-identify-and-monitor-unused-indexes-in-sql-server/
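A query along these lines, sketched under the assumption that the instance has been running long enough for the statistics to be meaningful, surfaces the unused indexes:

```sql
-- Indexes with no seeks, scans, or lookups since the last restart are
-- removal candidates; user_updates shows their ongoing maintenance cost.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.database_id = DB_ID()
  AND s.user_seeks + s.user_scans + s.user_lookups = 0;
```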

QUESTION 174
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You need to identify the indexes that are not being used so that you can remove them from the database.

Solution: You run the sys.dm_db_index_physical_stats dynamic management view.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:

QUESTION 175
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

You have a database named dbReporting. Users run a large number of read-only ad hoc queries against
the database. The application and all ad hoc queries use default database transaction isolation levels. You set
the value of the READ_COMMITTED_SNAPSHOT database option to ON.

You have an application that updates 10 tables sequentially and modifies a large volume of records in a single
transaction. The updates are isolated from each other.

Users report an error which indicates that the version store is full.

You need to reduce the number of occurrences of the error.

Solution: You increase the maximum database size for the dbReporting database.

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:

QUESTION 176
You are creating the following two stored procedures:

A natively-compiled stored procedure
An interpreted stored procedure that accesses both disk-based and memory-optimized tables

Both stored procedures run within transactions.

You need to ensure that cross-container transactions are possible.

Which setting or option should you use?

A. the BEGIN TRANSACTION and COMMIT TRANSACTION or ROLLBACK TRANSACTION statements for the connection
B. the SERIALIZABLE table hint on memory-optimized tables
C. the CONTAINS MEMORY_OPTIMIZED_DATA option for the file group
D. the SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=OFF option for the database

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Your Transact-SQL contains the code BEGIN TRANSACTION, along with an eventual COMMIT
TRANSACTION. Two or more statements can be corralled into the same transaction.
In explicit mode, you must either use the database option
MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT or code a table hint about the transaction isolation level
on the memory-optimized table in the FROM clause.

Incorrect Answers:
B: Accessing memory optimized tables using the READ COMMITTED isolation level is supported only for
autocommit transactions. It is not supported for explicit or implicit transactions.

Reference:
https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/transactions-with-memory-optimized-
tables?view=sql-server-2017
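A minimal sketch of a cross-container transaction follows (the table names dbo.DiskTable and dbo.InMemTable are hypothetical). The explicit transaction spans both a disk-based and a memory-optimized table, and the memory-optimized table gets an isolation hint because READ COMMITTED is not supported for it in explicit transactions:

```sql
BEGIN TRANSACTION;
    -- Disk-based table: runs under the session's default isolation level.
    UPDATE dbo.DiskTable
    SET    Status = 'Processed'
    WHERE  Id = 42;

    -- Memory-optimized table: an explicit (cross-container) transaction
    -- requires an isolation level hint such as SNAPSHOT on the table.
    UPDATE dbo.InMemTable WITH (SNAPSHOT)
    SET    Status = 'Processed'
    WHERE  Id = 42;
COMMIT TRANSACTION;
```

Alternatively, setting the database option MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT to ON removes the need for per-table hints.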

QUESTION 177
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You need to collect data from the following two sources:

The performance counters of the operating system
Microsoft SQL Server events

You must analyze the two datasets side by side by using a single tool.

Solution: You use dynamic management views and Data Collector Sets (DCs) in Performance Monitor to
collect performance data. You use SQL Server Management Studio (SSMS) to analyze the data.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Reference:
https://www.sqlshack.com/dba-guide-sql-server-performance-troubleshooting-part-2-monitoring-utilities/

QUESTION 178
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You need to collect data from the following two sources:

The performance counters of the operating system
Microsoft SQL Server events

You must analyze the two datasets side by side by using a single tool.

Solution: You use SQL Server Profiler and SQL Server Extended Events to collect performance data. You use
SQL Server Profiler to analyze the data.

Does this meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Reference:
https://www.sqlshack.com/dba-guide-sql-server-performance-troubleshooting-part-2-monitoring-utilities/

QUESTION 179
Note: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution that might meet the stated goals. Some question sets might
have more than one correct solution, while others might not have a correct solution.

After you answer a question in this section, you will NOT be able to return to it. As a result, these
questions will not appear in the review screen.

You need to collect data from the following two sources:

The performance counters of the operating system
Microsoft SQL Server events

You must analyze the two datasets side by side by using a single tool.

Solution: You use dynamic management views and SQL Server Profiler to collect performance data. You use
SQL Server Management Studio (SSMS) to analyze the data.

Does this meet the goal?

A. Yes
B. No

Correct Answer: A
Section: (none)
Explanation

Explanation/Reference:
Reference:
https://www.sqlshack.com/dba-guide-sql-server-performance-troubleshooting-part-2-monitoring-utilities/

QUESTION 180
NOTE: This question is part of a series of questions that present the same scenario. Each question in
the series contains a unique solution. Determine whether the solution meets the stated goals.

Your company has employees in different regions around the world.

You need to create a database table that stores the following employee attendance information:

Employee ID
Date and time employee checked in to work
Date and time employee checked out of work

Date and time information must be time zone aware and must not store fractional seconds.

Solution: You run the following Transact-SQL statement:

Does the solution meet the goal?

A. Yes
B. No

Correct Answer: B
Section: (none)
Explanation

Explanation/Reference:
Explanation:
Datetime2 stores fractional seconds and is not time zone aware, so it does not meet the requirements.
Datetime2 defines a date that is combined with a time of day based on a 24-hour clock. It can be
considered an extension of the existing datetime type that has a larger date range, a larger default
fractional precision, and optional user-specified precision.

Reference:
https://docs.microsoft.com/en-us/sql/t-sql/data-types/datetime2-transact-sql?view=sql-server-2017

https://msdn.microsoft.com/en-us/library/bb677335.aspx
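Although the Transact-SQL statement from the question is not reproduced here, a schema that would meet the stated requirements (column names assumed from the question text) uses datetimeoffset(0), which is time zone aware and, with a precision of 0, stores no fractional seconds:

```sql
CREATE TABLE dbo.EmployeeAttendance
(
    EmployeeID   int               NOT NULL,
    CheckInTime  datetimeoffset(0) NOT NULL,  -- time zone aware, whole seconds only
    CheckOutTime datetimeoffset(0) NULL
);
```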

QUESTION 181
DRAG DROP

Your database contains a table named Invoices with a unique identifier column named InvoiceID.

You have a batch update process that reads data from an input file and updates the table.

You create a clustered columnstore index on the table and must update the configuration to meet the following
requirements:

The table must remain online.
Perform analytic queries during business hours.
Each row must have a unique key.

You need to configure the index to optimize performance.

Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of
actions to the answer area and arrange them in the correct order.

Select and Place:

Correct Answer:

Section: (none)
Explanation

Explanation/Reference:
Reference:
https://solutioncenter.apexsql.com/why-when-and-how-to-rebuild-and-reorganize-sql-server-indexes/

https://docs.microsoft.com/en-us/sql/database-engine/using-nonclustered-columnstore-indexes?view=sql-
server-2014
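As an illustrative sketch only (the answer image is not reproduced here, and the index names are hypothetical), a configuration that satisfies these requirements pairs a clustered columnstore index for the analytic queries with a unique nonclustered B-tree index to enforce the key, and uses REORGANIZE, which is always an online operation, for maintenance:

```sql
-- Clustered columnstore index for analytic query performance.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Invoices
    ON dbo.Invoices;

-- Unique nonclustered index to guarantee each row has a unique key.
CREATE UNIQUE NONCLUSTERED INDEX IX_Invoices_InvoiceID
    ON dbo.Invoices (InvoiceID);

-- REORGANIZE runs online, so the table remains available.
ALTER INDEX CCI_Invoices ON dbo.Invoices REORGANIZE;
```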
