Question 1 Marked as review Unattempted

Domain: Design Resilient Architectures

You are working with a global pharma firm that has its corporate head office in Washington and regional offices in Chicago and Paris. They have a two-tier intranet website, with web servers deployed in a VPC in the us-east-1 region and database servers deployed on-premises at the Washington office. The corporate head office has a Direct Connect link to the VPC in the us-east-1 region.
The Washington corporate office is connected to Chicago and Paris via WAN links from a global service provider, while each of these offices also has a separate Internet link from a local service provider. Recently they have been facing WAN link outages, which isolate the regional offices from the corporate head office. They are looking for a cost-effective backup solution that can be set up quickly without any additional devices or links. What is the most suitable connectivity option?

A. Using the existing Internet connections at Washington, Chicago and Paris, set up VPN connections to the VGW in us-east-1, advertising prefixes via BGP. The BGP ASN should be unique at each location. The VGW in us-east-1 will re-advertise these prefixes to the Washington office.

B. Using the existing Internet connections at Chicago and Paris, set up VPN connections to the VGW in us-east-1, advertising prefixes via BGP. The BGP ASN should be unique at each location. The VGW in us-east-1 will re-advertise these prefixes to the Washington office.

C. Using the existing Internet connections at Chicago and Paris, set up VPN connections to the VGWs in us-west-1 and eu-west-3, advertising prefixes via BGP. The BGP ASN should be unique at each location. The VGWs in us-west-1 and eu-west-3 will advertise these prefixes to the VGW in us-east-1, which has connectivity to the Washington corporate head office.

D. Using the existing Internet connection at Chicago, set up a VPN connection to the VGW in us-east-1, while for Paris, set up a VPN connection to the VGW in eu-west-3, advertising prefixes via BGP. The BGP ASN should be unique at each location. The VGW in eu-west-3 will advertise these prefixes to the VGW in us-east-1, which has connectivity to the Washington corporate head office.

Explanation:

Correct Answer – B

Using AWS VPN CloudHub, a VGW can be used to connect multiple locations. Each location, using its existing Internet link and customer gateway router, sets up a VPN connection to the VGW. BGP peering is configured between each customer gateway router and the VGW, using a unique BGP ASN at each location. The VGW receives prefixes from each location and re-advertises them to the other peers. Direct Connect links terminating on this VGW will also have connectivity with these locations via the VGW.

Option A is incorrect as the Washington corporate office already has a Direct Connect link to the VGW in us-east-1, so no additional VPN needs to be set up from the Washington office.

Option C is incorrect as the Washington head office already has Direct Connect connectivity to the VGW in us-east-1, so the offices in Chicago and Paris should set up VPNs to the VGW in the us-east-1 region so that connectivity can be established between these three offices.

Option D is incorrect as with this connectivity solution, connectivity will be established between the Chicago and Washington offices but not with Paris, since Paris connects to a different VGW.
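As a rough illustration of this setup, the sketch below uses boto3 to register a customer gateway for each regional office with a unique BGP ASN and attach a dynamic VPN connection to the existing us-east-1 VGW. The ASNs, public IPs and the VGW ID are placeholder values, not taken from the question.

```python
import boto3

# Assumed/placeholder values: real public IPs, ASNs and the VGW ID
# would come from your own environment.
OFFICES = {
    "chicago": {"public_ip": "198.51.100.10", "bgp_asn": 65010},
    "paris":   {"public_ip": "203.0.113.20",  "bgp_asn": 65020},
}
VGW_ID = "vgw-0123456789abcdef0"  # existing VGW in us-east-1 (hypothetical ID)

ec2 = boto3.client("ec2", region_name="us-east-1")

for name, office in OFFICES.items():
    # Register the on-premises router; each office uses a unique BGP ASN,
    # which is what lets CloudHub re-advertise prefixes between spokes.
    cgw = ec2.create_customer_gateway(
        BgpAsn=office["bgp_asn"],
        PublicIp=office["public_ip"],
        Type="ipsec.1",
    )["CustomerGateway"]

    # Dynamic (BGP) VPN connection terminating on the shared VGW.
    vpn = ec2.create_vpn_connection(
        CustomerGatewayId=cgw["CustomerGatewayId"],
        Type="ipsec.1",
        VpnGatewayId=VGW_ID,
        Options={"StaticRoutesOnly": False},
    )["VpnConnection"]
    print(name, vpn["VpnConnectionId"])
```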

For more information on using AWS VPN CloudHub, refer to the following URL:

https://docs.aws.amazon.com/vpn/latest/s2svpn/VPN_CloudHub.html



Question 2 Correct

Domain: Design Resilient Architectures

An application currently consists of an EC2 Instance hosting a Web application. The Web
application connects to an AWS RDS database. Which of the following can be used to ensure that
the database layer is highly available?

A. Create another EC2 Instance in another Availability Zone and host a replica of the database.

B. Create another EC2 Instance in another Availability Zone and host a replica of the Webserver.

C. Enable Read Replica for the AWS RDS database.

D. Enable Multi-AZ for the AWS RDS database.
Explanation:

Answer – D

AWS Documentation mentions the following:

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB)
Instances, making them a natural fit for production database workloads. When you provision a
Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and
synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each
AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly
reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the
standby (or to a read replica in the case of Amazon Aurora), so that you can resume database
operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.
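For illustration, a minimal boto3 sketch that provisions (or converts) an RDS MySQL instance with Multi-AZ enabled might look like the following; the identifiers, sizes and credentials are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a new MySQL instance with a synchronous standby in another AZ.
rds.create_db_instance(
    DBInstanceIdentifier="webapp-db",        # hypothetical name
    Engine="mysql",
    DBInstanceClass="db.m5.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    MultiAZ=True,                            # the setting this question is about
)

# An existing instance can also be converted in place:
rds.modify_db_instance(
    DBInstanceIdentifier="webapp-db",
    MultiAZ=True,
    ApplyImmediately=True,
)
```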

For more information on AWS RDS Multi-AZ, please visit the following URL:

https://aws.amazon.com/rds/details/multi-az/



Question 3 Incorrect

Domain: Define Operationally-Excellent Architectures

An application currently allows users to upload files to an S3 bucket. You want to ensure that the
file name for each uploaded file is stored in a DynamoDB table. How can this be achieved? Choose
2 answers from the options given below. Each answer forms part of the solution.

A. Create an AWS Lambda function to insert the required entry for each uploaded file.

B. Use AWS CloudWatch to probe for any S3 event.

C. Add an event in S3 with notification sent to Lambda.

D. Add the CloudWatch event to the DynamoDB table streams section.
Explanation:

Answer – A and C

You can create a Lambda function containing the code to process the file, and add the name of the
file to the DynamoDB table.

You can then use an Event Notification from the S3 bucket to invoke the Lambda function
whenever the file is uploaded.
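A minimal sketch of such a Lambda handler is shown below; the table name and attribute names are assumptions for illustration, not part of the question.

```python
import urllib.parse

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("UploadedFiles")  # hypothetical table name


def handler(event, context):
    # S3 event notifications deliver one or more records per invocation.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (e.g. spaces as '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        table.put_item(Item={"FileName": key, "Bucket": bucket})
```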


For more information on Amazon S3 Event Notifications, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html



Question 4 Incorrect

Domain: Define Performant Architectures

 
A company is migrating an on-premises MySQL database to AWS.


Following are the key requirements:


a) Ability to support an initial size of 5TB
b) Ability to allow the database to double in size
c) Replication lag to be kept under 100 milliseconds
Which Amazon RDS engine meets these requirements?
 

A. MySQL

B. Microsoft SQL Server

C. Oracle

D. Amazon Aurora
Explanation:

Answer – D

The AWS Documentation explains how AWS Aurora fulfills the mentioned requirements:

Amazon Aurora (Aurora) is a fully managed, MySQL- and PostgreSQL-compatible, relational database engine. It combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications.

All Aurora Replicas return the same data for query results with minimal replica lag, usually much less than 100 milliseconds after the primary instance has written an update.

For more information on AWS Aurora, please visit the following URL:

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Aurora.Overview.html



Question 5 Marked as review Incorrect


Domain: Design Resilient Architectures

You are working for a global software firm with offices on various continents. The pre-sales team needs to provide a new application demo to a prospective customer. For this, they urgently need a separate temporary connection between the three regional offices at Sydney, London and Tokyo and a demo VPC in the us-west-1 region. There should also be connectivity between these offices for data synchronisation of the new application.
You are planning to set up a VPN connection from these offices to the VGW at us-west-1. You have arranged for an Internet link and router at each regional office, and the VPN parameter list. What other factors need to be considered to meet this connectivity solution? (Select TWO.)

A. The VGW at us-west-1 should be enabled to advertise IP prefixes of each regional office to the other regional offices.

B. A non-overlapping IP address pool should be configured at each regional office.

C. Each router should have a BGP peering with the other routers at each regional office over the VPN connection.

D. The BGP ASN should be unique at these regional offices.

E. Each office should set up a VPN connection to the VGW only in that specific region instead of to the VGW at us-west-1.

Explanation:

Correct Answer – B, D

AWS VPN CloudHub provides connectivity between spoke locations over VPN connections. In this case the VGW acts as a hub and re-advertises prefixes received from one regional office to the other regional offices. For this connectivity to be established, each regional site should have non-overlapping IP prefixes, and the BGP ASN should be unique at each site. If the BGP ASNs are not unique, additional ALLOWAS-IN configuration will be required.

Option A is incorrect as the VGW acts as a hub by default and no additional configuration needs to be done at the VGW end.

Option C is incorrect as each router needs a BGP peering only with the VGW, not with the routers in the other locations.

Option E is incorrect as a regional office can set up a VPN connection to a VGW in a different region as well.
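Since the two prerequisites are non-overlapping address pools and unique ASNs, a quick pre-flight check like the sketch below can validate a site plan before any VPN is configured. The CIDRs and ASNs are illustrative placeholders.

```python
from itertools import combinations
from ipaddress import ip_network

# Hypothetical per-site plan: on-premises CIDR and BGP ASN.
sites = {
    "sydney": ("10.1.0.0/16", 65001),
    "london": ("10.2.0.0/16", 65002),
    "tokyo":  ("10.3.0.0/16", 65003),
}

# CloudHub requires unique ASNs across sites.
asns = [asn for _, asn in sites.values()]
assert len(asns) == len(set(asns)), "BGP ASNs must be unique per site"

# ...and non-overlapping IP prefixes, or routes will collide at the hub.
for (a, (cidr_a, _)), (b, (cidr_b, _)) in combinations(sites.items(), 2):
    if ip_network(cidr_a).overlaps(ip_network(cidr_b)):
        raise ValueError(f"{a} and {b} have overlapping prefixes")

print("Site plan OK for VPN CloudHub")
```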

For more information on using AWS VPN CloudHub, refer to the following URL:

https://docs.aws.amazon.com/aws-technical-content/latest/aws-vpc-connectivity-options/aws-vpn-cloudhub-network-to-amazon.html



Question 6 Incorrect

Domain: Design Cost-Optimized Architectures

An application needs to have a database hosted in AWS. The database will be hosted on an EC2
Instance. The application's expected performance is 2 IOPS/GiB, with the ability to burst to 2,000
IOPS for extended periods of time. What is the MOST suitable storage type that could be used by
the underlying EC2 instance hosting the database?

A. Amazon EBS Provisioned IOPS SSD

B. Amazon EBS Throughput Optimized HDD

C. Amazon EBS General Purpose SSD

D. Amazon EFS

Explanation:

Answer - C

AWS recommends that for small workloads it is better to use General Purpose SSD (gp2) volumes than Throughput Optimized HDD (st1) volumes.

Please refer to the link below:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html

As per the AWS docs:

Throughput Optimized HDD (st1) Volumes

Throughput Optimized HDD (st1) volumes provide low-cost magnetic storage that defines
performance in terms of throughput rather than IOPS. This volume type is a good fit for large,
sequential workloads such as Amazon EMR, ETL, data warehouses, and log processing. Bootable
st1 volumes are not supported.

Throughput Optimized HDD (st1) volumes, though similar to Cold HDD (sc1) volumes, are designed
to support frequently accessed data.


https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html#inefficiency

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with
Amazon EC2. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as
you add and remove files, so your applications have the storage they need, when they need it. The
service is designed to be highly scalable, highly available, and highly durable. Amazon EFS file
systems store data and metadata across multiple Availability Zones in a region and can grow to
petabyte scale, drive high levels of throughput, and allow massively parallel access from Amazon
EC2 instances to your data.

Since the required performance (2 IOPS/GiB, bursting to 2,000 IOPS) fits within what gp2 provides (a baseline of 3 IOPS/GiB with bursts up to 3,000 IOPS), you should ideally choose EBS General Purpose SSD over the more expensive EBS Provisioned IOPS SSD.
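As a sanity check, a small sketch of the published gp2 baseline formula (3 IOPS per GiB, floor of 100, cap of 16,000) shows how a modest volume already meets the stated requirement; the 500 GiB size is just an example.

```python
def gp2_baseline_iops(size_gib: int) -> int:
    # gp2 baseline: 3 IOPS per GiB, minimum 100, maximum 16,000.
    return min(max(100, 3 * size_gib), 16_000)


size_gib = 500                 # example volume size
required = 2 * size_gib        # the application's 2 IOPS/GiB requirement

print(gp2_baseline_iops(size_gib))               # 1500 baseline IOPS
print(gp2_baseline_iops(size_gib) >= required)   # True
# Volumes under 1 TiB can additionally burst to 3,000 IOPS,
# covering the 2,000 IOPS burst requirement.
```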

For more information on AWS EBS Volumes, please visit the following URL:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html



Question 7 Correct

Domain: Define Operationally-Excellent Architectures

You are working for an electrical appliance company which has a web application hosted in AWS. This is a two-tier web application with web servers hosted in VPCs and an on-premises data centre. You are using a Network Load Balancer at the front end to distribute traffic between these servers, with instance IDs used to configure the targets. Some clients are complaining about delays in accessing this website.
To troubleshoot this issue, you are looking for a list of client IP addresses with longer TLS handshake times. You have enabled access logging on the Network Load Balancer, with logs saved in Amazon S3 buckets. Which tool can be used to quickly analyse a large number of log files, without any visualization, in a cost-effective way?

A. Use Amazon Athena to query logs saved in Amazon S3 buckets.

B. Use the Amazon S3 console to process logs.

C. Export Network Load Balancer access logs to a third-party application.

D. Use Amazon Athena along with Amazon QuickSight to query logs saved in Amazon S3 buckets.

Explanation:

Correct Answer – A

Amazon Athena is a suitable tool for querying Network Load Balancer logs. In the above case, since a large amount of logs is saved in S3 buckets from the Network Load Balancer, Amazon Athena can be used to query the logs and generate the required details of client IP addresses and TLS handshake times.

Option B is incorrect as processing a large amount of logs directly from the S3 console will be a time-consuming process.

Option C is incorrect as using a third-party tool will not be a cost-effective solution.

Option D is incorrect as in the above case we require only client IP details along with TLS handshake times for troubleshooting purposes. Amazon QuickSight would be useful if you needed data visualization.
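For illustration, the sketch below submits such a query through boto3, assuming a table has already been created over the access-log location in the Glue Data Catalog; the database, table and column names are assumptions based on the NLB log format described in the referenced Athena guide, and the bucket names are placeholders.

```python
import boto3

athena = boto3.client("athena", region_name="us-west-2")

# Hypothetical table over the NLB access logs; column names assume the
# table was created per the Athena guide for NLB logs.
query = """
SELECT client_ip, tls_handshake_time
FROM nlb_tls_logs
WHERE tls_handshake_time > 2
ORDER BY tls_handshake_time DESC
LIMIT 100
"""

resp = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "logs_db"},          # placeholder database
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/"},
)
print(resp["QueryExecutionId"])
```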

For more information on using Amazon Athena to query Network Load Balancer logs, refer to the following URL:

https://docs.aws.amazon.com/athena/latest/ug/networkloadbalancer-classic-logs.html



Question 8 Correct

Domain: Design Cost-Optimized Architectures

An application allows users to upload images to an S3 bucket. Initially these images will be
downloaded quite frequently, but after some time, the images might only be accessed once a
week and the retrieval time should be as minimal as possible. 
What could be done to ensure a COST effective solution? Choose 2 answers from the options
below. Each answer forms part of the solution.

A. Store the objects in Amazon Glacier.

B. Store the objects in S3 – Standard storage.

C. Create a Lifecycle Policy to transfer the objects to S3 – Standard storage after a certain duration of time.

D. Create a Lifecycle Policy to transfer the objects to S3 – Infrequent Access storage after a certain duration of time.
Explanation:

Answer – B and D


Store the images initially in Standard storage since they are accessed frequently. Define Lifecycle Policies to move the images to Infrequent Access storage to save on costs.

Amazon S3 Infrequent Access is perfect if you want to store data that is not frequently accessed, and is much more cost-effective than Option C, i.e. Amazon S3 Standard. Also, if you choose Amazon Glacier with expedited retrievals, you defeat the whole purpose of the requirement, because this option would result in increased costs.
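A minimal sketch of such a lifecycle rule via boto3 follows; the bucket name and the 30-day threshold are assumptions for illustration.

```python
import boto3

s3 = boto3.client("s3")

# Transition objects to Standard-IA 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="user-images-bucket",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "images-to-ia",
                "Filter": {"Prefix": ""},  # apply to every object
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)
```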

For more information on AWS Storage classes, please visit the following URL:

https://aws.amazon.com/s3/storage-classes/



Question 9 Correct

Domain: Design Resilient Architectures

 
A company needs a solution to store and archive corporate documents and has determined that
Amazon Glacier is the right solution. It is required that data is delivered within 5 minutes of a
retrieval request.
Which feature in Amazon Glacier can help meet this requirement?
 

A. Defining a Vault Lock

B. Using Expedited retrieval

C. Using Bulk retrieval

D. Using Standard retrieval

Explanation:

Answer – B

The AWS Documentation mentions the following:


Expedited retrievals allow you to quickly access your data when occasional urgent requests for a
subset of archives are required.
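As an illustration, initiating an expedited archive retrieval with boto3 might look like the sketch below; the vault name and archive ID are placeholders.

```python
import boto3

glacier = boto3.client("glacier")

# Kick off an expedited retrieval job for one archive.
resp = glacier.initiate_job(
    accountId="-",  # '-' means the account of the credentials in use
    vaultName="corporate-documents",          # placeholder vault name
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE_ARCHIVE_ID",    # placeholder archive ID
        "Tier": "Expedited",                  # data typically ready in 1-5 minutes
    },
)
print(resp["jobId"])
```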

For more information on AWS Glacier Retrieval, please visit the following URL:

https://docs.aws.amazon.com/amazonglacier/latest/dev/downloading-an-archive-two-steps.html



Question 10 Correct

Domain: Design Cost-Optimized Architectures

You are working for a global financial company. Company locations spread across various countries upload transaction detail data to an S3 bucket in the US-West region. A large amount of data is uploaded simultaneously on a daily basis from each of these locations. You are using Amazon Athena to query this data and Amazon QuickSight to create a daily dashboard for the management team. In some cases, while running queries, you are observing Amazon S3 exception errors.
Also, in the monthly bill, a high percentage of the cost is associated with Amazon Athena. Which of the following can help eliminate the S3 errors while querying data and also reduce the cost associated with queries? (Select TWO.)

A. Partition data based upon user credentials.

B. Partition data based upon date & location.

C. Create separate Workgroups based upon user groups.

D. Create a single Workgroup for all users.

Explanation:

Correct Answer – B, C

AWS Athena pricing is based upon queries and the amount of data scanned by each query. In the above case, each regional office is uploading a large amount of data simultaneously, so this data needs to be partitioned based upon location and date. Separate Workgroups can be created based upon users, teams, applications or workloads. This will minimise the data scanned by each query, improving performance and reducing cost.

Option A is incorrect as partitioning the data on user credentials is irrelevant here.

Option D is incorrect as a single Workgroup will not decrease the data scanned per query.
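For illustration, the sketch below creates a per-team Athena workgroup with a per-query scan limit, one of the cost controls workgroups provide; the names and limits are placeholders.

```python
import boto3

athena = boto3.client("athena")

# A workgroup per team lets you meter and cap what each group scans.
athena.create_work_group(
    Name="reporting-team",  # placeholder workgroup name
    Configuration={
        "ResultConfiguration": {
            "OutputLocation": "s3://athena-results-bucket/reporting/"
        },
        # Fail any single query that would scan more than ~10 GB.
        "BytesScannedCutoffPerQuery": 10 * 1024**3,
        "PublishCloudWatchMetricsEnabled": True,
    },
    Description="Workgroup for the reporting team's dashboard queries",
)
```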

For more information on partitioning data and using Workgroups, refer to the following URLs:

https://docs.aws.amazon.com/athena/latest/ug/partitions.html

https://docs.aws.amazon.com/athena/latest/ug/manage-queries-control-costs-with-workgroups.html



Question 11 Correct

Domain: Define Performant Architectures

You plan to use Auto Scaling groups to maintain the performance of your web application. How
can you ensure that the scaling activity has sufficient time to stabilize without executing another
scaling action?

A. Modify the Instance User Data property with a timeout interval.

B. Increase the Auto Scaling Cooldown timer value.

C. Enable the Auto Scaling cross zone balancing feature.

D. Disable CloudWatch alarms till the application stabilizes.

Explanation:

Answer – B

AWS Documentation mentions the following:

The Cooldown period is a configurable setting for your Auto Scaling group which ensures that it
doesn't launch or terminate additional instances before the previous scaling activity takes effect.
After the Auto Scaling group dynamically scales using a simple Scaling Policy, it waits for the
Cooldown period to complete before resuming scaling activities.
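A minimal boto3 sketch for raising the group's default cooldown follows; the group name and the 300-second value are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Give scaling activities five minutes to take effect before the
# group evaluates another simple-scaling action.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-app-asg",  # placeholder group name
    DefaultCooldown=300,
)
```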


For more information on Auto Scaling Cooldown, please visit the following URL:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html



Question 12 Incorrect

Domain: Specify Secure Applications and Architectures

A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet created with default ACL settings. The IT Security department has identified a DoS attack from a suspected IP address. How can you protect the subnets from this attack?

A. Change the Inbound Security Groups to deny access from the suspected IP.

B. Change the Outbound Security Groups to deny access from the suspected IP.

C. Change the Inbound NACL to deny access from the suspected IP.

D. Change the Outbound NACL to deny access from the suspected IP.

Explanation:

Answer – C

Options A and B are invalid because Security Groups only support allow rules and cannot be used to explicitly deny traffic from a specific IP. You can use NACLs as an additional security layer for the subnet to deny traffic.

Option D is invalid since changing just the Inbound Rules is sufficient.

AWS Documentation mentions the following:

A Network access control list (ACL) is an optional layer of security for your VPC that acts as a
firewall for controlling traffic in and out of one or more subnets. You might set up network ACLs
with rules similar to your security groups in order to add an additional layer of security to your
VPC. 
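For illustration, adding such a deny rule with boto3 might look like the sketch below; the NACL ID, rule number and IP address are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Deny all inbound traffic from the suspect address. The rule number
# must be lower than the subnet's allow rules, since NACL rules are
# evaluated in ascending order and the first match wins.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder NACL ID
    RuleNumber=90,
    Protocol="-1",                 # all protocols
    RuleAction="deny",
    Egress=False,                  # inbound rule
    CidrBlock="198.51.100.7/32",   # placeholder suspect IP
)
```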


For more information on Network Access Control Lists, please visit the following URL:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html



Question 13 Incorrect

Domain: Define Operationally-Excellent Architectures

A company is planning on allowing their users to upload and read objects from an S3 bucket. Due to the large number of users, the read/write traffic will be very high. How should the architect maximize Amazon S3 performance?

A. Prefix each object name with a random string.

B. Use the STANDARD_IA storage class.

C. Prefix each object name with the current date.

D. Enable versioning on the S3 bucket.

Explanation:

Answer – A

If the request rate is high, you can use hash keys or random strings to prefix the object name. The partitions used to store the objects will then be better distributed, allowing better read/write performance for your objects.
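A small sketch of this prefixing scheme is shown below; deriving the prefix from a hash of the key keeps it deterministic, and the four-character length is an arbitrary choice.

```python
import hashlib


def prefixed_key(key: str, length: int = 4) -> str:
    # Derive a short, stable hash prefix so keys spread evenly
    # across S3's index partitions.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return f"{digest[:length]}/{key}"


print(prefixed_key("reports/2019/09/daily.csv"))
# e.g. '3f1a/reports/2019/09/daily.csv' (prefix varies by key)
```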

For more information on how to ensure performance in S3, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html




Question 14 Correct

Domain: Define Performant Architectures

An EC2 Instance setup in AWS will host an application which will make API calls to the Simple
Storage Service. What is an ideal way for the application to access the Simple Storage Service?

A. Pass API credentials to the instance using instance user data.

B. Store API credentials as an object in a separate Amazon S3 bucket.

C. Embed the API credentials into your application.

D. Create and Assign an IAM role to the EC2 Instance.
Explanation:

Answer - D

AWS Documentation mentions the following:

You can use roles to delegate access to users, applications, or services that don't normally have
access to your AWS resources. It is not a good practice to use IAM credentials for a production-
based application. It is always a good practice to use IAM Roles.
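To illustrate why roles are the clean approach: with an instance profile attached, the SDK resolves temporary credentials automatically from the instance metadata, so application code never handles keys. A minimal sketch, assuming the instance's role already grants the needed S3 access and using a placeholder bucket name:

```python
import boto3

# No access keys anywhere: on an EC2 instance with an attached IAM
# role, boto3 fetches rotating temporary credentials from the
# instance metadata service automatically.
s3 = boto3.client("s3")

s3.put_object(
    Bucket="app-data-bucket",   # placeholder bucket
    Key="healthcheck.txt",
    Body=b"ok",
)
```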

For more information on IAM Roles, please visit the following URL:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html



Question 15 Incorrect

Domain: Define Performant Architectures

Videos are uploaded to an S3 bucket, and you need to provide users with access to view them. What is the best way to do so, while maintaining a good user experience for all users regardless of the region in which they are located?

A. Enable Cross-Region Replication for the S3 bucket to all regions.

B. Use CloudFront with the S3 bucket as the source.

C. Use API Gateway with the S3 bucket as the source.

D. Use AWS Lambda functions to deliver the content to users.

Explanation:

Answer – B

AWS Documentation mentions the following to back up this requirement:

Amazon CloudFront is a web service that speeds up distribution of static and dynamic web
content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content
through a worldwide network of data centers called Edge locations. When a user requests content
that you're serving with CloudFront, the user is routed to the Edge location that provides the
lowest latency (time delay), so that content is delivered with the best possible performance. If the
content is already in the Edge location with the lowest latency, CloudFront delivers it immediately.
If the content is not in that Edge location, CloudFront retrieves it from an Amazon S3 bucket or an
HTTP server (for example, a web server) that you have identified as the source for the definitive
version of your content.

For more information on Amazon CloudFront, please visit the following URL:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Introduction.html




Question 16 Correct

Domain: Design Resilient Architectures

An organization has a requirement to store 10TB worth of scanned files across multiple availability
zones. They are required to have a search application in place to search through the scanned files. 
Which of the below mentioned options is ideal for implementing the search facility?

A. Use S3 with reduced redundancy to store and serve the scanned files. Install a commercial search application on EC2 Instances and configure it with Auto Scaling and an Elastic Load Balancer.

B. Model the environment using CloudFormation. Use an EC2 instance running an Apache web server and an open-source search application, striping multiple standard EBS volumes together to store the scanned files with a search index.

C. Use S3 with standard redundancy to store and serve the scanned files. Use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple Availability Zones.

D. Use a single-AZ RDS MySQL instance to store the search index for the scanned files and use an EC2 instance with a custom application to search based on the index.

Explanation:

Answer – C

With Amazon CloudSearch, you can quickly add rich search capabilities to your website or
application. You don't need to become a search expert or worry about hardware provisioning,
setup, and maintenance. With a few clicks in the AWS Management Console, you can create a
search domain and upload the data that you want to make searchable, and Amazon CloudSearch
will automatically provision the required resources and deploy a highly tuned search index.

You can easily change your search parameters, fine tune search relevance, and apply new settings
at any time. As your volume of data and traffic fluctuates, Amazon CloudSearch seamlessly scales
to meet your needs.

For more information on AWS CloudSearch, please visit the below links:

https://aws.amazon.com/cloudsearch/

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/awseb-dg.pdf



Question 17 Correct

Domain: Design Resilient Architectures

You are working for a global financial company. Company locations spread across various countries upload transaction data to an S3 bucket in the US-West region. You will be using AWS Glue and Amazon Athena to further analyse this data. You are using a Crawler which scans all data from the S3 buckets and populates the Glue Data Catalog, which Amazon Athena will query.
A large amount of CSV data is uploaded on a daily basis simultaneously from all the global locations. To decrease scanning time while scanning data in the S3 buckets, you need to ensure that only changes in the datasets are scanned. Which of the following configurations will meet this requirement?

A. Reset the Job Bookmark in AWS Glue.

B. Disable the Job Bookmark in AWS Glue.

C. Pause the Job Bookmark in AWS Glue.

D. Enable the Job Bookmark in AWS Glue.

Explanation:

Correct Answer – D

AWS Glue keeps track of processed data using a Job Bookmark. Enabling the Job Bookmark will help scan only the changes since the last bookmark and prevent processing of the whole dataset again.

Option A is incorrect as resetting the Job Bookmark in AWS Glue will reprocess all data; it will not prevent reprocessing of already scanned data.

Option B is incorrect as disabling the Job Bookmark will process all data each time.

Option C is incorrect as pausing the Job Bookmark will process the incremental data since the last scan but will not update the state information, so the succeeding run will again scan all data since the last bookmark.
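For illustration, job bookmarks are switched on through the --job-bookmark-option job argument; a minimal boto3 sketch follows, with the job name, role ARN and script location as placeholders.

```python
import boto3

glue = boto3.client("glue")

glue.create_job(
    Name="transactions-etl",                            # placeholder job name
    Role="arn:aws:iam::123456789012:role/GlueJobRole",  # placeholder role
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://etl-scripts-bucket/transactions.py",
    },
    DefaultArguments={
        # Only data new since the last successful run is processed.
        "--job-bookmark-option": "job-bookmark-enable",
    },
)
```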

For more information on Job Bookmarks in AWS Glue, refer to the following URL:

https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html



Question 18 Incorrect

Domain: Design Resilient Architectures

A concern raised in your company is that developers could potentially delete production-based
EC2 resources. As a Cloud Admin, which of the below options would you choose to help alleviate
this concern? Choose 2 options.

A. Tag the production instances with a production-identifying tag and add resource-level permissions to the developers with an explicit deny on the terminate API call for instances with the production tag.

B. Create a separate AWS account and move the developers to that account.

C. Modify the IAM policy on the production users to require MFA before deleting EC2 instances, and disable MFA access for the employee.

D. Modify the IAM policy on the developers to require MFA before deleting EC2 instances.

Explanation:


Answer – A and B

Creating a separate AWS account for developers helps the organization achieve the highest level of resource and security isolation.

The following documentation from AWS gives us a clear picture of the scenarios when we need to
consider creating multiple accounts.

When to Create Multiple Accounts


While there is no one-size-fits-all answer for how many AWS accounts a particular customer
should have, most companies will want to create more than one AWS account because multiple
accounts provide the highest level of resource and security isolation. Answering “yes” to any of the
following questions is a good indication that you should consider creating additional AWS
accounts:

Does the business require administrative isolation between workloads?


Administrative isolation by account provides the most straightforward approach for granting independent administrative groups different levels of administrative control over AWS resources based on the workload, development lifecycle, business unit (BU), or data sensitivity.

Does the business require limited visibility and discoverability of workloads?


Accounts provide a natural boundary for visibility and discoverability. Workloads cannot be
accessed or viewed unless an administrator of the account enables access to users managed in
another account.

Does the business require isolation to minimize the blast radius?


Blast-radius isolation by account provides a mechanism for limiting the impact of a critical event such as a security breach, if an AWS Region or Availability Zone becomes unavailable, account suspensions, etc. Separate accounts help define boundaries and provide natural blast-radius isolation.

Does the business require strong isolation of recovery and/or auditing data?
Businesses that are required to control access and visibility to auditing data due to regulatory requirements can isolate their recovery data and/or auditing data in an account separate from where they run their workloads (e.g., writing CloudTrail logs to a different account).

For more information:

https://d0.awsstatic.com/aws-answers/AWS_Multi_Account_Security_Strategy.pdf

Tags enable you to categorize your AWS resources in different ways, for example, by purpose,
owner, or environment. This is useful when you have many resources of the same type — you can
quickly identify a specific resource based on the tags you've assigned to it. Each tag consists of a
key and an optional value, both of which you define.

For more information on tagging AWS resources, please refer to the below URL:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html

The question says that the developers should not have the option to delete production-based resources. Options A and B completely keep the developers away from production resources.

With MFA, developers could still delete production-based resources if they know the MFA code, which is not recommended.
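As an illustration of option A, the sketch below attaches an inline policy with an explicit deny on ec2:TerminateInstances for instances carrying a production tag; the group name and the tag key/value are placeholders.

```python
import json

import boto3

iam = boto3.client("iam")

# An explicit deny always wins over any allow the developers may have.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "*",
            "Condition": {
                # Matches instances tagged Environment=production (placeholder tag).
                "StringEquals": {"ec2:ResourceTag/Environment": "production"}
            },
        }
    ],
}

iam.put_group_policy(
    GroupName="developers",                 # placeholder group
    PolicyName="DenyTerminateProduction",
    PolicyDocument=json.dumps(policy),
)
```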

AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of
protection on top of your user name and password. With MFA enabled, when a user signs in to an
AWS website, they will be prompted for their user name and password (the first factor—what they
know), as well as for an authentication code from their AWS MFA device (the second factor—what
they have). Taken together, these multiple factors provide increased security for your AWS account
settings and resources.

Organizations have better control over newly created accounts than over old AWS accounts, because they can easily monitor and maintain the (few) permissions assigned in those accounts, and they can delete those accounts once the required task is done.



Question 19 Correct

Domain: Design Resilient Architectures

A company needs to monitor the read and write IOPS metrics for their AWS MySQL RDS instance
and send real-time alerts to their Operations team. Which AWS services can accomplish this?
Choose 2 answers from the options given below.

A. Amazon Simple Email Service

B. Amazon CloudWatch

C. Amazon Simple Queue Service

D. Amazon Route 53

E. Amazon Simple Notification Service
Explanation:


Answer – B and E

Amazon CloudWatch can be used to monitor the IOPS metrics from the RDS instance, and Amazon Simple Notification Service can send the notification when an alarm is triggered.
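A minimal sketch wiring a CloudWatch alarm on ReadIOPS to an SNS topic follows; the instance identifier, topic ARN and threshold are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when average ReadIOPS stays above 1,000 for two 5-minute periods;
# CloudWatch then publishes to the Ops team's SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="rds-read-iops-high",
    Namespace="AWS/RDS",
    MetricName="ReadIOPS",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "prod-mysql"}],  # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=1000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder ARN
)
```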

For more information on CloudWatch metrics, please refer to the link below.

http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CW_Support_For_AWS.html



Question 20 Correct

Domain: Define Operationally-Excellent Architectures

You run an ad-supported photo-sharing website using S3 to serve photos to visitors of your site. At
some point you find out that other sites have been linking to photos on your site, causing loss to
your business. What is an effective method to mitigate this? Choose the correct answer from the
options below:

A. Use CloudFront distributions for static content.

B. Store photos on an EBS volume of the web server.

C. Remove public read access and use signed URLs with expiry dates.

D. Block the IPs of the offending websites in Security Groups.

Explanation:

Answer – C

You can distribute private content using a signed URL that is valid for only a short time—possibly
for as little as a few minutes. Signed URLs that are valid for such a short period are good for
distributing content on-the-fly to a user for a limited purpose, such as distributing movie rentals or
music downloads to customers on demand. 
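A minimal sketch generating such a short-lived URL with boto3 follows; the bucket, key and 5-minute expiry are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# The URL grants temporary GET access to one private object, then expires.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "photo-site-bucket", "Key": "photos/sunset.jpg"},  # placeholders
    ExpiresIn=300,  # seconds
)
print(url)
```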

For more information on Signed URLs, please visit the below link:

https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html



Question 21 Incorrect

Domain: Define Operationally-Excellent Architectures

You are working with a global IT firm which has a web application hosted in AWS. This is a two-tier web application with web servers behind an Application Load Balancer. A new application has been developed, for which you need to analyse performance at each node.
These parameters will be used as a reference before moving this application into commercial service, and thereafter for any operational challenges. You are using AWS X-Ray for this purpose. Which of the following will help to get traces while ensuring the cost stays within limits?

A. Sampling at the default rate.

B. Sampling at a higher rate.

C. Filter expressions.

D. Sampling at a low rate.
Explanation:

Correct Answer – D

The sampling rate can be lowered to collect a statistically significant number of requests, getting adequate traces while keeping the solution cost-effective.

Option A is incorrect as the default sampling rate is conservative, but it can be customised further to sample at a lower rate based upon your sampling requirements.

Option B is incorrect as sampling at a higher rate will collect more data, but will also incur additional cost.

Option C is incorrect as filter expressions can be used to narrow down results from the traces scanned during a defined period. They do not affect the number of traces recorded.
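For illustration, a custom sampling rule at a low fixed rate might be created as in the sketch below; the rule name, priority and rates are placeholder choices, and the field set follows my understanding of the X-Ray sampling-rule API.

```python
import boto3

xray = boto3.client("xray")

# Record the first request each second (reservoir), then only 5%
# of the remainder: enough for statistics, cheap to store.
xray.create_sampling_rule(
    SamplingRule={
        "RuleName": "low-rate-baseline",  # placeholder name
        "Priority": 100,
        "FixedRate": 0.05,
        "ReservoirSize": 1,
        "ServiceName": "*",
        "ServiceType": "*",
        "Host": "*",
        "HTTPMethod": "*",
        "URLPath": "*",
        "ResourceARN": "*",
        "Version": 1,
    }
)
```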

For more information on X-Ray parameters, refer to the following URL:

https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html




Question 22 Correct

Domain: Specify Secure Applications and Architectures

Your IT Security department has mandated that all data on EBS volumes created for underlying
EC2 Instances needs to be encrypted. Which of the following can help achieve this?

A. AWS KMS

B. AWS Certificate Manager

C. API Gateway with STS

D. IAM Access Key

Explanation:

Answer – A

Option B is incorrect - The AWS Certificate manager can be used to generate SSL certificates used
to encrypt traffic in transit, but not at rest.

Option C is incorrect - This is used for issuing tokens when using the API gateway for traffic in
transit.

Option D is incorrect - IAM Access Keys are used for programmatic access to AWS APIs, not for encrypting data at rest.

The AWS Documentation mentions the following on AWS KMS:

AWS Key Management Service (AWS KMS) is a managed service that makes it easy for you to
create and control the encryption keys used to encrypt your data. AWS KMS is integrated with
other AWS services including Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage
Service (Amazon S3), Amazon Redshift, Amazon Elastic Transcoder, Amazon WorkMail, Amazon
Relational Database Service (Amazon RDS), and others to make it simple to encrypt your data with
encryption keys that you manage.
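A minimal sketch creating a KMS-encrypted EBS volume follows; the AZ, size and key alias are placeholders (omitting KmsKeyId would use the account's default EBS CMK).

```python
import boto3

ec2 = boto3.client("ec2")

# New volume encrypted at rest with a KMS key.
ec2.create_volume(
    AvailabilityZone="us-east-1a",       # placeholder AZ
    Size=100,                            # GiB
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="alias/ebs-prod-key",       # placeholder key alias
)
```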

For more information on AWS KMS, please visit the following URL:

https://docs.aws.amazon.com/kms/latest/developerguide/overview.html




Question 23 Incorrect

Domain: Design Cost-Optimized Architectures

A company is worried about the EBS volume hosted in AWS and wants to ensure that redundancy
is achieved for the same. What must be done to achieve this in a cost-effective manner?

A. Nothing, since by default, EBS Volumes are replicated within their Availability Zones.

B. Copy the data to an S3 bucket for data redundancy.

C. Create EBS Snapshots in another Availability Zone for data redundancy.

D. Copy the data to a DynamoDB table for data redundancy.

Explanation:

Answer – A

The AWS Documentation mentions the following:

Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with
Amazon EC2 instances in the AWS Cloud. Each Amazon EBS volume is automatically replicated
within its Availability Zone to protect you from component failure, offering high availability and
durability. Amazon EBS volumes offer the consistent and low-latency performance needed to run
your workloads. With Amazon EBS, you can scale your usage up or down within minutes – all while
paying a low price for only what you provision.

For more information on EBS, please visit the following URL:

https://aws.amazon.com/ebs/

Note:

Amazon EBS Availability & Durability:

Amazon EBS volumes are designed to be highly available and reliable. At no additional charge to you, Amazon EBS volume data is replicated across multiple servers in an Availability Zone to prevent the loss of data from the failure of any single component.

Amazon EBS volumes are designed for an annual failure rate (AFR) of between 0.1% - 0.2%, where
failure refers to a complete or partial loss of the volume, depending on the size and performance
of the volume. This makes EBS volumes 20 times more reliable than typical commodity disk drives,
which fail with an AFR of around 4%. For example, if you have 1,000 EBS volumes running for 1 year,
you should expect 1 to 2 will have a failure. EBS also supports a snapshot feature, which is a good
way to take point-in-time backups of your data.

Please refer to the link below:

https://aws.amazon.com/ebs/details/#AvailabilityandDurability

We need to notice two things: one, the question says that we need to achieve this in a cost-effective manner; two, the exam is testing whether you understood the core functionality of the service. You can expect this kind of question in the AWS exam as well. Try to understand the question and answer accordingly.

In the exam, if the question specifically mentions replication based on Multi-AZ, then you can go with Option C. So, based on the question, the answer will differ.



Question 24 Correct

Domain: Define Performant Architectures

A mobile application hosted on AWS needs to access a Data Store in AWS. With each item
measuring around 10KB in size, the latency of data access must remain consistent despite very
high application traffic. Which of the following would be an ideal Data Store for the application?

A. AWS DynamoDB

B. AWS EBS Volumes

C. AWS Glacier

D. AWS Redshift

Explanation:


Answer - A

AWS Documentation mentions the following:

Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need
consistent, single-digit millisecond latency at any scale. It is a fully managed cloud database and
supports both document and key-value store models. Its flexible data model, reliable
performance, and automatic scaling of throughput capacity, makes it a great fit for mobile, web,
gaming, ad tech, IoT, and many other applications.
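A minimal sketch of the key-value access pattern follows; the table and attribute names are placeholders.

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("MobileAppData")  # placeholder table with 'UserId' hash key

# ~10KB items, consistent single-digit-millisecond reads/writes at any scale.
table.put_item(Item={"UserId": "u-1001", "Profile": {"theme": "dark"}})

resp = table.get_item(Key={"UserId": "u-1001"})
print(resp["Item"])
```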

For more information on AWS DynamoDB, please visit the following URL:

https://aws.amazon.com/dynamodb/



Question 25 Incorrect

Domain: Design Resilient Architectures

A company is planning to design a microservices-architected application that will be hosted in AWS. This architecture needs to be decoupled wherever possible. Which of the following services can help achieve this? Please select 2 correct options.

A. AWS SNS

B. AWS ELB

C. AWS Auto Scaling

D. AWS SQS
Explanation:

Answer – A and D

AWS Documentation mentions the following:

Amazon Simple Queue Service (SQS) is a fully managed message queuing service that makes it
easy to decouple and scale microservices, distributed systems, and serverless applications.

Building applications from individual components that each perform a discrete function improves
scalability and reliability, and is best practice design for modern applications. SQS makes it simple
and cost-effective to decouple and coordinate the components of a cloud application. Using SQS,
you can send, store, and receive messages between software components at any volume, without
losing messages or requiring other services to be always available.
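A minimal producer/consumer sketch over SQS follows; the queue name is a placeholder.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.create_queue(QueueName="orders")["QueueUrl"]  # placeholder name

# Producer microservice: fire-and-forget.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

# Consumer microservice: polls independently, at its own pace.
messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=10)
for msg in messages.get("Messages", []):
    print(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```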

For more information on AWS SQS, please visit the following URL:

https://aws.amazon.com/sqs/

For more information on AWS SNS, please visit the following URL:

https://aws.amazon.com/blogs/compute/implementing-enterprise-integration-patterns-with-aws-messaging-services-point-to-point-channels/

Note:

The question asks which services can be used to decouple microservices wherever required. Both of the following services can be used with microservices. Please find the links for your reference:

https://aws.amazon.com/sns/

https://aws.amazon.com/blogs/compute/implementing-enterprise-integration-patterns-with-aws-messaging-services-point-to-point-channels/

The below point is from the AWS SNS FAQs:

 Q: How is Amazon SNS different from Amazon MQ?

 Amazon MQ, Amazon SQS, and Amazon SNS are messaging services that are suitable for anyone
from startups to enterprises. If you're using messaging with existing applications, and want to move
your messaging to the cloud quickly and easily, we recommend you consider Amazon MQ. It supports
industry-standard APIs and protocols so you can switch from any standards-based message broker
to Amazon MQ without rewriting the messaging code in your applications. If you are building brand
new applications in the cloud, we recommend you consider Amazon SQS and Amazon SNS. Amazon
SQS and SNS are lightweight, fully managed message queue and topic services that scale almost
infinitely and provide simple, easy-to-use APIs. You can use Amazon SQS and SNS to decouple and
scale microservices, distributed systems, and serverless applications, and improve reliability.




Question 26 Correct

Domain: Specify Secure Applications and Architectures

You are working for a construction firm. They are using Amazon WorkDocs for sharing project planning documents with third-party external contract teams. Last week there was an incident where a user shared a sensitive document, leaking financial information to external third-party users.
The security team revoked access for all users; only nominated users should be allowed to grant access to use WorkDocs or share links with third-party users. Which of the following are correct options? (Select TWO.)

A. For external users to use the WorkDocs site, allow only Power users the permission to invite new external users.

B. For external users to use the WorkDocs site, allow only Managed users the permission to invite new external users.

C. For sending publicly shareable links, grant permission to share publicly only to Power users.

D. For sending publicly shareable links, grant permission to share publicly only to Managed users.

Explanation:

Correct Answer – A, C

To restrict all users from inviting external users and from sharing WorkDocs links publicly, you can designate Power users who will be responsible for performing these activities.

Option B is incorrect as in this case all users would be able to invite external users.

Option D is incorrect as in this case all users would be able to share links with external users.

For more information on managing access in Amazon WorkDocs, refer to the following URL:

https://docs.aws.amazon.com/workdocs/latest/adminguide/security-settings.html



Question 27 Correct

Domain: Define Performant Architectures


Your architecture for an application currently consists of EC2 Instances sitting behind a classic ELB.
The EC2 Instances are used to serve an application and are accessible through the internet. What
can be done to improve this architecture in the event that the number of users accessing the
application increases?

A. Add another ELB to the architecture.

B. Use Auto Scaling Groups.

C. Use an Application Load Balancer instead.

D. Use the Elastic Container Service.

Explanation:

Answer – B

AWS Documentation mentions the following:

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain
steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, it is easy to
setup application scaling for multiple resources across multiple services in minutes.
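For illustration, a target-tracking policy on the group keeps average CPU near a chosen value as user traffic grows; the group name and target are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the web tier out/in automatically to hold ~50% average CPU.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",   # placeholder group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```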

For more information on AWS Auto Scaling, please visit the following URL:

https://aws.amazon.com/autoscaling/



Question 28 Incorrect

Domain: Define Performant Architectures

You are working as an AWS Architect for a retail website with an application deployed on EC2 instances. You are using an SQS queue for storing messages between the web servers and database servers. Due to heavy load on the website, there are some cases where clients are getting price details before logging in to the website.
To resolve this issue, you are planning to migrate to an SQS FIFO queue, which will preserve the order of messages. You have created a new FIFO queue with the message delay time set per queue instead of per message, deleting the existing standard queue. Which of the following are prerequisites for a smooth migration to the SQS FIFO queue? (Select TWO.)

A. Each FIFO queue should have a Message group ID only in case multiple ordered message groups are required.

B. For applications with identical message bodies, use a content-based deduplication ID, while for unique message bodies use a unique deduplication ID.

C. Each FIFO queue should have a Message group ID, irrespective of whether multiple ordered message groups are required.

D. For applications with identical message bodies, use a unique deduplication ID, while for unique message bodies use a content-based deduplication ID.
Explanation:

Correct Answer – C, D.

FIFO queues use a Message Deduplication ID and a Message Group ID per message. The Message Deduplication ID is used as a token while sending messages. The Message Group ID is used as a tag for grouping, so that messages within a group are processed in an orderly manner.

Option A is incorrect as each FIFO queue requires a Message group ID. If there are no multiple ordered message groups, you can specify the same Message group ID for all messages.

Option B is incorrect as, if an application is using identical message bodies, a unique deduplication ID is required per message, while for an application using unique message bodies, content-based deduplication should be used.
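A minimal sketch of sending to a FIFO queue follows; the queue name, group ID and deduplication scheme are placeholders.

```python
import uuid

import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="orders.fifo")["QueueUrl"]  # placeholder

# Messages sharing a MessageGroupId are delivered strictly in order.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody='{"item": "price-update"}',
    MessageGroupId="pricing",                    # placeholder group
    # Needed when content-based deduplication is off, or when identical
    # bodies must still be treated as distinct messages.
    MessageDeduplicationId=str(uuid.uuid4()),
)
```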

For more information on moving from an SQS Standard queue to an SQS FIFO queue, refer to the following URL:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html#FIFO-queues-moving



Question 29 Correct

Domain: Define Performant Architectures


You are the architect for a business intelligence application which reads data from a MySQL
database hosted on an EC2 Instance. The application experiences a high number of read and write
requests.
Which Amazon EBS Volume type can meet the performance requirements of this database?

A. EBS Provisioned IOPS SSD

B. EBS Throughput Optimized HDD

C. EBS General Purpose SSD

D. EBS Cold HDD

Explanation:

Answer – A

Since there is a high performance requirement with high IOPS needed, one needs to opt for EBS Provisioned IOPS SSD.

(The AWS Documentation's EBS volume-type comparison, not reproduced here, recommends Provisioned IOPS SSD for critical, I/O-intensive database workloads.)
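For illustration, provisioning such a volume with boto3 might look like the sketch below; the size, IOPS and AZ are placeholders (io1 allows up to a 50:1 IOPS-to-GiB ratio).

```python
import boto3

ec2 = boto3.client("ec2")

# Provisioned IOPS volume sized for a busy MySQL data directory.
ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=200,                       # GiB
    VolumeType="io1",
    Iops=10000,                     # provisioned IOPS (placeholder value)
)
```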

For more information on AWS EBS Volume types, please visit the following URL:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html



Question 30 Correct

Domain :Design Resilient Architectures

An organization planning to use AWS for their production rollout wants to implement automation for deployment, such that it will automatically create a LAMP stack, download the latest PHP installable from S3, and set up the ELB. Which of the below mentioned AWS services meets the requirement for making an orderly deployment of the software?

A. AWS Elastic Beanstalk
B. AWS CloudFront
C. AWS CloudFormation
D. AWS DevOps

Explanation:

Answer – A

Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services.

We can simply upload code and Elastic Beanstalk automatically handles the deployment, from
capacity provisioning, load balancing, Auto-Scaling, to application health monitoring. Meanwhile,
we can retain full control over the AWS resources used in the application and access the
underlying resources at any time.

Hence, A is the CORRECT answer.

For more information on launching a LAMP stack with Elastic Beanstalk, follow the below link.

https://aws.amazon.com/getting-started/projects/launch-lamp-web-app/

How is AWS CloudFormation different from AWS Elastic Beanstalk?

These services are designed to complement each other. AWS Elastic Beanstalk provides an environment to easily deploy and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience for you to manage the lifecycle of your applications. AWS CloudFormation is a convenient provisioning mechanism for a broad range of AWS resources. It supports the infrastructure needs of many different types of applications such as existing enterprise applications, legacy applications, applications built using a variety of AWS resources and container-based solutions (including those built using AWS Elastic Beanstalk).

Think of Elastic Beanstalk as a platform where you deploy an application, and CloudFormation as the place where you define a stack of resources. Elastic Beanstalk is good for the relatively narrow use case of PaaS applications, and CloudFormation is good for the relatively broad use case of defining infrastructure as code.

The question asks which of the mentioned AWS services meets the requirement for making an orderly deployment of the software. We are not provisioning a broad range of AWS resources here.

So, Elastic Beanstalk is the suitable answer.



Question 31 Correct

Domain :Define Operationally-Excellent Architectures

Your company is planning on using the API Gateway service to manage APIs for developers and
users. There is a need to segregate the access rights for both developers and users. How can this
be accomplished?

A. Use IAM permissions to control the access.
B. Use AWS Access keys to manage the access.
C. Use AWS KMS service to manage the access.
D. Use AWS Config Service to control the access.

Explanation:

Answer - A

AWS Documentation mentions the following:

You control access to Amazon API Gateway with IAM permissions by controlling access to the
following two API Gateway component processes:


To create, deploy, and manage an API in API Gateway, you must grant the API developer
permissions to perform the required actions supported by the API management component of
API Gateway.

To call a deployed API or to refresh the API caching, you must grant the API caller permissions
to perform required IAM actions supported by the API execution component of API Gateway.

 For more information on permissions for the API gateway, please visit the URL:

https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html
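
To make the two permission sets concrete, here is a hedged sketch of an identity-based policy for API callers, granting only execute-api:Invoke on one deployed stage; the account ID, API ID, and policy name are placeholders. Developers would instead be granted apigateway management actions.

    import json
    import boto3

    iam = boto3.client("iam")

    # Callers may invoke GET methods on the "prod" stage only.
    caller_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/prod/GET/*",
        }],
    }

    iam.create_policy(
        PolicyName="ApiGatewayCallers",
        PolicyDocument=json.dumps(caller_policy),
    )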



Question 32 Incorrect

Domain :Design Resilient Architectures

You currently have 2 development environments hosted in 2 different VPCs in an AWS account in
the same region. There is now a need for resources from one VPC to access another. How can this
be accomplished?

A. Establish a Direct Connect connection.
B. Establish a VPN connection.
C. Establish VPC Peering.
D. Establish Subnet Peering.

Explanation:

Answer – C

AWS Documentation mentions the following:


A VPC peering connection is a networking connection between two VPCs that enables you to
route traffic between them privately. Instances in either VPC can communicate with each other as
if they are within the same network. You can create a VPC peering connection between your own
VPCs, with a VPC in another AWS account, or with a VPC in a different AWS Region.

For more information on VPC peering, please visit the URL:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html
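
A hedged boto3 sketch of peering two VPCs in the same account and region; the VPC IDs, route table ID, and CIDR block are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request and accept a peering connection (same account, same region).
    pcx = ec2.create_vpc_peering_connection(
        VpcId="vpc-11111111", PeerVpcId="vpc-22222222"
    )
    pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
    ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Each VPC's route table needs a route to the other VPC's CIDR.
    ec2.create_route(
        RouteTableId="rtb-aaaa1111",
        DestinationCidrBlock="10.2.0.0/16",
        VpcPeeringConnectionId=pcx_id,
    )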



Question 33 Incorrect

Domain :Design Cost-Optimized Architectures

Your company is planning on using the EMR service available in AWS for running their big data
framework and wants to minimize the cost for running the EMR service. Which of the following
could help achieve this?

A. Running the EMR cluster in a dedicated VPC
B. Choosing Spot Instances for the underlying nodes
C. Choosing On-Demand Instances for the underlying nodes
D. Disable automated backups

Explanation:

Answer - B

AWS Documentation mentions the following:

Spot Instances in Amazon EMR provide an option to purchase Amazon EC2 instance capacity at a
reduced cost as compared to On-Demand purchasing.

For more information on Instance types for EMR, please visit the URL:


https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-instance-purchasing-options.html

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-instances-guidelines.html
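
As a rough sketch under assumed names, launching an EMR cluster whose core nodes run on Spot capacity while the master stays On-Demand; the cluster name, release label, instance types, and counts are hypothetical.

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")

    # Core nodes on Spot capacity cut cost versus On-Demand purchasing.
    emr.run_job_flow(
        Name="bigdata-cluster",
        ReleaseLabel="emr-5.27.0",
        ServiceRole="EMR_DefaultRole",
        JobFlowRole="EMR_EC2_DefaultRole",
        Instances={
            "InstanceGroups": [
                {"Name": "master", "InstanceRole": "MASTER",
                 "Market": "ON_DEMAND", "InstanceType": "m5.xlarge",
                 "InstanceCount": 1},
                {"Name": "core", "InstanceRole": "CORE",
                 "Market": "SPOT", "InstanceType": "m5.xlarge",
                 "InstanceCount": 4},
            ],
        },
    )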



Question 34 Correct

Domain :De ne Operationally-Excellent Architectures

You have an S3 bucket hosted in AWS which is used to store promotional videos you upload. You
need to provide access to users for a limited duration of time. How can this be achieved?

A. Use versioning and enable a timestamp for each version.
B. Use Pre-signed URLs with session duration.
C. Use IAM Roles with a timestamp to limit the access.
D. Use IAM policies with a timestamp to limit the access.

Explanation:

Answer - B

AWS Documentation mentions the following:

All objects by default are private. Only the object owner has permission to access these objects.
However, the object owner can optionally share objects with others by creating a pre-signed URL,
using their own security credentials, to grant time-limited permission to download the objects.

For more information on pre-signed URLs, please visit the URL below.

https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
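
A minimal boto3 sketch of generating a time-limited pre-signed URL; the bucket, key, and expiry below are hypothetical placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Time-limited GET link for one object; access expires after an hour.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "promo-videos", "Key": "launch.mp4"},
        ExpiresIn=3600,  # seconds
    )
    print(url)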




Question 35 Correct

Domain :Design Resilient Architectures

An application currently writes a large number of records to a DynamoDB table in one region.
There is a requirement for a secondary application to retrieve new records written to the
DynamoDB table every 2 hours and process the updates accordingly. Which of the following is an
ideal way to ensure that the secondary application gets the relevant changes from the DynamoDB
table?

A. Insert a timestamp for each record and then scan the entire table for the timestamp as per the last 2 hours.
B. Create another DynamoDB table with the records modified in the last 2 hours.
C. Use DynamoDB Streams to monitor the changes in the DynamoDB table.
D. Transfer records to S3 which were modified in the last 2 hours.

Explanation:

Answer – C

AWS Documentation mentions the following:

A DynamoDB Stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.

Whenever an application creates, updates, or deletes items in the table, DynamoDB Streams write
a stream record with the primary key attribute(s) of the items that were modified. A stream
record contains information about a data modification to a single item in a DynamoDB table. You
can configure the stream so that the stream records capture additional information, such as the
"before" and "after" images of modified items.

For more information on DynamoDB Streams, please visit the below URL.

http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html




Question 36 Incorrect

Domain :De ne Performant Architectures

You are working as an AWS Architect for an IT firm which is developing a new application for traders in the capital markets. Multiple trading orders are initiated by clients. You have multiple EC2 instances with Auto Scaling groups to process these trades in parallel. Also, each trade is stateful & needs to be processed independently. Which of the following can be used to meet this requirement?

A. Use SQS FIFO queue with Receive Request Attempt ID.
B. Use SQS FIFO queue with Sequence number.
C. Use SQS FIFO queue with Message Group ID.
D. Use SQS FIFO queue with Deduplication ID.

Explanation:

Correct Answer – C

Messages that are part of a specific message group are processed in a strict orderly manner. The Message Group ID lets multiple consumers process messages in FIFO manner while keeping the session data of each trade initiated by users separate.

Option A is incorrect as the purpose of the Receive Request Attempt ID is to act as a deduplication token for receive-message calls; it will not satisfy the requirement of processing multiple messages from different trades independently.

Option B is incorrect as the SQS FIFO sequence number is a non-consecutive number added to each message; it will not satisfy the requirement of processing multiple messages from different trades independently.

Option D is incorrect as an SQS FIFO queue with a Deduplication ID will not satisfy the requirement of multiple messages being processed in parallel by multiple consumers with separate session data for each.

For more information on managing SQS FIFO queues, refer to the following URL:

https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagegroupid-property.html
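
A hedged sketch of the producer and consumer sides, where each trade gets its own message group; the queue URL, trade IDs, and attempt ID are hypothetical placeholders.

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/trades.fifo"

    # Tag each message with its trade ID so one trade's messages stay ordered.
    sqs.send_message(
        QueueUrl=queue_url,
        MessageBody='{"action": "BUY", "qty": 100}',
        MessageGroupId="trade-7431",
        MessageDeduplicationId="trade-7431-msg-1",
    )

    # Consumers receive messages group by group; a retried receive call can
    # safely reuse the same ReceiveRequestAttemptId.
    msgs = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        ReceiveRequestAttemptId="attempt-1",
    )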




Question 37 Incorrect

Domain :Specify Secure Applications and Architectures

Your IT Security department has mandated that all traffic flowing in and out of EC2 instances
needs to be monitored. Which of the below services can help achieve this?

A. Trusted Advisor
B. VPC Flow Logs
C. Use CloudWatch metrics
D. Use CloudTrail
Explanation:

Answer – B

AWS Documentation mentions the following:

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to
and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs.
After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

For more information on VPC Flow Logs, please visit the following URL:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html

Note:

The question asks to monitor all traffic flowing in and out of EC2 instances, and EC2 instances are launched inside a VPC. To monitor IP traffic at the network-interface level, we use VPC Flow Logs.

Coming to the question - why not CloudTrail? Before venturing into it, let's look at the types of log categories we have in AWS:
1. AWS Infrastructure Logs - AWS CloudTrail, Amazon VPC Flow Logs
2. AWS Service Logs - Amazon S3, AWS Elastic Load Balancing, Amazon CloudFront, AWS Lambda, AWS Elastic Beanstalk, etc.
3. Host Based Logs - Messages, Security, NGINX/Apache/IIS, Windows Event Logs, Windows Performance Counters, etc.

AWS CloudTrail is used for recording AWS API calls for your account, answering questions like:
- Who made the API call?
- When was the API call made?
- What was the API call?
- Which resources were acted upon in the API call?
- Where was the API call made from and made to?

Note:

AWS has launched a new feature called VPC Traffic Mirroring, which is used to capture and inspect
network traffic at scale.  To know more about this feature, please check the below link.

https://aws.amazon.com/blogs/aws/new-vpc-traffic-mirroring/
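
A hedged boto3 sketch of enabling flow logs for a whole VPC and delivering them to CloudWatch Logs; the VPC ID, log group, and IAM role ARN are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Capture accepted and rejected traffic for every ENI in the VPC.
    ec2.create_flow_logs(
        ResourceIds=["vpc-11111111"],
        ResourceType="VPC",
        TrafficType="ALL",
        LogGroupName="vpc-flow-logs",
        DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/flow-logs-role",
    )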



Question 38 Correct

Domain :Design Resilient Architectures

 
 
A company is currently utilising a Redshift cluster as their production warehouse. As a cloud architect, you are tasked to ensure that disaster recovery is in place. Which of the following options is best in addressing this issue?

A. Take a copy of the underlying EBS volumes to S3 and then do Cross-Region Replication.
B. Enable Cross-Region Snapshots for the Redshift Cluster.
C. Create a CloudFormation template to restore the Cluster in another region.
D. Enable Cross Availability Zone Snapshots for the Redshift Cluster.

Explanation:

Answer – B


Snapshots of a Redshift cluster can be copied automatically to a different region, making them available there for disaster recovery.

For more information on managing Redshift Snapshots, please visit the following URL:

https://docs.aws.amazon.com/redshift/latest/mgmt/managing-snapshots-console.html
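
As an illustration, a hedged boto3 sketch of turning on cross-region snapshot copy; the cluster identifier, destination region, and retention period are placeholders.

    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    # Automated snapshots of the cluster are copied to the DR region.
    redshift.enable_snapshot_copy(
        ClusterIdentifier="prod-warehouse",
        DestinationRegion="us-west-2",
        RetentionPeriod=7,  # days to keep the copied snapshots
    )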



Question 39 Incorrect

Domain :Design Resilient Architectures

You have an AWS RDS PostgreSQL database hosted in the Singapore region. You need to ensure
that a backup database is in place and the data is asynchronously copied. Which of the following
would help fulfill this requirement?


A. Enable Multi-AZ for the database
B. Enable Read Replicas for the database
C. Enable Asynchronous replication for the database
D. Enable manual backups for the database

Explanation:

Answer – B

AWS Documentation mentions the following:

Amazon RDS Read Replicas enable you to create one or more read-only copies of your database
instance within the same AWS Region or in a different AWS Region. Updates made to the source
database are then asynchronously copied to your Read Replicas. In addition to providing scalability
for read-heavy workloads, Read Replicas can be promoted to become a standalone database
instance when needed. 

For more information on Read Replicas, please visit the following URL:

https://aws.amazon.com/rds/details/read-replicas/

Note:
When you enable Multi-AZ for the database, you are enabling synchronous replication rather than the asynchronous replication mentioned in the question.

When you create a Read Replica, you first specify an existing DB instance as the source. Then
Amazon RDS takes a snapshot of the source instance and creates a read-only instance from the
snapshot. Amazon RDS then uses the asynchronous replication method for the DB engine to
update the Read Replica whenever there is a change to the source DB instance. 

You can use Read Replica promotion as a data recovery scheme if the source DB instance fails.

For more information please click the link given below:


https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html
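
A rough sketch of the Read Replica option: a hedged boto3 example that creates an asynchronously-updated read-only copy of a source instance. The instance identifiers and class are hypothetical.

    import boto3

    rds = boto3.client("rds", region_name="ap-southeast-1")

    # Updates to the source are copied to the replica asynchronously.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="pg-replica",
        SourceDBInstanceIdentifier="pg-primary",
        DBInstanceClass="db.m5.large",
    )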




Question 40 Correct

Domain :Design Cost-Optimized Architectures

Your current log analysis application takes more than four hours to generate a report of the top 10
users of your web application. You have been asked to implement a system that can report this
information in real time, ensure that the report is always up to date, and handle increases in the
number of requests to your web application. Choose the option that is cost-effective and can fulfill
the requirements.

A. Publish your data to CloudWatch Logs, and configure your application to Auto Scale to handle the load on demand.
B. Publish your log data to an Amazon S3 bucket. Use AWS CloudFormation to create an Auto Scaling group to scale your post-processing application, which is configured to pull down your log files stored in Amazon S3.
C. Post your log data to an Amazon Kinesis data stream, and subscribe your log-processing application so that it is configured to process your logging data.
D. Configure an Auto Scaling group to increase the size of your Amazon EMR cluster.

Explanation:

Answer – C

AWS Documentation mentions the below:

Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as application logs, website clickstreams, IoT telemetry data, and more into your databases, data lakes and data warehouses, or build your own real-time applications using this data. Amazon Kinesis enables you to process and analyze data as it arrives and respond in real-time instead of having to wait until all your data is collected before the processing can begin.

For more information on AWS Kinesis, please see the below link:

https://aws.amazon.com/kinesis/
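
A hedged boto3 sketch of the producer side: the web servers push each log event into a stream as it happens, and the subscribed log-processing application reads records from the stream. The stream name and record fields are hypothetical.

    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Each web request is published as a record the moment it occurs.
    kinesis.put_record(
        StreamName="weblogs",
        Data=json.dumps({"user": "u-42", "path": "/home"}).encode("utf-8"),
        PartitionKey="u-42",  # events for one user land on the same shard
    )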



Question 41 Incorrect

Domain :Design Resilient Architectures

You are working as an AWS Architect for a global financial firm. They provide daily consolidated reports to their clients for trades in stock markets. To process large amounts of data, they store daily trading transaction data in S3 buckets, which triggers an AWS Lambda function. This function submits a new AWS Batch job to a job queue. These queues use compute resources with EC2 On-Demand instances running the Amazon ECS-optimized AMI, with enough resources to complete the job. Due to large data volumes, there has been a requirement to increase the storage size & create a customised AMI.
You have been working on a new application on compute resources having a combination of EC2 On-Demand & EC2 Spot Instances with this customised AMI. While you are performing a trial of this new application, even though it has enough memory/CPU resources, your job is stuck in the RUNNABLE state. Which of the following can be used to get the job into the STARTING state?

A. Ensure that the awslogs log driver is configured on the compute resources, which will send log information to CloudWatch Logs.
B. AWS Batch does not support customised AMIs & requires the Amazon ECS-optimized AMI.
C. Check dependencies for the job which hold the job in the RUNNABLE state.
D. Use only On-Demand EC2 instances in the compute resource instead of a combination of On-Demand & Spot Instances.


Explanation:

Correct Answer – A

AWS Batch jobs send log information to CloudWatch Logs, which requires the awslogs log driver to be configured on compute resources that use a customised AMI. On the Amazon ECS-optimized AMI, this driver comes pre-configured.

Option B is incorrect as AWS Batch supports both customised AMIs as well as the Amazon ECS-optimized AMI; this is not a reason for a job being stuck in the RUNNABLE state.

Option C is incorrect as a job moves into the RUNNABLE state only after all dependencies are processed. If there are any unmet dependencies, the job stays in the PENDING state, not the RUNNABLE state.

Option D is incorrect as a compute resource can be an On-Demand Instance or a Spot Instance. If there are enough compute resources available to process the job, it will move into the STARTING state.

For more information on AWS Batch job states & troubleshooting a job stuck in the RUNNABLE state, refer to the following URLs:

https://docs.aws.amazon.com/batch/latest/userguide/job_states.html

https://docs.aws.amazon.com/batch/latest/userguide/troubleshooting.html#job_stuck_in_runnabl



Question 42 Correct

Domain :Define Operationally-Excellent Architectures

There is a requirement to load a lot of data from your on-premises network on to AWS S3. Which of
the below options can be used for this data transfer? Choose 2 answers from the options given
below.

A. Data Pipeline
B. Direct Connect
C. Snowball
D. AWS VPN

Explanation:

Answer – B and C

AWS documentation mentions the following about the above services:

With a Snowball, you can transfer hundreds of terabytes or petabytes of data between your on-
premises data centers and Amazon Simple Storage Service (Amazon S3). AWS Snowball uses
Snowball appliances and provides powerful interfaces that you can use to create jobs, transfer
data, and track the status of your jobs through to completion. By shipping your data in Snowballs,
you can transfer large amounts of data at a significantly faster rate than if you were transferring
that data over the Internet, saving you time and money.

AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard
1-gigabit or 10-gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router,
the other to an AWS Direct Connect router. With this connection in place, you can create virtual
interfaces directly to public AWS services (for example, to Amazon S3) or to Amazon VPC,
bypassing Internet service providers in your network path.

For more information on Direct Connect, please refer to the below URL:

http://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html

Option A is INCORRECT because AWS Data Pipeline is a web service that you can use to automate the movement and transformation of data. Here we are not transforming the data; we are just moving the data from on-premises to S3.

Option D is INCORRECT because AWS VPN is used just for connecting the on-premises network to AWS and not for moving the data.

For more information on AWS Snowball, please refer to the below URL:

http://docs.aws.amazon.com/snowball/latest/ug/whatissnowball.html



Question 43 Correct

Domain :Define Operationally-Excellent Architectures

Having created a Redshift cluster in AWS, you are trying to use SQL Client tools from an EC2
Instance, but aren't able to connect to the Redshift Cluster. What must you do to ensure that you
are able to connect to the Redshift Cluster from the EC2 Instance?

A. Install Redshift client tools on the EC2 Instance first.
B. Modify the VPC Security Groups.
C. Use the AWS CLI instead of the Redshift client tools.
D. Modify the NACL on the subnet.

Explanation:

Answer – B

AWS Documentation mentions the following:

By default, any cluster that you create is closed to everyone. IAM credentials only control access
to the Amazon Redshift API-related resources: the Amazon Redshift console, command line
interface (CLI), API, and SDK. To enable access to the cluster from SQL client tools via JDBC or
ODBC, you use security groups:

If you are using the EC2-Classic platform for your Amazon Redshift cluster, you must use
Amazon Redshift security groups.

If you are using the EC2-VPC platform for your Amazon Redshift cluster, you must use VPC
security groups. 

For more information on Amazon Redshift, please refer to the below URL:

http://docs.aws.amazon.com/redshift/latest/mgmt/overview.html



Question 44 Correct

Domain :Define Operationally-Excellent Architectures

You currently work for a company that specialises in baggage management. GPS devices installed on all the baggage deliver the coordinates of each unit every 10 seconds. You need to process these coordinates in real-time from multiple sources. Which tool should you use to process the data?

A. Amazon EMR
B. Amazon SQS
C. AWS Data Pipeline
D. Amazon Kinesis
Explanation:

Answer – D

The AWS Documentation mentions the following

Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you
can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities
to cost-effectively process streaming data at any scale, along with the flexibility to choose the
tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-
time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for
machine learning, analytics, and other applications. Amazon Kinesis enables you to process and
analyze data as it arrives and respond instantly instead of having to wait until all your data is
collected before the processing can begin.

For more information on Amazon Kinesis, please visit the link below.

https://aws.amazon.com/kinesis/



Question 45 Correct

Domain :Specify Secure Applications and Architectures

You are planning on hosting a web and MySQL database application in an AWS VPC. The database
should only be accessible by the web server. Which of the following would you change to fulfill
this requirement?

A. Network Access Control Lists
B. AWS RDS Parameter Groups
C. Route Tables
D. Security groups
Explanation:

Answer – D

The security group associated with the DB instance should allow port 3306 traffic from the web server's EC2 instances. The AWS Documentation additionally mentions the following:

A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
When you launch an instance in a VPC, you can assign up to five security groups to the instance.
Security groups act at the instance level, not the subnet level. Therefore, each instance in a subnet
in your VPC could be assigned to a different set of security groups. If you don't specify a particular
group at launch time, the instance is automatically assigned to the default security group for the
VPC.

For more information on VPC Security Groups, please visit the link below.

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_SecurityGroups.html
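
A hedged boto3 sketch of the ingress rule: the DB security group accepts MySQL traffic only from the web tier's security group, not from the internet. The group IDs are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allow MySQL (3306) into the DB security group only from the web tier.
    ec2.authorize_security_group_ingress(
        GroupId="sg-db111111",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "UserIdGroupPairs": [{"GroupId": "sg-web22222"}],
        }],
    )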



Question 46 Correct

Domain :Specify Secure Applications and Architectures

A company has a requirement for block level storage which would be able to store 800GB of data.
Also, encryption of the data is required. Which of the following can be used in this case?

A. AWS EBS Volumes
B. AWS S3
C. AWS Glacier
D. AWS EFS

Explanation:

Answer - A

For block level storage, consider EBS Volumes.

Options B and C are incorrect since they provide object level storage.

Option D is incorrect since this provides file level storage.

For more information on EBS Volumes, please visit the following URL:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html
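
As an illustration only, a hedged boto3 sketch of provisioning an 800 GiB encrypted EBS volume; the Availability Zone and volume type are assumed values.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Encrypted block-level volume; uses the default EBS CMK unless a
    # KmsKeyId is supplied.
    ec2.create_volume(
        AvailabilityZone="us-east-1a",
        Size=800,            # GiB
        VolumeType="gp2",
        Encrypted=True,
    )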



Question 47 Correct

Domain :Design Resilient Architectures

You are working as an AWS Architect for a media firm. They have large text files which need to be converted into audio files, and they use S3 buckets to store these text files.
AWS Batch is used to process these files along with Amazon Polly. For the compute environment you have a mix of EC2 On-Demand & Spot Instances. Critical jobs need to complete quickly, while non-critical jobs can be scheduled during non-peak hours. While using AWS Batch, management wants a cost-effective solution with no performance impact. Which of the following job queue configurations can be selected to meet this requirement?

A. Create a single job queue with the EC2 On-Demand instances having higher priority & the Spot Instances having lower priority.
B. Create multiple job queues, with one queue having EC2 On-Demand instances & higher priority, while another queue has Spot Instances & lower priority.
C. Create multiple job queues, with one queue having EC2 On-Demand instances & lower priority, while another queue has Spot Instances & higher priority.
D. Create a single job queue with the EC2 On-Demand instances having lower priority & the Spot Instances having higher priority.

Explanation:

Correct Answer – B

You can create multiple job queues with different priorities & map compute environments to each job queue. When job queues are mapped to the same compute environment, queues with higher priority are evaluated first.

Option A is incorrect as multiple queues need to be created, one per job type. In this requirement, critical jobs will be processed using the queue backed by On-Demand EC2 instances, while low-priority jobs will use the job queue backed by Spot Instances.

Option C is incorrect as the priority for a job queue is evaluated in descending order; the higher-priority job queue is preferred first.

Option D is incorrect as multiple queues need to be created, one per job type. Also, the higher-priority job queue is preferred first.

For more information on job queues in AWS Batch, refer to the following URL:

https://docs.aws.amazon.com/batch/latest/userguide/job_queue_parameters.html
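
A hedged boto3 sketch of the two-queue design; the queue names, priorities, and compute environment names are hypothetical placeholders.

    import boto3

    batch = boto3.client("batch", region_name="us-east-1")

    # Critical jobs: higher priority, On-Demand compute environment.
    batch.create_job_queue(
        jobQueueName="critical-queue",
        state="ENABLED",
        priority=100,  # evaluated first
        computeEnvironmentOrder=[{"order": 1, "computeEnvironment": "ondemand-ce"}],
    )

    # Non-critical jobs: lower priority, Spot compute environment.
    batch.create_job_queue(
        jobQueueName="offpeak-queue",
        state="ENABLED",
        priority=10,
        computeEnvironmentOrder=[{"order": 1, "computeEnvironment": "spot-ce"}],
    )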



Question 48 Correct

Domain :Define Performant Architectures

As the cloud administrator of your company, you notice that one of your EC2 instances is restarting frequently. There is a need to troubleshoot and analyse the system logs. What can be used in AWS to store and analyze the log files from the EC2 Instance? Choose one answer from the options below.

A. AWS SQS
B. AWS S3
C. AWS CloudTrail
D. AWS CloudWatch Logs
Explanation:

Answer – D

You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon
Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, and other sources.

For more information on CloudWatch Logs, please visit the following URL:

https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html

Note: The question is not about compliance, auditing, tracking malicious activity, or monitoring API calls in your account. If that had been the case, we would have used CloudTrail, as it provides information such as who made the request, when the request was made, what the request was, and what the response was.

In this question we need CloudWatch Logs to store and analyze logs from EC2 to find out why the instance is restarting frequently.



Question 49 Incorrect

Domain :Design Cost-Optimized Architectures

Your company has migrated their production environment into AWS VPC 6 months ago. As a cloud
architect, you are required to revise the infrastructure and ensure that it is cost-effective in the
long term. There are more than 50 EC2 instances that are up and running all the time to support
the business operation. What can you do to lower the cost?

A. Reserved instances
B. On-demand instances
C. Spot instances
D. Regular instances
Explanation:

Answer – A

When you have instances that will be used continuously and throughout the year, the best option
is to buy reserved instances. By buying reserved instances, you are actually allocated an instance
for the entire year or the duration you specify with a reduced cost.

To understand more on reserved instances, please visit the below URL:

https://aws.amazon.com/ec2/pricing/reserved-instances/



Question 50 Incorrect

Domain :Design Resilient Architectures

Your organization is building a collaboration platform for which they chose AWS EC2 for web and
application servers and MySQL RDS instance as the database. Due to the nature of the traffic to
the application, they would like to increase the number of connections to RDS instance. How can
this be achieved?

A. Login to the RDS instance and modify the database config file under /etc/mysql/my.cnf
B. Create a new parameter group, attach it to the DB instance and change the setting.
C. Create a new option group, attach it to the DB instance and change the setting.
D. Modify the setting in the default options group attached to the DB instance.
Explanation:

https://www.whizlabs.com/learn/course/quiz-result/531143 57/76
19/9/2019 Whizlabs Online Certification Training Courses for Professionals (AWS, Java, PMP)

Answer – B

You manage your DB engine configuration through the use of parameters in a DB parameter group. DB parameter groups act as a container for engine configuration values that are applied to one or more DB instances.

A default DB parameter group is created if you create a DB instance without specifying a customer-created DB parameter group. Each default DB parameter group contains database engine defaults and Amazon RDS system defaults based on the engine, compute class, and allocated storage of the instance. You cannot modify the parameter settings of a default DB parameter group; you must create your own DB parameter group to change parameter settings from their default value. Note that not all DB engine parameters can be changed in a customer-created DB parameter group.

If you want to use your own DB parameter group, you simply create a new DB parameter group, modify the desired parameters, and modify your DB instance to use the new DB parameter group. All DB instances that are associated with a particular DB parameter group get all parameter updates to that DB parameter group.

For more information:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.ht
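
A hedged boto3 sketch of the whole flow: create a parameter group, raise max_connections, and attach the group to the instance. The group name, family, value, and instance identifier are hypothetical.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_parameter_group(
        DBParameterGroupName="mysql-custom",
        DBParameterGroupFamily="mysql5.7",
        Description="Raise max_connections for the collaboration platform",
    )

    # max_connections is a dynamic parameter, so it can apply immediately.
    rds.modify_db_parameter_group(
        DBParameterGroupName="mysql-custom",
        Parameters=[{
            "ParameterName": "max_connections",
            "ParameterValue": "500",
            "ApplyMethod": "immediate",
        }],
    )

    rds.modify_db_instance(
        DBInstanceIdentifier="collab-db",
        DBParameterGroupName="mysql-custom",
    )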




Question 51 Incorrect

Domain :Specify Secure Applications and Architectures

You are working for a Pharma firm. You are using S3 buckets to save a large number of sensitive project documents for new medicine research. You need to ensure all data at rest in these buckets is encrypted. All the keys need to be managed by the in-house security team. Which of the following can be used as a best practice to encrypt all data securely?

A. Generate a data key using a customer managed CMK. Encrypt data with the data key & delete the data key. Store the encrypted data keys & data in S3 buckets. For decryption, use the CMK to decrypt the data key into plain text & then decrypt the data using the plain text data key.
B. Generate a data key using an AWS managed CMK. Encrypt data with the data key & delete the data key. Store the encrypted data & data keys in S3 buckets. For decryption, use the CMK to decrypt the data key into plain text & then decrypt the data using the plain text data key.
C. Generate a data key using a customer managed CMK. Encrypt data with the data key & do not delete the data key. Store the encrypted data & plain text data keys in S3 buckets. For decryption, use the plain text data key to decrypt the data.
D. Generate a data key using an AWS managed CMK. Encrypt data with the data key & do not delete the data key. Store the encrypted data & plain text data keys in S3 buckets. For decryption, use the plain text data key to decrypt the data.

Explanation:

Correct Answer - A

Since key management will be done by the in-house security team, customer managed CMKs need to be used. A customer managed CMK will generate a plain text data key & an encrypted data key. All project-related sensitive documents will be encrypted using the plain text data key. After encryption, the plain text data key needs to be deleted to avoid any inappropriate use, & the encrypted data key is stored in S3 buckets along with the encrypted data.

During decryption, the encrypted data key is decrypted using the customer CMK into a plain text key, which is further used to decrypt the documents. This envelope encryption ensures that data is protected by a data key, which is in turn protected by another key.


Option B is incorrect since all keys need to be managed by the in-house security team, so AWS managed CMKs cannot be used.

Option C is incorrect as it is not a best practice to save a data key in plain text format. All plain text data keys should be deleted & only encrypted data keys should be saved.

Option D is incorrect since all keys need to be managed by the in-house security team, so AWS managed CMKs cannot be used. Also, all plain text data keys should be deleted & only encrypted data keys should be saved.

For more information on AWS KMS, refer to the following URLs:

 https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html

 https://d0.awsstatic.com/whitepapers/aws-kms-best-practices.pdf
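
A hedged boto3 sketch of the envelope-encryption flow described above; the key alias is a placeholder, and the local encryption step (e.g. AES-GCM with a crypto library) is left as a comment.

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    # Ask the customer managed CMK for a fresh data key.
    dk = kms.generate_data_key(KeyId="alias/research-docs", KeySpec="AES_256")
    plaintext_key = dk["Plaintext"]       # use locally, never store
    encrypted_key = dk["CiphertextBlob"]  # store this next to the data in S3

    # ... encrypt the document locally with plaintext_key, then discard it.
    del plaintext_key

    # Decryption path: recover the plain text key from the stored blob.
    plaintext_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]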



Question 52 Correct

Domain :Design Resilient Architectures

A company is building a service using Amazon EC2 as a worker instance that will process an
uploaded audio file and generate a text file. You must store both of these files in the same durable
storage until the text file is retrieved. You do not know what the storage capacity requirements are.
Which storage option is both cost-efficient and scalable?

A. Multiple Amazon EBS Volumes with snapshots
B. A single Amazon Glacier vault
C. A single Amazon S3 bucket
D. Multiple instance stores

Explanation:

Answer – C

Amazon S3 is the best storage option for this. It is durable and highly available.

For more information on Amazon S3, please refer to the below URL:


https://aws.amazon.com/s3/



Question 53 Marked as review Unattempted

Domain :Specify Secure Applications and Architectures

You are working as an AWS developer for an online multiplayer game developing start-up
company. Elasticache with Redis is used for gaming leaderboards along with application servers,
to provide low latency and avoid stale data for these highly popular online games. Redis clusters
are deployed within a dedicated VPC in the us-east-1 region. Last week, due to configuration
changes in Redis Clusters, the gaming application was impacted for two hours. To avoid such
incidents, you have been requested to plan for secure access to all the new clusters. What would you prefer to use for secure access to the Redis clusters when accessing them from an EC2 instance launched in a different VPC in the us-east-1 region? (Select TWO)

A. Use Redis AUTH with in-transit encryption disabled for clusters.
B. Create a Transit VPC solution to have connectivity between the 2 VPCs.
C. Use Redis AUTH with in-transit encryption enabled for clusters.
D. Create an Amazon VPN connection between the 2 VPCs.
E. Use Redis AUTH with at-rest encryption enabled for clusters.
F. Create an Amazon VPC Peering connection between the 2 VPCs.
Explanation:

Correct Answer – C and F

Redis AUTH requires users to provide a password before accessing the Redis cluster, and it requires in-transit encryption to be enabled at the time the cluster is created. For accessing the Redis cluster from an EC2 instance in a different VPC in the same region, a VPC peering connection can be established between the 2 VPCs.

Option A is incorrect. For Redis AUTH, clusters must be enabled with in-transit encryption during initial deployment.

Option B is incorrect. A Transit VPC solution is suitable for accessing Redis clusters from EC2 instances in VPCs created in different regions. Since both VPCs in this requirement are in the same region, VPC peering is the correct option.

Option D is incorrect as VPN connections would be required for accessing the Redis cluster from on-prem servers.

Option E is incorrect. For Redis AUTH, clusters must be enabled with in-transit encryption during initial deployment, not at-rest encryption.

For more information on Authentication with Redis & Accessing Redis Clusters from a different
VPC, refer to the following URLs:

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/auth.html

https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/elasticache-vpc-accessing.html
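
A hedged boto3 sketch of creating a Redis replication group with AUTH and in-transit encryption enabled from the start; the group name, node type, and token are hypothetical placeholders.

    import boto3

    ec = boto3.client("elasticache", region_name="us-east-1")

    # AUTH requires TransitEncryptionEnabled=True at creation time.
    ec.create_replication_group(
        ReplicationGroupId="leaderboard",
        ReplicationGroupDescription="Gaming leaderboard cluster",
        Engine="redis",
        CacheNodeType="cache.r5.large",
        NumCacheClusters=2,
        TransitEncryptionEnabled=True,
        AuthToken="replace-with-a-long-random-secret",
    )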



Question 54 Correct

Domain :Define Performant Architectures

Your company is utilising CloudFront to distribute its media content to multiple regions. The
content is frequently accessed by users. As a cloud architect, which of the following options would
help you improve the performance of the system?

A. Change the origin location from an S3 bucket to an ELB.
B. Use a faster Internet connection.
C. Increase the cache expiration time.
D. Create an "invalidation" for all your objects, and recache them.

Explanation:

Answer – C

You can control how long your objects stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration allows you to serve dynamic content. Increasing the duration means your users get better performance because your objects are more likely to be served directly from the edge cache. A longer duration also reduces the load on your origin.

For more information on CloudFront cache expiration, please refer to the following link:

http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html



Question 55 Correct

Domain :Design Resilient Architectures

You have been instructed by your supervisor to devise a disaster recovery model for the resources
in their AWS account. The key requirement while devising the solution is to ensure that the cost is
at a minimum. Which of the following disaster recovery mechanisms would you employ in such a
scenario?

A. Backup and Restore
B. Pilot Light
C. Warm standby
D. Multi-Site

Explanation:

Answer – A

Since the cost needs to be at a minimum, the best option is to back up all the resources and then
perform a restore in the event of a disaster.

For more information on disaster recovery, please refer to the below link:

https://d1.awsstatic.com/whitepapers/aws-disaster-recovery.pdf




Question 56 Correct

Domain :Define Operationally-Excellent Architectures

An application consists of the following architecture:


a. EC2 Instances are in multiple AZ’s behind an ELB.
b. The EC2 Instances are launched via an Auto Scaling Group.
c. There is a NAT instance used so that instances can download updates from the internet.
Due to the high bandwidth being consumed by the NAT instance, it has been decided to use a NAT
Gateway. How should this be implemented?

A. Use NAT Instances along with the NAT Gateway.
B. Host the NAT instance in the private subnet.
C. Migrate the NAT Instance to a NAT Gateway and host the NAT Gateway in the public subnet.
D. Host the NAT gateway in the private subnet.

Explanation:

Answer – C

One can simply start using the NAT Gateway service and stop using the deployed NAT instances.
But you need to ensure that the NAT Gateway is deployed in the public subnet.

For more information on migrating to a NAT Gateway, please visit the following URL:

https://aws.amazon.com/premiumsupport/knowledge-center/migrate-nat-instance-gateway/
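
A hedged boto3 sketch of the migration step: create the NAT Gateway in a public subnet, then repoint the private route table's default route at it. The subnet, allocation, and route table IDs are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The NAT Gateway itself lives in a public subnet with an Elastic IP.
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-public1", AllocationId="eipalloc-0abc1234"
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Point the private subnets' default route at the gateway instead of
    # the old NAT instance.
    ec2.replace_route(
        RouteTableId="rtb-private1",
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )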



Question 57 Incorrect

Domain :Define Operationally-Excellent Architectures


A company has an application hosted in AWS. This application consists of EC2 Instances that sit behind an ELB. The following are requirements from an administrative perspective:
a) Must be able to collect and analyse logs with regard to the ELB's performance.
b) Ensure that notifications are sent when the latency goes beyond 10 seconds.
Which of the following can be used to achieve this requirement? Choose 2 answers from the options given below.

A. Use CloudWatch for monitoring.
B. Enable CloudWatch logs and then investigate the logs whenever there is an issue.
C. Enable the logs on the ELB with a Latency alarm that sends an email, and then investigate the logs whenever there is an issue.
D. Use CloudTrail to monitor whatever metrics need to be monitored.
Explanation:

Answer – A and C

When you use CloudWatch metrics for an ELB, you get metrics such as the request count and latency out of the box.

For more information on using CloudWatch with the ELB, please visit the following URL:

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-cloudwatch-metrics.html

Elastic Load Balancing provides access logs that capture detailed information about requests sent
to your load balancer. Each log contains information such as the time the request was received,
the client's IP address, latencies, request paths, and server responses. You can use these access
logs to analyze traffic patterns and to troubleshoot issues.

For more information on using ELB logs, please visit the following URL:

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html
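
A hedged boto3 sketch of the alarm half of the requirement: notify an SNS topic when average ELB latency exceeds 10 seconds. The load balancer name, topic ARN, and period are hypothetical placeholders.

    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")

    cw.put_metric_alarm(
        AlarmName="elb-high-latency",
        Namespace="AWS/ELB",
        MetricName="Latency",
        Dimensions=[{"Name": "LoadBalancerName", "Value": "my-elb"}],
        Statistic="Average",
        Period=60,
        EvaluationPeriods=1,
        Threshold=10.0,  # seconds
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )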




Question 58 Incorrect

Domain :Define Operationally-Excellent Architectures

Your company would like to leverage the AWS storage option and integrate it with the current on-
premises infrastructure. Additionally, due to business requirements, low latency access to all the
data is a must. Which of the following options would be best suited for this scenario?

A. Configure the Simple Storage Service.
B. Configure Storage Gateway Cached Volume.
C. Configure Storage Gateway Stored Volume.
D. Configure Amazon Glacier.

Explanation:

Answer – C

AWS Documentation mentions the following:

If you need low-latency access to your entire dataset, first configure your on-premises gateway to
store all your data locally. Then asynchronously back up point-in-time snapshots of this data to
Amazon S3. This configuration provides durable and inexpensive offsite backups that you can
recover to your local data center or Amazon EC2. For example, if you need replacement capacity
for disaster recovery, you can recover the backups to Amazon EC2.

For more information on the Storage Gateway, please visit the following URL:

https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html

Note:

Cached volumes – You store your data in Amazon Simple Storage Service (Amazon S3) and retain
a copy of frequently accessed data subsets locally. Cached volumes offer substantial cost savings
on primary storage and minimize the need to scale your storage on-premises. You also retain low-
latency access to your frequently accessed data.

Stored volumes – If you need low-latency access to your entire dataset, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2. For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2.

The primary function of cached volumes is to reduce cost while keeping low-latency access to frequently or recently accessed data. However, in this question the requirement is to maintain low-latency access to all the data, so Stored volumes are the correct answer.

Please check the below link to know more about it.

https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html



Question 59 Correct

Domain :Define Operationally-Excellent Architectures

An IT company has a set of EC2 Instances hosted in a VPC. They are hosted in a private subnet.
These instances now need to access resources stored in an S3 bucket. The traffic should not
traverse the internet. The addition of which of the following would help fulfill this requirement?

A. VPC Endpoint
B. NAT Instance
C. NAT Gateway
D. Internet Gateway

Explanation:

Answer - A

A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC
endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN
connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP
addresses to communicate with resources in the service. Traffic between your VPC and the other
service does not leave the Amazon network.

For more information on AWS VPC endpoints, please visit the following URL:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html
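
A hedged boto3 sketch of adding a gateway endpoint for S3 so the private instances reach the bucket without traversing the internet; the VPC ID and route table ID are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Gateway endpoint for S3; traffic stays on the Amazon network.
    ec2.create_vpc_endpoint(
        VpcId="vpc-11111111",
        ServiceName="com.amazonaws.us-east-1.s3",
        VpcEndpointType="Gateway",
        RouteTableIds=["rtb-private1"],
    )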



Question 60 Correct

Domain :Design Resilient Architectures

You need to host a set of web servers and database servers in an AWS VPC. Which of the following
is a best practice in designing a multi-tier infrastructure?

A. Use a public subnet for the web tier and a public subnet for the database layer.
B. Use a public subnet for the web tier and a private subnet for the database layer.
C. Use a private subnet for the web tier and a private subnet for the database layer.
D. Use a private subnet for the web tier and a public subnet for the database layer.

Explanation:

Answer – B

The ideal setup is to ensure that the web server is hosted in the public subnet so that it can be
accessed by users on the internet. The database server can be hosted in the private subnet.

The AWS documentation linked below shows how this can be set up.


For more information on public and private subnets in AWS, please visit the following URL:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html




Question 61 Correct

Domain :Specify Secure Applications and Architectures

An IT company would like to secure their resources in their AWS Account. Which of the following
options is able to secure data at rest and in transit in AWS? Choose 3 answers from the options
given below.

A. Encrypt all EBS volumes attached to EC2 Instances.
B. Use Server-Side Encryption for S3.
C. Use SSL/HTTPS when using the Elastic Load Balancer.
D. Use IOPS Volumes when working with EBS Volumes on EC2 Instances.

Explanation:

Answer – A, B and C

AWS documentation mentions the following:

Amazon EBS encryption offers you a simple encryption solution for your EBS volumes without the
need for you to build, maintain, and secure your own key management infrastructure. When you
create an encrypted EBS volume and attach it to a supported instance type, the following types of
data are encrypted:

Data at rest inside the volume

All data moving between the volume and the instance

All snapshots created from the volume 


Data protection refers to protecting data while in-transit (as it travels to and from Amazon S3) and
at rest (while it is stored on disks in Amazon S3 data centers). You can protect data in transit by
using SSL or by using client-side encryption. You have the following options of protecting data at
rest in Amazon S3.

Use Server-Side Encryption – You request Amazon S3 to encrypt your object before saving it on
disks in its data centers and decrypt it when you download the objects.

Use Client-Side Encryption – You can encrypt data client-side and upload the encrypted data to
Amazon S3. In this case, you manage the encryption process, the encryption keys, and related
tools. 

You can create a load balancer that uses the SSL/TLS protocol for encrypted connections (also
known as SSL offload). This feature enables traffic encryption between your load balancer and the
clients that initiate HTTPS sessions, and for connections between your load balancer and your EC2
instances.
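
To make options A and B concrete, here is a minimal boto3 sketch; the bucket name and Availability Zone are placeholders I have assumed for illustration.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3")

# Option A: an EBS volume encrypted at rest (default KMS key here).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    Size=20,                        # size in GiB
    Encrypted=True,
)

# Option B: server-side encryption for an S3 object (SSE-S3 / AES-256).
s3.put_object(
    Bucket="my-example-bucket",     # placeholder bucket
    Key="report.csv",
    Body=b"sample,data\n",
    ServerSideEncryption="AES256",
)

For option C, in-transit encryption is configured on the load balancer itself by adding an HTTPS/SSL listener with a server certificate, rather than through code on the instances.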

For more information on securing data at rest, please refer to the below link:

https://d0.awsstatic.com/whitepapers/aws-securing-data-at-rest-with-encryption.pdf



Question 62 Incorrect

Domain :Design Cost-Optimized Architectures

Your company currently has a set of EC2 Instances running a web application which sits behind an
Elastic Load Balancer. You also have an Amazon RDS instance which is accessible from the web
application. You have been asked to ensure that this architecture is self-healing in nature. Which of
the following would fulfill this requirement? Choose 2 answers from the options given below.

z A. Use CloudWatch metrics to check the utilization of the web layer. Use Auto Scaling Group to scale the web instances accordingly based on the CloudWatch metrics.
A


B. Use CloudWatch metrics to check the utilization of the database servers. Use Auto Scaling Group to scale the database instances accordingly based on the CloudWatch metrics.

z C. Utilize the Read Replica feature for the Amazon RDS layer.
B
D. Utilize the Multi-AZ feature for the Amazon RDS layer.
A
Explanation:

Answer - A and D

AWS reference architectures for self-healing workloads show a set of EC2 web servers being launched, and replaced on failure, by an Auto Scaling Group.

AWS Documentation mentions the following:

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB)
Instances, making them a natural fit for production database workloads. When you provision a
Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and
synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each
AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly
reliable. In case of an infrastructure failure, Amazon RDS performs an automatic failover to the
standby (or to a read replica in the case of Amazon Aurora), so that you can resume database
operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the
same after a failover, your application can resume database operation without the need for manual
administrative intervention.
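
As a hedged sketch of options A and D (the Auto Scaling group name, DB identifier, and target value are assumptions for illustration), the two self-healing pieces might be wired up with boto3 like this:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

# Option A: a target-tracking policy keeps the web tier's average CPU
# near 50%; the Auto Scaling group also replaces unhealthy instances.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",      # placeholder ASG name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)

# Option D: enabling Multi-AZ on the RDS instance adds a synchronous
# standby in another AZ that RDS fails over to automatically.
rds.modify_db_instance(
    DBInstanceIdentifier="app-db",       # placeholder instance identifier
    MultiAZ=True,
    ApplyImmediately=True,
)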

For more information on Multi-AZ RDS, please refer to the below link:

https://aws.amazon.com/rds/details/multi-az/



Question 63 Correct

Domain :Specify Secure Applications and Architectures

 
Your company has a set of EC2 Instances that access data objects stored in an S3 bucket. Your IT
Security department is concerned about the security of this architecture and wants you to
implement the following:
1) Ensure that the EC2 Instance securely accesses the data objects stored in the S3 bucket
2) Prevent accidental deletion of objects
Which of the following would help fulfill the requirements of the IT Security department? Choose 2
answers from the options given below.

A. Create an IAM user and ensure the EC2 Instances use the IAM user credentials to access the data in the bucket.

z] B. Create an IAM Role and ensure the EC2 Instances use the IAM Role to access the data in the bucket. A

C. Use S3 Cross-Region Replication to replicate the objects so that the integrity of data is maintained.

z] D. Use an S3 bucket policy that ensures that MFA Delete is set on the objects in the bucket. A
Explanation:

Answer - B and D

AWS Documentation mentions the following:


IAM roles are designed so that your applications can securely make API requests from your
instances, without requiring you to manage the security credentials that the applications use.
Instead of creating and distributing your AWS credentials, you can delegate permission to make
API requests using IAM roles.
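
As a small sketch (the instance ID, profile name, bucket, and key are hypothetical), an existing instance can be given a role by associating an instance profile; code running on the instance then picks up the role's temporary credentials automatically:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Associate an instance profile (which wraps the IAM role) with a
# running instance; the names and IDs here are placeholders.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "s3-read-role-profile"},
    InstanceId="i-0123456789abcdef0",
)

# On the instance itself, boto3 automatically obtains the role's
# temporary credentials from the instance metadata service,
# so no access keys ever need to be stored on disk:
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-example-bucket", Key="data.json")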

For more information on IAM Roles, please refer to the below link:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html

MFA Delete can be used to add another layer of security to S3 Objects to prevent accidental
deletion of objects.
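
MFA Delete is configured through the bucket's versioning configuration, and the request must carry a code from the root account's MFA device. A minimal sketch follows; the bucket name, device ARN, and token are placeholders.

import boto3

s3 = boto3.client("s3")

# Enabling MFA Delete requires versioning to be enabled and the root
# user's MFA device. The MFA argument is "<device-serial> <token>";
# both values below are placeholders.
s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
)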

For more information on MFA Delete, please refer to the below link:

https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/



Question 64 Correct

Domain :Define Operationally-Excellent Architectures

You have a requirement to get a snapshot of the current configuration of resources in your AWS
Account. Which of the following services can be used for this purpose?

] A. AWS CodeDeploy

] B. AWS Trusted Advisor

z] C. AWS Config
A
] D. AWS IAM

Explanation:

Answer - C

AWS Documentation mentions the following:

With AWS Config, you can do the following:

Evaluate your AWS resource configurations for desired settings.

Get a snapshot of the current configurations of the supported resources that are associated with
your AWS account.

Retrieve configurations of one or more resources that exist in your account.

Retrieve historical configurations of one or more resources.

Receive a notification whenever a resource is created, modified, or deleted.

View relationships between resources. For example, you might want to find all resources that
use a particular security group.
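
As a hedged sketch of the snapshot capability named in the question, a point-in-time snapshot can be requested through the AWS Config API once a delivery channel exists; the channel name below is an assumption (accounts set up through the console typically get one named "default").

import boto3

config = boto3.client("config", region_name="us-east-1")

# Request a point-in-time snapshot of the recorded resource
# configurations; AWS Config delivers it to the channel's S3 bucket.
response = config.deliver_config_snapshot(deliveryChannelName="default")
print(response["configSnapshotId"])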

For more information on AWS Config, please visit the below URL:

http://docs.aws.amazon.com/config/latest/developerguide/WhatIsConfig.html



Question 65 Correct

Domain :Define Performant Architectures

Your company is hosting an application in AWS. The application is read intensive and consists of a
set of web servers and AWS RDS. It has been noticed that the response time of the application
increases due to the load on the AWS RDS instance. Which of the following measures can be taken
to scale the data tier? Choose 2 answers from the options given below.

z A. Create Amazon DB Read Replicas. Configure the application layer to query the Read Replicas for query needs. A
B. Use Auto Scaling to scale out and scale in the database tier.

C. Use SQS to cache the database queries.

z D. Use ElastiCache in front of your Amazon RDS DB to cache common queries.
A
Explanation:

Answer - A and D

AWS documentation mentions the following:

Amazon RDS Read Replicas provide enhanced performance and durability for database (DB)
instances. This replication feature makes it easy to elastically scale out beyond the capacity
constraints of a single DB Instance for read-heavy database workloads. You can create one or

