SAA-C03 | Top Tips Of Most Up-to-date SAA-C03 Samples

Exam Code: SAA-C03 (Practice Exam Latest Test Questions VCE PDF)
Exam Name: AWS Certified Solutions Architect - Associate (SAA-C03)
Certification Provider: Amazon-Web-Services
Free Today! Guaranteed Training- Pass SAA-C03 Exam.

Online SAA-C03 free questions and answers of New Version:

NEW QUESTION 1
A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has 2 TB of General Purpose SSD storage. There are millions of updates against this data every day through the company's website.
The company has noticed that some insert operations are taking 10 seconds or longer. The company has determined that the database storage performance is the problem.
Which solution addresses this performance issue?

  • A. Change the storage type to Provisioned IOPS SSD.
  • B. Change the DB instance to a memory optimized instance class.
  • C. Change the DB instance to a burstable performance instance class.
  • D. Enable Multi-AZ RDS read replicas with MySQL native asynchronous replication.

Answer: A

Explanation:
https://aws.amazon.com/ebs/features/
"Provisioned IOPS volumes are backed by solid-state drives (SSDs) and are the highest performance EBS volumes designed for your critical, I/O intensive database applications. These volumes are ideal for both IOPS-intensive and throughput-intensive workloads that require extremely low latency."
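When switching to Provisioned IOPS, the IOPS you can request is tied to the allocated storage size. The sketch below assumes the limits commonly documented for RDS for MySQL io1 volumes (1,000-80,000 IOPS, at most 50 IOPS per GiB); verify the current numbers for your engine and Region before relying on them:

```python
# Sketch: sanity-check a Provisioned IOPS (io1) request for RDS for MySQL.
# ASSUMPTION: the limits below (1,000-80,000 IOPS, at most 50 IOPS per GiB
# of storage) are taken from AWS docs and may differ by engine or Region.
IO1_MIN_IOPS = 1_000
IO1_MAX_IOPS = 80_000
MAX_IOPS_PER_GIB = 50

def validate_io1_request(storage_gib: int, iops: int) -> bool:
    """Return True if the requested IOPS is allowed for this storage size."""
    if not IO1_MIN_IOPS <= iops <= IO1_MAX_IOPS:
        return False
    return iops <= storage_gib * MAX_IOPS_PER_GIB

# The scenario's 2 TB (~2,048 GiB) volume leaves plenty of headroom:
print(validate_io1_request(2_048, 40_000))  # True
print(validate_io1_request(100, 10_000))    # False: exceeds 50 IOPS/GiB
```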

NEW QUESTION 2
A company uses Amazon EC2 instances to host its internal systems. As part of a deployment operation, an administrator tries to use the AWS CLI to terminate an EC2 instance. However, the administrator receives a 403 (Access Denied) error message.
The administrator is using an IAM role that has the following IAM policy attached:
SAA-C03 dumps exhibit
What is the cause of the unsuccessful request?

  • A. The EC2 instance has a resource-based policy with a Deny statement.
  • B. The principal has not been specified in the policy statement.
  • C. The "Action" field does not grant the actions that are required to terminate the EC2 instance.
  • D. The request to terminate the EC2 instance does not originate from the CIDR blocks 192.0.2.0/24 or 203.0.113.0/24.

Answer: C

NEW QUESTION 3
A company has two VPCs named Management and Production. The Management VPC uses VPNs through a customer gateway to connect to a single device in the data center. The Production VPC uses a virtual private gateway with two attached AWS Direct Connect connections. The Management and Production VPCs both use a single VPC peering connection to allow communication between the applications.
What should a solutions architect do to mitigate any single point of failure in this architecture?

  • A. Add a set of VPNs between the Management and Production VPCs.
  • B. Add a second virtual private gateway and attach it to the Management VPC.
  • C. Add a second set of VPNs to the Management VPC from a second customer gateway device.
  • D. Add a second VPC peering connection between the Management VPC and the Production VPC.

Answer: C

Explanation:
https://docs.aws.amazon.com/vpn/latest/s2svpn/images/Multiple_Gateways_diagram.png
"To protect against a loss of connectivity in case your customer gateway device becomes unavailable, you can set up a second Site-to-Site VPN connection to your VPC and virtual private gateway by using a second customer gateway device." https://docs.aws.amazon.com/vpn/latest/s2svpn/vpn-redundant-connection.html

NEW QUESTION 4
A company is developing an application that provides order shipping statistics for retrieval by a REST API. The company wants to extract the shipping statistics, organize the data into an easy-to-read HTML format, and send the report to several email addresses at the same time every morning.
Which combination of steps should a solutions architect take to meet these requirements? (Choose two.)

  • A. Configure the application to send the data to Amazon Kinesis Data Firehose.
  • B. Use Amazon Simple Email Service (Amazon SES) to format the data and to send the report by email.
  • C. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Glue job to query the application's API for the data.
  • D. Create an Amazon EventBridge (Amazon CloudWatch Events) scheduled event that invokes an AWS Lambda function to query the application's API for the data.
  • E. Store the application data in Amazon S3. Create an Amazon Simple Notification Service (Amazon SNS) topic as an S3 event destination to send the report by email.

Answer: DE

NEW QUESTION 5
A solutions architect is designing the cloud architecture for a new application being deployed on AWS. The process should run in parallel while adding and removing application nodes as needed based on the number of jobs to be processed. The processor application is stateless. The solutions architect must ensure that the application is loosely coupled and the job items are durably stored.
Which design should the solutions architect use?

  • A. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on CPU usage.
  • B. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch configuration that uses the AMI. Create an Auto Scaling group using the launch configuration. Set the scaling policy for the Auto Scaling group to add and remove nodes based on network usage.
  • C. Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue.
  • D. Create an Amazon SNS topic to send the jobs that need to be processed. Create an Amazon Machine Image (AMI) that consists of the processor application. Create a launch template that uses the AMI. Create an Auto Scaling group using the launch template. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of messages published to the SNS topic.

Answer: C

Explanation:
"Create an Amazon SQS queue to hold the jobs that need to be processed. Create an Amazon EC2 Auto Scaling group for the compute application. Set the scaling policy for the Auto Scaling group to add and remove nodes based on the number of items in the SQS queue."
In this case we need a durable and loosely coupled solution for storing jobs. Amazon SQS is ideal for this use case and can be configured to use dynamic scaling based on the number of jobs waiting in the queue. To configure this scaling, you can use the backlog-per-instance metric with the target value being the acceptable backlog per instance to maintain. To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue.
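The backlog-per-instance arithmetic above can be sketched as follows; the target of 100 messages per instance is an arbitrary example value, not an AWS default:

```python
import math

def backlog_per_instance(approx_messages: int, instances: int) -> float:
    """ApproximateNumberOfMessages divided by the number of running instances."""
    return approx_messages / max(instances, 1)

def desired_capacity(approx_messages: int, target_per_instance: int) -> int:
    """Instances needed so each carries at most the acceptable backlog."""
    return math.ceil(approx_messages / target_per_instance)

# 1,500 queued jobs across 4 instances -> a backlog of 375 jobs each.
print(backlog_per_instance(1_500, 4))  # 375.0
# With an acceptable backlog of 100 per instance, scale out to 15 instances.
print(desired_capacity(1_500, 100))    # 15
```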

NEW QUESTION 6
A solutions architect is designing the architecture of a new application being deployed to the AWS Cloud. The application will run on Amazon EC2 On-Demand Instances and will automatically scale across multiple Availability Zones. The EC2 instances will scale up and down frequently throughout the day. An Application Load Balancer (ALB) will handle the load distribution. The architecture needs to support distributed session data management. The company is willing to make changes to code if needed.
What should the solutions architect do to ensure that the architecture supports distributed session data management?

  • A. Use Amazon ElastiCache to manage and store session data.
  • B. Use session affinity (sticky sessions) of the ALB to manage session data.
  • C. Use Session Manager from AWS Systems Manager to manage the session.
  • D. Use the GetSessionToken API operation in AWS Security Token Service (AWS STS) to manage the session.

Answer: A

Explanation:
https://aws.amazon.com/vi/caching/session-management/
In order to address scalability and to provide a shared data store for sessions that is accessible from any individual web server, you can abstract the HTTP sessions from the web servers themselves. A common solution for this is to leverage an in-memory key/value store such as Redis or Memcached. ElastiCache offerings for in-memory key/value stores include ElastiCache for Redis, which can support replication, and ElastiCache for Memcached, which does not support replication.
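The externalized-session pattern can be sketched as below. A plain dict stands in for an ElastiCache (Redis or Memcached) client, since the pattern only needs set/get-by-key semantics; a real deployment would use a client library such as redis-py instead:

```python
import time
import uuid

class ExternalSessionStore:
    """Key/value session store; a plain dict stands in for ElastiCache."""

    def __init__(self, ttl_seconds: int = 1800):
        self._store = {}   # swap for a Redis or Memcached client in production
        self._ttl = ttl_seconds

    def create_session(self, user_id: str) -> str:
        session_id = str(uuid.uuid4())
        self._store[session_id] = {
            "user_id": user_id,
            "expires_at": time.time() + self._ttl,
        }
        return session_id

    def get_session(self, session_id: str):
        session = self._store.get(session_id)
        if session is None or session["expires_at"] < time.time():
            return None  # unknown or expired session
        return session

# Any web server behind the ALB can resolve the session by its ID,
# so no sticky sessions are required.
store = ExternalSessionStore()
sid = store.create_session("user-42")
print(store.get_session(sid)["user_id"])  # user-42
```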

NEW QUESTION 7
A gaming company is moving its public scoreboard from a data center to the AWS Cloud. The company uses Amazon EC2 Windows Server instances behind an Application Load Balancer to host its dynamic application. The company needs a highly available storage solution for the application. The application consists of static files and dynamic server-side code.
Which combination of steps should a solutions architect take to meet these requirements? (Select TWO.)

  • A. Store the static files on Amazon S3. Use Amazon CloudFront to cache objects at the edge.
  • B. Store the static files on Amazon S3. Use Amazon ElastiCache to cache objects at the edge.
  • C. Store the server-side code on Amazon Elastic File System (Amazon EFS). Mount the EFS volume on each EC2 instance to share the files.
  • D. Store the server-side code on Amazon FSx for Windows File Server. Mount the FSx for Windows File Server volume on each EC2 instance to share the files.
  • E. Store the server-side code on a General Purpose SSD (gp2) Amazon Elastic Block Store (Amazon EBS) volume. Mount the EBS volume on each EC2 instance to share the files.

Answer: AD

NEW QUESTION 8
A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for the DNS. The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks.
Which solution meets these requirements?

  • A. Enable Amazon GuardDuty on the account.
  • B. Enable Amazon Inspector on the EC2 instances.
  • C. Enable AWS Shield and assign Amazon Route 53 to it.
  • D. Enable AWS Shield Advanced and assign the ELB to it.

Answer: D

NEW QUESTION 9
An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.
The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead.
Which solution will meet these requirements?

  • A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.
  • B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access.
  • C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls to limit access.
  • D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access.

Answer: C

NEW QUESTION 10
A company runs multiple Windows workloads on AWS. The company's employees use Windows file shares that are hosted on two Amazon EC2 instances. The file shares synchronize data between themselves and maintain duplicate copies. The company wants a highly available and durable storage solution that preserves how users currently access the files.
Which solution will meet these requirements?

  • A. Migrate all the data to Amazon S3. Set up IAM authentication for users to access files.
  • B. Set up an Amazon S3 File Gateway. Mount the S3 File Gateway on the existing EC2 instances.
  • C. Extend the file share environment to Amazon FSx for Windows File Server with a Multi-AZ configuration. Migrate all the data to FSx for Windows File Server.
  • D. Extend the file share environment to Amazon Elastic File System (Amazon EFS) with a Multi-AZ configuration. Migrate all the data to Amazon EFS.

Answer: C

NEW QUESTION 11
A company has chosen to rehost its application on Amazon EC2 instances. The application occasionally experiences errors that affect parts of its functionality. The company was unaware of this issue until users reported the errors. The company wants to address this problem during the migration and reduce the time it takes to detect issues with the application. Log files for the application are stored on the local disk.
A solutions architect needs to design a solution that will alert staff if there are errors in the application after the application is migrated to AWS. The solution must not require additional changes to the application code.
What is the MOST operationally efficient solution that meets these requirements?

  • A. Configure the application to generate custom metrics for the errors. Send these metric data points to Amazon CloudWatch by using the PutMetricData API call. Create a CloudWatch alarm that is based on the custom metrics.
  • B. Create an hourly cron job on the instances to copy the application log data to an Amazon S3 bucket. Configure an AWS Lambda function to scan the log file and publish a message to an Amazon Simple Notification Service (Amazon SNS) topic to alert staff if errors are detected.
  • C. Install the Amazon CloudWatch agent on the instances. Configure the CloudWatch agent to stream the application log file to Amazon CloudWatch Logs. Run a CloudWatch Logs Insights query to search for the relevant pattern in the log file. Create a CloudWatch alarm that is based on the query output.
  • D. Install the Amazon CloudWatch agent on the instances. Configure the CloudWatch agent to stream the application log file to Amazon CloudWatch Logs. Create a metric filter for the relevant log group. Define the filter pattern that is required to determine that there are errors in the application. Create a CloudWatch alarm that is based on the resulting metric.

Answer: D
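The metric-filter approach (stream the application log to CloudWatch Logs, create a metric filter, and alarm on the resulting metric) can be approximated locally. The `ERROR` token below is an assumed example pattern, and real CloudWatch metric filters use their own filter-pattern syntax rather than Python regexes:

```python
import re

# Assumed example pattern; CloudWatch Logs metric filters use their own
# filter-pattern syntax, not Python regular expressions.
ERROR_PATTERN = re.compile(r"\bERROR\b")

def count_error_events(log_lines):
    """Emulate a metric filter: count log lines that match the pattern."""
    return sum(1 for line in log_lines if ERROR_PATTERN.search(line))

logs = [
    "2024-01-01T10:00:00 INFO request handled",
    "2024-01-01T10:00:01 ERROR payment service timeout",
    "2024-01-01T10:00:02 ERROR retry failed",
]
# A CloudWatch alarm would fire when this count crosses a threshold.
print(count_error_events(logs))  # 2
```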

NEW QUESTION 12
A company is running a critical business application on Amazon EC2 instances behind an Application Load Balancer. The EC2 instances run in an Auto Scaling group and access an Amazon RDS DB instance.
The design did not pass an operational review because the EC2 instances and the DB instance are all located in a single Availability Zone. A solutions architect must update the design to use a second Availability Zone.
Which solution will make the application highly available?

  • A. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
  • B. Provision two subnets that extend across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance with connections to each network.
  • C. Provision a subnet in each Availability Zone. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.
  • D. Provision a subnet that extends across both Availability Zones. Configure the Auto Scaling group to distribute the EC2 instances across both Availability Zones. Configure the DB instance for Multi-AZ deployment.

Answer: C

NEW QUESTION 13
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that, each time they refreshed the website, they could see one subset of their documents or the other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?

  • A. Copy the data so both EBS volumes contain all the documents.
  • B. Configure the Application Load Balancer to direct a user to the server with the documents.
  • C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
  • D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

Answer: C

Explanation:
Amazon EFS provides file storage in the AWS Cloud. With Amazon EFS, you can create a file system, mount the file system on an Amazon EC2 instance, and then read and write data to and from your file system. You can mount an Amazon EFS file system in your VPC through the Network File System versions 4.0 and 4.1 (NFSv4) protocol. We recommend using a current generation Linux NFSv4.1 client, such as those found in the latest Amazon Linux, Red Hat, and Ubuntu AMIs, in conjunction with the Amazon EFS mount helper. For instructions, see Using the amazon-efs-utils Tools.
For a list of Amazon EC2 Linux Amazon Machine Images (AMIs) that support this protocol, see NFS Support. For some AMIs, you'll need to install an NFS client to mount your file system on your Amazon EC2 instance. For instructions, see Installing the NFS Client.
You can access your Amazon EFS file system concurrently from multiple NFS clients, so applications that scale beyond a single connection can access a file system. Amazon EC2 instances running in multiple Availability Zones within the same AWS Region can access the file system, so that many users can access and share a common data source.
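As a configuration sketch, mounting the shared file system on an instance typically looks like the following; the file system ID `fs-12345678`, the Region in the DNS name, and the mount point are placeholders, and the mount helper requires the `amazon-efs-utils` package to be installed first:

```shell
# Placeholder values: replace fs-12345678, us-east-1, and /mnt/efs.
sudo mkdir -p /mnt/efs

# Option 1: the EFS mount helper from the amazon-efs-utils package.
sudo mount -t efs fs-12345678:/ /mnt/efs

# Option 2: a plain NFSv4.1 client.
# sudo mount -t nfs4 -o nfsvers=4.1 \
#     fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```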

NEW QUESTION 14
A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.
What should a solutions architect do to accomplish this goal?

  • A. Use AWS Secrets Manager. Turn on automatic rotation.
  • B. Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
  • C. Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
  • D. Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.

Answer: A

NEW QUESTION 15
A company has a business system that generates hundreds of reports each day. The business system saves the reports to a network share in CSV format. The company needs to store this data in the AWS Cloud in near-real time for analysis.
Which solution will meet these requirements with the LEAST administrative overhead?

  • A. Use AWS DataSync to transfer the files to Amazon S3. Create a scheduled task that runs at the end of each day.
  • B. Create an Amazon S3 File Gateway. Update the business system to use a new network share from the S3 File Gateway.
  • C. Use AWS DataSync to transfer the files to Amazon S3. Create an application that uses the DataSync API in the automation workflow.
  • D. Deploy an AWS Transfer for SFTP endpoint. Create a script that checks for new files on the network share and uploads the new files by using SFTP.

Answer: B

NEW QUESTION 16
A company collects temperature, humidity, and atmospheric pressure data in cities across multiple continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed internet connection. The company's weather forecasting applications are based in a single Region and analyze the data daily.
What is the FASTEST way to aggregate data from all of these global sites?

  • A. Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket.
  • B. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 Cross-Region Replication to copy objects to the destination bucket.
  • C. Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 Cross-Region Replication to copy objects to the destination bucket.
  • D. Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Once a day, take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.

Answer: A

Explanation:
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
You have customers that upload to a centralized bucket from all over the world.
You transfer gigabytes to terabytes of data on a regular basis across continents.
You are unable to utilize all of your available bandwidth over the internet when uploading to Amazon S3.
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
https://aws.amazon.com/s3/transferacceleration/
"Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the Internet."
https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html
"Improved throughput - You can upload parts in parallel to improve throughput."
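The parallelism that multipart upload provides can be made concrete with a little arithmetic. The sketch below uses two S3 limits (a 5 MiB minimum part size, except for the last part, and a 10,000-part maximum per upload); the 100 MiB part size is an arbitrary example choice:

```python
import math

MIN_PART_SIZE = 5 * 1024**2   # 5 MiB minimum part size (except the last part)
MAX_PARTS = 10_000            # S3 allows at most 10,000 parts per upload

def part_count(object_size: int, part_size: int) -> int:
    """Number of parts needed to upload an object at a given part size."""
    if part_size < MIN_PART_SIZE:
        raise ValueError("part size below the 5 MiB S3 minimum")
    parts = math.ceil(object_size / part_size)
    if parts > MAX_PARTS:
        raise ValueError("too many parts; increase the part size")
    return parts

# A 500 GB daily archive split into 100 MiB parts yields 4,769 parts,
# which can be uploaded in parallel over Transfer Acceleration.
print(part_count(500 * 10**9, 100 * 1024**2))  # 4769
```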

NEW QUESTION 17
A solutions architect is designing a customer-facing application for a company. The application's database will have a clearly defined access pattern throughout the year and will have a variable number of reads and writes that depend on the time of year. The company must retain audit records for the database for 7 days. The recovery point objective (RPO) must be less than 5 hours.
Which solution meets these requirements?

  • A. Use Amazon DynamoDB with auto scaling. Use on-demand backups and Amazon DynamoDB Streams.
  • B. Use Amazon Redshift. Configure concurrency scaling. Activate audit logging. Perform database snapshots every 4 hours.
  • C. Use Amazon RDS with Provisioned IOPS. Activate the database auditing parameter. Perform database snapshots every 5 hours.
  • D. Use Amazon Aurora MySQL with auto scaling. Activate the database auditing parameter.
Answer: B

NEW QUESTION 18
......

Thanks for reading the newest SAA-C03 exam dumps! We recommend you to try the PREMIUM Dumpscollection.com SAA-C03 dumps in VCE and PDF here: https://www.dumpscollection.net/dumps/SAA-C03/ (0 Q&As Dumps)