AWS-Certified-DevOps-Engineer-Professional | Amazon AWS-Certified-DevOps-Engineer-Professional Dumps Questions 2021

Real success guaranteed with updated questions for Amazon certification. 100% PASS AWS-Certified-DevOps-Engineer-Professional Amazon AWS Certified DevOps Engineer Professional exam today!

Online Amazon AWS-Certified-DevOps-Engineer-Professional free dumps demo below:

NEW QUESTION 1
Which major database needs a BYO license?

  • A. PostgreSQL
  • B. MariaDB
  • C. MySQL
  • D. Oracle

Answer: D

Explanation: Oracle is not open source and requires a bring-your-own-license (BYOL) model.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Oracle.html
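
For illustration, here is a minimal boto3 sketch of how the BYOL model is selected when creating an RDS Oracle instance; the identifier, instance class, and credentials below are placeholder assumptions, not part of the question.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

# Oracle on RDS supports the bring-your-own-license (BYOL) model;
# open-source engines such as MySQL or PostgreSQL have no such option.
rds.create_db_instance(
    DBInstanceIdentifier="example-oracle-db",   # hypothetical name
    Engine="oracle-ee",
    LicenseModel="bring-your-own-license",
    DBInstanceClass="db.m5.large",
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",
    AllocatedStorage=100,
)
```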

NEW QUESTION 2
You need to perform ad-hoc analysis on log data, including searching quickly for specific error codes and reference numbers. Which should you evaluate first?

  • A. AWS Elasticsearch Service
  • B. AWS RedShift
  • C. AWS EMR
  • D. AWS DynamoDB

Answer: A

Explanation: Amazon Elasticsearch Service (Amazon ES) is a managed service that makes it easy to deploy, operate, and scale Elasticsearch clusters in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.
Reference: http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/what-is-amazon-elasticsearch-service.html
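
As a rough sketch of the ad-hoc search use case, the snippet below queries a hypothetical Amazon ES domain endpoint for a specific error code using Elasticsearch's standard `_search` API; the endpoint and index pattern are assumptions.

```python
import requests  # plain HTTP client; the domain endpoint below is a placeholder

ES_ENDPOINT = "https://search-example-logs.us-east-1.es.amazonaws.com"

# Ad-hoc search: find log entries mentioning a specific error code.
# "logs-*" is a hypothetical index pattern for ingested log data.
resp = requests.get(
    f"{ES_ENDPOINT}/logs-*/_search",
    params={"q": "ERR-1234", "size": 20},
)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])
```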

NEW QUESTION 3
When thinking of AWS OpsWorks, which of the following is not an instance type you can allocate in a stack layer?

  • A. 24/7 instances
  • B. Spot instances
  • C. Time-based instances
  • D. Load-based instances

Answer: B

Explanation: AWS OpsWorks supports the following instance types, which are characterized by how they are started and stopped. 24/7 instances are started manually and run until you stop them. Time-based instances are run by AWS OpsWorks on a specified daily and weekly schedule. They allow your stack to automatically adjust the number of instances to accommodate predictable usage patterns. Load-based instances are automatically started and stopped by AWS OpsWorks, based on specified load metrics, such as CPU utilization. They allow your stack to automatically adjust the number of instances to accommodate variations in incoming traffic. Load-based instances are available only for Linux-based stacks.
Reference: http://docs.aws.amazon.com/opsworks/latest/userguide/welcome.html
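
A hedged boto3 sketch of how these three types are requested via the OpsWorks `create_instance` call; the stack, layer, and instance-type values are placeholders.

```python
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

common = {
    "StackId": "STACK_ID",      # placeholder IDs, not from the question
    "LayerIds": ["LAYER_ID"],
    "InstanceType": "t2.medium",
}

# 24/7 instance: no AutoScalingType, started manually, runs until stopped.
opsworks.create_instance(**common)

# Time-based instance: runs on a daily/weekly schedule.
opsworks.create_instance(**common, AutoScalingType="timer")

# Load-based instance: started/stopped from load metrics such as CPU.
opsworks.create_instance(**common, AutoScalingType="load")

# There is no "spot" AutoScalingType: Spot is an EC2 purchasing option,
# not an OpsWorks stack-layer instance type.
```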

NEW QUESTION 4
What is a circular dependency in AWS CloudFormation?

  • A. When a Template references an earlier version of itself.
  • B. When Nested Stacks depend on each other.
  • C. When Resources form a DependsOn loop.
  • D. When a Template references a region, which references the original Template.

Answer: C

Explanation: To resolve a dependency error, add a DependsOn attribute to resources that depend on other resources in your template. In some cases, you must explicitly declare dependencies so that AWS CloudFormation can create or delete resources in the correct order. For example, if you create an Elastic IP and a VPC with an Internet gateway in the same stack, the Elastic IP must depend on the Internet gateway attachment. For additional information, see DependsOn Attribute.
Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-dependence-error
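
As a minimal sketch, here is how the DependsOn attribute expresses the Elastic IP example from the explanation, built as a template fragment in a Python dict; the resource names are hypothetical and the referenced VPC and Internet gateway resources are elided.

```python
import json

# Minimal CloudFormation fragment mirroring the docs' example: the Elastic IP
# must wait for the gateway attachment, or stack creation can fail.
template = {
    "Resources": {
        "GatewayAttachment": {
            "Type": "AWS::EC2::VPCGatewayAttachment",
            "Properties": {"VpcId": {"Ref": "MyVPC"},
                           "InternetGatewayId": {"Ref": "MyIGW"}},
        },
        "MyEIP": {
            "Type": "AWS::EC2::EIP",
            "DependsOn": "GatewayAttachment",  # explicit ordering edge
            "Properties": {"Domain": "vpc"},
        },
    }
}
# Two resources whose DependsOn attributes pointed at each other would
# form the circular dependency the question describes.
print(json.dumps(template, indent=2))
```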

NEW QUESTION 5
For AWS CloudFormation, which stack state refuses UpdateStack calls?

  • A. <code>UPDATE_ROLLBACK_FAILED</code>
  • B. <code>UPDATE_ROLLBACK_COMPLETE</code>
  • C. <code>UPDATE_COMPLETE</code>
  • D. <code>CREATE_COMPLETE</code>

Answer: A

Explanation: When a stack is in the UPDATE_ROLLBACK_FAILED state, you can continue rolling it back to return it to a working state (to UPDATE_ROLLBACK_COMPLETE). You cannot update a stack that is in the UPDATE_ROLLBACK_FAILED state. However, if you can continue to roll it back, you can return the stack to its original settings and try to update it again.
Reference: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-continueupdaterollback.html
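
A short boto3 sketch of recovering from this state, assuming a placeholder stack name:

```python
import boto3

cfn = boto3.client("cloudformation")

stack = "example-stack"  # placeholder stack name

status = cfn.describe_stacks(StackName=stack)["Stacks"][0]["StackStatus"]
if status == "UPDATE_ROLLBACK_FAILED":
    # UpdateStack is refused in this state; finish the rollback first,
    # then the stack can be updated again from UPDATE_ROLLBACK_COMPLETE.
    cfn.continue_update_rollback(StackName=stack)
```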

NEW QUESTION 6
You need to process long-running jobs once and only once. How might you do this?

  • A. Use an SNS queue and set the visibility timeout to long enough for jobs to process.
  • B. Use an SQS queue and set the reprocessing timeout to long enough for jobs to process.
  • C. Use an SQS queue and set the visibility timeout to long enough for jobs to process.
  • D. Use an SNS queue and set the reprocessing timeout to long enough for jobs to process.

Answer: C

Explanation: The visibility timeout defines how long after a successful receive request SQS keeps a message hidden from other consumers. Setting it longer than the job's processing time prevents the same job from being processed more than once.
Reference: http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/MessageLifecycle.html
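
A minimal boto3 sketch of the pattern, assuming a hypothetical queue name and a 900-second job duration; `process_job` is a stand-in for the real work:

```python
import boto3

sqs = boto3.client("sqs")

def process_job(body):
    """Placeholder for the long-running work itself."""
    print("processing", body)

# The visibility timeout (900s here, an assumed job duration) must exceed
# the longest expected processing time so no other worker sees the message
# while it is being processed.
queue_url = sqs.create_queue(
    QueueName="long-jobs",                      # hypothetical queue name
    Attributes={"VisibilityTimeout": "900"},
)["QueueUrl"]

msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
for msg in msgs.get("Messages", []):
    process_job(msg["Body"])
    # Deleting after success is what makes processing effectively once-only.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```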

NEW QUESTION 7
What does it mean if you have zero IOPS and a non-empty I/O queue for all EBS volumes attached to a running EC2 instance?

  • A. The I/O queue is buffer flushing.
  • B. Your EBS disk head(s) is/are seeking magnetic stripes.
  • C. The EBS volume is unavailable.
  • D. You need to re-mount the EBS volume in the OS.

Answer: C

Explanation: This is the definition of Unavailable from the EC2 and EBS SLA.
"Unavailable" and "Unavailability" mean... For Amazon EBS, when all of your attached volumes perform zero read/write IO, with pending IO in the queue.
Reference: https://aws.amazon.com/ec2/sla/
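
One way to observe this condition yourself is via the AWS/EBS CloudWatch metrics; a hedged sketch, with a placeholder volume ID:

```python
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")

def volume_stat(volume_id, metric):
    """Average of an AWS/EBS metric over the last 10 minutes."""
    pts = cw.get_metric_statistics(
        Namespace="AWS/EBS",
        MetricName=metric,
        Dimensions=[{"Name": "VolumeId", "Value": volume_id}],
        StartTime=datetime.utcnow() - timedelta(minutes=10),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )["Datapoints"]
    return sum(p["Average"] for p in pts) / len(pts) if pts else 0.0

vol = "vol-0123456789abcdef0"  # placeholder volume ID
reads = volume_stat(vol, "VolumeReadOps")
writes = volume_stat(vol, "VolumeWriteOps")
queue = volume_stat(vol, "VolumeQueueLength")

# Zero IOPS with a non-empty queue matches the SLA's "Unavailable" condition.
if reads == 0 and writes == 0 and queue > 0:
    print(f"{vol} appears unavailable")
```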

NEW QUESTION 8
You are building a Ruby on Rails application for internal, non-production use which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?

  • A. AWS CIoudFormation
  • B. AWS OpsWorks
  • C. AWS ELB + EC2 with CLI Push
  • D. AWS Elastic Beanstalk

Answer: D

Explanation: Elastic Beanstalk's primary mode of operation supports this use case out of the box, and it is simpler than all the other options for this question.
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Ruby_rails.html
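
In practice the single-command push is the EB CLI's `eb deploy`; under the hood it amounts to roughly the following boto3 calls (application, environment, and bundle names here are placeholder assumptions, and the bundle is assumed to be already uploaded to S3):

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Register a new application version from an S3 bundle, then point the
# environment at it; Elastic Beanstalk handles provisioning, load
# balancing, scaling, and health monitoring from there.
eb.create_application_version(
    ApplicationName="rails-app",                 # hypothetical names
    VersionLabel="v42",
    SourceBundle={"S3Bucket": "deploy-bucket", "S3Key": "app-v42.zip"},
)
eb.update_environment(EnvironmentName="rails-app-env", VersionLabel="v42")
```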

NEW QUESTION 9
What is the order of most-to-least rapidly-scaling (fastest to scale first)?
(A) EC2 + ELB + Auto Scaling (B) Lambda (C) RDS

  • A. B, A, C
  • B. C, B, A
  • C. C, A, B
  • D. A, C, B

Answer: A

Explanation: Lambda is designed to scale instantly. EC2 + ELB + Auto Scaling require single-digit minutes to scale out. RDS will take at least 15 minutes, and will apply OS patches or any other pending updates when it scales.
Reference: https://aws.amazon.com/lambda/faqs/

NEW QUESTION 10
What is required to achieve 10 gigabit network throughput on EC2? You already selected cluster-compute instances with 10 gigabit enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds.

  • A. Enable biplex networking on your servers, so packets are non-blocking in both directions and there's no switching overhead.
  • B. Ensure the instances are in different VPCs so you don't saturate the Internet Gateway on any one VPC.
  • C. Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput.
  • D. Use a placement group for your instances so the instances are physically near each other in the same Availability Zone.

Answer: D

Explanation: You are not guaranteed 10 gigabit performance except within a placement group.
A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
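
A brief boto3 sketch of launching instances into a cluster placement group; the AMI, instance type, and group name are placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together in one AZ,
# which is what unlocks the full 10 Gbps network within the group.
ec2.create_placement_group(GroupName="hpc-group", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-12345678",          # placeholder AMI
    InstanceType="c4.8xlarge",       # an enhanced-networking instance type
    MinCount=4,
    MaxCount=4,
    Placement={"GroupName": "hpc-group"},
)
```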

NEW QUESTION 11
You have been asked to de-risk deployments at your company. Specifically, the CEO is concerned about outages that occur because of accidental inconsistencies between Staging and Production, which sometimes cause unexpected behaviors in Production even when Staging tests pass.
You already use Docker to get high consistency between Staging and Production for the application environment on your EC2 instances. How do you further de-risk the rest of the execution environment, since in AWS, there are many service components you may use beyond EC2 virtual machines?

  • A. Develop models of your entire cloud system in CloudFormation. Use this model in Staging and Production to achieve greater parity.
  • B. Use AWS Config to force the Staging and Production stacks to have configuration parity. Any differences will be detected for you so you are aware of risks.
  • C. Use AMIs to ensure the whole machine, including the kernel of the virtual machines, is consistent, since Docker uses Linux Container (LXC) technology, and we need to make sure the container environment is consistent.
  • D. Use AWS ECS and Docker clustering. This will make sure that the AMIs and machine sizes are the same across both environments.

Answer: A

Explanation: Only CloudFormation's JSON Templates allow declarative version control of repeatably deployable models of entire AWS clouds.
Reference: https://blogs.aws.amazon.com/application-management/blog/category/Best+practices

NEW QUESTION 12
You need the absolute highest possible network performance for a cluster computing application. You already selected homogeneous instance types supporting 10 gigabit enhanced networking, made sure that your workload was network bound, and put the instances in a placement group. What is the last optimization you can make?

  • A. Use 9001 MTU instead of 1500 for Jumbo Frames, to raise packet body to packet overhead ratios.
  • B. Segregate the instances into different peered VPCs while keeping them all in a placement group, so each one has its own Internet Gateway.
  • C. Bake an AMI for the instances and relaunch, so the instances are fresh in the placement group and do not have noisy neighbors.
  • D. Turn off SYN/ACK on your TCP stack or begin using UDP for higher throughput.

Answer: A

Explanation: For instances that are collocated inside a placement group, jumbo frames help to achieve the maximum network throughput possible, and they are recommended in this case. For more information, see Placement Groups.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html#jumbo_frame_instances
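
Setting the MTU is an operating-system change on each instance, not an AWS API call; a minimal sketch assuming a Linux instance whose primary interface is eth0:

```python
import subprocess

IFACE = "eth0"  # interface name is an assumption; check `ip link` first

# Raise the MTU to 9001 to enable jumbo frames, improving the ratio of
# packet body to packet overhead inside the placement group.
subprocess.run(["sudo", "ip", "link", "set", "dev", IFACE, "mtu", "9001"],
               check=True)

# Verify the change took effect.
subprocess.run(["ip", "link", "show", IFACE], check=True)
```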

NEW QUESTION 13
You are hired as the new head of operations for a SaaS company. Your CTO has asked you to make debugging any part of your entire operation simpler and as fast as possible. She complains that she has no idea what is going on in the complex, service-oriented architecture, because the developers just log to disk, and it's very hard to find errors in logs on so many services. How can you best meet this requirement and satisfy your CTO?

  • A. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Lambda. Use the Lambda to analyze logs as soon as they come in and flag issues.
  • B. Begin using CloudWatch Logs on every service. Stream all Log Groups into S3 objects. Use AWS EMR cluster jobs to perform ad-hoc MapReduce analysis and write new queries when needed.
  • C. Copy all log files into AWS S3 using a cron job on each instance. Use an S3 Notification Configuration on the <code>PutBucket</code> event and publish events to AWS Kinesis. Use Apache Spark on AWS EMR to perform at-scale stream processing queries on the log chunks and flag issues.
  • D. Begin using CloudWatch Logs on every service. Stream all Log Groups into an AWS Elasticsearch Service Domain running Kibana 4 and perform log analysis on a search cluster.

Answer: D

Explanation: The Elasticsearch and Kibana 4 combination (the core of the ELK stack) is designed specifically for real-time, ad-hoc log analysis and aggregation. All other answers introduce extra delay or require pre-defined queries.
Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS Cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.
Reference: https://aws.amazon.com/elasticsearch-service/
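
On the ingestion side, streaming a Log Group to Amazon ES is wired up with a subscription filter; a hedged boto3 sketch, assuming the destination is a log-shipping Lambda (the ARN and log group name are placeholders):

```python
import boto3

logs = boto3.client("logs")

# One subscription filter per service's log group; the destination here is
# assumed to be a log-shipping Lambda that indexes events into the
# Elasticsearch Service domain (the Lambda must separately grant CloudWatch
# Logs permission to invoke it).
logs.put_subscription_filter(
    logGroupName="/my-service/app",             # placeholder log group
    filterName="ship-to-elasticsearch",
    filterPattern="",                           # empty pattern ships everything
    destinationArn="arn:aws:lambda:us-east-1:123456789012:function:LogsToES",
)
```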

NEW QUESTION 14
You are creating an application which stores extremely sensitive financial information. All information in the system must be encrypted at rest and in transit. Which of these is a violation of this policy?

  • A. ELB SSL termination.
  • B. ELB Using Proxy Protocol v1.
  • C. CloudFront Viewer Protocol Policy set to HTTPS redirection.
  • D. Telling S3 to use AES256 on the server-side.

Answer: A

Explanation: Terminating SSL at the ELB means traffic continues to the back end over plain HTTP, removing the S for "Secure" in HTTPS. This violates the "encryption in transit" requirement in the scenario.
Reference: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
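
For contrast, a hedged sketch of a classic ELB listener that keeps the connection encrypted all the way to the instances rather than terminating SSL; all names and the certificate ARN are placeholder assumptions:

```python
import boto3

elb = boto3.client("elb")  # classic ELB, as in the question's era

# HTTPS on the front end AND HTTPS to the instances: the load balancer
# re-encrypts instead of terminating SSL and forwarding plain HTTP.
elb.create_load_balancer(
    LoadBalancerName="secure-elb",                      # placeholder names
    Listeners=[{
        "Protocol": "HTTPS",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "HTTPS",  # plain "HTTP" here would violate
        "InstancePort": 443,          # encryption-in-transit
        "SSLCertificateId": "arn:aws:iam::123456789012:server-certificate/example",
    }],
    AvailabilityZones=["us-east-1a"],
)
```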

NEW QUESTION 15
You are creating a new API for video game scores. Reads are 100 times more common than writes, and the top 1% of scores are read 100 times more frequently than the rest of the scores. What's the best design for this system, using DynamoDB?

  • A. DynamoDB table with 100x higher read than write throughput, with CloudFront caching.
  • B. DynamoDB table with roughly equal read and write throughput, with CloudFront caching.
  • C. DynamoDB table with 100x higher read than write throughput, with ElastiCache caching.
  • D. DynamoDB table with roughly equal read and write throughput, with ElastiCache caching.

Answer: D

Explanation: Because the 100x read ratio is mostly driven by a small subset, with caching, only a roughly equal number of reads to writes will miss the cache, since the supermajority will hit the top 1% scores. Knowing we need to set the values roughly equal when using caching, we select AWS ElastiCache, because CloudFront cannot directly cache DynamoDB queries, and ElastiCache is an excellent in-memory cache for database queries, rather than a distributed proxy cache for content delivery.
One solution would be to cache these reads at the application layer. Caching is a technique that is used in many high-throughput applications, offloading read activity on hot items to the cache rather than to the database. Your application can cache the most popular items in memory, or use a product such as ElastiCache to do the same.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html#GuidelinesForTables.CachePopularItem
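
A minimal cache-aside sketch of this design, assuming a hypothetical GameScores table and an ElastiCache Redis endpoint:

```python
import json
import boto3
import redis  # the ElastiCache Redis endpoint below is a placeholder

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("GameScores")                    # hypothetical table
cache = redis.Redis(host="scores.abc123.cache.amazonaws.com", port=6379)

def get_score(player_id):
    """Cache-aside read: hot top-1% scores are served from ElastiCache,
    so most reads never reach DynamoDB."""
    cached = cache.get(player_id)
    if cached is not None:
        return json.loads(cached)
    item = table.get_item(Key={"PlayerId": player_id}).get("Item")
    if item is not None:
        # default=str handles DynamoDB's Decimal values; 60s TTL assumed.
        cache.set(player_id, json.dumps(item, default=str), ex=60)
    return item
```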

NEW QUESTION 16
Your serverless architecture using AWS API Gateway, AWS Lambda, and AWS DynamoDB experienced a large increase in traffic to a sustained 400 requests per second, and failure rates increased dramatically. Your requests, during normal operation, last 500 milliseconds on average. Your DynamoDB table did not exceed 50% of provisioned throughput, and Table primary keys are designed correctly. What is the most likely issue?

  • A. Your API Gateway deployment is throttling your requests.
  • B. Your AWS API Gateway Deployment is bottlenecking on request (de)seriaIization.
  • C. You did not request a limit increase on concurrent Lambda function executions.
  • D. You used Consistent Read requests on DynamoDB and are experiencing semaphore lock.

Answer: C

Explanation: AWS API Gateway by default throttles at 500 requests per second steady-state, and 1000 requests per second at spike. Lambda, by default, throttles at 100 concurrent requests for safety. At 500 milliseconds (half of a second) per request, you can expect to support 200 requests per second at 100 concurrency. This is less than the 400 requests per second your system now requires. Make a limit increase request via the AWS Support Console.
AWS Lambda: Concurrent requests safety throttle per account -> 100
Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_lambda
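
The arithmetic behind the explanation, as a worked snippet (Little's law: required concurrency = arrival rate x average duration):

```python
# Values taken from the question; the limit is Lambda's default throttle.
request_rate = 400      # requests per second
avg_duration = 0.5      # seconds per request (500 ms)

required_concurrency = request_rate * avg_duration   # 400 x 0.5 = 200
default_safety_limit = 100                           # default concurrent limit

print(f"Need {required_concurrency:.0f} concurrent executions; "
      f"default limit is {default_safety_limit}")    # 200 > 100 -> throttled
```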

NEW QUESTION 17
For AWS Auto Scaling, what is the first transition state a new instance enters after leaving steady state when scaling out due to increased load?

  • A. EnteringStandby
  • B. Pending
  • C. Terminating:Wait
  • D. Detaching

Answer: B

Explanation: When a scale out event occurs, the Auto Scaling group launches the required number of EC2 instances, using its assigned launch configuration. These instances start in the Pending state. If you add a lifecycle hook to your Auto Scaling group, you can perform a custom action here. For more information, see Lifecycle Hooks.
Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AutoScalingGroupLifecycle.html
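
If you do want a custom action during Pending, here is a hedged boto3 sketch of registering a launch lifecycle hook; the hook name, group name, and timeout are placeholder assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Optional custom action while a scale-out instance is still in Pending:
# the hook moves it to Pending:Wait until you complete the action or the
# heartbeat timeout expires.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="bootstrap-hook",                 # placeholder names
    AutoScalingGroupName="web-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_LAUNCHING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)
```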

NEW QUESTION 18
You are building out a layer in a software stack on AWS that needs to be able to scale out to react to increased demand as fast as possible. You are running the code on EC2 instances in an Auto Scaling Group behind an ELB. Which application code deployment method should you use?

  • A. SSH into new instances that come online, and deploy new code onto the system by pulling it from an S3 bucket, which is populated by code that you refresh from source control on new pushes.
  • B. Bake an AMI when deploying new versions of code, and use that AMI for the Auto Scaling Launch Configuration.
  • C. Create a Dockerfile when preparing to deploy a new version to production and publish it to S3. Use UserData in the Auto Scaling Launch configuration to pull down the Dockerfile from S3 and run it when new instances launch.
  • D. Create a new Auto Scaling Launch Configuration with UserData scripts configured to pull the latest code at all times.

Answer: B

Explanation: The bootstrapping process can be slower if you have a complex application or multiple applications to install. Managing a fleet of applications with several build tools and dependencies can be a challenging task during rollouts. Furthermore, your deployment service should be designed to do faster rollouts to take advantage of Auto Scaling. Baking the code into an AMI removes that per-instance installation work at launch, so new instances come into service as fast as possible.
Reference: https://d0.awsstatic.com/whitepapers/overview-of-deployment-options-on-aws.pdf
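
A hedged boto3 sketch of the bake-and-swap flow (the instance ID, AMI name, and group names are placeholders; a real rollout would also wait for the AMI to become available before creating the launch configuration):

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# 1. Bake: snapshot a fully configured instance into an AMI so new
#    instances boot with code already installed (no bootstrap delay).
ami = ec2.create_image(InstanceId="i-0123456789abcdef0",  # placeholder IDs
                       Name="app-v42")["ImageId"]

# 2. Point a new launch configuration at the baked AMI...
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-v42-lc",
    ImageId=ami,
    InstanceType="m4.large",
)

# 3. ...and swap it into the Auto Scaling group behind the ELB.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="app-v42-lc",
)
```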

100% Valid and Newest Version AWS-Certified-DevOps-Engineer-Professional Questions & Answers shared by 2passeasy, Get Full Dumps HERE: https://www.2passeasy.com/dumps/AWS-Certified-DevOps-Engineer-Professional/ (New 102 Q&As)