DOP-C01 | Renovate AWS Certified DevOps Engineer - Professional DOP-C01 Dumps Questions

We provide real DOP-C01 exam questions and answers in two formats: downloadable PDF and practice tests. Pass the Amazon Web Services DOP-C01 exam quickly and easily. The DOP-C01 PDF version can be read and printed, so you can print it and practice as many times as you like. With the help of our Amazon Web Services DOP-C01 PDF and VCE products and material, you can easily pass the DOP-C01 exam.

Online Amazon Web Services DOP-C01 free dumps demo below:

NEW QUESTION 1
Which of the following services from AWS can be integrated with the Jenkins continuous integration tool?

  • A. Amazon EC2
  • B. Amazon ECS
  • C. Amazon Elastic Beanstalk
  • D. All of the above

Answer: D

Explanation:
The following AWS services can be integrated with Jenkins:
[Exhibit image omitted]
For more information on Jenkins in AWS, please refer to the below link:
https://d0.awsstatic.com/whitepapers/DevOps/Jenkins_on_AWS.pdf

NEW QUESTION 2
A vendor needs access to your AWS account. They need to be able to read protected messages in a private S3 bucket. They have a separate AWS account. Which of the solutions below is the best way to do this?

  • A. Allow the vendor to SSH into your EC2 instance and grant them an IAM role with full access to the bucket.
  • B. Create a cross-account IAM role with permission to access the bucket, and grant permission to use the role to the vendor AWS account.
  • C. Create an IAM User with an API Access Key. Give the vendor the AWS Access Key ID and AWS Secret Access Key for the user.
  • D. Create an S3 bucket policy that allows the vendor to read from the bucket from their AWS account.

Answer: B

Explanation:
The AWS Documentation mentions the following on cross-account roles:
You can use AWS Identity and Access Management (IAM) roles and AWS Security Token Service (STS) to set up cross-account access between AWS accounts. When you assume an IAM role in another AWS account to obtain cross-account access to services and resources in that account, AWS CloudTrail logs the cross-account activity. For more information on cross-account roles, please visit the below URLs:
http://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/example-walkthroughs-managing-access-example2.html
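
As a rough sketch of option B (not from the exam or the AWS docs; the role ARN and bucket name are hypothetical), the vendor's account could assume the cross-account role with boto3 and read the bucket using the temporary STS credentials:

```python
import boto3

# Hypothetical role ARN and bucket name for illustration only.
VENDOR_ROLE_ARN = "arn:aws:iam::111122223333:role/VendorS3ReadRole"
BUCKET = "my-protected-bucket"

# The vendor assumes the cross-account role from their own account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn=VENDOR_ROLE_ARN,
    RoleSessionName="vendor-read-session",
)["Credentials"]

# The temporary credentials returned by STS are used to read the bucket.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"])
```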

NEW QUESTION 3
You are building a Ruby on Rails application for internal, non-production use which uses MySQL as a database. You want developers without very much AWS experience to be able to deploy new code with a single command line push. You also want to set this up as simply as possible. Which tool is ideal for this setup?

  • A. AWS CloudFormation
  • B. AWS OpsWorks
  • C. AWS ELB + EC2 with CLI Push
  • D. AWS Elastic Beanstalk

Answer: D

Explanation:
With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications.
AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.
Elastic Beanstalk supports applications developed in Java, PHP, .NET, Node.js, Python, and Ruby, as well as different container types for each language.
For more information on Elastic Beanstalk, please visit the below URL:
• http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html

NEW QUESTION 4
After a daily scrum with your development teams, you've agreed that using Blue/Green style deployments would benefit the team. Which technique should you use to deliver this new requirement?

  • A. Re-deploy your application on AWS Elastic Beanstalk, and take advantage of Elastic Beanstalk deployment types.
  • B. Using an AWS CloudFormation template, re-deploy your application behind a load balancer, launch a new AWS CloudFormation stack during each deployment, update your load balancer to send half your traffic to the new stack while you test, after verification update the load balancer to send 100% of traffic to the new stack, and then terminate the old stack.
  • C. Create a new Auto Scaling group with the new launch configuration and a desired capacity equal to that of the initial Auto Scaling group, and associate it with the same load balancer. Once the new Auto Scaling group's instances are registered with the ELB, modify the desired capacity of the initial Auto Scaling group to zero and gradually delete the old Auto Scaling group.
  • D. Using an AWS OpsWorks stack, re-deploy your application behind an Elastic Load Balancing load balancer and take advantage of OpsWorks stack versioning, during deployment create a new version of your application, tell OpsWorks to launch the new version behind your load balancer, and when the new version is launched, terminate the old OpsWorks stack.

Answer: C

Explanation:
This approach is given as a recommended practice in the AWS Blue/Green deployment guidance.
[Exhibit image omitted]
A blue group carries the production load while a green group is staged and deployed with the new code. When it's time to deploy, you simply attach the green group to the existing load balancer to introduce traffic to the new environment. For HTTP/HTTPS listeners, the load balancer favors the green Auto Scaling group because it uses a least outstanding requests routing algorithm.
As you scale up the green Auto Scaling group, you can take blue Auto Scaling group instances out of service by either terminating them or putting them in Standby state.
For more information on Blue/Green deployments, please refer to the below AWS whitepaper:
https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf
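
A minimal boto3 sketch of the swap described above, assuming hypothetical Auto Scaling group and Classic Load Balancer names:

```python
import boto3

# Hypothetical names for illustration; not part of the original question.
ELB_NAME = "prod-elb"
GREEN_ASG = "green-asg"
BLUE_ASG = "blue-asg"

autoscaling = boto3.client("autoscaling")

# Attach the green Auto Scaling group to the existing Classic Load Balancer
# so it starts receiving traffic alongside the blue group.
autoscaling.attach_load_balancers(
    AutoScalingGroupName=GREEN_ASG,
    LoadBalancerNames=[ELB_NAME],
)

# Once the green instances are InService and verified, drain the blue group.
autoscaling.set_desired_capacity(
    AutoScalingGroupName=BLUE_ASG,
    DesiredCapacity=0,
    HonorCooldown=False,
)
```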

NEW QUESTION 5
A user is trying to save some cost on the AWS services. Which of the below mentioned options will not help him save cost?

  • A. Delete the unutilized EBS volumes once the instance is terminated
  • B. Delete the AutoScaling launch configuration after the instances are terminated
  • C. Release the elastic IP if not required once the instance is terminated
  • D. Delete the AWS ELB after the instances are terminated

Answer: B

Explanation:
Option A is wrong because EBS volumes do have a cost associated with them, so deleting unutilized volumes after the instance is terminated will save on cost.
Option C is wrong because an Elastic IP incurs cost if it is allocated but not in use. Option D is wrong because an ELB also incurs costs.
Only the Auto Scaling service itself, including launch configurations, is free of cost; you are charged only for the underlying resources, so deleting a launch configuration saves nothing. For more information on AWS pricing, please visit the link: https://aws.amazon.com/pricing/services/

NEW QUESTION 6
You are designing a system which needs, at a minimum, 8 m4.large instances operating to service traffic. When designing a system for high availability in the us-east-1 region, which has 6 Availability Zones, your company needs to be able to handle the death of a full availability zone. How should you distribute the servers, to save as much cost as possible, assuming all of the EC2 nodes are properly linked to an ELB? Your VPC account can utilize us-east-1's AZ's a through f, inclusive.

  • A. 3 servers in each of AZ's a through d, inclusive.
  • B. 8 servers in each of AZ's a and b.
  • C. 2 servers in each of AZ's a through e, inclusive.
  • D. 4 servers in each of AZ's a through c, inclusive.

Answer: C

Explanation:
The best way is to distribute the instances across multiple AZ's to avoid a disaster scenario. With option C (10 servers spread across 5 AZ's), you will always have a minimum of 8 servers in service even if one AZ goes down. Even though A and D also survive an AZ failure, they require 12 servers instead of 10, so option C is the best distribution for the lowest cost. For more information on high availability and fault tolerance, please refer to the below link:
https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_04.pdf
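
To make the comparison concrete, this small Python check (using the option values above) computes each option's total server count and the worst-case capacity after losing one AZ:

```python
# Total servers is the cost proxy; the requirement is at least 8 servers
# remaining after a full AZ failure.
options = {
    "A": {"per_az": 3, "azs": 4},
    "B": {"per_az": 8, "azs": 2},
    "C": {"per_az": 2, "azs": 5},
    "D": {"per_az": 4, "azs": 3},
}

for name, o in options.items():
    total = o["per_az"] * o["azs"]
    after_az_loss = total - o["per_az"]
    print(f"{name}: total={total}, after AZ loss={after_az_loss}, "
          f"meets requirement={after_az_loss >= 8}")

# A: 12 -> 9 (ok), B: 16 -> 8 (ok but most expensive), C: 10 -> 8 (ok,
# cheapest), D: 12 -> 8 (ok). Option C meets the requirement with the
# fewest servers.
```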

NEW QUESTION 7
Your application's Auto Scaling Group scales up too quickly, too much, and stays scaled when traffic decreases. What should you do to fix this?

  • A. Set a longer cooldown period on the Group, so the system stops overshooting the target capacity. The issue is that the scaling system doesn't allow enough time for new instances to begin servicing requests before measuring aggregate load again.
  • B. Calculate the bottleneck or constraint on the compute layer, then select that as the new metric, and set the metric thresholds to the bounding values that begin to affect response latency.
  • C. Raise the CloudWatch Alarms threshold associated with your Auto Scaling group, so the scaling takes more of an increase in demand before beginning.
  • D. Use larger instances instead of lots of smaller ones, so the Group stops scaling out so much and wasting resources at the OS level, since the OS uses a higher proportion of resources on smaller instances.

Answer: B

Explanation:
The ideal fix is to base scaling on the right metric, which is clearly not the case here.
Option A is not valid because the group also stays scaled when traffic decreases; a longer cooldown does not help when the scale-down metric threshold is never breached in CloudWatch.
Option C is not valid because raising the CloudWatch alarm threshold will not ensure that the instances scale down when the traffic decreases.
Option D is not valid because the question does not mention any constraint that points to the instance size. For an example of using custom metrics for scaling in and out, please follow the below link for a use case:
• https://blog.powerupcloud.com/aws-autoscaling-based-on-database-query-custom-metrics-f396c16e5e6a
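
A minimal sketch of wiring a custom bottleneck metric to a scaling policy with boto3; the group name, namespace, metric name, and threshold are all hypothetical:

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# A simple step-out policy on the (assumed) Auto Scaling group.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-bottleneck",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm on the constraint metric, with the threshold set at the value
# that begins to affect response latency.
cloudwatch.put_metric_alarm(
    AlarmName="app-bottleneck-high",
    Namespace="MyApp",               # assumed custom namespace
    MetricName="WorkerQueueDepth",   # hypothetical bottleneck metric
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```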

NEW QUESTION 8
You work for a startup that has developed a new photo-sharing application for mobile devices. Over recent months your application has increased in popularity; this has resulted in a decrease in the performance of the application due to the increased load. Your application has a two-tier architecture that is composed of an Auto Scaling PHP application tier and a MySQL RDS instance initially deployed with AWS CloudFormation. Your Auto Scaling group has a min value of 4 and a max value of 8. The desired capacity is now at 8 due to the high CPU utilization of the instances. After some analysis, you are confident that the performance issues stem from a constraint in CPU capacity, while memory utilization remains low. You therefore decide to move from the general-purpose M3 instances to the compute-optimized C3 instances. How would you deploy this change while minimizing any interruption to your end users?

  • A. Sign in to the AWS Management Console, copy the old launch configuration, and create a new launch configuration that specifies the C3 instances. Update the Auto Scaling group with the new launch configuration. Auto Scaling will then update the instance type of all running instances.
  • B. Sign in to the AWS Management Console and update the existing launch configuration with the new C3 instance type. Add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate.
  • C. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Run a stack update with the new template. Auto Scaling will then update the instances with the new instance type.
  • D. Update the launch configuration specified in the AWS CloudFormation template with the new C3 instance type. Also add an UpdatePolicy attribute to your Auto Scaling group that specifies an AutoScalingRollingUpdate. Run a stack update with the new template.

Answer: D

Explanation:
The AWS Documentation mentions the below:
The AWS::AutoScaling::AutoScalingGroup resource supports an UpdatePolicy attribute. This is used to define how an Auto Scaling group resource is updated when an update to the CloudFormation stack occurs. A common approach to updating an Auto Scaling group is to perform a rolling update, which is done by specifying the AutoScalingRollingUpdate policy. This retains the same Auto Scaling group and replaces old instances with new ones, according to the parameters specified.
For more information on rolling updates for Auto Scaling, please see the below link:
• https://aws.amazon.com/premiumsupport/knowledge-center/auto-scaling-group-rolling-updates/
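
For illustration, a minimal CloudFormation fragment with the AutoScalingRollingUpdate policy, expressed here as a Python dict (resource names and values are placeholders, not the original stack):

```python
import json

# Sketch of the Auto Scaling group resource with a rolling-update policy.
asg_fragment = {
    "WebServerGroup": {
        "Type": "AWS::AutoScaling::AutoScalingGroup",
        "UpdatePolicy": {
            "AutoScalingRollingUpdate": {
                "MinInstancesInService": "4",  # keep capacity while rolling
                "MaxBatchSize": "2",           # replace two instances at a time
                "PauseTime": "PT5M",           # wait 5 minutes between batches
            }
        },
        "Properties": {
            "LaunchConfigurationName": {"Ref": "LaunchConfig"},
            "MinSize": "4",
            "MaxSize": "8",
            # ... other required properties elided ...
        },
    }
}
print(json.dumps(asg_fragment, indent=2))
```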

NEW QUESTION 9
You have decided to migrate your application to the cloud. You cannot afford any downtime. You want to gradually migrate so that you can test the application with a small percentage of users and increase over time. Which of these options should you implement?

  • A. Use Direct Connect to route traffic to the on-premises location. In Direct Connect, configure the amount of traffic to be routed to the on-premises location.
  • B. Implement a Route 53 failover routing policy that sends traffic back to the on-premises application if the AWS application fails.
  • C. Configure an Elastic Load Balancer to distribute the traffic between the on-premises application and the AWS application.
  • D. Implement a Route 53 weighted routing policy that distributes the traffic between your on-premises application and the AWS application depending on weight.

Answer: D

Explanation:
Option A is incorrect because Direct Connect cannot control the flow of traffic.
Option B is incorrect because you want to split the traffic by percentage; failover directs all of the traffic to the backup servers.
Option C is incorrect because you cannot control the percentage distribution of traffic.
Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software.
For more information on the Routing policy please refer to the below link: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
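
A hedged boto3 sketch of the weighted records described above; the hosted zone ID, record names, and endpoints are placeholders:

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1EXAMPLE"  # hypothetical hosted zone

def weighted_record(set_id, target, weight):
    """Build an UPSERT change for one weighted CNAME record."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": set_id,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        },
    }

# Start by sending 10% of traffic to AWS and 90% on-premises, then
# adjust the weights over time as confidence grows.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={"Changes": [
        weighted_record("on-prem", "app.onprem.example.com", 90),
        weighted_record("aws", "my-elb-123.us-east-1.elb.amazonaws.com", 10),
    ]},
)
```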

NEW QUESTION 10
Your company has a set of resources hosted in AWS. Your IT Supervisor is concerned with the costs being incurred by the resources running in AWS and wants to optimize on the costs as much as possible. Which of the following ways could help achieve this efficiently? Choose 2 answers from the options given below.

  • A. Create CloudWatch alarms to monitor underutilized resources and either shut down or terminate resources which are not required.
  • B. Use the Trusted Advisor to see underutilized resources.
  • C. Create a script which monitors all the running resources and calculates the costs accordingly. Then analyze those resources and see which can be optimized.
  • D. Create CloudWatch Logs to monitor underutilized resources and either shut down or terminate resources which are not required.

Answer: AB

Explanation:
You can use CloudWatch alarms to see if resources are below a threshold for long periods of time. If so, you can take the decision to either stop or terminate the resources.
For more information on CloudWatch alarms, please visit the below URL:
• http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html
In the Trusted Advisor, when you enable the Cost optimization section, you will get all sorts of checks which can be used to optimize the costs of your AWS resources.
[Exhibit image omitted]
For more information on the Trusted Advisor, please visit the below URL:
• https://aws.amazon.com/premiumsupport/trustedadvisor/
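
As one concrete (hypothetical) way to act on option A, a CloudWatch alarm can stop an underutilized instance automatically using the built-in EC2 stop action:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Stop a (placeholder) instance automatically when its average CPU stays
# below 5% for 24 consecutive hours.
cloudwatch.put_metric_alarm(
    AlarmName="stop-underutilized-instance",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=24,
    Threshold=5.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:automate:us-east-1:ec2:stop"],
)
```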

NEW QUESTION 11
When you implement a lifecycle hook in Auto Scaling, by default what is the time limit for which the instance will remain in a pending (wait) state?

  • A. 60 seconds
  • B. 5 minutes
  • C. 60 minutes
  • D. 120 minutes

Answer: C

Explanation:
The AWS Documentation mentions
By default, the instance remains in a wait state for one hour, and then Auto Scaling continues the launch or terminate process (Pending:Proceed or Terminating:Proceed). If you need more time, you can restart the timeout period by recording a heartbeat. If you finish before the timeout period ends, you can complete the lifecycle action, which continues the launch or termination process.
For more information on Auto Scaling lifecycle hooks, please see the below link:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
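
If the one-hour default is not enough, a heartbeat restarts the timeout period; a minimal boto3 sketch with placeholder names:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Restart the lifecycle hook's timeout period for an instance that is
# still being prepared (hook, group, and instance names are placeholders).
autoscaling.record_lifecycle_action_heartbeat(
    LifecycleHookName="launch-hook",
    AutoScalingGroupName="web-asg",
    InstanceId="i-0123456789abcdef0",
)
```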

NEW QUESTION 12
Which of the following is a container for metrics in CloudWatch?

  • A. MetricCollection
  • B. Namespaces
  • C. Packages
  • D. Locale

Answer: B

Explanation:
The AWS Documentation mentions the following:
CloudWatch namespaces are containers for metrics. Metrics in different namespaces are isolated from each other, so that metrics from different applications are not mistakenly aggregated into the same statistics. All AWS services that provide Amazon CloudWatch data use a namespace string beginning with "AWS/". When you create custom metrics, you must also specify a namespace as a container for those metrics.
For more information on CloudWatch namespaces, please visit the below URL: http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/aws-namespaces.html
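
A short boto3 sketch of publishing a custom metric into an assumed namespace ("MyApp/Orders" is just an example):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Publishing a custom metric requires naming the namespace that contains it.
cloudwatch.put_metric_data(
    Namespace="MyApp/Orders",
    MetricData=[{
        "MetricName": "OrdersProcessed",
        "Value": 42,
        "Unit": "Count",
    }],
)
```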

NEW QUESTION 13
Your company is using an Auto Scaling Group to scale out and scale in instances. There is an expectation of a peak in traffic every Monday at 8am. The traffic is then expected to come down before the weekend, on Friday at 5pm. How should you configure Auto Scaling in this scenario?

  • A. Create dynamic scaling policies to scale up on Monday and scale down on Friday
  • B. Create a scheduled policy to scale up on Friday and scale down on Monday
  • C. Create a scheduled policy to scale up on Monday and scale down on Friday
  • D. Manually add instances to the Auto Scaling Group on Monday and remove them on Friday

Answer: C

Explanation:
The AWS Documentation mentions the following for Scheduled scaling
Scaling based on a schedule allows you to scale your application in response to predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can plan your scaling activities based on the predictable traffic patterns of your web application.
For more information on scheduled scaling for Auto Scaling, please visit the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/schedule_time.html
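
A minimal boto3 sketch of the two scheduled actions (times are UTC cron expressions; the group name and capacities are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale up every Monday at 08:00 UTC.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="monday-scale-up",
    Recurrence="0 8 * * 1",
    DesiredCapacity=10,
)

# Scale down every Friday at 17:00 UTC.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="friday-scale-down",
    Recurrence="0 17 * * 5",
    DesiredCapacity=2,
)
```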

NEW QUESTION 14
You have created a DynamoDB table for an application that needs to support thousands of users. You need to ensure that each user can only access their own data in a particular table. Many users already have accounts with a third-party identity provider, such as Facebook, Google, or Login with Amazon. How would you implement this requirement?
Choose 2 answers from the options given below.

  • A. Create an IAM User for all users so that they can access the application.
  • B. Use Web identity federation and register your application with a third-party identity provider such as Google, Amazon, or Facebook.
  • C. Create an IAM role which has specific access to the DynamoDB table.
  • D. Use a third-party identity provider such as Google, Facebook or Amazon so users can become an AWS IAM User with access to the application.

Answer: BC

Explanation:
The AWS Documentation mentions the following
With web identity federation, you don't need to create custom sign-in code or manage your own user identities. Instead, users of your app can sign in using a well-known identity provider (IdP), such as Login with Amazon, Facebook, Google, or any other OpenID Connect (OIDC)-compatible IdP, receive an authentication token, and then exchange that token for temporary security credentials in AWS that map to an IAM role with permissions to use the resources in your AWS account. Using an IdP helps you keep your AWS account secure, because you don't have to embed and distribute long-term security credentials with your application. For more information on web identity federation, please visit the below URL: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_oidc.html
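
A rough boto3 sketch of the token exchange (the role ARN is hypothetical, and the IdP token would come from your app's sign-in flow):

```python
import boto3

sts = boto3.client("sts")

# Token returned by the identity provider's sign-in flow (placeholder).
idp_token = "<token-from-identity-provider>"

# Exchange the IdP token for temporary credentials mapped to an IAM role.
resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/GameScoresAccessRole",
    RoleSessionName="app-user-session",
    WebIdentityToken=idp_token,
)
creds = resp["Credentials"]

# The app then accesses DynamoDB with only the role's permissions.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```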

NEW QUESTION 15
A company has developed a Ruby on Rails content management platform. Currently, OpsWorks with several stacks for dev, staging, and production is being used to deploy and manage the application. Now the company wants to start using Python instead of Ruby. How should the company manage the new deployment? Choose the correct answer from the options below

  • A. Update the existing stack with the Python application code and deploy the application using the deploy lifecycle action to implement the application code.
  • B. Create a new stack that contains a new layer with the Python code. To cut over to the new stack, the company should consider using a Blue/Green deployment.
  • C. Create a new stack that contains the Python application code and manage separate deployments of the application via the secondary stack using the deploy lifecycle action to implement the application code.
  • D. Create a new stack that contains the Python application code and manage separate deployments of the application via the secondary stack.

Answer: B

Explanation:
Blue/green deployment is a technique for releasing applications by shifting traffic between two identical environments running different versions of the application.
Blue/green deployments can mitigate common risks associated with deploying software, such as downtime and rollback capability
[Exhibit image omitted]
Please find below a link to a whitepaper on Blue/Green deployments:
• https://d0.awsstatic.com/whitepapers/AWS_Blue_Green_Deployments.pdf

NEW QUESTION 16
You are building a game high score table in DynamoDB. You will store each user's highest score for each game, with many games, all of which have relatively similar usage levels and numbers of players. You need to be able to look up the highest score for any game. What's the best DynamoDB key structure?

  • A. HighestScore as the hash/only key.
  • B. GameID as the hash key, HighestScore as the range key.
  • C. GameID as the hash/only key.
  • D. GameID as the range/only key.

Answer: B

Explanation:
It is always best to choose as the hash key a column that will have a wide range of values. This is also covered in the AWS documentation under "Choosing a Partition Key", which compares some common partition key schemas for provisioned throughput efficiency:
[Exhibit image omitted]
Next, since you need to sort by the highest score, you use HighestScore as the range (sort) key. For more information on table guidelines, please visit the below URL:
• http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GuidelinesForTables.html
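
A minimal boto3 sketch of looking up the top score with this key structure; the table and game names are assumed:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Assumed table: hash key GameID, range key HighestScore (option B).
table = boto3.resource("dynamodb").Table("GameScores")

# Descending sort on the range key plus Limit=1 returns the highest
# score for a given game in a single query.
resp = table.query(
    KeyConditionExpression=Key("GameID").eq("pacman"),
    ScanIndexForward=False,  # sort by HighestScore descending
    Limit=1,
)
top = resp["Items"][0] if resp["Items"] else None
print(top)
```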

NEW QUESTION 17
Your development team wants account-level access to production instances in order to do live debugging of a highly secure environment. Which of the following should you do?

  • A. Place the credentials provided by Amazon Elastic Compute Cloud (EC2) into a secure Amazon Simple Storage Service (S3) bucket with encryption enabled. Assign AWS Identity and Access Management (IAM) users to each developer so they can download the credentials file.
  • B. Place an internally created private key into a secure S3 bucket with server-side encryption using customer keys and configuration management, create a service account on all the instances using this private key, and assign IAM users to each developer so they can download the file.
  • C. Place each developer's own public key into a private S3 bucket, use instance profiles and configuration management to create a user account for each developer on all instances, and place the user's public keys into the appropriate account.
  • D. Place the credentials provided by Amazon EC2 onto an MFA encrypted USB drive, and physically share it with each developer so that the private key never leaves the office.

Answer: C

Explanation:
An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.
A private S3 bucket can be created for the developers' public keys, and configuration management can place each key into the appropriate user account on the instances.
Options A and D are invalid because the credentials should not be the ones provided by an AWS EC2 instance. Option B is invalid because you should not create a shared service account; each developer should get their own user account.
For more information on instance profiles, please refer to the below AWS documentation link:
• http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html

NEW QUESTION 18
As part of your deployment pipeline, you want to enable automated testing of your AWS CloudFormation template. What testing should be performed to enable faster feedback while minimizing costs and risk? Select three answers from the options given below

  • A. Use the AWS CloudFormation ValidateTemplate call to validate the syntax of the template.
  • B. Use the AWS CloudFormation ValidateTemplate call to validate the properties of resources defined in the template.
  • C. Validate the template's syntax using a general JSON parser.
  • D. Validate the AWS CloudFormation template against the official XSD schema definition published by Amazon Web Services.
  • E. Update the stack with the template. If the template fails, rollback will return the stack and its resources to exactly the same state.
  • F. When creating the stack, specify an Amazon SNS topic to which your testing system is subscribed. Your testing system runs tests when it receives notification that the stack is created or updated.

Answer: AEF

Explanation:
The AWS documentation mentions the following:
The aws cloudformation validate-template command is designed to check only the syntax of your template. It does not ensure that the property values that you have specified for a resource are valid for that resource, nor does it determine the number of resources that will exist when the stack is created.
To check the operational validity, you need to attempt to create the stack. There is no sandbox or test area for AWS CloudFormation stacks, so you are charged for the resources you create during testing. Option F is needed for notification.
For more information on CloudFormation template validation, please visit the link:
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-validate-template.html
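
A short boto3 sketch combining options A, E, and F (the template path, stack name, and SNS topic ARN are placeholders):

```python
import boto3

cloudformation = boto3.client("cloudformation")

# Your pipeline's template (placeholder path).
template_body = open("template.json").read()

# Syntax check only: catches malformed JSON/YAML and unknown sections,
# but not invalid property values.
cloudformation.validate_template(TemplateBody=template_body)

# Operational validity still requires actually creating the stack; the SNS
# topic lets the testing system react to stack events.
cloudformation.create_stack(
    StackName="pipeline-test-stack",
    TemplateBody=template_body,
    NotificationARNs=["arn:aws:sns:us-east-1:111122223333:stack-events"],
)
```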

NEW QUESTION 19
You are using lifecycle hooks in your Auto Scaling Group. Because there is a lifecycle hook, the instance is put in the Pending:Wait state, which means that it is not available to handle traffic yet. When the instance enters the wait state, other scaling actions are suspended. After some time, the instance state is changed to Pending:Proceed, and finally InService, where the instances that are part of the Auto Scaling Group can start serving up traffic. But you notice that the bootstrapping process on the instances finishes much earlier, long before the state is changed to Pending:Proceed.
What can you do to ensure the instances are placed in the right state after the bootstrapping process is complete?

  • A. Use the complete-lifecycle-action call to complete the lifecycle action. Run this command from another EC2 Instance.
  • B. Use the complete-lifecycle-action call to complete the lifecycle action. Run this command from the Command Line Interface.
  • C. Use the complete-lifecycle-action call to complete the lifecycle action. Run this command from the Simple Notification Service.
  • D. Use the complete-lifecycle-action call to complete the lifecycle action. Run this command from an SQS queue.

Answer: B

Explanation:
The AWS Documentation mentions the following:
If you finish the custom action before the timeout period ends, use the complete-lifecycle-action command so that the Auto Scaling group can continue launching or terminating the instance. You can specify the lifecycle action token, as shown in the following command:
[Exhibit image omitted]
For more information on lifecycle hooks, please refer to the below URL:
• http://docs.aws.amazon.com/autoscaling/latest/userguide/lifecycle-hooks.html
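
A minimal boto3 equivalent of the CLI call, run at the end of the bootstrap script (all names are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Tell Auto Scaling that bootstrapping is done so the instance moves on
# to Pending:Proceed and then InService.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="launch-hook",
    AutoScalingGroupName="web-asg",
    InstanceId="i-0123456789abcdef0",
    LifecycleActionResult="CONTINUE",
)
```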

NEW QUESTION 20
You are working for a startup company that is building an application that receives large amounts of data. Unfortunately, current funding has left the start-up short on cash, cannot afford to purchase thousands of dollars of storage hardware, and has opted to use AWS. Which services would you implement in order to store a virtually unlimited amount of data without any effort to scale when demand unexpectedly increases? Choose the correct answer from the options below

  • A. Amazon S3, because it provides unlimited amounts of storage, scales automatically, and is highly available and durable
  • B. Amazon Glacier, to keep costs low for storage and scale infinitely
  • C. Amazon Import/Export, because Amazon assists in migrating large amounts of data to Amazon S3
  • D. Amazon EC2, because EBS volumes can scale to hold any amount of data and, when used with Auto Scaling, can be designed for fault tolerance and high availability

Answer: A

Explanation:
The best option is to use S3, because you can host a virtually unlimited amount of data in S3 without provisioning capacity, and it is the best storage option provided by AWS for this scenario.
For more information on S3, please refer to the below link:
• http://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html
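
A minimal boto3 sketch of writing data to S3; the bucket and key names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# S3 requires no capacity provisioning; you simply keep writing objects
# as the application's data grows.
s3.upload_file("incoming/photo.jpg", "my-photo-app-data", "uploads/photo.jpg")
```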

NEW QUESTION 21
You have implemented a system to automate deployments of your configuration and application dynamically after an Amazon EC2 instance in an Auto Scaling group is launched. Your system uses a configuration management tool that works in a standalone configuration, where there is no master node. Due to the volatility of application load, new instances must be brought into service within three minutes of the launch of the instance operating system. The deployment stages take the following times to complete:
1) Installing configuration management agent: 2mins
2) Configuring instance using artifacts: 4mins
3) Installing application framework: 15mins
4) Deploying application code: 1min
What process should you use to automate the deployment using this type of standalone agent configuration?

  • A. Configure your Auto Scaling launch configuration with an Amazon EC2 UserData script to install the agent, pull configuration artifacts and application code from an Amazon S3 bucket, and then execute the agent to configure the infrastructure and application.
  • B. Build a custom Amazon Machine Image that includes all components pre-installed, including an agent, configuration artifacts, application frameworks, and code. Create a startup script that executes the agent to configure the system on startup.
  • C. Build a custom Amazon Machine Image that includes the configuration management agent and application framework pre-installed. Configure your Auto Scaling launch configuration with an Amazon EC2 UserData script to pull configuration artifacts and application code from an Amazon S3 bucket, and then execute the agent to configure the system.
  • D. Create a web service that polls the Amazon EC2 API to check for new instances that are launched in an Auto Scaling group. When it recognizes a new instance, execute a remote script via SSH to install the agent, SCP the configuration artifacts and application code, and finally execute the agent to configure the system.

Answer: B

Explanation:
Since the new instances need to be brought into service within 3 minutes, the best option is to pre-bake all the components into an AMI. If you use the UserData option, installing and configuring the various components takes far longer than 3 minutes based on the times given in the question (2 + 4 + 15 + 1 = 22 minutes).
For more information on AMI design, please see the below link:
• https://aws.amazon.com/answers/configuration-management/aws-ami-design/
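
A rough boto3 sketch of the pre-baking flow (the instance ID and names are placeholders; a real pipeline would also wait for the AMI to become available before using it):

```python
import boto3

ec2 = boto3.client("ec2")
autoscaling = boto3.client("autoscaling")

# Bake an AMI from a fully configured instance: agent, configuration
# artifacts, application framework, and code all pre-installed.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="app-prebaked-v1",
)

# New launches from this AMI only need the startup script to run the
# agent, which fits comfortably inside the three-minute window.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-prebaked-v1-lc",
    ImageId=image["ImageId"],
    InstanceType="m4.large",
)
```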

NEW QUESTION 22
Which of the following resources is used in CloudFormation to create nested stacks?

  • A. AWS::CloudFormation::Stack
  • B. AWS::CloudFormation::Nested
  • C. AWS::CloudFormation::NestedStack
  • D. AWS::CloudFormation::StackNest

Answer: A

Explanation:
The AWS Documentation mentions the following
A nested stack is a stack that you create within another stack by using the AWS::CloudFormation::Stack resource. With nested stacks, you deploy and manage all resources from a single stack. You can use outputs from one stack in the nested stack group as inputs to another stack in the group.
For more information on the AWS::CloudFormation::Stack resource, please refer to the below link: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-stack-exports.html
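
For illustration, a minimal parent template embedding a nested stack, expressed as a Python dict; the TemplateURL and parameters are placeholders:

```python
import json

# Sketch of a parent template that nests another stack.
parent_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "NetworkStack": {
            "Type": "AWS::CloudFormation::Stack",
            "Properties": {
                # Child template stored in S3 (placeholder URL).
                "TemplateURL": "https://s3.amazonaws.com/my-templates/network.json",
                "Parameters": {"VpcCidr": "10.0.0.0/16"},
            },
        }
    },
}
print(json.dumps(parent_template, indent=2))
```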

NEW QUESTION 23
Your public website uses a load balancer and an Auto Scaling group in a virtual private cloud. Your chief security officer has asked you to set up a monitoring system that quickly detects and alerts your team when a large sudden traffic increase occurs. How should you set this up?

  • A. Set up an Amazon CloudWatch alarm for the Elastic Load Balancing NetworkIn metric and then use Amazon SNS to alert your team.
  • B. Use an Amazon EMR job to run every thirty minutes, analyze the Elastic Load Balancing access logs in a batch manner to detect a sharp increase in traffic, and then use the Amazon Simple Email Service to alert your team.
  • C. Use an Amazon EMR job to run every thirty minutes, analyze the CloudWatch logs from your application Amazon EC2 instances in a batch manner to detect a sharp increase in traffic, and then use Amazon SNS SMS notification to alert your team.
  • D. Set up an Amazon CloudWatch alarm for the Amazon EC2 NetworkIn metric for the Auto Scaling group and then use Amazon SNS to alert your team.
  • E. Set up a cron job to actively monitor the AWS CloudTrail logs for increased traffic and use Amazon SNS to alert your team.

Answer: D

Explanation:
The below snapshot from the AWS documentation gives details on the NetworkIn metric:
[Exhibit image omitted]
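
A hedged boto3 sketch of option D; the group name, threshold, and SNS topic ARN are placeholders, and the threshold must be tuned to your traffic baseline:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on a sudden traffic spike across the Auto Scaling group and
# notify the team via SNS.
cloudwatch.put_metric_alarm(
    AlarmName="sudden-traffic-spike",
    Namespace="AWS/EC2",
    MetricName="NetworkIn",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=5_000_000_000.0,  # bytes per minute; tune for your baseline
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
)
```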

NEW QUESTION 24
A custom script needs to be passed to new Amazon Linux instances created in your Auto Scaling group. Which feature allows you to accomplish this?

  • A. User data
  • B. EC2Config service
  • C. IAM roles
  • D. AWS Config

Answer: A

Explanation:
When you configure an instance during creation, you can add custom scripts in the User data section. So in Step 3 of creating an instance, in the Advanced Details section, you can enter custom scripts in the User data field. The below script installs Perl during the creation of the EC2 instance.
[Exhibit image omitted]
For more information on user data, please refer to the URL:
• http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
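
A minimal boto3 sketch of launching an instance with a user data script (the AMI ID is a placeholder; the script mirrors the Perl install shown in the exhibit):

```python
import boto3

ec2 = boto3.client("ec2")

# The user data script runs at first boot of the instance.
user_data = """#!/bin/bash
yum install -y perl
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Amazon Linux AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```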

NEW QUESTION 25
You are planning on using AWS CodeDeploy in your AWS environment. Which of the below features of AWS CodeDeploy can be used to specify scripts to be run on each instance at various stages of the deployment process?

  • A. AppSpec file
  • B. CodeDeploy file
  • C. Config file
  • D. Deploy file

Answer: A

Explanation:
The AWS Documentation mentions the following on AWS CodeDeploy:
An application specification file (AppSpec file), which is unique to AWS CodeDeploy, is a YAML-formatted file used to:
  • Map the source files in your application revision to their destinations on the instance.
  • Specify custom permissions for deployed files.
  • Specify scripts to be run on each instance at various stages of the deployment process.
For more information on AWS CodeDeploy, please refer to the URL: http://docs.aws.amazon.com/codedeploy/latest/userguide/application-specification-files.html

NEW QUESTION 26
......

100% Valid and Newest Version DOP-C01 Questions & Answers shared by 2passeasy, Get Full Dumps HERE: https://www.2passeasy.com/dumps/DOP-C01/ (New 116 Q&As)