AWS-Certified-Database-Specialty | Improve AWS-Certified-Database-Specialty Test Question For AWS Certified Database - Specialty Certification

Your success in the Amazon AWS-Certified-Database-Specialty exam is our sole target, and we develop all our AWS-Certified-Database-Specialty braindumps in a way that facilitates the attainment of this target. Not only is our AWS-Certified-Database-Specialty study material the best you can find, it is also the most detailed and the most up to date. AWS-Certified-Database-Specialty Practice Exams for Amazon AWS-Certified-Database-Specialty are written to the highest standards of technical accuracy.

Online Amazon AWS-Certified-Database-Specialty free dumps demo below:

NEW QUESTION 1
A database expert is responsible for building a highly available online transaction processing (OLTP) solution that makes use of Amazon RDS for MySQL production databases. Disaster recovery criteria include a cross-regional deployment and an RPO and RTO of 5 and 30 minutes, respectively.
What should the database professional do to ensure that the database meets the criteria for high availability and disaster recovery?

  • A. Use a Multi-AZ deployment in each Region.
  • B. Use read replica deployments in all Availability Zones of the secondary Region.
  • C. Use Multi-AZ and read replica deployments within a Region.
  • D. Use Multi-AZ and deploy a read replica in a secondary Region.

Answer: D
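
The selected design pairs in-Region high availability (Multi-AZ) with cross-Region disaster recovery (a read replica in a secondary Region that can be promoted during a Regional outage). Below is a minimal boto3 sketch of that setup; the instance identifiers, Regions, and account ID are hypothetical.

```python
import boto3

SOURCE_REGION = "us-east-1"   # primary Region (hypothetical)
DR_REGION = "us-west-2"       # secondary Region for disaster recovery (hypothetical)
SOURCE_DB_ARN = "arn:aws:rds:us-east-1:123456789012:db:orders-db"  # hypothetical ARN

# 1) Enable Multi-AZ on the production instance for high availability.
rds_primary = boto3.client("rds", region_name=SOURCE_REGION)
rds_primary.modify_db_instance(
    DBInstanceIdentifier="orders-db",
    MultiAZ=True,
    ApplyImmediately=True,
)

# 2) Create a cross-Region read replica in the DR Region. Promoting this replica
#    during a Regional outage is what satisfies the RPO/RTO targets.
rds_dr = boto3.client("rds", region_name=DR_REGION)
rds_dr.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-dr-replica",
    SourceDBInstanceIdentifier=SOURCE_DB_ARN,
    SourceRegion=SOURCE_REGION,   # lets boto3 handle the cross-Region presigned URL
)
# During DR: rds_dr.promote_read_replica(DBInstanceIdentifier="orders-db-dr-replica")
```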

NEW QUESTION 2
A company is using Amazon RDS for MySQL to redesign its business application. A Database Specialist has noticed that the Development team is restoring their MySQL database multiple times a day when Developers make mistakes in their schema updates. The Developers sometimes need to wait hours for the restores to complete.
Multiple team members are working on the project, making it difficult to find the correct restore point for each mistake.
Which approach should the Database Specialist take to reduce downtime?

  • A. Deploy multiple read replicas and have the team members make changes to separate replica instances
  • B. Migrate to Amazon RDS for SQL Server, take a snapshot, and restore from the snapshot
  • C. Migrate to Amazon Aurora MySQL and enable the Aurora Backtrack feature
  • D. Enable the Amazon RDS for MySQL Backtrack feature

Answer: C

Explanation:
"Amazon Aurora, a fully-managed relational database service in AWS, is now offering a backtrack feature. With Amazon Aurora with MySQL compatibility, users can backtrack, or "rewind", a database cluster to a specific point in time, without restoring data from a backup. The backtrack process allows a point in time to be specified with one second resolution, and the rewind process typically takes minutes. This new feature facilitates developers in undoing mistakes like deleting data inappropriately or dropping the wrong table."

NEW QUESTION 3
A company is building a software as a service application. As part of the new user sign-on workflow, a Python script invokes the CreateTable operation using the Amazon DynamoDB API. After the call returns, the script attempts to call PutItem.
Occasionally, the PutItem request fails with a ResourceNotFoundException error, which causes the workflow to fail. The development team has confirmed that the same table name is used in the two API calls.
How should a database specialist fix this issue?

  • A. Add an allow statement for the dynamodb:PutItem action in a policy attached to the role used by the application creating the table.
  • B. Set the StreamEnabled property of the StreamSpecification parameter to true, then call PutItem.
  • C. Change the application to call DescribeTable periodically until the TableStatus is ACTIVE, then call PutItem.
  • D. Add a ConditionExpression parameter in the PutItem request.

Answer: C

Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html
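
CreateTable is asynchronous, so PutItem can fail with ResourceNotFoundException until the table reaches ACTIVE. A minimal boto3 sketch of option C is shown below; the table name, key schema, and item are hypothetical.

```python
import time
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")
TABLE_NAME = "user-signups"   # hypothetical table name

dynamodb.create_table(
    TableName=TABLE_NAME,
    AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# CreateTable returns while the table is still CREATING, so poll DescribeTable
# until TableStatus is ACTIVE (boto3's "table_exists" waiter does the same thing).
while dynamodb.describe_table(TableName=TABLE_NAME)["Table"]["TableStatus"] != "ACTIVE":
    time.sleep(5)

# Only now is it safe to write the first item for the new user.
dynamodb.put_item(
    TableName=TABLE_NAME,
    Item={"user_id": {"S": "user-123"}},
)
```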

NEW QUESTION 4
A company is using an Amazon RDS for MySQL DB instance for its internal applications. A security audit shows that the DB instance is not encrypted at rest. The company’s application team needs to encrypt the DB instance.
What should the team do to meet this requirement?

  • A. Stop the DB instance and modify it to enable encryption. Apply this setting immediately without waiting for the next scheduled RDS maintenance window.
  • B. Stop the DB instance and create an encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
  • C. Stop the DB instance and create a snapshot. Copy the snapshot into another encrypted snapshot. Restore the encrypted snapshot to a new encrypted DB instance. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.
  • D. Create an encrypted read replica of the DB instance. Promote the read replica to master. Delete the original DB instance, and update the applications to point to the new encrypted DB instance.

Answer: C
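
An existing unencrypted RDS instance cannot be encrypted in place, which is why answer C snapshots the instance, copies the snapshot with encryption, and restores a new encrypted instance. A hedged boto3 sketch of that sequence follows; the identifiers and KMS key are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")
KMS_KEY_ID = "arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555"  # hypothetical

# 1) Snapshot the (unencrypted) instance.
rds.create_db_snapshot(
    DBInstanceIdentifier="internal-app-db",
    DBSnapshotIdentifier="internal-app-db-unencrypted-snap",
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="internal-app-db-unencrypted-snap")

# 2) Copy the snapshot with encryption enabled (encryption is applied on the copy).
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="internal-app-db-unencrypted-snap",
    TargetDBSnapshotIdentifier="internal-app-db-encrypted-snap",
    KmsKeyId=KMS_KEY_ID,
)
rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier="internal-app-db-encrypted-snap")

# 3) Restore the encrypted copy to a new, encrypted DB instance; afterwards,
#    repoint the application and delete the original instance.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="internal-app-db-encrypted",
    DBSnapshotIdentifier="internal-app-db-encrypted-snap",
)
```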

NEW QUESTION 5
A company is developing a multi-tier web application hosted on AWS using Amazon Aurora as the database. The application needs to be deployed to production and other non-production environments. A Database Specialist needs to specify different MasterUsername and MasterUserPassword properties in the AWS CloudFormation templates used for automated deployment. The CloudFormation templates are version controlled in the company’s code repository. The company also needs to meet compliance requirement by routinely rotating its database master password for production.
What is most secure solution to store the master password?

  • A. Store the master password in a parameter file in each environment. Reference the environment-specific parameter file in the CloudFormation template.
  • B. Encrypt the master password using an AWS KMS key. Store the encrypted master password in the CloudFormation template.
  • C. Use the secretsmanager dynamic reference to retrieve the master password stored in AWS Secrets Manager and enable automatic rotation.
  • D. Use the ssm dynamic reference to retrieve the master password stored in the AWS Systems Manager Parameter Store and enable automatic rotation.

Answer: C

Explanation:
"By using the secure string support in CloudFormation with dynamic references you can better maintain your infrastructure as code. You’ll be able to avoid hard coding passwords into your templates and you can keep these runtime configuration parameters separated from your code. Moreover, when properly used, secure strings will help keep your development and production code as similar as possible, while continuing to make your infrastructure code suitable for continuous deployment pipelines."
https://aws.amazon.com/blogs/mt/using-aws-systems-manager-parameter-store-secure-string-parameters-in-aws https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database
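
A minimal sketch of how the secretsmanager dynamic reference looks inside a template, wrapped in a boto3 create_stack call for illustration; the secret name, stack name, and cluster properties are hypothetical, and rotation would be configured on the secret itself in Secrets Manager.

```python
import boto3

# Aurora cluster fragment using a secretsmanager dynamic reference, so the password
# never appears in the version-controlled template. The secret name is hypothetical.
TEMPLATE_BODY = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppDatabase:
    Type: AWS::RDS::DBCluster
    Properties:
      Engine: aurora-mysql
      MasterUsername: '{{resolve:secretsmanager:prod/app/aurora-master:SecretString:username}}'
      MasterUserPassword: '{{resolve:secretsmanager:prod/app/aurora-master:SecretString:password}}'
"""

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
cloudformation.create_stack(StackName="app-database-prod", TemplateBody=TEMPLATE_BODY)
```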

NEW QUESTION 6
A database professional is developing an application that will respond to single-instance requests. The program will query large amounts of client data and provide the results to end users.
These reports may include a variety of fields. The database specialist wants to enable users to query the database using any of the fields offered.
During peak periods, the database's traffic volume will be significant yet variable. However, the database will see little activity during the rest of the day.
Which approach will be the most cost-effective in meeting these requirements?

  • A. Amazon DynamoDB with provisioned capacity mode and auto scaling
  • B. Amazon DynamoDB with on-demand capacity mode
  • C. Amazon Aurora with auto scaling enabled
  • D. Amazon Aurora in a serverless mode

Answer: D

Explanation:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Limits.html#limits-items
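
Answer D maps to Aurora Serverless, which scales capacity up for the variable peak traffic and can pause during the quiet part of the day. A hedged boto3 sketch of creating a Serverless v1 cluster follows; the identifier, credentials, and capacity limits are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="client-reports-serverless",   # hypothetical identifier
    Engine="aurora-mysql",
    EngineMode="serverless",                            # Aurora Serverless v1
    # EngineVersion may need to be pinned to a Serverless-v1-compatible release.
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",          # use Secrets Manager in practice
    ScalingConfiguration={
        "MinCapacity": 2,              # scales down for the quiet part of the day
        "MaxCapacity": 64,             # absorbs peak reporting traffic
        "AutoPause": True,
        "SecondsUntilAutoPause": 600,
    },
)
```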

NEW QUESTION 7
An ecommerce company is using Amazon DynamoDB as the backend for its order-processing application.
The steady increase in the number of orders is resulting in increased DynamoDB costs. Order verification and reporting perform many repeated GetItem functions that pull similar datasets, and this read activity is contributing to the increased costs. The company wants to control these costs without significant development efforts.
How should a Database Specialist address these requirements?

  • A. Use AWS DMS to migrate data from DynamoDB to Amazon DocumentDB
  • B. Use Amazon DynamoDB Streams and Amazon Kinesis Data Firehose to push the data into Amazon Redshift
  • C. Use an Amazon ElastiCache for Redis in front of DynamoDB to boost read performance
  • D. Use DynamoDB Accelerator to offload the reads

Answer: D

Explanation:
https://docs.amazonaws.cn/en_us/amazondynamodb/latest/developerguide/DAX.html
"Applications that are read-intensive, but are also cost-sensitive. With DynamoDB, you provision the number of reads per second that your application requires. If read activity increases, you can increase your tables' provisioned read throughput (at an additional cost). Or, you can offload the activity from your application to a DAX cluster, and reduce the number of read capacity units that you need to purchase otherwise."

NEW QUESTION 8
A corporation intends to migrate a 500-GB Oracle database to Amazon Aurora PostgreSQL using the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS). The database does not have any stored procedures, but does contain several very large or partitioned tables. Because the application is vital to the company, it is preferable to migrate with minimal downtime.
Which measures should a database professional perform in combination to expedite the transfer process? (Select three.)

  • A. Use the AWS SCT data extraction agent to migrate the schema from Oracle to Aurora PostgreSQL.
  • B. For the large tables, change the setting for the maximum number of tables to load in parallel and perform a full load using AWS DMS.
  • C. For the large tables, create a table settings rule with a parallel load option in AWS DMS, then perform a full load using DMS.
  • D. Use AWS DMS to set up change data capture (CDC) for continuous replication until the cutover date.
  • E. Use AWS SCT to convert the schema from Oracle to Aurora PostgreSQL.
  • F. Use AWS DMS to convert the schema from Oracle to Aurora PostgreSQL and for continuous replication.

Answer: CDE
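
Answers C and D translate into a DMS table-mappings rule of type table-settings with a parallel-load option, attached to a task whose migration type includes CDC. A hedged boto3 sketch follows; the ARNs, schema, and table names are hypothetical.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

# Table mappings: select the schema, and load the large partitioned table in parallel.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-sales-schema",
            "object-locator": {"schema-name": "SALES", "table-name": "%"},
            "rule-action": "include",
        },
        {
            "rule-type": "table-settings",
            "rule-id": "2",
            "rule-name": "parallel-load-orders",
            "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
            "parallel-load": {"type": "partitions-auto"},   # load partitions in parallel
        },
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",       # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",       # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",  # hypothetical
    MigrationType="full-load-and-cdc",    # full load plus ongoing replication until cutover
    TableMappings=json.dumps(table_mappings),
)
```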

NEW QUESTION 9
A Database Specialist is performing a proof of concept with Amazon Aurora using a small instance to confirm a simple database behavior. When loading a large dataset and creating the index, the Database Specialist encounters the following error message from Aurora:
ERROR: could not write block 7507718 of temporary file: No space left on device
What is the cause of this error and what should the Database Specialist do to resolve this issue?

  • A. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to modify the workload to load the data slowly.
  • B. The scaling of Aurora storage cannot catch up with the data loading. The Database Specialist needs to enable Aurora storage scaling.
  • C. The local storage used to store temporary tables is full. The Database Specialist needs to scale up the instance.
  • D. The local storage used to store temporary tables is full. The Database Specialist needs to enable local storage scaling.

Answer: C
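
The error refers to the instance's local (temporary) storage rather than the shared Aurora cluster volume, which is why scaling up the instance class resolves it. A hedged boto3 sketch for checking the relevant CloudWatch metric before retrying the index build; the instance identifier is hypothetical.

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# FreeLocalStorage tracks the instance-local storage Aurora uses for temporary tables and sorts.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="FreeLocalStorage",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "poc-aurora-instance"}],  # hypothetical
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Minimum"],
)
for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"], "bytes free")
```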

NEW QUESTION 10
A company is hosting critical business data in an Amazon Redshift cluster. Due to the sensitive nature of the data, the cluster is encrypted at rest using AWS KMS. As a part of disaster recovery requirements, the company needs to copy the Amazon Redshift snapshots to another Region.
Which steps should be taken in the AWS Management Console to meet the disaster recovery requirements?

  • A. Create a new KMS customer master key in the source Region. Switch to the destination Region, enable Amazon Redshift cross-Region snapshots, and use the KMS key of the source Region.
  • B. Create a new IAM role with access to the KMS key. Enable Amazon Redshift cross-Region replication using the new IAM role, and use the KMS key of the source Region.
  • C. Enable Amazon Redshift cross-Region snapshots in the source Region, and create a snapshot copy grant and use a KMS key in the destination Region.
  • D. Create a new KMS customer master key in the destination Region and create a new IAM role with access to the new KMS key. Enable Amazon Redshift cross-Region replication in the source Region and use the KMS key of the destination Region.

Answer: C

Explanation:
If you want to enable cross-Region snapshot copy for an AWS KMS–encrypted cluster, you must configure a snapshot copy grant for a root key in the destination AWS Region. In the source Region, configure the cross-Region snapshot for the AWS KMS–encrypted cluster; for the destination AWS Region, choose the Region to which to copy snapshots.
https://docs.aws.amazon.com/redshift/latest/mgmt/managing-snapshots-console.html#xregioncopy-kms-encrypt
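
In API terms, the console steps in answer C amount to creating a snapshot copy grant for a KMS key in the destination Region and then enabling cross-Region snapshot copy on the cluster in the source Region. A hedged boto3 sketch; the cluster name, Regions, and key are hypothetical.

```python
import boto3

SOURCE_REGION = "us-east-1"   # where the encrypted cluster runs (hypothetical)
DEST_REGION = "us-west-2"     # disaster recovery Region (hypothetical)
DEST_KMS_KEY_ID = "arn:aws:kms:us-west-2:123456789012:key/aaaa-bbbb"   # hypothetical key in destination

# 1) In the destination Region: create a snapshot copy grant for the destination KMS key.
redshift_dest = boto3.client("redshift", region_name=DEST_REGION)
redshift_dest.create_snapshot_copy_grant(
    SnapshotCopyGrantName="dr-copy-grant",
    KmsKeyId=DEST_KMS_KEY_ID,
)

# 2) In the source Region: enable cross-Region snapshot copy, referencing the grant.
redshift_source = boto3.client("redshift", region_name=SOURCE_REGION)
redshift_source.enable_snapshot_copy(
    ClusterIdentifier="business-data-cluster",   # hypothetical cluster
    DestinationRegion=DEST_REGION,
    RetentionPeriod=7,                           # days to keep copied automated snapshots
    SnapshotCopyGrantName="dr-copy-grant",
)
```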

NEW QUESTION 11
A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.
Only certain on-premises corporate network IPs should connect to the DB instance. Connectivity is allowed from the corporate network only.
Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)

  • A. Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
  • B. Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.
  • C. Move the DB instance to a private subnet using AWS DMS.
  • D. Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.
  • E. Disable the publicly accessible setting.
  • F. Connect to the DB instance using private IPs and a VPN.

Answer: BEF

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.WorkingWithRDSInstanceinaVPC.ht
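
The three selected steps map directly onto API calls: restrict the DB security group to the corporate CIDR ranges, turn off the publicly accessible flag, and reach the instance over its private IP through the existing VPN. A hedged boto3 sketch; the group ID, CIDRs, and instance identifier are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

DB_SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # hypothetical security group on the DB instance
CORPORATE_CIDR = "203.0.113.0/24"                # hypothetical on-premises corporate range

# Allow PostgreSQL only from the corporate network...
ec2.authorize_security_group_ingress(
    GroupId=DB_SECURITY_GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "IpRanges": [{"CidrIp": CORPORATE_CIDR, "Description": "corporate network only"}],
    }],
)
# ...and remove the old open rule (hypothetical existing 0.0.0.0/0 rule).
ec2.revoke_security_group_ingress(
    GroupId=DB_SECURITY_GROUP_ID,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)

# Disable the publicly accessible setting so the instance resolves only to a private IP.
rds.modify_db_instance(
    DBInstanceIdentifier="user-data-postgres",   # hypothetical identifier
    PubliclyAccessible=False,
    ApplyImmediately=True,
)
```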

NEW QUESTION 12
A company is running a two-tier ecommerce application in one AWS account. The database tier is deployed using an Amazon RDS for MySQL Multi-AZ DB instance. A Developer mistakenly deleted the database in the production environment. The database has been restored, but this resulted in hours of downtime and lost revenue.
Which combination of changes in existing IAM policies should a Database Specialist make to prevent an error like this from happening in the future? (Choose three.)

  • A. Grant least privilege to groups, users, and roles
  • B. Allow all users to restore a database from a backup that will reduce the overall downtime to restore the database
  • C. Enable multi-factor authentication for sensitive operations to access sensitive resources and API operations
  • D. Use policy conditions to restrict access to selective IP addresses
  • E. Use AccessList Controls policy type to restrict users for database instance deletion
  • F. Enable AWS CloudTrail logging and Enhanced Monitoring

Answer: ACD

Explanation:
https://aws.amazon.com/blogs/database/using-iam-multifactor-authentication-with-amazon-rds/
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/security_iam_id-based-policy.html
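
A hedged sketch of what the combined policy changes might look like: a least-privilege allow statement, plus deny statements that block destructive RDS calls without MFA or from outside an approved IP range. The actions and CIDR are hypothetical.

```python
import json

# Hypothetical identity-based policy illustrating answers A, C, and D.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Least privilege: developers may only describe resources, not delete them.
            "Sid": "AllowReadOnlyRdsAccess",
            "Effect": "Allow",
            "Action": ["rds:Describe*", "rds:ListTagsForResource"],
            "Resource": "*",
        },
        {   # Sensitive operations require MFA...
            "Sid": "DenyDeleteWithoutMfa",
            "Effect": "Deny",
            "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster"],
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        },
        {   # ...and must originate from the approved corporate IP range.
            "Sid": "DenyDeleteFromUnknownIps",
            "Effect": "Deny",
            "Action": ["rds:DeleteDBInstance", "rds:DeleteDBCluster"],
            "Resource": "*",
            "Condition": {"NotIpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
        },
    ],
}
print(json.dumps(policy, indent=2))   # e.g. supply as PolicyDocument to iam.create_policy()
```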

NEW QUESTION 13
After restoring an Amazon RDS snapshot from 3 days ago, a company’s Development team cannot connect to the restored RDS DB instance. What is the likely cause of this problem?

  • A. The restored DB instance does not have Enhanced Monitoring enabled
  • B. The production DB instance is using a custom parameter group
  • C. The restored DB instance is using the default security group
  • D. The production DB instance is using a custom option group

Answer: C

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-cannot-connect/ https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_RestoreFromSnapshot.html
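
Restoring from a snapshot does not carry the original VPC security groups over, so the restored instance comes up with the default security group and the team cannot connect. A hedged boto3 sketch of reattaching the correct groups after the restore; the identifiers are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Reattach the application's security group (and parameter group, if customized)
# to the instance restored from the 3-day-old snapshot.
rds.modify_db_instance(
    DBInstanceIdentifier="restored-dev-db",          # hypothetical restored instance
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],    # hypothetical application DB security group
    DBParameterGroupName="app-mysql-params",         # hypothetical custom parameter group
    ApplyImmediately=True,
)
```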

NEW QUESTION 14
To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3.
Which solution meets these requirements?

  • A. Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.
  • B. Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
  • C. Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
  • D. Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.

Answer: B

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.htm
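
A hedged sketch of the Lambda function in answer B: it runs an Aurora MySQL SELECT ... INTO OUTFILE S3 statement to export rows older than one year and then deletes them, and would be triggered weekly by an EventBridge rule. The connection details, table, and bucket are hypothetical, and pymysql is assumed to be packaged with the function.

```python
import datetime
import pymysql   # assumption: bundled with the Lambda deployment package

def handler(event, context):
    cutoff = (datetime.date.today() - datetime.timedelta(days=365)).isoformat()
    # Hypothetical bucket/prefix; the cluster's aurora_select_into_s3_role must be configured.
    prefix = f"s3-us-east-1://archival-data-bucket/orders/{cutoff}"

    conn = pymysql.connect(
        host="app-aurora-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical
        user="archiver",
        password="placeholder",   # retrieve from Secrets Manager in practice
        database="app",
    )
    try:
        with conn.cursor() as cur:
            # Aurora MySQL exports the result set directly to S3.
            cur.execute(
                f"SELECT * FROM orders WHERE created_at < %s "
                f"INTO OUTFILE S3 '{prefix}' "
                f"FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'",
                (cutoff,),
            )
            # Remove the archived rows from the cluster.
            cur.execute("DELETE FROM orders WHERE created_at < %s", (cutoff,))
        conn.commit()
    finally:
        conn.close()
```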

NEW QUESTION 15
A huge gaming firm is developing a centralized method for storing the status of various online games' user sessions. The workload requires low-latency key-value storage and will consist of an equal number of reads and writes. Across the games' geographically dispersed user base, data should be written to the AWS Region nearest to the user. The design should reduce the burden associated with managing data replication across Regions.
Which solution satisfies these criteria?

  • A. Amazon RDS for MySQL with multi-Region read replicas
  • B. Amazon Aurora global database
  • C. Amazon RDS for Oracle with GoldenGate
  • D. Amazon DynamoDB global tables

Answer: D

Explanation:
https://aws.amazon.com/dynamodb/?nc1=h_ls
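
Answer D relies on DynamoDB global tables, which replicate writes across Regions automatically so each player writes to the nearest Region. A hedged sketch using the original (2017) global tables API; the table name and Regions are hypothetical, and each regional table needs streams enabled before being linked.

```python
import boto3

REGIONS = ["us-east-1", "eu-west-1", "ap-northeast-1"]   # hypothetical Regions near the player base
TABLE_NAME = "game-sessions"                              # hypothetical table name

# 1) Create an identical table, with streams enabled, in every participating Region.
for region in REGIONS:
    client = boto3.client("dynamodb", region_name=region)
    client.create_table(
        TableName=TABLE_NAME,
        AttributeDefinitions=[{"AttributeName": "session_id", "AttributeType": "S"}],
        KeySchema=[{"AttributeName": "session_id", "KeyType": "HASH"}],
        BillingMode="PAY_PER_REQUEST",
        StreamSpecification={"StreamEnabled": True, "StreamViewType": "NEW_AND_OLD_IMAGES"},
    )
    client.get_waiter("table_exists").wait(TableName=TABLE_NAME)

# 2) Link the tables into a global table (2017 version of the API; newer tables can
#    instead add replicas with update_table and ReplicaUpdates).
boto3.client("dynamodb", region_name=REGIONS[0]).create_global_table(
    GlobalTableName=TABLE_NAME,
    ReplicationGroup=[{"RegionName": region} for region in REGIONS],
)
```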

NEW QUESTION 16
A Database Specialist needs to define a database migration strategy to migrate an on-premises Oracle database to an Amazon Aurora MySQL DB cluster. The company requires near-zero downtime for the data migration. The solution must also be cost-effective.
Which approach should the Database Specialist take?

  • A. Dump all the tables from the Oracle database into an Amazon S3 bucket using datapump (expdp). Run data transformations in AWS Glue. Load the data from the S3 bucket to the Aurora DB cluster.
  • B. Order an AWS Snowball appliance and copy the Oracle backup to the Snowball appliance. Once the Snowball data is delivered to Amazon S3, create a new Aurora DB cluster. Enable the S3 integration to migrate the data directly from Amazon S3 to Amazon RDS.
  • C. Use the AWS Schema Conversion Tool (AWS SCT) to help rewrite database objects to MySQL during the schema migration. Use AWS DMS to perform the full load and change data capture (CDC) tasks.
  • D. Use AWS Server Migration Service (AWS SMS) to import the Oracle virtual machine image as an Amazon EC2 instance. Use the Oracle Logical Dump utility to migrate the Oracle data from Amazon EC2 to an Aurora DB cluster.

Answer: C

Explanation:
https://aws.amazon.com/blogs/database/migrating-oracle-databases-with-near-zero-downtime-using-aws-dms/

NEW QUESTION 17
A Database Specialist is constructing a new Amazon Neptune DB cluster and tries to load data from Amazon S3 using the Neptune bulk loader API. The Database Specialist is confronted with the following error message:
"Unable to establish a connection to the s3 endpoint. The source URL is s3://mybucket/graphdata/ and the region code is us-east-1. Kindly confirm your S3 configuration."
Which of the following activities should the Database Specialist take to resolve the issue? (Select two.)

  • A. Check that Amazon S3 has an IAM role granting read access to Neptune
  • B. Check that an Amazon S3 VPC endpoint exists
  • C. Check that a Neptune VPC endpoint exists
  • D. Check that Amazon EC2 has an IAM role granting read access to Amazon S3
  • E. Check that Neptune has an IAM role granting read access to Amazon S3

Answer: BE

Explanation:
https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-tutorial-IAM.html https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-data.html
“An IAM role for the Neptune DB instance to assume that has an IAM policy that allows access to the data files in the S3 bucket. The policy must grant Read and List permissions.” “An Amazon S3 VPC endpoint. For more information, see the Creating an Amazon S3 VPC Endpoint section.”
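
The two correct checks correspond to the bulk loader's prerequisites: an S3 gateway VPC endpoint in the Neptune VPC and an IAM role attached to the cluster that can read the bucket. A hedged boto3 sketch of putting both in place; all identifiers are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
neptune = boto3.client("neptune", region_name="us-east-1")

# 1) Gateway VPC endpoint so the Neptune cluster can reach Amazon S3 privately.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                 # hypothetical Neptune VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],       # hypothetical route table for the DB subnets
)

# 2) Associate an IAM role (with s3:GetObject / s3:ListBucket on the bucket) with the
#    cluster so the bulk loader can read s3://mybucket/graphdata/.
neptune.add_role_to_db_cluster(
    DBClusterIdentifier="graph-cluster",                             # hypothetical cluster
    RoleArn="arn:aws:iam::123456789012:role/NeptuneLoadFromS3",      # hypothetical role
)
```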

NEW QUESTION 18
For the first time, a database professional is establishing a test graph database on Amazon Neptune. The database expert must load millions of rows of test observations from a .csv file in Amazon S3, and will upload the data to the Neptune DB instance through a series of API calls.
Which sequence of actions enables the database professional to upload the data most quickly? (Select three.)

  • A. Ensure Amazon Cognito returns the proper AWS STS tokens to authenticate the Neptune DB instance to the S3 bucket hosting the CSV file.
  • B. Ensure the vertices and edges are specified in different .csv files with proper header column formatting.
  • C. Use AWS DMS to move data from Amazon S3 to the Neptune Loader.
  • D. Curl the S3 URI while inside the Neptune DB instance and then run the addVertex or addEdge commands.
  • E. Ensure an IAM role for the Neptune DB instance is configured with the appropriate permissions to allow access to the file in the S3 bucket.
  • F. Create an S3 VPC endpoint and issue an HTTP POST to the database's loader endpoint.

Answer: BEF

Explanation:
https://docs.aws.amazon.com/neptune/latest/userguide/bulk-load-optimize.html
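
With the IAM role and S3 VPC endpoint in place, the load itself is a single HTTP POST to the cluster's loader endpoint. A hedged sketch using the requests library; the endpoint, bucket, and role ARN are hypothetical.

```python
import requests   # assumption: available in the environment issuing the request

NEPTUNE_ENDPOINT = "https://graph-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182"  # hypothetical

response = requests.post(
    f"{NEPTUNE_ENDPOINT}/loader",
    json={
        "source": "s3://test-observations-bucket/graphdata/",   # vertices and edges in separate .csv files
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",   # hypothetical role
        "region": "us-east-1",
        "failOnError": "FALSE",
        "parallelism": "OVERSUBSCRIBE",   # maximize load throughput
    },
)
print(response.json())   # returns a loadId that can be polled at GET {endpoint}/loader/{loadId}
```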

NEW QUESTION 19
On a single Amazon RDS DB instance, a business hosts a MySQL database for its ecommerce application. The application automatically saves purchases to the database, resulting in high-volume writes. Employees routinely create purchase reports for the company. The organization wants to boost database performance and minimize the downtime associated with patching and upgrades.
Which technique will satisfy these criteria with the LEAST amount of operational overhead?

  • A. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and enable Memcached in the MySQL option group.
  • B. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and set up replication to a MySQL DB instance running on Amazon EC2.
  • C. Enable a Multi-AZ deployment of the RDS for MySQL DB instance, and add a read replica.
  • D. Add a read replica and promote it to an Amazon Aurora MySQL DB cluster master. Then enable Amazon Aurora Serverless.

Answer: C

NEW QUESTION 20
A financial services organization uses Amazon RDS for Oracle with Transparent Data Encryption (TDE). At all times, the organization is obligated to encrypt its data at rest. The decryption key must be widely distributed, and access to the key must be restricted. The organization must be able to rotate the encryption key on demand to comply with regulatory requirements. If any possible security vulnerabilities are discovered, the organization must be able to disable the key. Additionally, the company's overhead must be kept to a minimum.
What method should the database administrator use to configure the encryption to fulfill these specifications?

  • A. AWS CloudHSM
  • B. AWS Key Management Service (AWS KMS) with an AWS managed key
  • C. AWS Key Management Service (AWS KMS) with server-side encryption
  • D. AWS Key Management Service (AWS KMS) CMK with customer-provided material

Answer: D

Explanation:
https://docs.aws.amazon.com/whitepapers/latest/kms-best-practices/aws-managed-and-customer-managed-cmks
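
Answer D corresponds to a KMS customer managed key created with EXTERNAL origin, so the company supplies its own key material and can rotate it on demand (by re-importing) or disable the key immediately if a compromise is suspected. A hedged outline of the import flow; the wrapping of the key material is only indicated by a placeholder.

```python
import boto3

kms = boto3.client("kms", region_name="us-east-1")

# 1) Create a CMK with no key material; the company will import its own.
key = kms.create_key(
    Origin="EXTERNAL",
    Description="TDE master key with customer-provided key material",
)
key_id = key["KeyMetadata"]["KeyId"]

# 2) Get a public wrapping key and import token from KMS.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# 3) Encrypt the locally generated 256-bit key material with params["PublicKey"]
#    (e.g. using the cryptography library), then import it.
encrypted_key_material = b"..."   # placeholder: key material wrapped with the public key above
kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=encrypted_key_material,
    ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
)

# If a vulnerability is discovered, the company can act immediately, for example:
# kms.disable_key(KeyId=key_id) or kms.delete_imported_key_material(KeyId=key_id)
```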

NEW QUESTION 21
......

P.S. Easily pass AWS-Certified-Database-Specialty Exam with 270 Q&As Dumpscollection.com Dumps & pdf Version, Welcome to Download the Newest Dumpscollection.com AWS-Certified-Database-Specialty Dumps: https://www.dumpscollection.net/dumps/AWS-Certified-Database-Specialty/ (270 New Questions)