AWS-Certified-Database-Specialty | All About Real AWS-Certified-Database-Specialty Testing Engine

Master the AWS-Certified-Database-Specialty AWS Certified Database - Specialty content and be ready for exam day success quickly with this Exambible AWS-Certified-Database-Specialty exam engine. We guarantee it! We make it a reality and give you real AWS-Certified-Database-Specialty questions in our Amazon AWS-Certified-Database-Specialty braindumps. The latest 100% valid Amazon AWS-Certified-Database-Specialty exam questions are available on the page below. You can use our Amazon AWS-Certified-Database-Specialty braindumps to pass your exam.

Amazon AWS-Certified-Database-Specialty Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with milliseconds precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.
Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

  • A. Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
  • B. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.
  • C. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
  • D. Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.

Answer: D

Explanation:
Use the plant identifier as the partition key and the sensor identifier as the sort key. Faulty sensors can then be found quickly by querying the local secondary index on the fault attribute, and the associated plant and sensor are identified directly from the table keys.
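A minimal sketch of this schema and query, with the request parameters built as plain dicts; the attribute names (`PlantId`, `SensorId`, `Fault`) are illustrative, and the actual boto3 calls are left as comments:

```python
# Winning schema: plant id as partition key, sensor id as sort key, plus a
# local secondary index (LSI) on the fault attribute. Names are assumptions.
table_params = {
    "TableName": "SensorData",
    "KeySchema": [
        {"AttributeName": "PlantId", "KeyType": "HASH"},    # partition key
        {"AttributeName": "SensorId", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "PlantId", "AttributeType": "S"},
        {"AttributeName": "SensorId", "AttributeType": "S"},
        {"AttributeName": "Fault", "AttributeType": "S"},
    ],
    "LocalSecondaryIndexes": [
        {
            "IndexName": "FaultIndex",
            # An LSI shares the table's partition key and re-sorts by Fault.
            "KeySchema": [
                {"AttributeName": "PlantId", "KeyType": "HASH"},
                {"AttributeName": "Fault", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}

# Query for all faulty sensors within one plant via the LSI:
query_params = {
    "TableName": "SensorData",
    "IndexName": "FaultIndex",
    "KeyConditionExpression": "PlantId = :p",
    "ExpressionAttributeValues": {":p": {"S": "plant-001"}},
}

# With AWS credentials configured, the dicts would be passed as:
#   boto3.client("dynamodb").create_table(**table_params)
#   boto3.client("dynamodb").query(**query_params)
```

Because DynamoDB indexes are sparse, items that lack a Fault attribute never appear in the LSI, so this query touches only faulty sensors.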

NEW QUESTION 2
A company runs online transaction processing (OLTP) workloads on an Amazon RDS for PostgreSQL Multi-AZ DB instance. Tests were run on the database after work hours, which generated additional database logs. The free storage of the RDS DB instance is low due to these additional logs.
What should the company do to address this space constraint issue?

  • A. Log in to the host and run the rm $PGDATA/pg_logs/* command
  • B. Modify the rds.log_retention_period parameter to 1440 and wait up to 24 hours for database logs to be deleted
  • C. Create a ticket with AWS Support to have the logs deleted
  • D. Run the SELECT rds_rotate_error_log() stored procedure to rotate the logs

Answer: B

Explanation:
To set the retention period for system logs, use the rds.log_retention_period parameter. You can find rds.log_retention_period in the DB parameter group associated with your DB instance. The unit for this parameter is minutes. For example, a setting of 1,440 retains logs for one day. The default value is 4,320 (three days). The maximum value is 10,080 (seven days).
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_LogAccess.Concepts.PostgreSQL.ht
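A sketch of applying this setting, assuming a hypothetical parameter group name; the parameter change is built as a plain dict and the boto3 call is left as a comment:

```python
# Lower PostgreSQL log retention to 1 day (1440 minutes) in the DB parameter
# group attached to the instance. The group name is an assumed placeholder.
param_change = {
    "DBParameterGroupName": "my-postgres-params",
    "Parameters": [
        {
            "ParameterName": "rds.log_retention_period",
            "ParameterValue": "1440",    # minutes: 1440 = 1 day
            "ApplyMethod": "immediate",  # dynamic parameter, no reboot needed
        }
    ],
}

# boto3.client("rds").modify_db_parameter_group(**param_change)
```

Logs older than the new retention period are then deleted automatically, which frees the storage without touching the host.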

NEW QUESTION 3
A business that specializes in internet advertising is developing an application that will show adverts to its customers. The application stores data in an Amazon DynamoDB table and caches its reads using a DynamoDB Accelerator (DAX) cluster. The majority of reads come from GetItem and BatchGetItem calls, and the application does not require read consistency.
After deployment, the application cache does not behave as intended. Certain strongly consistent reads to the DAX cluster respond in several milliseconds rather than microseconds.
How can the business optimize cache behavior in order to boost application performance?

  • A. Increase the size of the DAX cluster.
  • B. Configure DAX to be an item cache with no query cache
  • C. Use eventually consistent reads instead of strongly consistent reads.
  • D. Create a new DAX cluster with a higher TTL for the item cache.

Answer: C
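The reason answer C works: DAX caches GetItem/BatchGetItem results only for eventually consistent reads, while strongly consistent reads are passed through to DynamoDB and answered at DynamoDB latency. A minimal sketch, with table and key names as assumptions:

```python
# DAX serves from its item cache only when ConsistentRead is False (the
# default); ConsistentRead=True bypasses the cache and goes to DynamoDB,
# which is why those reads take milliseconds instead of microseconds.
get_item_params = {
    "TableName": "Ads",                 # illustrative name
    "Key": {"AdId": {"S": "ad-123"}},   # illustrative key
    "ConsistentRead": False,            # cacheable, eventually consistent
}

# With the amazon-dax-client library, the same parameters are passed to the
# DAX endpoint in place of DynamoDB, e.g.:
#   AmazonDaxClient(endpoints=[...]).get_item(**get_item_params)
```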

NEW QUESTION 4
A company is deploying a solution in Amazon Aurora by migrating from an on-premises system. The IT department has established an AWS Direct Connect link from the company’s data center. The company’s Database Specialist has selected the option to require SSL/TLS for connectivity to prevent plaintext data from being sent over the network. The migration appears to be working successfully, and the data can be queried from a desktop machine.
Two Data Analysts have been asked to query and validate the data in the new Aurora DB cluster. Both Analysts are unable to connect to Aurora. Their user names and passwords have been verified as valid, and the Database Specialist can connect to the DB cluster using their accounts. The Database Specialist also verified that the security group configuration allows network traffic from all corporate IP addresses.
What should the Database Specialist do to correct the Data Analysts’ inability to connect?

  • A. Restart the DB cluster to apply the SSL change.
  • B. Instruct the Data Analysts to download the root certificate and use the SSL certificate on the connection string to connect.
  • C. Add explicit mappings between the Data Analysts’ IP addresses and the instance in the security group assigned to the DB cluster.
  • D. Modify the Data Analysts’ local client firewall to allow network traffic to AWS.

Answer: B

Explanation:
To connect using SSL:
• Download the SSL trust (root) certificate from AWS.
• Provide the SSL options in the connection string when connecting to the database.
• Connecting without SSL to a DB cluster that enforces SSL results in an error. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/ssl-certificate-rotation-aurora-postgresql.ht
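A sketch of what the Analysts' connection string should look like; the host name, user, and certificate path are illustrative placeholders:

```python
# A libpq-style DSN that verifies the Aurora server certificate against the
# downloaded AWS root certificate bundle. All identifiers are assumptions.
dsn = " ".join([
    "host=mycluster.cluster-example.us-east-1.rds.amazonaws.com",
    "port=5432",
    "dbname=sales",
    "user=analyst1",
    "sslmode=verify-full",                          # verify CA chain and host
    "sslrootcert=/home/analyst/global-bundle.pem",  # AWS root cert bundle
])

# A PostgreSQL client (e.g. psycopg2.connect(dsn)) would then negotiate TLS;
# without the SSL options, a cluster that enforces SSL rejects the login.
```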

NEW QUESTION 5
A company is about to launch a new product, and test databases must be re-created from production data. The company runs its production databases on an Amazon Aurora MySQL DB cluster. A Database Specialist needs to deploy a solution to create these test databases as quickly as possible with the least amount of administrative effort.
What should the Database Specialist do to meet these requirements?

  • A. Restore a snapshot from the production cluster into test clusters
  • B. Create logical dumps of the production cluster and restore them into new test clusters
  • C. Use database cloning to create clones of the production cluster
  • D. Add an additional read replica to the production cluster and use that node for testing

Answer: C

Explanation:
https://aws.amazon.com/getting-started/hands-on/aurora-cloning-backtracking/
"Cloning an Aurora cluster is extremely useful if you want to assess the impact of changes to your database, or if you need to perform workload-intensive operations—such as exporting data or running analytical queries, or simply if you want to use a copy of your production database in a development or testing environment. You can make multiple clones of your Aurora DB cluster. You can even create additional clones from other clones, with the constraint that the clone databases must be created in the same region as the source databases."
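Aurora cloning is exposed through the RestoreDBClusterToPointInTime API with a copy-on-write restore type. A sketch with illustrative cluster identifiers:

```python
# Create a fast, storage-sharing clone of the production cluster for testing.
# Identifiers are assumed placeholders.
clone_params = {
    "DBClusterIdentifier": "test-cluster-1",       # the new clone
    "SourceDBClusterIdentifier": "prod-cluster",   # production source
    "RestoreType": "copy-on-write",                # clone, not a full copy
    "UseLatestRestorableTime": True,
}

# boto3.client("rds").restore_db_cluster_to_point_in_time(**clone_params)
# A DB instance must then be added to the clone cluster before it can
# accept connections.
```

Because the clone shares storage pages with the source until either side writes, it is created in minutes regardless of database size.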

NEW QUESTION 6
A company is looking to move an on-premises IBM Db2 database running on AIX on an IBM POWER7 server. Due to escalating support and maintenance costs, the company is exploring the option of moving the workload to an Amazon Aurora PostgreSQL DB cluster.
What is the quickest way for the company to gather data on the migration compatibility?

  • A. Perform a logical dump from the Db2 database and restore it to an Aurora DB cluster. Identify the gaps and compatibility of the migrated objects by comparing row counts from source and target tables.
  • B. Run AWS DMS from the Db2 database to an Aurora DB cluster. Identify the gaps and compatibility of the migrated objects by comparing the row counts from source and target tables.
  • C. Run native PostgreSQL logical replication from the Db2 database to an Aurora DB cluster to evaluate the migration compatibility.
  • D. Run the AWS Schema Conversion Tool (AWS SCT) from the Db2 database to an Aurora DB cluster. Create a migration assessment report to evaluate the migration compatibility.

Answer: D

NEW QUESTION 7
An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update.
The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the
reporting dashboard without any impact on the write availability and performance of the DB cluster.
Which solution meets these requirements?

  • A. Turn on the serverless option in the DB cluster so it can automatically scale based on demand.
  • B. Provision a clone of the existing DB cluster for the new Application team.
  • C. Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).
  • D. Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

Answer: A

NEW QUESTION 8
A global digital advertising company captures browsing metadata to contextually display relevant images, pages, and links to targeted users. A single page load can generate multiple events that need to be stored individually. The maximum size of an event is 200 KB and the average size is 10 KB. Each page load must query the user’s browsing history to provide targeting recommendations. The advertising company expects over 1 billion page visits per day from users in the United States, Europe, Hong Kong, and India. The structure of the metadata varies depending on the event. Additionally, the browsing metadata must be written and read with very low latency to ensure a good viewing experience for the users.
Which database solution meets these requirements?

  • A. Amazon DocumentDB
  • B. Amazon RDS Multi-AZ deployment
  • C. Amazon DynamoDB global table
  • D. Amazon Aurora Global Database

Answer: C
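A DynamoDB global table replicates writes across Regions with local read/write latency in each, which matches the multi-continent, low-latency, flexible-schema requirements. A sketch using the legacy CreateGlobalTable form, with the table name and Region list as assumptions:

```python
# A global table spanning the four user geographies from the question.
# The per-Region replica tables must already exist with matching schemas.
global_table_params = {
    "GlobalTableName": "BrowsingEvents",   # illustrative name
    "ReplicationGroup": [
        {"RegionName": "us-east-1"},       # United States
        {"RegionName": "eu-west-1"},       # Europe
        {"RegionName": "ap-east-1"},       # Hong Kong
        {"RegionName": "ap-south-1"},      # India
    ],
}

# boto3.client("dynamodb").create_global_table(**global_table_params)
```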

NEW QUESTION 9
A financial services company is developing a shared data service that supports different applications from throughout the company. A Database Specialist designed a solution to leverage Amazon ElastiCache for Redis with cluster mode enabled to enhance performance and scalability. The cluster is configured to listen on port 6379.
Which combination of steps should the Database Specialist take to secure the cache data and protect it from unauthorized access? (Choose three.)

  • A. Enable in-transit and at-rest encryption on the ElastiCache cluster.
  • B. Ensure that Amazon CloudWatch metrics are configured in the ElastiCache cluster.
  • C. Ensure the security group for the ElastiCache cluster allows all inbound traffic from itself and inbound traffic on TCP port 6379 from trusted clients only.
  • D. Create an IAM policy to allow the application service roles to access all ElastiCache API actions.
  • E. Ensure the security group for the ElastiCache clients authorize inbound TCP port 6379 and port 22 traffic from the trusted ElastiCache cluster’s security group.
  • F. Ensure the cluster is created with the auth-token parameter and that the parameter is used in all subsequent commands.

Answer: ACF

Explanation:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/encryption.html
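A sketch covering answers A and F when creating the cluster; the replication group name, node type, and auth token are illustrative placeholders:

```python
# A cluster-mode-enabled Redis replication group with in-transit and at-rest
# encryption plus a Redis AUTH token. All identifiers are assumptions.
redis_params = {
    "ReplicationGroupId": "shared-data-cache",
    "ReplicationGroupDescription": "Shared data service cache",
    "Engine": "redis",
    "CacheNodeType": "cache.r6g.large",
    "NumNodeGroups": 3,                   # cluster mode enabled (3 shards)
    "ReplicasPerNodeGroup": 1,
    "Port": 6379,
    "TransitEncryptionEnabled": True,     # answer A: in-transit encryption
    "AtRestEncryptionEnabled": True,      # answer A: at-rest encryption
    "AuthToken": "example-long-random-token-0123456789",  # answer F: AUTH
}

# boto3.client("elasticache").create_replication_group(**redis_params)
# Clients must then supply the same token and use TLS on port 6379.
```

The security group rule from answer C is configured separately on the cluster's VPC security group, restricting inbound TCP 6379 to trusted clients.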

NEW QUESTION 10
A business's production database is hosted on a single-node Amazon RDS for MySQL DB instance in a United States AWS Region.
A week before a significant sales event, a new database maintenance update is released and designated as required. The firm wants to minimize the database instance's downtime and asks a database expert to keep the database instance highly available until the sales event concludes.
Which solution will satisfy these criteria?

  • A. Defer the maintenance update until the sales event is over.
  • B. Create a read replica with the latest update. Initiate a failover before the sales event.
  • C. Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.
  • D. Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

Answer: D

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-required-maintenance/

NEW QUESTION 11
A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.
Which step should be taken to troubleshoot this issue?

  • A. Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine’s IP address
  • B. Ensure that the RDS DB instance’s subnet group includes a public subnet to allow the Developer to connect
  • C. Ensure that the RDS DB instance has not reached its maximum connections limit
  • D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections

Answer: D

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/SQLServer.Concepts.General.SSL.Using.html

NEW QUESTION 12
A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike.
Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)

  • A. Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.
  • B. Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.
  • C. Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.
  • D. Use Amazon QuickSight to view the SQL statement being run.
  • E. Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.

Answer: BE

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-instance-high-cpu/ "Several factors can cause an increase in CPU utilization. For example, user-initiated heavy workloads, analytic queries, prolonged deadlocks and lock waits, multiple concurrent transactions, long-running transactions, or other processes that utilize CPU resources. First, you can identify the source of the CPU usage by: Using Enhanced Monitoring Using Performance Insights"
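A sketch of turning on both tools named in answers B and E for an existing instance; the instance identifier and monitoring role ARN are illustrative:

```python
# Enable Enhanced Monitoring (OS-level metrics at 1-second granularity) and
# Performance Insights (database load by waits/SQL/hosts/users) on the
# SQL Server instance. Identifier and role ARN are assumed placeholders.
monitoring_params = {
    "DBInstanceIdentifier": "sqlserver-prod",
    "MonitoringInterval": 1,   # Enhanced Monitoring granularity, seconds
    "MonitoringRoleArn": "arn:aws:iam::123456789012:role/rds-monitoring-role",
    "EnablePerformanceInsights": True,
    "PerformanceInsightsRetentionPeriod": 7,   # days (free tier)
    "ApplyImmediately": True,
}

# boto3.client("rds").modify_db_instance(**monitoring_params)
```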

NEW QUESTION 13
A manufacturing company’s website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)

  • A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
  • B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
  • C. Edit and enable Aurora DB cluster cache management in parameter groups.
  • D. Set TCP keepalive parameters to a high value.
  • E. Set JDBC connection string timeout variables to a low value.
  • F. Set Java DNS caching timeouts to a high value.

Answer: ABC
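For answer C, Aurora PostgreSQL cluster cache management is switched on through the DB cluster parameter group, so that a designated replica keeps a warm buffer cache and performance recovers faster after failover. A sketch with an assumed parameter group name:

```python
# Enable cluster cache management (CCM) for an Aurora PostgreSQL cluster.
# The parameter group name is an illustrative placeholder.
ccm_params = {
    "DBClusterParameterGroupName": "aurora-pg-cluster-params",
    "Parameters": [
        {
            "ParameterName": "apg_ccm_enabled",
            "ParameterValue": "1",
            "ApplyMethod": "pending-reboot",   # static cluster parameter
        }
    ],
}

# boto3.client("rds").modify_db_cluster_parameter_group(**ccm_params)
```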

NEW QUESTION 14
A company is running its line of business application on AWS, which uses Amazon RDS for MySQL as the persistent data store. The company wants to minimize downtime when it migrates the database to Amazon Aurora.
Which migration method should a Database Specialist use?

  • A. Take a snapshot of the RDS for MySQL DB instance and create a new Aurora DB cluster with the option to migrate snapshots.
  • B. Make a backup of the RDS for MySQL DB instance using the mysqldump utility, create a new Aurora DB cluster, and restore the backup.
  • C. Create an Aurora Replica from the RDS for MySQL DB instance and promote the Aurora DB cluster.
  • D. Create a clone of the RDS for MySQL DB instance and promote the Aurora DB cluster.

Answer: C

Explanation:
https://aws.amazon.com/blogs/database/best-practices-for-migrating-rds-for-mysql-databases-to-amazon-aurora/ https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Migrating.html#Aurora
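A sketch of the minimal-downtime path in answer C: create an Aurora read replica of the RDS for MySQL instance, wait for replica lag to reach zero, then promote. Cluster names and the source ARN are illustrative:

```python
# Step 1: create an Aurora cluster that replicates from the RDS for MySQL
# instance. Step 2 (after lag reaches zero and writes are stopped on the
# source): promote the replica cluster. Identifiers are assumptions.
create_replica_params = {
    "DBClusterIdentifier": "aurora-migration-cluster",
    "Engine": "aurora-mysql",
    "ReplicationSourceIdentifier": (
        "arn:aws:rds:us-east-1:123456789012:db:mysql-prod"
    ),
}
promote_params = {"DBClusterIdentifier": "aurora-migration-cluster"}

# rds = boto3.client("rds")
# rds.create_db_cluster(**create_replica_params)   # starts replication
# ...monitor AuroraReplicaLag, stop application writes on the source...
# rds.promote_read_replica_db_cluster(**promote_params)
```

Downtime is limited to the brief window between stopping writes and repointing the application at the promoted Aurora cluster.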

NEW QUESTION 15
An ecommerce business is migrating its main application database to Amazon Aurora MySQL. The firm is now performing OLTP stress testing with concurrent database connections. During the first round of testing, a database professional detected sluggish performance for several specific write operations.
Examining the Amazon CloudWatch metrics for the Aurora DB cluster revealed CPU utilization of 90%.
Which actions should the database professional take to determine the main cause of excessive CPU use and sluggish performance most effectively? (Select two.)

  • A. Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.
  • B. Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.
  • C. Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.
  • D. Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.
  • E. Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.

Answer: AC

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-instance-high-cpu/ https://aws.amazon.com/premiumsupport/knowledge-center/rds-mysql-slow-query/

NEW QUESTION 16
A large retail company recently migrated its three-tier ecommerce applications to AWS. The company’s backend database is hosted on Amazon Aurora PostgreSQL. During peak times, users complain about longer page load times. A database specialist reviewed Amazon RDS Performance Insights and found a spike in IO:XactSync wait events. The SQL attached to the wait events are all single INSERT statements.
How should this issue be resolved?

  • A. Modify the application to commit transactions in batches
  • B. Add a new Aurora Replica to the Aurora DB cluster.
  • C. Add an Amazon ElastiCache for Redis cluster and change the application to write through.
  • D. Change the Aurora DB cluster storage to Provisioned IOPS (PIOPS).

Answer: A

Explanation:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.Reference.html "This wait most often arises when there is a very high rate of commit activity on the system. You can sometimes alleviate this wait by modifying applications to commit transactions in batches."
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/apg-waits.xactsync.html
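The batching idea can be shown without a live database: run the statements as before, but commit once per batch instead of once per row, so the number of durable-commit waits (the IO:XactSync events) drops by the batch factor. The helper below stubs out the database layer to keep the logic visible:

```python
# Commit one transaction per batch of rows instead of one per row.
# `execute` and `commit` stand in for a real PostgreSQL driver's calls.
def commit_in_batches(rows, batch_size, execute, commit):
    """Run execute() per row but commit() only once per batch_size rows."""
    commits = 0
    for i, row in enumerate(rows, start=1):
        execute(row)                      # e.g. cursor.execute(INSERT, row)
        if i % batch_size == 0:
            commit()                      # e.g. connection.commit()
            commits += 1
    if len(rows) % batch_size != 0:       # flush the final partial batch
        commit()
        commits += 1
    return commits

# 1,000 single-row INSERTs committed in batches of 100 -> 10 commits
# (and roughly 10 log-sync waits) instead of 1,000.
executed = []
n_commits = commit_in_batches(
    rows=list(range(1000)),
    batch_size=100,
    execute=executed.append,
    commit=lambda: None,
)
```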

NEW QUESTION 17
A company has an ecommerce web application with an Amazon RDS for MySQL DB instance. The marketing team has noticed some unexpected updates to the product and pricing information on the website, which is impacting sales targets. The marketing team wants a database specialist to audit future database activity to help identify how and when the changes are being made.
What should the database specialist do to meet these requirements? (Choose two.)

  • A. Create an RDS event subscription to the audit event type.
  • B. Enable auditing of CONNECT and QUERY_DML events.
  • C. SSH to the DB instance and review the database logs.
  • D. Publish the database logs to Amazon CloudWatch Logs.
  • E. Enable Enhanced Monitoring on the DB instance.

Answer: BD

Explanation:
https://aws.amazon.com/blogs/database/configuring-an-audit-log-to-capture-database-activities-for-amazon-rds
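A sketch of both chosen answers: the MariaDB audit plugin is attached through an option group with the event types to log, and the resulting audit log is exported to CloudWatch Logs. Option group and instance names are illustrative:

```python
# Answer B: enable the audit plugin and log CONNECT and QUERY_DML events.
audit_option_params = {
    "OptionGroupName": "mysql-audit-options",   # assumed name
    "OptionsToInclude": [
        {
            "OptionName": "MARIADB_AUDIT_PLUGIN",
            "OptionSettings": [
                {"Name": "SERVER_AUDIT_EVENTS", "Value": "CONNECT,QUERY_DML"},
            ],
        }
    ],
    "ApplyImmediately": True,
}

# Answer D: publish the audit log to CloudWatch Logs for analysis.
log_export_params = {
    "DBInstanceIdentifier": "ecommerce-mysql",  # assumed name
    "CloudwatchLogsExportConfiguration": {"EnableLogTypes": ["audit"]},
}

# rds = boto3.client("rds")
# rds.modify_option_group(**audit_option_params)
# rds.modify_db_instance(**log_export_params)
```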

NEW QUESTION 18
A company is using Amazon Aurora with Aurora Replicas for read-only workload scaling. A Database Specialist needs to split up two read-only applications so each application always connects to a dedicated replica. The Database Specialist wants to implement load balancing and high availability for the read-only applications.
Which solution meets these requirements?

  • A. Use a specific instance endpoint for each replica and add the instance endpoint to each read-only application connection string.
  • B. Use reader endpoints for both the read-only workload applications.
  • C. Use a reader endpoint for one read-only application and use an instance endpoint for the other read-only application.
  • D. Use custom endpoints for the two read-only applications.

Answer: D

Explanation:
https://aws.amazon.com/about-aws/whats-new/2018/11/amazon-aurora-simplifies-workload-management-with-c
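A sketch of answer D: one custom endpoint per read-only application, each pinned to a dedicated replica through its static member list. Cluster and instance identifiers are illustrative:

```python
# Two custom reader endpoints, each bound to its own replica so that every
# application always lands on a dedicated instance. Names are assumptions.
endpoint_app1 = {
    "DBClusterIdentifier": "aurora-prod",
    "DBClusterEndpointIdentifier": "reporting-app-endpoint",
    "EndpointType": "READER",
    "StaticMembers": ["aurora-prod-replica-1"],
}
endpoint_app2 = {
    "DBClusterIdentifier": "aurora-prod",
    "DBClusterEndpointIdentifier": "analytics-app-endpoint",
    "EndpointType": "READER",
    "StaticMembers": ["aurora-prod-replica-2"],
}

# rds = boto3.client("rds")
# rds.create_db_cluster_endpoint(**endpoint_app1)
# rds.create_db_cluster_endpoint(**endpoint_app2)
```

If more replicas are later added to an endpoint's member list, Aurora load-balances connections across them and routes around failed members, giving the required load balancing and high availability.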

NEW QUESTION 19
A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.
What is the MOST cost-effective action that should be taken to avoid downtime?

  • A. Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB
  • B. Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down
  • C. Enable a read replica and direct read traffic to it when Amazon RDS is down
  • D. Enable an Amazon RDS for MySQL Multi-AZ configuration

Answer: D

Explanation:
https://aws.amazon.com/premiumsupport/knowledge-center/rds-required-maintenance/
To minimize downtime, modify the Amazon RDS DB instance to a Multi-AZ deployment. For Multi-AZ deployments, OS maintenance is applied to the secondary instance first, then the instance fails over, and then the primary instance is updated. The downtime is during failover. For more information, see Maintenance for Multi-AZ Deployments. https://aws.amazon.com/rds/faqs/ The availability benefits of Multi-AZ also extend to planned maintenance. For example, with automated backups, I/O activity is no longer suspended on your primary during your preferred backup window, since backups are taken from the standby. In the case of patching or DB instance class scaling, these operations occur first on the standby, prior to automatic fail over. As a result, your availability impact is limited to the time required for automatic failover to complete.

NEW QUESTION 20
A company is going through a security audit. The audit team has identified a cleartext master user password in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances. The audit team has flagged this as a security risk to the database team.
What should a database specialist do to mitigate this risk?

  • A. Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.
  • B. Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.
  • C. Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.
  • D. Remove the passwords from the CloudFormation template and store them in a separate file. Replace the passwords by running CloudFormation using a sed command.

Answer: B

Explanation:
https://aws.amazon.com/blogs/infrastructure-and-automation/securing-passwords-in-aws-quick-starts-using-aws
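A sketch of the pattern in answer B, expressed as a CloudFormation template fragment built as a Python dict; the secret name and instance properties are illustrative. The dynamic reference string is resolved by CloudFormation at deploy time, so the password never appears in the template:

```python
# Generate a random master password in Secrets Manager and reference it via
# a {{resolve:secretsmanager:...}} dynamic reference. Names are assumptions.
template = {
    "Resources": {
        "DBSecret": {
            "Type": "AWS::SecretsManager::Secret",
            "Properties": {
                "Name": "prod/mysql/master",
                "GenerateSecretString": {
                    "SecretStringTemplate": '{"username": "admin"}',
                    "GenerateStringKey": "password",
                    "PasswordLength": 32,
                    "ExcludeCharacters": '"@/\\',
                },
            },
        },
        "Database": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.r5.large",
                "AllocatedStorage": "100",
                "MasterUsername": "admin",
                # Resolved at deploy time; never stored in the template:
                "MasterUserPassword": (
                    "{{resolve:secretsmanager:prod/mysql/master"
                    ":SecretString:password}}"
                ),
            },
        },
    },
}
```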

NEW QUESTION 21
......

P.S. Easily pass AWS-Certified-Database-Specialty Exam with 270 Q&As DumpSolutions.com Dumps & pdf Version, Welcome to Download the Newest DumpSolutions.com AWS-Certified-Database-Specialty Dumps: https://www.dumpsolutions.com/AWS-Certified-Database-Specialty-dumps/ (270 New Questions)