70-776 | Microsoft 70-776 Dumps 2021

Master the 70-776 Exam Questions and Answers content and be ready for exam-day success quickly with these 70-776 dumps. We guarantee it! We make it a reality and give you real 70-776 questions in our Microsoft 70-776 braindumps. The latest 100% valid 70-776 dumps are on the page below. You can use our Microsoft 70-776 braindumps to pass your exam.

Microsoft 70-776 Free Dumps Questions Online, Read and Test Now.

NEW QUESTION 1
HOTSPOT
You have a Microsoft Azure Data Lake Analytics service.
You have a tab-delimited file named UserActivity.tsv that contains logs of user sessions. The file does not have a header row.
You need to create a table and to load the logs to the table. The solution must distribute the data by a column named SessionId.
How should you complete the U-SQL statement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://msdn.microsoft.com/en-us/library/mt706197.aspx
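The exhibit is not reproduced here, but the point being tested is U-SQL's HASH(SessionId) distribution: every row with the same SessionId lands in the same distribution. A rough illustration of that guarantee in plain Python (not U-SQL; the bucket count of 4 and the sample rows are invented):

```python
from collections import defaultdict
import hashlib

def bucket_for(session_id: str, buckets: int = 4) -> int:
    # Stable hash of the distribution key; rows sharing a SessionId
    # always map to the same bucket, which is what DISTRIBUTED BY
    # HASH(SessionId) guarantees for a distributed table.
    digest = hashlib.md5(session_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

rows = [
    {"SessionId": "s1", "Event": "login"},
    {"SessionId": "s2", "Event": "click"},
    {"SessionId": "s1", "Event": "logout"},
]

distributions = defaultdict(list)
for row in rows:
    distributions[bucket_for(row["SessionId"])].append(row)

# Both "s1" rows are co-located in a single distribution.
```

Co-locating a session's rows is what makes per-session aggregations cheap, since no data movement is needed at query time.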

NEW QUESTION 2
You have a Microsoft Azure Stream Analytics job.
You are debugging event information manually.
You need to view the event data that is being collected.
Which monitoring data should you view for the Stream Analytics job?

• A. query
• B. outputs
• C. scale
• D. inputs

Answer: D

NEW QUESTION 3
HOTSPOT
You have a Microsoft Azure Data Lake Analytics service.
You have a file named Employee.tsv that contains data on employees. Employee.tsv contains seven columns named EmpId, Start, FirstName, LastName, Age, Department, and Title.
You need to create a Data Lake Analytics job to transform Employee.tsv, define a schema for the data, and output the data to a CSV file. The outputted data must contain only employees who are in the sales department. The Age column must allow NULL.
How should you complete the U-SQL code segment? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-u-sql-get-started
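The U-SQL exhibit is missing, but the transformation it describes (read a headerless TSV, type the seven columns with a nullable Age, keep only the sales department, emit CSV) can be sketched in plain Python. The sample rows below are invented for illustration; U-SQL itself would declare Age as `int?`:

```python
import csv
import io
from typing import Optional

# Stand-in for Employee.tsv (no header row), per the question's schema.
tsv_data = "1\t2015-01-01\tAnn\tLee\t34\tSales\tRep\n2\t2016-02-01\tBob\tRay\t\tIT\tAdmin\n"

def parse_age(value: str) -> Optional[int]:
    # Age must allow NULL: an empty field becomes None.
    return int(value) if value else None

employees = []
for emp_id, start, first, last, age, dept, title in csv.reader(io.StringIO(tsv_data), delimiter="\t"):
    employees.append({"EmpId": int(emp_id), "Start": start, "FirstName": first,
                      "LastName": last, "Age": parse_age(age),
                      "Department": dept, "Title": title})

# Keep only the sales department, then write CSV output.
sales = [e for e in employees if e["Department"] == "Sales"]

out = io.StringIO()
writer = csv.writer(out)
for e in sales:
    writer.writerow([e["EmpId"], e["FirstName"], e["LastName"], e["Age"]])
```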

NEW QUESTION 4
You have a Microsoft Azure Data Lake Analytics service and an Azure Data Lake Store.
You need to use Python to submit a U-SQL job. Which Python module should you install?

• A. azure-mgmt-datalake-store
• B. azure-mgmt-datalake-analytics
• C. azure-datalake-store
• D. azure-mgmt-resource

Answer: B

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-lake-analytics/data-lake-analytics-manage-use-python-sdk

NEW QUESTION 5
DRAG DROP
You need to design a Microsoft Azure solution to analyze text from a Twitter data stream. The solution must identify a sentiment score of positive, negative, or neutral for the tweets.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
70-776 dumps exhibit

Answer:

Explanation: 70-776 dumps exhibit

NEW QUESTION 6
You have a Microsoft Azure Data Lake Analytics service.
You have a CSV file that contains employee salaries.
You need to write a U-SQL query to load the file and to extract all the employees who earn salaries that are greater than $100,000. You must encapsulate the data for reuse.
What should you use?

• A. a table-valued function
• B. a view
• C. the extract command
• D. the output command

Answer: A

Explanation:
References:
https://docs.microsoft.com/en-au/azure/data-lake-analytics/data-lake-analytics-u-sql-catalog
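A table-valued function wins here because, unlike a bare EXTRACT, it packages the load and the filter behind a reusable, parameterizable catalog object. The same encapsulation idea, sketched in plain Python rather than U-SQL (the threshold default and sample data are invented):

```python
def high_earners(rows, threshold=100_000):
    """Reusable, parameterized query over the salary data -- the role a
    U-SQL table-valued function plays in the Data Lake Analytics catalog."""
    return [r for r in rows if r["Salary"] > threshold]

salaries = [
    {"Employee": "Ann", "Salary": 120_000},
    {"Employee": "Bob", "Salary": 95_000},
]

# Callers reuse the encapsulated logic instead of repeating the filter,
# and can override the parameter without rewriting the query.
top = high_earners(salaries)
```

A view would also encapsulate the query, but only a table-valued function accepts parameters, which is why it is the more general answer.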

NEW QUESTION 7
HOTSPOT
Note: This question is part of a series of questions that present the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must always be available for reading. The storage solution for the archived data must minimize costs.
End of repeated scenario.
How should you configure the storage to archive the source data? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers

NEW QUESTION 8
DRAG DROP
You need to copy data from Microsoft Azure SQL Database to Azure Data Lake Store by using Azure Data Factory.
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
70-776 dumps exhibit

Answer:

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-overview

NEW QUESTION 9
Note: This question is part of a series of questions that present the same scenario. For your convenience, the scenario is repeated in each question. Each question presents a different goal and answer choices, but the text of the scenario is exactly the same in each question in this series.
Start of repeated scenario
You are migrating an existing on-premises data warehouse named LocalDW to Microsoft Azure. You will use an Azure SQL data warehouse named AzureDW for data storage and an Azure Data Factory named AzureDF for extract, transformation, and load (ETL) functions.
For each table in LocalDW, you create a table in AzureDW.
On the on-premises network, you have a Data Management Gateway.
Some source data is stored in Azure Blob storage. Some source data is stored on an on-premises Microsoft SQL Server instance. The instance has a table named Table1.
After data is processed by using AzureDF, the data must be archived and accessible forever. The archived data must meet a Service Level Agreement (SLA) for availability of 99 percent. If an Azure region fails, the archived data must always be available for reading. The storage solution for the archived data must minimize costs.
End of repeated scenario.
You need to define the schema of Table1 in AzureDF. What should you create?

• A. a gateway
• B. a linked service
• C. a dataset
• D. a pipeline

Answer: C

NEW QUESTION 10
You have a Microsoft Azure SQL data warehouse that contains information about community events. An Azure Data Factory job writes an updated CSV file in Azure Blob storage to Community/{date}/events.csv daily.
You plan to consume a Twitter feed by using Azure Stream Analytics and to correlate the feed to the community events.
You plan to use Stream Analytics to retrieve the latest community events data and to correlate the data to the Twitter feed data.
You need to ensure that when updates to the community events data are written to the CSV files, the Stream Analytics job can access the latest community events data.
What should you configure?

• A. an output that uses a blob storage sink and has a path pattern of Community/{date}
• B. an output that uses an event hub sink and the CSV event serialization format
• C. an input that uses a reference data source and has a path pattern of Community/{date}/events.csv
• D. an input that uses a reference data source and has a path pattern of Community/{date}

Answer: C
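A reference-data input lets the streaming query join each incoming tweet against the latest snapshot of events.csv; when a new file appears at the full path pattern, the snapshot is refreshed. The join itself, sketched conceptually in plain Python (event keys, tweet tags, and field names are all invented):

```python
# Snapshot of the reference data, as if loaded from Community/{date}/events.csv.
reference_events = {
    "marathon": {"EventName": "City Marathon", "City": "Seattle"},
    "bookfair": {"EventName": "Book Fair", "City": "Portland"},
}

# The streaming input: tweets arriving from the Twitter feed.
tweets = [
    {"Text": "Loved the run! #marathon", "Tag": "marathon"},
    {"Text": "So many stalls #bookfair", "Tag": "bookfair"},
    {"Text": "Nice weather", "Tag": "weather"},
]

# Equivalent of JOINing the stream input to the reference input on a key:
# tweets with no matching community event are dropped.
correlated = [
    {**tweet, **reference_events[tweet["Tag"]]}
    for tweet in tweets
    if tweet["Tag"] in reference_events
]
```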

NEW QUESTION 11
You have a Microsoft Azure SQL data warehouse that has a fact table named FactOrder. FactOrder contains three columns named CustomerId, OrderId, and OrderDateKey. FactOrder is hash distributed on CustomerId. OrderId is the unique identifier for FactOrder. FactOrder contains 3 million rows.
Orders are distributed evenly among different customers from a table named dimCustomers that contains 2 million rows.
You often run queries that join FactOrder and dimCustomers by selecting and grouping by the OrderDateKey column.
You add 7 million rows to FactOrder. Most of the new records have a more recent OrderDateKey value than the previous records.
You need to reduce the execution time of queries that group on OrderDateKey and that join dimCustomers and FactOrder.
What should you do?

• A. Change the distribution for the FactOrder table to round robin.
• B. Update the statistics for the OrderDateKey column.
• C. Change the distribution for the FactOrder table to be based on OrderId.
• D. Change the distribution for the dimCustomers table to OrderDateKey.

Answer: B

Explanation:
References:
https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-tables-statistics
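The answer hinges on statistics going stale: after 7 million newer rows are appended, the optimizer's stored summary of OrderDateKey no longer reflects the data, so plans for joins and GROUP BY are misestimated until the statistics are refreshed. A toy illustration of "stale summary vs. recomputed summary" in plain Python (the key values are invented):

```python
def column_stats(values):
    # A crude stand-in for column statistics: value range and row count.
    return {"min": min(values), "max": max(values), "rows": len(values)}

order_date_keys = [20170101, 20170102, 20170103]
stats = column_stats(order_date_keys)       # gathered when the table was loaded

# Newer rows arrive (three shown, standing in for 7 million);
# the stored statistics are now stale.
order_date_keys += [20180101, 20180102, 20180103]
assert stats["max"] < max(order_date_keys)  # optimizer would misestimate here

stats = column_stats(order_date_keys)       # the UPDATE STATISTICS equivalent
```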

NEW QUESTION 12
You plan to use Microsoft Azure Data Factory to copy data daily from an Azure SQL data warehouse to an Azure Data Lake Store.
You need to define a linked service for the Data Lake Store. The solution must prevent the access token from expiring.
Which type of authentication should you use?

• A. OAuth
• B. service-to-service
• C. Basic
• D. service principal

Answer: D

Explanation:
References:
https://docs.microsoft.com/en-gb/azure/data-factory/v1/data-factory-azure-datalake-connector#azure-data-lake-store-linked-service-properties

NEW QUESTION 13
DRAG DROP
You are building a data pipeline that uses Microsoft Azure Stream Analytics.
Alerts are generated when the aggregate of data streaming in from devices during a minute-long window matches the values in a rule.
You need to retrieve the following information:
* The event ID
* The device ID
* The application ID that runs the service
Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order.
70-776 dumps exhibit

Answer:

Explanation: 70-776 dumps exhibit

NEW QUESTION 14
HOTSPOT
You need to create a Microsoft Azure SQL data warehouse named dw1 that supports up to 10 TB of data.
How should you complete the statement? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
70-776 dumps exhibit

Answer:

Explanation: 70-776 dumps exhibit

NEW QUESTION 15
You have an on-premises Microsoft SQL Server instance.
You plan to copy a table from the instance to a Microsoft Azure Storage account.
You need to ensure that you can copy the table by using Azure Data Factory.
Which service should you deploy?

• A. an on-premises data gateway
• B. Azure Application Gateway
• C. Data Management Gateway
• D. a virtual network gateway

Answer: C

NEW QUESTION 16
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are monitoring user queries to a Microsoft Azure SQL data warehouse that has six compute nodes.
You discover that compute node utilization is uneven. The rows_processed column from sys.dm_pdw_workers shows a significant variation in the number of rows being moved among the distributions for the same table for the same query.
You need to ensure that the load is distributed evenly across the compute nodes.
Solution: You add a clustered columnstore index.
Does this meet the goal?

• A. Yes
• B. No

Answer: B
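The reason the solution fails: a clustered columnstore index changes how each distribution stores rows, not how rows are assigned to distributions. Skew comes from the hash key itself, which this toy sketch shows in plain Python (six buckets mirror the six compute nodes; the key values are invented):

```python
from collections import Counter
import hashlib

def bucket_for(key: str, buckets: int = 6) -> int:
    # Deterministic hash-distribution of a row by its distribution key.
    return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16) % buckets

# One dominant key value: most rows hash to the same distribution,
# leaving the other compute nodes underutilized.
keys = ["big-customer"] * 900 + [f"cust-{i}" for i in range(100)]
load = Counter(bucket_for(k) for k in keys)

# Adding a columnstore index would not change this Counter at all;
# only changing the distribution key (or method) rebalances the rows.
```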

NEW QUESTION 17
You have a file in a Microsoft Azure Data Lake Store that contains sales data. The file contains sales amounts by salesperson, by city, and by state.
You need to use U-SQL to calculate the percentage of sales that each city has for its respective state.
Which code should you use?
70-776 dumps exhibit
70-776 dumps exhibit

• A. Option A
• B. Option B
• C. Option C
• D. Option D

Answer: A

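The options are in the missing exhibit, but the calculation itself is a windowed sum partitioned by state: each city's percentage is its sales divided by its state's total. The same computation in plain Python (sample figures invented):

```python
from collections import defaultdict

sales = [
    {"State": "WA", "City": "Seattle", "Amount": 300.0},
    {"State": "WA", "City": "Spokane", "Amount": 100.0},
    {"State": "OR", "City": "Portland", "Amount": 250.0},
]

# Equivalent of SUM(Amount) OVER (PARTITION BY State): total per state.
state_totals = defaultdict(float)
for row in sales:
    state_totals[row["State"]] += row["Amount"]

# Each city's share of its own state's total.
for row in sales:
    row["PctOfState"] = 100.0 * row["Amount"] / state_totals[row["State"]]
```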

NEW QUESTION 18
You are developing an application by using the Microsoft .NET SDK. The application will access data from a Microsoft Azure Data Lake folder.
You plan to authenticate the application by using service-to-service authentication.
You need to ensure that the application can access the Data Lake folder.
Which three actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

• A. Register an Azure Active Directory app that uses the Web app/API application type.
• B. Configure the application to use the application ID, authentication key, and tenant ID.
• C. Assign the Azure Active Directory app permission to the Data Lake Store folder.
• D. Configure the application to use the OAuth 2.0 token endpoint.
• E. Register an Azure Active Directory app that uses the Native application type.
• F. Configure the application to use the application ID and redirect URI.

Answer: ABC

Explanation:
References:
https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory
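Service-to-service authentication here is the OAuth 2.0 client-credentials grant: the registered Web app/API identity presents its application ID, authentication key, and tenant ID to the Azure AD token endpoint. The shape of that token request, sketched in Python without making any network call (every identifier below is a placeholder, not a real credential):

```python
from urllib.parse import urlencode

tenant_id = "00000000-0000-0000-0000-000000000000"   # placeholder tenant ID
token_endpoint = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"

# Client-credentials grant body: the app authenticates as itself, using
# the application ID and authentication key from its AAD registration.
token_request = urlencode({
    "grant_type": "client_credentials",
    "client_id": "11111111-1111-1111-1111-111111111111",  # application ID
    "client_secret": "placeholder-authentication-key",
    "resource": "https://datalake.azure.net/",            # Data Lake Store
})
# POSTing this body to token_endpoint would return the access token that
# the .NET SDK then uses against the Data Lake Store folder (given that
# the app was also granted permission on the folder itself -- answer C).
```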

Recommend!! Get the full 70-776 dumps in VCE and PDF from Exambible. Welcome to download: https://www.exambible.com/70-776-exam/ (New 91 Q&As Version)