CCA-505 | Up To The Minute CCA-505 Pdf 2021

Your success in the Cloudera CCA-505 exam is our sole target, and we develop all our CCA-505 braindumps in a way that facilitates the attainment of this target. Not only is our CCA-505 study material the best you can find, it is also the most detailed and the most up to date. Our CCA-505 practice exams are written to the highest standards of technical accuracy.

NEW QUESTION 1
Which two are features of Hadoop's rack topology?

  • A. Configuration of rack awareness is accomplished using a configuration file

  • B. You cannot use a rack topology script.

  • C. Even for small clusters on a single rack, configuring rack awareness will improve performance.

  • D. Rack location is considered in the HDFS block placement policy

  • E. HDFS is rack aware but MapReduce daemons are not

  • F. Hadoop gives preference to intra-rack data transfer in order to conserve bandwidth

Answer: DF

NEW QUESTION 2
Each node in your Hadoop cluster, running YARN, has 64 GB memory and 24 cores. Your yarn-site.xml has the following configuration:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>32768</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>23</value>
</property>
You want YARN to launch no more than 16 containers per node. What should you do?

  • A. No action is needed: YARN’s dynamic resource allocation automatically optimizes the node memory and cores

  • B. Modify yarn-site.xml with the following property:<name>yarn.nodemanager.resource.cpu-vcores</name><value>16</value>

  • C. Modify yarn-site.xml with the following property:<name>yarn.scheduler.minimum-allocation-mb</name><value>2048</value>

  • D. Modify yarn-site.xml with the following property:<name>yarn.scheduler.minimum-allocation-mb</name><value>4096</value>

Answer: C
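As a sanity check on the scenario above, the per-node container limit can be derived with simple memory arithmetic, assuming the scheduler's DefaultResourceCalculator is in use (it considers memory only, so the memory minimum-allocation is the binding constraint):

```python
# Sketch: how many containers fit on one NodeManager when memory is the
# binding resource. Values come from the question's yarn-site.xml; the
# minimum-allocation value is the one under discussion.
node_memory_mb = 32768       # yarn.nodemanager.resource.memory-mb
min_allocation_mb = 2048     # yarn.scheduler.minimum-allocation-mb

# Each container is granted at least the minimum allocation, so the node
# can run at most this many containers concurrently.
max_containers = node_memory_mb // min_allocation_mb
print(max_containers)        # 16
```

Whether vcores also cap the container count depends on the resource calculator configured for the scheduler; with the default memory-only calculator, only this arithmetic applies.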

NEW QUESTION 3
You want a node to only swap Hadoop daemon data from RAM to disk when absolutely necessary. What should you do?

  • A. Delete the /swapfile file on the node

  • B. Set vm.swappiness to 0 in /etc/sysctl.conf

  • C. Set the ram.swap parameter to 0 in core-site.xml

  • D. Delete the /etc/swap file on the node

  • E. Delete the /dev/vmswap file on the node

Answer: B
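The change behind answer B is a one-line kernel setting, sketched below. Apply it with `sysctl -p` or a reboot; note that on kernels 3.5 and later a value of 0 disables swapping entirely, so some administrators prefer 1.

```
# /etc/sysctl.conf — swap Hadoop daemon memory only when absolutely necessary
vm.swappiness = 0
```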

NEW QUESTION 4
Your cluster has the following characteristics:
✑ A rack aware topology is configured and on
✑ Replication is set to 3
✑ Cluster block size is set to 64 MB
Which describes the file read process when a client application connects into the cluster and requests a 50MB file?

  • A. The client queries the NameNode, which retrieves the block from the nearest DataNode to the client and then passes that block back to the client.

  • B. The client queries the NameNode for the locations of the block, and reads from a random location in the list it retrieves in order to balance network I/O across the nodes it retrieves data from at any given time.

  • C. The client queries the NameNode for the locations of the block, and reads all three copies. The first copy to complete transfer to the client is the one the client reads, as part of Hadoop's speculative execution framework.

  • D. The client queries the NameNode for the locations of the block, and reads from the first location in the list it receives.

Answer: D
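The read path can be sketched as follows (hypothetical names, not the real Hadoop client API): the NameNode returns the block's replica locations already sorted by network distance from the client, and the client simply reads from the first entry.

```python
# Toy sketch of HDFS replica selection. The NameNode sorts replicas so the
# closest one (e.g. same node, then same rack) comes first in the list.
def choose_replica(locations_sorted_by_distance):
    """Return the DataNode the client will read the block from."""
    if not locations_sorted_by_distance:
        raise IOError("no replicas available for block")
    return locations_sorted_by_distance[0]

# Illustrative location list as the NameNode might order it for a client
# sitting in rack 1.
locations = ["datanode-rack1-a", "datanode-rack1-b", "datanode-rack2-c"]
print(choose_replica(locations))  # datanode-rack1-a
```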

NEW QUESTION 5
You have a 20-node Hadoop cluster, with 18 slave nodes and 2 master nodes running HDFS High Availability (HA). You want to minimize the chance of data loss in your cluster. What should you do?

  • A. Add another master node to increase the number of nodes running the JournalNode which increases the number of machines available to HA to create a quorum

  • B. Configure the cluster’s disk drives with an appropriate fault tolerant RAID level

  • C. Run the ResourceManager on a different master from the NameNode in order to load-share HDFS metadata processing

  • D. Run a Secondary NameNode on a different master from the NameNode in order to provide automatic recovery from a NameNode failure

  • E. Set an HDFS replication factor that provides data redundancy, protecting against failure

Answer: E
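For reference, the replication factor that provides this kind of block-level redundancy is a single hdfs-site.xml property (3 is the usual default; shown here as an illustrative fragment):

```xml
<!-- hdfs-site.xml: each block is stored on three DataNodes -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```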

NEW QUESTION 6
Your cluster is running MapReduce version 2 (MRv2) on YARN. Your ResourceManager is configured to use the FairScheduler. Now you want to configure your scheduler such that a new user on the cluster can submit jobs into their own queue at application submission. Which configuration should you set?

  • A. You can specify a new queue name when the user submits a job, and the new queue can be created dynamically if yarn.scheduler.fair.user-as-default-queue = false

  • B. yarn.scheduler.fair.user-as-default-queue = false and yarn.scheduler.fair.allow-undeclared-pools = true

  • C. You can specify a new queue name per application in the allocations file, and jobs are automatically assigned to the application queue

  • D. You can specify a new queue name when the user submits a job, and the new queue can be created dynamically if the property yarn.scheduler.fair.allow-undeclared-pools = true

Answer: D
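A minimal FairScheduler fragment for per-user queues might look like the following (a sketch for yarn-site.xml; yarn.scheduler.fair.user-as-default-queue already defaults to true):

```xml
<!-- yarn-site.xml: place each user's jobs in a queue named after the user,
     creating the queue on demand if it has not been declared -->
<property>
  <name>yarn.scheduler.fair.user-as-default-queue</name>
  <value>true</value>
</property>
<property>
  <name>yarn.scheduler.fair.allow-undeclared-pools</name>
  <value>true</value>
</property>
```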

NEW QUESTION 7
You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN. You have no dfs.hosts entries in your hdfs-site.xml configuration file. You configure a new worker node by setting fs.default.name in its configuration files to point to the NameNode on your cluster, and you start the DataNode daemon on that worker node.
What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?

  • A. Nothing; the worker node will automatically join the cluster when the DataNode daemon is started.

  • B. Without creating a dfs.hosts file or making any entries, run the command hadoop dfsadmin –refreshHadoop on the NameNode

  • C. Create a dfs.hosts file on the NameNode, add the worker node’s name to it, then issue the command hadoop dfsadmin –refreshNodes on the NameNode

  • D. Restart the NameNode

Answer: A
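When an include file *is* in use, the workflow referenced in option C looks roughly like this (the file path below is an illustrative assumption):

```xml
<!-- hdfs-site.xml on the NameNode: only hosts listed in the include file
     may register as DataNodes -->
<property>
  <name>dfs.hosts</name>
  <value>/etc/hadoop/conf/dfs.hosts</value>
</property>
```

After adding the worker's hostname to that file, run `hdfs dfsadmin -refreshNodes` (or the older `hadoop dfsadmin -refreshNodes`) on the NameNode so it rereads the list.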

NEW QUESTION 8
Assume you have a file named foo.txt in your local directory. You issue the following three commands:
hadoop fs -mkdir input
hadoop fs -put foo.txt input/foo.txt
hadoop fs -put foo.txt input
What happens when you issue that third command?

  • A. The write succeeds, overwriting foo.txt in HDFS with no warning

  • B. The write silently fails

  • C. The file is uploaded and stored as a plain file named input

  • D. You get an error message telling you that input is not a directory

  • E. You get an error message telling you that foo.txt already exists

  • F. The file is not written to HDFS

  • G. You get an error message telling you that foo.txt already exists, and asking you if you would like to overwrite

  • H. You get a warning that foo.txt is being overwritten

Answer: E
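The behavior can be modeled with a toy sketch (this is not the real Hadoop API; it only mimics the no-overwrite-by-default semantics of `hadoop fs -put`, which requires `-f` to replace an existing file):

```python
# Toy model of 'hadoop fs -put': the filesystem is a dict mapping paths to
# "dir" or "file". Putting to a path that already exists fails unless
# force (-f) is given.
def put(fs, local_name, dest, force=False):
    # If dest is an existing directory, the target is dest/local_name.
    target = dest + "/" + local_name if fs.get(dest) == "dir" else dest
    if target in fs and not force:
        raise FileExistsError("put: `%s': File exists" % target)
    fs[target] = "file"

fs = {}
fs["input"] = "dir"                  # hadoop fs -mkdir input
put(fs, "foo.txt", "input/foo.txt")  # first put succeeds
try:
    put(fs, "foo.txt", "input")      # third command: input/foo.txt exists
except FileExistsError as e:
    print(e)                          # put: `input/foo.txt': File exists
```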

NEW QUESTION 9
Your Hadoop cluster is configured with HDFS and MapReduce version 2 (MRv2) on YARN. Can you configure a worker node to run a NodeManager daemon but not a DataNode daemon and still have a functional cluster?

  • A. Yes. The daemon will receive data from the NameNode to run Map tasks

  • B. Yes. The daemon will get data from another (non-local) DataNode to run Map tasks

  • C. Yes. The daemon will receive Reduce tasks only

Answer: B

NEW QUESTION 10
Which YARN process runs as "container 0" of a submitted job and is responsible for resource requests?

  • A. ResourceManager

  • B. NodeManager

  • C. JobHistoryServer

  • D. ApplicationMaster

  • E. JobTracker

  • F. ApplicationManager

Answer: D

NEW QUESTION 11
Which YARN daemon or service negotiates map and reduce Containers from the Scheduler, tracking their status and monitoring for progress?

  • A. ResourceManager

  • B. ApplicationMaster

  • C. NodeManager

  • D. ApplicationManager

Answer: B

Explanation:
Reference: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_using-apache-hadoop/content/yarn_overview.html

NEW QUESTION 12
Which three basic configuration parameters must you set to migrate your cluster from MapReduce1 (MRv1) to MapReduce v2 (MRv2)?

  • A. Configure the NodeManager hostname and enable services on YARN by setting the following property in yarn-site.xml:<name>yarn.nodemanager.hostname</name><value>your_nodeManager_hostname</value>

  • B. Configure the number of map tasks per job on YARN by setting the following property in mapred-site.xml:<name>mapreduce.job.maps</name><value>2</value>

  • C. Configure MapReduce as a framework running on YARN by setting the following property in mapred-site.xml:<name>mapreduce.framework.name</name><value>yarn</value>

  • D. Configure the ResourceManager hostname and enable node services on YARN by setting the following property in yarn-site.xml:<name>yarn.resourcemanager.hostname</name><value>your_resourceManager_hostname</value>

  • E. Configure a default scheduler to run on YARN by setting the following property in mapred-site.xml:<name>mapreduce.jobtracker.taskScheduler</name><value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>

  • F. Configure the NodeManager to enable MapReduce services on YARN by adding following property in yarn-site.xml:<name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value>

Answer: CDF
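Taken together, the minimal migration settings usually cited are one mapred-site.xml property and two yarn-site.xml properties (an illustrative sketch; the hostname is a placeholder):

```xml
<!-- mapred-site.xml: run MapReduce as a YARN application -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<!-- yarn-site.xml: where the ResourceManager runs, plus the shuffle service -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>your_resourceManager_hostname</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```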

NEW QUESTION 13
Which procedures must you follow if you are running a Hadoop cluster with a single NameNode and six DataNodes, and you want to change a configuration parameter so that it affects all six DataNodes?

  • A. You must modify the configuration file on each of the six DataNode machines.

  • B. You must restart the NameNode daemon to apply the changes to the cluster.

  • C. You must restart all six DataNode daemons to apply the changes to the cluster.

  • D. You don't need to restart any daemons, as they will pick up changes automatically.

  • E. You must modify the configuration files on the NameNode only. DataNodes read their configuration from the master nodes.

Answer: AC

NEW QUESTION 14
You are upgrading a Hadoop cluster from HDFS and MapReduce version 1 (MRv1) to one running HDFS and MapReduce version 2 (MRv2) on YARN. You want to set and enforce a block size of 128 MB for all new files written to the cluster after the upgrade. What should you do?

  • A. Set dfs.block.size to 128M on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final.

  • B. Set dfs.block.size to 134217728 on all the worker nodes, on all client machines, and on the NameNode, and set the parameter to final.

  • C. Set dfs.block.size to 134217728 on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode.

  • D. Set dfs.block.size to 128M on all the worker nodes and client machines, and set the parameter to final. You do not need to set this value on the NameNode.

  • E. You cannot enforce this, since client code can always override this value.

Answer: C
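A corresponding hdfs-site.xml fragment might look like this (a sketch; 134217728 bytes = 128 x 1024 x 1024, and dfs.blocksize is the Hadoop 2 name for the older dfs.block.size):

```xml
<!-- hdfs-site.xml on client and worker nodes; <final> blocks job-level overrides -->
<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>
  <final>true</final>
</property>
```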

NEW QUESTION 15
You are running a Hadoop cluster with MapReduce version 2 (MRv2) on YARN. You consistently see that MapReduce map tasks on your cluster are running slowly because of excessive JVM garbage collection. How do you increase the JVM heap size to 3 GB to optimize performance?

  • A. yarn.application.child.java.opts=-Xmx3072m

  • B. yarn.application.child.java.opts=-3072m

  • C. mapreduce.map.java.opts=-Xmx3072m

  • D. mapreduce.map.java.opts=-Xms3072m

Answer: C

Explanation:
Reference: http://hortonworks.com/blog/how-to-plan-and-configure-yarn-in-hdp-2-0/
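A matching mapred-site.xml fragment might look like the following (the 4 GB container size is an illustrative assumption; the container memory must be larger than the task heap it hosts):

```xml
<!-- mapred-site.xml: 3 GB map-task heap inside a 4 GB container -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx3072m</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>4096</value>
</property>
```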

NEW QUESTION 16
Your Hadoop cluster contains nodes in three racks. You have NOT configured the dfs.hosts property in the NameNode's configuration file. What is the result?

  • A. No new nodes can be added to the cluster until you specify them in the dfs.hosts file

  • B. Presented with a blank dfs.hosts property, the NameNode will permit DataNodes specified in mapred.hosts to join the cluster

  • C. Any machine running the DataNode daemon can immediately join the cluster

  • D. The NameNode will update the dfs.hosts property to include machines running the DataNode daemon on the next NameNode reboot or with the command dfsadmin -refreshNodes

Answer: C

NEW QUESTION 17
You have converted your Hadoop cluster from a MapReduce 1 (MRv1) architecture to a MapReduce 2 (MRv2) on YARN architecture. Your developers are accustomed to specifying the number of map and reduce tasks (resource allocation) when they run jobs. A developer wants to know how to specify the number of reduce tasks when a specific job runs. Which method should you tell that developer to implement?

  • A. Developers specify reduce tasks in the exact same way for both MapReduce version 1 (MRv1) and MapReduce version 2 (MRv2) on YARN. Thus, executing -D mapreduce.job.reduces=2 will specify 2 reduce tasks.

  • B. In YARN, the ApplicationMaster is responsible for requesting the resources required for a specific job. Thus, executing -D yarn.applicationmaster.reduce.tasks=2 will specify that the ApplicationMaster launch two task containers on the worker nodes.

  • C. In YARN, resource allocation is a function of megabytes of memory in multiples of 1024 MB. Thus, they should specify the amount of memory they need by executing -D mapreduce.reduce.memory-mb=2048

  • D. In YARN, resource allocation is a function of virtual cores specified by the ApplicationMaster making requests to the NodeManager, where a reduce task is handled by a single container (and thus a single virtual core). Thus, the developer needs to specify the number of virtual cores to the NodeManager by executing -D yarn.nodemanager.cpu-vcores=2

  • E. MapReduce version 2 (MRv2) on YARN abstracts resource allocation away from the idea of "tasks" into memory and virtual cores, thus eliminating the need for a developer to specify the number of reduce tasks, and indeed preventing the developer from specifying the number of reduce tasks.

Answer: A
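For reference, specifying a reduce-task count on the command line looks roughly like the following (the jar name, driver class, and paths are placeholders; with ToolRunner, -D generic options go after the class name and before the job arguments):

```shell
hadoop jar my-job.jar MyDriver -D mapreduce.job.reduces=2 /input /output
```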

NEW QUESTION 18
......

P.S. Easily pass the CCA-505 exam with these 45 Q&As in dumps and PDF versions. Welcome to download the newest Simply pass CCA-505 dumps: https://www.passcertsure.com/{productsort}-test/ (45 New Questions)