Are you preparing to attend an interview? Then do not worry. If you are preparing for a Hadoop Administration job interview and don't know how to crack the interview or what level and difficulty of questions will be asked, then go through this Wisdomjobs Hadoop Administration interview questions and answers page to crack your job interview. A Hadoop Administrator is one who administers and manages Hadoop clusters and all other resources in the Hadoop ecosystem. The role of a Hadoop Admin is mainly associated with tasks that involve installing and monitoring Hadoop clusters, and the admin is responsible for keeping those clusters safe. Below is the list of frequently asked Hadoop Administration interview questions and answers which gets you ready to face the interviews:
Question 1. When Should You Use The Fair Scheduler And When Should You Use The Capacity Scheduler?
Answer :
Fair Scheduling is the process in which resources are assigned to jobs such that all jobs get, on average, an equal share of resources over time.
Fair Scheduler can be used under the following circumstances:
i) If you want the jobs to make equal progress instead of following the FIFO order, then you must use Fair Scheduling.
ii) If you have slow connectivity and data locality plays a vital role and makes a significant difference to the job runtime then you must use Fair Scheduling.
iii) Use Fair Scheduling if there is a lot of variability in the utilization between pools.
The Capacity Scheduler runs the Hadoop MapReduce cluster as a shared, multi-tenant cluster to maximize the utilization and throughput of the cluster.
Capacity Scheduler can be used under the following circumstances:
i) If the jobs require scheduling determinism, then the Capacity Scheduler can be useful.
ii) The Capacity Scheduler's memory-based scheduling method is useful if the jobs have varying memory requirements.
iii) If you know the cluster utilization and workload well and want to enforce resource allocation, then use the Capacity Scheduler.
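For reference, on a classic (Hadoop 1.x) MapReduce cluster the active scheduler is selected through the mapred.jobtracker.taskScheduler property in mapred-site.xml; the sketch below simply maps each choice to its scheduler class and is illustrative, not a complete configuration:

# conf/mapred-site.xml -> mapred.jobtracker.taskScheduler selects the scheduler:
#   Fair Scheduler:     org.apache.hadoop.mapred.FairScheduler
#   Capacity Scheduler: org.apache.hadoop.mapred.CapacityTaskScheduler
# (the default is the FIFO scheduler, org.apache.hadoop.mapred.JobQueueTaskScheduler)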
Question 2. What Are The Daemons Required To Run A Hadoop Cluster?
Answer :
NameNode, DataNode, TaskTracker and JobTracker
Question 3. How Will You Restart A Namenode?
Answer :
The easiest way of doing this is to run the stop-all.sh shell script to stop all the running daemons, and once this is done, run start-all.sh to bring the NameNode (along with the other daemons) back up.
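A minimal sketch, assuming the Hadoop 1.x bin/ scripts are on the PATH; to restart only the NameNode rather than the whole cluster, hadoop-daemon.sh can be used instead:

stop-all.sh                        # stop all Hadoop daemons
start-all.sh                       # start them again, including the NameNode

hadoop-daemon.sh stop namenode     # alternative: restart just the NameNode
hadoop-daemon.sh start namenode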
Question 4. Explain About The Different Schedulers Available In Hadoop.?
Answer :
FIFO Scheduler - This scheduler does not consider the heterogeneity in the system but orders the jobs based on their arrival times in a queue.
COSHH - This scheduler considers the workload, cluster and the user heterogeneity for scheduling decisions.
Fair Sharing - This Hadoop scheduler defines a pool for each user. The pool contains a number of map and reduce slots on a resource. Each user can use their own pool to execute the jobs. A sketch of a pool definition follows this list.
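As a rough illustration, the Hadoop 1.x Fair Scheduler reads pool definitions from an allocations file (its location is set by the mapred.fairscheduler.allocation.file property); the pool name and slot counts below are assumptions for illustration only:

cat > conf/fair-scheduler-allocations.xml <<'EOF'
<allocations>
  <pool name="analytics">        <!-- hypothetical pool name -->
    <minMaps>10</minMaps>        <!-- guaranteed map slots -->
    <minReduces>5</minReduces>   <!-- guaranteed reduce slots -->
  </pool>
</allocations>
EOF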
Question 5. List Few Hadoop Shell Commands That Are Used To Perform A Copy Operation.?
Answer :
hadoop fs -put, hadoop fs -copyFromLocal, hadoop fs -copyToLocal, hadoop fs -get and hadoop fs -cp are the shell commands commonly used to perform copy operations.
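A minimal usage sketch (all paths here are hypothetical):

hadoop fs -copyFromLocal report.txt /user/data/        # local -> HDFS
hadoop fs -copyToLocal /user/data/report.txt /tmp/     # HDFS -> local
hadoop fs -cp /user/data/report.txt /user/backup/      # HDFS -> HDFS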
Question 6. What Is Jps Command Used For?
Answer :
jps command is used to verify whether the daemons that run the Hadoop cluster are working or not. The output of jps command shows the status of the NameNode, Secondary NameNode, DataNode, TaskTracker and JobTracker.
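For example, running jps on a healthy single-node cluster prints one line per daemon; the process IDs below are illustrative and will differ on every machine:

$ jps
2387 NameNode
2456 DataNode
2519 SecondaryNameNode
2601 JobTracker
2688 TaskTracker
2754 Jps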
Question 7. What Are The Most Important Hardware Considerations When Deploying Hadoop In A Production Environment?
Answer :
Memory - The system's memory requirements will vary between the worker services and management services based on the application.
Operating System - A 64-bit operating system avoids any restrictions being imposed on the amount of memory that can be used on worker nodes.
Storage - It is preferable to design a Hadoop platform that moves the compute activity to the data, to achieve scalability and high performance.
Capacity - Large Form Factor (3.5") disks cost less and store more, when compared to Small Form Factor disks.
Network - Two TOR (top-of-rack) switches per rack provide better redundancy.
Computational Capacity - This can be determined by the total number of MapReduce slots available across all the nodes within a Hadoop cluster.
Question 8. How Many Namenodes Can You Run On A Single Hadoop Cluster?
Answer :
Only one.
Question 9. What Happens When The Namenode On The Hadoop Cluster Goes Down?
Answer :
The file system goes offline whenever the NameNode is down.
Question 10. What Is The Hadoop-env.sh File Used For?
Answer :
This file provides the environment for Hadoop to run and consists of the following variables: HADOOP_CLASSPATH, JAVA_HOME and HADOOP_LOG_DIR. The JAVA_HOME variable must be set for Hadoop to run.
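A minimal sketch of the relevant lines (Hadoop 1.x conf/ layout; the actual paths are assumptions that depend on the machine):

# conf/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-6-sun       # must be set; path is illustrative
export HADOOP_LOG_DIR=/var/log/hadoop          # where daemon logs are written
export HADOOP_CLASSPATH=/opt/extra-jars/*      # extra classpath entries, if any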
Question 11. How Can You Check Whether The Namenode Is Running Or Not?
Answer :
Use the command: /etc/init.d/hadoop-0.20-namenode status.
Question 12. With A Block Size Of 64 Mb, How Many Input Splits Are Created For Files Of 127 Mb, 65 Mb And 64 Kb?
Answer :
2 splits each for the 127 MB and 65 MB files and 1 split for the 64 KB file: with a 64 MB block size, the 127 MB file spans two blocks (64 MB + 63 MB), the 65 MB file spans two blocks (64 MB + 1 MB), and the 64 KB file fits within a single block.
Question 13. Which Command Is Used To Verify If The Hdfs Is Corrupt Or Not?
Answer :
The hadoop fsck (file system check) command is used to check HDFS for missing or corrupt blocks.
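For example (a minimal sketch; the path / checks the whole namespace):

hadoop fsck /                             # summary health report for all of HDFS
hadoop fsck / -files -blocks -locations   # per-file block and location detail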
Question 14. List Some Use Cases Of The Hadoop Ecosystem?
Answer :
Text Mining, Graph Analysis, Semantic Analysis, Sentiment Analysis, Recommendation Systems.
Question 15. How Can You Kill A Hadoop Job?
Answer :
hadoop job -kill <job-id>
Question 16. I Want To See All The Jobs Running In A Hadoop Cluster. How Can You Do This?
Answer :
The command hadoop job -list gives the list of jobs running in a Hadoop cluster.
Question 17. Is It Possible To Copy Files Across Multiple Hadoop Clusters? If Yes, How?
Answer :
Yes, it is possible to copy files across multiple Hadoop clusters, and this can be achieved using distributed copy. The DistCP command is used for intra-cluster or inter-cluster copying.
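A minimal sketch of an inter-cluster copy (both NameNode addresses and paths are assumptions):

hadoop distcp hdfs://namenode1:8020/source/dir hdfs://namenode2:8020/dest/dir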
Question 18. Which Is The Best Operating System To Run Hadoop?
Answer :
Linux is the preferred operating system for running Hadoop, with Ubuntu being one of the most popular distributions. Hadoop can also be run on Windows, but this leads to several problems and is not recommended.
Question 19. What Are The Network Requirements To Run Hadoop?
Answer :
Hadoop requires a password-less SSH connection between the master and all the slave machines, because the cluster start-up scripts use SSH to launch the daemons on the worker nodes.
Question 20. The Mapred.output.compress Property Is Set To True So That Job Output Is Compressed. How Will You Disable Compression For A Particular Job?
Answer :
If the user does not want to compress the data for a particular job, then they should create their own configuration file and set the mapred.output.compress property to false. This configuration file should then be loaded as a resource into the job.
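Alternatively, the property can be overridden on the command line for a single run (a minimal sketch, assuming the job's driver class uses ToolRunner/GenericOptionsParser so that -D options are picked up; the jar and class names are hypothetical):

hadoop jar my-job.jar com.example.MyJob -D mapred.output.compress=false /input /output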
Question 21. What Is The Best Practice To Deploy A Secondary Namenode?
Answer :
It is always better to deploy a secondary NameNode on a separate standalone machine. When the secondary NameNode is deployed on a separate machine it does not interfere with the operations of the primary node.
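As a rough sketch, on a Hadoop 1.x cluster the machine on which start-dfs.sh launches the Secondary NameNode is the one listed in the conf/masters file (the hostname below is an assumption):

echo "snn-host.example.com" > conf/masters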
Question 22. How Often Should The Namenode Be Reformatted?
Answer :
The NameNode should never be reformatted. Doing so will result in complete data loss. NameNode is formatted only once at the beginning after which it creates the directory structure for file system metadata and namespace ID for the entire file system.
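For reference, the one-time format command is shown below; it must only ever be run when setting up a brand-new cluster, since reformatting destroys all existing HDFS metadata:

hadoop namenode -format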
Question 23. If Hadoop Spawns 100 Tasks For A Job And One Of The Job Fails. What Does Hadoop Do?
Answer :
The task will be started again on a new TaskTracker, and if it fails more than 4 times, which is the default setting (the default value can be changed), the job will be killed.
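The retry limit is controlled per job by the Hadoop 1.x properties mapred.map.max.attempts and mapred.reduce.max.attempts (both default to 4); a minimal sketch of raising them for one run, assuming a ToolRunner-based driver with hypothetical jar and class names:

hadoop jar my-job.jar com.example.MyJob \
  -D mapred.map.max.attempts=6 \
  -D mapred.reduce.max.attempts=6 \
  /input /output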
Question 24. How Can You Add And Remove Nodes From The Hadoop Cluster?
Answer :
New nodes are added by listing them in the slaves file (and in the include file referenced by dfs.hosts, if one is used) and then starting the DataNode and TaskTracker daemons on them. Nodes are removed by adding them to the exclude file referenced by dfs.hosts.exclude and running hadoop dfsadmin -refreshNodes, so that the NameNode decommissions them gracefully; see the sketch below.
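A minimal decommissioning sketch (the hostname and the exclude-file path are assumptions; the path must match what dfs.hosts.exclude points to):

echo "node05.example.com" >> conf/excludes
hadoop dfsadmin -refreshNodes    # tell HDFS to start decommissioning the node
hadoop mradmin -refreshNodes     # refresh the MapReduce side as well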
Question 25. You Increase The Replication Factor On A Cluster Holding A Huge Volume Of Data, But The Data Does Not Appear To Be Replicated Right Away. What Has Gone Wrong?
Answer :
Nothing has actually gone wrong. With a huge volume of data, replication takes time proportional to the data size, because the cluster has to copy the data, and it might take a few hours.
Question 26. Explain About The Different Configuration Files And Where Are They Located.?
Answer :
The configuration files are located in “conf” sub directory. Hadoop has 3 different Configuration files- hdfs-site.xml, core-site.xml and mapred-site.xml.
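A quick map of which key property lives in which file (Hadoop 1.x property names; the values shown are illustrative assumptions):

# conf/core-site.xml   : fs.default.name    -> hdfs://namenode:8020   (default file system)
# conf/hdfs-site.xml   : dfs.replication    -> 3                      (block replication factor)
# conf/mapred-site.xml : mapred.job.tracker -> jobtracker:8021        (JobTracker address)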