About the exam
Dear Participant,
Greetings! You have completed the "Final Exam" exam. At this juncture, it is important for you to understand your strengths and focus on them to achieve the best results. We present here a snapshot of your performance in the "Final Exam" exam in terms of the marks scored in each section, your question-wise response pattern, and a difficulty-wise analysis of your performance.
This Report consists of the following sections that can be accessed using the left navigation panel:
Overall Performance: This part of the report shows a summary of the marks scored across all sections of the exam and a comparison of your performance across sections.
Section-wise Performance: You can click on a section name in the left navigation panel to check your performance in that section. Section-wise performance includes the details of your response at each question level and difficulty-wise analysis of your performance for that section.
NOTE : For Short Answer, Subjective, Typing and Programming type questions, participants will not be able to view the Bar Chart Report in the Performance Analysis.
Subject | Questions Attempted | Correct | Score
Final | 40/99 | 31 | 31
Marks Obtained Subject Wise: Final, 100%
NOTE : Subjects having negative marks are not considered in the pie chart. The pie chart will not be shown if all subjects contain 0 marks.
Final
The Final section comprises a total of 99 questions with the following difficulty-level distribution:
Difficulty Level No. of questions
Easy 0
Moderate 99
Hard 0
Question wise details
Legend : Not Evaluated | Evaluated | Correct | Incorrect | Not Attempted | Marked For Review | Correct Option | Your Option
Question Details
Q1.Key/Value is considered as the Hadoop data format.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : True | Option 2 : False
Q2.What kind of servers are used for creating a hadoop cluster?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : Server grade machines. | Option 2 : Commodity hardware. | Option 3 : Only supercomputers | Option 4 : None of the above.
Q3.Hadoop was developed by:
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : Doug Cutting | Option 2 : Lars George | Option 3 : Tom White | Option 4 : Eric Sammer
Q4.One of the features of hadoop is you can achieve parallelism.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : False | Option 2 : True
Q5.Hadoop can only work with structured data.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : False | Option 2 : True
Q6.Hadoop cluster can scale out:
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 2 | Option 1 : By upgrading existing servers | Option 2 : By increasing the area of the cluster. | Option 3 : By downgrading existing servers | Option 4 : By adding more hardware
Q7.Hadoop can solve only use cases involving data from Social media.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : True | Option 2 : False
Q8.Hadoop can be utilized for demographic analysis.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : True | Option 2 : False
Q9.Hadoop is inspired by which file system?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : AFS | Option 2 : GFS | Option 3 : MPP | Option 4 : None of the above.
Q10.For Apache Hadoop one needs licensing before leveraging it.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : True | Option 2 : False
Q11.HDFS runs in the same namespace as that of local filesystem.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : False | Option 2 : True
Q12.HDFS follows a master-slave architecture.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : False | Option 2 : True
Q13.Namenode only responds to:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : FTP calls | Option 2 : SFTP calls. | Option 3 : RPC calls | Option 4 : MPP calls
Q14.Perfect balancing can be achieved in a Hadoop cluster.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : False | Option 2 : True
Q15.What does the Namenode periodically expect from Datanodes?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : EditLogs | Option 2 : Block report and Status | Option 3 : FSImages | Option 4 : None of the above
Q16.After a client requests the JobTracker to run an application, whom does the JT contact?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 3 | Option 1 : DataNodes | Option 2 : Tasktracker | Option 3 : Namenode | Option 4 : None of the above.
Q17.Interaction with HDFS is done through which script?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Fsadmin | Option 2 : Hive | Option 3 : Mapreduce | Option 4 : Hadoop
Q18.What is the usage of put command in HDFS?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : It deletes files from one file system to another. | Option 2 : It copies files from one file system to another | Option 3 : It puts configuration parameters in configuration files | Option 4 : None of the above.
Q19.Each directory or file has three kinds of permissions:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : read, write, execute | Option 2 : read, write, run | Option 3 : read, write, append | Option 4 : read, write, update
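The correct permission set referenced in Q19 (read, write, execute) mirrors POSIX-style mode bits, which HDFS reuses for its files and directories. The sketch below is illustrative only (the function name is not part of any Hadoop API); it decodes an octal mode such as the one `hdfs dfs -chmod 754` would set:

```python
import stat

def permission_string(mode: int) -> str:
    """Decode an octal permission mode into rwx triplets for
    owner, group, and others (e.g. 0o754 -> 'rwxr-xr--')."""
    bits = [
        (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),
        (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),
        (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),
    ]
    # Each position shows its letter if the bit is set, '-' otherwise.
    return "".join(ch if mode & bit else "-" for bit, ch in bits)

print(permission_string(0o754))  # rwxr-xr--
```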
Q20.Mapper output is written to HDFS.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : False | Option 2 : True
Q21.A Reducer writes its output in what format.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Key/Value | Option 2 : Text files | Option 3 : Sequence files | Option 4 : None of the above
Q22.Which of the following is a pre-requisite for hadoop cluster installation?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 3 | Option 1 : Gather Hardware requirement | Option 2 : Gather network requirement | Option 3 : Both | Option 4 : None of the above
Q23.Nagios and Ganglia are tools provided by:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Cloudera | Option 2 : Hortonworks | Option 3 : MapR | Option 4 : None of the above
Q24.Which of the following are cloudera management services?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Activity Monitor | Option 2 : Host Monitor | Option 3 : Both | Option 4 : None of the above
Q25.Which of the following is used to collect information about activities running in a hadoop cluster?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Report Manager | Option 2 : Cloudera Navigator | Option 3 : Activity Monitor | Option 4 : All of the above
Q26.Which of the following aggregates events and makes them available for alerting and searching?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Event Server | Option 2 : Host Monitor | Option 3 : Activity Monitor | Option 4 : None of the above
Q27.Which tab in the cloudera manager is used to add a service?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Hosts | Option 2 : Activities | Option 3 : Services | Option 4 : None of the above
Q28.Which of the following provides http access to HDFS?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3 | Option 1 : HttpsFS | Option 2 : Name Node | Option 3 : Data Node | Option 4 : All of the above
Q29.Which of the following is used to balance a load in case of addition of a new node and in case of a failure?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Gateway | Option 2 : Balancer | Option 3 : Secondary Name Node | Option 4 : None of the above
Q30.Which of the following is used to designate a host for a particular service?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Gateway | Option 2 : Balancer | Option 3 : Secondary Name Node | Option 4 : All of the above
Q31.Which of the following are the configuration files?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Core-site.xml | Option 2 : Hdfs-site.xml | Option 3 : Both | Option 4 : None of the above
Q32.Which are the commercial leading Hadoop distributors in the market?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Cloudera, Intel, MapR | Option 2 : MapR, Cloudera, Teradata | Option 3 : Hortonworks, IBM, Cloudera | Option 4 : MapR, Hortonworks, Cloudera
Q33.What are the core Apache components enclosed in its bundle?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : HDFS, Map-reduce, YARN, Hadoop Commons | Option 2 : HDFS, NFS, Combiners, Utility Package | Option 3 : HDFS, Map-reduce, Hadoop core | Option 4 : MapR-FS, Map-reduce, YARN, Hadoop Commons
Q34.Apart from its basic components Apache Hadoop also provides:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Apache Hive | Option 2 : Apache Pig | Option 3 : Apache Zookeeper | Option 4 : All the above
Q35.Rolling upgrades is not possible in which of the following?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : Cloudera | Option 2 : Hortonworks | Option 3 : MapR | Option 4 : Possible in all of the above
Q36.In which of the following is HBase latency low with respect to the others:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Cloudera | Option 2 : Hortonworks | Option 3 : MapR | Option 4 : IBM BigInsights
Q37.MetaData Replication is possible in:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Cloudera | Option 2 : Hortonworks | Option 3 : MapR | Option 4 : Teradata
Q38.Disaster recovery management is not handled by:
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 2 | Option 1 : Hortonworks | Option 2 : MapR | Option 3 : Cloudera | Option 4 : Amazon Web Services EMR
Q39.Mirroring concept is possible in Cloudera.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q40.Does MapR support only Streaming Data Ingestion?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1 | Option 1 : True | Option 2 : False
Q41.Hcatalog is open source metadata framework developed by:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Cloudera | Option 2 : MapR | Option 3 : Hortonworks | Option 4 : Amazon EMR
Q42.BDA can be applied to gain knowledge of user behaviour and prevent customer churn in the Media and Telecommunications industry.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q43.What is the correct sequence of Big Data Analytics stages?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Big Data Production > Big Data Consumption > Big Data Management | Option 2 : Big Data Management > Big Data Production > Big Data Consumption | Option 3 : Big Data Production > Big Data Management > Big Data Consumption | Option 4 : None of these
Q44.Big Data Consumption involves:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Mining | Option 2 : Analytic | Option 3 : Search and Enrichment | Option 4 : All of the above
Q45.Big Data Integration and Data Mining are the phases of Big Data Management.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1 | Option 1 : True | Option 2 : False
Q46.RDBMS, Social Media data, Sensor data are the possible input sources to a big data environment.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : True | Option 2 : False
Q47.Which of the following types of data is it not possible to store in a big data environment and then process/parse?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 4 | Option 1 : XML/JSON type of data | Option 2 : RDBMS | Option 3 : Semi-structured data | Option 4 : None of the above
Q48.A software framework for writing applications that process vast amounts of data in parallel is known as:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Map-reduce | Option 2 : Hive | Option 3 : Impala | Option 4 : None of the above
Q49.In proper flow of the map-reduce, reducer will always be executed after mapper.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : True | Option 2 : False
Q50.Which of the following are the features of Map-reduce?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 4 | Option 1 : Automatic parallelization and distribution | Option 2 : Fault-Tolerance | Option 3 : Platform independent | Option 4 : All of the above
Q51.Where does the intermediate output of the mapper get written?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 4 | Option 1 : Local disk of node where it is executed. | Option 2 : HDFS of node where it is executed. | Option 3 : On a remote server outside the cluster. | Option 4 : Mapper output gets written to the local disk of Name node machine.
Q52.Reducer is required in map-reduce job for:
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : It combines all the intermediate data collected from mappers. | Option 2 : It reduces the amount of data by half of what is supplied to it. | Option 3 : Both a and b | Option 4 : None of the above
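Q52's point, that the reducer combines all the intermediate data collected from the mappers, can be illustrated with a tiny in-memory word count. This is a hedged sketch of the MapReduce flow, not Hadoop's actual API; all function names are illustrative:

```python
from collections import defaultdict

def mapper(line):
    # Emit one (key, value) pair per word, as a word-count mapper would.
    for word in line.split():
        yield word, 1

def reducer(key, values):
    # Combine all intermediate values collected for a single key.
    return key, sum(values)

def run_job(lines):
    # "Shuffle and sort": group intermediate pairs by key, sorted by key,
    # then hand each group to the reducer.
    groups = defaultdict(list)
    for line in lines:
        for k, v in mapper(line):
            groups[k].append(v)
    return dict(reducer(k, vs) for k, vs in sorted(groups.items()))

print(run_job(["big data", "big deal"]))  # {'big': 2, 'data': 1, 'deal': 1}
```

The sorted grouping step also illustrates Q57: reducers receive keys, with their value lists, in sorted key order.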
Q53.Output of every map is passed to which component.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : Partitioner | Option 2 : Combiner | Option 3 : Mapper | Option 4 : None of the above
Q54.Data Locality concept is used for:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Localizing data | Option 2 : Avoiding network traffic in hadoop system | Option 3 : Both A and B | Option 4 : None of the above
Q55.The number of files in the output of a map-reduce job depends on:
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : No of reducer used for the process | Option 2 : Size of the data | Option 3 : Both A and B | Option 4 : None of the above
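Q55's answer (the number of output files tracks the number of reducers) follows from partitioning: each key is routed to exactly one reducer, and each reducer writes one part-* file. Below is a sketch in the spirit of Hadoop's default HashPartitioner; a Java-style string hash is used because Python's built-in hash() is salted per process, and both function names are illustrative:

```python
def hash_code(s: str) -> int:
    # Deterministic Java-style string hash, masked to stay non-negative.
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0x7FFFFFFF
    return h

def partition(key: str, num_reducers: int) -> int:
    # Route a key to one of num_reducers partitions; all records sharing
    # a key land in the same reducer and hence the same part-* file.
    return hash_code(key) % num_reducers
```

Because the routing is deterministic, every occurrence of a given key goes to the same reducer, which is what lets the reducer see the complete value list for that key.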
Q56.Input format of the map-reduce job is specified in which class?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 3 | Option 1 : Combiner class | Option 2 : Reducer class | Option 3 : Mapper class | Option 4 : Any of the above
Q57.The intermediate keys, and their value lists, are passed to the Reducer in sorted key order.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : True | Option 2 : False
Q58.In which stage of the map-reduce job data is transferred between mapper and reducer?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Transfer | Option 2 : Combiner | Option 3 : Distributed Cache | Option 4 : Shuffle and Sort
Q59.Maximum three reducers can run at any time in a MapReduce Job.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : True | Option 2 : False
Q60.Functionality of the Jobtracker is to:
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : Coordinate the job run | Option 2 : Sorting the output | Option 3 : Both A and B | Option 4 : None of the above
Q61.The submit() method on Job creates an internal JobSubmitter instance and calls _____ on it.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : jobSubmitInternal() | Option 2 : internalJobSubmit() | Option 3 : submitJobInternal() | Option 4 : None of these
Q62.Which method polls the job's progress and after how many seconds?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : WaitForCompletion() and after each second | Option 2 : WaitForCompletion() after every 15 seconds | Option 3 : Not possible to poll | Option 4 : None of the above
Q63.Job Submitter tells the task tracker that the job is ready for execution.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q64.Hadoop 1.0 runs 3 instances of job tracker for parallel execution on a hadoop cluster.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q65.Map and Reduce tasks are created in job initialization phase.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q66.Heartbeats received after how many seconds help the job tracker decide regarding the health of a task tracker?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : After every 3 seconds | Option 2 : After every 1 second | Option 3 : After every 60 seconds | Option 4 : None of the above
Q67.Task tracker has assigned fixed number of slots for map and reduce tasks.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q68.To improve the performance of the map-reduce task, the jar that contains the map-reduce code is pushed to each slave node over HTTP.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q69.Map-reduce can take which type of format as input?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Text | Option 2 : CSV | Option 3 : Arbitrary | Option 4 : None of these
Q70.Input files can be located at hdfs or local system for map-reduce.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 2 | Option 1 : True | Option 2 : False
Q71.Is there any default InputFormat for input files in map-reduce process?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : KeyValueInputFormat | Option 2 : TextInputFormat | Option 3 : A and B | Option 4 : None of these
Q72.An InputFormat is a class that provides the following functionality:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Selects the files or other objects that should be used for input | Option 2 : Defines the InputSplits that break a file into tasks | Option 3 : Provides a factory for RecordReader objects that read the file | Option 4 : All of the above
Q73.An InputSplit describes a unit of work that comprises a ____ map task in a MapReduce program.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : One | Option 2 : Two | Option 3 : Three | Option 4 : None of these
Q74.The FileInputFormat and its descendants break a file up into ____MB chunks.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : 128 | Option 2 : 64 | Option 3 : 32 | Option 4 : 256
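Q74 refers to the classic 64 MB default chunk size of FileInputFormat: the number of input splits, and hence map tasks, for a file can be estimated by dividing its size by the split size. A small illustrative sketch (the function name is an assumption, not a Hadoop API):

```python
import math

BLOCK_MB = 64  # default FileInputFormat chunk size referenced in Q74

def num_splits(file_size_mb: float, split_mb: int = BLOCK_MB) -> int:
    # Each chunk of up to split_mb MB becomes one InputSplit, and each
    # InputSplit is processed by exactly one map task (see Q73).
    return max(1, math.ceil(file_size_mb / split_mb))

print(num_splits(200))  # 4 splits: 64 + 64 + 64 + 8 MB
```

This is also what lets several map tasks operate on a single file in parallel (Q75): each task reads only its own chunk.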
Q75.What allows several map tasks to operate on a single file in parallel?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Processing of a file in chunks | Option 2 : Configuration file properties | Option 3 : Both A and B | Option 4 : None of the above
Q76.The Record Reader is invoked ________ on the input until the entire InputSplit has been consumed.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 3 | Option 1 : Once | Option 2 : Twice | Option 3 : Repeatedly | Option 4 : None of these
Q77.Which of the following is KeyValueTextInputFormat?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : Key is separated from the value by Tab | Option 2 : Data is specified in binary sequence | Option 3 : Both A and B | Option 4 : None of the above
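Q77 describes KeyValueTextInputFormat, where the text before the first tab on each line is the key and the remainder is the value. A one-function sketch of that parsing rule (the function name is an assumption, not Hadoop's API):

```python
def parse_key_value(line: str, sep: str = "\t"):
    # Everything before the first separator is the key; the rest,
    # including any further separators, is the value.
    key, _, value = line.partition(sep)
    return key, value

print(parse_key_value("user42\tclicked\thome"))  # ('user42', 'clicked\thome')
```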
Q78.In the map-reduce programming model, mappers can communicate with each other:
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1 | Option 1 : True | Option 2 : False
Q79.User can define own partitioner class.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q80.The Output Format class is a factory for RecordWriter objects; these are used to write the individual records to the files as directed by the OutputFormat:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q81.Which of the following are part of Hadoop ecosystem.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Talend, MapR, NFS | Option 2 : Mysql, Shell | Option 3 : Pig, Hive, Hbase | Option 4 : None of the above
Q82.Default Metastore location for Hive is:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Mysql | Option 2 : Derby | Option 3 : PostgreSQL | Option 4 : None of the above
Q83.Extend the following class to write a User Defined Function in Hive.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : HiveMapper | Option 2 : Eval | Option 3 : UDF | Option 4 : None of the above
Q84.Which component of hadoop ecosystem supports updation?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Zookeeper | Option 2 : Hive | Option 3 : Pig | Option 4 : Hbase
Q85.Which hadoop component should be used if a join of dataset is required?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Hbase | Option 2 : Hive | Option 3 : Zookeeper | Option 4 : None of the above
Q86.Which hadoop component can be used for ETL?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Pig | Option 2 : Zookeeper | Option 3 : Hbase | Option 4 : None of the above
Q87.Which hadoop component is best suited for pulling data from the web?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Hive | Option 2 : Zookeeper | Option 3 : Hbase | Option 4 : Flume
Q88.Which hadoop component can be used to transfer data from relational DB to HDFS?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Zookeeper | Option 2 : Pig | Option 3 : Sqoop | Option 4 : None of the above
Q89.In an application, more than one hadoop component cannot be used on top of HDFS.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q90.Hbase supports join.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q91.Pig can work only with data present in HDFS.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q92.Which tool out of the following can be used for an OLTP application?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Pentaho | Option 2 : Hive | Option 3 : Hbase | Option 4 : None of the above
Q93.Which tool is best suited for real time writes?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Pig | Option 2 : Hive | Option 3 : Hbase | Option 4 : Cassandra
Q94.Which out of the following hadoop component is called as ETL of hadoop?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Pig | Option 2 : Hbase | Option 3 : Talend | Option 4 : None of the above
Q95.Hadoop can completely replace traditional DBs.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : True | Option 2 : False
Q96.Zookeeper can also be used for data transfer.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : False | Option 2 : True
Q97.Map-reduce cannot be tested on data/files present in local file system.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1 | Option 1 : True | Option 2 : False
Q98.Hive was developed by:
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 4 | Option 1 : Tom White | Option 2 : Cloudera | Option 3 : Doug Cutting | Option 4 : Facebook
Q99.Mrv1 programs cannot be run on top of clusters configured for Mrv2.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Subject | Questions Attempted | Correct | Score
Final | 40/99 | 17 | 17
Final
The Final section comprises a total of 99 questions with the following difficulty-level distribution:
Difficulty Level No. of questions
Easy 0
Moderate 99
Hard 0
Question wise details
Question Details
Q1.Key/Value is considered as the Hadoop data format.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : True | Option 2 : False
Q2.What kind of servers are used for creating a hadoop cluster?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Server grade machines. | Option 2 : Commodity hardware. | Option 3 : Only supercomputers | Option 4 : None of the above.
Q3.Hadoop was developed by:
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : Doug Cutting | Option 2 : Lars George | Option 3 : Tom White | Option 4 : Eric Sammer
Q4.One of the features of hadoop is you can achieve parallelism.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : False | Option 2 : True
Q5.Hadoop can only work with structured data.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : False | Option 2 : True
Q6.Hadoop cluster can scale out:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : By upgrading existing servers | Option 2 : By increasing the area of the cluster. | Option 3 : By downgrading existing servers | Option 4 : By adding more hardware
Q7.Hadoop can solve only use cases involving data from Social media.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : True | Option 2 : False
Q8.Hadoop can be utilized for demographic analysis.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q9.Hadoop is inspired by which file system?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3 | Option 1 : AFS | Option 2 : GFS | Option 3 : MPP | Option 4 : None of the above.
Q10.For Apache Hadoop one needs licensing before leveraging it.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : True | Option 2 : False
Q11.HDFS runs in the same namespace as that of local filesystem.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : False | Option 2 : True
Q12.HDFS follows a master-slave architecture.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2 | Option 1 : False | Option 2 : True
Q13.Namenode only responds to:
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 4 | Option 1 : FTP calls | Option 2 : SFTP calls. | Option 3 : RPC calls | Option 4 : MPP calls
Q14.Perfect balancing can be achieved in a Hadoop cluster.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 2 | Option 1 : False | Option 2 : True
Q15.What does the Namenode periodically expect from Datanodes?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : EditLogs | Option 2 : Block report and Status | Option 3 : FSImages | Option 4 : None of the above
Q16.After a client requests the JobTracker to run an application, whom does the JT contact?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : DataNodes | Option 2 : Tasktracker | Option 3 : Namenode | Option 4 : None of the above.
Q17.Interaction with HDFS is done through which script?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Fsadmin | Option 2 : Hive | Option 3 : Mapreduce | Option 4 : Hadoop
Q18.What is the usage of put command in HDFS?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : It deletes files from one file system to another. | Option 2 : It copies files from one file system to another | Option 3 : It puts configuration parameters in configuration files | Option 4 : None of the above.
Q19.Each directory or file has three kinds of permissions:
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1 | Option 1 : read, write, execute | Option 2 : read, write, run | Option 3 : read, write, append | Option 4 : read, write, update
Q20.Mapper output is written to HDFS.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 2 | Option 1 : False | Option 2 : True
Q21.A Reducer writes its output in what format.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Key/Value | Option 2 : Text files | Option 3 : Sequence files | Option 4 : None of the above
Q22.Which of the following is a pre-requisite for hadoop cluster installation?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 4 | Option 1 : Gather Hardware requirement | Option 2 : Gather network requirement | Option 3 : Both | Option 4 : None of the above
Q23.Nagios and Ganglia are tools provided by:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Cloudera | Option 2 : Hortonworks | Option 3 : MapR | Option 4 : None of the above
Q24.Which of the following are cloudera management services?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Activity Monitor | Option 2 : Host Monitor | Option 3 : Both | Option 4 : None of the above
Q25.Which of the following is used to collect information about activities running in a hadoop cluster?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1 | Option 1 : Report Manager | Option 2 : Cloudera Navigator | Option 3 : Activity Monitor | Option 4 : All of the above
Q26.Which of the following aggregates events and makes them available for alerting and searching?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : Event Server | Option 2 : Host Monitor | Option 3 : Activity Monitor | Option 4 : None of the above
Q27.Which tab in the cloudera manager is used to add a service?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 3 | Option 1 : Hosts | Option 2 : Activities | Option 3 : Services | Option 4 : None of the above
Q28.Which of the following provides http access to HDFS?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response : (not attempted) | Option 1 : HttpsFS | Option 2 : Name Node | Option 3 : Data Node | Option 4 : All of the above
Q29.Which of the following is used to balance a load in case of addition of a new node and in case of a failure?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : GatewayOption 2 : BalancerOption 3 : Secondary Name NodeOption 4 : None of the above
Q30.Which of the following is used to designate a host for a particular service?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : GatewayOption 2 : BalancerOption 3 : Secondary Name NodeOption 4 : All of the above
Q31.Which of the following are the configuration files?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Core-site.xmlOption 2 : Hdfs-site.xmlOption 3 : BothOption 4 : None of the above
Q32.Which are the commercial leading Hadoop distributors in the market?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3Option 1 : Cloudera , Intel, MapROption 2 : MapR, Cloudera, TeradataOption 3 : Hortonworks, IBM, ClouderaOption 4 : MapR, Hortonworks, Cloudera
Q33.What are the core Apache Hadoop components included in its bundle?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3Option 1 : HDFS, Map-reduce,YARN,Hadoop CommonsOption 2 : HDFS, NFS, Combiners, Utility PackageOption 3 : HDFS, Map-reduce, Hadoop coreOption 4 : MapR-FS, Map-reduce,YARN,Hadoop Commons
Q34.Apart from its basic components Apache Hadoop also provides:
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 4Option 1 : Apache HiveOption 2 : Apache PigOption 3 : Apache ZookeeperOption 4 : All the above
Q35.Rolling upgrades is not possible in which of the following?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2Option 1 : ClouderaOption 2 : HortonworksOption 3 : MapROption 4 : Possible in all of the above
Q36.In which of the following is HBase latency low with respect to the others:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : ClouderaOption 2 : HortonworksOption 3 : MapROption 4 : IBM BigInsights
Q37.MetaData Replication is possible in:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : ClouderaOption 2 : HortonworksOption 3 : MapROption 4 : Teradata
Q38.Disaster recovery management is not handled by:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : HortonworksOption 2 : MapROption 3 : ClouderaOption 4 : Amazon Web Services EMR
Q39.Mirroring concept is possible in Cloudera.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q40.Does MapR support only Streaming Data Ingestion?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q41.HCatalog is an open-source metadata framework developed by:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : ClouderaOption 2 : MapROption 3 : HortonworksOption 4 : Amazon EMR
Q42.BDA can be applied to gain knowledge of user behaviour and prevent customer churn in the Media and Telecommunications industry.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q43.What is the correct sequence of Big Data Analytics stages?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Big Data Production > Big Data Consumption > Big Data ManagementOption 2 : Big Data Management > Big Data Production > Big Data ConsumptionOption 3 : Big Data Production > Big Data Management > Big Data ConsumptionOption 4 : None of these
Q44.Big Data Consumption involves:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : MiningOption 2 : AnalyticOption 3 : Search and EnrichmentOption 4 : All of the above
Q45.Big Data Integration and Data Mining are the phases of Big Data Management.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q46.RDBMS, Social Media data, Sensor data are the possible input sources to a big data environment.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q47.Which of the following types of data is it not possible to store in a big data environment and then process/parse?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : XML/JSON type of dataOption 2 : RDBMSOption 3 : Semi-structured dataOption 4 : None of the above
Q48.The software framework for writing applications that process vast amounts of data in parallel is known as:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Map-reduceOption 2 : HiveOption 3 : ImpalaOption 4 : None of the above
Q49.In the proper flow of map-reduce, the reducer will always be executed after the mapper.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q50.Which of the following are the features of Map-reduce?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Automatic parallelization and distributionOption 2 : Fault-ToleranceOption 3 : Platform independentOption 4 : All of the above
Q51.Where does the intermediate output of the mapper get written?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Local disk of node where it is executed.Option 2 : HDFS of node where it is executed.Option 3 : On a remote server outside the cluster.Option 4 : Mapper output gets written to the local disk of Name node machine.
Q52.A reducer is required in a map-reduce job because:
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3Option 1 : It combines all the intermediate data collected from mappers.Option 2 : It reduces the amount of data by half of what is supplied to it.Option 3 : Both a and bOption 4 : None of the above
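Q48–Q52 cover the map-reduce model. As a study aid, here is a minimal pure-Python sketch (a simulation, not Hadoop's Java API) of how mapper output is grouped and then combined by a reducer; the names `mapper`, `reducer`, and `run_job` are illustrative only:

```python
from collections import defaultdict

def mapper(line):
    # Emit an intermediate (word, 1) pair for every word in the line.
    for word in line.split():
        yield word, 1

def reducer(key, values):
    # Combine all intermediate values collected for one key (Q52, option 1).
    return key, sum(values)

def run_job(lines):
    # "Shuffle": group intermediate pairs by key before reducing.
    groups = defaultdict(list)
    for line in lines:
        for key, value in mapper(line):
            groups[key].append(value)
    return dict(reducer(k, v) for k, v in groups.items())

counts = run_job(["big data", "big cluster"])
# counts == {"big": 2, "data": 1, "cluster": 1}
```

Note that the reducer combines, rather than halves, the data it receives, which is why Q52's option 2 is wrong.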
Q53.The output of every map is passed to which component?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3Option 1 : PartitionerOption 2 : CombinerOption 3 : MapperOption 4 : None of the above
Q54.Data Locality concept is used for:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Localizing dataOption 2 : Avoiding network traffic in hadoop systemOption 3 : Both A and BOption 4 : None of the above
Q55.The number of files in the output of a map-reduce job depends on:
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3Option 1 : No of reducer used for the processOption 2 : Size of the dataOption 3 : Both A and BOption 4 : None of the above
Q56.Input format of the map-reduce job is specified in which class?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Combiner classOption 2 : Reducer classOption 3 : Mapper classOption 4 : Any of the above
Q57.The intermediate keys, and their value lists, are passed to the Reducer in sorted key order.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q58.In which stage of the map-reduce job data is transferred between mapper and reducer?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TransferOption 2 : CombinerOption 3 : Distributed CacheOption 4 : Shuffle and Sort
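Q57 and Q58 both concern the shuffle-and-sort phase. A small Python sketch (again a simulation, not Hadoop code; `shuffle_and_sort` is an illustrative name) of how intermediate keys and their value lists reach the reducer in sorted key order:

```python
from itertools import groupby
from operator import itemgetter

def shuffle_and_sort(intermediate):
    # Sort intermediate (key, value) pairs by key, then group the values
    # per key, so the reducer sees its keys in sorted order (Q57).
    ordered = sorted(intermediate, key=itemgetter(0))
    return [(key, [v for _, v in group])
            for key, group in groupby(ordered, key=itemgetter(0))]

pairs = [("b", 1), ("a", 1), ("b", 1), ("a", 1), ("c", 1)]
grouped = shuffle_and_sort(pairs)
# grouped == [("a", [1, 1]), ("b", [1, 1]), ("c", [1])]
```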
Q59.A maximum of three reducers can run at any time in a MapReduce job.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1Option 1 : TrueOption 2 : False
Q60.The functionality of the JobTracker is to:
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3Option 1 : Coordinate the job runOption 2 : Sorting the outputOption 3 : Both A and BOption 4 : None of the above
Q61.The submit() method on Job creates an internal JobSubmitter instance and calls _____ on it.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : jobSubmitInternal()Option 2 : internalJobSubmit()Option 3 : submitJobInternal()Option 4 : None of these
Q62.Which method polls the job's progress, and after how many seconds?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : WaitForCompletion() and after each secondOption 2 : WaitForCompletion() after every 15 secondsOption 3 : Not possible to pollOption 4 : None of the above
Q63.Job Submitter tells the task tracker that the job is ready for execution.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1Option 1 : TrueOption 2 : False
Q64.Hadoop 1.0 runs 3 instances of job tracker for parallel execution on a hadoop cluster.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1Option 1 : TrueOption 2 : False
Q65.Map and Reduce tasks are created in job initialization phase.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 2Option 1 : TrueOption 2 : False
Q66.Heartbeats received after how many seconds help the job tracker decide on the health of a task tracker?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : After every 3 secondsOption 2 : After every 1 secondOption 3 : After every 60 secondsOption 4 : None of the above
Q67.The task tracker has a fixed number of slots assigned for map and reduce tasks.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1Option 1 : TrueOption 2 : False
Q68.To improve the performance of the map-reduce task, the jar that contains the map-reduce code is pushed to each slave node over HTTP.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q69.Map-reduce can take which type of format as input?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TextOption 2 : CSVOption 3 : ArbitraryOption 4 : None of these
Q70.Input files for map-reduce can be located on HDFS or the local file system.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q71.Is there a default InputFormat for input files in the map-reduce process?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : KeyValueInputFormatOption 2 : TextInputFormat.Option 3 : A and BOption 4 : None of these
Q72.An InputFormat is a class that provides the following functionality:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Selects the files or other objects that should be used for inputOption 2 : Defines the InputSplits that break a file into tasksOption 3 : Provides a factory for RecordReader objects that read the fileOption 4 : All of the above
Q73.An InputSplit describes a unit of work that comprises a ____ map task in a MapReduce program.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : OneOption 2 : TwoOption 3 : ThreeOption 4 : None of these
Q74.The FileInputFormat and its descendants break a file up into ____MB chunks.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : 128Option 2 : 64Option 3 : 32Option 4 : 256
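Q73 and Q74 together imply a simple piece of arithmetic: one InputSplit drives one map task, and FileInputFormat breaks a file into fixed-size chunks. A small sketch of that calculation, assuming a 64 MB chunk size (the helper name `num_splits` is illustrative):

```python
import math

def num_splits(file_size_bytes, split_size_mb=64):
    # One InputSplit, and hence one map task, per chunk (Q73);
    # the chunk size here is assumed to be 64 MB (Q74).
    split_size = split_size_mb * 1024 * 1024
    return math.ceil(file_size_bytes / split_size)

# A 200 MB file with 64 MB splits needs ceil(200/64) = 4 map tasks.
tasks = num_splits(200 * 1024 * 1024)
```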
Q75.What allows several map tasks to operate on a single file in parallel?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Processing of a file in chunksOption 2 : Configuration file propertiesOption 3 : Both A and BOption 4 : None of the above
Q76.The Record Reader is invoked ________ on the input until the entire InputSplit has been consumed.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 3Option 1 : OnceOption 2 : TwiceOption 3 : RepeatedlyOption 4 : None of these
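Q76's point, that the RecordReader is invoked repeatedly until the InputSplit is consumed, can be mimicked with a Python generator (a conceptual sketch, not Hadoop's RecordReader API; `record_reader` is an illustrative name):

```python
def record_reader(split_text):
    # Invoked repeatedly: yields one (byte offset, line) record at a time
    # until the entire split has been consumed.
    offset = 0
    for line in split_text.splitlines(keepends=True):
        yield offset, line.rstrip("\n")
        offset += len(line)

records = list(record_reader("first\nsecond\n"))
# records == [(0, "first"), (6, "second")]
```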
Q77.Which of the following is KeyValueTextInputFormat?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3Option 1 : Key is separated from the value by TabOption 2 : Data is specified in binary sequenceOption 3 : Both A and BOption 4 : None of the above
Q78.In the map-reduce programming model, mappers can communicate with each other:
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2Option 1 : TrueOption 2 : False
Q79.A user can define their own partitioner class.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 2Option 1 : TrueOption 2 : False
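Q53 and Q79 both touch on partitioning: map output goes to a partitioner, and that partitioner can be user-defined. A Python sketch of the idea (a simulation of the hash-partitioning concept, not Hadoop's Partitioner class; both function names are illustrative):

```python
def default_partition(key, num_reducers):
    # Default behaviour in the hash-partitioning style:
    # reducer index = hash(key) mod number of reducers.
    return hash(key) % num_reducers

def custom_partition(key, num_reducers):
    # A user-defined partitioner may route keys however it likes,
    # e.g. by first letter, as long as it returns 0..num_reducers-1.
    return (ord(key[0]) - ord("a")) % num_reducers

r1 = default_partition("word", 4)   # always in range 0..3
r2 = custom_partition("apple", 4)   # 'a' -> reducer 0
```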
Q80.The OutputFormat class is a factory for RecordWriter objects; these are used to write the individual records to the files as directed by the OutputFormat:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q81.Which of the following are part of the Hadoop ecosystem?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 3Option 1 : Talend,MapR,NFSOption 2 : Mysql,ShellOption 3 : Pig,Hive,HbaseOption 4 : None of the above
Q82.The default metastore location for Hive is:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : MysqlOption 2 : DerbyOption 3 : PostgreSQLOption 4 : None of the above
Q83.Extend the following class to write a User Defined Function in Hive.
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1Option 1 : HiveMapperOption 2 : EvalOption 3 : UDFOption 4 : None of the above
Q84.Which component of the hadoop ecosystem supports updates?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : ZookeeperOption 2 : HiveOption 3 : PigOption 4 : Hbase
Q85.Which hadoop component should be used if a join of datasets is required?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3Option 1 : HbaseOption 2 : HiveOption 3 : ZookeeperOption 4 : None of the above
Q86.Which hadoop component can be used for ETL?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : PigOption 2 : ZookeeperOption 3 : HbaseOption 4 : None of the above
Q87.Which hadoop component is best suited for pulling data from the web?
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 4Option 1 : HiveOption 2 : ZookeeperOption 3 : HbaseOption 4 : Flume
Q88.Which hadoop component can be used to transfer data from relational DB to HDFS?
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : ZookeeperOption 2 : PigOption 3 : SqoopOption 4 : None of the above
Q89.In an application, more than one hadoop component cannot be used on top of HDFS.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 2Option 1 : TrueOption 2 : False
Q90.Hbase supports join.
Difficulty Level : Moderate
Status : Correct
Marks Obtained : 1
Response : 1Option 1 : TrueOption 2 : False
Q91.Pig can work only with data present in HDFS.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q92.Which tool out of the following can be used for an OLTP application?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 2Option 1 : PentahoOption 2 : HiveOption 3 : HbaseOption 4 : None of the above
Q93.Which tool is best suited for real time writes?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 1Option 1 : PigOption 2 : HiveOption 3 : HbaseOption 4 : Cassandra
Q94.Which of the following hadoop components is called the ETL of hadoop?
Difficulty Level : Moderate
Status : Incorrect
Marks Obtained : 0
Response : 3Option 1 : PigOption 2 : HbaseOption 3 : TalendOption 4 : None of the above
Q95.Hadoop can completely replace traditional DBs.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q96.Zookeeper can also be used for data transfer.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : FalseOption 2 : True
Q97.Map-reduce cannot be tested on data/files present in local file system.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False
Q98.Hive was developed by:
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : Tom WhiteOption 2 : ClouderaOption 3 : Doug CuttingOption 4 : Facebook
Q99.MRv1 programs cannot be run on top of clusters configured for MRv2.
Difficulty Level : Moderate
Status : Unanswered
Marks Obtained : 0
Response :Option 1 : TrueOption 2 : False