
Big Data & Hadoop Latest Interview Questions with Answers by Garuda Trainings




1. What is Big Data?


Big data is data that exceeds the processing capacity of traditional database systems: the data is too big, moves too fast, or doesn't fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it.


2. What is NoSQL?


NoSQL is a whole new way of thinking about a database. NoSQL is not a relational database; the reality is that a relational database model may not be the best solution for all situations. The easiest way to think of NoSQL is as a database which does not adhere to the traditional relational database management system (RDBMS) structure. Sometimes you will also see it referred to as "not only SQL".


3. We already have SQL, so why NoSQL?


NoSQL offers high performance with high availability, a rich query language, and easy scalability. NoSQL is gaining momentum and is supported by Hadoop, MongoDB, and others. The NoSQL Database site is a good reference for someone looking for more information.


4. What is Hadoop, and where did it come from?


By Mike Olson: The underlying technology was invented by Google back in their earlier days so they could usefully index all the rich textual and structural information they were collecting, and then present meaningful and actionable results to users. There was nothing on the market that would let them do that, so they built their own platform. Google's innovations were incorporated into Nutch, an open source project, and Hadoop was later spun off from that. Yahoo has played a key role in developing Hadoop for enterprise applications.


5. What problems can Hadoop solve?


By Mike Olson: The Hadoop platform was designed to solve problems where you have a lot of data, perhaps a mixture of complex and structured data, and it doesn't fit nicely into tables. It's for situations where you want to run analytics that are deep and computationally extensive, like clustering and targeting. That's exactly what Google was doing when it was indexing the web and examining user behavior to improve performance algorithms.


6. What is the difference between Hadoop and Apache Hadoop?


There is no difference: Hadoop, formally called Apache Hadoop, is an Apache Software Foundation project.


7. Why would NoSQL be better than a SQL database, and how much better is it?


It would be better when your site needs to scale so massively that the best RDBMS, running on the best hardware you can afford and optimized as much as possible, simply can't keep up with the load. How much better it is depends on the specific use case (lots of update activity combined with lots of joins is very hard on "traditional" RDBMSs); it could well be a factor of 1,000 in extreme cases.


8. Name the modes in which Hadoop can run.


Hadoop can be run in one of three modes, distinguished only by configuration (a sketch follows the list):
i. Standalone (or local) mode
ii. Pseudo-distributed mode
iii. Fully distributed mode
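
Which mode you are in is purely a matter of configuration, chiefly the fs.defaultFS and mapreduce.framework.name properties (the standard Hadoop 2.x key names). A minimal, illustrative Java sketch, assuming the Hadoop client libraries are on the classpath:

    import org.apache.hadoop.conf.Configuration;

    public class ModeCheck {
        public static void main(String[] args) {
            // Loads core-default.xml plus any core-site.xml found on the classpath.
            Configuration conf = new Configuration();

            // file:/// indicates standalone mode; hdfs://localhost:... is the usual
            // pseudo-distributed setup; a remote hdfs:// URI means fully distributed.
            String fsUri = conf.get("fs.defaultFS", "file:///");

            // "local" runs MapReduce inside the client JVM; "yarn" submits to a cluster.
            String framework = conf.get("mapreduce.framework.name", "local");

            System.out.println("fs.defaultFS             = " + fsUri);
            System.out.println("mapreduce.framework.name = " + framework);
        }
    }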


9. What do you understand by standalone (or local) mode?


There are no daemons running, and everything runs in a single JVM. Standalone mode is suitable for running MapReduce programs during development, since it is easy to test and debug them.
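
Because every task runs inside one JVM, a job can be executed (and stepped through with a debugger) without starting any daemons. A minimal sketch, assuming Hadoop 2.x client libraries; the input and output paths are placeholders, and the built-in Mapper and Reducer classes simply pass records through unchanged:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class LocalModeDemo {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "file:///");           // local filesystem, no HDFS daemons
            conf.set("mapreduce.framework.name", "local");  // run all tasks inside this JVM

            Job job = Job.getInstance(conf, "local-mode demo");
            job.setJarByClass(LocalModeDemo.class);
            job.setMapperClass(Mapper.class);           // built-in identity mapper
            job.setReducerClass(Reducer.class);         // built-in identity reducer
            job.setOutputKeyClass(LongWritable.class);  // keys from the default TextInputFormat
            job.setOutputValueClass(Text.class);        // values are the input lines
            FileInputFormat.addInputPath(job, new Path("input"));    // placeholder local dir
            FileOutputFormat.setOutputPath(job, new Path("output")); // must not exist yet
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }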


10. What is the idea behind HDFS? Where does HDFS fail?


HDFS is built around the idea that the most efficient approach to storing data for processing is to optimize it for a write-once, read-many access pattern.

It cannot support a large number of small files: the filesystem metadata grows with every new file, so HDFS cannot scale to billions of files. This metadata is loaded into the namenode's memory, and since memory is limited, so is the number of files supported.
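
The small-files limit can be made concrete with a back-of-envelope calculation. A commonly cited rule of thumb (for example, in Hadoop: The Definitive Guide) is roughly 150 bytes of namenode heap per file, directory, or block object; the figures below are illustrative, not exact:

    public class NamenodeHeapEstimate {
        public static void main(String[] args) {
            long bytesPerObject = 150L;   // rule-of-thumb heap cost per file, directory, or block
            long files = 1_000_000_000L;  // one billion small files
            long blocksPerFile = 1L;      // a small file occupies a single block
            long objects = files * (1L + blocksPerFile);  // one file object + one block object each
            double heapGb = objects * bytesPerObject / (1024.0 * 1024 * 1024);
            System.out.printf("~%.0f GB of namenode heap just for metadata%n", heapGb);
        }
    }

At a billion single-block files this works out to roughly 280 GB of heap for metadata alone, which is why HDFS favors fewer, larger files.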


11. What are the ways of backing up the filesystem metadata?


There are two ways of backing up the filesystem metadata, which maps filenames to the blocks holding their data on the various datanodes:

Writing the filesystem metadata persistently onto a local disk as well as onto a remote NFS mount.

Running a secondary namenode.
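
The first option is normally expressed in hdfs-site.xml as a comma-separated list of directories. A sketch setting the standard property dfs.namenode.name.dir (dfs.name.dir on older releases) programmatically, purely for illustration; the directory paths are placeholders:

    import org.apache.hadoop.conf.Configuration;

    public class MetadataDirs {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // The namenode writes its fsimage and edit log synchronously to every
            // directory listed here; pairing a local disk with an NFS mount is the
            // classic arrangement. The paths are placeholders.
            conf.set("dfs.namenode.name.dir", "/data/1/dfs/nn,/mnt/nfs/dfs/nn");
            System.out.println(conf.get("dfs.namenode.name.dir"));
        }
    }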


12. What are the functions of the JobTracker in Hadoop?


Once you submit your code to your cluster, the JobTracker determines the execution plan by deciding which files to process, assigns nodes to different tasks, and monitors all tasks as they are running.

If a task fails, the JobTracker will automatically relaunch the task, possibly on a different node, up to a predefined limit of retries.

There is only one JobTracker daemon per Hadoop cluster. It is typically run on a server acting as the master node of the cluster.
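
The retry limit is configurable per job. A sketch against the classic org.apache.hadoop.mapred API of the JobTracker era (the cluster address comes from the mapred.job.tracker property in the client configuration); the paths are placeholders, and the default identity mapper and reducer are used:

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.FileInputFormat;
    import org.apache.hadoop.mapred.FileOutputFormat;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;

    public class SubmitToJobTracker {
        public static void main(String[] args) throws Exception {
            JobConf conf = new JobConf(SubmitToJobTracker.class);
            conf.setJobName("jobtracker-demo");
            conf.setMaxMapAttempts(4);     // JobTracker relaunches a failed map up to 4 times
            conf.setMaxReduceAttempts(4);  // the same predefined limit for reduces
            FileInputFormat.setInputPaths(conf, new Path("input"));   // placeholder path
            FileOutputFormat.setOutputPath(conf, new Path("output")); // placeholder path
            // Submits to the configured JobTracker and blocks, printing progress,
            // until the job finishes.
            JobClient.runJob(conf);
        }
    }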


13. What is MapReduce in Hadoop?


Hadoop MapReduce (Hadoop Map/Reduce) is a software framework for distributed processing of large data sets on compute clusters of commodity hardware. It is a sub-project of the Apache Hadoop project. The framework takes care of scheduling tasks, monitoring them, and re-executing any failed tasks.
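
The canonical example is WordCount, which counts how often each word occurs in the input. The sketch below is close to the example in the Apache Hadoop MapReduce tutorial; input and output paths are taken from the command line:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        // Map phase: emit (word, 1) for every token in every input line.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private final static IntWritable one = new IntWritable(1);
            private final Text word = new Text();
            public void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, one);
                }
            }
        }

        // Reduce phase: sum the counts for each word.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();
            public void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) sum += val.get();
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);  // pre-aggregate on the map side
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }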


14. What are the benefits of block transfer?


A file can be larger than any single disk in the network. Nothing requires the blocks of a file to be stored on the same disk, so they can take advantage of any of the disks in the cluster. Making the unit of abstraction a block rather than a file also simplifies the storage subsystem.

Blocks provide fault tolerance and availability. To insure against corrupted blocks and disk and machine failure, each block is replicated to a small number of physically separate machines (typically three). If a block becomes unavailable, a copy can be read from another location in a way that is transparent to the client.
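
This placement is visible through the HDFS client API. A sketch that prints where each block of a file lives, assuming an HDFS path is supplied on the command line; FileSystem.getFileBlockLocations is the standard call:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowBlocks {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);  // the filesystem named by fs.defaultFS
            FileStatus status = fs.getFileStatus(new Path(args[0]));
            // One BlockLocation per block, with the hosts holding its replicas.
            BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
            for (BlockLocation b : blocks) {
                System.out.printf("offset %d, length %d, replicas on %s%n",
                        b.getOffset(), b.getLength(), String.join(", ", b.getHosts()));
            }
        }
    }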


15. What is the meaning of speculative execution in Hadoop? Why is it important?


Speculative execution is a way of coping with variation in individual machine performance. In large clusters, where hundreds or thousands of machines are involved, some machines may not perform as fast as others, and a whole job can be delayed by a single machine that is not performing well. To avoid this, speculative execution in Hadoop can run multiple copies of the same map or reduce task on different slave nodes. The result from the first copy to finish is used.
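
Speculative execution is on by default and can be toggled separately for maps and reduces. A sketch using the Hadoop 2.x property names mapreduce.map.speculative and mapreduce.reduce.speculative (Hadoop 1.x used mapred.map.tasks.speculative.execution and its reduce counterpart):

    import org.apache.hadoop.conf.Configuration;

    public class SpeculationToggle {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // A straggling map task may get a second, speculative attempt.
            conf.setBoolean("mapreduce.map.speculative", true);
            // Duplicate reduce attempts re-fetch all map output, so they cost more.
            conf.setBoolean("mapreduce.reduce.speculative", false);
            System.out.println("map speculation:    "
                    + conf.getBoolean("mapreduce.map.speculative", true));
            System.out.println("reduce speculation: "
                    + conf.getBoolean("mapreduce.reduce.speculative", true));
        }
    }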


Contact us for more:

www.garudatrainings.com
Mail: [email protected], [email protected]
Phone: +1(508)841-6144