Importance of the RecordReader in MapReduce


Hadoop runs on the Hadoop Distributed File System (HDFS), which means it is built on distributed computing. When a data set enters the Hadoop system it is split into blocks. The default block size is 64 MB, and it can be raised in multiples of that size, for example to 128 MB or 256 MB. A common interview question is why the block size is 64 MB rather than something smaller such as 32 MB. The block size is configured while setting up the Hadoop system.
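As a rough sketch (not taken from the video), the block size can be changed either in hdfs-site.xml or through the Hadoop Configuration API. The class name BlockSizeExample and the file path used here are only illustrative, and the property name depends on the Hadoop version (dfs.block.size in older releases, dfs.blocksize in newer ones), so check your distribution's documentation.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Request 128 MB blocks instead of the 64 MB default.
            // Property name is version-dependent: dfs.blocksize (newer) vs dfs.block.size (older).
            conf.setLong("dfs.blocksize", 128L * 1024 * 1024);

            FileSystem fs = FileSystem.get(conf);
            // Files written through this FileSystem handle use the requested block size.
            fs.create(new Path("/tmp/example.txt")).close();
            fs.close();
        }
    }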

Hadoop processing is MapReduce processing, in which these blocks go through four basic operations (a minimal job sketch follows this list):
Splitting
Mapping
Shuffling and Sorting
then Reducing
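To make the flow concrete, here is a minimal word-count job sketch; the class and variable names are only illustrative and are not taken from the video. The map and reduce phases are written by the programmer, while splitting, shuffling, and sorting are handled by the framework itself.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: the RecordReader hands each line to map() as an (offset, line) pair.
        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        // Reduce phase: values for the same word arrive grouped after shuffle and sort.
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Such a job is typically packaged into a jar and submitted with the hadoop jar command, with the input and output HDFS paths passed as arguments.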
Among these, the RecordReader belongs to the mapping operation. What the RecordReader is and how it functions is explained in the video below.
This is a classroom training video for Hadoop from Durga Software Solutions.
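For orientation before watching, the sketch below shows the general RecordReader contract in the newer MapReduce API. It is only an illustration under assumptions: the UpperCaseRecordReader name and its behaviour are hypothetical and not part of the video. A RecordReader turns the raw bytes of an input split into the key-value pairs that the Mapper's map() method receives; the default LineRecordReader, for example, emits the byte offset of each line as the key and the line text as the value.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

    // Hypothetical RecordReader that delegates to the built-in LineRecordReader
    // and hands each line to the Mapper as an upper-cased Text value.
    public class UpperCaseRecordReader extends RecordReader<LongWritable, Text> {
        private final LineRecordReader delegate = new LineRecordReader();
        private final Text value = new Text();

        @Override
        public void initialize(InputSplit split, TaskAttemptContext context)
                throws IOException, InterruptedException {
            delegate.initialize(split, context);  // position the reader inside its split
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            if (!delegate.nextKeyValue()) {
                return false;  // no more records in this split
            }
            value.set(delegate.getCurrentValue().toString().toUpperCase());
            return true;
        }

        @Override
        public LongWritable getCurrentKey() throws IOException, InterruptedException {
            return delegate.getCurrentKey();  // byte offset of the line in the file
        }

        @Override
        public Text getCurrentValue() {
            return value;
        }

        @Override
        public float getProgress() throws IOException, InterruptedException {
            return delegate.getProgress();
        }

        @Override
        public void close() throws IOException {
            delegate.close();
        }
    }

In a real job this reader would be returned from a custom InputFormat's createRecordReader() method; with the stock TextInputFormat, the built-in LineRecordReader plays exactly this role.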
