Importance of the Record Reader in MapReduce

Hadoop runs on the Hadoop Distributed File System (HDFS), which means it is based on distributed computing. When a data set enters the Hadoop system, it is split into blocks. The default block size is 64 MB. The block size can also be a multiple of this default, such as 128 MB or 256 MB. In interviews, candidates are sometimes asked why the block size is not something smaller, such as 32 MB. The default block size is configured when the Hadoop system is set up.
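As a minimal sketch of that configuration step, the block size can be set cluster-wide in hdfs-site.xml or per client through the Hadoop Configuration API, as below. The class name BlockSizeExample and the 128 MB value are just illustrative assumptions, not values from this post.

```java
import org.apache.hadoop.conf.Configuration;

public class BlockSizeExample {
    public static void main(String[] args) {
        // Sketch: request a different HDFS block size for files created by this client.
        // dfs.blocksize is the current property name (dfs.block.size is the older, deprecated one).
        Configuration conf = new Configuration();
        conf.setLong("dfs.blocksize", 128L * 1024 * 1024); // 128 MB, example value only

        System.out.println("Requested block size: " + conf.getLong("dfs.blocksize", 0));
    }
}
```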

Hadoop processing is MapReduce processing, in which these blocks go through four basic operations (a short sketch of the map and reduce steps follows the list):
Splitting
Mapping
Sorting and shuffling
Reducing
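As a rough illustration of the mapping and reducing steps (splitting and the sort/shuffle are handled by the framework itself), here is a minimal word-count sketch using the org.apache.hadoop.mapreduce API. The class names WordCountSketch, TokenMapper, and SumReducer are placeholders for this example.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountSketch {

    // Mapping: each (offset, line) pair handed in by the record reader
    // is turned into (word, 1) pairs.
    public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            for (String token : line.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reducing: after the framework sorts and shuffles by key,
    // all counts for one word arrive together and are summed.
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : counts) {
                sum += count.get();
            }
            context.write(word, new IntWritable(sum));
        }
    }
}
```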
Within this flow, the record reader belongs to the mapping operation: it reads each input split and converts it into the key-value pairs that are fed to the mapper. What the record reader is and how it functions is explained in the video below.
This is a classroom training video for Hadoop from Durga Software Solutions.
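To make the idea concrete before watching, here is a hedged sketch of a custom record reader that simply delegates to Hadoop's built-in LineRecordReader, so each record handed to the mapper is a (byte offset, line of text) pair. The class name CountingLineRecordReader is made up for this example; a real job would return it from a custom InputFormat's createRecordReader() method.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

// Sketch of a record reader: it walks one input split and turns the raw bytes
// into (key, value) records for the mapper. Here it delegates to Hadoop's
// LineRecordReader, which emits (byte offset, line of text) pairs.
public class CountingLineRecordReader extends RecordReader<LongWritable, Text> {

    private final LineRecordReader delegate = new LineRecordReader();
    private long recordCount = 0;

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        // Called once per split, before any records are read.
        delegate.initialize(split, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        // Advances to the next record; returns false at the end of the split.
        boolean hasNext = delegate.nextKeyValue();
        if (hasNext) {
            recordCount++;
        }
        return hasNext;
    }

    @Override
    public LongWritable getCurrentKey() throws IOException, InterruptedException {
        return delegate.getCurrentKey(); // byte offset of the line in the file
    }

    @Override
    public Text getCurrentValue() throws IOException, InterruptedException {
        return delegate.getCurrentValue(); // the line itself
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return delegate.getProgress(); // fraction of the split consumed so far
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}
```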
