How are HDFS blocks replicated?

Answer

HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size.
The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time.
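Below is a minimal Java sketch of specifying a per-file replication factor through the standard org.apache.hadoop.fs.FileSystem client API. The path /user/demo/sample.txt and the replication values 2 and 3 are placeholders for illustration, not anything prescribed by HDFS.

// Sketch: set a replication factor at file creation time, then change it later.
// The path and replication values below are illustrative placeholders.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // picks up core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/user/demo/sample.txt"); // hypothetical path

        // Specify the replication factor (here 2) when the file is created.
        short replicationAtCreate = 2;
        try (FSDataOutputStream out = fs.create(file, true, 4096,
                replicationAtCreate, 128 * 1024 * 1024L)) { // 128 MB block size
            out.writeUTF("hello hdfs");
        }

        // Change the replication factor afterwards; the NameNode schedules
        // the extra copies (or removals) in the background.
        fs.setReplication(file, (short) 3);

        System.out.println("Replication now: "
                + fs.getFileStatus(file).getReplication());
        fs.close();
    }
}

The same change can also be made from the shell with hdfs dfs -setrep.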
The NameNode makes all decisions regarding replication of blocks. HDFS uses a rack-aware replica placement policy. With the default replication factor of 3, one replica is placed on the writer's node (or a random DataNode if the writer is outside the cluster), and the other two replicas are placed on two different DataNodes in another rack. As a result, two copies share one rack and the third copy sits on a different rack, which balances write cost against protection from a whole-rack failure.
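To see where the NameNode actually placed the replicas, a small sketch like the one below can query block locations through FileSystem.getFileBlockLocations(); the path is again a hypothetical placeholder. The hdfs fsck <path> -files -blocks -locations command reports similar information from the shell.

// Sketch: ask the NameNode where each block's replicas ended up,
// which makes the rack-aware placement visible. Path is illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockPlacementDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(new Path("/user/demo/sample.txt")); // hypothetical path

        // One BlockLocation per block of the file, covering its full length.
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());

        for (BlockLocation block : blocks) {
            System.out.println("Block at offset " + block.getOffset());
            // getTopologyPaths() returns entries such as /rack1/host:port,
            // so the rack of every replica can be read directly.
            for (String topologyPath : block.getTopologyPaths()) {
                System.out.println("  replica: " + topologyPath);
            }
        }
        fs.close();
    }
}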
