What is HDFS? How is it different from traditional file systems?

Answer

HDFS, the Hadoop Distributed File System, is responsible for storing very large data sets across the nodes of a cluster. It is a distributed file system designed to run on commodity hardware.

It has many similarities with existing distributed file systems, but the differences are significant:
◦HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware.
◦HDFS provides high throughput access to application data and is suitable for applications that have large data sets.
◦HDFS is designed to support very large files. Applications suited to HDFS deal with large data sets: they write their data once, read it one or more times, and require those reads to be served at streaming speeds. HDFS therefore enforces write-once-read-many semantics on files, as sketched in the example below.
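The write-once-read-many model shows up directly in the Hadoop FileSystem Java API: create() opens a new file for a single write (and can refuse to overwrite an existing one), and open() can then stream-read that file any number of times. Below is a minimal sketch, assuming a single-node cluster reachable at hdfs://localhost:9000; the host, port, and file path are illustrative only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteOnceReadMany {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS points at the NameNode; adjust for your cluster.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/tmp/example.txt");  // illustrative path

        // Write once: with overwrite = false, create() throws if the
        // file already exists, matching HDFS's write-once model.
        try (FSDataOutputStream out = fs.create(file, false)) {
            out.writeUTF("written once");
        }

        // Read many: the same file can be opened for streaming reads
        // any number of times.
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }

        fs.close();
    }
}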
