Predictive Model for Data Availability in Big Data Processing
Proceedings of 3rd International Conference on Internet of Things and Connected Technologies (ICIoTCT), 2018 held at Malaviya National Institute of Technology, Jaipur (India) on March 26-27, 2018
6 Pages Posted: 8 May 2018
Date Written: April 29, 2018
In the world of cloud computing, huge data sets are continuously gathered, stored, and integrated from everything that surrounds us. Extracting valuable and relevant information from such enormous data requires strong analysis capabilities and skills. The Hadoop Distributed File System (HDFS) has a master-slave architecture. It comprises a master NameNode, a single server that manages the file system namespace and controls access to files by clients. The NameNode keeps a reference to every block and file in the file system in memory; it is the master of all the DataNodes and holds critical metadata. This single-NameNode architecture represents a single point of failure (SPOF) and is a major limiting factor for the scalability of an HDFS cluster. In traditional HDFS, among all of the components, the single NameNode is therefore the most vulnerable part of the entire framework.
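To make the scalability limit concrete, the following is a minimal back-of-the-envelope sketch (not taken from the paper) of why keeping every file and block reference in one NameNode's memory caps cluster size. It assumes the commonly cited rule of thumb of roughly 150 bytes of NameNode heap per namespace object (file or block); the constant and function names are illustrative assumptions, not part of the paper.

```python
# Illustrative sketch: the NameNode holds the whole namespace in RAM,
# so total heap grows linearly with the number of files and blocks.
BYTES_PER_OBJECT = 150  # rule-of-thumb estimate per namespace object (assumption)

def namespace_objects(num_files, blocks_per_file):
    """Total in-memory objects: one per file plus one per block."""
    return num_files * (1 + blocks_per_file)

def heap_needed_gb(num_files, blocks_per_file):
    """Approximate NameNode heap required, in gigabytes (decimal GB)."""
    return namespace_objects(num_files, blocks_per_file) * BYTES_PER_OBJECT / 1e9

if __name__ == "__main__":
    # 100 million single-block files already need about 30 GB of heap,
    # all on one machine -- the scalability ceiling the abstract refers to.
    print(round(heap_needed_gb(100_000_000, 1), 1))  # → 30.0
```

Under this estimate the namespace of the whole cluster must fit in one server's heap, which is exactly why the single NameNode is both a scalability bottleneck and a single point of failure.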