A Framework for Distributed Machine Learning: Overcoming the Challenges of Network Latency and Data Consistency
28 Pages, Posted: 21 Feb 2023
Date Written: February 15, 2023
Abstract
Distributed machine learning is increasingly common, driven by the abundance of training data and the growing complexity of machine learning models. It nevertheless faces significant challenges, most notably network latency and data inconsistency across nodes. In this thesis, we propose a framework for distributed machine learning that addresses these challenges through a novel communication protocol, a data consistency mechanism, and a data partitioning technique. The framework reduces network latency, ensures data consistency across workers, and its partitioning technique lowers communication overhead.
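To make the partitioning-and-communication idea concrete, the following is a minimal sketch, not the thesis's actual framework, of one common way to trade communication rounds against staleness: data is partitioned across workers, each worker trains locally, and parameters are synchronized only every few steps. All names, parameters, and the local-averaging strategy here are illustrative assumptions, since the abstract does not specify the framework's mechanisms.

# Minimal sketch (assumed, not the paper's method): partition data across
# workers and synchronize parameters only every `sync_every` local steps,
# reducing the number of communication rounds at the cost of some staleness.
import numpy as np

def partition(X, y, n_workers):
    """Split the dataset into contiguous shards, one per worker."""
    idx = np.array_split(np.arange(len(X)), n_workers)
    return [(X[i], y[i]) for i in idx]

def local_sgd(shards, dim, steps=100, sync_every=10, lr=0.1):
    """Each worker runs SGD on its shard; parameters are averaged
    (one simulated communication round) only every `sync_every` steps."""
    workers = [np.zeros(dim) for _ in shards]
    comm_rounds = 0
    for t in range(steps):
        for w, (Xs, ys) in zip(workers, shards):
            grad = Xs.T @ (Xs @ w - ys) / len(ys)   # least-squares gradient
            w -= lr * grad                          # local update, no communication
        if (t + 1) % sync_every == 0:               # periodic synchronization
            avg = np.mean(workers, axis=0)
            workers = [avg.copy() for _ in workers]
            comm_rounds += 1
    return np.mean(workers, axis=0), comm_rounds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    true_w = rng.normal(size=5)
    y = X @ true_w + 0.01 * rng.normal(size=1000)
    w, rounds = local_sgd(partition(X, y, n_workers=4), dim=5)
    print("communication rounds:", rounds, "| error:", np.linalg.norm(w - true_w))

With sync_every = 10 and 100 steps, the workers exchange parameters only 10 times instead of once per step, which is the general kind of communication-overhead reduction the abstract attributes to its partitioning technique.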
Keywords: distributed systems, machine learning, network latency, data consistency