A Framework for Hierarchical Deep Reinforcement Learning with Conceptual Embedding
31 Pages Posted: 27 Feb 2025
Abstract
Deep reinforcement learning (DRL) faces challenges when the combinatorial state-action space becomes excessively large. Hierarchical reinforcement learning is a promising approach to addressing these scalability challenges. A central problem in hierarchical DRL is how to construct the hierarchical architecture of an agent's decision-making process. To improve training efficiency, this paper proposes a framework with conceptual embedding that builds the hierarchical architecture and restricts the exploration space. In this framework, we decouple recognition and decision from the DRL policy and assign them to two functional modules: a recognition module that identifies the environment's hierarchical latent state spaces, and a decision module that plans hierarchical strategies of sequential actions over those latent state spaces. Through this approach, the DRL agent establishes a transparent inference pipeline, enabling the integration of prior knowledge into the deep model. High-level abstract concepts guide the policy learning process, making the agent's exploration more efficient than free trial-and-error learning. The complexity of the exploration space is defined and analyzed, and experimental results validate the effectiveness of the method.
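The two-module decomposition described above could be organized roughly as in the following minimal sketch, assuming a PyTorch-style implementation; the class names, network sizes, and concept representation are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of the recognition/decision decomposition (not the paper's code).
import torch
import torch.nn as nn


class RecognitionModule(nn.Module):
    """Maps raw observations to a latent state and a high-level abstract concept."""

    def __init__(self, obs_dim: int, latent_dim: int, num_concepts: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.concept_head = nn.Linear(latent_dim, num_concepts)

    def forward(self, obs: torch.Tensor):
        latent = self.encoder(obs)
        concept_logits = self.concept_head(latent)  # abstract concept over latent state space
        return latent, concept_logits


class DecisionModule(nn.Module):
    """Selects a low-level action conditioned on the recognized latent state and concept."""

    def __init__(self, latent_dim: int, num_concepts: int, action_dim: int):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(latent_dim + num_concepts, 128), nn.ReLU(),
                                    nn.Linear(128, action_dim))

    def forward(self, latent: torch.Tensor, concept_logits: torch.Tensor):
        concept = torch.softmax(concept_logits, dim=-1)  # soft assignment over concepts
        return self.policy(torch.cat([latent, concept], dim=-1))  # action logits


class HierarchicalAgent(nn.Module):
    """Transparent inference pipeline: recognition first, then concept-guided decision."""

    def __init__(self, obs_dim: int, latent_dim: int, num_concepts: int, action_dim: int):
        super().__init__()
        self.recognize = RecognitionModule(obs_dim, latent_dim, num_concepts)
        self.decide = DecisionModule(latent_dim, num_concepts, action_dim)

    def forward(self, obs: torch.Tensor):
        latent, concept_logits = self.recognize(obs)
        return self.decide(latent, concept_logits)
```

In such a setup, prior knowledge could be injected by constraining or supervising the concept head, so that the high-level concepts restrict which low-level actions the decision module explores.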
Keywords: hierarchical deep reinforcement learning, state space abstraction, conceptual embedding, prior knowledge constraint