DHHNN: A Dynamic Hypergraph Hyperbolic Neural Network Based on Variational Autoencoder for Multimodal Data Integration and Node Classification
23 Pages · Posted: 12 Nov 2024
Abstract
In recent years, the integration of hyperbolic geometry with Graph Neural Networks (GNNs) has garnered significant attention due to its effectiveness in capturing complex hierarchical structures, particularly in real-world graphs and scale-free networks. Although hyperbolic neural networks perform strongly across various domains, most existing models rely on static graph structures, limiting their adaptability to dynamic data. Previous studies have focused primarily on improving the modeling capacity of hyperbolic space for latent representations during training, often neglecting the preservation of high-order intrinsic features before training. To address this, we propose a novel Dynamic Hypergraph Hyperbolic Neural Network (DHHNN) based on a Variational Autoencoder for multimodal data integration. The model combines the advantages of hyperbolic geometry, dynamic hypergraphs, and the self-attention mechanism to enhance multimodal representation learning. DHHNN introduces a dynamic hypergraph framework that continuously adjusts the relationships between hypernodes and hyperedges during training, effectively capturing higher-order dependencies within complex networks. The self-attention mechanism further regulates the strength of the dependencies between hypernodes and hyperedges, improving the model's ability to capture long-range dependencies and complex feature interactions. Leveraging the negative curvature of hyperbolic space, DHHNN represents complex scale-free networks compactly and accurately. Experimental results on seven benchmark datasets, together with latent-space visualizations, demonstrate that DHHNN yields compact and effective data representations, outperforming existing models and achieving state-of-the-art performance on node classification tasks.
Keywords: Hyperbolic Hypergraph, Dynamic Hypergraph, Variational Autoencoder
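Two ingredients named in the abstract, hyperbolic embedding and an attention-driven dynamic hypergraph, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the Poincaré-ball exponential map at the origin, and the scaled dot-product form of the incidence weights are all assumptions made for exposition.

```python
import numpy as np

def expmap0(v, c=1.0):
    """Exponential map at the origin of the Poincare ball with curvature -c.
    Maps Euclidean (tangent-space) features into hyperbolic space, a
    standard step in hyperbolic neural networks."""
    sqrt_c = np.sqrt(c)
    norm = np.maximum(np.linalg.norm(v, axis=-1, keepdims=True), 1e-15)
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

def soft_incidence(node_feats, edge_feats, tau=1.0):
    """Hypothetical soft node-hyperedge incidence via scaled dot-product
    attention: entry H[i, j] weights node i's membership in hyperedge j.
    Recomputing H from the current features at each training step is what
    makes the hypergraph 'dynamic'."""
    scores = node_feats @ edge_feats.T / (np.sqrt(node_feats.shape[-1]) * tau)
    exp_s = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    return exp_s / exp_s.sum(axis=1, keepdims=True)

# Toy example: 4 hypernodes, 2 hyperedges, 3-d features.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))   # node features
E = rng.normal(size=(2, 3))   # hyperedge (prototype) features
H = soft_incidence(X, E)      # (4, 2) soft incidence matrix, rows sum to 1
Z = expmap0(X, c=1.0)         # node embeddings on the Poincare ball
assert np.all(np.linalg.norm(Z, axis=1) < 1.0)  # all points inside the unit ball
```

In this sketch the hypergraph structure is a soft incidence matrix rather than a fixed binary one, so node-hyperedge relationships can be re-weighted by attention as features evolve, mirroring the "continuously adjusts the relationships between hypernodes and hyperedges" behavior the abstract describes.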