From Neural Signals to Images: Generative Reconstruction of Visual Stimuli from Electroencephalography Signals
19 Pages
Posted: 28 Jan 2025
Abstract
Understanding a patient's visual perceptions can be critical for diagnosing and managing certain conditions, such as neurological disorders or vision impairments. Current methods of visual perception assessment rely on subjective patient reporting or indirect measurements, which may lack accuracy and objectivity. There is a need for a non-invasive, objective, automated tool that can provide real-time insight into patients' visual experiences, enabling healthcare professionals to make more informed decisions about diagnosis, treatment, and rehabilitation.

This work aims to reconstruct high-resolution images from electroencephalography (EEG) data corresponding to the visual stimuli observed by a person during recording. It bridges a gap in conventional EEG analysis methods, which struggle with high data dimensionality and the difficulty of extracting meaningful visualizations. The proposed solution uses EEG-based technology to record and analyze the patients' brain signals associated with visual perception. By combining autoencoders (AEs) and conditional generative adversarial networks (cGANs), the framework enables precise and semantically meaningful image reconstruction. The AE compresses raw EEG signals into a compact feature space, preserving critical information while reducing noise. These features, paired with labels derived directly from the primary data, serve as structured input to the cGAN's generator. Unlike traditional GAN architectures, which rely on random noise, this model conditions the generator on EEG-derived features, allowing it to produce images that accurately represent specific brainwave patterns and neural states.

We demonstrate the effectiveness of this approach, achieving high-quality image reconstructions with a Peak Signal-to-Noise Ratio (PSNR) of 31.2177 dB and moderate structural similarity, with a Structural Similarity Index (SSIM) of 0.7380 across all classes, on a primary dataset of 33 participants acquired as part of this study. This novel approach demonstrates the potential to decode complex neural activity into visual representations, paving the way for advances in brain-computer interfaces and neuroimaging.
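To make the described pipeline concrete, the following is a minimal PyTorch sketch of the two components named in the abstract: the compressing (encoder) half of the AE, and a cGAN generator conditioned on the EEG features and class labels rather than random noise. All dimensions (EEG channel count, window length, latent size, number of classes, image resolution) and layer choices are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions -- the abstract does not specify the EEG channel
# count, window length, latent size, class count, or image resolution.
N_CHANNELS, N_SAMPLES = 14, 256   # EEG channels x time points (assumed)
LATENT_DIM = 128                  # AE bottleneck size (assumed)
N_CLASSES = 10                    # number of stimulus classes (assumed)
IMG_SIZE = 64                     # reconstructed image resolution (assumed)

class EEGEncoder(nn.Module):
    """Encoder half of the autoencoder: compresses a raw EEG window
    into a compact feature vector, as the abstract describes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(N_CHANNELS * N_SAMPLES, 512), nn.ReLU(),
            nn.Linear(512, LATENT_DIM),
        )

    def forward(self, eeg):
        return self.net(eeg)

class ConditionalGenerator(nn.Module):
    """cGAN generator conditioned on EEG features plus a class label,
    in place of the random-noise input of a traditional GAN."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(N_CLASSES, 32)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 32, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * IMG_SIZE * IMG_SIZE), nn.Tanh(),
        )

    def forward(self, eeg_features, labels):
        # Concatenate the EEG feature vector with the label embedding
        # to form the structured conditional input.
        cond = torch.cat([eeg_features, self.label_emb(labels)], dim=1)
        return self.net(cond).view(-1, 3, IMG_SIZE, IMG_SIZE)

# Forward pass on a dummy batch to show the data flow.
encoder, generator = EEGEncoder(), ConditionalGenerator()
eeg = torch.randn(8, N_CHANNELS, N_SAMPLES)    # batch of EEG windows
labels = torch.randint(0, N_CLASSES, (8,))     # stimulus class labels
images = generator(encoder(eeg), labels)       # -> (8, 3, 64, 64)
```

In an actual cGAN training loop, a discriminator would score these generated images against the true stimulus images under the same conditioning; the sketch above covers only the generation path the abstract describes.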
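The reported PSNR and SSIM figures correspond to standard full-reference image-quality metrics, which can be computed for any reconstructed/original image pair. The snippet below is a sketch of that evaluation step using scikit-image (version 0.19 or later for the channel_axis argument) on synthetic data; the image pair is fabricated purely for illustration, since the study's dataset is not reproduced here.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(original, reconstructed):
    """Compute PSNR (in dB) and SSIM for one image pair.
    Both arrays are H x W x 3 floats scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(original, reconstructed, data_range=1.0)
    ssim = structural_similarity(original, reconstructed,
                                 data_range=1.0, channel_axis=-1)
    return psnr, ssim

# Illustrative example: score a noisy copy of a random image.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
noisy = np.clip(img + rng.normal(0.0, 0.02, img.shape), 0.0, 1.0)
print(evaluate_pair(img, noisy))
```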
Keywords: image reconstruction, autoencoders, conditional generative adversarial networks (cGAN), semantic alignment, brain-computer interfaces, neuroimaging