From Neural Signals to Images: Generative Reconstruction of Visual Stimuli from Electroencephalography Signals

19 Pages Posted: 28 Jan 2025

Rahee Walambe

Symbiosis International (Deemed University)

Akshay Kushwaha

affiliation not provided to SSRN

Divya Shah

affiliation not provided to SSRN

Mranmay Shetty

affiliation not provided to SSRN

Vaibhav Tasgaonkar

University of Pisa

Ketan Kotecha

Symbiosis International (Deemed University)

Sultan Alfarhood

King Saud University

Abstract

Understanding a patient's visual perceptions can be critical for diagnosing and managing certain conditions, such as neurological disorders or vision impairments. Current methods of visual perception assessment rely on subjective patient reporting or indirect measurements, which may lack accuracy and objectivity. There is a need for a non-invasive, objective, automated tool that can provide real-time insight into patients' visual experiences, enabling healthcare professionals to make more informed decisions regarding diagnosis, treatment, and rehabilitation.

This work aims to reconstruct high-resolution images from Electroencephalography (EEG) data that correspond to the visual stimuli observed by a person during recording. It bridges a gap in conventional EEG analysis methods, which struggle with the high dimensionality of the data and the difficulty of extracting meaningful visualizations. The proposed solution uses EEG-based technology to record and analyze the patients' brain signals associated with visual perception. By combining Autoencoders (AEs) and Conditional Generative Adversarial Networks (cGANs), the framework enables precise and semantically meaningful image reconstruction. The AE compresses raw EEG signals into a compact feature space, preserving critical information while reducing noise. These features, paired with labels derived directly from the primary data, serve as structured input to the cGAN's Generator. Unlike traditional GAN architectures, which rely on random noise, this model conditions the Generator on EEG-based features, allowing it to produce images that accurately represent specific brainwave patterns and neural states.

We demonstrate the effectiveness of this approach, achieving high-quality image reconstructions with Peak Signal-to-Noise Ratio (PSNR) values of 31.2177 dB and moderate structural similarity with Structural Similarity Index (SSIM) values of 0.7380 across all classes on a primary dataset of 33 participants acquired as part of this study. This novel approach demonstrates the potential to decode complex neural activity into visual representations, paving the way for advances in brain-computer interfaces and neuroimaging.
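To make the described pipeline concrete, the sketch below shows one way the AE-plus-cGAN arrangement could look in PyTorch. It is a minimal illustration under assumed dimensions, not the authors' implementation: the channel count, epoch length, latent size, number of stimulus classes, image resolution, and layer widths are all placeholders, since the abstract does not specify them.

```python
# Minimal PyTorch sketch of the described AE + cGAN pipeline.
# NOT the authors' code: all dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

N_CHANNELS, N_SAMPLES = 32, 256    # assumed EEG montage size and epoch length
LATENT_DIM, N_CLASSES = 128, 10    # assumed feature size and stimulus classes
IMG_SIZE = 64                      # assumed reconstruction resolution

class EEGAutoencoder(nn.Module):
    """Compresses a flattened EEG epoch into a compact feature vector."""
    def __init__(self):
        super().__init__()
        d_in = N_CHANNELS * N_SAMPLES
        self.encoder = nn.Sequential(nn.Linear(d_in, 512), nn.ReLU(),
                                     nn.Linear(512, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU(),
                                     nn.Linear(512, d_in))

    def forward(self, x):
        z = self.encoder(x)        # compact EEG features
        return self.decoder(z), z  # reconstruction for AE training, features for the cGAN

class ConditionalGenerator(nn.Module):
    """Maps (EEG features, class label) to an image, replacing the usual random-noise input."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_CLASSES, N_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_CLASSES, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * IMG_SIZE * IMG_SIZE), nn.Tanh())

    def forward(self, z, labels):
        h = torch.cat([z, self.embed(labels)], dim=1)  # condition on features + label
        return self.net(h).view(-1, 3, IMG_SIZE, IMG_SIZE)

# Forward pass on dummy data to show the shapes involved
eeg = torch.randn(4, N_CHANNELS * N_SAMPLES)
labels = torch.randint(0, N_CLASSES, (4,))
_, features = EEGAutoencoder()(eeg)
images = ConditionalGenerator()(features.detach(), labels)
print(images.shape)  # torch.Size([4, 3, 64, 64])
```

The reported figures use standard image-quality metrics; for reference, PSNR and SSIM can be computed with scikit-image as shown below. The arrays here are synthetic stand-ins, not study data.

```python
# PSNR and SSIM (the metrics reported in the abstract), computed with
# scikit-image on synthetic stand-in images rather than study data.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((64, 64, 3))  # ground-truth stimulus image
reconstruction = np.clip(reference + 0.02 * rng.standard_normal((64, 64, 3)), 0, 1)

psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
ssim = structural_similarity(reference, reconstruction, channel_axis=-1, data_range=1.0)
print(f"PSNR = {psnr:.4f} dB, SSIM = {ssim:.4f}")
```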

Keywords: image reconstruction, autoencoders, conditional generative adversarial networks (cGAN), semantic alignment, brain-computer interfaces, neuroimaging

Suggested Citation

Walambe, Rahee and Kushwaha, Akshay and Shah, Divya and Shetty, Mranmay and Tasgaonkar, Vaibhav and Kotecha, Ketan and Alfarhood, Sultan, From Neural Signals to Images: Generative Reconstruction of Visual Stimuli from Electroencephalography Signals. Available at SSRN: https://ssrn.com/abstract=5098699 or http://dx.doi.org/10.2139/ssrn.5098699

Rahee Walambe (Contact Author)

Symbiosis International (Deemed University)

Akshay Kushwaha

affiliation not provided to SSRN (email)

Divya Shah

affiliation not provided to SSRN (email)

Mranmay Shetty

affiliation not provided to SSRN (email)

Vaibhav Tasgaonkar

University of Pisa (email)

Lungarno Pacinotti, 43
Pisa PI, 56126
Italy

Ketan Kotecha

Symbiosis International (Deemed University) (email)

Sultan Alfarhood

King Saud University (email)

Paper statistics

Downloads: 103
Abstract Views: 300
Rank: 567,171