Attention-Enabled Ensemble Deep Learning Models and Their Validation for Depression Detection: A Domain Adoption Paradigm
45 Pages Posted: 6 Apr 2023
Abstract
Background and Motivation: Depression is increasingly prevalent, leading to a higher risk of suicide. Depression detection and sentiment analysis of text inputs in cross-domain frameworks are challenging. Solo deep learning (SDL) and ensemble deep learning (EDL) models alone are not robust enough. Recently, attention mechanisms have been introduced into SDL. We hypothesize that attention-enabled EDL (aeEDL) architectures are superior to attention-not-enabled SDL (aneSDL) or aeSDL models.
Method: We designed EDL-based architectures with attention blocks, building eleven kinds of SDL models and five kinds of EDL models on four domain-specific datasets. We scientifically validated our models by comparing 'seen' and 'unseen' paradigms (SUP), benchmarked against the SemEval (2016) sentiment dataset, and established reliability tests.
Results: The mean increase in accuracy of the EDL models over their corresponding SDL components was 4.49%. Regarding the effect of the attention block, the increase in mean accuracy (AUC) of aeSDL over aneSDL was 2.58% (1.73%), and the increase in mean accuracy (AUC) of aeEDL over aneEDL was 2.76% (2.80%). When comparing EDL against SDL without and with attention, the mean accuracy (AUC) of aneEDL exceeded that of aneSDL by 4.82% (3.71%), and the mean accuracy (AUC) of aeEDL exceeded that of aeSDL by 5.06% (4.81%). On the benchmark dataset (SemEval), the best-performing aeEDL model (ALBERT+BERT-BiLSTM) was superior to the best aeSDL model (BERT-BiLSTM) by 3.86%. Our scientific validation and robust design showed a difference of only 2.7% in SUP, thereby meeting regulatory constraints.
Conclusion: We validated all our hypotheses and further demonstrated that aeEDL is an effective and generalizable method for detecting symptoms of depression in cross-domain settings.
Keywords: Depression, sentiment analysis, ensemble deep learning, attention-enabled, domain adoption, scientific validation