Predicting Learners' Engagement and Help-Seeking Behaviors in an E-Learning Environment by Using Facial and Head Pose Features

29 Pages · Posted: 19 Oct 2023

Guan-Yun Wang

Tohoku University

Yasuhiro Hatori

Tohoku University

Yoshiyuki Sato

Tohoku University

Chia-Huei Tseng

Tohoku University

Satoshi Shioiri

Tohoku University

Abstract

In an e-learning environment, it is difficult for teachers to track learners' engagement or to detect when learners need help. The current study estimates two mental states: the engagement state and the help-seeking state. We asked participants to solve a problem on an intelligent tutoring system (ITS) and recorded their facial videos, clicks on hint buttons, and answers. Action Units (AUs) and head pose features were extracted with OpenFace and used to construct three feature sets: Basic AUs, Head Pose, and Co-occurring AUs. LightGBM (Light Gradient Boosting Machine) and SVM (support vector machine) classifiers achieved accuracies of 0.69 to 0.93 in estimating the two mental states, with LightGBM outperforming SVM. We used SHAP (SHapley Additive exPlanations) analysis to evaluate the importance of the Basic AUs and Head Pose features. The results showed that AU02 (outer brow raiser), AU23 (lip tightener), and AU04 (brow lowerer) are important for estimating the engagement state, whereas AU04, AU23, and AU14 (dimpler) are important for estimating the help-seeking state. The current study succeeded in estimating when participants were engaged in solving a problem and when they needed help. Features obtained from facial videos are therefore useful for improving e-learning education.
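
For concreteness, the following is a minimal sketch of the kind of classification and SHAP-based feature-importance analysis described above. It is not the authors' code: the feature names, synthetic data, train/test split, and hyperparameters are assumptions made purely for illustration.

```python
# Illustrative sketch only: feature names, data, and hyperparameters are assumptions,
# not the authors' actual pipeline.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import shap

# Hypothetical per-sample features: AU intensities (e.g., AU02, AU04, AU14, AU23)
# plus head-pose angles, as OpenFace would provide; labels are a binary mental
# state (e.g., engaged vs. not engaged, or help-seeking vs. not).
rng = np.random.default_rng(0)
feature_names = ["AU02", "AU04", "AU14", "AU23", "pose_pitch", "pose_yaw", "pose_roll"]
X = rng.normal(size=(500, len(feature_names)))
y = rng.integers(0, 2, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Gradient-boosted trees (LightGBM) compared against an SVM baseline.
lgbm = LGBMClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
print("LightGBM accuracy:", accuracy_score(y_te, lgbm.predict(X_te)))
print("SVM accuracy:     ", accuracy_score(y_te, svm.predict(X_te)))

# SHAP values for the tree model: mean absolute SHAP value per feature gives a
# global importance ranking analogous to the AU ranking reported in the abstract.
explainer = shap.TreeExplainer(lgbm)
shap_values = explainer.shap_values(X_te)
if isinstance(shap_values, list):  # some SHAP versions return one array per class
    shap_values = shap_values[1]
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

With real OpenFace output, X would be replaced by the extracted AU and head-pose columns and y by the labels derived from hint-button clicks and answers, but the modeling and SHAP steps would follow the same pattern.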

Keywords: Machine Learning, Facial expression, Hint processing, Action Units, Engagement, Help-seeking

Suggested Citation

Wang, Guan-Yun and Hatori, Yasuhiro and Sato, Yoshiyuki and Tseng, Chia-Huei and Shioiri, Satoshi, Predicting Learners' Engagement and Help-Seeking Behaviors in an E-Learning Environment by Using Facial and Head Pose Features. Available at SSRN: https://ssrn.com/abstract=4600003 or http://dx.doi.org/10.2139/ssrn.4600003

Guan-Yun Wang (Contact Author)

Tohoku University ( email )

SKK Building, Katahira 2
Aoba-ku, Sendai, 980-8577
Japan

Yasuhiro Hatori

Tohoku University ( email )

SKK Building, Katahira 2
Aoba-ku, Sendai, 980-8577
Japan

Yoshiyuki Sato

Tohoku University ( email )

SKK Building, Katahira 2
Aoba-ku, Sendai, 980-8577
Japan

Chia-Huei Tseng

Tohoku University ( email )

SKK Building, Katahira 2
Aoba-ku, Sendai, 980-8577
Japan

Satoshi Shioiri

Tohoku University ( email )

SKK Building, Katahira 2
Aoba-ku, Sendai, 980-8577
Japan
