TY - JOUR
T1 - Multimodal driver emotion recognition using motor activity and facial expressions
AU - Espino-Salinas, Carlos H.
AU - Luna-García, Huizilopoztli
AU - Celaya-Padilla, José M.
AU - Barría-Huidobro, Cristian
AU - Gamboa Rosales, Nadia Karina
AU - Rondon, David
AU - Villalba-Condori, Klinge Orlando
N1 - Publisher Copyright:
Copyright © 2024 Espino-Salinas, Luna-García, Celaya-Padilla, Barría-Huidobro, Gamboa Rosales, Rondon and Villalba-Condori.
PY - 2024
Y1 - 2024
N2 - Driving performance can be significantly impaired when a person experiences intense emotions behind the wheel. Research shows that emotions such as anger, sadness, agitation, and joy can increase the risk of traffic accidents. This study introduces a methodology for recognizing four specific emotions with an intelligent model that processes and analyzes motor activity and driver behavior signals, generated by interactions with basic driving elements, together with facial geometry images captured during emotion induction. Machine learning is applied to identify the motor activity signals most relevant to emotion recognition. A pre-trained Convolutional Neural Network (CNN) is then used to extract probability vectors from images corresponding to the four emotions under investigation. These data sources are fused through a one-dimensional network for emotion classification. The main contribution of this research is a multimodal intelligent model that combines motor activity signals and facial geometry images to accurately recognize four specific emotions (anger, sadness, agitation, and joy) in drivers, achieving 96.0% accuracy in a simulated environment. The study confirmed a significant relationship between drivers' motor activity, behavior, facial geometry, and the induced emotions.
AB - Driving performance can be significantly impaired when a person experiences intense emotions behind the wheel. Research shows that emotions such as anger, sadness, agitation, and joy can increase the risk of traffic accidents. This study introduces a methodology for recognizing four specific emotions with an intelligent model that processes and analyzes motor activity and driver behavior signals, generated by interactions with basic driving elements, together with facial geometry images captured during emotion induction. Machine learning is applied to identify the motor activity signals most relevant to emotion recognition. A pre-trained Convolutional Neural Network (CNN) is then used to extract probability vectors from images corresponding to the four emotions under investigation. These data sources are fused through a one-dimensional network for emotion classification. The main contribution of this research is a multimodal intelligent model that combines motor activity signals and facial geometry images to accurately recognize four specific emotions (anger, sadness, agitation, and joy) in drivers, achieving 96.0% accuracy in a simulated environment. The study confirmed a significant relationship between drivers' motor activity, behavior, facial geometry, and the induced emotions.
KW - ADAS
KW - convolutional neural network
KW - driver emotions
KW - facial emotion recognition
KW - motor activity
KW - transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85211625963&partnerID=8YFLogxK
U2 - 10.3389/frai.2024.1467051
DO - 10.3389/frai.2024.1467051
M3 - Article
AN - SCOPUS:85211625963
SN - 2624-8212
VL - 7
JO - Frontiers in Artificial Intelligence
JF - Frontiers in Artificial Intelligence
M1 - 1467051
ER -