Deep facial emotion recognition in video using eigenframes

dc.contributor.authorHajarolasvadi, Noushin
dc.contributor.authorDemirel, Hasan
dc.date.accessioned2026-02-06T18:43:44Z
dc.date.issued2020
dc.departmentDoğu Akdeniz Üniversitesi
dc.description.abstractRecently, video-based facial emotion recognition (FER) has become an attractive topic in the computer vision community. However, processing several hundred frames for a single video of a particular emotion is not efficient. In this study, the authors propose a novel approach to obtain a representative set of frames for a video in the eigenspace domain. Principal component analysis (PCA) is applied to a single emotional video, extracting the most significant eigenframes that represent the temporal motion variance embedded in the video. Given that faces are segmented and normalised, the variance captured by PCA is attributed to the facial expression dynamics. The variation in the temporal domain is mapped to the eigenspace, reducing redundancy. The proposed approach is used to extract the input eigenframes. Later, VGG-16, ResNet50, and 2D and 3D CNN architectures called eigenFaceNet are trained on the RML, eNTERFACE'05, and AFEW 6.0 databases. The experimental results are superior to the state of the art by 8% and 4% for the RML and eNTERFACE'05 databases, respectively. The performance gain is also coupled with a reduction in computational time.
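The abstract's core idea, applying PCA across a video's frames so that the leading principal directions become "eigenframes" capturing temporal expression variance, can be sketched as follows. This is a minimal illustration, not the authors' exact pipeline: the function name `extract_eigenframes`, the frame shape, and the use of a plain SVD (rather than any specific PCA library) are assumptions for the example, and the input is assumed to be already face-segmented and normalised grayscale frames as the abstract describes.

```python
import numpy as np

def extract_eigenframes(frames, k=5):
    """Sketch of eigenframe extraction via PCA over a video's frames.

    frames: array of shape (num_frames, height, width); grayscale,
            assumed already face-segmented and normalised.
    Returns the top-k eigenframes, shape (k, height, width).
    """
    n, h, w = frames.shape
    # Flatten each frame to a row vector: one observation per frame.
    X = frames.reshape(n, h * w).astype(np.float64)
    # Centre along the temporal axis so PCA captures motion variance.
    X -= X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal directions
    # (eigenvectors of the frame covariance), ordered by variance.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].reshape(k, h, w)

# Toy example: 30 random 48x48 "frames" stand in for a real clip.
rng = np.random.default_rng(0)
video = rng.random((30, 48, 48))
eig = extract_eigenframes(video, k=5)
print(eig.shape)  # (5, 48, 48)
```

The k eigenframes then serve as a compact stand-in for the full clip, which is what allows the downstream CNNs to train on a handful of frames instead of several hundred.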
dc.description.sponsorshipBAP-C project of Eastern Mediterranean University [BAP-C-02-18-0001]
dc.identifier.doi10.1049/iet-ipr.2019.1566
dc.identifier.endpage3546
dc.identifier.issn1751-9659
dc.identifier.issn1751-9667
dc.identifier.issue14
dc.identifier.orcid0000-0002-3120-5370
dc.identifier.orcid0009-0008-5201-5817
dc.identifier.scopus2-s2.0-85098736896
dc.identifier.scopusqualityQ2
dc.identifier.startpage3536
dc.identifier.urihttps://doi.org/10.1049/iet-ipr.2019.1566
dc.identifier.urihttps://hdl.handle.net/11129/13747
dc.identifier.volume14
dc.identifier.wosWOS:000605364800026
dc.identifier.wosqualityQ3
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.language.isoen
dc.publisherWiley
dc.relation.ispartofIET Image Processing
dc.relation.publicationcategoryArticle - International Refereed Journal - Institutional Faculty Member
dc.rightsinfo:eu-repo/semantics/openAccess
dc.snmzKA_WoS_20260204
dc.subjectprincipal component analysis
dc.subjectemotion recognition
dc.subjectcomputer vision
dc.subjectface recognition
dc.subjectvideo signal processing
dc.subjectimage representation
dc.subjectimage motion analysis
dc.subjectvideo-based facial emotion recognition
dc.subjectcomputer vision community
dc.subjectsingle video
dc.subjecteigenspace domain
dc.subjectPCA
dc.subjectsingle emotional video
dc.subjecttemporal motion variance
dc.subjectfacial expression dynamics
dc.subjecttemporal domain
dc.subjectinput eigenframes
dc.titleDeep facial emotion recognition in video using eigenframes
dc.typeArticle