Deep facial emotion recognition in video using eigenframes


Date

Journal Title

Journal ISSN

Volume Title

Publisher

Wiley

Access Rights

info:eu-repo/semantics/openAccess

Abstract

Recently, video-based facial emotion recognition (FER) has become an attractive topic in the computer vision community. However, processing several hundred frames for a single video of a particular emotion is not efficient. In this study, the authors propose a novel approach to obtain a representative set of frames for a video in the eigenspace domain. Principal component analysis (PCA) is applied to a single emotional video, extracting the most significant eigenframes representing the temporal motion variance embedded in the video. Given that faces are segmented and normalised, the variance captured by PCA is attributed to the facial expression dynamics. The variation in the temporal domain is mapped to the eigenspace, reducing redundancy. The proposed approach is used to extract the input eigenframes. Then VGG-16, ResNet50, and 2D and 3D CNN architectures called eigenFaceNet are trained on the RML, eNTERFACE'05, and AFEW 6.0 databases. The experimental results are superior to the state-of-the-art by 8% and 4% for the RML and eNTERFACE'05 databases, respectively. The performance gain is also coupled with a reduction in computational time.
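The core idea in the abstract, applying PCA along the temporal axis of a single video so that the leading principal components ("eigenframes") summarise the expression dynamics, can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the frame shape, the SVD-based PCA, and the function name are all assumptions for the sake of the example.

```python
import numpy as np

def extract_eigenframes(frames, k=16):
    """Return the top-k eigenframes of a video.

    frames : array of shape (n_frames, height, width), grayscale,
             assumed already face-segmented and normalised.
    The rows of the SVD factor Vt are the principal directions of the
    temporal variance; reshaped back to images they are the eigenframes.
    """
    n, h, w = frames.shape
    X = frames.reshape(n, h * w).astype(np.float64)
    X -= X.mean(axis=0)                          # remove the mean frame
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    k = min(k, Vt.shape[0])
    return Vt[:k].reshape(k, h, w)

# toy usage: a 30-frame "video" of 48x48 pixels
video = np.random.rand(30, 48, 48)
eig = extract_eigenframes(video, k=8)
print(eig.shape)  # (8, 48, 48)
```

The returned eigenframes are orthonormal in pixel space, so a fixed number of them (rather than hundreds of raw frames) can serve as the compact input to a downstream CNN, which is the efficiency argument made in the abstract.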

Description

Keywords

principal component analysis, PCA, emotion recognition, computer vision, face recognition, video signal processing, image representation, image motion analysis, video-based facial emotion recognition, eigenspace domain, eigenframes, temporal motion variance, facial expression dynamics

Journal or Series

IET Image Processing

WoS Q Value

Scopus Q Value

Volume

14

Issue

14

Citation
