Dominant and Complementary Emotion Recognition From Still Images of Faces

dc.contributor.authorGuo, Jianzhu
dc.contributor.authorLei, Zhen
dc.contributor.authorWan, Jun
dc.contributor.authorAvots, Egils
dc.contributor.authorHajarolasvadi, Noushin
dc.contributor.authorKnyazev, Boris
dc.contributor.authorAnbarjafari, Gholamreza
dc.date.accessioned2026-02-06T18:49:38Z
dc.date.issued2018
dc.departmentDoğu Akdeniz Üniversitesi
dc.description.abstractEmotion recognition plays a key role in affective computing. Recently, fine-grained emotion analysis, such as compound facial expression of emotions, has attracted increasing interest from researchers working on affective computing. A compound facial emotion combines a dominant and a complementary emotion (e.g., happily-disgusted and sadly-fearful) and is therefore more fine-grained than the seven classical facial emotions (e.g., happy, disgust, and so on). Current studies on compound emotions are limited to data sets with a small number of categories and unbalanced data distributions, whose labels are obtained automatically by machine-learning-based algorithms and may therefore be inaccurate. To address these problems, we released the iCV-MEFED data set, which includes 50 classes of compound emotions with labels assessed by psychologists. The task is challenging due to the high similarity of compound facial emotions from different categories. In addition, we organized a challenge based on the proposed iCV-MEFED data set, held at the FG 2017 workshop. In this paper, we analyze the top three winning methods and perform further detailed experiments on the proposed data set. Experiments indicate that pairs of compound emotions (e.g., surprisingly-happy vs. happily-surprised) are more difficult to recognize than the seven basic emotions. Nevertheless, we hope the proposed data set can help pave the way for further research on compound facial emotion recognition.
dc.description.sponsorshipEstonian Research Council [PUT638, IUT213]; Estonian Center of Excellence in IT through the European Regional Development Fund; Spanish projects (MINECO/FEDER, UE) [TIN2015-66951-C2-2-R, TIN2016-74946-P]; CERCA Programme / Generalitat de Catalunya; European Commission Horizon 2020 project SEE.4C [H2020-ICT-2015]; National Key Research and Development Plan [2016YFC0801002]; Chinese National Natural Science Foundation [61502491, 61473291, 61572501, 61572536, 61673052, 61773392, 61403405]; Scientific and Technological Research Council of Turkey (TÜBİTAK) 1001 Project [116E097]
dc.description.sponsorshipThis work was supported in part by the Estonian Research Council under Grant PUT638 and Grant IUT213, in part by the Estonian Center of Excellence in IT through the European Regional Development Fund, in part by the Spanish projects (MINECO/FEDER, UE) under Grant TIN2015-66951-C2-2-R and Grant TIN2016-74946-P, in part by the CERCA Programme / Generalitat de Catalunya, in part by the European Commission Horizon 2020 project SEE.4C under Grant H2020-ICT-2015, in part by the National Key Research and Development Plan under Grant 2016YFC0801002, in part by the Chinese National Natural Science Foundation Projects under Grant 61502491, Grant 61473291, Grant 61572501, Grant 61572536, Grant 61673052, Grant 61773392, and Grant 61403405, and in part by the Scientific and Technological Research Council of Turkey (TÜBİTAK) 1001 Project under Grant 116E097.
dc.identifier.doi10.1109/ACCESS.2018.2831927
dc.identifier.endpage26403
dc.identifier.issn2169-3536
dc.identifier.orcid0000-0001-8460-5717
dc.identifier.orcid0000-0001-5338-3007
dc.identifier.orcid0000-0002-3120-5370
dc.identifier.orcid0009-0008-5201-5817
dc.identifier.scopus2-s2.0-85046369915
dc.identifier.scopusqualityQ1
dc.identifier.startpage26391
dc.identifier.urihttps://doi.org/10.1109/ACCESS.2018.2831927
dc.identifier.urihttps://hdl.handle.net/11129/14964
dc.identifier.volume6
dc.identifier.wosWOS:000434935200001
dc.identifier.wosqualityQ2
dc.indekslendigikaynakWeb of Science
dc.indekslendigikaynakScopus
dc.language.isoen
dc.publisherInstitute of Electrical and Electronics Engineers Inc. (IEEE)
dc.relation.ispartofIEEE Access
dc.relation.publicationcategoryArticle - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rightsinfo:eu-repo/semantics/openAccess
dc.snmzKA_WoS_20260204
dc.subjectDominant and complementary emotion recognition
dc.subjectcompound emotions
dc.subjectfine-grained face emotion dataset
dc.titleDominant and Complementary Emotion Recognition From Still Images of Faces
dc.typeArticle