Dominant and Complementary Emotion Recognition From Still Images of Faces
| dc.contributor.author | Guo, Jianzhu | |
| dc.contributor.author | Lei, Zhen | |
| dc.contributor.author | Wan, Jun | |
| dc.contributor.author | Avots, Egils | |
| dc.contributor.author | Hajarolasvadi, Noushin | |
| dc.contributor.author | Knyazev, Boris | |
| dc.contributor.author | Anbarjafari, Gholamreza | |
| dc.date.accessioned | 2026-02-06T18:49:38Z | |
| dc.date.issued | 2018 | |
| dc.department | Doğu Akdeniz Üniversitesi | |
| dc.description.abstract | Emotion recognition plays a key role in affective computing. Recently, fine-grained emotion analysis, such as recognizing compound facial expressions of emotion, has attracted strong interest from researchers in affective computing. A compound facial emotion combines a dominant and a complementary emotion (e.g., happily-disgusted and sadly-fearful) and is therefore more fine-grained than the seven classical facial emotions (e.g., happy, disgusted, and so on). Existing studies on compound emotions rely on data sets with few categories and unbalanced class distributions, whose labels are generated automatically by machine-learning algorithms and can therefore be inaccurate. To address these problems, we released the iCV-MEFED data set, which comprises 50 classes of compound emotions with labels assessed by psychologists. The task is challenging because compound facial emotions from different categories can look highly similar. In addition, we organized a challenge based on the proposed iCV-MEFED data set, held at the FG 2017 workshop. In this paper, we analyze the top three winning methods and report further detailed experiments on the data set. The experiments indicate that mirrored pairs of compound emotions (e.g., surprisingly-happy vs. happily-surprised) are harder to recognize than the seven basic emotions. We hope the proposed data set paves the way for further research on compound facial emotion recognition. (A sketch of one possible 50-class enumeration follows this record.) | |
| dc.description.sponsorship | Estonian Research Council [PUT638, IUT213]; Estonian Center of Excellence in IT through the European Regional Development Fund; Spanish projects (MINECO/FEDER, UE) [TIN2015-66951-C2-2-R, TIN2016-74946-P]; CERCA Programme / Generalitat de Catalunya; European Commission Horizon 2020 granted project SEE.4C [H2020-ICT-2015]; National Key Research and Development Plan [2016YFC0801002]; Chinese National Natural Science Foundation [61502491, 61473291, 61572501, 61572536, 61673052, 61773392, 61403405]; Scientific and Technological Research Council of Turkey (TÜBİTAK) 1001 Project [116E097] | |
| dc.description.sponsorship | This work was supported in part by the Estonian Research Council under Grant PUT638 and Grant IUT213, in part by the Estonian Center of Excellence in IT through the European Regional Development Fund, in part by the Spanish projects (MINECO/FEDER, UE) under Grant TIN2015-66951-C2-2-R and Grant TIN2016-74946-P, in part by the CERCA Programme / Generalitat de Catalunya, in part by the European Commission Horizon 2020 granted project SEE.4C under Grant H2020-ICT-2015, in part by the National Key Research and Development Plan under Grant 2016YFC0801002, in part by the Chinese National Natural Science Foundation Projects under Grant 61502491, Grant 61473291, Grant 61572501, Grant 61572536, Grant 61673052, Grant 61773392, and Grant 61403405, and in part by the Scientific and Technological Research Council of Turkey (TÜBİTAK) 1001 Project under Grant 116E097. | |
| dc.identifier.doi | 10.1109/ACCESS.2018.2831927 | |
| dc.identifier.endpage | 26403 | |
| dc.identifier.issn | 2169-3536 | |
| dc.identifier.orcid | 0000-0001-8460-5717 | |
| dc.identifier.orcid | 0000-0001-5338-3007 | |
| dc.identifier.orcid | 0000-0002-3120-5370 | |
| dc.identifier.orcid | 0009-0008-5201-5817 | |
| dc.identifier.scopus | 2-s2.0-85046369915 | |
| dc.identifier.scopusquality | Q1 | |
| dc.identifier.startpage | 26391 | |
| dc.identifier.uri | https://doi.org/10.1109/ACCESS.2018.2831927 | |
| dc.identifier.uri | https://hdl.handle.net/11129/14964 | |
| dc.identifier.volume | 6 | |
| dc.identifier.wos | WOS:000434935200001 | |
| dc.identifier.wosquality | Q2 | |
| dc.indekslendigikaynak | Web of Science | |
| dc.indekslendigikaynak | Scopus | |
| dc.language.iso | en | |
| dc.publisher | Institute of Electrical and Electronics Engineers (IEEE) | |
| dc.relation.ispartof | IEEE Access | |
| dc.relation.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Academic Staff | |
| dc.rights | info:eu-repo/semantics/openAccess | |
| dc.snmz | KA_WoS_20260204 | |
| dc.subject | Dominant and complementary emotion recognition | |
| dc.subject | compound emotions | |
| dc.subject | fine-grained face emotion dataset | |
| dc.title | Dominant and Complementary Emotion Recognition From Still Images of Faces | |
| dc.type | Article | |
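
The abstract reports that iCV-MEFED covers 50 compound-emotion classes. One enumeration consistent with that count is: the seven basic emotions, one neutral class, and the 7 × 6 = 42 ordered dominant-complementary pairs of distinct basics. The Python sketch below illustrates this counting only; the specific emotion names (including contempt) and the pair-naming scheme are illustrative assumptions, not the data set's official label list.

```python
# Hypothetical sketch of a 50-class compound-emotion label space.
# ASSUMPTION: 50 = 7 basic emotions + 1 neutral + 7*6 ordered
# dominant-complementary pairs; consult iCV-MEFED for the official labels.
from itertools import permutations

BASIC = ["happy", "sad", "angry", "disgusted",
         "fearful", "surprised", "contempt"]  # assumed basic set

def build_label_space():
    """Return basics, neutral, and every ordered (dominant, complementary) pair."""
    labels = BASIC + ["neutral"]
    # Order matters: "happy-disgusted" and "disgusted-happy" are distinct
    # classes, mirroring pairs like surprisingly-happy vs. happily-surprised.
    labels += [f"{dom}-{comp}" for dom, comp in permutations(BASIC, 2)]
    return labels

if __name__ == "__main__":
    labels = build_label_space()
    print(len(labels))  # 7 + 1 + 42 = 50
    print(labels[0], labels[-1])
```

Under such an enumeration, a classifier for the data set outputs one of 50 mutually exclusive labels, which is consistent with the abstract's observation that mirrored pairs of compound emotions are the hardest to tell apart.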