Wang, Y.J.[Yong-Jin],
Guan, L.[Ling],
Recognizing Human Emotional State From Audiovisual Signals*,
MultMed(10), No. 5, August 2008, pp. 936-946.
IEEE DOI
0905
BibRef
Wang, Y.J.[Yong-Jin],
Guan, L.[Ling],
Recognizing Human Emotional State From Audiovisual Signals,
MultMed(10), No. 4, June 2008, pp. 659-668.
IEEE DOI
0905
BibRef
Mower, E.,
Mataric, M.J.,
Narayanan, S.,
Human Perception of Audio-Visual Synthetic Character Emotion Expression
in the Presence of Ambiguous and Conflicting Information,
MultMed(11), No. 5, 2009, pp. 843-855.
IEEE DOI
0907
BibRef
Wang, Y.J.[Yong-Jin],
Guan, L.[Ling],
Venetsanopoulos, A.N.,
Kernel Cross-Modal Factor Analysis for Information Fusion With
Application to Bimodal Emotion Recognition,
MultMed(14), No. 3, 2012, pp. 597-607.
IEEE DOI
1202
BibRef
Wang, Y.J.[Yong-Jin],
Zhang, R.[Rui],
Guan, L.[Ling],
Venetsanopoulos, A.N.,
Kernel Fusion of Audio and Visual Information for Emotion Recognition,
ICIAR11(II: 140-150).
Springer DOI
1106
BibRef
Metallinou, A.[Angeliki],
Wöllmer, M.[Martin],
Katsamanis, A.[Athanasios],
Eyben, F.[Florian],
Schuller, B.[Björn],
Narayanan, S.[Shrikanth],
Context-Sensitive Learning for Enhanced Audiovisual Emotion
Classification,
AffCom(3), No. 2, 2012, pp. 184-198.
IEEE DOI
1208
BibRef
Lin, J.C.[Jen-Chun],
Wu, C.H.[Chung-Hsien],
Wei, W.L.[Wen-Li],
Error Weighted Semi-Coupled Hidden Markov Model for Audio-Visual
Emotion Recognition,
MultMed(14), No. 1, January 2012, pp. 142-156.
IEEE DOI
1201
BibRef
Rashid, M.[Munaf],
Abu-Bakar, S.A.R.,
Mokji, M.[Musa],
Human emotion recognition from videos using spatio-temporal and audio
features,
VC(29), No. 12, December 2013, pp. 1269-1275.
WWW Link.
1312
BibRef
Mariooryad, S.[Soroosh],
Busso, C.[Carlos],
Exploring Cross-Modality Affective Reactions for Audiovisual Emotion
Recognition,
AffCom(4), No. 2, 2013, pp. 183-196.
IEEE DOI
1307
Entrainment
BibRef
Wu, C.H.[Chung-Hsien],
Lin, J.C.[Jen-Chun],
Wei, W.L.[Wen-Li],
Two-Level Hierarchical Alignment for Semi-Coupled HMM-Based
Audiovisual Emotion Recognition With Temporal Course,
MultMed(15), No. 8, December 2013, pp. 1880-1895.
IEEE DOI
1402
audio signal processing
BibRef
Wöllmer, M.[Martin],
Kaiser, M.[Moritz],
Eyben, F.[Florian],
Schuller, B.[Björn],
Rigoll, G.[Gerhard],
LSTM-Modeling of continuous emotions in an audiovisual affect
recognition framework,
IVC(31), No. 2, February 2013, pp. 153-163.
Elsevier DOI
1303
Emotion recognition; Long Short-Term Memory; Facial movement features;
Context modeling
BibRef
Wu, C.H.[Chung-Hsien],
Wei, W.L.[Wen-Li],
Lin, J.C.[Jen-Chun],
Lee, W.Y.[Wei-Yu],
Speaking Effect Removal on Emotion Recognition From Facial
Expressions Based on Eigenface Conversion,
MultMed(15), No. 8, December 2013, pp. 1732-1744.
IEEE DOI
1402
Gaussian processes
BibRef
Ringeval, F.[Fabien],
Eyben, F.[Florian],
Kroupi, E.[Eleni],
Yuce, A.[Anil],
Thiran, J.P.[Jean-Philippe],
Ebrahimi, T.[Touradj],
Lalanne, D.[Denis],
Schuller, B.[Björn],
Prediction of asynchronous dimensional emotion ratings from
audiovisual and physiological data,
PRL(66), No. 1, 2015, pp. 22-30.
Elsevier DOI
1511
Context-learning long short-term memory recurrent neural networks
BibRef
Kim, J.C.,
Clements, M.A.,
Multimodal Affect Classification at Various Temporal Lengths,
AffCom(6), No. 4, October 2015, pp. 371-384.
IEEE DOI
1512
Audio-visual systems
BibRef
Wei, W.L.[Wen-Li],
Lin, J.C.[Jen-Chun],
Wu, C.H.[Chung-Hsien],
Interaction Style Recognition Based on Multi-Layer Multi-View Profile
Representation,
AffCom(8), No. 3, July 2017, pp. 355-368.
IEEE DOI
1709
Emotion recognition, Feature extraction, Pragmatics, Speech,
Speech recognition, Support vector machines, Text recognition,
Interaction style, dialogue system, dialogue topic, emotion,
probabilistic, fusion
BibRef
Turker, B.B.,
Yemez, Y.,
Sezgin, T.M.,
Erzin, E.,
Audio-Facial Laughter Detection in Naturalistic Dyadic Conversations,
AffCom(8), No. 4, October 2017, pp. 534-545.
IEEE DOI
1712
Bagging, Databases, Feature extraction, Hidden Markov models,
Magnetic heads,
naturalistic dyadic conversations
BibRef
Georgakis, C.[Christos],
Panagakis, Y.[Yannis],
Zafeiriou, S.P.[Stefanos P.],
Pantic, M.[Maja],
The Conflict Escalation Resolution (CONFER) Database,
IVC(65), No. 1, 2017, pp. 37-48.
Elsevier DOI
1709
BibRef
Earlier: A2, A3, A4, Only:
Audiovisual Conflict Detection in Political Debates,
FacBeh14(306-314).
Springer DOI
1504
Automatic conflict analysis
BibRef
Seng, K.P.,
Ang, L.M.,
Ooi, C.S.,
A Combined Rule-Based Machine Learning Audio-Visual Emotion
Recognition Approach,
AffCom(9), No. 1, January 2018, pp. 3-13.
IEEE DOI
1804
audio signal processing, cepstral analysis, emotion recognition,
feature extraction, learning (artificial intelligence),
rule-based
BibRef
Zhang, S.,
Zhang, S.,
Huang, T.,
Gao, W.,
Tian, Q.,
Learning Affective Features With a Hybrid Deep Model for Audio-Visual
Emotion Recognition,
CirSysVideo(28), No. 10, October 2018, pp. 3030-3043.
IEEE DOI
1811
Feature extraction, Emotion recognition, Visualization,
Image segmentation, Machine learning, Databases, Convolution,
multimodality fusion
BibRef
Seng, K.P.,
Ang, L.M.,
Video Analytics for Customer Emotion and Satisfaction at Contact
Centers,
HMS(48), No. 3, June 2018, pp. 266-278.
IEEE DOI
1805
Companies, Customer satisfaction, Emotion recognition, Speech,
Speech recognition, Visualization, Customer experience,
video analytics
BibRef
Zhang, B.,
Provost, E.M.,
Essl, G.,
Cross-Corpus Acoustic Emotion Recognition with Multi-Task Learning:
Seeking Common Ground While Preserving Differences,
AffCom(10), No. 1, January 2019, pp. 85-99.
IEEE DOI
1903
Emotion recognition, Training, Speech, Speech recognition, Acoustics,
Training data, Data models, Emotion recognition, cross-corpus,
multi-task learning
BibRef
Noroozi, F.,
Marjanovic, M.,
Njegus, A.,
Escalera, S.,
Anbarjafari, G.,
Audio-Visual Emotion Recognition in Video Clips,
AffCom(10), No. 1, January 2019, pp. 60-75.
IEEE DOI
1903
Emotion recognition, Visualization, Feature extraction, Databases,
Face, Neural networks, Mel frequency cepstral coefficient,
convolutional neural networks
BibRef
Meng, Z.,
Han, S.,
Tong, Y.,
Listen to Your Face: Inferring Facial Action Units from Audio Channel,
AffCom(10), No. 4, October 2019, pp. 537-551.
IEEE DOI
1912
Face recognition, Feature extraction, Speech recognition,
Visualization, Image recognition, Facial action units,
audio-based facial action unit recognition
BibRef
Kim, Y.,
Provost, E.M.,
ISLA: Temporal Segmentation and Labeling for Audio-Visual Emotion
Recognition,
AffCom(10), No. 2, April 2019, pp. 196-208.
IEEE DOI
1906
Speech, Emotion recognition, Speech recognition, Correlation,
Labeling, Eyebrows, Visualization, Audio-visual, emotion, recognition,
speech
BibRef
Avots, E.[Egils],
Sapinski, T.[Tomasz],
Bachmann, M.[Maie],
Kaminska, D.[Dorota],
Audiovisual emotion recognition in wild,
MVA(30), No. 5, July 2019, pp. 975-985.
Springer DOI
1907
BibRef
Basnet, R.[Ramesh],
Islam, M.T.[Mohammad Tariqul],
Howlader, T.[Tamanna],
Rahman, S.M.M.[S. M. Mahbubur],
Hatzinakos, D.[Dimitrios],
Estimation of affective dimensions using CNN-based features of
audiovisual data,
PRL(128), 2019, pp. 290-297.
Elsevier DOI
1912
Convolutional neural network, Affective features, Emotional dimensions
BibRef
Hajarolasvadi, N.[Noushin],
Demirel, H.[Hasan],
Deep emotion recognition based on audio-visual correlation,
IET-CV(14), No. 7, October 2020, pp. 517-527.
DOI Link
2010
BibRef
Ghaleb, E.,
Popa, M.,
Asteriadis, S.,
Metric Learning-Based Multimodal Audio-Visual Emotion Recognition,
MultMedMag(27), No. 1, January 2020, pp. 37-48.
IEEE DOI
2004
Measurement, Emotion recognition, Visualization,
Support vector machines, Feature extraction, Task analysis,
Fisher vectors
BibRef
Schoneveld, L.[Liam],
Othmani, A.[Alice],
Abdelkawy, H.[Hazem],
Leveraging recent advances in deep learning for audio-Visual emotion
recognition,
PRL(146), 2021, pp. 1-7.
Elsevier DOI
2105
Human behavior recognition, Audiovisual emotion recognition,
Affective computing, Video sequences, Deep learning
BibRef
Nie, W.Z.[Wei-Zhi],
Ren, M.J.[Min-Jie],
Nie, J.[Jie],
Zhao, S.C.[Si-Cheng],
C-GCN: Correlation Based Graph Convolutional Network for Audio-Video
Emotion Recognition,
MultMed(23), 2021, pp. 3793-3804.
IEEE DOI
2110
Emotion recognition, Feature extraction, Correlation,
Task analysis, Visualization, Face recognition, Convolution, multiple graphs
BibRef
Nie, W.Z.[Wei-Zhi],
Chang, R.[Rihao],
Ren, M.J.[Min-Jie],
Su, Y.T.[Yu-Ting],
Liu, A.[Anan],
I-GCN: Incremental Graph Convolution Network for Conversation Emotion
Detection,
MultMed(24), 2022, pp. 4471-4481.
IEEE DOI
2212
Correlation, Semantics, Social networking (online), Convolution,
Transformers, Task analysis, Emotion recognition, GCN
BibRef
Ren, M.J.[Min-Jie],
Huang, X.D.[Xiang-Dong],
Shi, X.Q.[Xiao-Qi],
Nie, W.Z.[Wei-Zhi],
Interactive Multimodal Attention Network for Emotion Recognition in
Conversation,
SPLetters(28), 2021, pp. 1046-1050.
IEEE DOI
2106
Visualization, Solid modeling, Acoustics, Task analysis,
Emotion recognition, Convolution, Context modeling,
recurrent neural network
BibRef
Shirian, A.[Amir],
Tripathi, S.[Subarna],
Guha, T.[Tanaya],
Dynamic Emotion Modeling With Learnable Graphs and Graph Inception
Network,
MultMed(24), 2022, pp. 780-790.
IEEE DOI
2202
Emotion recognition, Videos, Dynamics, Convolution, Databases,
Task analysis, Speech recognition, Emotion recognition,
inception network
BibRef
Xu, Y.F.[Yi-Fan],
Cui, Y.Q.[Yu-Qi],
Jiang, X.[Xue],
Yin, Y.J.[Ying-Jie],
Ding, J.T.[Jing-Ting],
Li, L.[Liang],
Wu, D.R.[Dong-Rui],
Inconsistency-Based Multi-Task Cooperative Learning for Emotion
Recognition,
AffCom(13), No. 4, October 2022, pp. 2017-2027.
IEEE DOI
2212
Task analysis, Multitasking, Estimation, Emotion recognition,
Computational modeling, Speech recognition, Labeling,
emotion recognition
BibRef
Xu, Y.F.[Yi-Fan],
Jiang, X.[Xue],
Wu, D.R.[Dong-Rui],
Cross-Task Inconsistency Based Active Learning (CTIAL) for Emotion
Recognition,
AffCom(15), No. 3, July 2024, pp. 1659-1668.
IEEE DOI
2409
Task analysis, Uncertainty, Measurement uncertainty, Estimation,
Entropy, Emotion recognition, Affective computing, Active learning,
transfer learning
BibRef
Ren, M.J.[Min-Jie],
Huang, X.D.[Xiang-Dong],
Li, W.H.[Wen-Hui],
Liu, J.[Jing],
Multi-loop graph convolutional network for multimodal conversational
emotion recognition,
JVCIR(94), 2023, pp. 103846.
Elsevier DOI
2306
Conversational emotion recognition,
Multi-modal sentiment analysis, Graph convolutional network
BibRef
Ren, M.J.[Min-Jie],
Huang, X.D.[Xiang-Dong],
Liu, J.[Jing],
Liu, M.[Ming],
Li, X.Y.[Xuan-Ya],
Liu, A.A.[An-An],
MALN: Multimodal Adversarial Learning Network for Conversational
Emotion Recognition,
CirSysVideo(33), No. 11, November 2023, pp. 6965-6980.
IEEE DOI
2311
BibRef
Ren, M.J.[Min-Jie],
Huang, X.D.[Xiang-Dong],
Li, W.H.[Wen-Hui],
Song, D.[Dan],
Nie, W.Z.[Wei-Zhi],
LR-GCN: Latent Relation-Aware Graph Convolutional Network for
Conversational Emotion Recognition,
MultMed(24), 2022, pp. 4422-4432.
IEEE DOI
2212
Correlation, Emotion recognition, Task analysis, Context modeling,
Computer architecture, Transformers, Social networking (online),
graph convolutional network
BibRef
Goncalves, L.[Lucas],
Busso, C.[Carlos],
Robust Audiovisual Emotion Recognition: Aligning Modalities,
Capturing Temporal Information, and Handling Missing Features,
AffCom(13), No. 4, October 2022, pp. 2156-2170.
IEEE DOI
2212
Visualization, Emotion recognition, Feature extraction, Acoustics,
Training, Transformers, Robustness, Multimodal emotion recognition,
auxiliary networks
BibRef
Kansizoglou, I.[Ioannis],
Bampis, L.[Loukas],
Gasteratos, A.[Antonios],
An Active Learning Paradigm for Online Audio-Visual Emotion
Recognition,
AffCom(13), No. 2, April 2022, pp. 756-768.
IEEE DOI
2206
Feature extraction, Emotion recognition, Computer architecture,
Visualization, Robots, Monitoring, Data mining,
emotion in human-computer interaction
BibRef
Mocanu, B.[Bogdan],
Tapu, R.[Ruxandra],
Zaharia, T.[Titus],
Multimodal emotion recognition using cross modal audio-video fusion
with attention and deep metric learning,
IVC(133), 2023, pp. 104676.
Elsevier DOI
2305
Spatial attention, Channel attention, Temporal attention,
Cross-modal fusion, Emotional metric constraint
BibRef
Chen, G.H.[Guang-Hui],
Jiao, S.[Shuang],
Speech-Visual Emotion Recognition by Fusing Shared and Specific
Features,
SPLetters(30), 2023, pp. 678-682.
IEEE DOI
2307
Emotion recognition, Visualization, Feature extraction,
Speech recognition, Mel frequency cepstral coefficient, feature fusion
BibRef
Liu, Z.S.[Zhi-Song],
Courant, R.[Robin],
Kalogeiton, V.[Vicky],
FunnyNet: Audiovisual Learning of Funny Moments in Videos,
ACCV22(IV:433-450).
Springer DOI
2307
BibRef
Wagner, J.[Johannes],
Triantafyllopoulos, A.[Andreas],
Wierstorf, H.[Hagen],
Schmitt, M.[Maximilian],
Burkhardt, F.[Felix],
Eyben, F.[Florian],
Schuller, B.W.[Björn W.],
Dawn of the Transformer Era in Speech Emotion Recognition:
Closing the Valence Gap,
PAMI(45), No. 9, September 2023, pp. 10745-10759.
IEEE DOI
2309
BibRef
Li, X.[Xiaoke],
Zhang, Z.[Zufan],
Gan, C.Q.[Chen-Quan],
Xiang, Y.[Yong],
Multi-Label Speech Emotion Recognition via Inter-Class Difference
Loss Under Response Residual Network,
MultMed(25), 2023, pp. 3230-3244.
IEEE DOI
2309
BibRef
Chen, F.Y.[Fei-Yu],
Shao, J.[Jie],
Zhu, S.Y.[Shu-Yuan],
Shen, H.T.[Heng Tao],
Multivariate, Multi-Frequency and Multimodal: Rethinking Graph Neural
Networks for Emotion Recognition in Conversation,
CVPR23(10761-10770)
IEEE DOI
2309
BibRef
Li, W.[Wei],
Li, Y.[Yang],
Pandelea, V.[Vlad],
Ge, M.[Mengshi],
Zhu, L.[Luyao],
Cambria, E.[Erik],
ECPEC: Emotion-Cause Pair Extraction in Conversations,
AffCom(14), No. 3, July 2023, pp. 1754-1765.
IEEE DOI
2310
BibRef
Tu, G.[Geng],
Liang, B.[Bin],
Jiang, D.[Dazhi],
Xu, R.F.[Rui-Feng],
Sentiment- Emotion- and Context-Guided Knowledge Selection Framework
for Emotion Recognition in Conversations,
AffCom(14), No. 3, July 2023, pp. 1803-1816.
IEEE DOI
2310
BibRef
Wang, F.F.[Fan-Fan],
Ding, Z.X.[Zi-Xiang],
Xia, R.[Rui],
Li, Z.Y.[Zhao-Yu],
Yu, J.F.[Jian-Fei],
Multimodal Emotion-Cause Pair Extraction in Conversations,
AffCom(14), No. 3, July 2023, pp. 1832-1844.
IEEE DOI
2310
BibRef
Jiang, D.[Dazhi],
Wei, R.[Runguo],
Wen, J.T.[Jin-Tao],
Tu, G.[Geng],
Cambria, E.[Erik],
AutoML-Emo: Automatic Knowledge Selection Using Congruent Effect for
Emotion Identification in Conversations,
AffCom(14), No. 3, July 2023, pp. 1845-1856.
IEEE DOI
2310
BibRef
Aspandi, D.[Decky],
Sukno, F.[Federico],
Schuller, B.W.[Björn W.],
Binefa, X.[Xavier],
Audio-Visual Gated-Sequenced Neural Networks for Affect Recognition,
AffCom(14), No. 3, July 2023, pp. 2193-2208.
IEEE DOI
2310
BibRef
Tellamekala, M.K.[Mani Kumar],
Giesbrecht, T.[Timo],
Valstar, M.[Michel],
Modelling Stochastic Context of Audio-Visual Expressive Behaviour
With Affective Processes,
AffCom(14), No. 3, July 2023, pp. 2290-2303.
IEEE DOI
2310
BibRef
Latif, S.[Siddique],
Rana, R.[Rajib],
Khalifa, S.[Sara],
Jurdak, R.[Raja],
Schuller, B.W.[Björn W.],
Multitask Learning From Augmented Auxiliary Data for Improving Speech
Emotion Recognition,
AffCom(14), No. 4, October 2023, pp. 3164-3176.
IEEE DOI
2312
BibRef
Lei, Y.Y.[Yuan-Yuan],
Cao, H.[Houwei],
Audio-Visual Emotion Recognition With Preference Learning Based on
Intended and Multi-Modal Perceived Labels,
AffCom(14), No. 4, October 2023, pp. 2954-2969.
IEEE DOI
2312
BibRef
Hsu, J.H.[Jia-Hao],
Wu, C.H.[Chung-Hsien],
Applying Segment-Level Attention on Bi-Modal Transformer Encoder for
Audio-Visual Emotion Recognition,
AffCom(14), No. 4, October 2023, pp. 3231-3243.
IEEE DOI
2312
BibRef
Zhang, D.[Duzhen],
Chen, F.L.[Fei-Long],
Chang, J.L.[Jian-Long],
Chen, X.[Xiuyi],
Tian, Q.[Qi],
Structure Aware Multi-Graph Network for Multi-Modal Emotion
Recognition in Conversations,
MultMed(26), 2024, pp. 3987-3997.
IEEE DOI
2402
Emotion recognition, Context modeling, Feature extraction, Visualization,
Acoustics, Oral communication, Transformers,
emotion recognition in conversations
BibRef
Li, J.[Jiang],
Wang, X.P.[Xiao-Ping],
Lv, G.Q.[Guo-Qing],
Zeng, Z.G.[Zhi-Gang],
GA2MIF: Graph and Attention Based Two-Stage Multi-Source Information
Fusion for Conversational Emotion Detection,
AffCom(15), No. 1, January 2024, pp. 130-143.
IEEE DOI
2403
Emotion recognition, Context modeling, Acoustics,
Computational modeling, Oral communication, Data models,
multimodal fusion
BibRef
Chen, T.T.[Tian-Tian],
Shen, Y.[Ying],
Chen, X.[Xuri],
Zhang, L.[Lin],
Zhao, S.J.[Sheng-Jie],
MPEG: A Multi-Perspective Enhanced Graph Attention Network for Causal
Emotion Entailment in Conversations,
AffCom(15), No. 3, July 2024, pp. 1004-1017.
IEEE DOI
2409
Task analysis, Emotion recognition, Transform coding,
Oral communication, Context modeling, Emotional responses, Bridges,
dialogue system
BibRef
Zhang, X.H.[Xiao-Heng],
Cui, W.G.[Wei-Gang],
Hu, B.[Bin],
Li, Y.[Yang],
A Multi-Level Alignment and Cross-Modal Unified Semantic Graph
Refinement Network for Conversational Emotion Recognition,
AffCom(15), No. 3, July 2024, pp. 1553-1566.
IEEE DOI
2409
Semantics, Emotion recognition, Uncertainty, Context modeling,
Task analysis, Self-supervised learning, Syntactics,
semantic refinement
BibRef
Yang, Z.Y.[Zhen-Yu],
Li, X.Y.[Xiao-Yang],
Cheng, Y.[Yuhu],
Zhang, T.[Tong],
Wang, X.S.[Xue-Song],
Emotion Recognition in Conversation Based on a Dynamic Complementary
Graph Convolutional Network,
AffCom(15), No. 3, July 2024, pp. 1567-1579.
IEEE DOI
2409
Emotion recognition, Context modeling, Oral communication,
Commonsense reasoning, Feature extraction, Data models,
utterance density
BibRef
Liao, R.F.[Rong-Fan],
Song, S.Y.[Si-Yang],
Gunes, H.[Hatice],
An Open-Source Benchmark of Deep Learning Models for Audio-Visual
Apparent and Self-Reported Personality Recognition,
AffCom(15), No. 3, July 2024, pp. 1590-1607.
IEEE DOI
2409
Computational modeling, Benchmark testing, Visualization, Codes,
Feature extraction, Predictive models, Face recognition, deep learning
BibRef
Quan, X.J.[Xiao-Jun],
Wu, S.Y.[Si-Yue],
Chen, J.Q.[Jun-Qing],
Shen, W.Z.[Wei-Zhou],
Yu, J.X.[Jian-Xing],
Multi-Party Conversation Modeling for Emotion Recognition,
AffCom(15), No. 3, July 2024, pp. 751-768.
IEEE DOI
2409
Oral communication, Emotion recognition, Task analysis,
Context modeling, Predictive models, Computational modeling,
pre-trained language models
BibRef
Nagasawa, F.[Fuminori],
Okada, S.[Shogo],
Ishihara, T.[Takuya],
Nitta, K.[Katsumi],
Adaptive Interview Strategy Based on Interviewees' Speaking
Willingness Recognition for Interview Robots,
AffCom(15), No. 3, July 2024, pp. 942-957.
IEEE DOI
2409
Interviews, Behavioral sciences, Robot sensing systems,
Adaptation models, Adaptive systems, Feature extraction, Sensors,
speaker's willingness
BibRef
Abakarim, F.[Fadwa],
Abenaou, A.[Abdenbi],
Speech Emotion Recognition System Using Discrete Wavelet Transform
and Support Vector Machine,
ISCV24(1-5)
IEEE DOI
2408
Support vector machines, Training, Emotion recognition,
Speech recognition, Feature extraction, Kurtosis, Vectors,
discrete wavelet transform
BibRef
Zhou, W.W.[Wei-Wei],
Lu, J.[Jiada],
Xiong, Z.[Zhaolong],
Wang, W.F.[Wei-Feng],
Leveraging TCN and Transformer for effective visual-audio fusion in
continuous emotion recognition,
ABAW23(5756-5763)
IEEE DOI
2309
BibRef
Mathur, L.[Leena],
Adolphs, R.[Ralph],
Mataric, M.J.[Maja J.],
Towards Intercultural Affect Recognition: Audio-Visual Affect
Recognition in the Wild Across Six Cultures,
FG23(1-6)
IEEE DOI
2303
Training, Visualization, Systematics, Face recognition,
Computational modeling, Gesture recognition, Robustness
BibRef
Chumachenko, K.[Kateryna],
Iosifidis, A.[Alexandros],
Gabbouj, M.[Moncef],
MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial
Expression Recognition in-the-wild,
ABAW24(4673-4682)
IEEE DOI
2410
Adaptation models, Emotion recognition, Face recognition,
Computational modeling, Self-supervised learning,
multi-modal
BibRef
Chumachenko, K.[Kateryna],
Iosifidis, A.[Alexandros],
Gabbouj, M.[Moncef],
Self-attention fusion for audiovisual emotion recognition with
incomplete data,
ICPR22(2822-2828)
IEEE DOI
2212
Emotion recognition, Data analysis, Robustness, Data models,
Noise measurement, Standards
BibRef
Praveen, R.G.[R. Gnana],
Alam, J.[Jahangir],
Recursive Joint Cross-Modal Attention for Multimodal Fusion in
Dimensional Emotion Recognition,
ABAW24(4803-4813)
IEEE DOI Code:
WWW Link.
2410
Correlation coefficient, Visualization, Emotion recognition,
Computational modeling, Refining,
Joint Representation
BibRef
Praveen, R.G.[R. Gnana],
de Melo, W.C.[Wheidima Carneiro],
Ullah, N.[Nasib],
Aslam, H.[Haseeb],
Zeeshan, O.[Osama],
Denorme, T.[Théo],
Pedersoli, M.[Marco],
Koerich, A.L.[Alessandro L.],
Bacon, S.[Simon],
Cardinal, P.[Patrick],
Granger, E.[Eric],
A Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional
Emotion Recognition,
ABAW22(2485-2494)
IEEE DOI
2210
Correlation coefficient, Emotion recognition, Visualization,
Correlation, Computational modeling, Predictive models, Feature extraction
BibRef
Praveen, R.G.[R. Gnana],
Granger, E.[Eric],
Cardinal, P.[Patrick],
Cross Attentional Audio-Visual Fusion for Dimensional Emotion
Recognition,
FG21(1-8)
IEEE DOI
2303
Emotion recognition, Face recognition,
Computational modeling, Gesture recognition, Feature extraction, Fatigue
BibRef
Zhang, S.[Su],
An, R.[Ruyi],
Ding, Y.[Yi],
Guan, C.T.[Cun-Tai],
Continuous Emotion Recognition using Visual-audio-linguistic
Information: A Technical Report for ABAW3,
ABAW22(2375-2380)
IEEE DOI
2210
Training, Correlation coefficient, Visualization,
Emotion recognition, Fuses, Databases, Writing
BibRef
Zhang, S.[Su],
Ding, Y.[Yi],
Wei, Z.Q.[Zi-Quan],
Guan, C.T.[Cun-Tai],
Continuous Emotion Recognition with Audio-visual Leader-follower
Attentive Fusion,
ABAW21(3560-3567)
IEEE DOI
2112
Deep learning, Training, Correlation coefficient,
Convolutional codes, Visualization, Emotion recognition, Databases
BibRef
Antoniadis, P.[Panagiotis],
Pikoulis, I.[Ioannis],
Filntisis, P.P.[Panagiotis P.],
Maragos, P.[Petros],
An audiovisual and contextual approach for categorical and continuous
emotion recognition in-the-wild,
ABAW21(3638-3644)
IEEE DOI
2112
Emotion recognition, Image resolution, Lighting,
Streaming media, Feature extraction, Task analysis
BibRef
Ji, X.[Xinya],
Zhou, H.[Hang],
Wang, K.[Kaisiyuan],
Wu, W.[Wayne],
Loy, C.C.[Chen Change],
Cao, X.[Xun],
Xu, F.[Feng],
Audio-Driven Emotional Video Portraits,
CVPR21(14075-14084)
IEEE DOI
2111
Correlation, Shape, Heuristic algorithms, Mouth,
Faces
BibRef
Ghaleb, E.,
Niehues, J.,
Asteriadis, S.,
Multimodal Attention-Mechanism For Temporal Emotion Recognition,
ICIP20(251-255)
IEEE DOI
2011
Emotion recognition, Visualization, Training,
Human computer interaction, Faces, Fuses, attention,
audiovisual emotion recognition
BibRef
Aydin, B.,
Kindiroglu, A.A.,
Aran, O.,
Akarun, L.,
Automatic personality prediction from audiovisual data using random
forest regression,
ICPR16(37-42)
IEEE DOI
1705
Correlation, Feature extraction, Social network services, Speech,
Standards, Time-frequency analysis, Visualization
BibRef
Noroozi, F.,
Marjanovic, M.,
Njegus, A.,
Escalera, S.,
Anbarjafari, G.,
Fusion of classifier predictions for audio-visual emotion recognition,
ICPR16(61-66)
IEEE DOI
1705
Databases, Emotion recognition, Eyebrows, Face, Feature extraction,
Mouth, Visualization
BibRef
Araujo, R.[Rodrigo],
Kamel, M.S.[Mohamed S.],
Audio-Visual Emotion Analysis Using Semi-Supervised Temporal Clustering
with Constraint Propagation,
ICIAR14(II: 3-11).
Springer DOI
1410
BibRef
Lu, K.[Kun],
Jia, Y.D.[Yun-De],
Audio-visual emotion recognition with boosted coupled HMM,
ICPR12(1148-1151).
WWW Link.
1302
BibRef
And:
Audio-visual emotion recognition using Boltzmann Zippers,
ICIP12(2589-2592).
IEEE DOI
1302
BibRef
Pitas, I.,
Kotsia, I.,
Martin, O.,
Macq, B.,
The eNTERFACE-05 Audio-Visual Emotion Database,
ICDEW06(8).
IEEE DOI
0600
BibRef
Xiao, Z.Z.[Zhong-Zhe],
Dellandrea, E.,
Dou, W.B.[Wei-Bei],
Chen, L.M.[Li-Ming],
Features extraction and selection for emotional speech classification,
AVSBS05(411-416).
IEEE DOI
0602
BibRef
Chen, L.S.[Lawrence S.],
Huang, T.S.[Thomas S.],
Emotional Expressions In Audiovisual Human Computer Interaction,
ICME00(MP7).
0007
BibRef
Chapter on Face Recognition, Human Pose, Detection, Tracking, Gesture Recognition, Fingerprints, Biometrics continues in
Multi-Modal Emotion, Multimodal Emotion Recognition.