ABAW21
* *Affective Behavior Analysis In-the-Wild
* Analysing Affective Behavior in the second ABAW2 Competition
* audiovisual and contextual approach for categorical and continuous emotion recognition in-the-wild, An
* Causal affect prediction model using a past facial image sequence
* Continuous Emotion Recognition with Audio-visual Leader-follower Attentive Fusion
* Emotion Recognition Based on Body and Context Fusion in the Wild
* Emotion Recognition With Sequential Multi-task Learning Technique
* Evaluating the Performance of Ensemble Methods and Voting Strategies for Dense 2D Pedestrian Detection in the Wild
* FSER: Deep Convolutional Neural Networks for Speech Emotion Recognition
* Iterative Distillation for Better Uncertainty Estimates in Multitask Emotion Recognition
* MTMSN: Multi-Task and Multi-Modal Sequence Network for Facial Action Unit and Expression Recognition
* Multi-task Mean Teacher for Semi-supervised Facial Affective Behavior Analysis, A
* Multitask Multi-database Emotion Recognition
* Noisy Annotations Robust Consensual Collaborative Affect Expression Recognition
* Prior Aided Streaming Network for Multi-task Affective Analysis
* Public Life in Public Space (PLPS): A multi-task, multi-group video dataset for public life research
* Student Engagement Dataset
17 for ABAW21
ABAW22
* *Affective Behavior Analysis In-the-Wild
* ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection & Multi-Task Learning Challenges
* Accurate 3D Hand Pose Estimation for Whole-Body 3D Human Mesh Estimation
* Action unit detection by exploiting spatial-temporal and label-wise attention with transformer
* Attention-based Method for Multi-label Facial Action Unit Detection, An
* Best of Both Worlds: Combining Model-based and Nonparametric Approaches for 3D Human Body Estimation, The
* Bridging the Gap Between Automated and Human Facial Emotion Perception
* Classification of Facial Expression In-the-Wild based on Ensemble of Multi-head Cross Attention Networks
* Coarse-to-Fine Cascaded Networks with Smooth Predicting for Video Facial Expression Recognition
* Continuous Emotion Recognition using Visual-audio-linguistic Information: A Technical Report for ABAW3
* Cross Transferring Activity Recognition to Word Level Sign Language Detection
* Ensemble Approach for Facial Behavior Analysis in-the-wild Video, An
* Estimating Multiple Emotion Descriptors by Separating Description and Inference
* Facial Expression Classification using Fusion of Deep Neural Network in Video
* Joint Cross-Attention Model for Audio-Visual Fusion in Dimensional Emotion Recognition, A
* Long-term Action Forecasting Using Multi-headed Attention-based Variational Recurrent Neural Networks
* MixAugment & Mixup: Augmentation Methods for Facial Expression Recognition
* Model Level Ensemble for Facial Action Unit Recognition at the 3rd ABAW Challenge
* Multi-task Learning for Human Affect Prediction with Auditory-Visual Synchronized Representation
* NeuralAnnot: Neural Annotator for 3D Human Mesh Training Sets
* Three Stream Graph Attention Network using Dynamic Patch Selection for the classification of micro-expressions
* TikTok for good: Creating a diverse emotion expression database
* Time-Continuous Audiovisual Fusion with Recurrence vs Attention for In-The-Wild Affect Recognition
* Transformer-based Multimodal Information Fusion for Facial Expression Analysis
* Valence and Arousal Estimation based on Multimodal Temporal-Aware Features for Videos in the Wild
* Video-based Frame-level Facial Analysis of Affective Behavior on Mobile Devices using EfficientNets
* Video-based multimodal spontaneous emotion recognition using facial expressions and physiological signals
27 for ABAW22
ABAW23
* *Affective Behavior Analysis In-the-Wild
* ABAW5 Challenge: A Facial Affect Recognition Approach Utilizing Transformer Encoder and Audiovisual Fusion
* ABAW: Valence-Arousal Estimation, Expression Recognition, Action Unit Detection and Emotional Reaction Intensity Estimation Challenges
* Analysis of Emotion Annotation Strength Improves Generalization in Speech Emotion Recognition Models
* Compound Expression Recognition In-the-wild with AU-assisted Meta Multi-task Learning
* Dual Branch Network for Emotional Reaction Intensity Estimation, A
* Dynamic Noise Injection for Facial Expression Recognition In-the-Wild
* EmotiEffNets for Facial Processing in Video-based Valence-Arousal Prediction, Expression Classification and Action Unit Detection
* Ensemble Spatial and Temporal Vision Transformer for Action Units Detection
* EVAEF: Ensemble Valence-Arousal Estimation Framework in the Wild
* Exploring Expression-related Self-supervised Learning and Spatial Reserve Pooling for Affective Behaviour Analysis
* Exploring Large-scale Unlabeled Faces to Enhance Facial Expression Recognition
* Facial Expression Recognition Based on Multi-modal Features for Videos in the Wild
* Frame Level Emotion Guided Dynamic Facial Expression Recognition with Emotion Grouping
* Inferring Affective Experience from the Big Picture Metaphor: A Two-dimensional Visual Breadth Model
* Integrating Holistic and Local Information to Estimate Emotional Reaction Intensity
* Large-Scale Facial Expression Recognition Using Dual-Domain Affect Fusion for Noisy Labels
* Leveraging TCN and Transformer for effective visual-audio fusion in continuous emotion recognition
* Local Region Perception and Relationship Learning Combined with Feature Fusion for Facial Action Unit Detection
* Multi-modal Emotion Reaction Intensity Estimation with Temporal Augmentation
* Multi-modal Facial Affective Analysis based on Masked Autoencoder
* Multi-modal Information Fusion for Action Unit Detection in the Wild
* Multimodal Continuous Emotion Recognition: A Technical Report for ABAW5
* Multimodal Feature Extraction and Fusion for Emotional Reaction Intensity Estimation and Expression Classification in Videos with Transformers
* Relational Edge-Node Graph Attention Network for Classification of Micro-Expressions
* Spatial-Temporal Graph-Based AU Relationship Learning for Facial Action Unit Detection
* SPECTRE: Visual Speech-Informed Perceptual 3D Facial Expression Reconstruction from Videos
* t-RAIN: Robust generalization under weather-aliasing label shift attacks
* TempT: Temporal consistency for Test-time adaptation
* Unified Approach to Facial Affect Analysis: the MAE-Face Visual Representation, A
* Unmasking Your Expression: Expression-Conditioned GAN for Masked Face Inpainting
31 for ABAW23
ABAW24
* *Affective Behavior Analysis In-the-Wild
* 3D Human Pose Estimation with Occlusions: Introducing BlendMimic3D Dataset and GCN Refinement
* 6th Affective Behavior Analysis in-the-wild (ABAW) Competition, The
* Advanced Facial Analysis in Multi-Modal Data with Cascaded Cross-Attention based Transformer
* AUD-TGN: Advancing Action Unit Detection with Temporal Convolution and GPT-2 in Wild Audiovisual Contexts
* CAGE: Circumplex Affect Guided Expression Inference
* CMOSE: Comprehensive Multi-Modality Online Student Engagement Dataset with High-Quality Labels
* CUE-Net: Violence Detection Video Analytics with Spatial Cropping, Enhanced UniformerV2 and Modified Efficient Additive Attention
* Drone-HAT: Hybrid Attention Transformer for Complex Action Recognition in Drone Surveillance Videos
* Effective Ensemble Learning Framework for Affective Behaviour Analysis, An
* Efficient Feature Extraction and Late Fusion Strategy for Audiovisual Emotional Mimicry Intensity Estimation
* Emotic Masked Autoencoder on Dual-views with Attention Fusion for Facial Expression Recognition
* EmotiEffNet and Temporal Convolutional Networks in Video-based Facial Expression Recognition and Action Unit Detection
* Emotion Recognition Using Transformers with Random Masking
* Enhancing Emotion Recognition with Pre-trained Masked Autoencoders and Sequential Learning
* Evaluating the Effectiveness of Video Anomaly Detection in the Wild Online Learning and Inference for Real-world Deployment
* Exploring Facial Expression Recognition through Semi-Supervised Pre-training and Temporal Modeling
* Improving Valence-Arousal Estimation with Spatiotemporal Relationship Learning and Multimodal Fusion
* Joint Multimodal Transformer for Emotion Recognition in the Wild
* Language-guided Multi-modal Emotional Mimicry Intensity Estimation
* Learning Transferable Compound Expressions from Masked AutoEncoder Pretraining
* Leveraging Pre-trained Multi-task Deep Models for Trustworthy Facial Analysis in Affective Behaviour Analysis in-the-Wild
* MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild
* Multi Model Ensemble for Compound Expression Recognition
* Multi-modal Arousal and Valence Estimation under Noisy Conditions
* Multi-Task Multi-Modal Self-Supervised Learning for Facial Expression Recognition
* Purposeful Regularization with Reinforcement Learning for Facial Expression Recognition In-the-Wild
* Recursive Joint Cross-Modal Attention for Multimodal Fusion in Dimensional Emotion Recognition
* REFA: Real-time Egocentric Facial Animations for Virtual Reality
* TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals
* Uncovering Hidden Emotions with Adaptive Multi-Attention Graph Networks
* Unimodal Multi-Task Fusion for Emotional Mimicry Intensity Prediction
* Unravelling Robustness of Deep Face Recognition Networks Against Illicit Drug Abuse Images
* Unsupervised Multi-Person 3D Human Pose Estimation From 2D Poses Alone
* Video Representation Learning for Conversational Facial Expression Recognition Guided by Multiple View Reconstruction
* Zero-Shot Audio-Visual Compound Expression Recognition Method based on Emotion Probability Fusion
36 for ABAW24
ABAWE22
* *Affective Behavior Analysis In-the-Wild
* ABAW: Learning from Synthetic Data & Multi-task Learning Challenges
* Affective Behavior Analysis Using Action Unit Relation Graph and Multi-task Cross Attention
* Affective Behaviour Analysis Using Pretrained Model with Facial Prior
* BYEL: Bootstrap Your Emotion Latent
* Deep Semantic Manipulation of Facial Videos
* Ensemble of Multi-task Learning Networks for Facial Expression Recognition In-the-wild with Learning from Synthetic Data
* Facial Affect Recognition Using Semi-supervised Learning with Adaptive Threshold
* Facial Expression Recognition In-the-wild with Deep Pre-trained Models
* Facial Expression Recognition with Mid-level Representation Enhancement and Graph Embedded Uncertainty Suppressing
* Geometric Pose Affordance: Monocular 3d Human Pose Estimation with Scene Constraints
* MT-EmotiEffNet for Multi-task Human Affective Behavior Analysis and Learning from Synthetic Data
* Multi-task Learning Framework for Emotion Recognition In-the-wild
* PERI: Part Aware Emotion Recognition in the Wild
* Two-aspect Information Interaction Model for ABAW4 Multi-task Challenge
15 for ABAWE22