LLID21
* Learning From Limited or Imperfect Data
* BalaGAN: Cross-Modal Image Translation Between Imbalanced Domains
* Boosting Co-teaching with Compression Regularization for Label Noise
* Boosting Unconstrained Face Recognition with Auxiliary Unlabeled Data
* Closer Look at Self-training for Zero-Label Semantic Segmentation, A
* Cluster-driven Graph Federated Learning over Multiple Domains
* Contrastive Learning Improves Model Robustness Under Label Noise
* DAMSL: Domain Agnostic Meta Score-based Learning
* Distill on the Go: Online knowledge distillation in self-supervised learning
* Efficacy of Bayesian Neural Networks in Active Learning
* Efficient Pre-trained Features and Recurrent Pseudo-Labeling in Unsupervised Domain Adaptation
* Improving Semi-Supervised Domain Adaptation Using Effective Target Selection and Semantics
* Learning from Incomplete Features by Simultaneous Training of Neural Networks and Sparse Coding
* Learning Unbiased Representations via Mutual Information Backpropagation
* One-shot action recognition in challenging therapy scenarios
* One-Shot GAN: Learning to Generate Samples from Single Images and Videos
* PLM: Partial Label Masking for Imbalanced Multi-label Classification
* ReMP: Rectified Metric Propagation for Few-Shot Learning
* Rethinking Ensemble-Distillation for Semantic Segmentation Based Unsupervised Domain Adaption
* Shot in the Dark: Few-Shot Learning with No Base-Class Labels
* TAEN: Temporal Aware Embedding Network for Few-Shot Action Recognition
* Training Deep Generative Models in Highly Incomplete Data Scenarios with Prior Regularization
* Training Rare Object Detection in Satellite Imagery with Synthetic GAN Images
* Unlocking the Full Potential of Small Data with Diverse Supervision
* Weak Multi-View Supervision for Surface Mapping Estimation
25 for LLID21