Journals starting with ecv2

ECV21 * *Efficient Deep Learning for Computer Vision
* Alps: Adaptive Quantization of Deep Neural Networks with GeneraLized PositS
* BasisNet: Two-stage Model Synthesis for Efficient Inference
* CompConv: A Compact Convolution Module for Efficient Feature Learning
* Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation
* Discovering Multi-Hardware Mobile Models via Architecture Search
* Dynamic-OFA: Runtime DNN Architecture Switching for Performance Scaling on Heterogeneous Embedded Platforms
* Efficient Two-stream Action Recognition on FPGA
* Extracurricular Learning: Knowledge Transfer Beyond Empirical Distribution
* Generative Zero-shot Network Quantization
* In-Hindsight Quantization Range Estimation for Quantized Training
* Is In-Domain Data Really Needed? A Pilot Study on Cross-Domain Calibration for Network Quantization
* Network Space Search for Pareto-Efficient Spaces
* Pareto-Optimal Quantized ResNet Is Mostly 4-bit
* Rethinking the Self-Attention in Vision Transformers
* Width transfer: on the (in)variance of width optimization
16 for ECV21

ECV22 * *Efficient Deep Learning for Computer Vision
* Active Object Detection with Epistemic Uncertainty and Hierarchical Information Aggregation
* ANT: Adapt Network Across Time for Efficient Video Processing
* Area Under the ROC Curve Maximization for Metric Learning
* Conjugate Adder Net (CAddNet) - a Space-Efficient Approximate CNN
* Cyclical Pruning for Sparse Neural Networks
* DA3: Dynamic Additive Attention Adaption for Memory-Efficient On-Device Multi-Domain Learning
* Discriminability-enforcing loss to improve representation learning
* Disentangled Loss for Low-Bit Quantization-Aware Training
* Event Transformer. A sparse-aware solution for efficient event data processing
* Hybrid Consistency Training with Prototype Adaptation for Few-Shot Learning
* Integrating Pose and Mask Predictions for Multi-person in Videos
* Linear Combination Approximation of Feature for Channel Pruning
* Low Memory Footprint Quantized Neural Network for Depth Completion of Very Sparse Time-of-Flight Depth Maps, A
* MAPLE: Microprocessor A Priori for Latency Estimation
* Momentum Contrastive Pruning
* Once-for-All Budgeted Pruning Framework for ConvNets Considering Input Resolution, An
* PEA: Improving the Performance of ReLU Networks for Free by Using Progressive Ensemble Activations
* ResNeSt: Split-Attention Networks
* Searching for Efficient Neural Architectures for On-Device ML on Edge TPUs
* Semi-Supervised Few-Shot Learning from A Dependency-Discriminant Perspective
* Simple and Efficient Architectures for Semantic Segmentation
* Simulated Quantization, Real Power Savings
* SqueezeNeRF: Further factorized FastNeRF for memory-efficient inference
* TinyOps: ImageNet Scale Deep Learning on Microcontrollers
* TorMentor: Deterministic dynamic-path, data augmentations with fractals
* Towards efficient feature sharing in MIMO architectures
* When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search
* YOLO-Pose: Enhancing YOLO for Multi Person Pose Estimation Using Object Keypoint Similarity Loss
29 for ECV22

ECV23 * *Efficient Deep Learning for Computer Vision
* Accelerable Lottery Tickets with the Mixed-Precision Quantization
* AdaMTL: Adaptive Input-dependent Inference for Efficient Multi-Task Learning
* BinaryViT: Pushing Binary Vision Transformers Towards Convolutional Models
* BlazeStyleGAN: A Real-Time On-Device StyleGAN
* CFDP: Common Frequency Domain Pruning
* Content-Adaptive Downsampling in Convolutional Neural Networks
* Data-Free Model Pruning at Initialization via Expanders
* Dataset Efficient Training with Model Ensembling
* DeCAtt: Efficient Vision Transformers with Decorrelated Attention Heads
* DeepGEMM: Accelerated Ultra Low-Precision Inference on CPU Architectures using Lookup Tables
* Dynamic Inference Acceleration of 3D Point Cloud Deep Neural Networks Using Point Density and Entropy
* DynaShare: Task and Instance Conditioned Parameter Sharing for Multi-Task Learning
* Envisioning a Next Generation Extended Reality Conferencing System with Efficient Photorealistic Human Rendering
* ETAD: Training Action Detection End to End on a Laptop
* Localized Latent Updates for Fine-Tuning Vision-Language Models
* Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference
* MARRS: Modern Backbones Assisted Co-training for Rapid and Robust Semi-Supervised Domain Adaptation
* MIMMO: Multi-Input Massive Multi-Output Neural Network
* Phase-field Models for Lightweight Graph Convolutional Networks
* Pre-training Auto-generated Volumetric Shapes for 3D Medical Image Segmentation
* Quantized Proximal Averaging Networks for Compressed Image Recovery
* Recursions Are All You Need: Towards Efficient Deep Unfolding Networks
* Rethinking Dilated Convolution for Real-time Semantic Segmentation
* Revisiting Class Imbalance for End-to-end Semi-Supervised Object Detection
* Similar Class Style Augmentation for Efficient Cross-Domain Few-Shot Learning
* Speed Is All You Need: On-Device Acceleration of Large Diffusion Models via GPU-Aware Optimizations
* STAR: Sparse Thresholded Activation under partial-Regularization for Activation Sparsity Exploration
* Token Merging for Fast Stable Diffusion
* Vision Transformers with Mixed-Resolution Tokenization
30 for ECV23

Index for "e"


Last update: 25-Mar-24 16:25:22
Use price@usc.edu for comments.