Index for cun_

Cun, X. Co Author Listing * Improving the Harmony of the Composite Image by Spatial-Separated Attention Module

Cun, X.D.[Xiao Dong] Co Author Listing * 3D GAN Inversion with Facial Symmetry Prior
* Applying stochastic second-order entropy images to multi-modal image registration
* CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior
* Defocus Blur Detection via Depth Distillation
* DEIM: DETR with Improved Matching for Fast Convergence
* Depth-Aware Test-Time Training for Zero-Shot Video Object Segmentation
* DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos
* DH-GAN: Image manipulation localization via a dual homology-aware generative adversarial network
* DiTCtrl: Exploring Attention Control in Multi-Modal Diffusion Transformer for Tuning-Free Multi-Prompt Longer Video Generation
* DPE: Disentanglement of Pose and Expression for General Video Portrait Editing
* EvalCrafter: Benchmarking and Evaluating Large Video Generation Models
* Explicit Visual Prompting for Low-Level Structure Segmentations
* FateZero: Fusing Attentions for Zero-shot Text-based Video Editing
* Generating Human Motion from Textual Descriptions with Discrete Representations
* High-Resolution Document Shadow Removal via A Large-Scale Real-World Dataset and A Frequency-Aware Shadow Erasing Net
* Image Splicing Localization via Semi-global Network and Fully Connected Conditional Random Fields
* LivelySpeaker: Towards Semantic-Aware Co-Speech Gesture Generation
* MagicStick: Controllable Video Editing via Control Handle Transformations
* Make a Cheap Scaling: A Self-Cascade Diffusion Model for Higher-Resolution Adaptation
* Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework
* MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model
* Noise Calibration: Plug-and-play Content-preserving Video Enhancement Using Pre-trained Video Diffusion Models
* SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation
* SmartEdit: Exploring Complex Instruction-Based Image Editing with Multimodal Large Language Models
* Spatial-Separated Curve Rendering Network for Efficient and High-Resolution Image Harmonization
* StyleHEAT: One-Shot High-Resolution Editable Talking Face Generation via Pre-trained StyleGAN
* ToonTalker: Cross-Domain Face Reenactment
* Uformer: A General U-Shaped Transformer for Image Restoration
* VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models
* X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model
Includes: Cun, X.D.[Xiao Dong] Cun, X.D.[Xiao-Dong]
30 for Cun, X.D.

Index for "c"


Last update: 8-Jan-26 13:30:24
Use price@usc.edu for comments.