Wang, J.[Jie],
Luo, C.[Chang],
Huang, H.Q.[Han-Qiao],
Zhao, H.Z.[Hui-Zhen],
Wang, S.Q.[Shi-Qiang],
Transferring Pre-Trained Deep CNNs for Remote Scene Classification
with General Features Learned from Linear PCA Network,
RS(9), No. 3, 2017, pp. xx-yy.
DOI Link
1704
BibRef
Wen, Y.[Yang],
Chen, L.T.[Lei-Ting],
Deng, Y.[Yu],
Zhou, C.[Chuan],
Rethinking pre-training on medical imaging,
JVCIR(78), 2021, pp. 103145.
Elsevier DOI
2107
Transfer learning, Medical image analysis,
Convolutional neural network, Survival prediction
BibRef
Zhang, T.[Tong],
Gao, P.[Peng],
Dong, H.[Hao],
Zhuang, Y.[Yin],
Wang, G.Q.[Guan-Qun],
Zhang, W.[Wei],
Chen, H.[He],
Consecutive Pre-Training: A Knowledge Transfer Learning Strategy with
Relevant Unlabeled Data for Remote Sensing Domain,
RS(14), No. 22, 2022, pp. xx-yy.
DOI Link
2212
BibRef
Kataoka, H.[Hirokatsu],
Okayasu, K.[Kazushige],
Matsumoto, A.[Asato],
Yamagata, E.[Eisuke],
Yamada, R.[Ryosuke],
Inoue, N.[Nakamasa],
Nakamura, A.[Akio],
Satoh, Y.[Yutaka],
Pre-Training Without Natural Images,
IJCV(130), No. 4, April 2022, pp. 990-1007.
Springer DOI
2204
BibRef
Earlier:
ACCV20(VI:583-600).
Springer DOI
2103
BibRef
Xu, C.[Cong],
Li, D.[Dan],
Yang, M.[Min],
Adversarial momentum-contrastive pre-training,
PRL(160), 2022, pp. 172-179.
Elsevier DOI
2208
Real samples and adversarial samples for training.
Adversarial robustness, Contrastive learning, Memory bank, Fine-tuning
BibRef
Zhou, H.Y.[Hong-Yu],
Lu, C.X.[Chi-Xiang],
Chen, C.Q.[Chao-Qi],
Yang, S.[Sibei],
Yu, Y.Z.[Yi-Zhou],
A Unified Visual Information Preservation Framework for
Self-supervised Pre-Training in Medical Image Analysis,
PAMI(45), No. 7, July 2023, pp. 8020-8035.
IEEE DOI
2306
Semantics, Image restoration, Task analysis, Visualization,
Medical diagnostic imaging, Image segmentation,
transfer learning
BibRef
Chen, Z.H.[Zi-Han],
Zhu, H.Y.[Hong-Yuan],
Cheng, H.[Hao],
Mi, S.[Siya],
Zhang, Y.[Yu],
Geng, X.[Xin],
LPCL: Localized prominence contrastive learning for self-supervised
dense visual pre-training,
PR(135), 2023, pp. 109185.
Elsevier DOI
2212
Self-supervised learning, Contrastive learning, Dense representation
BibRef
Zhang, Y.[Yu],
Zhang, T.[Tao],
Zhu, H.Y.[Hong-Yuan],
Chen, Z.H.[Zi-Han],
Mi, S.[Siya],
Peng, X.[Xi],
Geng, X.[Xin],
Object Adaptive Self-Supervised Dense Visual Pre-Training,
IP(34), 2025, pp. 2228-2240.
IEEE DOI
2504
Contrastive learning, Object detection, Visualization, Training,
Image classification, Feature extraction, Semantic segmentation,
multi-scale representation
BibRef
Lv, P.[Pei],
Ren, J.Y.[Jun-Ying],
Han, G.[Genwang],
Lu, J.W.[Ji-Wen],
Xu, M.L.[Ming-Liang],
Local Cross-Patch Activation From Multi-Direction for Weakly
Supervised Object Localization,
IP(34), 2025, pp. 2213-2227.
IEEE DOI Code:
WWW Link.
2504
Transformers, Location awareness, Contrastive learning, Semantics,
Training, Object detection, Artificial intelligence, Accuracy,
contrastive learning
BibRef
Wei, L.H.[Long-Hui],
Xie, L.X.[Ling-Xi],
Zhou, W.G.[Wen-Gang],
Li, H.Q.[Hou-Qiang],
Tian, Q.[Qi],
Exploring the diversity and invariance in yourself for visual
pre-training task,
PR(139), 2023, pp. 109437.
Elsevier DOI
2304
Visual pre-training, Self-supervised learning, Multi-grained visual information
BibRef
Peng, J.[Junran],
Chang, Q.[Qing],
Yin, H.R.[Hao-Ran],
Bu, X.Y.[Xing-Yuan],
Sun, J.J.[Jia-Jun],
Xie, L.X.[Ling-Xi],
Zhang, X.P.[Xiao-Peng],
Tian, Q.[Qi],
Zhang, Z.X.[Zhao-Xiang],
GAIA-Universe: Everything is Super-Netify,
PAMI(45), No. 10, October 2023, pp. 11856-11868.
IEEE DOI
2310
WWW Link.
BibRef
Dong, X.N.[Xing-Ning],
Guo, Q.P.[Qing-Pei],
Gan, T.[Tian],
Wang, Q.[Qing],
Wu, J.L.[Jian-Long],
Ren, X.Y.[Xiang-Yuan],
Cheng, Y.[Yuan],
Chu, W.[Wei],
SNP-S3: Shared Network Pre-Training and Significant Semantic
Strengthening for Various Video-Text Tasks,
CirSysVideo(34), No. 4, April 2024, pp. 2525-2535.
IEEE DOI Code:
WWW Link.
2404
Task analysis, Visualization, Feature extraction, Semantics,
Training, Transformers, Video-text pre-training, video-text matching
BibRef
Zhao, T.C.[Tian-Cheng],
Liu, P.[Peng],
Lee, K.[Kyusong],
OmDet: Large-scale vision-language multi-dataset pre-training with
multimodal detection network,
IET-CV(18), No. 5, 2024, pp. 626-639.
DOI Link
2408
object detection, object recognition
BibRef
Tang, Y.[Yuan],
Li, X.Z.[Xian-Zhi],
Xu, J.F.[Jin-Feng],
Yu, Q.[Qiao],
Hu, L.[Long],
Hao, Y.X.[Yi-Xue],
Chen, M.[Min],
Point-LGMask: Local and Global Contexts Embedding for Point Cloud
Pre-Training With Multi-Ratio Masking,
MultMed(26), 2024, pp. 8360-8370.
IEEE DOI
2408
Point cloud compression, Task analysis, Predictive models,
Self-supervised learning, Representation learning,
representation learning
BibRef
Yu, B.X.B.[Bruce X.B.],
Chang, J.L.[Jian-Long],
Wang, H.X.[Hai-Xin],
Liu, L.B.[Ling-Bo],
Wang, S.J.[Shi-Jie],
Wang, Z.Y.[Zhi-Yu],
Lin, J.F.[Jun-Fan],
Xie, L.X.[Ling-Xi],
Li, H.J.[Hao-Jie],
Lin, Z.C.[Zhou-Chen],
Tian, Q.[Qi],
Chen, C.W.[Chang Wen],
Visual Tuning,
Surveys(56), No. 12, July 2024, pp. xx-yy.
DOI Link
2410
Foundation model, fine-tuning, parameter-efficient, pre-training
BibRef
Huang, Y.[Yipo],
Li, L.[Leida],
Chen, P.F.[Peng-Fei],
Wu, H.N.[Hao-Ning],
Lin, W.S.[Wei-Si],
Shi, G.M.[Guang-Ming],
Multi-Modality Multi-Attribute Contrastive Pre-Training for Image
Aesthetics Computing,
PAMI(47), No. 2, February 2025, pp. 1205-1218.
IEEE DOI
2501
Computational modeling, Databases, Image color analysis, Lighting,
Contrastive learning, Visualization, Semantics, Reviews,
aesthetic representation
BibRef
Baraldi, L.[Lorenzo],
Amoroso, R.[Roberto],
Cornia, M.[Marcella],
Baraldi, L.[Lorenzo],
Pilzer, A.[Andrea],
Cucchiara, R.[Rita],
Learning to mask and permute visual tokens for Vision Transformer
pre-training,
CVIU(252), 2025, pp. 104294.
Elsevier DOI Code:
WWW Link.
2502
BibRef
Huseljic, D.[Denis],
Herde, M.[Marek],
Hahn, P.[Paul],
Müjde, M.[Mehmet],
Sick, B.[Bernhard],
Systematic Evaluation of Uncertainty Calibration in Pretrained Object
Detectors,
IJCV(133), No. 3, March 2025, pp. 1033-1047.
Springer DOI
2502
BibRef
Tian, Y.J.[Yun-Jie],
Xie, L.X.[Ling-Xi],
Fang, J.[Jiemin],
Jiao, J.B.[Jian-Bin],
Tian, Q.[Qi],
Beyond masking: Demystifying token-based pre-training for vision
transformers,
PR(162), 2025, pp. 111386.
Elsevier DOI
2503
Self-supervised learning, Vision transformers,
Token-based pre-training, Masked image modeling
BibRef
Huang, L.[Lan],
Zeng, J.[Jia],
Yu, M.Q.[Meng-Qiang],
Ding, W.P.[Wei-Ping],
Bai, X.Y.[Xing-Yu],
Wang, K.[Kangping],
Efficient feature selection for pre-trained vision transformers,
CVIU(254), 2025, pp. 104326.
Elsevier DOI Code:
WWW Link.
2503
Feature selection, Vision transformer, Model pruning
BibRef
Xiu, H.Y.[Hao-Yi],
Liu, X.[Xin],
Kim, T.[Taehoon],
Kim, K.S.[Kyoung-Sook],
Advancing ALS Applications with Large-Scale Pre-Training: Framework,
Dataset, and Downstream Assessment,
RS(17), No. 11, 2025, pp. 1859.
DOI Link
2506
BibRef
Zhu, H.Y.[Hao-Yi],
Yang, H.H.[Hong-Hui],
Wu, X.Y.[Xiao-Yang],
Huang, D.[Di],
Zhang, S.[Sha],
He, X.L.[Xiang-Long],
Zhao, H.S.[Heng-Shuang],
Shen, C.H.[Chun-Hua],
Qiao, Y.[Yu],
He, T.[Tong],
Ouyang, W.L.[Wan-Li],
PonderV2: Improved 3D Representation With a Universal Pre-Training
Paradigm,
PAMI(47), No. 8, August 2025, pp. 6550-6565.
IEEE DOI
2507
Rendering (computer graphics), Point cloud compression,
Image reconstruction, Training, Benchmark testing, Solid modeling,
multi-view image
BibRef
Marks, M.[Markus],
Knott, M.[Manuel],
Kondapaneni, N.[Neehar],
Cole, E.[Elijah],
Defraeye, T.[Thijs],
Perez-Cruz, F.[Fernando],
Perona, P.[Pietro],
A Closer Look at Benchmarking Self-supervised Pre-training with Image
Classification,
IJCV(133), No. 8, August 2025, pp. 5013-5025.
Springer DOI
2508
BibRef
Yuan, Z.[Zheng],
Zhang, J.[Jie],
Shan, S.G.[Shi-Guang],
Chen, X.L.[Xi-Lin],
FullLoRA: Efficiently Boosting the Robustness of Pretrained Vision
Transformers,
IP(34), 2025, pp. 4580-4590.
IEEE DOI
2508
Training, Computational modeling, Robustness, Adaptation models,
Transformers, Visualization, Natural language processing,
pretrained model
BibRef
Chen, Y.W.[Ya-Wen],
Wen, Z.Y.[Ze-Yi],
Chen, J.[Jian],
Huang, J.[Jin],
Leveraging pre-trained models for kernel machines,
PR(170), 2026, pp. 111961.
Elsevier DOI
2509
Kernel machines, Pre-training, Model inferring
BibRef
Li, Z.Y.[Zi-Yu],
Zhu, Z.Y.[Zhi-Yuan],
Li, Q.[Qing],
Wu, X.[Xia],
Graph pre-trained framework with spatio-temporal importance masking
and fine-grained optimizing for neural decoding,
PR(170), 2026, pp. 112006.
Elsevier DOI
2509
Temporal-aware, Graph self-supervised learning,
Spatio-temporal, Neural decoding
BibRef
Qiu, Q.[Qibo],
Yang, H.H.[Hong-Hui],
Jiang, J.[Jian],
Zhang, S.[Shun],
Ying, H.[Haochao],
Gao, H.M.[Hai-Ming],
Wang, W.X.[Wen-Xiao],
He, X.F.[Xiao-Fei],
M3CS: Multi-Target Masked Point Modeling With Learnable Codebook and
Siamese Decoders,
CirSysVideo(35), No. 9, September 2025, pp. 8807-8818.
IEEE DOI
2509
Self-supervised pre-training for point clouds.
Decoding, Image reconstruction, Point cloud compression, Solid modeling,
Semantics, Overfitting, Transformers, Training, self-supervised learning
BibRef
Zhuang, J.X.[Jia-Xin],
Wu, L.S.[Lin-Shan],
Wang, Q.[Qiong],
Fei, P.[Peng],
Vardhanabhuti, V.[Varut],
Luo, L.[Lin],
Chen, H.[Hao],
MiM: Mask in Mask Self-Supervised Pre-Training for 3D Medical Image
Analysis,
MedImg(44), No. 9, September 2025, pp. 3727-3740.
IEEE DOI Code:
WWW Link.
2510
Biomedical imaging, Image reconstruction, Image analysis,
Technological innovation, Transformers, Solid modeling,
3D medical images
BibRef
Lu, H.[Han],
Xie, Y.C.[Yi-Chen],
Ding, M.Y.[Ming-Yu],
Zhan, W.[Wei],
Yang, X.K.[Xiao-Kang],
Tomizuka, M.[Masayoshi],
Yan, J.C.[Jun-Chi],
Sel4FT: Annotation Selection for Pretraining-Finetuning With
Distribution Shift,
PAMI(47), No. 11, November 2025, pp. 9922-9937.
IEEE DOI
2510
Training, Annotations, Data augmentation, Active learning,
Optimization, Data models, Training data, Artificial intelligence,
continuous space optimization
BibRef
Wi, J.M.[Jung-Myung],
Jang, Y.K.[Young-Kyun],
Lee, D.[Dujin],
Nam, M.[Myeongseok],
Kim, D.H.[Dong-Hyun],
Delving into Pre-training for Domain Transfer: A Broad Study of
Pre-training for Domain Generalization and Domain Adaptation,
IJCV(134), No. 2, February 2026, pp. 50.
Springer DOI
2601
BibRef
Kim, D.H.[Dong-Hyun],
Wang, K.[Kaihong],
Sclaroff, S.[Stan],
Saenko, K.[Kate],
A Broad Study of Pre-training for Domain Generalization and Adaptation,
ECCV22(XXXIII:621-638).
Springer DOI
2211
BibRef
Kalapos, A.[András],
Gyires-Tóth, B.[Bálint],
Exploring joint embedding predictive architectures for pretraining
convolutional neural networks,
CVIU(263), 2026, pp. 104595.
Elsevier DOI
2601
Self-supervised learning, Computer vision,
Convolutional neural networks, Semantic segmentation, Data efficiency
BibRef
Wu, Y.[Yue],
Wang, Y.H.[Yun-Hong],
Wang, G.D.[Guo-Dong],
Zhang, J.J.[Jin-Jin],
Gao, Y.J.[Ying-Jie],
Bao, X.[Xiuguo],
Huang, D.[Di],
Label-informed knowledge integration:
Advancing visual prompt for VLMs adaptation,
CVIU(263), 2026, pp. 104614.
Elsevier DOI
2601
Vision-language models, Prompt tuning, Zero-shot learning, Few-shot learning
BibRef
Zhou, Z.D.[Zheng-Dong],
Dong, S.L.[Song-Lin],
Ding, C.H.[Chen-Hao],
Gao, X.Y.[Xin-Yuan],
He, Y.H.[Yu-Hang],
Gong, Y.H.[Yi-Hong],
Diversity covariance-aware prompt learning for vision-language models,
PR(173), 2026, pp. 112806.
Elsevier DOI
2601
Visual-language model, Prompt tuning, Few-shot,
Covariance-aware, Diversity-aware
BibRef
Huang, Z.H.[Zhang-Hui],
Feng, Z.L.[Zun-Lei],
Sun, X.Y.[Xiao-Yan],
Sun, S.[Shuifa],
Yuan, Z.M.[Zhen-Ming],
Yu, J.[Jun],
Zhang, J.[Jian],
Divide-and-conquer towards optimal adaptation of pre-trained model to
medical tasks,
PR(174), 2026, pp. 112949.
Elsevier DOI
2602
Pre-trained model, Model adaptation, Fine-tuning, Back-propagation
BibRef
Zhu, X.L.[Xue-Lin],
Li, J.S.[Jian-Shu],
Liu, J.[Jian],
Tang, D.Q.[Dong-Qi],
Ge, J.W.[Jia-Wei],
Liu, W.J.[Wei-Jia],
Liu, B.[Bo],
Cao, J.X.[Jiu-Xin],
AutoIT: Automated Image Tagging with Random Perturbation,
IJCV(134), No. 1, January 2026, pp. 110.
Springer DOI
2602
BibRef
Guo, B.Y.[Bo-Yang],
Li, L.[Liang],
Zhang, J.H.[Jie-Hua],
Sun, Y.Q.[Yao-Qi],
Yan, C.G.[Cheng-Gang],
Sheng, X.C.[Xi-Chun],
Prompt Learning with Knowledge Regularization for Pre-Trained
Vision-Language Models,
MultMed(28), 2026, pp. 1457-1468.
IEEE DOI
2603
Adaptation models, Computational modeling, Training, Sorting,
Ranking (statistics), Optimization, Visualization, Overfitting,
cross-dataset transfer
BibRef
Wu, Y.[Yue],
Qi, Z.B.[Zhao-Bo],
Sun, J.[Junshu],
Wang, Y.W.[Yao-Wei],
Huang, Q.M.[Qing-Ming],
Wang, S.H.[Shu-Hui],
Video Language Model Pretraining with Spatio-temporal Masking,
CVPR25(8557-8567)
IEEE DOI
2508
Visualization, Computational modeling, Semantics, Linguistics,
Spatiotemporal phenomena, Decoding, Image reconstruction, Videos
BibRef
Kim, K.[Kwonyoung],
Park, J.[Jungin],
Kim, J.[Jin],
Kwon, H.[Hyeongjun],
Sohn, K.H.[Kwang-Hoon],
Faster Parameter-Efficient Tuning with Token Redundancy Reduction,
CVPR25(30189-30198)
IEEE DOI Code:
WWW Link.
2508
Transfer pre-trained model.
Training, Adaptation models, Limiting, Foundation models,
Computational modeling, Redundancy, Merging, Memory management
BibRef
Roth, K.[Karsten],
Akata, Z.[Zeynep],
Damen, D.[Dima],
Balažević, I.[Ivana],
Hénaff, O.J.[Olivier J.],
Context-Aware Multimodal Pretraining,
CVPR25(4267-4279)
IEEE DOI
2508
Representation learning, Training, Visualization, Adaptation models,
Computational modeling, Contrastive learning, post-training
BibRef
Wang, L.[Lixu],
Shang, B.Q.[Bing-Qi],
Li, Y.[Yi],
Mohapatra, P.[Payal],
Dong, W.[Wei],
Wang, X.[Xiao],
Zhu, Q.[Qi],
Split Adaptation for Pre-trained Vision Transformers,
CVPR25(20092-20102)
IEEE DOI Code:
WWW Link.
2508
Adaptation models, Quantization (signal), Computational modeling,
Noise, Data visualization, Transformers, Data models, Data mining,
downstream adaptation
BibRef
Yang, H.Y.[Hao-Yuan],
Li, X.[Xiaoou],
Lv, J.M.[Jia-Ming],
Cheng, X.J.[Xian-Jun],
Wang, Q.L.[Qi-Long],
Li, P.H.[Pei-Hua],
ImagineFSL: Self-Supervised Pretraining Matters on Imagined Base Set
for VLM-based Few-shot Learning,
CVPR25(30020-30031)
IEEE DOI
2508
Adaptation models, Systematics, Foundation models, Pipelines,
Buildings, Text to image, Few shot learning, Synthetic data,
text-to-image synthesis
BibRef
Pan, K.H.[Kai-Hang],
Lin, W.[Wang],
Yue, Z.Q.[Zhong-Qi],
Ao, T.L.[Teng-Long],
Jia, L.[Liyu],
Zhao, W.[Wei],
Li, J.C.[Jun-Cheng],
Tang, S.L.[Si-Liang],
Zhang, H.W.[Han-Wang],
Generative Multimodal Pretraining with Discrete Diffusion Timestep
Tokens,
CVPR25(26136-26146)
IEEE DOI Code:
WWW Link.
2508
Award, CVPR, Student Paper HM.
Training, Visualization, Image synthesis, Large language models,
Writing, Diffusion models, Decoding, Noise measurement,
multimodal large language model
BibRef
Shi, B.[Baifeng],
Li, B.[Boyi],
Cai, H.[Han],
Lu, Y.[Yao],
Liu, S.[Sifei],
Pavone, M.[Marco],
Kautz, J.[Jan],
Han, S.[Song],
Darrell, T.J.[Trevor J.],
Molchanov, P.[Pavlo],
Yin, H.X.[Hong-Xu],
Scaling Vision Pre-Training to 4K Resolution,
CVPR25(9631-9640)
IEEE DOI
2508
Representation learning, Visualization, Image resolution, Costs,
Image coding, Contrastive learning, Benchmark testing,
Visual perception
BibRef
Fini, E.[Enrico],
Shukor, M.[Mustafa],
Li, X.J.[Xiu-Jun],
Dufter, P.[Philipp],
Klein, M.[Michal],
Haldimann, D.[David],
Aitharaju, S.[Sai],
da Costa, V.G.T.[Victor G. Turrisi],
Béthune, L.[Louis],
Gan, Z.[Zhe],
Toshev, A.[Alexander],
Eichner, M.[Marcin],
Nabi, M.[Moin],
Yang, Y.F.[Yin-Fei],
Susskind, J.[Joshua],
El-Nouby, A.[Alaaeldin],
Multimodal Autoregressive Pre-training of Large Vision Encoders,
CVPR25(9641-9654)
IEEE DOI
2508
Training, Location awareness, Image recognition, Grounding,
Scalability, Computational modeling, Decoding, autoregressive
BibRef
Wen, X.[Xin],
Zhao, B.C.[Bing-Chen],
Chen, Y.L.[Yi-Lun],
Pang, J.M.[Jiang-Miao],
Qi, X.J.[Xiao-Juan],
A Data-Centric Revisit of Pre-Trained Vision Models for Robot
Learning,
CVPR25(12143-12154)
IEEE DOI Code:
WWW Link.
2508
Degradation, Systematics, Image recognition, Scalability, Semantics,
Prototypes, Benchmark testing, Lead, Robot learning,
embodied ai
BibRef
Zhang, X.S.[Xiao-Shuai],
Wang, Z.C.[Zhi-Cheng],
Zhou, H.[Howard],
Ghosh, S.[Soham],
Gnanapragasam, D.[Danushen],
Jampani, V.[Varun],
Su, H.[Hao],
Guibas, L.J.[Leonidas J.],
Condense: Consistent 2d/3d Pre-training for Dense and Sparse Features
from Multi-view Images,
ECCV24(LIV: 19-38).
Springer DOI
2412
3D using pre-trained 2D models.
BibRef
Feng, T.[Tuo],
Wang, W.G.[Wen-Guan],
Quan, R.J.[Rui-Jie],
Yang, Y.[Yi],
Shape2scene: 3d Scene Representation Learning Through Pre-training on
Shape Data,
ECCV24(LV: 73-91).
Springer DOI
2412
BibRef
Tang, Y.W.[Yi-Wen],
Zhang, R.[Ray],
Liu, J.M.[Jia-Ming],
Guo, Z.[Zoey],
Zhao, B.[Bin],
Wang, Z.G.[Zhi-Gang],
Gao, P.[Peng],
Li, H.S.[Hong-Sheng],
Wang, D.[Dong],
Li, X.L.[Xue-Long],
Any2point: Empowering Any-modality Large Models for Efficient 3d
Understanding,
ECCV24(XXXVI: 456-473).
Springer DOI
2412
Adapt pre-trained 2D models to 3D.
Code:
WWW Link.
BibRef
Zheng, M.Y.[Meng-Yu],
Hao, Z.W.[Zhi-Wei],
Tang, Y.H.[Ye-Hui],
Xu, C.[Chang],
Visual Prompting via Partial Optimal Transport,
ECCV24(XXXV: 1-18).
Springer DOI
2412
BibRef
Wu, S.[Shuchi],
Ma, C.[Chuan],
Wei, K.[Kang],
Xu, X.G.[Xiao-Gang],
Ding, M.[Ming],
Qian, Y.[Yuwen],
Xiao, D.[Di],
Xiang, T.[Tao],
Refine, Discriminate and Align: Stealing Encoders via Sample-wise
Prototypes and Multi-relational Extraction,
ECCV24(XXXIV: 186-203).
Springer DOI
2412
Code:
WWW Link.
BibRef
Choi, H.[Hyesong],
Park, H.[Hyejin],
Yi, K.M.[Kwang Moo],
Cha, S.[Sungmin],
Min, D.B.[Dong-Bo],
Salience-based Adaptive Masking:
Revisiting Token Dynamics for Enhanced Pre-Training,
ECCV24(LXXVIII: 343-359).
Springer DOI
2412
BibRef
Huynh, A.V.[Andy V.],
Gillespie, L.E.[Lauren E.],
Lopez-Saucedo, J.[Jael],
Tang, C.[Claire],
Sikand, R.[Rohan],
Expósito-Alonso, M.[Moisés],
Contrastive Ground-level Image and Remote Sensing Pre-training Improves
Representation Learning for Natural World Imagery,
ECCV24(LXXX: 173-190).
Springer DOI
2412
BibRef
Luo, H.[Hao],
Zhou, B.[Bohan],
Lu, Z.Q.[Zong-Qing],
Pre-trained Visual Dynamics Representations for Efficient Policy
Learning,
ECCV24(LXXXI: 249-267).
Springer DOI
2412
BibRef
Choi, H.[Hyesong],
Lee, H.[Hunsang],
Joung, S.[Seyoung],
Park, H.[Hyejin],
Kim, J.Y.[Ji-Yeong],
Min, D.B.[Dong-Bo],
Emerging Property of Masked Token for Effective Pre-training,
ECCV24(LXXVI: 272-289).
Springer DOI
2412
BibRef
Zhang, Y.Y.[Ying-Ying],
Guo, X.[Xin],
Lao, J.W.[Jiang-Wei],
Yu, L.[Lei],
Ru, L.X.[Li-Xiang],
Wang, J.[Jian],
Ye, G.[Guo],
He, H.M.[Hui-Mei],
Chen, J.D.[Jing-Dong],
Yang, M.[Ming],
POA: Pre-training Once for Models of All Sizes,
ECCV24(III: 131-148).
Springer DOI
2412
BibRef
Nakamura, R.[Ryo],
Tadokoro, R.[Ryu],
Yamada, R.[Ryosuke],
Asano, Y.M.[Yuki M.],
Laina, I.[Iro],
Rupprecht, C.[Christian],
Inoue, N.[Nakamasa],
Yokota, R.[Rio],
Kataoka, H.[Hirokatsu],
Scaling Backwards: Minimal Synthetic Pre-Training?,
ECCV24(XV: 153-171).
Springer DOI
2412
BibRef
Yamada, R.[Ryosuke],
Hara, K.[Kensho],
Kataoka, H.[Hirokatsu],
Makihara, K.[Koshi],
Inoue, N.[Nakamasa],
Yokota, R.[Rio],
Satoh, Y.[Yutaka],
Formula-Supervised Visual-Geometric Pre-Training,
ECCV24(XXII: 57-74).
Springer DOI
2412
BibRef
Zhang, L.[Lixuan],
Kan, M.[Meina],
Shan, S.G.[Shi-Guang],
Chen, X.L.[Xi-Lin],
PreLAR: World Model Pre-Training with Learnable Action Representation,
ECCV24(XXIII: 185-201).
Springer DOI
2412
BibRef
Yang, M.Y.[Meng-Yu],
Tian, Y.[Ye],
Zhang, L.[Lanshan],
Liang, X.[Xiao],
Ran, X.M.[Xu-Ming],
Wang, W.D.[Wen-Dong],
AdaViPro: Region-Based Adaptive Visual Prompt for Large-Scale Models
Adapting,
ICIP24(1316-1322)
IEEE DOI
2411
Training, Adaptation models, Visualization, Image resolution,
Accuracy, Decision making, Benchmark testing
BibRef
Li, X.[Xiang],
Togo, R.[Ren],
Maeda, K.[Keisuke],
Ogawa, T.[Takahiro],
Haseyama, M.[Miki],
Reinforcing Pre-Trained Models Using Counterfactual Images,
ICIP24(486-492)
IEEE DOI
2411
Deep learning, Training, Image recognition, Decision making,
Data augmentation, Robustness, Data models, Deep learning
BibRef
Han, K.[Kai],
Wang, Y.H.[Yun-He],
Guo, J.Y.[Jian-Yuan],
Wu, E.[Enhua],
ParameterNet: Parameters are All You Need for Large-Scale Visual
Pretraining of Mobile Networks,
CVPR24(15751-15761)
IEEE DOI Code:
WWW Link.
2410
Convolutional codes, Visualization, Accuracy, Transformers
BibRef
Zhao, Z.Y.[Zhi-Yu],
Huang, B.K.[Bing-Kun],
Xing, S.[Sen],
Wu, G.S.[Gang-Shan],
Qiao, Y.[Yu],
Wang, L.M.[Li-Min],
Asymmetric Masked Distillation for Pre-Training Small Foundation
Models,
CVPR24(18516-18526)
IEEE DOI Code:
WWW Link.
2410
Adaptation models, Accuracy, Image recognition,
Computational modeling, Transformer cores, Transformers
BibRef
Chiche, B.N.[Benjamin Naoto],
Horikawa, Y.[Yuto],
Fujita, R.[Ryo],
Pre-Training Vision Models with Mandelbulb Variations,
CVPR24(22062-22071)
IEEE DOI
2410
Training, Ethics, Accuracy, Licenses, Transformers,
Formula-driven supervised learning, pre-training, mandelbulb
BibRef
Miao, Y.[Yibo],
Lei, Y.[Yu],
Zhou, F.[Feng],
Deng, Z.J.[Zhi-Jie],
Bayesian Exploration of Pre-Trained Models for Low-Shot Image
Classification,
CVPR24(23849-23859)
IEEE DOI
2410
Uncertainty, Computational modeling, Probabilistic logic,
Robustness, Bayes methods, Kernel, low-shot, classification
BibRef
Noman, M.[Mubashir],
Naseer, M.[Muzammal],
Cholakkal, H.[Hisham],
Anwar, R.M.[Rao Muhammad],
Khan, S.[Salman],
Khan, F.S.[Fahad Shahbaz],
Rethinking Transformers Pre-training for Multi-Spectral Satellite
Imagery,
CVPR24(27811-27819)
IEEE DOI Code:
WWW Link.
2410
Image resolution, Transformers, Optical imaging, Satellite images,
Optical sensors, Remote sensing, multi-spectral imagery
BibRef
Obadic, I.[Ivica],
Levering, A.[Alex],
Pennig, L.[Lars],
Oliveira, D.[Dario],
Marcos, D.[Diego],
Zhu, X.X.[Xiao-Xiang],
Contrastive Pretraining for Visual Concept Explanations of
Socioeconomic Outcomes,
EarthVision24(575-584)
IEEE DOI
2410
Deep learning, Training, Visualization, Sensitivity,
Vegetation mapping, Predictive models, Vectors,
contrastive-pretraining
BibRef
Koch, S.[Sebastian],
Hermosilla, P.[Pedro],
Vaskevicius, N.[Narunas],
Colosi, M.[Mirco],
Ropinski, T.[Timo],
Lang3DSG: Language-based contrastive pre-training for 3D Scene Graph
prediction,
3DV24(1037-1047)
IEEE DOI
2408
Training, Point cloud compression, Knowledge engineering,
Solid modeling, Semantics, Natural languages, 3D Scene Graph, GCN
BibRef
Sadhu, A.[Arka],
Nevatia, R.[Ram],
Leveraging Task-Specific Pre-Training to Reason across Images and
Videos,
WACV24(5782-5792)
IEEE DOI
2404
Visualization, Image recognition, Annotations, Focusing, Cognition,
Data models, Algorithms, Vision + language and/or other modalities
BibRef
Zha, Y.H.[Yao-Hua],
Wang, J.P.[Jin-Peng],
Dai, T.[Tao],
Chen, B.[Bin],
Wang, Z.[Zhi],
Xia, S.T.[Shu-Tao],
Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud
Models,
ICCV23(14115-14124)
IEEE DOI Code:
WWW Link.
2401
BibRef
Kil, J.[Jihyung],
Changpinyo, S.[Soravit],
Chen, X.[Xi],
Hu, H.X.[He-Xiang],
Goodman, S.[Sebastian],
Chao, W.L.[Wei-Lun],
Soricut, R.[Radu],
PreSTU: Pre-Training for Scene-Text Understanding,
ICCV23(15224-15234)
IEEE DOI
2401
BibRef
Huang, D.[Di],
Peng, S.[Sida],
He, T.[Tong],
Yang, H.H.[Hong-Hui],
Zhou, X.W.[Xiao-Wei],
Ouyang, W.L.[Wan-Li],
Ponder: Point Cloud Pre-training via Neural Rendering,
ICCV23(16043-16052)
IEEE DOI
2401
BibRef
Mendieta, M.[Matías],
Han, B.[Boran],
Shi, X.J.[Xing-Jian],
Zhu, Y.[Yi],
Chen, C.[Chen],
Towards Geospatial Foundation Models via Continual Pretraining,
ICCV23(16760-16770)
IEEE DOI Code:
WWW Link.
2401
BibRef
Gao, M.Z.[Ming-Ze],
Wang, Q.L.[Qi-Long],
Lin, Z.Y.[Zhen-Yi],
Zhu, P.F.[Peng-Fei],
Hu, Q.H.[Qing-Hua],
Zhou, J.B.[Jing-Bo],
Tuning Pre-trained Model via Moment Probing,
ICCV23(11769-11779)
IEEE DOI
2401
BibRef
Wang, J.R.[Jian-Ren],
Dasari, S.[Sudeep],
Srirama, M.K.[Mohan Kumar],
Tulsiani, S.[Shubham],
Gupta, A.[Abhinav],
Manipulate by Seeing: Creating Manipulation Controllers from
Pre-Trained Representations,
ICCV23(3836-3845)
IEEE DOI Code:
WWW Link.
2401
BibRef
Wang, Z.J.[Zi-Jian],
Luo, Y.[Yadan],
Zheng, L.[Liang],
Huang, Z.[Zi],
Baktashmotlagh, M.[Mahsa],
How Far Pre-trained Models Are from Neural Collapse on the Target
Dataset Informs their Transferability,
ICCV23(5526-5535)
IEEE DOI
2401
BibRef
Jain, N.[Nishant],
Behl, H.[Harkirat],
Rawat, Y.S.[Yogesh Singh],
Vineet, V.[Vibhav],
Efficiently Robustify Pre-Trained Models,
ICCV23(5482-5492)
IEEE DOI
2401
BibRef
Kim, B.[Bumsoo],
Jo, Y.[Yeonsik],
Kim, J.[Jinhyung],
Kim, S.[Seunghwan],
Misalign, Contrast then Distill:
Rethinking Misalignments in Language-Image Pretraining,
ICCV23(2563-2572)
IEEE DOI
2401
BibRef
Wang, A.[Angelina],
Russakovsky, O.[Olga],
Overwriting Pretrained Bias with Finetuning Data,
ICCV23(3934-3945)
IEEE DOI
2401
BibRef
Chavhan, R.[Ruchika],
Gouk, H.[Henry],
Li, D.[Da],
Hospedales, T.M.[Timothy M.],
Quality Diversity for Visual Pre-Training,
ICCV23(5361-5371)
IEEE DOI Code:
WWW Link.
2401
BibRef
Singh, M.[Mannat],
Duval, Q.[Quentin],
Alwala, K.V.[Kalyan Vasudev],
Fan, H.Q.[Hao-Qi],
Aggarwal, V.[Vaibhav],
Adcock, A.[Aaron],
Joulin, A.[Armand],
Dollár, P.[Piotr],
Feichtenhofer, C.[Christoph],
Girshick, R.[Ross],
Girdhar, R.[Rohit],
Misra, I.[Ishan],
The effectiveness of MAE pre-pretraining for billion-scale
pretraining,
ICCV23(5461-5471)
IEEE DOI
2401
BibRef
Fu, C.[Cheng],
Huang, H.X.[Han-Xian],
Jiang, Z.X.[Zi-Xuan],
Ni, Y.[Yun],
Nai, L.F.[Li-Feng],
Wu, G.[Gang],
Cheng, L.Q.[Li-Qun],
Zhou, Y.Q.[Yan-Qi],
Li, S.[Sheng],
Li, A.[Andrew],
Zhao, J.[Jishen],
TripLe: Revisiting Pretrained Model Reuse and Progressive Learning
for Efficient Vision Transformer Scaling and Searching,
ICCV23(17107-17117)
IEEE DOI
2401
BibRef
Li, D.Q.[Dai-Qing],
Ling, H.[Huan],
Kar, A.[Amlan],
Acuna, D.[David],
Kim, S.W.[Seung Wook],
Kreis, K.[Karsten],
Torralba, A.[Antonio],
Fidler, S.[Sanja],
DreamTeacher: Pretraining Image Backbones with Deep Generative Models,
ICCV23(16652-16662)
IEEE DOI
2401
BibRef
Lew, B.G.[Byoung-Gyu],
Son, D.H.[Dong-Hyun],
Chang, B.[Buru],
Gradient Estimation for Unseen Domain Risk Minimization with
Pre-Trained Models,
OutDistri23(4438-4448)
IEEE DOI
2401
BibRef
Liu, S.[Sheng],
Huynh, C.P.[Cong Phuoc],
Chen, C.[Cong],
Arap, M.[Maxim],
Hamid, R.[Raffay],
LEMaRT: Label-Efficient Masked Region Transform for Image
Harmonization,
CVPR23(18290-18299)
IEEE DOI
2309
BibRef
Wang, Y.M.[Yao-Ming],
Shi, B.[Bowen],
Zhang, X.P.[Xiao-Peng],
Li, J.[Jin],
Liu, Y.C.[Yu-Chen],
Dai, W.R.[Wen-Rui],
Li, C.L.[Cheng-Lin],
Xiong, H.K.[Hong-Kai],
Tian, Q.[Qi],
Adapting Shortcut with Normalizing Flow: An Efficient Tuning
Framework for Visual Recognition,
CVPR23(15965-15974)
IEEE DOI
2309
WWW Link.
BibRef
Ni, M.H.[Min-Heng],
Huang, H.Y.[Hao-Yang],
Su, L.[Lin],
Cui, E.[Edward],
Bharti, T.[Taroon],
Wang, L.J.[Li-Juan],
Zhang, D.D.[Dong-Dong],
Duan, N.[Nan],
M3P: Learning Universal Representations via Multitask Multilingual
Multimodal Pre-training,
CVPR21(3976-3985)
IEEE DOI
2111
Training, Computational modeling, Semantics,
Image retrieval, Benchmark testing, Data models
BibRef
Li, T.J.[Tian-Jiao],
Foo, L.G.[Lin Geng],
Hu, P.[Ping],
Shang, X.[Xindi],
Rahmani, H.[Hossein],
Yuan, Z.H.[Ze-Huan],
Liu, J.[Jun],
Token Boosting for Robust Self-Supervised Visual Transformer
Pre-training,
CVPR23(24027-24038)
IEEE DOI
2309
BibRef
Yan, X.Y.[Xiang-Yi],
Naushad, J.[Junayed],
Sun, S.L.[Shan-Lin],
Han, K.[Kun],
Tang, H.[Hao],
Kong, D.Y.[De-Ying],
Ma, H.Y.[Hao-Yu],
You, C.Y.[Chen-Yu],
Xie, X.H.[Xiao-Hui],
Representation Recovering for Self-Supervised Pre-training on Medical
Images,
WACV23(2684-2694)
IEEE DOI
2302
Representation learning, Visualization, Image segmentation,
Semantics, Self-supervised learning, Feature extraction
BibRef
Lee, K.Y.[Kuan-Ying],
Zhong, Y.[Yuanyi],
Wang, Y.X.[Yu-Xiong],
Do Pre-trained Models Benefit Equally in Continual Learning?,
WACV23(6474-6482)
IEEE DOI
2302
Training, Systematics, Codes, Computational modeling, Pipelines,
Benchmark testing, Algorithms: Machine learning architectures,
and algorithms (including transfer)
BibRef
Su, W.J.[Wei-Jie],
Zhu, X.Z.[Xi-Zhou],
Tao, C.X.[Chen-Xin],
Lu, L.W.[Le-Wei],
Li, B.[Bin],
Huang, G.[Gao],
Qiao, Y.[Yu],
Wang, X.G.[Xiao-Gang],
Zhou, J.[Jie],
Dai, J.F.[Ji-Feng],
Towards All-in-One Pre-Training via Maximizing Multi-Modal Mutual
Information,
CVPR23(15888-15899)
IEEE DOI
2309
BibRef
Wei, L.H.[Long-Hui],
Xie, L.X.[Ling-Xi],
Zhou, W.G.[Wen-Gang],
Li, H.Q.[Hou-Qiang],
Tian, Q.[Qi],
MVP: Multimodality-Guided Visual Pre-training,
ECCV22(XXX:337-353).
Springer DOI
2211
BibRef
Yuan, Z.W.[Zhuo-Wen],
Wu, F.[Fan],
Long, Y.H.[Yun-Hui],
Xiao, C.W.[Chao-Wei],
Li, B.[Bo],
SecretGen: Privacy Recovery on Pre-trained Models via Distribution
Discrimination,
ECCV22(V:139-155).
Springer DOI
2211
BibRef
Yang, J.W.[Jia-Wei],
Chen, H.[Hanbo],
Liang, Y.[Yuan],
Huang, J.Z.[Jun-Zhou],
He, L.[Lei],
Yao, J.H.[Jian-Hua],
ConCL: Concept Contrastive Learning for Dense Prediction Pre-training
in Pathology Images,
ECCV22(XXI:523-539).
Springer DOI
2211
BibRef
You, H.X.[Hao-Xuan],
Zhou, L.W.[Luo-Wei],
Xiao, B.[Bin],
Codella, N.[Noel],
Cheng, Y.[Yu],
Xu, R.C.[Ruo-Chen],
Chang, S.F.[Shih-Fu],
Yuan, L.[Lu],
Learning Visual Representation from Modality-Shared Contrastive
Language-Image Pre-training,
ECCV22(XXVII:69-87).
Springer DOI
2211
BibRef
Chakraborty, S.[Shuvam],
Uzkent, B.[Burak],
Ayush, K.[Kumar],
Tanmay, K.[Kumar],
Sheehan, E.[Evan],
Ermon, S.[Stefano],
Efficient Conditional Pre-training for Transfer Learning,
L3D-IVU22(4240-4249)
IEEE DOI
2210
Training, Costs, Image resolution, Filtering, Computational modeling,
Transfer learning
BibRef
Li, Z.W.[Zhao-Wen],
Zhu, Y.S.[You-Song],
Yang, F.[Fan],
Li, W.[Wei],
Zhao, C.Y.[Chao-Yang],
Chen, Y.Y.[Ying-Ying],
Chen, Z.Y.[Zhi-Yang],
Xie, J.H.[Jia-Hao],
Wu, L.W.[Li-Wei],
Zhao, R.[Rui],
Tang, M.[Ming],
Wang, J.Q.[Jin-Qiao],
UniVIP: A Unified Framework for Self-Supervised Visual Pre-training,
CVPR22(14607-14616)
IEEE DOI
2210
Representation learning, Visualization, Image segmentation,
Correlation, Semantics, Self-supervised learning, Object detection,
Transfer/low-shot/long-tail learning
BibRef
Li, W.[Wei],
Xie, J.H.[Jia-Hao],
Loy, C.C.[Chen Change],
Correlational Image Modeling for Self-Supervised Visual Pre-Training,
CVPR23(15105-15115)
IEEE DOI
2309
BibRef
Jia, M.L.[Meng-Lin],
Tang, L.[Luming],
Chen, B.C.[Bor-Chun],
Cardie, C.[Claire],
Belongie, S.[Serge],
Hariharan, B.[Bharath],
Lim, S.N.[Ser-Nam],
Visual Prompt Tuning,
ECCV22(XXXIII:709-727).
Springer DOI
2211
WWW Link.
Adapt pre-trained model.
BibRef
Xu, C.F.[Chen-Feng],
Li, T.[Tian],
Tang, C.[Chen],
Sun, L.F.[Ling-Feng],
Keutzer, K.[Kurt],
Tomizuka, M.[Masayoshi],
Fathi, A.[Alireza],
Zhan, W.[Wei],
PreTraM: Self-supervised Pre-training via Connecting Trajectory and Map,
ECCV22(XXIX:34-50).
Springer DOI
2211
BibRef
Wei, C.[Chen],
Fan, H.Q.[Hao-Qi],
Xie, S.[Saining],
Wu, C.Y.[Chao-Yuan],
Yuille, A.L.[Alan L.],
Feichtenhofer, C.[Christoph],
Masked Feature Prediction for Self-Supervised Visual Pre-Training,
CVPR22(14648-14658)
IEEE DOI
2210
Deep learning, Visualization, Histograms, Computational modeling,
Transfer learning, Predictive models, Video analysis and understanding
BibRef
Mishra, S.[Samarth],
Panda, R.[Rameswar],
Phoo, C.P.[Cheng Perng],
Chen, C.F.R.[Chun-Fu Richard],
Karlinsky, L.[Leonid],
Saenko, K.[Kate],
Saligrama, V.[Venkatesh],
Feris, R.S.[Rogerio S.],
Task2Sim: Towards Effective Pre-training and Transfer from Synthetic
Data,
CVPR22(9184-9194)
IEEE DOI
2210
Graphics, Training, Representation learning, Adaptation models,
Computational modeling, Data models, retrieval
BibRef
Singh, M.[Mannat],
Gustafson, L.[Laura],
Adcock, A.[Aaron],
de Freitas-Reis, V.[Vinicius],
Gedik, B.[Bugra],
Kosaraju, R.P.[Raj Prateek],
Mahajan, D.[Dhruv],
Girshick, R.[Ross],
Dollár, P.[Piotr],
van der Maaten, L.[Laurens],
Revisiting Weakly Supervised Pre-Training of Visual Perception Models,
CVPR22(794-804)
IEEE DOI
2210
Visualization, Computational modeling, Supervised learning,
Self-supervised learning, Standards,
Transfer/low-shot/long-tail learning
BibRef
Cha, J.[Junbum],
Lee, K.[Kyungjae],
Park, S.[Sungrae],
Chun, S.[Sanghyuk],
Domain Generalization by Mutual-Information Regularization with
Pre-trained Models,
ECCV22(XXIII:440-457).
Springer DOI
2211
BibRef
Zhu, X.Z.[Xi-Zhou],
Zhu, J.G.[Jin-Guo],
Li, H.[Hao],
Wu, X.S.[Xiao-Shi],
Li, H.S.[Hong-Sheng],
Wang, X.H.[Xiao-Hua],
Dai, J.F.[Ji-Feng],
Uni-Perceiver: Pre-training Unified Architecture for Generic
Perception for Zero-shot and Few-shot Tasks,
CVPR22(16783-16794)
IEEE DOI
2210
Representation learning, Costs, Collaboration,
Transformers, Data models,
BibRef
Wang, X.L.[Xin-Long],
Zhang, R.F.[Ru-Feng],
Shen, C.H.[Chun-Hua],
Kong, T.[Tao],
Li, L.[Lei],
Dense Contrastive Learning for Self-Supervised Visual Pre-Training,
CVPR21(3023-3032)
IEEE DOI
2111
Learning systems, Image segmentation,
Visualization, Computational modeling, Semantics, Object detection
BibRef
Mañas, O.[Oscar],
Lacoste, A.[Alexandre],
Giró-i-Nieto, X.[Xavier],
Vazquez, D.[David],
Rodríguez, P.[Pau],
Seasonal Contrast:
Unsupervised Pre-Training from Uncurated Remote Sensing Data,
ICCV21(9394-9403)
IEEE DOI
2203
Earth, Deep learning, Satellites, Transfer learning, Pipelines,
Supervised learning, Data models, Vision applications and systems
BibRef
Zhang, Y.[Youshan],
Davison, B.D.[Brian D.],
Efficient Pre-trained Features and Recurrent Pseudo-Labeling in
Unsupervised Domain Adaptation,
LLID21(2713-2722)
IEEE DOI
2109
Training, Adaptation models, Computational modeling, Benchmark testing
BibRef
Chowdhury, A.[Arkabandhu],
Jiang, M.C.[Ming-Chao],
Chaudhuri, S.[Swarat],
Jermaine, C.[Chris],
Few-shot Image Classification: Just Use a Library of Pre-trained
Feature Extractors and a Simple Classifier,
ICCV21(9425-9434)
IEEE DOI
2203
Transfer learning, Feature extraction, Libraries,
Computational efficiency, Classification algorithms, Feeds,
Vision applications and systems
BibRef
Kim, D.H.[Dong-Hyun],
Saito, K.[Kuniaki],
Oh, T.H.[Tae-Hyun],
Plummer, B.A.[Bryan A.],
Sclaroff, S.[Stan],
Saenko, K.[Kate],
CDS: Cross-Domain Self-supervised Pre-training,
ICCV21(9103-9112)
IEEE DOI
2203
Transfer learning, Task analysis, Standards,
Transfer/Low-shot/Semi/Unsupervised Learning, Representation learning
BibRef
Zhang, J.O.[Jeffrey O.],
Sax, A.[Alexander],
Zamir, A.[Amir],
Guibas, L.J.[Leonidas J.],
Malik, J.[Jitendra],
Side-Tuning:
A Baseline for Network Adaptation via Additive Side Networks,
ECCV20(III:698-714).
Springer DOI
2012
Adapt a pre-trained network rather than training from scratch.
BibRef
Yan, X.T.[Xue-Ting],
Misra, I.[Ishan],
Gupta, A.[Abhinav],
Ghadiyaram, D.[Deepti],
Mahajan, D.[Dhruv],
ClusterFit: Improving Generalization of Visual Representations,
CVPR20(6508-6517)
IEEE DOI
2008
Pre-training.
Task analysis, Training, Feature extraction, Visualization, Videos,
Tagging, Twitter
BibRef
Tang, H.X.[Hong-Xiang],
Ortis, A.[Alessandro],
Battiato, S.[Sebastiano],
The Impact of Padding on Image Classification by Using Pre-trained
Convolutional Neural Networks,
CIAP19(II:337-344).
Springer DOI
1909
BibRef
Chakraborty, R.,
Yang, C.,
Vemuri, B.C.,
A Mixture Model for Aggregation of Multiple Pre-Trained Weak
Classifiers,
Diff-CVML18(454-4547)
IEEE DOI
1812
Feature extraction, Training, Frequency modulation, Boosting,
Geometry, Nickel, Mixture models
BibRef
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Sequential Monte Carlo Methods, Particle Filters.