14.1.7 Pre-Training

See also Transfer Learning from Other Tasks, Other Classes.
See also Domain Generalization.
See also CLIP, Contrastive Language-Image Pre-Training.
See also Fine-Tuning, Pre-Training, Zero-Shot, One-Shot.

Wang, J.[Jie], Luo, C.[Chang], Huang, H.Q.[Han-Qiao], Zhao, H.Z.[Hui-Zhen], Wang, S.Q.[Shi-Qiang],
Transferring Pre-Trained Deep CNNs for Remote Scene Classification with General Features Learned from Linear PCA Network,
RS(9), No. 3, 2017, pp. xx-yy.
DOI Link 1704
BibRef

Wen, Y.[Yang], Chen, L.T.[Lei-Ting], Deng, Y.[Yu], Zhou, C.[Chuan],
Rethinking pre-training on medical imaging,
JVCIR(78), 2021, pp. 103145.
Elsevier DOI 2107
Transfer learning, Medical image analysis, Convolutional neural network, Survival prediction BibRef

Zhang, T.[Tong], Gao, P.[Peng], Dong, H.[Hao], Zhuang, Y.[Yin], Wang, G.Q.[Guan-Qun], Zhang, W.[Wei], Chen, H.[He],
Consecutive Pre-Training: A Knowledge Transfer Learning Strategy with Relevant Unlabeled Data for Remote Sensing Domain,
RS(14), No. 22, 2022, pp. xx-yy.
DOI Link 2212
BibRef

Kataoka, H.[Hirokatsu], Okayasu, K.[Kazushige], Matsumoto, A.[Asato], Yamagata, E.[Eisuke], Yamada, R.[Ryosuke], Inoue, N.[Nakamasa], Nakamura, A.[Akio], Satoh, Y.[Yutaka],
Pre-Training Without Natural Images,
IJCV(130), No. 1, January 2022, pp. 990-1007.
Springer DOI 2204
BibRef
Earlier: ACCV20(VI:583-600).
Springer DOI 2103
BibRef

Xu, C.[Cong], Li, D.[Dan], Yang, M.[Min],
Adversarial momentum-contrastive pre-training,
PRL(160), 2022, pp. 172-179.
Elsevier DOI 2208
Uses both real and adversarial samples for training. Adversarial robustness, Contrastive learning, Memory bank, Fine-tuning BibRef

Zhou, H.Y.[Hong-Yu], Lu, C.X.[Chi-Xiang], Chen, C.Q.[Chao-Qi], Yang, S.[Sibei], Yu, Y.Z.[Yi-Zhou],
A Unified Visual Information Preservation Framework for Self-supervised Pre-Training in Medical Image Analysis,
PAMI(45), No. 7, July 2023, pp. 8020-8035.
IEEE DOI 2306
Semantics, Image restoration, Task analysis, Visualization, Medical diagnostic imaging, Image segmentation, transfer learning BibRef

Chen, Z.H.[Zi-Han], Zhu, H.Y.[Hong-Yuan], Cheng, H.[Hao], Mi, S.[Siya], Zhang, Y.[Yu], Geng, X.[Xin],
LPCL: Localized prominence contrastive learning for self-supervised dense visual pre-training,
PR(135), 2023, pp. 109185.
Elsevier DOI 2212
Self-supervised learning, Contrastive learning, Dense representation BibRef

Wei, L.H.[Long-Hui], Xie, L.X.[Ling-Xi], Zhou, W.G.[Wen-Gang], Li, H.Q.[Hou-Qiang], Tian, Q.[Qi],
Exploring the diversity and invariance in yourself for visual pre-training task,
PR(139), 2023, pp. 109437.
Elsevier DOI 2304
Visual pre-training, Self-supervised learning, Multi-grained visual information BibRef

Peng, J.[Junran], Chang, Q.[Qing], Yin, H.R.[Hao-Ran], Bu, X.Y.[Xing-Yuan], Sun, J.J.[Jia-Jun], Xie, L.X.[Ling-Xi], Zhang, X.P.[Xiao-Peng], Tian, Q.[Qi], Zhang, Z.X.[Zhao-Xiang],
GAIA-Universe: Everything is Super-Netify,
PAMI(45), No. 10, October 2023, pp. 11856-11868.
IEEE DOI 2310
WWW Link. BibRef

Dong, X.N.[Xing-Ning], Guo, Q.P.[Qing-Pei], Gan, T.[Tian], Wang, Q.[Qing], Wu, J.L.[Jian-Long], Ren, X.Y.[Xiang-Yuan], Cheng, Y.[Yuan], Chu, W.[Wei],
SNP-S3: Shared Network Pre-Training and Significant Semantic Strengthening for Various Video-Text Tasks,
CirSysVideo(34), No. 4, April 2024, pp. 2525-2535.
IEEE DOI 2404
Code: WWW Link.
Task analysis, Visualization, Feature extraction, Semantics, Training, Transformers, Video-text pre-training, video-text matching BibRef

Zhao, T.C.[Tian-Cheng], Liu, P.[Peng], Lee, K.[Kyusong],
OmDet: Large-scale vision-language multi-dataset pre-training with multimodal detection network,
IET-CV(18), No. 5, 2024, pp. 626-639.
DOI Link 2408
object detection, object recognition BibRef

Tang, Y.[Yuan], Li, X.Z.[Xian-Zhi], Xu, J.F.[Jin-Feng], Yu, Q.[Qiao], Hu, L.[Long], Hao, Y.X.[Yi-Xue], Chen, M.[Min],
Point-LGMask: Local and Global Contexts Embedding for Point Cloud Pre-Training With Multi-Ratio Masking,
MultMed(26), 2024, pp. 8360-8370.
IEEE DOI 2408
Point cloud compression, Task analysis, Predictive models, Self-supervised learning, Representation learning BibRef

Koch, S.[Sebastian], Hermosilla, P.[Pedro], Vaskevicius, N.[Narunas], Colosi, M.[Mirco], Ropinski, T.[Timo],
Lang3DSG: Language-based contrastive pre-training for 3D Scene Graph prediction,
3DV24(1037-1047)
IEEE DOI 2408
Training, Point cloud compression, Knowledge engineering, Solid modeling, Semantics, Natural languages, 3D Scene Graph, GCN BibRef

Sadhu, A.[Arka], Nevatia, R.[Ram],
Leveraging Task-Specific Pre-Training to Reason across Images and Videos,
WACV24(5782-5792)
IEEE DOI 2404
Visualization, Image recognition, Annotations, Focusing, Cognition, Data models, Algorithms, Vision + language and/or other modalities BibRef

Lin, J.Y.[Jia-Ying], Lau, R.W.H.[Rynson W. H.],
Self-supervised Pre-training for Mirror Detection,
ICCV23(12193-12202)
IEEE DOI 2401
Code: WWW Link.
BibRef

Zha, Y.[Yaohua], Wang, J.P.[Jin-Peng], Dai, T.[Tao], Chen, B.[Bin], Wang, Z.[Zhi], Xia, S.T.[Shu-Tao],
Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models,
ICCV23(14115-14124)
IEEE DOI 2401
Code: WWW Link.
BibRef

Kil, J.[Jihyung], Changpinyo, S.[Soravit], Chen, X.[Xi], Hu, H.X.[He-Xiang], Goodman, S.[Sebastian], Chao, W.L.[Wei-Lun], Soricut, R.[Radu],
PreSTU: Pre-Training for Scene-Text Understanding,
ICCV23(15224-15234)
IEEE DOI 2401
BibRef

Huang, D.[Di], Peng, S.[Sida], He, T.[Tong], Yang, H.H.[Hong-Hui], Zhou, X.W.[Xiao-Wei], Ouyang, W.L.[Wan-Li],
Ponder: Point Cloud Pre-training via Neural Rendering,
ICCV23(16043-16052)
IEEE DOI 2401
BibRef

Mendieta, M.[Matías], Han, B.[Boran], Shi, X.J.[Xing-Jian], Zhu, Y.[Yi], Chen, C.[Chen],
Towards Geospatial Foundation Models via Continual Pretraining,
ICCV23(16760-16770)
IEEE DOI 2401
Code: WWW Link.
BibRef

Gao, M.Z.[Ming-Ze], Wang, Q.L.[Qi-Long], Lin, Z.[Zhenyi], Zhu, P.F.[Peng-Fei], Hu, Q.H.[Qing-Hua], Zhou, J.B.[Jing-Bo],
Tuning Pre-trained Model via Moment Probing,
ICCV23(11769-11779)
IEEE DOI 2401
BibRef

Wang, J.R.[Jian-Ren], Dasari, S.[Sudeep], Srirama, M.K.[Mohan Kumar], Tulsiani, S.[Shubham], Gupta, A.[Abhinav],
Manipulate by Seeing: Creating Manipulation Controllers from Pre-Trained Representations,
ICCV23(3836-3845)
IEEE DOI 2401
Code: WWW Link.
BibRef

Wang, Z.J.[Zi-Jian], Luo, Y.[Yadan], Zheng, L.[Liang], Huang, Z.[Zi], Baktashmotlagh, M.[Mahsa],
How Far Pre-trained Models Are from Neural Collapse on the Target Dataset Informs their Transferability,
ICCV23(5526-5535)
IEEE DOI 2401
BibRef

Jain, N.[Nishant], Behl, H.[Harkirat], Rawat, Y.S.[Yogesh Singh], Vineet, V.[Vibhav],
Efficiently Robustify Pre-Trained Models,
ICCV23(5482-5492)
IEEE DOI 2401
BibRef

Kim, B.[Bumsoo], Jo, Y.[Yeonsik], Kim, J.[Jinhyung], Kim, S.[Seunghwan],
Misalign, Contrast then Distill: Rethinking Misalignments in Language-Image Pretraining,
ICCV23(2563-2572)
IEEE DOI 2401
BibRef

Wang, A.[Angelina], Russakovsky, O.[Olga],
Overwriting Pretrained Bias with Finetuning Data,
ICCV23(3934-3945)
IEEE DOI 2401
BibRef

Chavhan, R.[Ruchika], Gouk, H.[Henry], Li, D.[Da], Hospedales, T.M.[Timothy M.],
Quality Diversity for Visual Pre-Training,
ICCV23(5361-5371)
IEEE DOI 2401
Code: WWW Link.
BibRef

Singh, M.[Mannat], Duval, Q.[Quentin], Alwala, K.V.[Kalyan Vasudev], Fan, H.Q.[Hao-Qi], Aggarwal, V.[Vaibhav], Adcock, A.[Aaron], Joulin, A.[Armand], Dollár, P.[Piotr], Feichtenhofer, C.[Christoph], Girshick, R.[Ross], Girdhar, R.[Rohit], Misra, I.[Ishan],
The effectiveness of MAE pre-pretraining for billion-scale pretraining,
ICCV23(5461-5471)
IEEE DOI 2401
BibRef

Fu, C.[Cheng], Huang, H.X.[Han-Xian], Jiang, Z.X.[Zi-Xuan], Ni, Y.[Yun], Nai, L.F.[Li-Feng], Wu, G.[Gang], Cheng, L.Q.[Li-Qun], Zhou, Y.Q.[Yan-Qi], Li, S.[Sheng], Li, A.[Andrew], Zhao, J.[Jishen],
TripLe: Revisiting Pretrained Model Reuse and Progressive Learning for Efficient Vision Transformer Scaling and Searching,
ICCV23(17107-17117)
IEEE DOI 2401
BibRef

Li, D.Q.[Dai-Qing], Ling, H.[Huan], Kar, A.[Amlan], Acuna, D.[David], Kim, S.W.[Seung Wook], Kreis, K.[Karsten], Torralba, A.[Antonio], Fidler, S.[Sanja],
DreamTeacher: Pretraining Image Backbones with Deep Generative Models,
ICCV23(16652-16662)
IEEE DOI 2401
BibRef

Lew, B.G.[Byoung-Gyu], Son, D.H.[Dong-Hyun], Chang, B.[Buru],
Gradient Estimation for Unseen Domain Risk Minimization with Pre-Trained Models,
OutDistri23(4438-4448)
IEEE DOI 2401
BibRef

Liu, S.[Sheng], Huynh, C.P.[Cong Phuoc], Chen, C.[Cong], Arap, M.[Maxim], Hamid, R.[Raffay],
LEMaRT: Label-Efficient Masked Region Transform for Image Harmonization,
CVPR23(18290-18299)
IEEE DOI 2309
BibRef

Wang, Y.M.[Yao-Ming], Shi, B.[Bowen], Zhang, X.P.[Xiao-Peng], Li, J.[Jin], Liu, Y.C.[Yu-Chen], Dai, W.R.[Wen-Rui], Li, C.L.[Cheng-Lin], Xiong, H.K.[Hong-Kai], Tian, Q.[Qi],
Adapting Shortcut with Normalizing Flow: An Efficient Tuning Framework for Visual Recognition,
CVPR23(15965-15974)
IEEE DOI 2309
WWW Link. BibRef

Ni, M.H.[Min-Heng], Huang, H.Y.[Hao-Yang], Su, L.[Lin], Cui, E.[Edward], Bharti, T.[Taroon], Wang, L.J.[Li-Juan], Zhang, D.D.[Dong-Dong], Duan, N.[Nan],
M3P: Learning Universal Representations via Multitask Multilingual Multimodal Pre-training,
CVPR21(3976-3985)
IEEE DOI 2111
Training, Computational modeling, Semantics, Image retrieval, Benchmark testing, Data models BibRef

Li, T.J.[Tian-Jiao], Foo, L.G.[Lin Geng], Hu, P.[Ping], Shang, X.[Xindi], Rahmani, H.[Hossein], Yuan, Z.H.[Ze-Huan], Liu, J.[Jun],
Token Boosting for Robust Self-Supervised Visual Transformer Pre-training,
CVPR23(24027-24038)
IEEE DOI 2309
BibRef

Yan, X.Y.[Xiang-Yi], Naushad, J.[Junayed], Sun, S.L.[Shan-Lin], Han, K.[Kun], Tang, H.[Hao], Kong, D.Y.[De-Ying], Ma, H.Y.[Hao-Yu], You, C.Y.[Chen-Yu], Xie, X.H.[Xiao-Hui],
Representation Recovering for Self-Supervised Pre-training on Medical Images,
WACV23(2684-2694)
IEEE DOI 2302
Representation learning, Visualization, Image segmentation, Semantics, Self-supervised learning, Feature extraction BibRef

Lee, K.Y.[Kuan-Ying], Zhong, Y.[Yuanyi], Wang, Y.X.[Yu-Xiong],
Do Pre-trained Models Benefit Equally in Continual Learning?,
WACV23(6474-6482)
IEEE DOI 2302
Training, Systematics, Codes, Computational modeling, Pipelines, Benchmark testing, Machine learning architectures and algorithms (including transfer) BibRef

Su, W.J.[Wei-Jie], Zhu, X.[Xizhou], Tao, C.X.[Chen-Xin], Lu, L.W.[Le-Wei], Li, B.[Bin], Huang, G.[Gao], Qiao, Y.[Yu], Wang, X.G.[Xiao-Gang], Zhou, J.[Jie], Dai, J.F.[Ji-Feng],
Towards All-in-One Pre-Training via Maximizing Multi-Modal Mutual Information,
CVPR23(15888-15899)
IEEE DOI 2309
BibRef

Wei, L.[Longhui], Xie, L.X.[Ling-Xi], Zhou, W.G.[Wen-Gang], Li, H.Q.[Hou-Qiang], Tian, Q.[Qi],
MVP: Multimodality-Guided Visual Pre-training,
ECCV22(XXX:337-353).
Springer DOI 2211
BibRef

Yuan, Z.W.[Zhuo-Wen], Wu, F.[Fan], Long, Y.H.[Yun-Hui], Xiao, C.W.[Chao-Wei], Li, B.[Bo],
SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination,
ECCV22(V:139-155).
Springer DOI 2211
BibRef

Yang, J.W.[Jia-Wei], Chen, H.[Hanbo], Liang, Y.[Yuan], Huang, J.Z.[Jun-Zhou], He, L.[Lei], Yao, J.H.[Jian-Hua],
ConCL: Concept Contrastive Learning for Dense Prediction Pre-training in Pathology Images,
ECCV22(XXI:523-539).
Springer DOI 2211
BibRef

You, H.X.[Hao-Xuan], Zhou, L.W.[Luo-Wei], Xiao, B.[Bin], Codella, N.[Noel], Cheng, Y.[Yu], Xu, R.C.[Ruo-Chen], Chang, S.F.[Shih-Fu], Yuan, L.[Lu],
Learning Visual Representation from Modality-Shared Contrastive Language-Image Pre-training,
ECCV22(XXVII:69-87).
Springer DOI 2211
BibRef

Chakraborty, S.[Shuvam], Uzkent, B.[Burak], Ayush, K.[Kumar], Tanmay, K.[Kumar], Sheehan, E.[Evan], Ermon, S.[Stefano],
Efficient Conditional Pre-training for Transfer Learning,
L3D-IVU22(4240-4249)
IEEE DOI 2210
Training, Costs, Image resolution, Filtering, Computational modeling, Transfer learning BibRef

Li, Z.W.[Zhao-Wen], Zhu, Y.S.[You-Song], Yang, F.[Fan], Li, W.[Wei], Zhao, C.Y.[Chao-Yang], Chen, Y.Y.[Ying-Ying], Chen, Z.Y.[Zhi-Yang], Xie, J.H.[Jia-Hao], Wu, L.W.[Li-Wei], Zhao, R.[Rui], Tang, M.[Ming], Wang, J.Q.[Jin-Qiao],
UniVIP: A Unified Framework for Self-Supervised Visual Pre-training,
CVPR22(14607-14616)
IEEE DOI 2210
Representation learning, Visualization, Image segmentation, Correlation, Semantics, Self-supervised learning, Object detection, Transfer/low-shot/long-tail learning BibRef

Li, W.[Wei], Xie, J.H.[Jia-Hao], Loy, C.C.[Chen Change],
Correlational Image Modeling for Self-Supervised Visual Pre-Training,
CVPR23(15105-15115)
IEEE DOI 2309
BibRef

Jia, M.L.[Meng-Lin], Tang, L.[Luming], Chen, B.C.[Bor-Chun], Cardie, C.[Claire], Belongie, S.[Serge], Hariharan, B.[Bharath], Lim, S.N.[Ser-Nam],
Visual Prompt Tuning,
ECCV22(XXXIII:709-727).
Springer DOI 2211
WWW Link. Adapt a pre-trained model. BibRef

Xu, C.F.[Chen-Feng], Li, T.[Tian], Tang, C.[Chen], Sun, L.F.[Ling-Feng], Keutzer, K.[Kurt], Tomizuka, M.[Masayoshi], Fathi, A.[Alireza], Zhan, W.[Wei],
PreTraM: Self-supervised Pre-training via Connecting Trajectory and Map,
ECCV22(XXIX:34-50).
Springer DOI 2211
BibRef

Wei, C.[Chen], Fan, H.Q.[Hao-Qi], Xie, S.[Saining], Wu, C.Y.[Chao-Yuan], Yuille, A.L.[Alan L.], Feichtenhofer, C.[Christoph],
Masked Feature Prediction for Self-Supervised Visual Pre-Training,
CVPR22(14648-14658)
IEEE DOI 2210
Deep learning, Visualization, Histograms, Computational modeling, Transfer learning, Predictive models, Video analysis and understanding BibRef

Mishra, S.[Samarth], Panda, R.[Rameswar], Phoo, C.P.[Cheng Perng], Chen, C.F.R.[Chun-Fu Richard], Karlinsky, L.[Leonid], Saenko, K.[Kate], Saligrama, V.[Venkatesh], Feris, R.S.[Rogerio S.],
Task2Sim: Towards Effective Pre-training and Transfer from Synthetic Data,
CVPR22(9184-9194)
IEEE DOI 2210
Graphics, Training, Representation learning, Adaptation models, Computational modeling, Data models, retrieval BibRef

Singh, M.[Mannat], Gustafson, L.[Laura], Adcock, A.[Aaron], de Freitas-Reis, V.[Vinicius], Gedik, B.[Bugra], Kosaraju, R.P.[Raj Prateek], Mahajan, D.[Dhruv], Girshick, R.[Ross], Dollár, P.[Piotr], van der Maaten, L.[Laurens],
Revisiting Weakly Supervised Pre-Training of Visual Perception Models,
CVPR22(794-804)
IEEE DOI 2210
Visualization, Computational modeling, Supervised learning, Self-supervised learning, Pattern recognition, Standards, Transfer/low-shot/long-tail learning BibRef

Cha, J.[Junbum], Lee, K.[Kyungjae], Park, S.[Sungrae], Chun, S.[Sanghyuk],
Domain Generalization by Mutual-Information Regularization with Pre-trained Models,
ECCV22(XXIII:440-457).
Springer DOI 2211
BibRef

Kim, D.H.[Dong-Hyun], Wang, K.[Kaihong], Sclaroff, S.[Stan], Saenko, K.[Kate],
A Broad Study of Pre-training for Domain Generalization and Adaptation,
ECCV22(XXXIII:621-638).
Springer DOI 2211
BibRef

Zhu, X.Z.[Xi-Zhou], Zhu, J.[Jinguo], Li, H.[Hao], Wu, X.S.[Xiao-Shi], Li, H.S.[Hong-Sheng], Wang, X.H.[Xiao-Hua], Dai, J.F.[Ji-Feng],
Uni-Perceiver: Pre-training Unified Architecture for Generic Perception for Zero-shot and Few-shot Tasks,
CVPR22(16783-16794)
IEEE DOI 2210
Representation learning, Costs, Collaboration, Transformers, Data models BibRef

Wang, X.L.[Xin-Long], Zhang, R.F.[Ru-Feng], Shen, C.H.[Chun-Hua], Kong, T.[Tao], Li, L.[Lei],
Dense Contrastive Learning for Self-Supervised Visual Pre-Training,
CVPR21(3023-3032)
IEEE DOI 2111
Learning systems, Image segmentation, Visualization, Computational modeling, Semantics, Object detection BibRef

Mañas, O.[Oscar], Lacoste, A.[Alexandre], Giró-i-Nieto, X.[Xavier], Vazquez, D.[David], Rodríguez, P.[Pau],
Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data,
ICCV21(9394-9403)
IEEE DOI 2203
Earth, Deep learning, Satellites, Transfer learning, Pipelines, Supervised learning, Data models, Vision applications and systems BibRef

Zhang, Y.[Youshan], Davison, B.D.[Brian D.],
Efficient Pre-trained Features and Recurrent Pseudo-Labeling in Unsupervised Domain Adaptation,
LLID21(2713-2722)
IEEE DOI 2109
Training, Adaptation models, Computational modeling, Benchmark testing BibRef

Chowdhury, A.[Arkabandhu], Jiang, M.C.[Ming-Chao], Chaudhuri, S.[Swarat], Jermaine, C.[Chris],
Few-shot Image Classification: Just Use a Library of Pre-trained Feature Extractors and a Simple Classifier,
ICCV21(9425-9434)
IEEE DOI 2203
Transfer learning, Feature extraction, Libraries, Computational efficiency, Classification algorithms, Feeds, Vision applications and systems BibRef

Kim, D.H.[Dong-Hyun], Saito, K.[Kuniaki], Oh, T.H.[Tae-Hyun], Plummer, B.A.[Bryan A.], Sclaroff, S.[Stan], Saenko, K.[Kate],
CDS: Cross-Domain Self-supervised Pre-training,
ICCV21(9103-9112)
IEEE DOI 2203
Transfer learning, Task analysis, Standards, Transfer/Low-shot/Semi/Unsupervised Learning, Representation learning BibRef

Zhang, J.O.[Jeffrey O.], Sax, A.[Alexander], Zamir, A.[Amir], Guibas, L.J.[Leonidas J.], Malik, J.[Jitendra],
Side-Tuning: A Baseline for Network Adaptation via Additive Side Networks,
ECCV20(III:698-714).
Springer DOI 2012
Adapt a pre-trained network rather than training from scratch. BibRef

Yan, X.T.[Xue-Ting], Misra, I.[Ishan], Gupta, A.[Abhinav], Ghadiyaram, D.[Deepti], Mahajan, D.[Dhruv],
ClusterFit: Improving Generalization of Visual Representations,
CVPR20(6508-6517)
IEEE DOI 2008
Pre-training. Task analysis, Training, Feature extraction, Visualization, Videos, Tagging, Twitter BibRef

Tang, H.X.[Hong-Xiang], Ortis, A.[Alessandro], Battiato, S.[Sebastiano],
The Impact of Padding on Image Classification by Using Pre-trained Convolutional Neural Networks,
CIAP19(II:337-344).
Springer DOI 1909
BibRef

Chakraborty, R., Yang, C., Vemuri, B.C.,
A Mixture Model for Aggregation of Multiple Pre-Trained Weak Classifiers,
Diff-CVML18(454-4547)
IEEE DOI 1812
Feature extraction, Training, Frequency modulation, Boosting, Geometry, Nickel, Mixture models BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Domain Adaptation.


Last update: Sep 28, 2024 at 17:47:54