14.5.10.8.5 Training Issues for Convolutional Neural Networks

Chapter Contents
Convolutional Neural Networks. Network Training. A somewhat arbitrary subsection -- more training issues appear under network design. Also consider:
See also Receptive Field Issues.

Geng, J., Wang, H., Fan, J., Ma, X.,
Deep Supervised and Contractive Neural Network for SAR Image Classification,
GeoRS(55), No. 4, April 2017, pp. 2442-2459.
IEEE DOI 1704
feature extraction BibRef

Geng, J., Wang, H., Fan, J., Ma, X.,
SAR Image Classification via Deep Recurrent Encoding Neural Networks,
GeoRS(56), No. 4, April 2018, pp. 2255-2269.
IEEE DOI 1804
Artificial neural networks, Feature extraction, Logic gates, Machine learning, Radar imaging, Synthetic aperture radar, synthetic aperture radar (SAR) image BibRef

Zhang, Z., Wang, H., Xu, F., Jin, Y.Q.,
Complex-Valued Convolutional Neural Network and Its Application in Polarimetric SAR Image Classification,
GeoRS(55), No. 12, December 2017, pp. 7177-7188.
IEEE DOI 1712
Convolution, Feature extraction, Machine learning, Neural networks, Synthetic aperture radar, Training, terrain classification BibRef

Cheng, Y., Wang, D., Zhou, P., Zhang, T.,
Model Compression and Acceleration for Deep Neural Networks: The Principles, Progress, and Challenges,
SPMag(35), No. 1, January 2018, pp. 126-136.
IEEE DOI 1801
Computational modeling, Convolution, Convolutional codes, Machine learning, Neural networks, Quantization (signal), Training data BibRef

Liang, P.[Peng], Shi, W.Z.[Wen-Zhong], Zhang, X.K.[Xiao-Kang],
Remote Sensing Image Classification Based on Stacked Denoising Autoencoder,
RS(10), No. 1, 2018, pp. xx-yy.
DOI Link 1802
Train, then add noise and retrain. BibRef

Peng, F.F.[Fei-Fei], Lu, W.[Wei], Tan, W.X.[Wen-Xia], Qi, K.L.[Kun-Lun], Zhang, X.K.[Xiao-Kang], Zhu, Q.S.[Quan-Sheng],
Multi-Output Network Combining GNN and CNN for Remote Sensing Scene Classification,
RS(14), No. 6, 2022, pp. xx-yy.
DOI Link 2204
BibRef

Li, X.[Xin], Jie, Z.[Zequn], Feng, J.S.[Jia-Shi], Liu, C.S.[Chang-Song], Yan, S.C.[Shui-Cheng],
Learning with rethinking: Recurrently improving convolutional neural networks through feedback,
PR(79), 2018, pp. 183-194.
Elsevier DOI 1804
Convolutional neural network, Image classification, Deep learning BibRef

Yang, Y., Wu, Q.M.J.[Qing-Ming Jonathan], Feng, X., Akilan, T.[Thangarajah],
Recomputation of the Dense Layers for Performance Improvement of DCNN,
PAMI(42), No. 11, November 2020, pp. 2912-2925.
IEEE DOI 2010
Training, Mathematical model, Optimization, Neurons, Convolutional neural networks, Deep learning BibRef

Yin, P., Zhang, S., Lyu, J., Osher, S., Qi, Y., Xin, J.,
BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights,
SIIMS(11), No. 4, 2018, pp. 2205-2223.
DOI Link 1901
BibRef

Zhang, C.L.[Chen-Lin], Wu, J.X.[Jian-Xin],
Improving CNN linear layers with power mean non-linearity,
PR(89), 2019, pp. 12-21.
Elsevier DOI 1902
Non-linearity in deep learning, Pre-trained CNN models, Object recognition, Transfer learning BibRef

Cruz, L.[Leonel], Tous, R.[Ruben], Otero, B.[Beatriz],
Distributed training of deep neural networks with spark: The MareNostrum experience,
PRL(125), 2019, pp. 174-178.
Elsevier DOI 1909
On parallel systems. Deep Learning, Spark, DL4J, HPC, Performance, Scalability, MareNostrum BibRef

Sourati, J.[Jamshid], Gholipour, A.[Ali], Dy, J.G.[Jennifer G.], Tomas-Fernandez, X.[Xavier], Kurugol, S.[Sila], Warfield, S.K.[Simon K.],
Intelligent Labeling Based on Fisher Information for Medical Image Segmentation Using Deep Learning,
MedImg(38), No. 11, November 2019, pp. 2642-2653.
IEEE DOI 1911
Training issues. Image segmentation, Uncertainty, Data models, Biomedical imaging, Computational modeling, Labeling, Brain modeling, patch-wise segmentation BibRef

Liang, J.X.[Jin-Xiu], Xu, Y.[Yong], Bao, C.L.[Cheng-Long], Quan, Y.H.[Yu-Hui], Ji, H.[Hui],
Barzilai-Borwein-based adaptive learning rate for deep learning,
PRL(128), 2019, pp. 197-203.
Elsevier DOI 1912
Barzilai-Borwein method, Deep neural network, Stochastic gradient descent, Adaptive learning rate BibRef
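The Barzilai-Borwein step size that the entry above builds on can be sketched as follows. This is a generic illustration on a toy quadratic, not the authors' deep-learning variant; the BB2 form with an absolute value is assumed.

```python
import numpy as np

def bb_step_size(x_prev, x_curr, g_prev, g_curr, eps=1e-12):
    """Barzilai-Borwein (BB2-style) step size from successive iterates/gradients."""
    s = x_curr - x_prev          # change in parameters
    y = g_curr - g_prev          # change in gradients
    return abs(np.dot(s, y)) / (np.dot(y, y) + eps)

# Toy usage: minimize f(x) = 0.5 * x^T A x by gradient descent with BB steps.
A = np.diag([1.0, 10.0])
grad = lambda x: A @ x

x_prev = np.array([1.0, 1.0])
x = x_prev - 0.01 * grad(x_prev)       # one fixed-step warm-up iteration
for _ in range(50):
    eta = bb_step_size(x_prev, x, grad(x_prev), grad(x))
    x_prev, x = x, x - eta * grad(x)   # BB step adapts eta each iteration
```

The deep-learning setting of the paper replaces exact gradients with stochastic mini-batch gradients; this sketch only shows the step-size rule itself.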

Chun, I.Y., Fessler, J.A.,
Convolutional Analysis Operator Learning: Acceleration and Convergence,
IP(29), 2020, pp. 2108-2122.
IEEE DOI 2001
Convolution, Training, Kernel, Convolutional codes, Computed tomography, Convergence, Image reconstruction, X-ray computed tomography BibRef

Tu, S.S.[Shan-Shan], ur Rehman, S.[Sadaqat], Waqas, M.[Muhammad], ur Rehman, O.[Obaid], Yang, Z.L.[Zhong-Liang], Ahmad, B.[Basharat], Halim, Z.[Zahid], Zhao, W.[Wei],
Optimisation-based training of evolutionary convolution neural network for visual classification applications,
IET-CV(14), No. 5, August 2020, pp. 259-267.
DOI Link 2007
BibRef

Lu, Z., Deb, K., Naresh Boddeti, V.,
MUXConv: Information Multiplexing in Convolutional Neural Networks,
CVPR20(12041-12050)
IEEE DOI 2008
Multiplexing, Computational modeling, Standards, Predictive models, Convolutional codes, Computational complexity BibRef

Schult, J.[Jonas], Engelmann, F.[Francis], Kontogianni, T.[Theodora], Leibe, B.[Bastian],
DualConvMesh-Net: Joint Geodesic and Euclidean Convolutions on 3D Meshes,
CVPR20(8609-8619)
IEEE DOI 2008
Convolutional codes, Kernel, Shape, Measurement, Convolution, Semantics BibRef

Liu, Y.S.[Yi-Shu], Han, Z.Z.[Zheng-Zhuo], Chen, C.H.[Cong-Hui], Ding, L.W.[Li-Wang], Liu, Y.B.[Ying-Bin],
Eagle-Eyed Multitask CNNs for Aerial Image Retrieval and Scene Classification,
GeoRS(58), No. 9, September 2020, pp. 6699-6721.
IEEE DOI 2008
Image retrieval, Computational modeling, Uncertainty, Training, Feature extraction, Task analysis, Convolutional neural networks, similarity distribution learning BibRef

Liang, C., Zhang, H., Yuan, D., Zhang, M.,
A Novel CNN Training Framework: Loss Transferring,
CirSysVideo(30), No. 12, December 2020, pp. 4611-4625.
IEEE DOI 2012
Training, Computational modeling, Convolutional neural networks, Benchmark testing, Task analysis, Loss measurement, softmax BibRef

Zunino, A.[Andrea], Bargal, S.A.[Sarah Adel], Morerio, P.[Pietro], Zhang, J.M.[Jian-Ming], Sclaroff, S.[Stan], Murino, V.[Vittorio],
Excitation Dropout: Encouraging Plasticity in Deep Neural Networks,
IJCV(129), No. 4, April 2021, pp. 1139-1152.
Springer DOI 2104
BibRef

Morerio, P.[Pietro], Cavazza, J.[Jacopo], Volpi, R.[Riccardo], Vidal, R.[René], Murino, V.[Vittorio],
Curriculum Dropout,
ICCV17(3564-3572)
IEEE DOI 1802
Remove NN units to reduce over-specific detectors. feature extraction, generalisation (artificial intelligence), image classification, image representation, Training BibRef
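Curriculum Dropout schedules the dropout rate to grow during training instead of holding it fixed. A minimal sketch, assuming an exponential schedule for the retain probability (the exact schedule is defined in the paper):

```python
import numpy as np

def curriculum_retain_prob(t, theta_final=0.5, gamma=1e-3):
    """Retain probability that starts at 1 (no dropout) and decays
    toward theta_final, so dropout gets stronger as training proceeds."""
    return (1.0 - theta_final) * np.exp(-gamma * t) + theta_final

def dropout(x, retain_prob, rng):
    """Inverted dropout: zero units with prob 1 - retain_prob, rescale the rest."""
    mask = rng.random(x.shape) < retain_prob
    return x * mask / retain_prob

rng = np.random.default_rng(0)
x = np.ones(10)
early = dropout(x, curriculum_retain_prob(0), rng)       # retain prob 1.0: no units dropped
late = dropout(x, curriculum_retain_prob(10**6), rng)    # retain prob ~0.5: strong dropout
```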

Dang, Z.[Zheng], Yi, K.M.[Kwang Moo], Hu, Y.L.[Yin-Lin], Wang, F.[Fei], Fua, P.[Pascal], Salzmann, M.[Mathieu],
Eigendecomposition-Free Training of Deep Networks for Linear Least-Square Problems,
PAMI(43), No. 9, September 2021, pp. 3167-3182.
IEEE DOI 2108
BibRef
Earlier:
Eigendecomposition-Free Training of Deep Networks with Zero Eigenvalue-Based Losses,
ECCV18(VI: 792-807).
Springer DOI 1810
Eigenvalues and eigenfunctions, Machine learning, Optimization, Task analysis, geometric vision BibRef


Sun, Y.[Yi], Li, J.[Jian], Xu, X.[Xin],
Meta-GF: Training Dynamic-Depth Neural Networks Harmoniously,
ECCV22(XI:691-708).
Springer DOI 2211
BibRef

Yuan, G.[Geng], Chang, S.E.[Sung-En], Jin, Q.[Qing], Lu, A.[Alec], Li, Y.[Yanyu], Wu, Y.S.[Yu-Shu], Kong, Z.[Zhenglun], Xie, Y.[Yanyue], Dong, P.[Peiyan], Qin, M.[Minghai], Ma, X.L.[Xiao-Long], Tang, X.[Xulong], Fang, Z.[Zhenman], Wang, Y.Z.[Yan-Zhi],
You Already Have It: A Generator-Free Low-Precision DNN Training Framework Using Stochastic Rounding,
ECCV22(XII:34-51).
Springer DOI 2211
BibRef
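Stochastic rounding, as used in low-precision training schemes like the entry above, rounds a value up with probability equal to its fractional part, so the quantized value is unbiased in expectation. A generic sketch, not the paper's generator-free framework:

```python
import numpy as np

def stochastic_round(x, scale=2**4, rng=None):
    """Quantize x to a fixed-point grid of spacing 1/scale.

    Rounds up with probability equal to the fractional part, so
    E[stochastic_round(x)] == x (unbiased in expectation)."""
    rng = np.random.default_rng() if rng is None else rng
    scaled = x * scale
    floor = np.floor(scaled)
    frac = scaled - floor
    rounded = floor + (rng.random(np.shape(x)) < frac)
    return rounded / scale

rng = np.random.default_rng(0)
x = np.full(100_000, 0.3)
q = stochastic_round(x, scale=4, rng=rng)   # grid spacing 0.25
# q contains only 0.25 and 0.5, and its mean stays close to 0.3
```

Unbiasedness is what lets gradient noise average out over many updates, which is why stochastic rounding is preferred over round-to-nearest in very low precision training.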

Huang, T.[Tao], You, S.[Shan], Zhang, B.[Bohan], Du, Y.X.[Yu-Xuan], Wang, F.[Fei], Qian, C.[Chen], Xu, C.[Chang],
DyRep: Bootstrapping Training with Dynamic Re-parameterization,
CVPR22(578-587)
IEEE DOI 2210
Training, Runtime, Costs, Codes, Computational modeling, Pattern recognition, retrieval BibRef

Tang, Z.D.[Ze-Dong], Jiang, F.L.[Fen-Long], Gong, M.[Maoguo], Li, H.[Hao], Wu, Y.[Yue], Yu, F.[Fan], Wang, Z.D.[Zi-Dong], Wang, M.[Min],
SKFAC: Training Neural Networks with Faster Kronecker-Factored Approximate Curvature,
CVPR21(13474-13482)
IEEE DOI 2111
Training, Deep learning, Dimensionality reduction, Neural networks, Text categorization, Approximation algorithms, Robustness BibRef

Arenas, R.T., Delmas, P.J., Strozzi, A.G.,
Development of a Virtual Environment Based Image Generation Tool for Neural Network Training,
IVCNZ20(1-6)
IEEE DOI 2012
Training, Visualization, Image recognition, Neural networks, Virtual environments, Tools, BibRef

Cao, Z., Zhang, K., Wu, J.,
FPB: Improving Multi-Scale Feature Representation Inside Convolutional Layer Via Feature Pyramid Block,
ICIP20(1666-1670)
IEEE DOI 2011
Convolution, Lesions, Task analysis, Training, Biomedical imaging, Biological system modeling, Diseases, multi-scale features, feature pyramid block BibRef

Frerix, T.[Thomas], Nießner, M.[Matthias], Cremers, D.[Daniel],
Homogeneous Linear Inequality Constraints for Neural Network Activations,
DeepVision20(3229-3234)
IEEE DOI 2008
Training, Neural networks, Computational modeling, Task analysis, Optimization, Machine learning, Computer architecture BibRef

Semih Kayhan, O., van Gemert, J.C.,
On Translation Invariance in CNNs: Convolutional Layers Can Exploit Absolute Spatial Location,
CVPR20(14262-14273)
IEEE DOI 2008
Convolution, Visualization, Machine learning, Standards, Kernel, Training BibRef

Zhou, Y.Z.[Yi-Zhou], Sun, X.Y.[Xiao-Yan], Luo, C.[Chong], Zha, Z.J.[Zheng-Jun], Zeng, W.J.[Wen-Jun],
Spatiotemporal Fusion in 3D CNNs: A Probabilistic View,
CVPR20(9826-9835)
IEEE DOI 2008
CNNs for video. Spatiotemporal phenomena, Training, Convolutional codes, Kernel BibRef

Kim, I.[Ildoo], Baek, W.[Woonhyuk], Kim, S.[Sungwoong],
Spatially Attentive Output Layer for Image Classification,
CVPR20(9530-9539)
IEEE DOI 2008
Task analysis, Convolution, Aggregates, Training, Semantics BibRef

Wang, J., Chen, Y., Chakraborty, R., Yu, S.X.,
Orthogonal Convolutional Neural Networks,
CVPR20(11502-11512)
IEEE DOI 2008
Kernel, Training, Convolutional codes, Redundancy, Task analysis, Matrix converters, Convolutional neural networks BibRef

Zhang, X., Liu, S., Zhang, R., Liu, C., Huang, D., Zhou, S., Guo, J., Guo, Q., Du, Z., Zhi, T., Chen, Y.,
Fixed-Point Back-Propagation Training,
CVPR20(2327-2335)
IEEE DOI 2008
Training, Quantization (signal), Neural networks, Convergence, Machine learning, Network architecture, Convolution BibRef

Wu, C.Y.[Chao-Yuan], Girshick, R.[Ross], He, K.M.[Kai-Ming], Feichtenhofer, C.[Christoph], Krähenbühl, P.[Philipp],
A Multigrid Method for Efficiently Training Video Models,
CVPR20(150-159)
IEEE DOI 2008
Training, Shape, Computational modeling, Multigrid methods, Schedules, Biological system modeling, Numerical models BibRef

Benbihi, A., Geist, M., Pradalier, C.,
ELF: Embedded Localisation of Features in Pre-Trained CNN,
ICCV19(7939-7948)
IEEE DOI 2004
convolutional neural nets, feature extraction, image matching, SLAM (robots), ELF, embedded localisation of features, CNN, BibRef

Cai, Q.[Qi], Pan, Y.W.[Ying-Wei], Ngo, C.W.[Chong-Wah], Tian, X.[Xinmei], Duan, L.Y.[Ling-Yu], Yao, T.[Ting],
Exploring Object Relation in Mean Teacher for Cross-Domain Detection,
CVPR19(11449-11458).
IEEE DOI 2002
Using synthetic (rendered) data to train. BibRef

Ding, R.Z.[Rui-Zhou], Chin, T.W.[Ting-Wu], Liu, Z.Y.[Ze-Ye], Marculescu, D.[Diana],
Regularizing Activation Distribution for Training Binarized Deep Networks,
CVPR19(11400-11409).
IEEE DOI 2002
BibRef

Zou, F.Y.[Fang-Yu], Shen, L.[Li], Jie, Z.[Zequn], Zhang, W.Z.[Wei-Zhong], Liu, W.[Wei],
A Sufficient Condition for Convergences of Adam and RMSProp,
CVPR19(11119-11127).
IEEE DOI 2002
Adaptive stochastic algorithms for training deep neural networks. BibRef

Cheng, H.[Hao], Lian, D.Z.[Dong-Ze], Deng, B.[Bowen], Gao, S.H.[Sheng-Hua], Tan, T.[Tao], Geng, Y.L.[Yan-Lin],
Local to Global Learning: Gradually Adding Classes for Training Deep Neural Networks,
CVPR19(4743-4751).
IEEE DOI 2002
BibRef

Laermann, J.[Jan], Samek, W.[Wojciech], Strodthoff, N.[Nils],
Achieving Generalizable Robustness of Deep Neural Networks by Stability Training,
GCPR19(360-373).
Springer DOI 1911
BibRef

Zhang, Z.W.[Zheng-Wen], Yang, J.[Jian], Zhang, Z.L.[Zi-Lin], Li, Y.[Yan],
Cross-Training Deep Neural Networks for Learning from Label Noise,
ICIP19(4100-4104)
IEEE DOI 1910
Deal with label corruption. Cross-training, label noise, curriculum learning, deep neural networks, robustness BibRef

Kamilaris, A.[Andreas], van den Brink, C.[Corjan], Karatsiolis, S.[Savvas],
Training Deep Learning Models via Synthetic Data: Application in Unmanned Aerial Vehicles,
CAIPWS19(81-90).
Springer DOI 1909
BibRef

Cui, L.X.[Li-Xin], Bai, L.[Lu], Rossi, L.[Luca], Wang, Y.[Yue], Jiao, Y.H.[Yu-Hang], Hancock, E.R.[Edwin R.],
A Deep Hybrid Graph Kernel Through Deep Learning Networks,
ICPR18(1030-1035)
IEEE DOI 1812
Kernel, Decoding, Training, Tools, Convolution, Reliability BibRef

Ma, J.B.[Jia-Bin], Guo, W.Y.[Wei-Yu], Wang, W.[Wei], Wang, L.[Liang],
RotateConv: Making Asymmetric Convolutional Kernels Rotatable,
ICPR18(55-60)
IEEE DOI 1812
Kernel, Shape, Convolution, Training, Visualization, Computational modeling, Image coding BibRef

An, W.P.[Wang-Peng], Wang, H.Q.[Hao-Qian], Sun, Q.Y.[Qing-Yun], Xu, J.[Jun], Dai, Q.H.[Qiong-Hai], Zhang, L.[Lei],
A PID Controller Approach for Stochastic Optimization of Deep Networks,
CVPR18(8522-8531)
IEEE DOI 1812
Optimization, Training, Acceleration, PD control, PI control, Neural networks BibRef

Mostajabi, M.[Mohammadreza], Maire, M.[Michael], Shakhnarovich, G.[Gregory],
Regularizing Deep Networks by Modeling and Predicting Label Structure,
CVPR18(5629-5638)
IEEE DOI 1812
Training, Task analysis, Decoding, Semantics, Image segmentation, Cats, Convolutional neural networks BibRef

Xie, S.Q.[Shu-Qin], Chen, Z.T.[Zi-Tian], Xu, C.[Chao], Lu, C.W.[Ce-Wu],
Environment Upgrade Reinforcement Learning for Non-differentiable Multi-stage Pipelines,
CVPR18(3810-3819)
IEEE DOI 1812
Training time and complexity. Training, Pose estimation, Task analysis, Pipelines, Feeds, Object detection BibRef

Chang, X.B.[Xiao-Bin], Xiang, T.[Tao], Hospedales, T.M.[Timothy M.],
Scalable and Effective Deep CCA via Soft Decorrelation,
CVPR18(1488-1497)
IEEE DOI 1812
Canonical Correlation Analysis. Decorrelation, Computational modeling, Correlation, Training, Stochastic processes, Optimization, Covariance matrices BibRef

Hara, K., Kataoka, H., Satoh, Y.,
Can Spatiotemporal 3D CNNs Retrace the History of 2D CNNs and ImageNet?,
CVPR18(6546-6555)
IEEE DOI 1812
Kinetic theory, Training, Task analysis, Kernel BibRef

Yang, Y., Zhong, Z., Shen, T., Lin, Z.,
Convolutional Neural Networks with Alternately Updated Clique,
CVPR18(2413-2422)
IEEE DOI 1812
Training, Convolutional neural networks, Network architecture, Visualization, Computational modeling, Recurrent neural networks, Neurons BibRef

Gordon, A., Eban, E., Nachum, O., Chen, B., Wu, H., Yang, T., Choi, E.,
MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks,
CVPR18(1586-1595)
IEEE DOI 1812
Biological neural networks, Training, Computational modeling, Network architecture, Neurons BibRef

Huang, L.[Lei], Yang, D.W.[Da-Wei], Lang, B.[Bo], Deng, J.[Jia],
Decorrelated Batch Normalization,
CVPR18(791-800)
IEEE DOI 1812
Training, Decorrelation, Neural networks, Principal component analysis, Covariance matrices, Matrix decomposition BibRef

Chen, Y.P.[Yun-Peng], Kalantidis, Y.[Yannis], Li, J.S.[Jian-Shu], Yan, S.C.[Shui-Cheng], Feng, J.S.[Jia-Shi],
Multi-fiber Networks for Video Recognition,
ECCV18(I: 364-380).
Springer DOI 1810
Training with video data. BibRef

Mopuri, K.R.[Konda Reddy], Uppala, P.K.[Phani Krishna], Babu, R.V.[R. Venkatesh],
Ask, Acquire, and Attack: Data-Free UAP Generation Using Class Impressions,
ECCV18(IX: 20-35).
Springer DOI 1810
Generate appropriate noise for deep training. BibRef

Jenni, S.[Simon], Favaro, P.[Paolo],
Deep Bilevel Learning,
ECCV18(X: 632-648).
Springer DOI 1810
Cross-validation to improve training. BibRef

Rayar, F.[Frédéric], Uchida, S.[Seiichi],
On Fast Sample Preselection for Speeding up Convolutional Neural Network Training,
SSSPR18(65-75).
Springer DOI 1810
BibRef

Jiang, C., Su, J.,
Gabor Binary Layer in Convolutional Neural Networks,
ICIP18(3408-3412)
IEEE DOI 1809
Training, Feature extraction, Convolutional codes, Image recognition, Shape, Convolutional neural networks, image recognition BibRef

Gillot, P., Benois-Pineau, J., Zemmari, A., Nesterov, Y.,
Increasing Training Stability for Deep CNNs,
ICIP18(3423-3427)
IEEE DOI 1809
Training, Biological neural networks, Optimization, Neurons, Linear programming, Machine learning, Stochastic processes, gradient descent BibRef

Pal, A., Arora, C.,
Making Deep Neural Network Fooling Practical,
ICIP18(3428-3432)
IEEE DOI 1809
Robustness, Perturbation methods, Image edge detection, Neural networks, Training, Distortion, Image generation, Robustness of Adversarial Attacks BibRef

Mancini, M., Bulò, S.R., Caputo, B., Ricci, E.,
Best Sources Forward: Domain Generalization through Source-Specific Nets,
ICIP18(1353-1357)
IEEE DOI 1809
Training, Computational modeling, Visualization, Semantics, Benchmark testing, Machine learning, Deep Learning BibRef

Li, J., Dai, T., Tang, Q., Xing, Y., Xia, S.,
Cyclic Annealing Training Convolutional Neural Networks for Image Classification with Noisy Labels,
ICIP18(21-25)
IEEE DOI 1809
Training, Noise measurement, Cats, Annealing, Robustness, Bagging, Adaptation models, Image Classification, Noisy Labels, Bagging CNNs BibRef

An, W., Wang, H., Zhang, Y., Dai, Q.,
Exponential decay sine wave learning rate for fast deep neural network training,
VCIP17(1-4)
IEEE DOI 1804
gradient methods, image classification, learning (artificial intelligence), neural nets, optimisation, optimization BibRef
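A learning-rate schedule of the kind this title describes can be sketched as a sine oscillation under an exponentially decaying envelope. This is one plausible shape, for illustration only; the exact schedule is defined in the paper.

```python
import numpy as np

def exp_decay_sine_lr(t, lr0=0.1, decay=1e-4, period=1000):
    """Illustrative exponential-decay sine-wave learning rate:
    a sine oscillation scaled into [0, 1] under an exp(-decay*t) envelope."""
    envelope = lr0 * np.exp(-decay * t)
    return envelope * 0.5 * (1.0 + np.sin(2 * np.pi * t / period))
```

The oscillation periodically raises the step size (helping escape sharp minima) while the envelope guarantees eventual convergence of the step size toward zero.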

Gupta, K.[Kavya], Majumdar, A.[Angshul],
Learning autoencoders with low-rank weights,
ICIP17(3899-3903)
IEEE DOI 1803
Artificial neural networks, Biological neural networks, Decoding, Neurons, Noise reduction, Redundancy, Training, autoencoder, nuclear norm BibRef

Chadha, A., Abbas, A., Andreopoulos, Y.,
Compressed-domain video classification with deep neural networks: 'There's way too much information to decode the matrix',
ICIP17(1832-1836)
IEEE DOI 1803
Neural networks, Optical imaging, Optical network units, Standards, Training, classification, video coding BibRef

Bochinski, E., Senst, T., Sikora, T.,
Hyper-parameter optimization for convolutional neural network committees based on evolutionary algorithms,
ICIP17(3924-3928)
IEEE DOI 1803
Error analysis, Evolutionary computation, Kernel, Optimization, Sociology, Statistics, Training, Convolutional Neural Network, MNIST BibRef

Dahia, G., Santos, M., Segundo, M.P.,
A study of CNN outside of training conditions,
ICIP17(3820-3824)
IEEE DOI 1803
Color, Databases, Face, Face recognition, Image color analysis, Machine learning, Training, CNNs, Deep Learning, Face Recognition BibRef

Zhong, Y., Ettinger, G.,
Enlightening Deep Neural Networks with Knowledge of Confounding Factors,
CEFR-LCV17(1077-1086)
IEEE DOI 1802
Biological neural networks, Data models, Neurons, Object recognition, Training BibRef

Kolkin, N., Shakhnarovich, G., Shechtman, E.,
Training Deep Networks to be Spatially Sensitive,
ICCV17(5669-5678)
IEEE DOI 1802
Spatial issues. approximation theory, computational complexity, gradient methods, image denoising, image segmentation, Training BibRef

Yu, A.[Aron], Grauman, K.[Kristen],
Semantic Jitter: Dense Supervision for Visual Comparisons via Synthetic Images,
ICCV17(5571-5580)
IEEE DOI 1802
Augment real training images with artificial noisy images. image processing, learning (artificial intelligence), dense supervision, fashion images, semantic jitter, Visualization BibRef

Zhang, F.H.[Fei-Hu], Wah, B.W.[Benjamin W.],
Supplementary Meta-Learning: Towards a Dynamic Model for Deep Neural Networks,
ICCV17(4354-4363)
IEEE DOI 1802
Network results depend on the input image. image classification, image resolution, learning (artificial intelligence), neural nets, MLNN, SNN, Training BibRef

Kamiya, R., Yamashita, T., Ambai, M., Sato, I., Yamauchi, Y., Fujiyoshi, H.,
Binary-Decomposed DCNN for Accelerating Computation and Compressing Model Without Retraining,
CEFR-LCV17(1095-1102)
IEEE DOI 1802
Acceleration, Approximation algorithms, Computational modeling, Image recognition, Matrix decomposition, Quantization (signal) BibRef

Li, Y.H.[Yang-Hao], Wang, N.Y.[Nai-Yan], Liu, J.Y.[Jia-Ying], Hou, X.D.[Xiao-Di],
Factorized Bilinear Models for Image Recognition,
ICCV17(2098-2106)
IEEE DOI 1802
Added layer to CNN. convolution, image recognition, image representation, learning (artificial intelligence), matrix decomposition, Training BibRef

Xie, D., Xiong, J., Pu, S.,
All You Need is Beyond a Good Init: Exploring Better Solution for Training Extremely Deep Convolutional Neural Networks with Orthonormality and Modulation,
CVPR17(5075-5084)
IEEE DOI 1711
Convolution, Jacobian matrices, Modulation, Network architecture, Neural networks, Training BibRef

Chen, B.H.[Bing-Hui], Deng, W.H.[Wei-Hong], Du, J.P.[Jun-Ping],
Noisy Softmax: Improving the Generalization Ability of DCNN via Postponing the Early Softmax Saturation,
CVPR17(4021-4030)
IEEE DOI 1711
Annealing, Noise measurement, Robustness, Standards, Telecommunications, Training BibRef

Patrini, G.[Giorgio], Rozza, A.[Alessandro], Menon, A.K.[Aditya Krishna], Nock, R.[Richard], Qu, L.Z.[Li-Zhen],
Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach,
CVPR17(2233-2241)
IEEE DOI 1711
Clothing, Neural networks, Noise measurement, Robustness, Training BibRef

Kokkinos, I.,
UberNet: Training a Universal Convolutional Neural Network for Low-, Mid-, and High-Level Vision Using Diverse Datasets and Limited Memory,
CVPR17(5454-5463)
IEEE DOI 1711
Discrete wavelet transforms, Estimation, Proposals, Semantics, Training BibRef

Bagherinezhad, H., Rastegari, M., Farhadi, A.,
LCNN: Lookup-Based Convolutional Neural Network,
CVPR17(860-869)
IEEE DOI 1711
Computational modeling, Dictionaries, Machine learning, Neural networks, Solid modeling, Tensile stress, Training BibRef

Juefei-Xu, F.[Felix], Boddeti, V.N., Savvides, M.[Marios],
Local Binary Convolutional Neural Networks,
CVPR17(4284-4293)
IEEE DOI 1711
Computational modeling, Convolution, Encoding, Neural networks, Standards, Training BibRef

Liu, X., Li, S., Kan, M., Shan, S., Chen, X.,
Self-Error-Correcting Convolutional Neural Network for Learning with Noisy Labels,
FG17(111-117)
IEEE DOI 1707
Biological neural networks, Face, Neurons, Noise measurement, Noise robustness, Switches, Training BibRef

Xu, X., Todorovic, S.,
Beam search for learning a deep Convolutional Neural Network of 3D shapes,
ICPR16(3506-3511)
IEEE DOI 1705
Computational modeling, Knowledge transfer, Shape, Solid modeling, Training BibRef

Gwon, Y.[Youngjune], Cha, M.[Miriam], Kung, H.T.,
Deep Sparse-coded Network (DSN),
ICPR16(2610-2615)
IEEE DOI 1705
Backpropagation, Dictionaries, Encoding, Neural networks, Nonhomogeneous media, Training BibRef

Teerapittayanon, S., McDanel, B., Kung, H.T.,
BranchyNet: Fast inference via early exiting from deep neural networks,
ICPR16(2464-2469)
IEEE DOI 1705
Entropy, Feedforward neural networks, Inference algorithms, Optimization, Runtime, Training BibRef
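BranchyNet attaches side classifiers ("branches") at intermediate depths and returns the prediction of the first branch whose softmax entropy falls below a threshold, skipping the rest of the network. A schematic sketch with stand-in branch functions in place of real side classifiers:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps))

def branchy_predict(x, branches, thresholds):
    """Run branch classifiers in order; exit at the first confident one.

    `branches` are callables mapping x to logits (stand-ins for side
    classifiers attached at increasing network depth)."""
    for branch, thr in zip(branches, thresholds):
        p = softmax(branch(x))
        if entropy(p) < thr:              # confident enough: exit early
            return int(np.argmax(p)), branch
    return int(np.argmax(p)), branch      # fall through to the last branch

# Toy example: an early "uncertain" branch and a later "confident" one.
early = lambda x: np.array([0.1, 0.0])    # near-uniform logits -> high entropy
late = lambda x: np.array([5.0, 0.0])     # peaked logits -> low entropy
label, used = branchy_predict(None, [early, late], [0.2, 1.0])
```

Here the early branch is too uncertain (entropy above 0.2), so the prediction comes from the later branch; easy inputs would instead exit early and save the deeper computation.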

Pham, T., Tran, T., Phung, D., Venkatesh, S.,
Faster training of very deep networks via p-norm gates,
ICPR16(3542-3547)
IEEE DOI 1705
Feedforward neural networks, Logic gates, Road transportation, Standards, Training BibRef

Kabkab, M., Hand, E., Chellappa, R.,
On the size of Convolutional Neural Networks and generalization performance,
ICPR16(3572-3577)
IEEE DOI 1705
Boolean functions, Databases, Feedforward neural networks, Probability distribution, Testing, Training BibRef

Uchida, K., Tanaka, M., Okutomi, M.,
Coupled convolution layer for convolutional neural network,
ICPR16(3548-3553)
IEEE DOI 1705
Cells (biology), Convolution, Optical imaging, Photonics, Photoreceptors, Retina, Training BibRef

Wang, Y.Q.[Ye-Qing], Li, Y.[Yi], Porikli, F.M.[Fatih M.],
Finetuning Convolutional Neural Networks for visual aesthetics,
ICPR16(3554-3559)
IEEE DOI 1705
Feature extraction, Machine learning, Neural networks, Semantics, Training, Visualization, Deep learning, visual, aesthetics BibRef

Tobías, L., Ducournau, A., Rousseau, F., Mercier, G., Fablet, R.,
Convolutional Neural Networks for object recognition on mobile devices: A case study,
ICPR16(3530-3535)
IEEE DOI 1705
Biological neural networks, Computational modeling, Feature extraction, Kernel, Mobile handsets, Training, Convolutional Neural Networks, Deep Learning, Machine Learning, Mobile Devices, Object, Detection BibRef

Afridi, M.J., Ross, A., Shapiro, E.M.,
L-CNN: Exploiting labeling latency in a CNN learning framework,
ICPR16(2156-2161)
IEEE DOI 1705
Biomedical imaging, Labeling, Magnetic resonance imaging, Microprocessors, Testing, Training BibRef

Ghaderi, A., Athitsos, V.,
Selective unsupervised feature learning with Convolutional Neural Network (S-CNN),
ICPR16(2486-2490)
IEEE DOI 1705
Classification algorithms, Convolutional codes, Kernel, Neural networks, Search problems, Support vector machines, Training, Artificial Neural Networks, Classification and Clustring, Deep, Learning BibRef

Kirillov, A., Schlesinger, D., Zheng, S., Savchynskyy, B., Torr, P.H.S.[Philip H.S.], Rother, C.,
Joint Training of Generic CNN-CRF Models with Stochastic Optimization,
ACCV16(II: 221-236).
Springer DOI 1704
BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Pooling in Convolutional Neural Networks Implementations .


Last update: Mar 25, 2024 at 16:07:51