14.5.9.6.7 Neural Net Compression

Chapter Contents
CNN. Compression. Efficient Implementation.
See also Neural Net Pruning.
See also Knowledge Distillation.
See also Neural Net Quantization.
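The entries below index two recurring compression operations: magnitude-based pruning (zeroing the smallest-magnitude weights) and uniform quantization (mapping weights onto a small set of discrete levels). A minimal pure-Python sketch of both, not drawn from any cited paper; the function names and the example weight vector are illustrative:

```python
def magnitude_prune(weights, sparsity):
    """Zero out roughly the fraction `sparsity` of weights with smallest |w|.

    Ties at the threshold are all pruned, so slightly more than
    len(weights) * sparsity entries may be zeroed.
    """
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]


def uniform_quantize(weights, bits):
    """Quantize weights to 2**bits uniform levels spanning [min, max]."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1              # number of quantization steps
    scale = (hi - lo) / levels if hi > lo else 1.0
    return [lo + round((w - lo) / scale) * scale for w in weights]


w = [0.91, -0.05, 0.42, 0.01, -0.77, 0.33]
pruned = magnitude_prune(w, 0.5)    # half the weights set to zero
quantized = uniform_quantize(w, 2)  # at most 4 distinct levels
```

Real systems (and several papers below) refine both steps with structured sparsity patterns, learned thresholds, or non-uniform (e.g. log-scale) quantization grids, but the surveyed methods are variations on these primitives.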

Wang, W.[Wei], Zhu, L.Q.[Li-Qiang],
Structured feature sparsity training for convolutional neural network compression,
JVCIR(71), 2020, pp. 102867.
Elsevier DOI 2009
Convolutional neural network, CNN compression, Structured sparsity, Pruning criterion BibRef

Kaplan, C.[Cagri], Bulbul, A.[Abdullah],
Goal driven network pruning for object recognition,
PR(110), 2021, pp. 107468.
Elsevier DOI 2011
Deep learning, Network pruning, Network compressing, Top-down attention, Perceptual visioning BibRef

Yao, K.X.[Kai-Xuan], Cao, F.L.[Fei-Long], Leung, Y.[Yee], Liang, J.[Jiye],
Deep neural network compression through interpretability-based filter pruning,
PR(119), 2021, pp. 108056.
Elsevier DOI 2106
Deep neural network (DNN), Convolutional neural network (CNN), Visualization, Compression BibRef

Gowdra, N.[Nidhi], Sinha, R.[Roopak], MacDonell, S.[Stephen], Yan, W.Q.[Wei Qi],
Mitigating severe over-parameterization in deep convolutional neural networks through forced feature abstraction and compression with an entropy-based heuristic,
PR(119), 2021, pp. 108057.
Elsevier DOI 2106
Convolutional neural networks (CNNs), Depth redundancy, Entropy, Feature compression, EBCLE BibRef

Zhang, H.[Huijie], An, L.[Li], Chu, V.W.[Vena W.], Stow, D.A.[Douglas A.], Liu, X.B.[Xiao-Bai], Ding, Q.H.[Qing-Hua],
Learning Adjustable Reduced Downsampling Network for Small Object Detection in Urban Environments,
RS(13), No. 18, 2021, pp. xx-yy.
DOI Link 2109
BibRef

Kondratyuk, D.[Dan], Yuan, L.Z.[Liang-Zhe], Li, Y.D.[Yan-Dong], Zhang, L.[Li], Tan, M.X.[Ming-Xing], Brown, M.[Matthew], Gong, B.Q.[Bo-Qing],
MoViNets: Mobile Video Networks for Efficient Video Recognition,
CVPR21(16015-16025)
IEEE DOI 2111
Training, Costs, Computational modeling, Memory management, Video sequences, Computational efficiency BibRef

Yu, C.Q.[Chang-Qian], Xiao, B.[Bin], Gao, C.X.[Chang-Xin], Yuan, L.[Lu], Zhang, L.[Lei], Sang, N.[Nong], Wang, J.D.[Jing-Dong],
Lite-HRNet: A Lightweight High-Resolution Network,
CVPR21(10435-10445)
IEEE DOI 2111
Convolutional codes, Bridges, Computational modeling, Pose estimation, Semantics, Pattern recognition BibRef

Li, Y.[Yuchao], Lin, S.H.[Shao-Hui], Liu, J.Z.[Jian-Zhuang], Ye, Q.X.[Qi-Xiang], Wang, M.[Mengdi], Chao, F.[Fei], Yang, F.[Fan], Ma, J.C.[Jin-Cheng], Tian, Q.[Qi], Ji, R.R.[Rong-Rong],
Towards Compact CNNs via Collaborative Compression,
CVPR21(6434-6443)
IEEE DOI 2111
Image coding, Tensors, Sensitivity, Collaboration, Transforms, Performance gain, Pattern recognition BibRef

Shen, Z.Q.[Zhi-Qiang], Liu, Z.[Zechun], Qin, J.[Jie], Huang, L.[Lei], Cheng, K.T.[Kwang-Ting], Savvides, M.[Marios],
S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration,
CVPR21(2165-2174)
IEEE DOI 2111
Code, Learning.
WWW Link. Training, Degradation, Codes, Neural networks, Supervised learning, Predictive models BibRef

Yin, M.[Miao], Sui, Y.[Yang], Liao, S.[Siyu], Yuan, B.[Bo],
Towards Efficient Tensor Decomposition-Based DNN Model Compression with Optimization Framework,
CVPR21(10669-10678)
IEEE DOI 2111
Tensors, Image coding, Systematics, Recurrent neural networks, Image recognition, Computational modeling, Convex functions BibRef

Martinez, J.[Julieta], Shewakramani, J.[Jashan], Liu, T.W.[Ting Wei], Bārsan, I.A.[Ioan Andrei], Zeng, W.Y.[Wen-Yuan], Urtasun, R.[Raquel],
Permute, Quantize, and Fine-tune: Efficient Compression of Neural Networks,
CVPR21(15694-15703)
IEEE DOI 2111
Convolutional codes, Visualization, Image coding, Annealing, Vector quantization, Neural networks, Rate-distortion BibRef

Oh, S.[Sangyun], Sim, H.[Hyeonuk], Lee, S.[Sugil], Lee, J.[Jongeun],
Automated Log-Scale Quantization for Low-Cost Deep Neural Networks,
CVPR21(742-751)
IEEE DOI 2111
Training, Deep learning, Image segmentation, Quantization (signal), Semantics, Computer architecture BibRef

Yamamoto, K.[Kohei],
Learnable Companding Quantization for Accurate Low-bit Neural Networks,
CVPR21(5027-5036)
IEEE DOI 2111
Training, Quantization (signal), Limiting, Memory management, Neural networks, Object detection, Table lookup BibRef

Lee, J.[Junghyup], Kim, D.[Dohyung], Ham, B.[Bumsub],
Network Quantization with Element-wise Gradient Scaling,
CVPR21(6444-6453)
IEEE DOI 2111
Training, Deep learning, Quantization (signal), Computer architecture, Network architecture, Hardware BibRef

Jaume, G.[Guillaume], Pati, P.[Pushpak], Bozorgtabar, B.[Behzad], Foncubierta, A.[Antonio], Anniciello, A.M.[Anna Maria], Feroce, F.[Florinda], Rau, T.[Tilman], Thiran, J.P.[Jean-Philippe], Gabrani, M.[Maria], Goksel, O.[Orcun],
Quantifying Explainers of Graph Neural Networks in Computational Pathology,
CVPR21(8102-8112)
IEEE DOI 2111
Measurement, Deep learning, Pathology, Terminology, Satellite broadcasting, Radiology, Breast cancer BibRef

Zhao, S.[Sijie], Yue, T.[Tao], Hu, X.[Xuemei],
Distribution-aware Adaptive Multi-bit Quantization,
CVPR21(9277-9286)
IEEE DOI 2111
Training, Quantization (signal), Sensitivity, Neural networks, Taylor series, Pattern recognition, Resource management BibRef

Kryzhanovskiy, V.[Vladimir], Balitskiy, G.[Gleb], Kozyrskiy, N.[Nikolay], Zuruev, A.[Aleksandr],
QPP: Real-Time Quantization Parameter Prediction for Deep Neural Networks,
CVPR21(10679-10687)
IEEE DOI 2111
Deep learning, Training, Quantization (signal), Runtime, Superresolution, Predictive models, Stability analysis BibRef

Aghli, N.[Nima], Ribeiro, E.[Eraldo],
Combining Weight Pruning and Knowledge Distillation For CNN Compression,
EVW21(3185-3192)
IEEE DOI 2109
Image coding, Neurons, Estimation, Graphics processing units, Computer architecture, Real-time systems, Convolutional neural networks BibRef

Ran, J.[Jie], Lin, R.[Rui], So, H.K.H.[Hayden K.H.], Chesi, G.[Graziano], Wong, N.[Ngai],
Exploiting Elasticity in Tensor Ranks for Compressing Neural Networks,
ICPR21(9866-9873)
IEEE DOI 2105
Training, Tensors, Neural networks, Redundancy, Games, Elasticity, Minimization BibRef

Shah, M.A.[Muhammad A.], Olivier, R.[Raphael], Raj, B.[Bhiksha],
Exploiting Non-Linear Redundancy for Neural Model Compression,
ICPR21(9928-9935)
IEEE DOI 2105
Training, Image coding, Computational modeling, Neurons, Transfer learning, Redundancy, Nonlinear filters BibRef

Bui, K.[Kevin], Park, F.[Fredrick], Zhang, S.[Shuai], Qi, Y.[Yingyong], Xin, J.[Jack],
Nonconvex Regularization for Network Slimming: Compressing CNNs Even More,
ISVC20(I:39-53).
Springer DOI 2103
BibRef

Wang, H.T.[Hao-Tao], Gui, S.P.[Shu-Peng], Yang, H.C.[Hai-Chuan], Liu, J.[Ji], Wang, Z.Y.[Zhang-Yang],
GAN Slimming: All-in-one GAN Compression by a Unified Optimization Framework,
ECCV20(IV:54-73).
Springer DOI 2011
BibRef

Guo, J., Ouyang, W., Xu, D.,
Multi-Dimensional Pruning: A Unified Framework for Model Compression,
CVPR20(1505-1514)
IEEE DOI 2008
Tensile stress, Redundancy, Logic gates, Convolution, Solid modeling BibRef

Heo, B.[Byeongho], Kim, J.[Jeesoo], Yun, S.[Sangdoo], Park, H.[Hyojin], Kwak, N.[Nojun], Choi, J.Y.[Jin Young],
A Comprehensive Overhaul of Feature Distillation,
ICCV19(1921-1930)
IEEE DOI 2004
feature extraction, image classification, image segmentation, object detection, distillation loss, Artificial intelligence BibRef

Yu, J., Huang, T.,
Universally Slimmable Networks and Improved Training Techniques,
ICCV19(1803-1811)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. image classification, image resolution, learning (artificial intelligence), mobile computing, Testing BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Neural Net Quantization.


Last update: Nov 30, 2021 at 22:19:38