14.5.9.8.8 Neural Net Pruning

Chapter Contents
CNN. Pruning. Efficient Implementation. And a subset:
See also Neural Net Compression.
See also Neural Net Quantization.
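
Most of the filter- and channel-pruning papers indexed below share one core step: score each convolutional filter with an importance criterion and drop the lowest-scoring ones before fine-tuning. The sketch below is a generic, hypothetical illustration of that step using a plain L1-norm (magnitude) criterion in PyTorch; it is not the method of any specific entry in this section, and the layer, pruning ratio, and toy model are placeholders.

# Illustrative sketch only (assumed setup, not any listed paper's algorithm):
# rank the output filters of one Conv2d by L1 norm and zero the weakest fraction.
import torch
import torch.nn as nn

def prune_filters_by_l1(conv: nn.Conv2d, ratio: float = 0.5) -> torch.Tensor:
    """Zero out the `ratio` fraction of output filters with the smallest L1 norm."""
    with torch.no_grad():
        # One score per output filter: sum of |weights| over (in_channels, kH, kW).
        scores = conv.weight.abs().sum(dim=(1, 2, 3))
        num_prune = int(ratio * scores.numel())
        keep_mask = torch.ones_like(scores, dtype=torch.bool)
        if num_prune > 0:
            prune_idx = torch.argsort(scores)[:num_prune]   # weakest filters
            keep_mask[prune_idx] = False
            conv.weight[~keep_mask] = 0.0                   # zero pruned filters in place
            if conv.bias is not None:
                conv.bias[~keep_mask] = 0.0
    return keep_mask  # True for kept filters; a real pipeline would rebuild a thinner layer

# Toy usage: prune half the filters of a single convolution layer.
layer = nn.Conv2d(16, 32, kernel_size=3, padding=1)
mask = prune_filters_by_l1(layer, ratio=0.5)
print(f"kept {int(mask.sum())} of {mask.numel()} filters")

The criterion is where the entries below mostly differ: Taylor-expansion scores, reconstruction error, LDA separability, learned gates, and similar measures replace the plain L1 norm, while the score-mask-fine-tune loop stays largely the same.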

Chen, S.[Shi], Zhao, Q.[Qi],
Shallowing Deep Networks: Layer-Wise Pruning Based on Feature Representations,
PAMI(41), No. 12, December 2019, pp. 3048-3056.
IEEE DOI 1911
Computational modeling, Computational efficiency, Feature extraction, Task analysis, Convolutional neural networks BibRef

Singh, P.[Pravendra], Kadi, V.S.R.[Vinay Sameer Raja], Namboodiri, V.P.[Vinay P.],
FALF ConvNets: Fatuous auxiliary loss based filter-pruning for efficient deep CNNs,
IVC(93), 2020, pp. 103857.
Elsevier DOI 2001
Filter pruning, Model compression, Convolutional neural network, Image recognition, Deep learning BibRef

Singh, P.[Pravendra], Kadi, V.S.R.[Vinay Sameer Raja], Verma, N., Namboodiri, V.P.[Vinay P.],
Stability Based Filter Pruning for Accelerating Deep CNNs,
WACV19(1166-1174)
IEEE DOI 1904
computer networks, graphics processing units, learning (artificial intelligence), neural nets, Libraries BibRef

Mittal, D.[Deepak], Bhardwaj, S.[Shweta], Khapra, M.M.[Mitesh M.], Ravindran, B.[Balaraman],
Studying the plasticity in deep convolutional neural networks using random pruning,
MVA(30), No. 2, March 2019, pp. 203-216.
Springer DOI 1904
BibRef
Earlier:
Recovering from Random Pruning: On the Plasticity of Deep Convolutional Neural Networks,
WACV18(848-857)
IEEE DOI 1806
image classification, learning (artificial intelligence), neural nets, object detection, RCNN model, class specific pruning, Tuning BibRef

Bhardwaj, S.[Shweta], Srinivasan, M.[Mukundhan], Khapra, M.M.[Mitesh M.],
Efficient Video Classification Using Fewer Frames,
CVPR19(354-363).
IEEE DOI 2002
BibRef

Yang, W.Z.[Wen-Zhu], Jin, L.L.[Li-Lei], Wang, S.[Sile], Cui, Z.C.[Zhen-Chao], Chen, X.Y.[Xiang-Yang], Chen, L.P.[Li-Ping],
Thinning of convolutional neural network with mixed pruning,
IET-IPR(13), No. 5, 18 April 2019, pp. 779-784.
DOI Link 1904
BibRef

Luo, J.H.[Jian-Hao], Zhang, H.[Hao], Zhou, H.Y.[Hong-Yu], Xie, C.W.[Chen-Wei], Wu, J.X.[Jian-Xin], Lin, W.Y.[Wei-Yao],
ThiNet: Pruning CNN Filters for a Thinner Net,
PAMI(41), No. 10, October 2019, pp. 2525-2538.
IEEE DOI 1909
Convolution, Computational modeling, Task analysis, Acceleration, Training, Neural networks, Image coding, model compression BibRef

Ide, H.[Hidenori], Kobayashi, T.[Takumi], Watanabe, K.[Kenji], Kurita, T.[Takio],
Robust pruning for efficient CNNs,
PRL(135), 2020, pp. 90-98.
Elsevier DOI 2006
CNN, Pruning, Empirical classification loss, Taylor expansion BibRef

Kang, H.,
Accelerator-Aware Pruning for Convolutional Neural Networks,
CirSysVideo(30), No. 7, July 2020, pp. 2093-2103.
IEEE DOI 2007
Accelerator architectures, Field programmable gate arrays, Convolutional codes, Acceleration, Convolutional neural networks, neural network accelerator BibRef

Tsai, C.Y.[Chun-Ya], Gao, D.Q.[De-Qin], Ruan, S.J.[Shanq-Jang],
An effective hybrid pruning architecture of dynamic convolution for surveillance videos,
JVCIR(70), 2020, pp. 102798.
Elsevier DOI 2007
Optimize CNN, Dynamic convolution, Pruning, Smart surveillance application BibRef

Wang, Z., Hong, W., Tan, Y., Yuan, J.,
Pruning 3D Filters For Accelerating 3D ConvNets,
MultMed(22), No. 8, August 2020, pp. 2126-2137.
IEEE DOI 2007
Acceleration, Feature extraction, Task analysis, Maximum Abs. of Filters (MAF) BibRef

Luo, J.H.[Jian-Hao], Wu, J.X.[Jian-Xin],
AutoPruner: An end-to-end trainable filter pruning method for efficient deep model inference,
PR(107), 2020, pp. 107461.
Elsevier DOI 2008
Neural network pruning, Model compression, CNN acceleration BibRef

Ding, G., Zhang, S., Jia, Z., Zhong, J., Han, J.,
Where to Prune: Using LSTM to Guide Data-Dependent Soft Pruning,
IP(30), 2021, pp. 293-304.
IEEE DOI 2012
Computational modeling, Reinforcement learning, Image coding, Training, Convolution, Tensors, image classification BibRef

Tian, Q.[Qing], Arbel, T.[Tal], Clark, J.J.[James J.],
Task dependent deep LDA pruning of neural networks,
CVIU(203), 2021, pp. 103154.
Elsevier DOI 2101
Deep neural networks pruning, Deep linear discriminant analysis, Deep feature learning BibRef

Tian, G., Chen, J., Zeng, X., Liu, Y.,
Pruning by Training: A Novel Deep Neural Network Compression Framework for Image Processing,
SPLetters(28), 2021, pp. 344-348.
IEEE DOI 2102
Collaboration, Training, Computational modeling, Neural networks, Convolution, Optimization, Size measurement, model compression BibRef

Guo, J.Y.[Jin-Yang], Zhang, W.C.[Wei-Chen], Ouyang, W.L.[Wan-Li], Xu, D.[Dong],
Model Compression Using Progressive Channel Pruning,
CirSysVideo(31), No. 3, March 2021, pp. 1114-1124.
IEEE DOI 2103
Acceleration, Adaptation models, Convolution, Supervised learning, Neural networks, Computational modeling, Model compression, transfer learning BibRef

Alqahtani, A.[Ali], Xie, X.H.[Xiang-Hua], Jones, M.W.[Mark W.], Essa, E.[Ehab],
Pruning CNN filters via quantifying the importance of deep visual representations,
CVIU(208-209), 2021, pp. 103220.
Elsevier DOI 2106
BibRef
Earlier: A1, A2, A4, A3:
Neuron-based Network Pruning Based on Majority Voting,
ICPR21(3090-3097)
IEEE DOI 2105
Deep learning, Convolutional neural networks, Filter pruning, Model compression. Training, Neurons, Memory management, Computational efficiency, Complexity theory BibRef

Wang, Y.[Yooseung], Park, H.[Hyunseong], Lee, J.[Jwajin],
Memory-Free Stochastic Weight Averaging by One-Way Variational Pruning,
SPLetters(28), 2021, pp. 1021-1025.
IEEE DOI 2106
Training, Computational modeling, Stochastic processes, Brain modeling, Trajectory, Mathematical model, neural network pruning BibRef

Li, G.[Guo], Xu, G.[Gang],
Providing clear pruning threshold: A novel CNN pruning method via L0 regularisation,
IET-IPR(15), No. 2, 2021, pp. 405-418.
DOI Link 2106
BibRef

Liu, Y.X.[Yi-Xin], Guo, Y.[Yong], Guo, J.X.[Jia-Xin], Jiang, L.Q.[Luo-Qian], Chen, J.[Jian],
Conditional Automated Channel Pruning for Deep Neural Networks,
SPLetters(28), 2021, pp. 1275-1279.
IEEE DOI 2107
Computational modeling, Image coding, Optimization, Markov processes, Signal processing algorithms, Search problems, model compression BibRef

Osaku, D., Gomes, J.F., Falcão, A.X.,
Convolutional neural network simplification with progressive retraining,
PRL(150), 2021, pp. 235-241.
Elsevier DOI 2109
Kernel pruning, Deep learning, Image classification BibRef

Fan, F.G.[Fu-Gui], Su, Y.T.[Yu-Ting], Jing, P.G.[Pei-Guang], Lu, W.[Wei],
A Dual Rank-Constrained Filter Pruning Approach for Convolutional Neural Networks,
SPLetters(28), 2021, pp. 1734-1738.
IEEE DOI 2109
Manifolds, Adaptation models, Correlation, Adaptive systems, Adaptive filters, Information filters, high-rank BibRef

Liu, Z.C.[Ze-Chun], Zhang, X.Y.[Xiang-Yu], Shen, Z.Q.[Zhi-Qiang], Wei, Y.C.[Yi-Chen], Cheng, K.T.[Kwang-Ting], Sun, J.[Jian],
Joint Multi-Dimension Pruning via Numerical Gradient Update,
IP(30), 2021, pp. 8034-8045.
IEEE DOI 2109
Estimation, Numerical models, Optimization, Training, Task analysis, Spatial resolution, Adaptation models, multi-dimension BibRef

Tan, J.H.[Jia Huei], Chan, C.S.[Chee Seng], Chuah, J.H.[Joon Huang],
End-to-End Supermask Pruning: Learning to Prune Image Captioning Models,
PR(122), 2022, pp. 108366.
Elsevier DOI 2112
Image captioning, Deep network compression, Deep learning BibRef

Liu, Z.F.[Zhou-Feng], Liu, X.H.[Xiao-Hui], Li, C.L.[Chun-Lei], Ding, S.M.[Shu-Min], Liao, L.[Liang],
Learning compact ConvNets through filter pruning based on the saliency of a feature map,
IET-IPR(16), No. 1, 2022, pp. 123-133.
DOI Link 2112
BibRef

Ioannidis, V.N.[Vassilis N.], Chen, S.[Siheng], Giannakis, G.B.[Georgios B.],
Efficient and Stable Graph Scattering Transforms via Pruning,
PAMI(44), No. 3, March 2022, pp. 1232-1246.
IEEE DOI 2202
Scattering, Transforms, Feature extraction, Stability analysis, Perturbation methods, Convolution BibRef

Tofigh, S.[Sadegh], Ahmad, M.O.[M. Omair], Swamy, M.N.S.,
A Low-Complexity Modified ThiNet Algorithm for Pruning Convolutional Neural Networks,
SPLetters(29), 2022, pp. 1012-1016.
IEEE DOI 2205
Signal processing algorithms, Convolution, Convolutional neural networks, Training, Testing, Tensors, ThiNet algorithm BibRef

Zhang, K.[Ke], Liu, G.Z.[Guang-Zhe],
Layer Pruning for Obtaining Shallower ResNets,
SPLetters(29), 2022, pp. 1172-1176.
IEEE DOI 2205
Training, Taylor series, Convolution, Cost function, Sorting, Signal processing algorithms, Kernel, Layer pruning, network pruning BibRef

Wang, H.Y.[Huan-Yu], Zhang, Y.S.[Yong-Shun], Wu, J.X.[Jian-Xin],
Versatile, full-spectrum, and swift network sampling for model generation,
PR(129), 2022, pp. 108729.
Elsevier DOI 2206
Model generation, Convolutional neural networks, Structured pruning, Model compression BibRef

Liu, J.[Jing], Zhuang, B.[Bohan], Zhuang, Z.W.[Zhuang-Wei], Guo, Y.[Yong], Huang, J.Z.[Jun-Zhou], Zhu, J.H.[Jin-Hui], Tan, M.K.[Ming-Kui],
Discrimination-Aware Network Pruning for Deep Model Compression,
PAMI(44), No. 8, August 2022, pp. 4035-4051.
IEEE DOI 2207
Kernel, Computational modeling, Quantization (signal), Training, Adaptation models, Acceleration, Redundancy, Channel pruning, deep neural networks BibRef

Mondal, M.[Milton], Das, B.[Bishshoy], Roy, S.D.[Sumantra Dutta], Singh, P.[Pushpendra], Lall, B.[Brejesh], Joshi, S.D.[Shiv Dutt],
Adaptive CNN filter pruning using global importance metric,
CVIU(222), 2022, pp. 103511.
Elsevier DOI 2209
Adaptive pruning, Convolutional neural networks, Filter pruning, Model compression BibRef

Camci, E.[Efe], Gupta, M.[Manas], Wu, M.[Min], Lin, J.[Jie],
QLP: Deep Q-Learning for Pruning Deep Neural Networks,
CirSysVideo(32), No. 10, October 2022, pp. 6488-6501.
IEEE DOI 2210
Training, Neural networks, Indexes, Deep learning, Biological neural networks, Task analysis, deep reinforcement learning BibRef

Xu, K.X.[Kai-Xin], Wang, Z.[Zhe], Geng, X.[Xue], Wu, M.[Min], Li, X.L.[Xiao-Li], Lin, W.S.[Wei-Si],
Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks,
ICCV23(17401-17411)
IEEE DOI Code:
WWW Link. 2401
BibRef

Zhang, H.N.[Hao-Nan], Liu, L.J.[Long-Jun], Zhou, H.Y.[Heng-Yi], Si, L.[Liang], Sun, H.B.[Hong-Bin], Zheng, N.N.[Nan-Ning],
FCHP: Exploring the Discriminative Feature and Feature Correlation of Feature Maps for Hierarchical DNN Pruning and Compression,
CirSysVideo(32), No. 10, October 2022, pp. 6807-6820.
IEEE DOI 2210
Frequency modulation, Correlation, Feature extraction, Matrix decomposition, Indexes, Neural networks, Sparse matrices, feature correlation BibRef

Li, Y.[Yun], Liu, Z.C.[Ze-Chun], Wu, W.Q.[Wei-Qun], Yao, H.T.[Hao-Tian], Zhang, X.Y.[Xiang-Yu], Zhang, C.[Chi], Yin, B.[Baoqun],
Weight-Dependent Gates for Network Pruning,
CirSysVideo(32), No. 10, October 2022, pp. 6941-6954.
IEEE DOI 2210
Logic gates, Information filters, Encoding, Hardware, Switches, Training, Manuals, Weight-dependent gates, network pruning BibRef

Wang, J.L.[Jie-Lei], Cui, Z.Y.[Zong-Yong], Zang, Z.P.[Zhi-Peng], Meng, X.J.[Xiang-Jie], Cao, Z.[Zongjie],
Absorption Pruning of Deep Neural Network for Object Detection in Remote Sensing Imagery,
RS(14), No. 24, 2022, pp. xx-yy.
DOI Link 2212
BibRef

Yvinec, E.[Edouard], Dapogny, A.[Arnaud], Cord, M.[Matthieu], Bailly, K.[Kevin],
RED++: Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging,
PAMI(45), No. 3, March 2023, pp. 3664-3676.
IEEE DOI 2302
Neurons, Merging, Redundancy, Training, Quantization (signal), Kernel, Tensors, Deep learning, pruning, data-free, machine learning, neural networks BibRef

Lin, M.B.[Ming-Bao], Zhang, Y.X.[Yu-Xin], Li, Y.C.[Yu-Chao], Chen, B.[Bohong], Chao, F.[Fei], Wang, M.D.[Meng-Di], Li, S.[Shen], Tian, Y.H.[Yong-Hong], Ji, R.R.[Rong-Rong],
1xN Pattern for Pruning Convolutional Neural Networks,
PAMI(45), No. 4, April 2023, pp. 3999-4008.
IEEE DOI 2303
Kernel, Indexes, Convolutional neural networks, Training, Shape, Filtering algorithms, Convolution, Network pruning, CNNs BibRef

Zhong, Y.S.[Yun-Shan], Lin, M.B.[Ming-Bao], Chen, M.Z.[Meng-Zhao], Li, K.[Ke], Shen, Y.H.[Yun-Hang], Chao, F.[Fei], Wu, Y.J.[Yong-Jian], Ji, R.R.[Rong-Rong],
Fine-grained Data Distribution Alignment for Post-Training Quantization,
ECCV22(XI:70-86).
Springer DOI 2211
BibRef

Celaya, A.[Adrian], Actor, J.A.[Jonas A.], Muthusivarajan, R.[Rajarajesawari], Gates, E.[Evan], Chung, C.[Caroline], Schellingerhout, D.[Dawid], Riviere, B.[Beatrice], Fuentes, D.[David],
PocketNet: A Smaller Neural Network for Medical Image Analysis,
MedImg(42), No. 4, April 2023, pp. 1172-1184.
IEEE DOI 2304
Convolution, Training, Biomedical imaging, Neural networks, Deep learning, Task analysis, Memory management, Neural network, pattern recognition and classification BibRef

Cho, Y.[Yucheol], Ham, G.[Gyeongdo], Lee, J.H.[Jae-Hyeok], Kim, D.[Daeshik],
Ambiguity-aware robust teacher (ART): Enhanced self-knowledge distillation framework with pruned teacher network,
PR(140), 2023, pp. 109541.
Elsevier DOI 2305
Knowledge distillation, Self-knowledge distillation, Network pruning, Teacher-student model, Long-tail samples, Data augmentation BibRef

Chen, W.H.[Wei-Han], Wang, P.S.[Pei-Song], Cheng, J.[Jian],
Towards Automatic Model Compression via a Unified Two-Stage Framework,
PR(140), 2023, pp. 109527.
Elsevier DOI 2305
Deep neural networks, Model compression, Quantization, Pruning BibRef

Li, G.Q.[Guo-Qiang], Liu, B.[Bowen], Chen, A.B.[An-Bang],
DDFP: A data driven filter pruning method with pruning compensation,
JVCIR(94), 2023, pp. 103833.
Elsevier DOI 2306
Data driven, Model compression, Filter pruning, Pruning compensation BibRef

Lei, Y.[Yu], Wang, D.[Dayu], Yang, S.[Shenghui], Shi, J.[Jiao], Tian, D.[Dayong], Min, L.T.[Ling-Tong],
Network Collaborative Pruning Method for Hyperspectral Image Classification Based on Evolutionary Multi-Task Optimization,
RS(15), No. 12, 2023, pp. xx-yy.
DOI Link 2307
BibRef

Li, P.[Ping], Cao, J.C.[Jia-Chen], Yuan, L.[Li], Ye, Q.H.[Qing-Hao], Xu, X.H.[Xiang-Hua],
Truncated attention-aware proposal networks with multi-scale dilation for temporal action detection,
PR(142), 2023, pp. 109684.
Elsevier DOI 2307
Temporal action detection, Attention mechanism, Graph convolution, Multi-scale dilation, Proposal network BibRef

Lu, X.T.[Xiao-Tong], Dong, W.S.[Wei-Sheng], Li, X.[Xin], Wu, J.J.[Jin-Jian], Li, L.[Leida], Shi, G.M.[Guang-Ming],
Adaptive Search-and-Training for Robust and Efficient Network Pruning,
PAMI(45), No. 8, August 2023, pp. 9325-9338.
IEEE DOI 2307
Training, Knowledge engineering, Adaptive systems, Kernel, Optimization, Neural networks, Convolution, Knowledge distillation, network pruning BibRef

Ye, H.C.[Han-Cheng], Zhang, B.[Bo], Chen, T.[Tao], Fan, J.Y.[Jia-Yuan], Wang, B.[Bin],
Performance-Aware Approximation of Global Channel Pruning for Multitask CNNs,
PAMI(45), No. 8, August 2023, pp. 10267-10284.
IEEE DOI 2307
Task analysis, Information filters, Adaptation models, Analytical models, Predictive models, Sensitivity, sequentially greedy algorithm BibRef

Zhang, X.[Xin], Xie, W.Y.[Wei-Ying], Li, Y.S.[Yun-Song], Jiang, K.[Kai], Fang, L.Y.[Le-Yuan],
REAF: Remembering Enhancement and Entropy-Based Asymptotic Forgetting for Filter Pruning,
IP(32), 2023, pp. 3912-3923.
IEEE DOI 2307
Training, Mathematical models, Costs, Convolution, Data models, Robustness, Image coding, Filter pruning, remembering enhancement, entropy-based asymptotic forgetting BibRef

Ghimire, D.[Deepak], Lee, K.[Kilho], Kim, S.H.[Seong-Heum],
Loss-aware automatic selection of structured pruning criteria for deep neural network acceleration,
IVC(136), 2023, pp. 104745.
Elsevier DOI 2308
Deep neural networks, Structured pruning, Pruning criteria BibRef

Niu, T.[Tao], Teng, Y.[Yinglei], Jin, L.[Lei], Zou, P.P.[Pan-Pan], Liu, Y.D.[Yi-Ding],
Pruning-and-distillation: One-stage joint compression framework for CNNs via clustering,
IVC(136), 2023, pp. 104743.
Elsevier DOI 2308
Filter pruning, Clustering, Knowledge distillation, Deep neural networks BibRef

Lv, X.W.[Xian-Wei], Persello, C.[Claudio], Zhao, W.F.[Wu-Fan], Huang, X.[Xiao], Hu, Z.W.[Zhong-Wen], Ming, D.P.[Dong-Ping], Stein, A.[Alfred],
Pruning for image segmentation: Improving computational efficiency for large-scale remote sensing applications,
PandRS(202), 2023, pp. 13-29.
Elsevier DOI 2308
Image segmentation, Region-merging, Region adjacency graph, Nearest neighbour graph, Pruning BibRef

Mondal, M.[Milton], Das, B.[Bishshoy], Lall, B.[Brejesh], Singh, P.[Pushpendra], Roy, S.D.[Sumantra Dutta], Joshi, S.D.[Shiv Dutt],
Feature independent Filter Pruning by Successive Layers analysis,
CVIU(236), 2023, pp. 103828.
Elsevier DOI 2310
Feature independent filter pruning, Convolutional neural networks, Deep learning, Model compression BibRef

Zhao, C.L.[Cheng-Long], Zhang, Y.[Yunxiang], Ni, B.B.[Bing-Bing],
Exploiting Channel Similarity for Network Pruning,
CirSysVideo(33), No. 9, September 2023, pp. 5049-5061.
IEEE DOI 2310
BibRef

Liu, X.Y.[Xin-Yu], Li, B.P.[Bao-Pu], Chen, Z.[Zhen], Yuan, Y.X.[Yi-Xuan],
Generalized Gradient Flow Based Saliency for Pruning Deep Convolutional Neural Networks,
IJCV(131), No. 12, December 2023, pp. 3121-3135.
Springer DOI 2311
BibRef

Hou, Y.N.[Yue-Nan], Ma, Z.[Zheng], Liu, C.X.[Chun-Xiao], Wang, Z.[Zhe], Loy, C.C.[Chen Change],
Network pruning via resource reallocation,
PR(145), 2024, pp. 109886.
Elsevier DOI Code:
WWW Link. 2311
Network pruning, Resource reallocation, Searching cost BibRef

Zhang, Y.X.[Yu-Xin], Lin, M.[Mingbao], Zhong, Y.[Yunshan], Chao, F.[Fei], Ji, R.R.[Rong-Rong],
Lottery Jackpots Exist in Pre-Trained Models,
PAMI(45), No. 12, December 2023, pp. 14990-15004.
IEEE DOI 2311
BibRef

Kim, N.J.[Nam Joon], Kim, H.[Hyun],
FP-AGL: Filter Pruning With Adaptive Gradient Learning for Accelerating Deep Convolutional Neural Networks,
MultMed(25), 2023, pp. 5279-5290.
IEEE DOI 2311
BibRef

Wu, H.[Hai], He, R.[Ruifei], Tan, H.[Haoru], Qi, X.J.[Xiao-Juan], Huang, K.B.[Kai-Bin],
Vertical Layering of Quantized Neural Networks for Heterogeneous Inference,
PAMI(45), No. 12, December 2023, pp. 15964-15978.
IEEE DOI 2311
BibRef

Zhang, H.[Haonan], Liu, L.J.[Long-Jun], Kang, B.Y.[Bing-Yao], Zheng, N.N.[Nan-Ning],
Hierarchical Model Compression via Shape-Edge Representation of Feature Maps: An Enlightenment From the Primate Visual System,
MultMed(25), 2023, pp. 6958-6970.
IEEE DOI 2311
BibRef

dos Santos, S.F.[Samuel Felipe], Berriel, R.[Rodrigo], Oliveira-Santos, T.[Thiago], Sebe, N.[Nicu], Almeida, J.[Jurandy],
Budget-aware Pruning for Multi-domain Learning,
CIAP23(II:477-489).
Springer DOI 2312
BibRef

Berriel, R.F.[Rodrigo F.], Lathuillere, S., Nabi, M., Klein, T., Oliveira-Santos, T., Sebe, N., Ricci, E.,
Budget-Aware Adapters for Multi-Domain Learning,
ICCV19(382-391)
IEEE DOI 2004
computational complexity, learning (artificial intelligence), network theory (graphs). BibRef

Yang, L.[Liu], Gu, S.Q.[Shi-Qiao], Shen, C.Y.[Chen-Yang], Zhao, X.[Xile], Hu, Q.H.[Qing-Hua],
Skeleton Neural Networks via Low-Rank Guided Filter Pruning,
CirSysVideo(33), No. 12, December 2023, pp. 7197-7211.
IEEE DOI 2312
BibRef

Wang, X.D.[Xiao-Dong], Zheng, Z.[Zhedong], He, Y.[Yang], Yan, F.[Fei], Zeng, Z.Q.[Zhi-Qiang], Yang, Y.[Yi],
Progressive Local Filter Pruning for Image Retrieval Acceleration,
MultMed(25), 2023, pp. 9597-9607.
IEEE DOI 2312
BibRef

Wong, K.C.L.[Ken C.L.], Kashyap, S.[Satyananda], Moradi, M.[Mehdi],
Basis scaling and double pruning for efficient inference in network-based transfer learning,
PRL(177), 2024, pp. 1-6.
Elsevier DOI 2401
Network pruning, Transfer learning, Efficient inference, Singular value decomposition, Double pruning BibRef

Jin, X.Q.[Xiao-Qiang], Zhang, D.W.[Da-Wei], Wu, Q.[Qiner], Xiao, X.[Xin], Zhao, P.[Pengsen], Zheng, Z.[Zhonglong],
Improved SiamCAR with ranking-based pruning and optimization for efficient UAV tracking,
IVC(141), 2024, pp. 104886.
Elsevier DOI 2402
Siamese network, Model pruning, Ranking loss, Attention mechanism BibRef

Pan, J.H.[Jian-Hong], Yang, S.Y.[Si-Yuan], Foo, L.G.[Lin Geng], Ke, Q.H.[Qiu-Hong], Rahmani, H.[Hossein], Fan, Z.P.[Zhi-Peng], Liu, J.[Jun],
Progressive Channel-Shrinking Network,
MultMed(26), 2024, pp. 2016-2026.
IEEE DOI Code:
WWW Link. 2402
Training, Indexing, Convolution, Costs, Generators, Feature extraction, Testing, Progressive, network shrinking BibRef

Li, Y.[Yishi], Zhang, Y.H.[Yu-Hao], Lai, R.[Rui],
TinyPillarNet: Tiny Pillar-Based Network for 3D Point Cloud Object Detection at Edge,
CirSysVideo(34), No. 3, March 2024, pp. 1772-1785.
IEEE DOI 2403
To implement on edge hardware. Feature extraction, Point cloud compression, Object detection, Hardware, Memory management, Internet of Things, FPGA BibRef

Kim, N.J.[Nam Joon], Kim, H.[Hyun],
Trunk Pruning: Highly Compatible Channel Pruning for Convolutional Neural Networks Without Fine-Tuning,
MultMed(26), 2024, pp. 5588-5599.
IEEE DOI 2404
Training, Kernel, Scalability, Channel estimation, Taylor series, Probabilistic logic, Indexes, Convolutional Neural Network (CNN), Fine-Tuning BibRef

He, Y.[Yang], Xiao, L.[Lingao],
Structured Pruning for Deep Convolutional Neural Networks: A Survey,
PAMI(46), No. 5, May 2024, pp. 2900-2919.
IEEE DOI Code:
WWW Link. 2404
Surveys, Computational modeling, Information filters, Transformers, Filtering theory, Correlation, Convolutional neural networks, unstructured pruning BibRef
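
The survey above distinguishes structured pruning (removing whole filters or channels) from the unstructured, weight-level pruning noted in its keywords. As a generic, hypothetical sketch of the unstructured case, and not the survey's own procedure, the following PyTorch snippet builds a global magnitude mask that keeps only the largest weights across the whole network; the sparsity level and toy model are placeholders, and in practice the masks are reapplied after each training update.

# Illustrative sketch only (assumed setup): global unstructured magnitude pruning
# via a network-wide threshold on |weight|.
import torch
import torch.nn as nn

def global_magnitude_masks(model: nn.Module, sparsity: float = 0.9):
    """Return {parameter name: boolean mask} keeping the largest (1 - sparsity) weights."""
    weights = {n: p for n, p in model.named_parameters() if p.dim() > 1}  # skip biases/norm params
    all_scores = torch.cat([p.detach().abs().flatten() for p in weights.values()])
    k = int(sparsity * all_scores.numel())
    if k == 0:
        return {n: torch.ones_like(p, dtype=torch.bool) for n, p in weights.items()}
    threshold = torch.kthvalue(all_scores, k).values  # k-th smallest magnitude network-wide
    return {n: (p.detach().abs() > threshold) for n, p in weights.items()}

# Toy usage: mask 90% of the weights of a small MLP.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
masks = global_magnitude_masks(model, sparsity=0.9)
with torch.no_grad():
    for name, param in model.named_parameters():
        if name in masks:
            param *= masks[name]  # zero the pruned weights; reapply after every optimizer step

Because the zeros fall at arbitrary positions, speedups from such masks usually require sparse kernels or hardware support, which is one motivation for the structured and pattern-based (e.g., 1xN) approaches covered by the survey and the surrounding entries.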

He, H.Y.[Hao-Yu], Cai, J.F.[Jian-Fei], Liu, J.[Jing], Pan, Z.Z.[Zi-Zheng], Zhang, J.[Jing], Tao, D.C.[Da-Cheng], Zhuang, B.[Bohan],
Pruning Self-Attentions Into Convolutional Layers in Single Path,
PAMI(46), No. 5, May 2024, pp. 3910-3922.
IEEE DOI 2404
Computational modeling, Transformers, Search problems, Logic gates, Costs, Convolution, Efficient models, vision transformers BibRef

Yang, L.[Liu], Gu, S.Q.[Shi-Qiao], Shen, C.Y.[Chen-Yang], Zhao, X.[Xile], Hu, Q.H.[Qing-Hua],
Soft independence guided filter pruning,
PR(153), 2024, pp. 110488.
Elsevier DOI 2405
Convolutional neural networks, Filter pruning, Filter independence BibRef

Hu, F.[Fuyi], Zhang, J.[Jin], Gao, S.[Song], Lin, Y.[Yu], Zhou, W.[Wei], Wang, R.[Ruxin],
An efficient training-from-scratch framework with BN-based structural compressor,
PR(153), 2024, pp. 110546.
Elsevier DOI 2405
Channel pruning, Model compression, Knowledge distillation, Convolutional Neural Network (CNN) BibRef

Ran, Q.[Qiong], Li, M.W.[Meng-Wei], Zhao, B.[Boya], He, Z.P.[Zhi-Peng], Wu, Y.F.[Yuan-Feng],
L1RR: Model Pruning Using Dynamic and Self-Adaptive Sparsity for Remote-Sensing Target Detection to Prevent Target Feature Loss,
RS(16), No. 11, 2024, pp. 2026.
DOI Link 2406
BibRef

Guo, Y.[Yang], Gao, W.[Wei], Li, G.[Ge],
Interpretable Task-inspired Adaptive Filter Pruning for Neural Networks Under Multiple Constraints,
IJCV(132), No. 6, June 2024, pp. 2060-2076.
Springer DOI 2406
BibRef

Shi, M.[Mengnan], Liu, C.[Chang], Jiao, J.B.[Jian-Bin], Ye, Q.X.[Qi-Xiang],
Self-supervised feature-gate coupling for dynamic network pruning,
PR(154), 2024, pp. 110594.
Elsevier DOI 2406
Contrastive self-supervised learning (CSL), Dynamic network pruning (DNP), Feature-gate coupling, Instance neighborhood relationship BibRef

Luo, H.[Hui], Zhuang, Z.[Zhuangwei], Li, Y.Q.[Yuan-Qing], Tan, M.K.[Ming-Kui], Chen, C.[Cen], Zhang, J.L.[Jian-Lin],
Toward Compact and Robust Model Learning Under Dynamically Perturbed Environments,
CirSysVideo(34), No. 6, June 2024, pp. 4857-4873.
IEEE DOI 2406
Robustness, Data models, Perturbation methods, Training, Computational modeling, Predictive models, Pipelines, adversarial pruning BibRef

Poh, S.C.[Soon Chang], Chan, C.S.[Chee Seng], Lim, C.K.[Chee Kau],
Efficient label-free pruning and retraining for Text-VQA Transformers,
PRL(183), 2024, pp. 1-8.
Elsevier DOI Code:
WWW Link. 2406
Transformer, Pruning, Scene text visual question answering BibRef

Yin, W.F.[Wen-Feng], Dong, G.[Gang], An, D.[Dianzheng], Zhao, Y.Q.[Ya-Qian], Wang, B.Q.[Bin-Qiang],
Easy Pruning via Coresets and Structural Re-Parameterization,
SPLetters(31), 2024, pp. 1725-1729.
IEEE DOI 2407
Accuracy, Noise measurement, Kernel, Convolution, Sun, Signal processing algorithms, Residual neural networks, transfer learning BibRef

Jayasimhan, A.[Anusha], Pabitha, P.,
ResPrune: An energy-efficient restorative filter pruning method using stochastic optimization for accelerating CNN,
PR(155), 2024, pp. 110671.
Elsevier DOI 2408
Model compression, Image classification, Neural networks, Deep learning, Pruning BibRef

Bicici, U.C.[Ufuk Can], Meral, T.H.S.[Tuna Han Salih], Akarun, L.[Lale],
Conditional Information Gain Trellis,
PRL(184), 2024, pp. 212-218.
Elsevier DOI Code:
WWW Link. 2408
Machine learning, Deep learning, Conditional deep learning BibRef

Hedegaard, L.[Lukas], Alok, A.[Aman], Jose, J.[Juby], Iosifidis, A.[Alexandros],
Structured pruning adapters,
PR(156), 2024, pp. 110724.
Elsevier DOI 2408
Transfer learning, Adapters, Pruning, Structured pruning, Parameter efficient, Image classification, Vision transformer BibRef

Fakhfakh, M.[Mohamed], Chaari, L.[Lotfi],
Bayesian Optimization for Sparse Neural Networks With Trainable Activation Functions,
PAMI(46), No. 10, October 2024, pp. 6699-6712.
IEEE DOI 2409
Bayes methods, Task analysis, Standards, Data models, Training, Shape, Probability density function, Activation function, Hamiltonian dynamics BibRef

Diao, H.[Huabin], Li, G.[Gongyan], Xu, S.Y.[Shao-Yun], Kong, C.[Chao], Wang, W.[Wei], Liu, S.[Shuai], He, Y.F.[Yue-Feng],
Self-distillation enhanced adaptive pruning of convolutional neural networks,
PR(157), 2025, pp. 110942.
Elsevier DOI 2409
Convolutional neural networks, Self-distillation, Adaptive pruning BibRef

Li, J.H.[Jia-Hao], Xu, M.[Ming], Chen, H.[He], Liu, W.C.[Wen-Chao], Chen, L.[Liang], Xie, Y.Z.[Yi-Zhuang],
Spatio-Temporal Pruning for Training Ultra-Low-Latency Spiking Neural Networks in Remote Sensing Scene Classification,
RS(16), No. 17, 2024, pp. 3200.
DOI Link 2409
BibRef

Bayasi, N.[Nourhan], Hamarneh, G.[Ghassan], Garbi, R.[Rafeef],
GC2: Generalizable Continual Classification of Medical Images,
MedImg(43), No. 11, November 2024, pp. 3767-3779.
IEEE DOI Code:
WWW Link. 2411
Task analysis, Biomedical imaging, Training, Data models, Knowledge engineering, Image classification, Adaptation models, network pruning BibRef

Cheng, H.R.[Hong-Rong], Zhang, M.[Miao], Shi, J.Q.F.[Javen Qin-Feng],
Influence Function Based Second-Order Channel Pruning: Evaluating True Loss Changes for Pruning is Possible Without Retraining,
PAMI(46), No. 12, December 2024, pp. 9023-9037.
IEEE DOI 2411
Neural networks, Computational modeling, Accuracy, Training, Optimization, Classification algorithms, Task analysis, model compression BibRef

Cheng, H.R.[Hong-Rong], Zhang, M.[Miao], Shi, J.Q.F.[Javen Qin-Feng],
A Survey on Deep Neural Network Pruning: Taxonomy, Comparison, Analysis, and Recommendations,
PAMI(46), No. 12, December 2024, pp. 10558-10578.
IEEE DOI Code:
WWW Link. 2411
Training, Neural networks, Artificial neural networks, Surveys, Taxonomy, Reviews, Computational modeling, edge devices BibRef

Huang, F.[Feicheng], Zhou, W.B.[Wen-Bo], Huang, Y.[Yue], Ding, X.H.[Xing-Hao],
Efficient Training Acceleration via Sample-Wise Dynamic Probabilistic Pruning,
SPLetters(31), 2024, pp. 3034-3038.
IEEE DOI 2411
Training, Probabilistic logic, Data models, Computational modeling, Accuracy, Vectors, Predictive models, Optimization, selection bias BibRef

Liu, D.B.[De-Bin], Bai, X.[Xiang], Zhao, R.N.[Ruo-Nan], Deng, X.J.[Xian-Jun], Yang, L.T.[Laurence T.],
Dual-Grained Lightweight Strategy,
PAMI(46), No. 12, December 2024, pp. 10228-10245.
IEEE DOI 2411
Computational modeling, Training, Tensors, Accuracy, Performance evaluation, Data models, Costs, tensor decomposition BibRef

Nor-Azman, M.N.A.[Muhammad Nor Azzafri], Sheikh, U.U.[Usman Ullah], Mohammed, M.S.[Mohammed Sultan], Sirkunan, J.[Jeevan], Marsono, M.N.[Muhammad Nadzir],
Correlation-Aware Joint Pruning-Quantization using Graph Neural Networks,
ICIP24(1403-1409)
IEEE DOI 2411
Image coding, Correlation, Accuracy, Computational modeling, Object detection, Graph neural networks, Complexity theory, GNN BibRef

Castells, T.[Thibault], Song, H.K.[Hyoung-Kyu], Kim, B.K.[Bo-Kyeong], Choi, S.[Shinkook],
LD-Pruner: Efficient Pruning of Latent Diffusion Models using Task-Agnostic Insights,
EDGE24(821-830)
IEEE DOI 2410
Training, Performance evaluation, Image coding, Image synthesis, Computational modeling, Memory management, Text to image, pruning, task-agnostic BibRef

Ganjdanesh, A.[Alireza], Gao, S.Q.[Shang-Qian], Huang, H.[Heng],
Jointly Training and Pruning CNNs via Learnable Agent Guidance and Alignment,
CVPR24(16058-16069)
IEEE DOI 2410
Training, Accuracy, Computational modeling, Reinforcement learning, Predictive models, Decoding, Trajectory, Efficient Deep Learning, Structural Pruning. BibRef

Gao, S.Q.[Shang-Qian], Zhang, Y.[Yanfu], Huang, F.H.[Fei-Hu], Huang, H.[Heng],
BilevelPruning: Unified Dynamic and Static Channel Pruning for Convolutional Neural Networks,
CVPR24(16090-16100)
IEEE DOI 2410
Costs, Runtime, Convolutional neural networks, Optimization BibRef

Iurada, L.[Leonardo], Ciccone, M.[Marco], Tommasi, T.[Tatiana],
Finding Lottery Tickets in Vision Models via Data-Driven Spectral Foresight Pruning,
CVPR24(16142-16151)
IEEE DOI Code:
WWW Link. 2410
Training, Deep learning, Upper bound, Costs, Computational modeling, Heuristic algorithms, Pruning-at-Initialization, Efficient Neural Network Pruning BibRef

Wu, X.[Xidong], Gao, S.Q.[Shang-Qian], Zhang, Z.[Zeyu], Li, Z.Z.[Zhen-Zhen], Bao, R.[Runxue], Zhang, Y.[Yanfu], Wang, X.Q.[Xiao-Qian], Huang, H.[Heng],
Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch,
CVPR24(16163-16173)
IEEE DOI Code:
WWW Link. 2410
Training, Costs, Computational modeling, Heuristic algorithms, Stochastic processes, Computer architecture, Benchmark testing BibRef

Ilhan, F.[Fatih], Su, G.[Gong], Tekin, S.F.[Selim Furkan], Huang, T.[Tiansheng], Hu, S.[Sihao], Liu, L.[Ling],
Resource-Efficient Transformer Pruning for Finetuning of Large Models,
CVPR24(16206-16215)
IEEE DOI 2410
Computational modeling, Memory management, Graphics processing units, Transformers, vision transformers BibRef

Yu, Z.Y.[Zhi-Yuan], Shen, L.[Li], Ding, L.[Liang], Tian, X.[Xinmei], Chen, Y.X.[Yi-Xin], Tao, D.C.[Da-Cheng],
Sheared Backpropagation for Fine-Tuning Foundation Models,
CVPR24(5883-5892)
IEEE DOI 2410
Training, Backpropagation, Performance evaluation, Accuracy, Costs, Computational modeling, Memory management, Activation Pruning BibRef

Geng, X.Y.[Xin-Yu], Wang, J.M.[Jia-Ming], Gong, J.W.[Jia-Wei], Xue, Y.R.[Yue-Rong], Xu, J.[Jun], Chen, F.L.[Fang-Lin], Huang, X.L.[Xiao-Lin],
OrthCaps: An Orthogonal CapsNet with Sparse Attention Routing and Pruning,
CVPR24(6037-6046)
IEEE DOI 2410
Knowledge engineering, Redundancy, Routing, Robustness, Computational efficiency, Pruning BibRef

Gao, S.Q.[Shang-Qian], Li, J.[Junyi], Zhang, Z.[Zeyu], Zhang, Y.[Yanfu], Cai, W.D.[Wei-Dong], Huang, H.[Heng],
Device-Wise Federated Network Pruning,
CVPR24(12342-12352)
IEEE DOI 2410
Performance evaluation, Training, Deep learning, Costs, Federated learning, Neural networks BibRef

Mitra, P.[Pallavi], Schwalbe, G.[Gesina], Klein, N.[Nadja],
Investigating Calibration and Corruption Robustness of Post-hoc Pruned Perception CNNs: An Image Classification Benchmark Study,
SAIAD24(3542-3552)
IEEE DOI 2410
Performance evaluation, Uncertainty, Image coding, Computational modeling, Robustness, Calibration, pruning, image classification BibRef

Wang, H.J.[Hung-Jui], Wu, Y.Y.[Yu-Yu], Chen, S.T.[Shang-Tse],
Enhancing Targeted Attack Transferability via Diversified Weight Pruning,
AML24(2904-2914)
IEEE DOI 2410
Image coding, Costs, Perturbation methods, Computer architecture, Adversarial Attacks BibRef

Agarwal, P.[Parakh], Mathew, M.[Manu], Patel, K.R.[Kunal Ranjan], Tripathi, V.[Varun], Swami, P.[Pramod],
Prune Efficiently by Soft Pruning,
ECVW24(2210-2217)
IEEE DOI 2410
Training, Performance evaluation, Accuracy, Embedded systems, Instruments, Neural networks, Transformers BibRef

Frickenstein, L.[Lukas], Mori, P.[Pierpaolo], Sampath, S.B.[Shambhavi Balamuthu], Thoma, M.[Moritz], Fasfous, N.[Nael], Vemparala, M.R.[Manoj Rohit], Frickenstein, A.[Alexander], Unger, C.[Christian], Passerone, C.[Claudio], Stechele, W.[Walter],
Pruning as a Binarization Technique,
ECVW24(2131-2140)
IEEE DOI 2410
Training, Accuracy, Quantization (signal), Image coding, Convolution, Semantic segmentation, Neural networks, Binary Neural Network, BNN, CNN BibRef

Huang, H.[Hong], Zhuang, W.M.[Wei-Ming], Chen, C.[Chen], Lyu, L.[Lingjuan],
FedMef: Towards Memory-Efficient Federated Dynamic Pruning,
CVPR24(27538-27547)
IEEE DOI 2410
Performance evaluation, Training, Degradation, Accuracy, Federated learning, Computational modeling, Neural networks BibRef

Sun, X.L.[Xing-Long], Shi, H.[Humphrey],
Towards Better Structured Pruning Saliency by Reorganizing Convolution,
WACV24(2193-2203)
IEEE DOI Code:
WWW Link. 2404
Convolutional codes, Tensors, Convolution, Matrix decomposition, Data mining, Kernel, Algorithms, Machine learning architectures BibRef

Lee, D.[Donghyeon], Lee, E.[Eunho], Hwang, Y.[Youngbae],
Pruning from Scratch via Shared Pruning Module and Nuclear norm-based Regularization,
WACV24(1382-1391)
IEEE DOI Code:
WWW Link. 2404
Training, Image coding, Costs, Codes, Computational modeling, Complexity theory, Algorithms, Embedded sensing / real-time techniques BibRef

Kim, M.[Minchul], Gao, S.Q.[Shang-Qian], Hsu, Y.C.[Yen-Chang], Shen, Y.L.[Yi-Lin], Jin, H.X.[Hong-Xia],
Token Fusion: Bridging the Gap between Token Pruning and Token Merging,
WACV24(1372-1381)
IEEE DOI 2404
Training, Sensitivity, Image synthesis, Computational modeling, Merging, Linearity, Algorithms BibRef

Gupta, A.[Arshita], Bau, T.[Tien], Kim, J.S.[Joon-Soo], Zhu, Z.[Zhe], Jha, S.[Sumit], Garud, H.[Hrishikesh],
Torque based Structured Pruning for Deep Neural Network,
WACV24(2699-2708)
IEEE DOI 2404
Training, Filters, Torque, Memory management, Network architecture, Hardware, Convolutional neural networks, Algorithms, Smartphones / end user devices BibRef

Ding, S.W.[Shi-Wei], Zhang, L.[Lan], Pan, M.[Miao], Yuan, X.Y.[Xiao-Yong],
PATROL: Privacy-Oriented Pruning for Collaborative Inference Against Model Inversion Attacks,
WACV24(4704-4713)
IEEE DOI 2404
Training, Performance evaluation, Privacy, Perturbation methods, Collaboration, Artificial neural networks, Feature extraction, ethical computer vision BibRef

Glandorf, P.[Patrick], Kaiser, T.[Timo], Rosenhahn, B.[Bodo],
HyperSparse Neural Networks: Shifting Exploration to Exploitation through Adaptive Regularization,
REDLCV23(1226-1235)
IEEE DOI 2401
BibRef

Peters, J.[Jorn], Fournarakis, M.[Marios], Nagel, M.[Markus], van Baalen, M.[Mart], Blankevoort, T.[Tijmen],
QBitOpt: Fast and Accurate Bitwidth Reallocation during Training,
REDLCV23(1274-1283)
IEEE DOI 2401
BibRef

Jordao, A.[Artur], de Araújo, G.[George], de Almeida-Maia, H.[Helena], Pedrini, H.[Helio],
When Layers Play the Lottery, all Tickets Win at Initialization,
REDLCV23(1196-1205)
IEEE DOI 2401
BibRef

Spadaro, G.[Gabriele], Renzulli, R.[Riccardo], Bragagnolo, A.[Andrea], Giraldo, J.H.[Jhony H.], Fiandrotti, A.[Attilio], Grangetto, M.[Marco], Tartaglione, E.[Enzo],
Shannon Strikes Again! Entropy-based Pruning in Deep Neural Networks for Transfer Learning under Extreme Memory and Computation Budgets,
REDLCV23(1510-1514)
IEEE DOI 2401
BibRef

Nahon, R.[Rémi], Nguyen, V.T.[Van-Tam], Tartaglione, E.[Enzo],
Mining bias-target Alignment from Voronoi Cells,
ICCV23(4923-4932)
IEEE DOI 2401
BibRef

Li, Z.Y.[Zi-Yu], Tartaglione, E.[Enzo], Nguyen, V.T.[Van-Tam],
SCoTTi: Save Computation at Training Time with an adaptive framework,
REDLCV23(1435-1444)
IEEE DOI 2401
BibRef

Liao, Z.[Zhu], Quétu, V.[Victor], Nguyen, V.T.[Van-Tam], Tartaglione, E.[Enzo],
Can Unstructured Pruning Reduce the Depth in Deep Neural Networks?,
REDLCV23(1394-1398)
IEEE DOI 2401
BibRef

Kumar, A.[Aman], Anand, K.[Khushboo], Mandloi, S.[Shubham], Mishra, A.[Ashutosh], Thakur, A.[Avinash], Kasera, N.[Neeraj], Prathosh, A.P.,
CoroNetGAN: Controlled Pruning of GANs via Hypernetworks,
REDLCV23(1254-1263)
IEEE DOI 2401
BibRef

Miles, R.[Roy], Mikolajczyk, K.[Krystian],
Reconstructing Pruned Filters using Cheap Spatial Transformations,
REDLCV23(1236-1244)
IEEE DOI 2401
BibRef

Guo, S.[Song], Zhang, L.[Lei], Zheng, X.[Xiawu], Wang, Y.[Yan], Li, Y.C.[Yu-Chao], Chao, F.[Fei], Wu, C.L.[Cheng-Lin], Zhang, S.[Shengchuan], Ji, R.R.[Rong-Rong],
Automatic Network Pruning via Hilbert-Schmidt Independence Criterion Lasso under Information Bottleneck Principle,
ICCV23(17412-17423)
IEEE DOI Code:
WWW Link. 2401
BibRef

Gao, S.Q.[Shang-Qian], Zhang, Z.[Zeyu], Zhang, Y.[Yanfu], Huang, F.H.[Fei-Hu], Huang, H.[Heng],
Structural Alignment for Network Pruning through Partial Regularization,
ICCV23(17356-17366)
IEEE DOI 2401
BibRef

Li, Y.Q.[Yun-Qiang], van Gemert, J.C.[Jan C.], Hoefler, T.[Torsten], Moons, B.[Bert], Eleftheriou, E.[Evangelos], Verhoef, B.E.[Bram-Ernst],
Differentiable Transportation Pruning,
ICCV23(16911-16921)
IEEE DOI 2401
BibRef

Zhang, L.[Lei], Wang, Z.B.[Zhi-Bo], Dong, X.W.[Xiao-Wei], Feng, Y.H.[Yun-He], Pang, X.Y.[Xiao-Yi], Zhang, Z.F.[Zhi-Fei], Ren, K.[Kui],
Towards Fairness-aware Adversarial Network Pruning,
ICCV23(5145-5154)
IEEE DOI 2401
BibRef

Kohama, H.[Hirokazu], Minoura, H.[Hiroaki], Hirakawa, T.[Tsubasa], Yamashita, T.[Takayoshi], Fujiyoshi, H.[Hironobu],
Single-Shot Pruning for Pre-trained Models: Rethinking the Importance of Magnitude Pruning,
REDLCV23(1425-1434)
IEEE DOI 2401
BibRef

Chen, W.X.[Wei-Xuan], Yang, Q.Q.[Qian-Qian],
Efficient Pruning Method for Learned Lossy Image Compression Models Based on Side Information,
ICIP23(3464-3468)
IEEE DOI 2312
BibRef

Iofinova, E.[Eugenia], Peste, A.[Alexandra], Alistarh, D.[Dan],
Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures,
CVPR23(24364-24373)
IEEE DOI 2309
BibRef

Khaki, S.[Samir], Luo, W.H.[Wei-Han],
CFDP: Common Frequency Domain Pruning,
ECV23(4715-4724)
IEEE DOI 2309
BibRef

Kundu, S.[Souvik], Zhang, Y.[Yuke], Chen, D.[Dake], Beerel, P.A.[Peter A.],
Making Models Shallow Again: Jointly Learning to Reduce Non-Linearity and Depth for Latency-Efficient Private Inference,
ECV23(4685-4689)
IEEE DOI 2309
BibRef

Huang, Y.[Yaomin], Liu, N.[Ning], Che, Z.P.[Zheng-Ping], Xu, Z.Y.[Zhi-Yuan], Shen, C.M.[Chao-Min], Peng, Y.X.[Ya-Xin], Zhang, G.X.[Gui-Xu], Liu, X.[Xinmei], Feng, F.F.[Fei-Fei], Tang, J.[Jian],
CP3: Channel Pruning Plug-in for Point-Based Networks,
CVPR23(5302-5312)
IEEE DOI 2309
BibRef

Park, G.Y.[Geon Yeong], Lee, S.[Sangmin], Lee, S.W.[Sang Wan], Ye, J.C.[Jong Chul],
Training Debiased Subnetworks with Contrastive Weight Pruning,
CVPR23(7929-7938)
IEEE DOI 2309
BibRef

Fang, G.[Gongfan], Ma, X.Y.[Xin-Yin], Song, M.L.[Ming-Li], Mi, M.B.[Michael Bi], Wang, X.C.[Xin-Chao],
DepGraph: Towards Any Structural Pruning,
CVPR23(16091-16101)
IEEE DOI 2309
BibRef

Plochaet, J.[Jef], Goedemé, T.[Toon],
Hardware-Aware Pruning for FPGA Deep Learning Accelerators,
EVW23(4482-4490)
IEEE DOI 2309
BibRef

Stewart, J.[James], Michieli, U.[Umberto], Ozay, M.[Mete],
Data-Free Model Pruning at Initialization via Expanders,
ECV23(4519-4524)
IEEE DOI 2309
BibRef

Shin, J.[Juncheol], So, J.[Junhyuk], Park, S.[Sein], Kang, S.[Seungyeop], Yoo, S.[Sungjoo], Park, E.[Eunhyeok],
NIPQ: Noise proxy-based Integrated Pseudo-Quantization,
CVPR23(3852-3861)
IEEE DOI 2309
BibRef

Sun, Q.M.[Qi-Ming], Cao, S.[Shan], Chen, Z.X.[Zhi-Xiang],
Filter Pruning via Automatic Pruning Rate Search,
ACCV22(VI:594-610).
Springer DOI 2307
BibRef

Kim, A.[Aeri], Lee, S.[Seungju], Kwon, E.[Eunji], Kang, S.[Seokhyeong],
Adaptive FSP: Adaptive Architecture Search with Filter Shape Pruning,
ACCV22(I:539-555).
Springer DOI 2307
BibRef

Duan, Y.Z.[Yuan-Zhi], Zhou, Y.[Yue], He, P.[Peng], Liu, Q.[Qiang], Duan, S.[Shukai], Hu, X.F.[Xiao-Fang],
Network Pruning via Feature Shift Minimization,
ACCV22(I:618-634).
Springer DOI 2307
BibRef

Akiva-Hochman, R.[Ruth], Finder, S.E.[Shahaf E.], Turek, J.S.[Javier S.], Treister, E.[Eran],
Searching for N:m Fine-grained Sparsity of Weights and Activations in Neural Networks,
CADK22(130-143).
Springer DOI 2304
BibRef

Patra, R.[Rishabh], Hebbalaguppe, R.[Ramya], Dash, T.[Tirtharaj], Shroff, G.[Gautam], Vig, L.[Lovekesh],
Calibrating Deep Neural Networks using Explicit Regularisation and Dynamic Data Pruning,
WACV23(1541-1549)
IEEE DOI 2302
Training, Deep learning, Neural networks, Manuals, Predictive models, Inspection, Calibration, Algorithms: Explainable, fair, accountable, Social good BibRef

Long, X.[Xin], Zeng, X.R.[Xiang-Rong], Liu, Y.[Yu], Qiao, M.[Mu],
Low Bit Neural Networks with Channel Sparsity and Sharing,
ICIVC22(889-894)
IEEE DOI 2301
Training, Visualization, Tensors, Quantization (signal), Computational modeling, OWL, Redundancy, Ordered weighted l1, Sharing BibRef

Massart, E.[Estelle],
Improving weight clipping in Wasserstein GANs,
ICPR22(2286-2292)
IEEE DOI 2212
Training, Convolution, Generators, Numerical models, Standards BibRef

Zhao, Y.[Yu], Lee, C.K.[Chung-Kuei],
Differentiable Channel Sparsity Search via Weight Sharing within Filters,
ICPR22(2012-2018)
IEEE DOI 2212
WWW Link. Image resolution, Shape, Semantic segmentation, Memory management, Filtering algorithms, Information filters, Stability analysis BibRef

Xie, X.[Xiang], Chen, T.T.[Tian-Tian], Chu, A.[Anqi], Stork, W.[Wilhelm],
Efficient Network Pruning via Feature Selection,
ICPR22(1843-1850)
IEEE DOI 2212
Training, Redundancy, Neurons, Network architecture, Feature extraction, Convolutional neural networks, Time factors BibRef

Malik, S.[Shehryar], Haider, M.U.[Muhammad Umair], Iqbal, O.[Omer], Taj, M.[Murtaza],
Neural Network Pruning Through Constrained Reinforcement Learning,
ICPR22(3027-3033)
IEEE DOI 2212
Training, Measurement, Neurons, Memory management, Reinforcement learning, Biological neural networks BibRef

Sakai, Y.[Yasufumi], Iwakawa, A.[Akinori], Tabaru, T.[Tsuguchika], Inoue, A.[Atsuki], Kawaguchi, H.[Hiroshi],
Automatic Pruning Rate Derivation for Structured Pruning of Deep Neural Networks,
ICPR22(2561-2567)
IEEE DOI 2212
Degradation, Deep learning, Image coding, Neural networks, Bit error rate, Manuals, Transformers, Automatic pruning rate search BibRef

McDanel, B.[Bradley], Dinh, H.[Helia], Magallanes, J.[John],
Accelerating DNN Training with Structured Data Gradient Pruning,
ICPR22(2293-2299)
IEEE DOI 2212
Training, Deep learning, Computational modeling, Source coding, Neural networks, Graphics processing units, Data models BibRef

Famili, A.[Azadeh], Lao, Y.J.[Ying-Jie],
Genetic-based Joint Dynamic Pruning and Learning Algorithm to Boost DNN Performance,
ICPR22(2100-2106)
IEEE DOI 2212
Training, Heuristic algorithms, Neurons, Biological systems, Inference algorithms, Stability analysis BibRef

Dupont, R.[Robin], Alaoui, M.A.[Mohammed Amine], Sahbi, H.[Hichem], Lebois, A.[Alice],
Extracting Effective Subnetworks with Gumbel-Softmax,
ICIP22(931-935)
IEEE DOI 2211
Weight measurement, Training, Network topology, Scalability, Neural networks, Probability distribution, Lightweight networks, topology selection BibRef

Humble, R.[Ryan], Shen, M.[Maying], Latorre, J.A.[Jorge Albericio], Darve, E.[Eric], Alvarez, J.[Jose],
Soft Masking for Cost-Constrained Channel Pruning,
ECCV22(XI:641-657).
Springer DOI 2211
BibRef

Gao, S.Q.[Shang-Qian], Huang, F.H.[Fei-Hu], Zhang, Y.F.[Yan-Fu], Huang, H.[Heng],
Disentangled Differentiable Network Pruning,
ECCV22(XI:328-345).
Springer DOI 2211
BibRef

Lee, S.H.[Seung-Hyun], Song, B.C.[Byung Cheol],
Ensemble Knowledge Guided Sub-network Search and Fine-Tuning for Filter Pruning,
ECCV22(XI:569-585).
Springer DOI 2211
BibRef

Kim, T.[Taeho], Kwon, Y.[Yongin], Lee, J.[Jemin], Kim, T.[Taeho], Ha, S.[Sangtae],
CPrune: Compiler-Informed Model Pruning for Efficient Target-Aware DNN Execution,
ECCV22(XX:651-667).
Springer DOI 2211
BibRef

He, Z.Q.[Zhi-Qiang], Qian, Y.G.[Ya-Guan], Wang, Y.Q.[Yu-Qi], Wang, B.[Bin], Guan, X.H.[Xiao-Hui], Gu, Z.Q.[Zhao-Quan], Ling, X.[Xiang], Zeng, S.N.[Shao-Ning], Wang, H.J.[Hai-Jiang], Zhou, W.[Wujie],
Filter Pruning via Feature Discrimination in Deep Neural Networks,
ECCV22(XXI:245-261).
Springer DOI 2211
BibRef

Ganjdanesh, A.[Alireza], Gao, S.Q.[Shang-Qian], Huang, H.[Heng],
Interpretations Steered Network Pruning via Amortized Inferred Saliency Maps,
ECCV22(XXI:278-296).
Springer DOI 2211
BibRef

Fan, H.[Hanwei], Mu, J.[Jiandong], Zhang, W.[Wei],
Bayesian Optimization with Clustering and Rollback for CNN Auto Pruning,
ECCV22(XXIII:494-511).
Springer DOI 2211
BibRef

Saniee, I.[Iraj], Zhang, L.[Lisa], Magnetta, B.[Bradley],
Truncated Lottery Ticket for Deep Pruning,
ICIP22(606-610)
IEEE DOI 2211
Deep learning, Quantization (signal), Image coding, Image edge detection, Neural networks, Reduced order systems, lottery ticket hypothesis BibRef

Joo, D.G.[Dong-Gyu], Baek, S.[Sunghyun], Kim, J.[Junmo],
Which Metrics for Network Pruning: Final Accuracy? Or Accuracy Drop?,
ICIP22(1071-1075)
IEEE DOI 2211
Measurement, Deep learning, Image coding, Correlation, Computer network reliability, Neural networks, Reliability, Evaluation Metric BibRef

Mille, J.[Julien],
Convex Quadratic Programming for Slimming Convolutional Networks,
ICIP22(1121-1125)
IEEE DOI 2211
Costs, Linear programming, Quadratic programming, Image reconstruction, Optimization, Pruning, ConvNet, convex optimization BibRef

Flich, J.[José], Medina, L.[Laura], Catalán, I.[Izan], Hernández, C.[Carles], Bragagnolo, A.[Andrea], Auzanneau, F.[Fabrice], Briand, D.[David],
Efficient Inference Of Image-Based Neural Network Models In Reconfigurable Systems With Pruning And Quantization,
ICIP22(2491-2495)
IEEE DOI 2211
Quantization (signal), Embedded systems, Image coding, Computational modeling, Artificial neural networks, Libraries, inference BibRef

Sahbi, H.[Hichem],
Topologically-Consistent Magnitude Pruning for Very Lightweight Graph Convolutional Networks,
ICIP22(3495-3499)
IEEE DOI 2211
Image recognition, Convolution, Message passing, Network architecture, Task analysis, Context modeling, skeleton-based recognition BibRef

Sharma, A.[Ankit], Foroosh, H.[Hassan],
RAPID: A Single Stage Pruning Framework,
ICIP22(3611-3615)
IEEE DOI 2211
Training, Knowledge engineering, Deep learning, Quantization (signal), Computational modeling, Pipelines, Distillation BibRef

Tayyab, M.[Muhammad], Mahalanobis, A.[Abhijit],
Simultaneous Learning and Compression for Convolution Neural Networks,
ICIP22(3636-3640)
IEEE DOI 2211
Training, Maximum likelihood detection, Image coding, Convolution, Nonlinear filters, Neural network compression, Pruning BibRef

Hubens, N.[Nathan], Mancas, M.[Matei], Gosselin, B.[Bernard], Preda, M.[Marius], Zaharia, T.[Titus],
One-Cycle Pruning: Pruning Convnets With Tight Training Budget,
ICIP22(4128-4132)
IEEE DOI 2211
Training, Schedules, Image coding, Pipelines, Neural networks, Complexity theory, Neural Network Pruning, Pruning Schedule BibRef

Zhao, T.L.[Tian-Li], Zhang, X.S.[Xi Sheryl], Zhu, W.T.[Wen-Tao], Wang, J.X.[Jia-Xing], Yang, S.[Sen], Liu, J.[Ji], Cheng, J.[Jian],
Multi-granularity Pruning for Model Acceleration on Mobile Devices,
ECCV22(XI:484-501).
Springer DOI 2211
BibRef

Li, Y.[Yawei], Adamczewski, K.[Kamil], Li, W.[Wen], Gu, S.H.[Shu-Hang], Timofte, R.[Radu], Van Gool, L.J.[Luc J.],
Revisiting Random Channel Pruning for Neural Network Compression,
CVPR22(191-201)
IEEE DOI 2210
Training, Neural network compression, Benchmark testing, Network architecture, Filtering algorithms, Machine learning, retrieval BibRef

Shen, M.[Maying], Molchanov, P.[Pavlo], Yin, H.X.[Hong-Xu], Alvarez, J.M.[Jose M.],
When to Prune? A Policy towards Early Structural Pruning,
CVPR22(12237-12246)
IEEE DOI 2210
Training, Measurement, Costs, Neurons, Object detection, Power system stability, Efficient learning and inferences BibRef

Elkerdawy, S.[Sara], Elhoushi, M.[Mostafa], Zhang, H.[Hong], Ray, N.[Nilanjan],
Fire Together Wire Together: A Dynamic Pruning Approach with Self-Supervised Mask Prediction,
CVPR22(12444-12453)
IEEE DOI 2210
Heating systems, Training, Head, Neuroscience, Wires, Predictive models, Deep learning architectures and techniques BibRef

Wimmer, P.[Paul], Mehnert, J.[Jens], Condurache, A.[Alexandru],
Interspace Pruning: Using Adaptive Filter Representations to Improve Training of Sparse CNNs,
CVPR22(12517-12527)
IEEE DOI 2210
Training, Deep learning, Maximum likelihood detection, Runtime, Adaptive filters, Optimization methods, Nonlinear filters BibRef

Sun, W.Y.[Wen-Yu], Cao, J.[Jian], Xu, P.[Pengtao], Liu, X.C.[Xiang-Cheng], Zhang, Y.[Yuan], Wang, Y.[Yuan],
An Once-for-All Budgeted Pruning Framework for ConvNets Considering Input Resolution,
ECV22(2608-2617)
IEEE DOI 2210
Training, Image resolution, Adaptive systems, Image edge detection, Energy resolution, Object detection, Computer architecture BibRef

Pan, S.Y.[Si-Yuan], Qin, Y.M.[Yi-Ming], Li, T.Y.[Ting-Yao], Li, X.S.[Xiao-Shuang], Hou, L.[Liang],
Momentum Contrastive Pruning,
ECV22(2646-2655)
IEEE DOI 2210
Representation learning, Visualization, Computational modeling, Supervised learning, Self-supervised learning BibRef

Srinivas, S.[Suraj], Kuzmin, A.[Andrey], Nagel, M.[Markus], van Baalen, M.[Mart], Skliar, A.[Andrii], Blankevoort, T.[Tijmen],
Cyclical Pruning for Sparse Neural Networks,
ECV22(2761-2770)
IEEE DOI 2210
Deep learning, Schedules, Computational modeling, Neural networks, Pipelines BibRef

Joo, D.G.[Dong-Gyu], Kim, D.[Doyeon], Yi, E.[Eojindl], Kim, J.[Junmo],
Linear Combination Approximation of Feature for Channel Pruning,
ECV22(2771-2780)
IEEE DOI 2210
Deep learning, Correlation, Convolution, Neural networks, Linearity BibRef

dos Santos, C.F.G.[Claudio Filipi Goncalves], Roder, M.[Mateus], Passos, L.A.[Leandro Aparecido], Papa, J.P.[João Paulo],
MaxDropoutV2: An Improved Method to Drop Out Neurons in Convolutional Neural Networks,
IbPRIA22(271-282).
Springer DOI 2205
BibRef

Hubens, N.[Nathan], Mancas, M.[Matei], Gosselin, B.[Bernard], Preda, M.[Marius], Zaharia, T.[Titus],
Improve Convolutional Neural Network Pruning by Maximizing Filter Variety,
CIAP22(I:379-390).
Springer DOI 2205
BibRef

Merkle, F.[Florian], Samsinger, M.[Maximilian], Schöttle, P.[Pascal],
Pruning in the Face of Adversaries,
CIAP22(I:658-669).
Springer DOI 2205
BibRef

Liu, F.X.[Fang-Xin], Zhao, W.B.[Wen-Bo], He, Z.[Zhezhi], Wang, Y.Z.[Yan-Zhi], Wang, Z.[Zongwu], Dai, C.Z.[Chang-Zhi], Liang, X.Y.[Xiao-Yao], Jiang, L.[Li],
Improving Neural Network Efficiency via Post-training Quantization with Adaptive Floating-Point,
ICCV21(5261-5270)
IEEE DOI 2203
Degradation, Training, Adaptation models, Energy consumption, Quantization (signal), Computational modeling, Encoding, Recognition and classification BibRef

Kim, D.[Dohyung], Lee, J.[Junghyup], Ham, B.[Bumsub],
Distance-aware Quantization,
ICCV21(5251-5260)
IEEE DOI 2203
Training, Quantization (signal), Data acquisition, Network architecture, Benchmark testing, Temperature control, Representation learning BibRef

Han, T.T.[Tian-Tian], Li, D.[Dong], Liu, J.[Ji], Tian, L.[Lu], Shan, Y.[Yi],
Improving Low-Precision Network Quantization via Bin Regularization,
ICCV21(5241-5250)
IEEE DOI 2203
Training, Deep learning, Quantization (signal), Computational modeling, Neural networks, Network architecture, Recognition and classification BibRef

Bulat, A.[Adrian], Tzimiropoulos, G.[Georgios],
Bit-Mixer: Mixed-precision networks with runtime bit-width selection,
ICCV21(5168-5177)
IEEE DOI 2203
Training, Knowledge engineering, Runtime, Quantization (signal), Codes, Pipelines, Efficient training and inference methods, BibRef

Xu, Z.H.[Zi-Han], Lin, M.[Mingbao], Liu, J.Z.[Jian-Zhuang], Chen, J.[Jie], Shao, L.[Ling], Gao, Y.[Yue], Tian, Y.H.[Yong-Hong], Ji, R.R.[Rong-Rong],
ReCU: Reviving the Dead Weights in Binary Neural Networks,
ICCV21(5178-5188)
IEEE DOI 2203
Training, Quantization (signal), Codes, Neural networks, Standardization, Clamps, BibRef

Chen, P.[Peng], Zhuang, B.[Bohan], Shen, C.H.[Chun-Hua],
FATNN: Fast and Accurate Ternary Neural Networks*,
ICCV21(5199-5208)
IEEE DOI 2203
Quantization (signal), Neural networks, Object detection, Benchmark testing, Network architecture, Transformers, Representation learning BibRef

Gu, J.Q.[Jia-Qi], Zhu, H.Q.[Han-Qing], Feng, C.H.[Cheng-Hao], Liu, M.J.[Ming-Jie], Jiang, Z.X.[Zi-Xuan], Chen, R.T.[Ray T.], Pan, D.Z.[David Z.],
Towards Memory-Efficient Neural Networks via Multi-Level in situ Generation,
ICCV21(5209-5218)
IEEE DOI 2203
Correlation, Quantization (signal), Computational modeling, Memory management, Redundancy, Neural networks, System-on-chip, BibRef

Chang, S.E.[Sung-En], Li, Y.Y.[Yan-Yu], Sun, M.S.[Meng-Shu], Jiang, W.W.[Wei-Wen], Liu, S.[Sijia], Wang, Y.Z.[Yan-Zhi], Lin, X.[Xue],
RMSMP: A Novel Deep Neural Network Quantization Framework with Row-wise Mixed Schemes and Multiple Precisions,
ICCV21(5231-5240)
IEEE DOI 2203
Performance evaluation, Deep learning, Quantization (signal), Neural networks, Search problems, Hardware, Recognition and classification BibRef

Chen, W.H.[Wei-Han], Wang, P.S.[Pei-Song], Cheng, J.[Jian],
Towards Mixed-Precision Quantization of Neural Networks via Constrained Optimization,
ICCV21(5330-5339)
IEEE DOI 2203
Degradation, Deep learning, Quantization (signal), Neural networks, Network architecture, Search problems, Taylor series, BibRef

Sun, X.[Ximeng], Panda, R.[Rameswar], Chen, C.F.R.[Chun-Fu Richard], Oliva, A.[Aude], Feris, R.S.[Rogerio S.], Saenko, K.[Kate],
Dynamic Network Quantization for Efficient Video Inference,
ICCV21(7355-7365)
IEEE DOI 2203
Backpropagation, Quantization (signal), Benchmark testing, Boosting, Standards, Video analysis and understanding, Efficient training and inference methods BibRef

Lin, H.[Haowen], Lou, J.[Jian], Xiong, L.[Li], Shahabi, C.[Cyrus],
Integer-arithmetic-only Certified Robustness for Quantized Neural Networks,
ICCV21(7808-7817)
IEEE DOI 2203
Smoothing methods, Program processors, Tensors, Quantization (signal), Computational modeling, Neural networks, Efficient training and inference methods BibRef

Wang, Y.K.[Yi-Kai], Yang, Y.[Yi], Sun, F.C.[Fu-Chun], Yao, A.[Anbang],
Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks,
ICCV21(5340-5349)
IEEE DOI 2203
Convolutional codes, Training, Visualization, Quantization (signal), Image coding, Runtime, Image recognition, Representation learning BibRef

Lee, J.H.[Jung Hyun], Yun, J.[Jihun], Hwang, S.J.[Sung Ju], Yang, E.[Eunho],
Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss,
ICCV21(5350-5359)
IEEE DOI 2203
Training, Performance evaluation, Quantization (signal), Neurons, Neural networks, Network architecture, BibRef

Shen, M.Z.[Ming-Zhu], Liang, F.[Feng], Gong, R.H.[Rui-Hao], Li, Y.H.[Yu-Hang], Li, C.M.[Chu-Ming], Lin, C.[Chen], Yu, F.W.[Feng-Wei], Yan, J.J.[Jun-Jie], Ouyang, W.L.[Wan-Li],
Once Quantization-Aware Training: High Performance Extremely Low-bit Architecture Search,
ICCV21(5320-5329)
IEEE DOI 2203
Training, Degradation, Quantization (signal), Costs, Computational modeling, Neural networks, BibRef

Guo, Y.[Yi], Yuan, H.[Huan], Tan, J.C.[Jian-Chao], Wang, Z.Y.[Zhang-Yang], Yang, S.[Sen], Liu, J.[Ji],
GDP: Stabilized Neural Network Pruning via Gates with Differentiable Polarization,
ICCV21(5219-5230)
IEEE DOI 2203
Training, Image segmentation, Economic indicators, Neural networks, Logic gates, Benchmark testing, Real-time systems, Optimization and learning methods BibRef

Ding, X.H.[Xiao-Han], Hao, T.X.[Tian-Xiang], Tan, J.C.[Jian-Chao], Liu, J.[Ji], Han, J.G.[Jun-Gong], Guo, Y.C.[Yu-Chen], Ding, G.[Guiguang],
ResRep: Lossless CNN Pruning via Decoupling Remembering and Forgetting,
ICCV21(4490-4500)
IEEE DOI 2203
Training, Convolutional codes, Image coding, Standards, Efficient training and inference methods, BibRef

Wu, Y.S.[Yu-Shu], Gong, Y.F.[Yi-Fan], Zhao, P.[Pu], Li, Y.[Yanyu], Zhan, Z.[Zheng], Niu, W.[Wei], Tang, H.[Hao], Qin, M.H.[Ming-Hai], Ren, B.[Bin], Wang, Y.Z.[Yan-Zhi],
Compiler-Aware Neural Architecture Search for On-Mobile Real-time Super-Resolution,
ECCV22(XIX:92-111).
Springer DOI 2211
BibRef

Zhan, Z.[Zheng], Gong, Y.F.[Yi-Fan], Zhao, P.[Pu], Yuan, G.[Geng], Niu, W.[Wei], Wu, Y.S.[Yu-Shu], Zhang, T.Y.[Tian-Yun], Jayaweera, M.[Malith], Kaeli, D.[David], Ren, B.[Bin], Lin, X.[Xue], Wang, Y.Z.[Yan-Zhi],
Achieving on-Mobile Real-Time Super-Resolution with Neural Architecture and Pruning Search,
ICCV21(4801-4811)
IEEE DOI 2203
Image quality, Deep learning, Computational modeling, Superresolution, Neural networks, Memory management, Vision applications and systems BibRef

Yu, S.X.[Si-Xing], Mazaheri, A.[Arya], Jannesari, A.[Ali],
Auto Graph Encoder-Decoder for Neural Network Pruning,
ICCV21(6342-6352)
IEEE DOI 2203
Learning systems, Deep learning, Computational modeling, Reinforcement learning, Mobile handsets, Graph neural networks, BibRef

Wang, Z.[Zi], Li, C.C.[Cheng-Cheng],
Channel Pruning via Lookahead Search Guided Reinforcement Learning,
WACV22(3513-3524)
IEEE DOI 2202
Training, Degradation, Monte Carlo methods, Neural networks, Reinforcement learning, Benchmark testing, Deep Learning -> Efficient Training and Inference Methods for Networks BibRef

Lin, R.[Rui], Ran, J.[Jie], Wang, D.P.[Dong-Peng], Chiu, K.H.[King Hung], Wong, N.[Ngai],
EZCrop: Energy-Zoned Channels for Robust Output Pruning,
WACV22(3595-3604)
IEEE DOI 2202
Runtime, Codes, Fast Fourier transforms, Frequency-domain analysis, Robustness, Computational efficiency, Analysis and Understanding BibRef

Hickson, S.[Steven], Raveendran, K.[Karthik], Essa, I.[Irfan],
Sharing Decoders: Network Fission for Multi-task Pixel Prediction,
WACV22(3655-3664)
IEEE DOI 2202
Semantics, Memory management, Prediction methods, Multitasking, Real-time systems, Mobile handsets, Decoding, Vision Systems and Applications BibRef

Yu, S.X.[Shi-Xing], Yao, Z.W.[Zhe-Wei], Gholami, A.[Amir], Dong, Z.[Zhen], Kim, S.H.[Se-Hoon], Mahoney, M.W.[Michael W.], Keutzer, K.[Kurt],
Hessian-Aware Pruning and Optimal Neural Implant,
WACV22(3665-3676)
IEEE DOI 2202
Degradation, Sensitivity, Head, Natural languages, Neural implants, Transformers, Deep Learning -> Efficient Training and Inference Methods for Networks BibRef

Bragagnolo, A.[Andrea], Tartaglione, E.[Enzo], Fiandrotti, A.[Attilio], Grangetto, M.[Marco],
On the Role of Structured Pruning for Neural Network Compression,
ICIP21(3527-3531)
IEEE DOI 2201
Image coding, Tensors, Neural networks, Pipelines, Transform coding, Standardization, Pruning, Deep learning, Compression, MPEG-7 BibRef

Tavakoli, H.R.[Hamed R.], Wabnig, J.[Joachim], Cricri, F.[Francesco], Zhang, H.L.[Hong-Lei], Aksu, E.[Emre], Saniee, I.[Iraj],
Hybrid Pruning and Sparsification,
ICIP21(3542-3546)
IEEE DOI 2201
Deep learning, Image coding, Convolution, Neurons, Diffusion processes, Network architecture, graph diffusion BibRef

Haider, M.U.[Muhammad Umair], Taj, M.[Murtaza],
Comprehensive Online Network Pruning Via Learnable Scaling Factors,
ICIP21(3557-3561)
IEEE DOI 2201
Deep learning, Image coding, Neurons, Memory management, Logic gates, Benchmark testing, Neural Networks, synaptic pruning, recognition BibRef

Retsinas, G.[George], Elafrou, A.[Athena], Goumas, G.[Georgios], Maragos, P.[Petros],
Online Weight Pruning Via Adaptive Sparsity Loss,
ICIP21(3517-3521)
IEEE DOI 2201
Training, Deep learning, Adaptive systems, Image coding, Neural networks, Network architecture, Weight Pruning, Budget-aware Compression BibRef

Cho, S.[Sungmin], Kim, H.[Hyeseong], Kwon, J.[Junseok],
Filter Pruning Via Softmax Attention,
ICIP21(3507-3511)
IEEE DOI 2201
Image processing, Probabilistic logic, Softmax attention channel pruning, relative depth-wise separable convolutions BibRef

Dupont, R.[Robin], Sahbi, H.[Hichem], Michel, G.[Guillaume],
Weight Reparametrization for Budget-Aware Network Pruning,
ICIP21(789-793)
IEEE DOI 2201
Training, Degradation, Image processing, Task analysis, Standards, Videos, Lightweight network design, pruning, reparametrization BibRef

Boone-Sifuentes, T.[Tanya], Robles-Kelly, A.[Antonio], Nazari, A.[Asef],
Max-Variance Convolutional Neural Network Model Compression,
DICTA20(1-6)
IEEE DOI 2201
Couplings, Training, Image coding, Face recognition, Digital images, Filter banks, Convolutional neural networks, network pruning and max-variance pruning BibRef

Guerra, L.[Luis], Drummond, T.[Tom],
Automatic Pruning for Quantized Neural Networks,
DICTA21(01-08)
IEEE DOI 2201
Measurement, Quantization (signal), Image coding, Digital images, Neural networks, Euclidean distance BibRef

Jordão, A.[Artur], Pedrini, H.[Hélio],
On the Effect of Pruning on Adversarial Robustness,
AROW21(1-11)
IEEE DOI 2112
Training, Tools, Robustness, Computational efficiency BibRef

Lazarevich, I.[Ivan], Kozlov, A.[Alexander], Malinin, N.[Nikita],
Post-training deep neural network pruning via layer-wise calibration,
LPCV21(798-805)
IEEE DOI 2112
Deep learning, Computational modeling, Pipelines, Neural networks, Production, Object detection BibRef

Li, C.L.[Chang-Lin], Wang, G.R.[Guang-Run], Wang, B.[Bing], Liang, X.D.[Xiao-Dan], Li, Z.H.[Zhi-Hui], Chang, X.J.[Xiao-Jun],
Dynamic Slimmable Network,
CVPR21(8603-7613)
IEEE DOI 2111
Training, Image coding, Head, Computational modeling, Object detection, Life estimation, Logic gates BibRef

Yu, C.[Chong],
Minimally Invasive Surgery for Sparse Neural Networks in Contrastive Manner,
CVPR21(3588-3597)
IEEE DOI 2111
Knowledge engineering, Minimally invasive surgery, Computational modeling, Throughput, Probability distribution, Task analysis BibRef

Gao, S.Q.[Shang-Qian], Huang, F.H.[Fei-Hu], Cai, W.D.[Wei-Dong], Huang, H.[Heng],
Network Pruning via Performance Maximization,
CVPR21(9266-9276)
IEEE DOI 2111
Training, Neural networks, Memory modules, Convolutional neural networks, Task analysis BibRef

Tang, Y.[Yehui], Wang, Y.H.[Yun-He], Xu, Y.X.[Yi-Xing], Deng, Y.P.[Yi-Ping], Xu, C.[Chao], Tao, D.C.[Da-Cheng], Xu, C.[Chang],
Manifold Regularized Dynamic Network Pruning,
CVPR21(5016-5026)
IEEE DOI 2111
Manifolds, Training, Degradation, Redundancy, Neural networks, Benchmark testing, Network architecture BibRef

Wang, Z.[Zi], Li, C.C.[Cheng-Cheng], Wang, X.Y.[Xiang-Yang],
Convolutional Neural Network Pruning with Structural Redundancy Reduction,
CVPR21(14908-14917)
IEEE DOI 2111
Image synthesis, Redundancy, Object detection, Network architecture BibRef

Yao, L.W.[Le-Wei], Pi, R.J.[Ren-Jie], Xu, H.[Hang], Zhang, W.[Wei], Li, Z.G.[Zhen-Guo], Zhang, T.[Tong],
Joint-DetNAS: Upgrade Your Detector with NAS, Pruning and Dynamic Distillation,
CVPR21(10170-10179)
IEEE DOI 2111
Training, Costs, Heuristic algorithms, Detectors, Object detection, Search problems BibRef

Vemparala, M.R.[Manoj-Rohit], Fasfous, N.[Nael], Frickenstein, A.[Alexander], Sarkar, S.[Sreetama], Zhao, Q.[Qi], Kuhn, S.[Sabine], Frickenstein, L.[Lukas], Singh, A.[Anmol], Unger, C.[Christian], Nagaraja, N.S.[Naveen-Shankar], Wressnegger, C.[Christian], Stechele, W.[Walter],
Adversarial Robust Model Compression using In-Train Pruning,
SAIAD21(66-75)
IEEE DOI 2109
Training, Computational modeling, Robustness, Hardware BibRef

Jiang, W.[Wei], Wang, W.[Wei], Liu, S.[Shan], Li, S.[Songnan],
PnG: Micro-structured Prune-and-Grow Networks for Flexible Image Restoration,
NTIRE21(756-765)
IEEE DOI 2109
Degradation, Training, Image coding, Computational modeling, Superresolution, Image restoration BibRef

Enderich, L.[Lukas], Timm, F.[Fabian], Burgard, W.[Wolfram],
Holistic Filter Pruning for Efficient Deep Neural Networks,
WACV21(2595-2604)
IEEE DOI 2106
Training, Tensors, Computational modeling, Neural networks, Redundancy BibRef

Ganesh, M.R.[Madan Ravi], Corso, J.J.[Jason J.], Sekeh, S.Y.[Salimeh Yasaei],
MINT: Deep Network Compression via Mutual Information-based Neuron Trimming,
ICPR21(8251-8258)
IEEE DOI 2105
Sensitivity, Neurons, Redundancy, Filtering algorithms, Information filters, Robustness, Calibration BibRef

Joo, D.G.[Dong-Gyu], Kim, D.[Doyeon], Kim, J.[Junmo],
Slimming ResNet by Slimming Shortcut,
ICPR21(7677-7683)
IEEE DOI 2105
Convolution, Logic gates, Convolutional neural networks BibRef

Ferrari, C.[Claudio], Berretti, S.[Stefano], del Bimbo, A.[Alberto],
Probability Guided Maxout,
ICPR21(6517-6523)
IEEE DOI 2105
Training, Image classification BibRef

Yu, F.[Fang], Han, C.Q.[Chuan-Qi], Wang, P.C.[Peng-Cheng], Huang, R.[Ruoran], Huang, X.[Xi], Cui, L.[Li],
HFP: Hardware-Aware Filter Pruning for Deep Convolutional Neural Networks Acceleration,
ICPR21(255-262)
IEEE DOI 2105
Degradation, Training, Measurement, Information filters, Taylor series, Hardware BibRef

Shen, S.B.[Shi-Bo], Li, R.P.[Rong-Peng], Zhao, Z.F.[Zhi-Feng], Zhang, H.G.[Hong-Gang], Zhou, Y.[Yugeng],
Learning to Prune in Training via Dynamic Channel Propagation,
ICPR21(939-945)
IEEE DOI 2105
Training, Convolutional codes, Heuristic algorithms, Neural networks, Benchmark testing, Filtering algorithms BibRef

Nguyen-Meidine, L.T.[Le Thanh], Granger, E.[Eric], Kiran, M.[Madhu], Pedersoli, M.[Marco], Blais-Morin, L.A.[Louis-Antoine],
Progressive Gradient Pruning for Classification, Detection and Domain Adaptation,
ICPR21(2795-2802)
IEEE DOI 2105
Training, Backpropagation, Weight measurement, Visualization, Tensors, Object detection, Artificial neural networks BibRef

Mitsuno, K.[Kakeru], Kurita, T.[Takio],
Filter Pruning using Hierarchical Group Sparse Regularization for Deep Convolutional Neural Networks,
ICPR21(1089-1095)
IEEE DOI 2105
Training, Convolutional neural networks, Kernel BibRef

Soltani, M.[Mohammadreza], Wu, S.[Suya], Ding, J.[Jie], Ravier, R.[Robert], Tarokh, V.[Vahid],
On the Information of Feature Maps and Pruning of Deep Neural Networks,
ICPR21(6988-6995)
IEEE DOI 2105
Image coding, Simulation, Neural networks, Data models, Numerical models, Mutual information, feature maps BibRef

Foldy-Porto, T.[Timothy], Venkatesha, Y.[Yeshwanth], Panda, P.[Priyadarshini],
Activation Density Driven Efficient Pruning in Training,
ICPR21(8929-8936)
IEEE DOI 2105
Training, Neural networks, Real-time systems, Complexity theory, Deep Neural Networks BibRef

Zullich, M.[Marco], Medvet, E.[Eric], Pellegrino, F.A.[Felice Andrea], Ansuini, A.[Alessio],
Speeding-up pruning for Artificial Neural Networks: Introducing Accelerated Iterative Magnitude Pruning,
ICPR21(3868-3875)
IEEE DOI 2105
Training, Artificial neural networks, Iterative methods, Acceleration, Artificial Neural Network, Lottery Ticket Hypothesis BibRef

Abdiyeva, K.[Kamila], Lukac, M.[Martin], Ahuja, N.[Narendra],
Remove to Improve?,
EDL-AI20(146-161).
Springer DOI 2103
BibRef

Wimmer, P.[Paul], Mehnert, J.[Jens], Condurache, A.[Alexandru],
FreezeNet: Full Performance by Reduced Storage Costs,
ACCV20(VI:685-701).
Springer DOI 2103
BibRef

He, J.J.[Jun-Jie], Chen, B.[Bohua], Ding, Y.Z.[Yin-Zhang], Li, D.X.[Dong-Xiao],
Feature Variance Ratio-guided Channel Pruning for Deep Convolutional Network Acceleration,
ACCV20(IV:170-186).
Springer DOI 2103
BibRef

Li, D.[Dong], Chen, S.[Sitong], Liu, X.D.[Xu-Dong], Sun, Y.[Yunda], Zhang, L.[Li],
Towards Optimal Filter Pruning with Balanced Performance and Pruning Speed,
ACCV20(IV:252-267).
Springer DOI 2103
BibRef

Davoodikakhki, M.[Mahdi], Yin, K.[KangKang],
Hierarchical Action Classification with Network Pruning,
ISVC20(I:291-305).
Springer DOI 2103
BibRef

Elkerdawy, S.[Sara], Elhoushi, M.[Mostafa], Singh, A.[Abhineet], Zhang, H.[Hong], Ray, N.[Nilanjan],
To Filter Prune, or to Layer Prune, That Is the Question,
ACCV20(III:737-753).
Springer DOI 2103
BibRef

Duan, H.R.[Hao-Ran], Li, H.[Hui],
Channel Pruning for Accelerating Convolutional Neural Networks via Wasserstein Metric,
ACCV20(III:492-505).
Springer DOI 2103
BibRef

Xu, Z.W.[Zhi-Wei], Ajanthan, T.[Thalaiyasingam], Hartley, R.I.[Richard I.],
Fast and Differentiable Message Passing on Pairwise Markov Random Fields,
ACCV20(III:523-540).
Springer DOI 2103
BibRef

Xu, Z.W.[Zhi-Wei], Ajanthan, T.[Thalaiyasingam], Vineet, V., Hartley, R.I.[Richard I.],
RANP: Resource Aware Neuron Pruning at Initialization for 3D CNNs,
3DV20(1-10)
IEEE DOI 2102
Neurons, Training, Memory management, Optimization, video classification BibRef

Ning, X.F.[Xue-Fei], Zhao, T.C.[Tian-Chen], Li, W.S.[Wen-Shuo], Lei, P.[Peng], Wang, Y.[Yu], Yang, H.Z.[Hua-Zhong],
DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation,
ECCV20(III:592-607).
Springer DOI 2012
BibRef

Ye, X.C.[Xu-Cheng], Dai, P.C.[Peng-Cheng], Luo, J.Y.[Jun-Yu], Guo, X.[Xin], Qi, Y.J.[Ying-Jie], Yang, J.L.[Jian-Lei], Chen, Y.R.[Yi-Ran],
Accelerating CNN Training by Pruning Activation Gradients,
ECCV20(XXV:322-338).
Springer DOI 2011
BibRef

Wang, Y.K.[Yi-Kai], Sun, F.C.[Fu-Chun], Li, D.[Duo], Yao, A.B.[An-Bang],
Resolution Switchable Networks for Runtime Efficient Image Recognition,
ECCV20(XV:533-549).
Springer DOI 2011
Code, Network Pruning.
WWW Link. Lets a single network vary its input image resolution, and hence its computation cost, at runtime. BibRef

Lee, M.K., Lee, S., Lee, S.H., Song, B.C.,
Channel Pruning Via Gradient Of Mutual Information For Light-Weight Convolutional Neural Networks,
ICIP20(1751-1755)
IEEE DOI 2011
Mutual information, Probability distribution, Random variables, Convolutional neural networks, Linear programming, Uncertainty, mutual information BibRef

Meyer, M., Wiesner, J., Rohlfing, C.,
Optimized Convolutional Neural Networks for Video Intra Prediction,
ICIP20(3334-3338)
IEEE DOI 2011
Training, Complexity theory, Encoding, Convolutional codes, Convolution, Kernel, video coding, pruning BibRef

Mousa-Pasandi, M., Hajabdollahi, M., Karimi, N., Samavi, S., Shirani, S.,
Convolutional Neural Network Pruning Using Filter Attenuation,
ICIP20(2905-2909)
IEEE DOI 2011
Attenuation, Filtering algorithms, Mathematical model, Computational modeling, Training, Convolutional neural networks, filter attenuation BibRef

Elkerdawy, S., Elhoushi, M., Singh, A., Zhang, H., Ray, N.,
One-Shot Layer-Wise Accuracy Approximation For Layer Pruning,
ICIP20(2940-2944)
IEEE DOI 2011
Computational modeling, Hardware, Training, Training data, Graphics processing units, Shape, Sensitivity analysis, inference speed up BibRef

Tian, H.D.[Hong-Duan], Liu, B.[Bo], Yuan, X.T.[Xiao-Tong], Liu, Q.S.[Qing-Shan],
Meta-learning with Network Pruning,
ECCV20(XIX:675-700).
Springer DOI 2011
BibRef

Li, Y.[Yawei], Gu, S.H.[Shu-Hang], Zhang, K.[Kai], Van Gool, L.J.[Luc J.], Timofte, R.[Radu],
DHP: Differentiable Meta Pruning via Hypernetworks,
ECCV20(VIII:608-624).
Springer DOI 2011
BibRef

Messikommer, N.[Nico], Gehrig, D.[Daniel], Loquercio, A.[Antonio], Scaramuzza, D.[Davide],
Event-based Asynchronous Sparse Convolutional Networks,
ECCV20(VIII:415-431).
Springer DOI 2011
Code, Semantic Segmentation.
WWW Link.
Dataset, Semantic Segmentation.
WWW Link. BibRef

Kim, B.[Byungjoo], Chudomelka, B.[Bryce], Park, J.[Jinyoung], Kang, J.[Jaewoo], Hong, Y.J.[Young-Joon], Kim, H.W.J.[Hyun-Woo J.],
Robust Neural Networks Inspired by Strong Stability Preserving Runge-Kutta Methods,
ECCV20(IX:416-432).
Springer DOI 2011
BibRef

Li, B.L.[Bai-Lin], Wu, B.[Bowen], Su, J.[Jiang], Wang, G.R.[Guang-Run],
EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning,
ECCV20(II:639-654).
Springer DOI 2011
BibRef

Li, Y., Gu, S., Mayer, C., Van Gool, L.J., Timofte, R.,
Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression,
CVPR20(8015-8024)
IEEE DOI 2008
Matrix decomposition, Convolution, Tensile stress, Fasteners, Matrix converters, Neural networks BibRef

Guo, S., Wang, Y., Li, Q., Yan, J.,
DMCP: Differentiable Markov Channel Pruning for Neural Networks,
CVPR20(1536-1544)
IEEE DOI 2008
Markov processes, Training, Task analysis, Mathematical model, Learning (artificial intelligence), Optimization BibRef

Lin, M., Ji, R., Wang, Y., Zhang, Y., Zhang, B., Tian, Y., Shao, L.,
HRank: Filter Pruning Using High-Rank Feature Map,
CVPR20(1526-1535)
IEEE DOI 2008
Acceleration, Training, Hardware, Adaptive systems, Optimization, Adaptation models, Neural networks BibRef

Luo, J., Wu, J.,
Neural Network Pruning With Residual-Connections and Limited-Data,
CVPR20(1455-1464)
IEEE DOI 2008
Training, Computational modeling, Neural networks, Data models, Image coding, Acceleration BibRef

Wu, Y., Liu, C., Chen, B., Chien, S.,
Constraint-Aware Importance Estimation for Global Filter Pruning under Multiple Resource Constraints,
EDLCV20(2935-2943)
IEEE DOI 2008
Estimation, Computational modeling, Training, Optimization, Performance evaluation, Taylor series BibRef

Gain, A.[Alex], Kaushik, P.[Prakhar], Siegelmann, H.[Hava],
Adaptive Neural Connections for Sparsity Learning,
WACV20(3177-3182)
IEEE DOI 2006
Training, Neurons, Bayes methods, Biological neural networks, Kernel, Computer science BibRef

Ramakrishnan, R.K., Sari, E., Nia, V.P.,
Differentiable Mask for Pruning Convolutional and Recurrent Networks,
CRV20(222-229)
IEEE DOI 2006
BibRef

Blakeney, C., Yan, Y., Zong, Z.,
Is Pruning Compression?: Investigating Pruning Via Network Layer Similarity,
WACV20(903-911)
IEEE DOI 2006
Biological neural networks, Neurons, Correlation, Computational modeling, Training, Tools BibRef

Verma, V.K., Singh, P., Namboodiri, V.P., Rai, P.,
A 'Network Pruning Network' Approach to Deep Model Compression,
WACV20(2998-3007)
IEEE DOI 2006
Computational modeling, Task analysis, Adaptation models, Cost function, Computer science, Iterative methods BibRef

Gao, S., Liu, X., Chien, L., Zhang, W., Alvarez, J.M.,
VACL: Variance-Aware Cross-Layer Regularization for Pruning Deep Residual Networks,
CEFRL19(2980-2988)
IEEE DOI 2004
image filtering, neural nets, statistical analysis, CIFAR10, first-order statistics, second-order statistics, residual networks BibRef

Liu, Z., Mu, H., Zhang, X., Guo, Z., Yang, X., Cheng, K., Sun, J.,
MetaPruning: Meta Learning for Automatic Neural Network Channel Pruning,
ICCV19(3295-3304)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. learning (artificial intelligence), neural nets, sampling methods, stochastic processes, pruned networks, Task analysis BibRef

Zhou, Y., Zhang, Y., Wang, Y., Tian, Q.,
Accelerate CNN via Recursive Bayesian Pruning,
ICCV19(3305-3314)
IEEE DOI 2004
approximation theory, Bayes methods, computational complexity, convolutional neural nets, Markov processes, Computational modeling BibRef

Molchanov, P.[Pavlo], Mallya, A.[Arun], Tyree, S.[Stephen], Frosio, I.[Iuri], Kautz, J.[Jan],
Importance Estimation for Neural Network Pruning,
CVPR19(11256-11264).
IEEE DOI 2002
BibRef

Lemaire, C.[Carl], Achkar, A.[Andrew], Jodoin, P.M.[Pierre-Marc],
Structured Pruning of Neural Networks With Budget-Aware Regularization,
CVPR19(9100-9108).
IEEE DOI 2002
BibRef

Ding, X.H.[Xiao-Han], Ding, G.G.[Gui-Guang], Guo, Y.C.[Yu-Chen], Han, J.G.[Jun-Gong],
Centripetal SGD for Pruning Very Deep Convolutional Networks With Complicated Structure,
CVPR19(4938-4948).
IEEE DOI 2002
BibRef

He, Y.[Yang], Ding, Y.H.[Yu-Hang], Liu, P.[Ping], Zhu, L.C.[Lin-Chao], Zhang, H.W.[Han-Wang], Yang, Y.[Yi],
Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration,
CVPR20(2006-2015)
IEEE DOI 2008
Acceleration, Feature extraction, Training, Convolutional neural networks, Benchmark testing, Computer architecture BibRef

He, Y.[Yang], Liu, P.[Ping], Wang, Z.W.[Zi-Wei], Hu, Z.L.[Zhi-Lan], Yang, Y.[Yi],
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration,
CVPR19(4335-4344).
IEEE DOI 2002
BibRef

Zhao, C.L.[Cheng-Long], Ni, B.B.[Bing-Bing], Zhang, J.[Jian], Zhao, Q.W.[Qi-Wei], Zhang, W.J.[Wen-Jun], Tian, Q.[Qi],
Variational Convolutional Neural Network Pruning,
CVPR19(2775-2784).
IEEE DOI 2002
BibRef

Lin, S.H.[Shao-Hui], Ji, R.R.[Rong-Rong], Yan, C.Q.[Chen-Qian], Zhang, B.C.[Bao-Chang], Cao, L.J.[Liu-Juan], Ye, Q.X.[Qi-Xiang], Huang, F.Y.[Fei-Yue], Doermann, D.[David],
Towards Optimal Structured CNN Pruning via Generative Adversarial Learning,
CVPR19(2785-2794).
IEEE DOI 2002
BibRef

Mummadi, C.K.[Chaithanya Kumar], Genewein, T.[Tim], Zhang, D.[Dan], Brox, T.[Thomas], Fischer, V.[Volker],
Group Pruning Using a Bounded-Lp Norm for Group Gating and Regularization,
GCPR19(139-155).
Springer DOI 1911
BibRef

Wang, W.T.[Wei-Ting], Li, H.L.[Han-Lin], Lin, W.S.[Wei-Shiang], Chiang, C.M.[Cheng-Ming], Tsai, Y.M.[Yi-Min],
Architecture-Aware Network Pruning for Vision Quality Applications,
ICIP19(2701-2705)
IEEE DOI 1910
Pruning, Vision Quality, Network Architecture BibRef

Zhang, Y.X.[Yu-Xin], Wang, H.A.[Hu-An], Luo, Y.[Yang], Yu, L.[Lu], Hu, H.J.[Hao-Ji], Shan, H.G.[Hang-Guan], Quek, T.Q.S.[Tony Q. S.],
Three-Dimensional Convolutional Neural Network Pruning with Regularization-Based Method,
ICIP19(4270-4274)
IEEE DOI 1910
3D CNN, video analysis, model compression, structured pruning, regularization BibRef

Hu, Y., Sun, S., Li, J., Zhu, J., Wang, X., Gu, Q.,
Multi-Loss-Aware Channel Pruning of Deep Networks,
ICIP19(889-893)
IEEE DOI 1910
deep neural networks, object classification, model compression, channel pruning BibRef

Yu, R., Li, A., Chen, C., Lai, J., Morariu, V.I., Han, X., Gao, M., Lin, C., Davis, L.S.,
NISP: Pruning Networks Using Neuron Importance Score Propagation,
CVPR18(9194-9203)
IEEE DOI 1812
Neurons, Redundancy, Optimization, Acceleration, Biological neural networks, Task analysis, Feature extraction BibRef

Zhang, T.Y.[Tian-Yun], Ye, S.[Shaokai], Zhang, K.Q.[Kai-Qi], Tang, J.[Jian], Wen, W.[Wujie], Fardad, M.[Makan], Wang, Y.Z.[Yan-Zhi],
A Systematic DNN Weight Pruning Framework Using Alternating Direction Method of Multipliers,
ECCV18(VIII: 191-207).
Springer DOI 1810
BibRef

Huang, Q., Zhou, K., You, S., Neumann, U.,
Learning to Prune Filters in Convolutional Neural Networks,
WACV18(709-718)
IEEE DOI 1806
image segmentation, learning (artificial intelligence), neural nets, CNN filters, Training BibRef

Carreira-Perpinan, M.A., Idelbayev, Y.,
'Learning-Compression' Algorithms for Neural Net Pruning,
CVPR18(8532-8541)
IEEE DOI 1812
Neural networks, Optimization, Training, Neurons, Performance evaluation, Mobile handsets, Quantization (signal) BibRef

Zhou, Z., Zhou, W., Li, H., Hong, R.,
Online Filter Clustering and Pruning for Efficient Convnets,
ICIP18(11-15)
IEEE DOI 1809
Training, Acceleration, Neural networks, Convolution, Tensile stress, Force, Clustering algorithms, Deep neural networks, similar filter, cluster loss BibRef

Zhu, L.G.[Li-Geng], Deng, R.Z.[Rui-Zhi], Maire, M.[Michael], Deng, Z.W.[Zhi-Wei], Mori, G.[Greg], Tan, P.[Ping],
Sparsely Aggregated Convolutional Networks,
ECCV18(XII: 192-208).
Springer DOI 1810
BibRef

Wang, Z., Zhu, C., Xia, Z., Guo, Q., Liu, Y.,
Towards thinner convolutional neural networks through gradually global pruning,
ICIP17(3939-3943)
IEEE DOI 1803
Computational modeling, Machine learning, Measurement, Neurons, Redundancy, Tensile stress, Training, Artificial neural networks, Deep learning BibRef

Luo, J.H., Wu, J., Lin, W.,
ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression,
ICCV17(5068-5076)
IEEE DOI 1802
data compression, image coding, image filtering, inference mechanisms, neural nets, optimisation, Training BibRef

Rueda, F.M.[Fernando Moya], Grzeszick, R.[Rene], Fink, G.A.[Gernot A.],
Neuron Pruning for Compressing Deep Networks Using Maxout Architectures,
GCPR17(177-188).
Springer DOI 1711
BibRef

Yang, T.J.[Tien-Ju], Chen, Y.H.[Yu-Hsin], Sze, V.[Vivienne],
Designing Energy-Efficient Convolutional Neural Networks Using Energy-Aware Pruning,
CVPR17(6071-6079)
IEEE DOI 1711
Computational modeling, Energy consumption, Estimation, Hardware, Measurement, Memory management, Smart phones BibRef

Guo, J.[Jia], Potkonjak, M.[Miodrag],
Pruning ConvNets Online for Efficient Specialist Models,
ECVW17(430-437)
IEEE DOI 1709
Biological neural networks, Computational modeling, Convolution, Memory management, Sensitivity analysis BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Neural Net Compression.


Last update: Nov 26, 2024 at 16:40:19