14.5.9.6.10 Forgetting, Learning without Forgetting, Convolutional Neural Networks

Chapter Contents
Convolutional Neural Networks. Forgetting. CNN.
See also Interpretation, Explanation, Understanding of Convolutional Neural Networks.
See also Continual Learning, Incremental Learning.

Li, Z.Z.[Zhi-Zhong], Hoiem, D.[Derek],
Learning Without Forgetting,
PAMI(40), No. 12, December 2018, pp. 2935-2947.
IEEE DOI 1811
BibRef
Earlier: ECCV16(IV: 614-629).
Springer DOI 1611
Keep the old task's results in the network, but learn a new capability (sketched below). Feature extraction, Deep learning, Training data, Neural networks, Convolutional neural networks, Knowledge engineering, visual recognition. BibRef
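A minimal sketch of the distillation idea behind Learning Without Forgetting, assuming PyTorch: before fine-tuning on the new task, the old network's outputs on the new-task data are recorded, and training then balances the new-task loss against keeping those recorded outputs stable. The function name, temperature, and weighting below are illustrative choices, and the KL-divergence term stands in for the paper's temperature-scaled cross-entropy; this is not the authors' implementation.

import torch.nn.functional as F

def lwf_loss(new_logits, new_labels, old_logits_current, old_logits_recorded,
             temperature=2.0, lam=1.0):
    """New-task cross-entropy plus a distillation term that keeps the
    old-task head close to the outputs recorded before fine-tuning."""
    ce_new = F.cross_entropy(new_logits, new_labels)
    # Soften both sets of old-task outputs and match them (distillation).
    log_p = F.log_softmax(old_logits_current / temperature, dim=1)
    q = F.softmax(old_logits_recorded / temperature, dim=1)
    distill = F.kl_div(log_p, q, reduction='batchmean') * temperature ** 2
    return ce_new + lam * distill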

Li, Z.Z.[Zhi-Zhong], Hoiem, D.[Derek],
Improving Confidence Estimates for Unfamiliar Examples,
CVPR20(2683-2692)
IEEE DOI 2008
Training, Calibration, Dogs, Uncertainty, Cats, Task analysis, Testing BibRef

Schutera, M.[Mark], Hafner, F.M.[Frank M.], Abhau, J.[Jochen], Hagenmeyer, V.[Veit], Mikut, R.[Ralf], Reischl, M.[Markus],
Cuepervision: self-supervised learning for continuous domain adaptation without catastrophic forgetting,
IVC(106), 2021, pp. 104079.
Elsevier DOI 2102
Domain adaptation, Self-supervised learning, Unsupervised learning, Continuous transfer learning, MNIST dataset BibRef

Osman, I.[Islam], Eltantawy, A.[Agwad], Shehata, M.S.[Mohamed S.],
Task-based parameter isolation for foreground segmentation without catastrophic forgetting using multi-scale region and edges fusion network,
IVC(113), 2021, pp. 104248.
Elsevier DOI 2108
Foreground segmentation, Moving objects, Deep learning, Continual learning, Parameter isolation BibRef

Toohey, J.R.[Jack R.], Raunak, M.S., Binkley, D.[David],
From Neuron Coverage to Steering Angle: Testing Autonomous Vehicles Effectively,
Computer(54), No. 8, August 2021, pp. 77-85.
IEEE DOI 2108
Create new images to retrain an existing DNN without forgetting. Deep learning, Neurons, Autonomous vehicles, Testing BibRef

Zhang, M.[Miao], Li, H.Q.[Hui-Qi], Pan, S.R.[Shi-Rui], Chang, X.J.[Xiao-Jun], Zhou, C.[Chuan], Ge, Z.Y.[Zong-Yuan], Su, S.[Steven],
One-Shot Neural Architecture Search: Maximising Diversity to Overcome Catastrophic Forgetting,
PAMI(43), No. 9, September 2021, pp. 2921-2935.
IEEE DOI 2108
Computer architecture, Training, Optimization, Neural networks, Search methods, Australia, Germanium, AutoML, novelty search BibRef

Lao, Q.C.[Qi-Cheng], Mortazavi, M.[Mehrzad], Tahaei, M.[Marzieh], Dutil, F.[Francis], Fevens, T.[Thomas], Havaei, M.[Mohammad],
FoCL: Feature-oriented continual learning for generative models,
PR(120), 2021, pp. 108127.
Elsevier DOI 2109
Catastrophic forgetting, Continual learning, Generative models, Feature matching, Generative replay, Pseudo-rehearsal BibRef

Peng, C.[Can], Zhao, K.[Kun], Maksoud, S.[Sam], Li, M.[Meng], Lovell, B.C.[Brian C.],
SID: Incremental learning for anchor-free object detection via Selective and Inter-related Distillation,
CVIU(210), 2021, pp. 103229.
Elsevier DOI 2109
Deals with a deep network failing on old tasks after training on new data, i.e., catastrophic forgetting. Incremental learning, Object detection, Knowledge distillation BibRef

Roy, S.[Soumya], Sau, B.B.[Bharat Bhusan],
Can Selfless Learning improve accuracy of a single classification task?,
WACV21(4043-4051)
IEEE DOI 2106
Solves the problem of catastrophic forgetting in continual learning. Training, Neurons, Task analysis BibRef

Mundt, M.[Martin], Pliushch, I.[Iuliia], Ramesh, V.[Visvanathan],
Neural Architecture Search of Deep Priors: Towards Continual Learning without Catastrophic Interference,
CLVision21(3518-3527)
IEEE DOI 2109
Training, Neural networks, Computer architecture, Interference, Pattern recognition BibRef

Katakol, S.[Sudeep], Herranz, L.[Luis], Yang, F.[Fei], Mrak, M.[Marta],
DANICE: Domain adaptation without forgetting in neural image compression,
CLIC21(1921-1925)
IEEE DOI 2109
Video coding, Image coding, Codecs, Transfer learning, Interference BibRef

Kurmi, V.K.[Vinod K.], Patro, B.N.[Badri N.], Subramanian, V.K.[Venkatesh K.], Namboodiri, V.P.[Vinay P.],
Do not Forget to Attend to Uncertainty while Mitigating Catastrophic Forgetting,
WACV21(736-745)
IEEE DOI 2106
Deep learning, Uncertainty, Computational modeling, Estimation, Data models BibRef

Nguyen, G.[Giang], Chen, S.[Shuan], Jun, T.J.[Tae Joon], Kim, D.[Daeyoung],
Explaining How Deep Neural Networks Forget by Deep Visualization,
EDL-AI20(162-173).
Springer DOI 2103
BibRef

Patra, A.[Arijit], Chakraborti, T.[Tapabrata],
Learn More, Forget Less: Cues from Human Brain,
ACCV20(IV:187-202).
Springer DOI 2103
BibRef

Liu, Y.[Yu], Parisot, S.[Sarah], Slabaugh, G.[Gregory], Jia, X.[Xu], Leonardis, A.[Ales], Tuytelaars, T.[Tinne],
More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm for Incremental Learning,
ECCV20(XXVI:699-716).
Springer DOI 2011
BibRef

Hayes, T.L.[Tyler L.], Kafle, K.[Kushal], Shrestha, R.[Robik], Acharya, M.[Manoj], Kanan, C.[Christopher],
Remind Your Neural Network to Prevent Catastrophic Forgetting,
ECCV20(VIII:466-483).
Springer DOI 2011
BibRef

Golatkar, A.[Aditya], Achille, A.[Alessandro], Soatto, S.[Stefano],
Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-output Observations,
ECCV20(XXIX: 383-398).
Springer DOI 2010
BibRef

Baik, S., Hong, S., Lee, K.M.,
Learning to Forget for Meta-Learning,
CVPR20(2376-2384)
IEEE DOI 2008
Task analysis, Attenuation, Adaptation models, Optimization, Training, Neural networks, Loss measurement BibRef

Zhang, Z., Lathuilière, S., Ricci, E., Sebe, N., Yan, Y., Yang, J.,
Online Depth Learning Against Forgetting in Monocular Videos,
CVPR20(4493-4502)
IEEE DOI 2008
Adaptation models, Videos, Estimation, Task analysis, Robustness, Machine learning, Training BibRef

Davidson, G., Mozer, M.C.,
Sequential Mastery of Multiple Visual Tasks: Networks Naturally Learn to Learn and Forget to Forget,
CVPR20(9279-9290)
IEEE DOI 2008
Task analysis, Training, Visualization, Standards, Neural networks, Color, Interference BibRef

Masarczyk, W., Tautkute, I.,
Reducing catastrophic forgetting with learning on synthetic data,
CLVision20(1019-1024)
IEEE DOI 2008
Task analysis, Optimization, Generators, Data models, Neural networks, Training, Computer architecture BibRef

Golatkar, A., Achille, A., Soatto, S.,
Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep Networks,
CVPR20(9301-9309)
IEEE DOI 2008
Training, Neural networks, Data models, Stochastic processes, Task analysis, Training data BibRef

Lee, K., Lee, K., Shin, J., Lee, H.,
Overcoming Catastrophic Forgetting With Unlabeled Data in the Wild,
ICCV19(312-321)
IEEE DOI 2004
Code, Neural Networks.
WWW Link. image sampling, learning (artificial intelligence), neural nets, distillation loss, global distillation, learning strategy, Neural networks BibRef

Nwe, T.L., Nataraj, B., Shudong, X., Yiqun, L., Dongyun, L., Sheng, D.,
Discriminative Features for Incremental Learning Classifier,
ICIP19(1990-1994)
IEEE DOI 1910
Incremental learning, Context Aware Advertisement, Few-shot incremental learning, Discriminative features, Catastrophic forgetting BibRef

Shmelkov, K., Schmid, C., Alahari, K.,
Incremental Learning of Object Detectors without Catastrophic Forgetting,
ICCV17(3420-3429)
IEEE DOI 1802
learning (artificial intelligence), neural nets, object detection, COCO datasets, PASCAL VOC 2007, annotations, Training data BibRef

Rannen, A.[Amal], Aljundi, R.[Rahaf], Blaschko, M.B.[Matthew B.], Tuytelaars, T.[Tinne],
Encoder Based Lifelong Learning,
ICCV17(1329-1337)
IEEE DOI 1802
Learning usually adapts only to the most recent task; a sequence of tasks must be handled. feature extraction, image classification, learning (artificial intelligence), catastrophic forgetting, Training BibRef

Aljundi, R.[Rahaf], Babiloni, F.[Francesca], Elhoseiny, M.[Mohamed], Rohrbach, M.[Marcus], Tuytelaars, T.[Tinne],
Memory Aware Synapses: Learning What (not) to Forget,
ECCV18(III: 144-161).
Springer DOI 1810
BibRef

Liu, X.L.[Xia-Lei], Masana, M., Herranz, L., van de Weijer, J.[Joost], López, A.M., Bagdanov, A.D.[Andrew D.],
Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting,
ICPR18(2262-2268)
IEEE DOI 1812
Task analysis, Training, Training data, Neural networks, Data models, Standards BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Convolutional Neural Networks for Object Detection and Segmentation.


Last update: Nov 30, 2021 at 22:19:38