Li, Z.Z.[Zhi-Zhong],
Hoiem, D.[Derek],
Learning Without Forgetting,
PAMI(40), No. 12, December 2018, pp. 2935-2947.
IEEE DOI
1811
BibRef
Earlier:
ECCV16(IV:614-629).
Springer DOI
1611
Keep the old task's outputs in the neural network while learning a new capability; see the distillation sketch after this entry.
Feature extraction, Deep learning, Training data, Neural networks,
Convolutional neural networks, Knowledge engineering,
visual recognition.
BibRef
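Learning without Forgetting trains the new task's head with an ordinary classification loss while a knowledge-distillation term keeps the old task's head close to the responses recorded from the original network before fine-tuning began. A minimal PyTorch sketch of that combined objective (function and argument names are illustrative):

    import torch.nn.functional as F

    def lwf_loss(new_logits, new_labels, old_logits, recorded_logits,
                 T=2.0, lam=1.0):
        # Standard cross-entropy on the new task's labels.
        ce = F.cross_entropy(new_logits, new_labels)
        # Distillation: keep the old-task head close to the (recorded)
        # responses of the original network, softened by temperature T.
        soft_targets = F.softmax(recorded_logits / T, dim=1)
        log_probs = F.log_softmax(old_logits / T, dim=1)
        kd = F.kl_div(log_probs, soft_targets, reduction="batchmean") * (T * T)
        return ce + lam * kd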
Li, Z.Z.[Zhi-Zhong],
Hoiem, D.[Derek],
Improving Confidence Estimates for Unfamiliar Examples,
CVPR20(2683-2692)
IEEE DOI
2008
Training, Calibration, Dogs, Uncertainty, Cats, Task analysis, Testing
BibRef
Schutera, M.[Mark],
Hafner, F.M.[Frank M.],
Abhau, J.[Jochen],
Hagenmeyer, V.[Veit],
Mikut, R.[Ralf],
Reischl, M.[Markus],
Cuepervision: self-supervised learning for continuous domain
adaptation without catastrophic forgetting,
IVC(106), 2021, pp. 104079.
Elsevier DOI
2102
Domain adaptation, Self-supervised learning,
Unsupervised learning, Continuous transfer learning, MNIST dataset
BibRef
Osman, I.[Islam],
Eltantawy, A.[Agwad],
Shehata, M.S.[Mohamed S.],
Task-based parameter isolation for foreground segmentation without
catastrophic forgetting using multi-scale region and edges fusion
network,
IVC(113), 2021, pp. 104248.
Elsevier DOI
2108
Foreground segmentation, Moving objects, Deep learning,
Continual learning, Parameter isolation
BibRef
Toohey, J.R.[Jack R.],
Raunak, M.S.,
Binkley, D.[David],
From Neuron Coverage to Steering Angle: Testing Autonomous Vehicles
Effectively,
Computer(54), No. 8, August 2021, pp. 77-85.
IEEE DOI
2108
Create new images to retrain an existing DNN without forgetting.
Deep learning, Neurons, Autonomous vehicles, Testing
BibRef
Zhang, M.[Miao],
Li, H.Q.[Hui-Qi],
Pan, S.R.[Shi-Rui],
Chang, X.J.[Xiao-Jun],
Zhou, C.[Chuan],
Ge, Z.Y.[Zong-Yuan],
Su, S.[Steven],
One-Shot Neural Architecture Search: Maximising Diversity to Overcome
Catastrophic Forgetting,
PAMI(43), No. 9, September 2021, pp. 2921-2935.
IEEE DOI
2108
Training, Optimization, Neural networks,
Search methods, AutoML, novelty search
BibRef
Lao, Q.C.[Qi-Cheng],
Mortazavi, M.[Mehrzad],
Tahaei, M.[Marzieh],
Dutil, F.[Francis],
Fevens, T.[Thomas],
Havaei, M.[Mohammad],
FoCL: Feature-oriented continual learning for generative models,
PR(120), 2021, pp. 108127.
Elsevier DOI
2109
Catastrophic forgetting, Continual learning, Generative models,
Feature matching, Generative replay, Pseudo-rehearsal
BibRef
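The FoCL entry above builds on generative replay (pseudo-rehearsal): instead of storing old data, samples for earlier tasks are re-synthesized by a frozen copy of the previous generator and mixed into each training batch. A generic sketch of that mixing step, with hypothetical names; the paper itself matches features rather than raw samples:

    import torch

    def replay_batch(old_generator, real_batch, z_dim=128):
        # Pseudo-rehearsal: synthesize "old task" samples from a frozen
        # copy of the generator trained on the previous tasks.
        with torch.no_grad():
            z = torch.randn(real_batch.size(0), z_dim)
            replayed = old_generator(z)
        # Mix current-task data with the replayed data in one batch.
        return torch.cat([real_batch, replayed], dim=0)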
Peng, C.[Can],
Zhao, K.[Kun],
Maksoud, S.[Sam],
Li, M.[Meng],
Lovell, B.C.[Brian C.],
SID: Incremental learning for anchor-free object detection via
Selective and Inter-related Distillation,
CVIU(210), 2021, pp. 103229.
Elsevier DOI
2109
Deals with a deep network failing on old tasks after training on new data:
catastrophic forgetting.
Incremental learning, Object detection, Knowledge distillation
BibRef
Wang, M.[Meng],
Guo, Z.B.[Zheng-Bing],
Li, H.F.[Hua-Feng],
A dynamic routing CapsNet based on increment prototype clustering for
overcoming catastrophic forgetting,
IET-CV(16), No. 1, 2022, pp. 83-97.
DOI Link
2202
capsule network, catastrophic forgetting, continual learning,
dynamic routing, prototype clustering
BibRef
Marconato, E.[Emanuele],
Bontempo, G.[Gianpaolo],
Teso, S.[Stefano],
Ficarra, E.[Elisa],
Calderara, S.[Simone],
Passerini, A.[Andrea],
Catastrophic Forgetting in Continual Concept Bottleneck Models,
CL4REAL22(539-547).
Springer DOI
2208
BibRef
Baik, S.[Sungyong],
Oh, J.[Junghoon],
Hong, S.[Seokil],
Lee, K.M.[Kyoung Mu],
Learning to Forget for Meta-Learning via Task-and-Layer-Wise
Attenuation,
PAMI(44), No. 11, November 2022, pp. 7718-7730.
IEEE DOI
2210
Task analysis, Optimization, Adaptation models, Attenuation,
Knowledge engineering, Visualization, Neural networks,
visual tracking
BibRef
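The approach in the entry above selectively forgets parts of a meta-learned initialization: before inner-loop adaptation, each layer of the shared initialization is scaled by a task-conditioned attenuation factor, so layers that would interfere with the new task start closer to zero. A minimal sketch under that reading (the attenuation values would come from a small learned network; fixed values are used here):

    import torch

    def attenuate_init(init_params, gammas):
        # Layer-wise attenuation: gamma near 0 "forgets" a layer's
        # meta-learned initialization, gamma near 1 keeps it intact.
        return [g * p for g, p in zip(gammas, init_params)]

    init = [torch.randn(10, 10), torch.randn(10)]
    gammas = [torch.sigmoid(torch.tensor(-1.0)),
              torch.sigmoid(torch.tensor(2.0))]
    adapted_start = attenuate_init(init, gammas)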
Boschini, M.[Matteo],
Buzzega, P.[Pietro],
Bonicelli, L.[Lorenzo],
Porrello, A.[Angelo],
Calderara, S.[Simone],
Continual semi-supervised learning through contrastive interpolation
consistency,
PRL(162), 2022, pp. 9-14.
Elsevier DOI
2210
Continual learning, Deep learning, Semi-supervised learning,
Weak supervision, Catastrophic forgetting
BibRef
Huang, F.X.[Fu-Xian],
Li, W.C.[Wei-Chao],
Lin, Y.[Yining],
Ji, N.[Naye],
Li, S.J.[Shi-Jian],
Li, X.[Xi],
Memory-efficient distribution-guided experience sampling for policy
consolidation,
PRL(164), 2022, pp. 126-131.
Elsevier DOI
2212
Learn new skills in sequence without forgetting old skills; see the replay-buffer sketch after this entry.
Reinforcement learning, Policy consolidation,
Distribution-guided sampling, Memory efficiency, Distributional neural network
BibRef
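The entry above samples stored experiences under a memory budget so that a consolidated policy retains old skills. As a baseline for what such a buffer looks like, here is a standard reservoir-sampling replay buffer (the paper's distribution-guided sampling refines which transitions are kept; this sketch keeps a uniform sample):

    import random

    class ReservoirBuffer:
        # Fixed-size buffer; every transition ever seen has an equal
        # probability of being retained, regardless of stream length.
        def __init__(self, capacity):
            self.capacity = capacity
            self.data = []
            self.seen = 0

        def add(self, transition):
            self.seen += 1
            if len(self.data) < self.capacity:
                self.data.append(transition)
            else:
                j = random.randrange(self.seen)
                if j < self.capacity:
                    self.data[j] = transition

        def sample(self, k):
            return random.sample(self.data, min(k, len(self.data)))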
Ye, J.W.[Jing-Wen],
Fu, Y.F.[Yi-Fang],
Song, J.[Jie],
Yang, X.Y.[Xing-Yi],
Liu, S.[Songhua],
Jin, X.[Xin],
Song, M.L.[Ming-Li],
Wang, X.C.[Xin-Chao],
Learning with Recoverable Forgetting,
ECCV22(XI:87-103).
Springer DOI
2211
BibRef
Singh, P.[Pravendra],
Mazumder, P.[Pratik],
Karim, M.A.[Mohammed Asad],
Attaining Class-Level Forgetting in Pretrained Model Using Few Samples,
ECCV22(XIII:433-448).
Springer DOI
2211
BibRef
Wang, Z.Y.[Zhen-Yi],
Shen, L.[Li],
Fang, L.[Le],
Suo, Q.L.[Qiu-Ling],
Zhan, D.L.[Dong-Lin],
Duan, T.[Tiehang],
Gao, M.[Mingchen],
Meta-Learning with Less Forgetting on Large-Scale Non-Stationary Task
Distributions,
ECCV22(XX:221-238).
Springer DOI
2211
BibRef
Boschini, M.[Matteo],
Bonicelli, L.[Lorenzo],
Porrello, A.[Angelo],
Bellitto, G.[Giovanni],
Pennisi, M.[Matteo],
Palazzo, S.[Simone],
Spampinato, C.[Concetto],
Calderara, S.[Simone],
Transfer Without Forgetting,
ECCV22(XXIII:692-709).
Springer DOI
2211
BibRef
Liang, M.[Mingfu],
Zhou, J.H.[Jia-Huan],
Wei, W.[Wei],
Wu, Y.[Ying],
Balancing Between Forgetting and Acquisition in Incremental
Subpopulation Learning,
ECCV22(XXVI:364-380).
Springer DOI
2211
BibRef
Mehta, R.[Ronak],
Pal, S.[Sourav],
Singh, V.[Vikas],
Ravi, S.N.[Sathya N.],
Deep Unlearning via Randomized Conditionally Independent Hessians,
CVPR22(10412-10421)
IEEE DOI
2210
Training, Law, Computational modeling, Face recognition, Semantics,
Legislation, Predictive models, Transparency, fairness, Statistical methods
BibRef
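Hessian-based unlearning, as in the entry above, approximates retraining-from-scratch with a single Newton-style correction: the gradient contribution of the deleted points is removed through the inverse Hessian of the remaining loss. A generic sketch of that update (the paper's contribution is selecting a small, conditionally independent parameter subset so the Hessian stays tractable; this sketch applies the plain full-parameter step):

    import torch

    def newton_unlearn_step(w, hessian, grad_deleted):
        # w_new ~ w* + H^{-1} * (sum of gradients of the deleted points):
        # a first-order approximation of the minimizer without them.
        return w + torch.linalg.solve(hessian, grad_deleted)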
Feng, T.[Tao],
Wang, M.[Mang],
Yuan, H.J.[Hang-Jie],
Overcoming Catastrophic Forgetting in Incremental Object Detection
via Elastic Response Distillation,
CVPR22(9417-9426)
IEEE DOI
2210
Location awareness, Training, Codes, Object detection, Detectors,
Feature extraction, retrieval, categorization, Recognition: detection
BibRef
Kim, J.[Junyaup],
Woo, S.S.[Simon S.],
Efficient Two-stage Model Retraining for Machine Unlearning,
HCIS22(4360-4368)
IEEE DOI
2210
Deep learning, Training, Computational modeling,
Data models
BibRef
Ferdinand, Q.[Quentin],
Clement, B.[Benoit],
Oliveau, Q.[Quentin],
Chenadec, G.L.[Gilles Le],
Papadakis, P.[Panagiotis],
Attenuating Catastrophic Forgetting by Joint Contrastive and
Incremental Learning,
CLVision22(3781-3788)
IEEE DOI
2210
Learning systems, Training, Deep learning, Adaptation models,
Conferences, Computational modeling
BibRef
Jain, H.[Himalaya],
Vu, T.H.[Tuan-Hung],
Pérez, P.[Patrick],
Cord, M.[Matthieu],
CSG0: Continual Urban Scene Generation with Zero Forgetting,
CLVision22(3678-3686)
IEEE DOI
2210
Training, Visualization, Costs, Semantics, Memory management,
Generative adversarial networks, Pattern recognition
BibRef
Meng, Q.[Qiang],
Zhang, C.X.[Chi-Xiang],
Xu, X.Q.[Xiao-Qiang],
Zhou, F.[Feng],
Learning Compatible Embeddings,
ICCV21(9919-9928)
IEEE DOI
2203
Training, Degradation, Visualization, Costs, Codes, Image retrieval,
Representation learning, Faces, Recognition and classification
BibRef
Binici, K.[Kuluhan],
Pham, N.T.[Nam Trung],
Mitra, T.[Tulika],
Leman, K.[Karianto],
Preventing Catastrophic Forgetting and Distribution Mismatch in
Knowledge Distillation via Synthetic Data,
WACV22(3625-3633)
IEEE DOI
2202
Deep learning, Energy consumption,
Computational modeling, Neural networks, Memory management,
Image and Video Synthesis
BibRef
Benkert, R.[Ryan],
Aribido, O.J.[Oluwaseun Joseph],
AlRegib, G.[Ghassan],
Explaining Deep Models Through Forgettable Learning Dynamics,
ICIP21(3692-3696)
IEEE DOI
2201
Training, Deep learning, Image segmentation, Semantics,
Predictive models, Data models, Example Forgetting,
Semantic Segmentation
BibRef
Roy, S.[Soumya],
Sau, B.B.[Bharat Bhusan],
Can Selfless Learning improve accuracy of a single classification
task?,
WACV21(4043-4051)
IEEE DOI
2106
Solve the problem of catastrophic forgetting in continual learning.
Training, Neurons, Task analysis
BibRef
Mundt, M.[Martin],
Pliushch, I.[Iuliia],
Ramesh, V.[Visvanathan],
Neural Architecture Search of Deep Priors: Towards Continual Learning
without Catastrophic Interference,
CLVision21(3518-3527)
IEEE DOI
2109
Training, Neural networks,
Interference, Pattern recognition
BibRef
Katakol, S.[Sudeep],
Herranz, L.[Luis],
Yang, F.[Fei],
Mrak, M.[Marta],
DANICE: Domain adaptation without forgetting in neural image
compression,
CLIC21(1921-1925)
IEEE DOI
2109
Video coding, Image coding, Codecs, Transfer learning, Interference
BibRef
Kurmi, V.K.[Vinod K.],
Patro, B.N.[Badri N.],
Subramanian, V.K.[Venkatesh K.],
Namboodiri, V.P.[Vinay P.],
Do not Forget to Attend to Uncertainty while Mitigating Catastrophic
Forgetting,
WACV21(736-745)
IEEE DOI
2106
Deep learning, Uncertainty,
Computational modeling, Estimation, Data models
BibRef
Nguyen, G.[Giang],
Chen, S.[Shuan],
Jun, T.J.[Tae Joon],
Kim, D.[Daeyoung],
Explaining How Deep Neural Networks Forget by Deep Visualization,
EDL-AI20(162-173).
Springer DOI
2103
BibRef
Patra, A.[Arijit],
Chakraborti, T.[Tapabrata],
Learn More, Forget Less: Cues from Human Brain,
ACCV20(IV:187-202).
Springer DOI
2103
BibRef
Liu, Y.[Yu],
Parisot, S.[Sarah],
Slabaugh, G.[Gregory],
Jia, X.[Xu],
Leonardis, A.[Ales],
Tuytelaars, T.[Tinne],
More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm
for Incremental Learning,
ECCV20(XXVI:699-716).
Springer DOI
2011
BibRef
Hayes, T.L.[Tyler L.],
Kafle, K.[Kushal],
Shrestha, R.[Robik],
Acharya, M.[Manoj],
Kanan, C.[Christopher],
Remind Your Neural Network to Prevent Catastrophic Forgetting,
ECCV20(VIII:466-483).
Springer DOI
2011
BibRef
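REMIND replays compressed intermediate CNN features rather than raw images: early layers are frozen, their outputs are stored as product-quantized codes, and decoded features are mixed into every update of the remaining layers. A simplified sketch that stores raw feature tensors instead of quantized codes (class and method names are illustrative):

    import random
    import torch

    class FeatureReplay:
        def __init__(self, capacity=10000):
            self.buffer = []
            self.capacity = capacity

        def add(self, feat, label):
            # Store a mid-level feature (REMIND would store a compact
            # product-quantized code here instead of the raw tensor).
            if len(self.buffer) >= self.capacity:
                self.buffer.pop(random.randrange(len(self.buffer)))
            self.buffer.append((feat.detach(), label))

        def mixed_batch(self, new_items, n_replay=32):
            # new_items: list of (feature_tensor, int_label) pairs.
            old = random.sample(self.buffer, min(n_replay, len(self.buffer)))
            items = list(new_items) + old
            feats = torch.stack([f for f, _ in items])
            labels = torch.tensor([y for _, y in items])
            return feats, labels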
Golatkar, A.[Aditya],
Achille, A.[Alessandro],
Soatto, S.[Stefano],
Forgetting Outside the Box: Scrubbing Deep Networks of Information
Accessible from Input-output Observations,
ECCV20(XXIX:383-398).
Springer DOI
2010
BibRef
Baik, S.,
Hong, S.,
Lee, K.M.,
Learning to Forget for Meta-Learning,
CVPR20(2376-2384)
IEEE DOI
2008
Task analysis, Attenuation, Adaptation models, Optimization,
Training, Neural networks, Loss measurement
BibRef
Zhang, Z.,
Lathuilière, S.,
Ricci, E.,
Sebe, N.,
Yan, Y.,
Yang, J.,
Online Depth Learning Against Forgetting in Monocular Videos,
CVPR20(4493-4502)
IEEE DOI
2008
Adaptation models, Videos, Estimation, Task analysis, Robustness,
Machine learning, Training
BibRef
Davidson, G.,
Mozer, M.C.,
Sequential Mastery of Multiple Visual Tasks: Networks Naturally Learn
to Learn and Forget to Forget,
CVPR20(9279-9290)
IEEE DOI
2008
Task analysis, Training, Visualization, Standards, Neural networks,
Color, Interference
BibRef
Masarczyk, W.,
Tautkute, I.,
Reducing catastrophic forgetting with learning on synthetic data,
CLVision20(1019-1024)
IEEE DOI
2008
Task analysis, Optimization, Generators, Data models,
Neural networks, Training, Computer architecture
BibRef
Golatkar, A.,
Achille, A.,
Soatto, S.,
Eternal Sunshine of the Spotless Net: Selective Forgetting in Deep
Networks,
CVPR20(9301-9309)
IEEE DOI
2008
Training, Neural networks, Data models, Stochastic processes,
Task analysis, Training data
BibRef
Lee, K.,
Lee, K.,
Shin, J.,
Lee, H.,
Overcoming Catastrophic Forgetting With Unlabeled Data in the Wild,
ICCV19(312-321)
IEEE DOI
2004
Code, Neural Networks.
WWW Link.
image sampling, learning (artificial intelligence), neural nets,
distillation loss, global distillation, learning strategy,
Neural networks
BibRef
Nwe, T.L.[Tin Lay],
Nataraj, B.[Balaji],
Xie, S.D.[Shu-Dong],
Li, Y.Q.[Yi-Qun],
Lin, D.Y.[Dong-Yun],
Sheng, D.[Dong],
Discriminative Features for Incremental Learning Classifier,
ICIP19(1990-1994)
IEEE DOI
1910
Incremental learning, Context Aware Advertisement,
Few-shot incremental learning, Discriminative features,
Catastrophic forgetting
BibRef
Shmelkov, K.,
Schmid, C.,
Alahari, K.,
Incremental Learning of Object Detectors without Catastrophic
Forgetting,
ICCV17(3420-3429)
IEEE DOI
1802
learning (artificial intelligence), neural nets,
object detection, COCO datasets, PASCAL VOC 2007, annotations,
Training data
BibRef
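Shmelkov et al. fine-tune a detector on new classes while a frozen copy of the original detector supervises the old ones: for the same proposals, the updated network's classification and box-regression outputs for old classes are pulled toward the frozen network's outputs. A minimal sketch of such a distillation term (illustrative; the paper uses mean-subtracted logits):

    import torch.nn.functional as F

    def detect_distill_loss(new_cls, new_box, old_cls, old_box):
        # new_cls: (N, C_old) logits of the updated detector restricted
        # to old classes; old_*: the same proposals through the frozen
        # original detector.
        return F.mse_loss(new_cls, old_cls) + F.mse_loss(new_box, old_box)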
Rannen, A.[Amal],
Aljundi, R.[Rahaf],
Blaschko, M.B.[Matthew B.],
Tuytelaars, T.[Tinne],
Encoder Based Lifelong Learning,
ICCV17(1329-1337)
IEEE DOI
1802
Learning usually adapts to the most recent task; here a whole sequence of
tasks must be retained.
feature extraction, image classification,
learning (artificial intelligence), catastrophic forgetting,
Training
BibRef
Aljundi, R.[Rahaf],
Babiloni, F.[Francesca],
Elhoseiny, M.[Mohamed],
Rohrbach, M.[Marcus],
Tuytelaars, T.[Tinne],
Memory Aware Synapses: Learning What (not) to Forget,
ECCV18(III: 144-161).
Springer DOI
1810
BibRef
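Memory Aware Synapses scores each parameter's importance by how strongly the squared norm of the network's output reacts to it, accumulated over (possibly unlabeled) data; the next task is then penalized for moving important parameters. A compact PyTorch sketch (model and loader are assumed placeholders):

    import torch

    def mas_importance(model, loader):
        # Omega_i accumulates |d ||f(x)||^2 / d theta_i| over the data;
        # no labels are needed.
        omega = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        batches = 0
        for x, _ in loader:
            model.zero_grad()
            model(x).pow(2).sum().backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    omega[n] += p.grad.abs()
            batches += 1
        return {n: w / max(batches, 1) for n, w in omega.items()}

    def mas_penalty(model, omega, old_params, lam=1.0):
        # Quadratic penalty for drifting from the previous parameters.
        loss = 0.0
        for n, p in model.named_parameters():
            loss = loss + (omega[n] * (p - old_params[n]) ** 2).sum()
        return lam * loss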
Liu, X.L.[Xia-Lei],
Masana, M.,
Herranz, L.,
van de Weijer, J.[Joost],
López, A.M.,
Bagdanov, A.D.[Andrew D.],
Rotate your Networks: Better Weight Consolidation and Less
Catastrophic Forgetting,
ICPR18(2262-2268)
IEEE DOI
1812
Task analysis, Training, Training data, Neural networks, Data models,
Standards
BibRef
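The entry above improves elastic weight consolidation, whose penalty weights parameter drift by a diagonal Fisher information estimate; the proposed rotation re-parameterizes the network so that this diagonal approximation is more faithful. A sketch of the baseline diagonal Fisher estimate the method builds on (placeholder model and loader):

    import torch
    import torch.nn.functional as F

    def diagonal_fisher(model, loader):
        # Squared gradients of the log-likelihood, averaged over data:
        # the diagonal Fisher used by EWC-style consolidation.
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        batches = 0
        for x, y in loader:
            model.zero_grad()
            F.nll_loss(F.log_softmax(model(x), dim=1), y).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.pow(2)
            batches += 1
        return {n: f / max(batches, 1) for n, f in fisher.items()}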
Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Convolutional Neural Networks for Object Detection and Segmentation.