14.2.6.2 Reinforcement Learning

Chapter Contents
Reinforcement Learning. Positive and negative reinforcement.
See also Transfer Learning from Other Tasks, Other Classes.
See also Continual Learning.
See also Dynamic Learning, Incremental Learning.

Shoeleh, F.[Farzaneh], Asadpour, M.[Masoud],
Graph based skill acquisition and transfer Learning for continuous reinforcement learning domains,
PRL(87), No. 1, 2017, pp. 104-116.
Elsevier DOI 1703
Reinforcement learning BibRef

Koo, S.[Sangjun], Yu, H.[Hwanjo], Lee, G.G.[Gary Geunbae],
Adversarial approach to domain adaptation for reinforcement learning on dialog systems,
PRL(128), 2019, pp. 467-473.
Elsevier DOI 1912
Dialog systems, Reinforcement learning, Domain adaptation, Transfer learning, Deep Q Network, Adversarial networks BibRef
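
The keyword list above mentions a Deep Q Network. Purely for orientation, and not as the method of the cited paper, a minimal DQN temporal-difference update in PyTorch could be sketched as below; QNet, dqn_update, and the hyperparameters are illustrative assumptions.

# Generic DQN-style TD update (illustrative sketch only; not the cited paper's method).
import torch
import torch.nn as nn

class QNet(nn.Module):  # hypothetical small Q-network
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, obs):
        return self.net(obs)

def dqn_update(q, q_target, optimizer, batch, gamma=0.99):
    """One temporal-difference update on a batch of (s, a, r, s', done) tensors."""
    s, a, r, s2, done = batch
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s, a) for the taken actions
    with torch.no_grad():                                   # bootstrapped target from the target network
        target = r + gamma * (1.0 - done) * q_target(s2).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()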

Agarwal, M.[Mridul], Aggarwal, V.[Vaneet],
Blind decision making: Reinforcement learning with delayed observations,
PRL(150), 2021, pp. 176-182.
Elsevier DOI 2109
BibRef

Hwang, R.[Rakhoon], Lee, H.J.[Han-Jin], Hwang, H.J.[Hyung Ju],
Option compatible reward inverse reinforcement learning,
PRL(154), 2022, pp. 83-89.
Elsevier DOI 2202
Reinforcement learning, Inverse reinforcement learning, Transfer learning, Machine learning BibRef

Nicholaus, I.T.[Isack Thomas], Kang, D.K.[Dae-Ki],
Robust experience replay sampling for multi-agent reinforcement learning,
PRL(155), 2022, pp. 135-142.
Elsevier DOI 2203
Reinforcement learning, Multi-agent, Sampling, Experience replay BibRef
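
Several entries in this section concern experience replay. As a generic point of reference only, and not the robust multi-agent sampling scheme of the cited paper, a plain uniform replay buffer is sketched below; the ReplayBuffer class and its capacity are illustrative assumptions.

# Generic uniform experience-replay buffer (illustrative sketch only).
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted first

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform sampling; prioritized or robust variants reweight this step.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)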

Wang, J.[Jiao], Zhang, L.[Lemin], He, Z.Q.[Zhi-Qiang], Zhu, C.[Can], Zhao, Z.H.[Zi-Hui],
Erlang planning network: An iterative model-based reinforcement learning with multi-perspective,
PR(128), 2022, pp. 108668.
Elsevier DOI 2205
Model-based reinforcement learning, Multi-perspective, Bi-level, Planning, Trajectory imagination BibRef

Li, M.[Min], Huang, T.Y.[Tian-Yi], Zhu, W.[William],
Clustering experience replay for the effective exploitation in reinforcement learning,
PR(131), 2022, pp. 108875.
Elsevier DOI 2208
Reinforcement learning, Clustering, Experience replay, Exploitation efficiency, Time division BibRef

Tosatto, S.[Samuele], Carvalho, J.[João], Peters, J.[Jan],
Batch Reinforcement Learning With a Nonparametric Off-Policy Policy Gradient,
PAMI(44), No. 10, October 2022, pp. 5996-6010.
IEEE DOI 2209
Mathematical model, Estimation, Kernel, Reinforcement learning, Monte Carlo methods, Task analysis, Closed-form solutions, nonparametric estimation BibRef

Guo, S.Q.[Shang-Qi], Yan, Q.[Qi], Su, X.[Xin], Hu, X.L.[Xiao-Lin], Chen, F.[Feng],
State-Temporal Compression in Reinforcement Learning With the Reward-Restricted Geodesic Metric,
PAMI(44), No. 9, September 2022, pp. 5572-5589.
IEEE DOI 2208
Measurement, Task analysis, Reinforcement learning, Neural networks, Time-domain analysis, reinforcement learning (RL) BibRef

Xu, T.[Tian], Li, Z.N.[Zi-Niu], Yu, Y.[Yang],
Error Bounds of Imitating Policies and Environments for Reinforcement Learning,
PAMI(44), No. 10, October 2022, pp. 6968-6980.
IEEE DOI 2209
Planning, Reinforcement learning, Cloning, Complexity theory, Supervised learning, Decision making, Upper bound, model-based reinforcement learning BibRef

Li, Y.[Yun], Liu, Z.[Zhe], Yao, L.[Lina], Wang, X.Z.[Xian-Zhi], McAuley, J.[Julian], Chang, X.J.[Xiao-Jun],
An Entropy-Guided Reinforced Partial Convolutional Network for Zero-Shot Learning,
CirSysVideo(32), No. 8, August 2022, pp. 5175-5186.
IEEE DOI 2208
Convolution, Feature extraction, Semantics, Visualization, Training, Optimization, Kernel, Zero-shot learning, reinforcement learning, image representation BibRef

Feng, J.[Jie], Li, D.[Di], Gu, J.[Jing], Cao, X.H.[Xiang-Hai], Shang, R.H.[Rong-Hua], Zhang, X.R.[Xiang-Rong], Jiao, L.C.[Li-Cheng],
Deep Reinforcement Learning for Semisupervised Hyperspectral Band Selection,
GeoRS(60), 2022, pp. 1-19.
IEEE DOI 2112
Hyperspectral imaging, Reinforcement learning, Optimization, Deep learning, Task analysis, Neural networks, semisupervised learning BibRef

Akrour, R.[Riad], Tateo, D.[Davide], Peters, J.[Jan],
Continuous Action Reinforcement Learning From a Mixture of Interpretable Experts,
PAMI(44), No. 10, October 2022, pp. 6795-6806.
IEEE DOI 2209
Task analysis, Complexity theory, Approximation algorithms, Neural networks, Trajectory, Reinforcement learning, robotics BibRef

Zhang, M.Y.[Meng-Yang], Tian, G.H.[Guo-Hui], Gao, H.B.[Huan-Bing], Zhang, Y.[Ying],
Autonomous Generation of Service Strategy for Household Tasks: A Progressive Learning Method With A Priori Knowledge and Reinforcement Learning,
CirSysVideo(32), No. 11, November 2022, pp. 7473-7488.
IEEE DOI 2211
Correlation, Task analysis, Reinforcement learning, Artificial neural networks. BibRef

Li, W.H.[Wen-Hao], Wang, X.F.[Xiang-Feng], Jin, B.[Bo], Luo, D.[Dijun], Zha, H.Y.[Hong-Yuan],
Structured Cooperative Reinforcement Learning With Time-Varying Composite Action Space,
PAMI(44), No. 11, November 2022, pp. 8618-8634.
IEEE DOI 2210
Agriculture, Aerospace electronics, Task analysis, Reinforcement learning, Games, Carbon dioxide, Robustness, time-varying action space BibRef

Zhu, R.[Rongbo], Li, M.Y.[Meng-Yao], Liu, H.[Hao], Liu, L.[Lu], Ma, M.[Maode],
Federated Deep Reinforcement Learning-Based Spectrum Access Algorithm With Warranty Contract in Intelligent Transportation Systems,
ITS(24), No. 1, January 2023, pp. 1178-1190.
IEEE DOI 2301
Contracts, Warranties, Resource management, Quality of service, Real-time systems, Heuristic algorithms, Vehicle dynamics, quality of service BibRef

Hu, T.M.[Tian-Meng], Luo, B.[Biao], Yang, C.H.[Chun-Hua], Huang, T.W.[Ting-Wen],
MO-MIX: Multi-Objective Multi-Agent Cooperative Decision-Making With Deep Reinforcement Learning,
PAMI(45), No. 10, October 2023, pp. 12098-12112.
IEEE DOI 2310
BibRef

Zhao, L.Y.[Lin-Ya], Tan, K.[Kun], Wang, X.[Xue], Ding, J.W.[Jian-Wei], Liu, Z.X.[Zhao-Xian], Ma, H.L.[Hui-Lin], Han, B.[Bo],
Hyperspectral Feature Selection for SOM Prediction Using Deep Reinforcement Learning and Multiple Subset Evaluation Strategies,
RS(15), No. 1, 2023, pp. xx-yy.
DOI Link 2301
BibRef

Huang, F.X.[Fu-Xian], Ji, N.[Naye], Ni, H.J.[Hua-Jian], Li, S.J.[Shi-Jian], Li, X.[Xi],
Adaptive cooperative exploration for reinforcement learning from imperfect demonstrations,
PRL(165), 2023, pp. 176-182.
Elsevier DOI 2301
Reinforcement learning, Imitation learning, Cooperative exploration, Imperfect demonstrations BibRef

Huang, F.X.[Fu-Xian], Li, W.C.[Wei-Chao], Cui, J.B.[Jia-Bao], Fu, Y.J.[Yong-Jian], Li, X.[Xi],
Unified curiosity-Driven learning with smoothed intrinsic reward estimation,
PR(123), 2022, pp. 108352.
Elsevier DOI 2112
Reinforcement learning, Unified curiosity-driven exploration, Robust intrinsic reward, Task-relevant feature BibRef

Gomez, D.[Diego], Quijano, N.[Nicanor], Giraldo, L.F.[Luis Felipe],
Information Optimization and Transferable State Abstractions in Deep Reinforcement Learning,
PAMI(45), No. 4, April 2023, pp. 4782-4793.
IEEE DOI 2303
Task analysis, Reinforcement learning, Multitasking, Transfer learning, Optimization, Standards, Behavioral sciences, information theory BibRef

Zhu, Z.D.[Zhuang-Di], Lin, K.X.[Kai-Xiang], Jain, A.K.[Anil K.], Zhou, J.[Jiayu],
Transfer Learning in Deep Reinforcement Learning: A Survey,
PAMI(45), No. 11, November 2023, pp. 13344-13362.
IEEE DOI 2310
Survey, Transfer Learning. BibRef

Saengkyongam, S.[Sorawit], Thams, N.[Nikolaj], Peters, J.[Jonas], Pfister, N.[Niklas],
Invariant Policy Learning: A Causal Perspective,
PAMI(45), No. 7, July 2023, pp. 8606-8620.
IEEE DOI 2306
Training, Visualization, Reinforcement learning, Random variables, Particle measurements, Heuristic algorithms, off-policy learning BibRef

Huang, H.C.[Han-Chi], Ye, D.H.[De-Heng], Shen, L.[Li], Liu, W.[Wei],
Curriculum-Based Asymmetric Multi-Task Reinforcement Learning,
PAMI(45), No. 6, June 2023, pp. 7258-7269.
IEEE DOI 2305
Task analysis, Training, Multitasking, Reinforcement learning, Optimization, Interference, Supervised learning, asymmetric multi-task learning BibRef

Zhang, T.R.[Tian-Ren], Guo, S.Q.[Shang-Qi], Tan, T.[Tian], Hu, X.L.[Xiao-Lin], Chen, F.[Feng],
Adjacency Constraint for Efficient Hierarchical Reinforcement Learning,
PAMI(45), No. 4, April 2023, pp. 4152-4166.
IEEE DOI 2303
Task analysis, Reinforcement learning, Training, Random variables, Postal services, Markov processes, Games, adjacency constraint BibRef

Deng, Z.H.[Zhi-Hong], Fu, Z.[Zuyue], Wang, L.X.[Ling-Xiao], Yang, Z.[Zhuoran], Bai, C.J.[Chen-Jia], Zhou, T.Y.[Tian-Yi], Wang, Z.R.[Zhao-Ran], Jiang, J.[Jing],
False Correlation Reduction for Offline Reinforcement Learning,
PAMI(46), No. 2, February 2024, pp. 1199-1211.
IEEE DOI 2401
False correlation, offline reinforcement learning, uncertainty estimation BibRef

Zhang, L.Y.[Liang-Yu], Peng, Y.[Yang], Yang, W.H.[Wen-Hao], Zhang, Z.H.[Zhi-Hua],
Semi-Infinitely Constrained Markov Decision Processes and Provably Efficient Reinforcement Learning,
PAMI(46), No. 5, May 2024, pp. 3722-3735.
IEEE DOI 2404
Generalization of constrained Markov decision processes.
Reinforcement learning, Complexity theory, Programming, Markov processes, Approximation algorithms, Marine vehicles, Costs, semi-infinite programming BibRef


Choi, H.[Hyesong], Lee, H.[Hunsang], Song, W.[Wonil], Jeon, S.[Sangryul], Sohn, K.H.[Kwang-Hoon], Min, D.B.[Dong-Bo],
Local-Guided Global: Paired Similarity Representation for Visual Reinforcement Learning,
CVPR23(15072-15082)
IEEE DOI 2309
BibRef

Huang, Y.R.[Yang-Ru], Peng, P.X.[Pei-Xi], Zhao, Y.F.[Yi-Fan], Zhai, Y.P.[Yun-Peng], Xu, H.R.[Hao-Ran], Tian, Y.H.[Yong-Hong],
Simoun: Synergizing Interactive Motion-appearance Understanding for Vision-based Reinforcement Learning,
ICCV23(176-185)
IEEE DOI 2401
BibRef

Zhai, Y.P.[Yun-Peng], Peng, P.X.[Pei-Xi], Zhao, Y.F.[Yi-Fan], Huang, Y.R.[Yang-Ru], Tian, Y.H.[Yong-Hong],
Stabilizing Visual Reinforcement Learning via Asymmetric Interactive Cooperation,
ICCV23(207-216)
IEEE DOI 2401
BibRef

Choi, H.[Hyesong], Lee, H.[Hunsang], Jeong, S.W.[Seong-Won], Min, D.B.[Dong-Bo],
Environment Agnostic Representation for Visual Reinforcement Learning,
ICCV23(263-273)
IEEE DOI 2401
Code: WWW Link.
BibRef

Liu, H.Z.[Hao-Zhe], Zhuge, M.[Mingchen], Li, B.[Bing], Wang, Y.H.[Yu-Hui], Faccio, F.[Francesco], Ghanem, B.[Bernard], Schmidhuber, J.[Jürgen],
Learning to Identify Critical States for Reinforcement Learning from Videos,
ICCV23(1955-1965)
IEEE DOI 2401
Code: WWW Link.
BibRef

Nie, C.[Chang], Wang, G.M.[Guang-Ming], Liu, Z.[Zhe], Cavalli, L.[Luca], Pollefeys, M.[Marc], Wang, H.S.[He-Sheng],
RLSAC: Reinforcement Learning enhanced Sample Consensus for End-to-End Robust Estimation,
ICCV23(9857-9866)
IEEE DOI 2401
Code: WWW Link.
BibRef

Liu, S.[Siao], Chen, Z.Y.[Zhao-Yu], Liu, Y.[Yang], Wang, Y.Z.[Yu-Zheng], Yang, D.[Dingkang], Zhao, Z.[Zhile], Zhou, Z.Q.[Zi-Qing], Yi, X.[Xie], Li, W.[Wei], Zhang, W.Q.[Wen-Qiang], Gan, Z.X.[Zhong-Xue],
Improving Generalization in Visual Reinforcement Learning via Conflict-aware Gradient Agreement Augmentation,
ICCV23(23379-23389)
IEEE DOI 2401
BibRef

Klinghoffer, T.[Tzofi], Tiwary, K.[Kushagra], Behari, N.[Nikhil], Agrawalla, B.[Bhavya], Raskar, R.[Ramesh],
DISeR: Designing Imaging Systems with Reinforcement Learning,
ICCV23(23575-23585)
IEEE DOI 2401
Code: WWW Link.
BibRef

Fang, F.[Fen], Liang, W.Y.[Wen-Yu], Wu, Y.[Yan], Xu, Q.L.[Qian-Li], Lim, J.H.[Joo-Hwee],
Improving Generalization of Reinforcement Learning Using a Bilinear Policy Network,
ICIP22(991-995)
IEEE DOI 2211
Representation learning, Visualization, Reinforcement learning, Object detection, Games, Feature extraction, Path planning, Generalization BibRef

Lucchesi, N.[Nicolò], Carta, A.[Antonio], Lomonaco, V.[Vincenzo], Bacciu, D.[Davide],
Avalanche RL: A Continual Reinforcement Learning Library,
CIAP22(I:524-535).
Springer DOI 2205
BibRef

Wang, X.D.[Xu-Dong], Lian, L.[Long], Yu, S.X.[Stella X.],
Unsupervised Visual Attention and Invariance for Reinforcement Learning,
CVPR21(6673-6683)
IEEE DOI 2111
Training, Visualization, Annotations, Reinforcement learning, Manuals, Benchmark testing BibRef

García-Ramírez, J.[Jesús], Morales, E.[Eduardo], Escalante, H.J.[Hugo Jair],
Multi-source Transfer Learning for Deep Reinforcement Learning,
MCPR21(131-140).
Springer DOI 2108
BibRef

Zhang, Z.Z.[Zi-Zhao], Pfister, T.[Tomas],
Learning Fast Sample Re-weighting Without Reward Data,
ICCV21(705-714)
IEEE DOI 2203
Training, Costs, Limiting, Computational modeling, Reinforcement learning, Noise robustness, Noise measurement, Machine learning architectures and formulations BibRef

Hong, J.[Jie], Fang, P.F.[Peng-Fei], Li, W.H.[Wei-Hao], Zhang, T.[Tong], Simon, C.[Christian], Harandi, M.[Mehrtash], Petersson, L.[Lars],
Reinforced Attention for Few-Shot Learning and Beyond,
CVPR21(913-923)
IEEE DOI 2111
Image recognition, Computational modeling, Reinforcement learning, Prediction algorithms, Data models, Pattern recognition BibRef

Zhang, Y.S.[You-Shan], Ye, H.[Hui], Davison, B.D.[Brian D.],
Adversarial Reinforcement Learning for Unsupervised Domain Adaptation,
WACV21(635-644)
IEEE DOI 2106
BibRef
Earlier: A1, A3, Only:
Adversarial Continuous Learning in Unsupervised Domain Adaptation,
DLPR20(672-687).
Springer DOI 2103
Adaptation models, Computational modeling, Neural networks, Reinforcement learning, Feature extraction. BibRef

Lomonaco, V., Desai, K., Culurciello, E., Maltoni, D.,
Continual Reinforcement Learning in 3D Non-stationary Environments,
CLVision20(999-1008)
IEEE DOI 2008
Task analysis, Learning (artificial intelligence), Benchmark testing, Color, Training, Complexity theory BibRef

Zhu, L.C.[Lin-Chao], Arik, S.Ö.[Sercan Ö.], Yang, Y.[Yi], Pfister, T.[Tomas],
Learning to Transfer Learn: Reinforcement Learning-based Selection for Adaptive Transfer Learning,
ECCV20(XXVII:342-358).
Springer DOI 2011
BibRef

Manteghi, S.[Sajad], Parvin, H.[Hamid], Heidarzadegan, A.[Ali], Nemati, Y.[Yasser],
Multitask Reinforcement Learning in Nondeterministic Environments: Maze Problem Case,
MCPR15(64-73).
Springer DOI 1506
BibRef

Garcia, E.O.[Esteban O.], de Cote, E.M.[Enrique Munoz], Morales, E.F.[Eduardo F.],
Qualitative Transfer for Reinforcement Learning with Continuous State and Action Spaces,
CIARP13(I:198-205).
Springer DOI 1311
BibRef

Chapter on Pattern Recognition, Clustering, Statistics, Grammars, Learning, Neural Nets, Genetic Algorithms continues in
Subspace Clustering, Subspace Learning.


Last update: Apr 27, 2024 at 11:46:35