Robust Reinforcement Learning under Model Uncertainty

Monday, May 15, 2023 10 a.m. to 11 a.m.

Speaker: Mr. Yue Wang

From: University at Buffalo

Abstract

Reinforcement learning has achieved remarkable success in various applications, such as video games, the game of Go, and the development of ChatGPT. However, most traditional reinforcement learning methods suffer from performance degradation due to the gap between simulation and reality. This gap occurs when the environment where the policy is learned differs from the environment where the policy is deployed, leading to uncertainty about the learned policy's actual performance.

Robust reinforcement learning addresses this issue by optimizing worst-case performance in the face of distributional model uncertainty. In this talk, I will first introduce our recent results on robust reinforcement learning under the average-reward setting, which provides a robust guarantee on the accumulated reward for a system operating over an extended period. I will discuss the fundamentals of robust average-reward RL and present our model-based and model-free methods. I will then present our results on online model-free robust RL methods in the presence of an adversarially perturbed environment. Our approaches include value-based methods for small-scale problems and policy-based methods for large-scale ones.
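To make the worst-case idea concrete, here is a minimal sketch of robust value iteration on a toy discounted MDP, assuming an R-contamination uncertainty set (the adversary may move a fraction rho of the transition probability anywhere). All quantities here (MDP size, rewards, rho) are illustrative assumptions, not details from the talk.

```python
import numpy as np

# Toy MDP: nominal kernel P0 and rewards R are randomly generated for illustration.
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)
P0 = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # shape (S, A, S)
R = rng.uniform(size=(n_states, n_actions))                        # shape (S, A)
gamma, rho = 0.9, 0.1  # discount factor and contamination radius (assumed values)

V = np.zeros(n_states)
for _ in range(1000):
    # Robust Bellman update under R-contamination: the worst-case kernel
    # places the rho mass on the state with the smallest value.
    worst_next = (1 - rho) * (P0 @ V) + rho * V.min()  # shape (S, A)
    V_new = (R + gamma * worst_next).max(axis=1)       # greedy over actions
    if np.abs(V_new - V).max() < 1e-10:
        V = V_new
        break
    V = V_new

# Greedy robust policy with respect to the converged value function.
policy = (R + gamma * ((1 - rho) * (P0 @ V) + rho * V.min())).argmax(axis=1)
print(V, policy)
```

The contraction argument for the standard Bellman operator carries over, so the iteration converges to the robust value function; larger rho yields a more conservative (lower) value.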


Locations:

HEC 101A

Calendar:

CS/CRCV Seminars

Category:

Speaker/Lecture/Seminar

Tags:

UCFCRCV