Improving Offline Reinforcement Learning with Inaccurate Simulators


Offline reinforcement learning (RL) provides a promising approach to avoid costly online interaction with the real environment. However, the performance of offline RL heavily depends on the quality of the dataset, and limited coverage may cause extrapolation error in the learning process. In many robotic applications, an inaccurate simulator is often available. However, data collected from the inaccurate simulator cannot be used directly in offline RL due to the well-known exploration-exploitation dilemma and the dynamics gap between the inaccurate simulation and the real environment. To address these issues, we propose a novel approach that combines the offline dataset and the inaccurate simulation data in a better manner. Specifically, we pre-train a generative adversarial network (GAN) model to fit the state distribution of the offline dataset. Given this, we collect data from the inaccurate simulator starting from states provided by the generator and reweight the simulated data using the discriminator. Our experimental results on the D4RL benchmark and a real-world manipulation task confirm that our method benefits more from both the inaccurate simulator and the limited offline dataset, achieving better performance than state-of-the-art methods.
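The sketch below illustrates the idea described in the abstract in code form; it is not the authors' implementation. A small GAN is fit to the offline state distribution, the generator is then used to propose reset states for simulator rollouts, and the discriminator score is used to down-weight simulated transitions that look unlike the offline data. The state dimension, network sizes, and the sigmoid reweighting rule are illustrative assumptions.

# Minimal sketch (assumptions noted above), PyTorch
import torch
import torch.nn as nn

STATE_DIM, NOISE_DIM = 17, 32  # assumed dimensions

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, STATE_DIM),
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
    def forward(self, s):
        return self.net(s)  # logit: high = resembles an offline-dataset state

def train_gan(offline_states, steps=10000, batch=256, lr=2e-4):
    """Fit the GAN to the empirical state distribution of the offline dataset."""
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        idx = torch.randint(0, offline_states.shape[0], (batch,))
        real = offline_states[idx]
        fake = G(torch.randn(batch, NOISE_DIM))
        # Discriminator step: offline states vs. generated states.
        d_loss = bce(D(real), torch.ones(batch, 1)) + \
                 bce(D(fake.detach()), torch.zeros(batch, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: make generated states indistinguishable from offline states.
        g_loss = bce(D(fake), torch.ones(batch, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return G, D

def sample_reset_states(G, n):
    """Generator samples used to initialize rollouts in the inaccurate simulator."""
    with torch.no_grad():
        return G(torch.randn(n, NOISE_DIM))

def transition_weight(D, state):
    """One plausible reweighting rule (an assumption): sigmoid of the discriminator
    logit, so simulated transitions far from the offline state distribution are
    down-weighted when mixed with the offline data for offline RL training."""
    with torch.no_grad():
        return torch.sigmoid(D(state))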

Yiwen Hou, Haoyuan Sun, Jinming Ma, Feng Wu. Improving Offline Reinforcement Learning with Inaccurate Simulators. In 2024 IEEE International Conference on Robotics and Automation (ICRA), pages 5162-5168, 2024.
@inproceedings{HSMWicra24,
 author = {Yiwen Hou and Haoyuan Sun and Jinming Ma and Feng Wu},
 booktitle = {2024 IEEE International Conference on Robotics and Automation (ICRA)},
 pages = {5162--5168},
 title = {Improving Offline Reinforcement Learning with Inaccurate Simulators},
 year = {2024}
}