Multimodal Reinforcement Learning with Effective State Representation Learning


Many real-world applications require an agent to make robust and deliberate decisions with multimodal information (e.g., robots with multi-sensory inputs). However, training such an agent via reinforcement learning (RL) is very challenging due to the heterogeneity and dynamic importance of the different modalities. Specifically, we observe that these issues make it difficult for conventional RL methods to learn a useful state representation when trained end-to-end on multimodal information. To address this, we propose a novel multimodal RL approach that aligns the modalities according to their similarity and enhances them according to their importance for the RL task. By doing so, we learn an effective state representation and consequently improve the RL training process. We evaluate our approach on several multimodal RL domains, showing that it outperforms state-of-the-art methods in terms of learning speed and policy quality.
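To make the two ideas in the abstract concrete, below is a minimal PyTorch sketch of a multimodal state encoder that (1) aligns modality embeddings by their similarity and (2) weights modalities by a learned importance score before fusing them into a state representation. The encoder structure, the cosine-similarity alignment loss, and the attention-style importance weighting are illustrative assumptions, not the paper's actual architecture; the alignment loss would be added as an auxiliary term to the RL objective.

```python
# A hypothetical sketch of "multimodal alignment" and "importance enhancement"
# for state representation learning. Module names and loss forms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultimodalStateEncoder(nn.Module):
    def __init__(self, image_dim: int, proprio_dim: int, embed_dim: int = 64):
        super().__init__()
        # One encoder per modality (here: flat image features + proprioception).
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, embed_dim), nn.ReLU())
        self.proprio_encoder = nn.Sequential(nn.Linear(proprio_dim, embed_dim), nn.ReLU())
        # Importance scorer: one logit per modality embedding.
        self.importance = nn.Linear(embed_dim, 1)

    def forward(self, image, proprio):
        z = torch.stack(
            [self.image_encoder(image), self.proprio_encoder(proprio)], dim=1
        )  # (batch, n_modalities, embed_dim)

        # Alignment: pull the two modality embeddings toward each other,
        # so heterogeneous inputs land in a shared representation space.
        align_loss = 1.0 - F.cosine_similarity(z[:, 0], z[:, 1], dim=-1).mean()

        # Importance enhancement: softmax over per-modality scores, then a
        # weighted sum gives the fused state representation. The weights can
        # shift per step as the relevance of each modality changes.
        weights = torch.softmax(self.importance(z).squeeze(-1), dim=1)  # (batch, n_mod)
        state = (weights.unsqueeze(-1) * z).sum(dim=1)  # (batch, embed_dim)
        return state, align_loss, weights


if __name__ == "__main__":
    enc = MultimodalStateEncoder(image_dim=128, proprio_dim=16)
    state, align_loss, weights = enc(torch.randn(4, 128), torch.randn(4, 16))
    print(state.shape, align_loss.item(), weights[0])
```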

Jinming Ma, Yingfeng Chen, Feng Wu, Xianpeng Ji, Yu Ding. Multimodal Reinforcement Learning with Effective State Representation Learning. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 1684-1686, Online, May 2022.
@inproceedings{MCWJDaamas22,
 address = {Online},
 author = {Jinming Ma and Yingfeng Chen and Feng Wu and Xianpeng Ji and Yu Ding},
 booktitle = {Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (AAMAS)},
 month = {May},
 pages = {1684--1686},
 title = {Multimodal Reinforcement Learning with Effective State Representation Learning},
 url = {https://dl.acm.org/doi/10.5555/3535850.3536076},
 year = {2022}
}