Learning to act with affordance-aware multimodal neural SLAM
2022
Recent years have witnessed an emerging paradigm shift toward embodied artificial intelligence, in which an agent must learn to solve challenging tasks by interacting with its environment. Solving embodied multimodal tasks raises several challenges, including long-horizon planning, vision-and-language grounding, and efficient exploration. We focus on a critical bottleneck: the performance of planning and navigation. To tackle this challenge, we propose a Neural SLAM approach that, for the first time, simultaneously uses multiple modalities for exploration, predicts an affordance-aware semantic map, and plans over that map. This significantly improves exploration efficiency, leads to robust long-horizon planning, and enables effective vision-and-language grounding. The proposed Affordance-aware Multimodal Neural SLAM (AMSLAM) approach achieves competitive generalization performance on the ALFRED benchmark, ranking second among all currently published work in both task success rate and goal-condition success rate on the test unseen split, and first in goal-condition success rate on the valid unseen split.
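The abstract gives no implementation details, so the following is only a rough illustrative sketch of the map-and-plan loop it describes: egocentric affordance predictions are fused into an allocentric grid map, and a planner searches that map for cells affording the required interaction. All names (update_affordance_map, plan_to_affordance, MAP_SIZE, NUM_AFFORDANCES) and the simple BFS planner are assumptions for illustration, not the paper's actual model.

```python
import numpy as np
from collections import deque

# Hypothetical map layout: channel 0 = obstacles, remaining channels =
# per-affordance scores (e.g. "pickupable", "openable").
MAP_SIZE, NUM_AFFORDANCES = 240, 7


def update_affordance_map(sem_map, ego_obs, agent_pose):
    """Fuse a new egocentric prediction into the allocentric affordance map.

    sem_map:    (1 + NUM_AFFORDANCES, MAP_SIZE, MAP_SIZE) running map.
    ego_obs:    same channel layout, assumed to come from a (not shown)
                segmentation/affordance head applied to the current RGB frame.
    agent_pose: (row, col) grid cell of the agent; rotation omitted for brevity.
    """
    r, c = agent_pose
    h = min(ego_obs.shape[1], MAP_SIZE - r)
    w = min(ego_obs.shape[2], MAP_SIZE - c)
    # Keep the per-cell maximum so affordances persist once observed.
    window = sem_map[:, r:r + h, c:c + w]
    sem_map[:, r:r + h, c:c + w] = np.maximum(window, ego_obs[:, :h, :w])
    return sem_map


def plan_to_affordance(sem_map, start, affordance_id, threshold=0.5):
    """Breadth-first search over free space toward the nearest cell whose
    requested affordance score exceeds a threshold (a stand-in for the
    learned planning described in the paper)."""
    obstacles = sem_map[0] > threshold
    goal_mask = sem_map[1 + affordance_id] > threshold
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if goal_mask[r, c]:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < MAP_SIZE and 0 <= nc < MAP_SIZE
                    and not obstacles[nr, nc] and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # affordance not yet observed; the agent would keep exploring
```

In this sketch the language instruction would only enter through the choice of affordance_id; the actual system grounds instructions with learned multimodal components rather than a hand-coded lookup.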