Large-scale indoor mapping with failure detection and recovery in SLAM
2024
This paper addresses the failure detection and recovery problem in visual-inertial Simultaneous Localization and Mapping (SLAM) systems for large-scale indoor environments. Cameras and Inertial Measurement Units (IMUs) are popular sensor choices for SLAM in many robotics tasks (e.g., navigation) due to their complementary sensing capabilities and low cost. However, vision has inherent challenges even in well-lit scenes, including motion blur, a lack of features, and accidental camera blockage. These failures can cause drift to accumulate over time and can severely limit the scalability of existing solutions to large areas. To address these issues, we propose an automatic map generation service consisting of (i) a failure detection method based on visual feature tracking quality, using a health tracker that identifies and discards faulty measurements, and (ii) a continuous session merging approach in SLAM. Together, these allow us to handle erroneous data without any manual intervention and to scale to extremely large spaces. The proposed system has been validated on benchmark datasets. We also present experimental results on multiple custom large-scale grocery store datasets, each between 1700 m² and 3700 m² in area and 60 to 80 minutes in duration. Our approach shows the lowest error in all large-scale SLAM cases when compared with state-of-the-art visual-inertial SLAM packages, which often produce highly erroneous trajectories or lose track. Additionally, when a depth camera is available, we provide dense 3D reconstruction by simply registering the point cloud from each RGB-D image against the SLAM-generated trajectory; the quality of the reconstruction illustrates the efficacy of our proposed method.
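The dense-reconstruction step described above can be illustrated with a minimal sketch (not the authors' implementation): each RGB-D frame is back-projected into a camera-frame point cloud using assumed pinhole intrinsics (fx, fy, cx, cy), then transformed into the world frame with the corresponding 4x4 SLAM pose T_wc before accumulation into a global map.

```python
# Minimal sketch of registering RGB-D point clouds against a SLAM trajectory.
# The intrinsics (fx, fy, cx, cy) and the world-from-camera pose T_wc are
# assumed inputs; this is illustrative, not the paper's actual pipeline.
import numpy as np

def backproject_depth(depth_m: np.ndarray, fx: float, fy: float,
                      cx: float, cy: float) -> np.ndarray:
    """Convert an HxW metric depth image into an Nx3 point cloud in the camera frame."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m.reshape(-1)
    valid = z > 0                        # drop pixels with no depth reading
    u, v, z = u.reshape(-1)[valid], v.reshape(-1)[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)   # (N, 3), camera frame

def register_to_trajectory(points_cam: np.ndarray, T_wc: np.ndarray) -> np.ndarray:
    """Transform camera-frame points into the world frame using a 4x4 SLAM pose."""
    R, t = T_wc[:3, :3], T_wc[:3, 3]
    return points_cam @ R.T + t          # (N, 3), world frame

# Accumulating one frame into a global map (depth image and pose assumed given):
# global_map.append(register_to_trajectory(
#     backproject_depth(depth, fx, fy, cx, cy), T_wc))
```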
Research areas