Visual-Inertial SLAM is a cutting-edge technique that fuses visual data from cameras with inertial measurements from an IMU to build a 3D representation of the environment while simultaneously estimating the sensor's pose (position and orientation) within it. Visual features are extracted from the camera images, while the IMU provides high-rate motion measurements: acceleration and angular velocity. By combining these two sensor modalities, Visual-Inertial SLAM overcomes the limitations of using either sensor alone, resulting in more robust and accurate localization and mapping.
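To make the fusion idea concrete, here is a deliberately simplified sketch: a linear Kalman filter with a 2-D position/velocity state, where high-rate IMU accelerations drive the prediction step and lower-rate "visual fixes" (standing in for landmark-based position estimates) drive the correction step. Real VI-SLAM estimates a full 6-DoF pose with nonlinear models; all names, rates, and noise values below are invented for illustration.

```python
import numpy as np

# Toy 2-D fusion example. State x = [px, py, vx, vy].
dt = 0.01                      # IMU sample period (100 Hz)
F = np.eye(4)                  # state transition (constant velocity)
F[0, 2] = F[1, 3] = dt
B = np.zeros((4, 2))           # control input: measured acceleration
B[2, 0] = B[3, 1] = dt
H = np.array([[1., 0., 0., 0.],  # camera "fix" observes position only
              [0., 1., 0., 0.]])
Q = np.eye(4) * 1e-4           # process noise (IMU integration drift)
R = np.eye(2) * 1e-2           # measurement noise (visual fix)

x = np.zeros(4)                # initial state
P = np.eye(4)                  # initial covariance

def imu_predict(x, P, accel):
    """Propagate the state with one IMU acceleration sample."""
    x = F @ x + B @ accel
    P = F @ P @ F.T + Q
    return x, P

def visual_update(x, P, z):
    """Correct the state with a position fix from the visual pipeline."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# 100 IMU samples at 100 Hz, one visual fix every 10th sample (10 Hz).
accel = np.array([0.5, 0.0])             # constant true acceleration
for k in range(100):
    t = (k + 1) * dt
    x, P = imu_predict(x, P, accel)
    if k % 10 == 9:
        p_true = np.array([0.25 * t**2, 0.0])   # ground truth: 0.5*a*t^2
        x, P = visual_update(x, P, p_true + np.random.randn(2) * 0.1)

print("estimated position:", x[:2])
```

The key pattern carries over to full VI-SLAM: the IMU keeps the estimate smooth and available between frames, while the visual measurements bound the drift that pure inertial integration would otherwise accumulate.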
Our Visual-Inertial SLAM algorithm (key elements shown below) is optimized for real-time performance on low-cost hardware. Localization is highly accurate: the system builds visual landmark maps on the fly, and previously built maps can be pre-loaded, delivering centimeter-level accuracy even in challenging visual environments. Our navigation solution for AGVs and AMRs can additionally fuse Lidar and wheel odometry to maximize the precision and efficiency of the vehicles' operation, as sketched after this paragraph.
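One common way to combine estimates from several sources is information-weighted averaging, where each source contributes in proportion to the inverse of its covariance. The sketch below is a hypothetical 2-D illustration of that principle, not our production fusion pipeline; the source names, covariances, and values are invented for the example.

```python
import numpy as np

# Hypothetical fusion of 2-D position estimates from three sources,
# each reported with its own covariance. The fused estimate is the
# covariance-weighted (information-weighted) mean.
sources = {
    "vi_slam":   (np.array([1.02, 2.01]), np.diag([0.01**2, 0.01**2])),
    "lidar":     (np.array([1.05, 1.98]), np.diag([0.03**2, 0.03**2])),
    "wheel_odo": (np.array([0.95, 2.10]), np.diag([0.10**2, 0.10**2])),
}

info_sum = np.zeros((2, 2))
weighted = np.zeros(2)
for name, (mean, cov) in sources.items():
    info = np.linalg.inv(cov)      # information matrix = inverse covariance
    info_sum += info
    weighted += info @ mean

fused_cov = np.linalg.inv(info_sum)
fused_mean = fused_cov @ weighted
print("fused position:", fused_mean)
```

Under this weighting, the centimeter-accurate VI-SLAM estimate dominates the result, while the noisier wheel odometry contributes little; in practice the weights adapt as each sensor's conditions change (e.g., poor lighting degrades the visual estimate, wheel slip degrades odometry).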