Types of SLAM and Their Application Examples
Updated: Oct 24
Previously, we introduced SLAM (Simultaneous Localization And Mapping), a technology for mapping an unfamiliar space while identifying one's own location within it. SLAM can be built on a variety of theories, tools, and sensors.
Let's look at the characteristics of each, along with application examples of Visual SLAM, MAXST's main field.
Diversification of SLAM
SLAM technology is based on several theoretical foundations depending on its purpose; typical ones come from autonomous mobile robotics and computer vision. Different sensors and tools are used for different applications of SLAM. Let's take autonomous vehicles and augmented reality as examples.
First, SLAM is applied to autonomous vehicles with two ranging sensors: LiDAR (Light Imaging Detection and Ranging) and RADAR (Radio Detection and Ranging).
▲ LiDAR (Light Imaging Detection and Ranging, photo by nasa.gov)
▲ RADAR (Radio Detection and Ranging, photo by aegean-electronics.gr)
Since LiDAR uses light waves and RADAR uses radio waves, each has different advantages and disadvantages.
LiDAR's short wavelength lets it detect small objects and measure distances precisely, but it performs poorly on cloudy or rainy days.
RADAR, on the other hand, has a longer range than LiDAR and works in cloudy weather or at night. However, it has difficulty detecting small objects and determining exact distances.
Therefore, depending on the purpose, the two can be used together, or just one can be selected as needed.
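As a toy illustration of using both sensors together, two noisy range readings can be fused by inverse-variance weighting, so that whichever sensor is more reliable in the current conditions dominates the estimate. This is only a minimal sketch; the noise figures below are assumptions, not values from any real LiDAR or RADAR unit.

```python
def fuse_ranges(lidar_range, lidar_var, radar_range, radar_var):
    """Fuse two noisy range estimates (meters) by inverse-variance weighting."""
    w_lidar = 1.0 / lidar_var
    w_radar = 1.0 / radar_var
    fused = (w_lidar * lidar_range + w_radar * radar_range) / (w_lidar + w_radar)
    fused_var = 1.0 / (w_lidar + w_radar)  # fused estimate is more certain than either input
    return fused, fused_var

# Clear weather: LiDAR is precise (small variance), so the fused range stays near it.
print(fuse_ranges(50.2, lidar_var=0.01, radar_range=51.0, radar_var=1.0))
# Rain: inflate the LiDAR variance, and the fused range leans on RADAR instead.
print(fuse_ranges(50.2, lidar_var=4.0, radar_range=51.0, radar_var=1.0))
```

The same idea, generalized to full state vectors, is what a Kalman-filter-style sensor fusion stage does in practice.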
Next, let’s look at SLAM applied to augmented reality.
Because augmented reality mostly runs on mobile devices, SLAM works with the cameras those devices already have. To minimize power consumption and CPU usage, monocular cameras are used. Recently, the use of IMU sensors or depth cameras has increased to improve accuracy.
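To see why an IMU helps, consider a 1-D toy version of combining the two signals: gyro integration is fast but drifts, while visual estimates are stable but arrive slowly. A complementary filter blends them. This is a sketch of the general technique, not MAXST's actual fusion pipeline, and all numbers here are made up.

```python
def complementary_filter(angle_prev, gyro_rate, vision_angle, dt, alpha=0.98):
    """Blend a fast-but-drifting gyro integration with a slow-but-stable
    visual orientation estimate (1-D toy version, angles in degrees)."""
    gyro_angle = angle_prev + gyro_rate * dt  # dead-reckon from the angular rate
    # High-pass the gyro, low-pass the vision estimate.
    return alpha * gyro_angle + (1 - alpha) * vision_angle

# A stationary gyro (rate 0) slowly converges to the visual estimate of 10 degrees.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.0, vision_angle=10.0, dt=0.01)
print(round(angle, 2))
```

The `alpha` weight controls how much the filter trusts the gyro between visual updates; real systems use more elaborate estimators (e.g. extended Kalman filters) but the division of labor is the same.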
▲ Combination of Visual SLAM and Sensor
MAXST’s Main Field, Visual SLAM (Visual Tracking), Using a Single Camera
So, what technology is MAXST focusing on?
It is SLAM using a single camera, based on computer vision. This technology is called Visual SLAM, meaning SLAM that works from camera images.
In early Visual SLAM, the camera's position was tracked by matching feature points between images, and then a 3D map was generated. However, this method had the drawback of slow processing, because feature matching and map updating had to be performed for every image frame.
As a result, Visual SLAM now uses keyframes, improving performance by running the tracking and mapping processes in parallel.
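The keyframe idea can be sketched as follows: cheap tracking runs on every frame, but the expensive mapping step (triangulation, bundle adjustment) runs only when the camera has moved far enough from the last keyframe. The 1-D poses and distance threshold below are assumptions chosen purely for illustration.

```python
class KeyframeSlam:
    """Toy keyframe selection: track every frame, but update the map only
    when the camera has moved far enough from the last keyframe."""

    def __init__(self, keyframe_distance=1.0):
        self.keyframe_distance = keyframe_distance
        self.last_keyframe_pose = None
        self.keyframes = []

    def process_frame(self, pose):
        # Tracking would run here for every frame (cheap).
        moved = (self.last_keyframe_pose is None or
                 abs(pose - self.last_keyframe_pose) >= self.keyframe_distance)
        if moved:
            # Mapping runs only on keyframes (expensive in a real system);
            # here we just record the pose.
            self.keyframes.append(pose)
            self.last_keyframe_pose = pose

slam = KeyframeSlam(keyframe_distance=1.0)
for pose in [0.0, 0.2, 0.5, 1.1, 1.3, 2.4]:
    slam.process_frame(pose)
print(slam.keyframes)  # → [0.0, 1.1, 2.4]
```

Because mapping only touches this reduced set of keyframes, it can run in a separate thread without stalling per-frame tracking, which is the core of the parallel tracking-and-mapping design.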
▲ Visual SLAM in Drones
▲ Autonomous Vehicles
Visual SLAM is used in various robots, including planetary rovers and unmanned aerial vehicles. Autonomous vehicles also use Visual SLAM to map and understand their surroundings. Visual SLAM can also substitute for GPS, which cannot provide accurate locations indoors or in dense metropolitan areas.
Check out the video below👇
In it, we introduce MAXST Visual SLAM with practical examples.
If you want to learn more, please visit the MAXST YouTube channel!
#MAXST_AR #MAXST #AR #AugmentedReality #AR_technology #AR_platform #AR_SDK #AR_company #immersive_experience #realistic_content #AR_content #SLAM #localization #mapping #sensor_data #IMU #gps #feature_tracking #map_estimation #LiDAR #RADAR #3d_map #autonomous_vehicle #indoors #metropolitan #city_scale #simultaneous_localization_and_mapping #robot