
What is SLAM? (Simultaneous Localisation And Mapping)

By Charles Thomson
October 21, 2020
Find out what the SLAM algorithmic approach is, explore new possibilities for its application, and get insights on the types of advancements to watch out for.

SLAM is an evolving and exciting technology. Here, we are going to introduce you to the topic, explain new possibilities for its application, and provide insights on the types of advancements to watch out for.

SLAM defined

The term SLAM (Simultaneous Localisation And Mapping) was coined by Hugh Durrant-Whyte and John Leonard in the early 1990s. They originally termed it SMAL, but it was later changed to give it more impact.

SLAM is an algorithmic attempt to address the problem of building a map of an unknown environment while at the same time navigating the environment using the map. Although this looks like a “chicken and egg” problem, there are a number of known algorithms able to solve it — at least approximately. 

Ultimately, SLAM has been critical to the development of mobile robotics, which requires robots to perform tasks and navigate complex environments — indoor and outdoor — without any human input. In order to do this efficiently and safely, a robot needs to be able to localise itself in its environment based on the maps it constructs.

Mapping the spatial information of the environment is done on-the-fly with no prior knowledge of the robot’s location. The built map is subsequently used by the robot for navigation.

SLAM can be implemented in many ways. It’s more like a concept than a single algorithm. There are many steps involved in SLAM, and these different steps can be implemented using several different algorithms. SLAM is ultimately dependent on visual data, sensor data, point clouds and rapid processing — all of which have to work seamlessly together.
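To make this concrete, here is a minimal sketch in Python of the loop most SLAM systems share: predict motion from odometry, observe the environment, associate observations with the map, and update both the pose and the map. The function names, the dead-reckoned pose update and the simple landmark-averaging map are illustrative assumptions, not a production algorithm.

```python
import numpy as np

# A minimal, illustrative SLAM-style loop (not a production algorithm):
# the robot dead-reckons its pose from odometry, then refines landmark
# positions by averaging repeated range-bearing observations.

def predict_pose(pose, control):
    """Integrate a (dx, dy, dtheta) odometry step into the pose estimate."""
    x, y, theta = pose
    dx, dy, dtheta = control
    return (x + dx * np.cos(theta) - dy * np.sin(theta),
            y + dx * np.sin(theta) + dy * np.cos(theta),
            theta + dtheta)

def observe_to_world(pose, rng, bearing):
    """Convert a range-bearing observation into world coordinates."""
    x, y, theta = pose
    return np.array([x + rng * np.cos(theta + bearing),
                     y + rng * np.sin(theta + bearing)])

# Map = dictionary of landmark id -> running mean of observed positions.
landmark_map, counts = {}, {}
pose = (0.0, 0.0, 0.0)

# Hypothetical data stream: (odometry step, list of (landmark id, range, bearing)).
stream = [((1.0, 0.0, 0.0), [(7, 2.0, 0.5)]),
          ((1.0, 0.0, 0.1), [(7, 1.4, 0.9), (9, 3.0, -0.3)])]

for control, observations in stream:
    pose = predict_pose(pose, control)            # 1. predict motion
    for lid, rng, bearing in observations:        # 2. observe the environment
        world = observe_to_world(pose, rng, bearing)
        if lid in landmark_map:                   # 3. data association (by id here)
            counts[lid] += 1                      # 4. update the map estimate
            landmark_map[lid] += (world - landmark_map[lid]) / counts[lid]
        else:
            landmark_map[lid], counts[lid] = world, 1

print("final pose estimate:", pose)
print("landmark estimates:", landmark_map)
```

Real systems replace each of these steps with far more sophisticated machinery, such as Kalman filtering, particle filtering or pose-graph optimisation, but the overall structure is the same.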

How is SLAM being used?

Applications of SLAM include robots, UAVs, autonomous vehicles and augmented reality. Typical examples include:

  • Automated car piloting on off-road terrain
  • Search and rescue in high-risk or difficult-to-navigate environments
  • Augmented reality (AR) applications where virtual objects are involved in real-world scenes
  • Visual surveillance systems
  • Medicine for minimally invasive surgery (MIS)
  • Construction site build monitoring and maintenance

Different sensors and tools are used for different purposes within SLAM. Let's take autonomous vehicles and augmented reality as examples. In autonomous vehicles, SLAM typically draws on ranging sensors such as LiDAR (Light Detection and Ranging) and RADAR (Radio Detection and Ranging), while augmented reality relies primarily on camera-based visual SLAM.

These applications, which even include robotic vacuum cleaners like the Roomba, also perform what’s known as “sensor fusion,” meaning they combine data from various sensors with visual imagery.
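As a rough illustration of sensor fusion, the sketch below blends a hypothetical high-rate but drifting gyroscope heading with a slower, absolute visual heading using a complementary filter, one of the simplest fusion schemes. The sensor values and the alpha blending weight are invented for illustration.

```python
# Minimal complementary-filter sketch of sensor fusion: blend a drifting,
# high-rate gyro heading with an absolute but noisier visual heading.
# The sample values below are made up for illustration.

def fuse_heading(prev_heading, gyro_rate, visual_heading, dt, alpha=0.98):
    """Trust the integrated gyro short-term, the visual estimate long-term."""
    gyro_heading = prev_heading + gyro_rate * dt   # integrate angular rate
    return alpha * gyro_heading + (1.0 - alpha) * visual_heading

heading = 0.0
samples = [  # (gyro rate in rad/s, visual heading in rad), sampled at 10 Hz
    (0.10, 0.012), (0.11, 0.025), (0.09, 0.031), (0.10, 0.044),
]
for gyro_rate, visual_heading in samples:
    heading = fuse_heading(heading, gyro_rate, visual_heading, dt=0.1)
    print(f"fused heading: {heading:.4f} rad")
```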

Building 3D reconstructions of objects is also becoming practical thanks to advances in visual SLAM. It can be used alongside terrestrial laser scanners and drones on a wide range of projects, but only with the right software.


Advanced applications

SLAM isn't just for mapping without GPS. It will enable the transition from Automated Guided Vehicles (AGVs) to Autonomous Mobile Robots (AMRs) in the industrial space. We are already seeing it deployed within the built environment, and in hybrid deployments alongside other types of scanning tools. These applications will only grow.

SLAM can also support activities on a construction site, such as monitoring construction progress, inspecting structures, and assisting potential post-disaster rescue.

Image-based 3D reconstruction has traditionally required extended time to acquire and process the image data, which limits its application on time-critical projects.

Visual SLAM makes it possible to reconstruct a 3D map of a construction site in real time. When integrated with an Unmanned Aerial Vehicle (UAV), it can also scan areas that are inaccessible to ground equipment. However, despite the advantages of visual SLAM and UAVs, techniques like these have historically been limited by the challenge of processing the data.

Challenges with SLAM

SLAM algorithms are designed around the available resources: the goal is operational effectiveness, not perfection. Many SLAM algorithms rely on approximations to speed up processing and still produce a functional outcome. Particularly in construction environments, a lack of precision limits the application of this type of scanning. The amount of data generated can also become overwhelming.

The challenges of SLAM can really be summarised in three categories: 

  1. Sensors
  2. Algorithms
  3. Data

Developments have helped resolve each of these challenges. Doing so effectively is critical to the future of SLAM. 

No one sensor does the job

Most outdoor SLAM applications use the Global Positioning System (GPS) as a shortcut to acquiring location information. Of course, poor satellite signal coverage in indoor environments limits its accuracy.

3D LiDAR scanners are used as well. Unfortunately, in large-scale environments, there will be areas lacking features discernible by a laser, like open hallways or glass-walled corridors. Processing time has also historically limited the speed at which 3D LiDAR point clouds can be registered. Check out our free eBook — How Point Cloud Processing Has Changed — if you want to learn more about these developments.
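For a sense of what registering point clouds involves computationally, here is a compact sketch of point-to-point ICP (Iterative Closest Point), the classic alignment approach, written with NumPy and SciPy. Real registration pipelines use far more robust and heavily optimised variants; the synthetic clouds here are only for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

# Compact point-to-point ICP sketch: repeatedly match nearest neighbours
# between two point clouds and solve for the best rigid transform (SVD).

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(src, dst, iterations=20):
    tree = cKDTree(dst)
    current = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)            # nearest-neighbour matching
        R, t = best_rigid_transform(current, dst[idx])
        current = current @ R.T + t             # apply the estimated transform
    return current

# Demo: dst is src rotated by 10 degrees and shifted; ICP should shrink the error.
rng = np.random.default_rng(0)
src = rng.random((200, 2))
angle = np.radians(10)
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
dst = src @ R_true.T + np.array([0.3, -0.1])
aligned = icp(src, dst)
print("mean alignment error:", np.linalg.norm(aligned - dst, axis=1).mean())
```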

WiFi localisation can also play a part by correlating the different signal strengths across the target area.
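A minimal sketch of that idea, often called WiFi fingerprinting: compare a live vector of signal strengths against a pre-recorded survey and return the closest match. The access points, locations and RSSI values below are invented for illustration.

```python
import numpy as np

# Minimal Wi-Fi fingerprinting sketch: localise by finding the surveyed
# location whose recorded signal strengths best match the live reading.

survey = {                       # location -> [RSSI from AP1, AP2, AP3] in dBm
    "lobby":     [-40, -70, -80],
    "corridor":  [-55, -60, -75],
    "lab":       [-75, -50, -45],
}

def locate(live_rssi):
    """Return the surveyed location closest to the live RSSI vector."""
    live = np.array(live_rssi, dtype=float)
    return min(survey,
               key=lambda loc: np.linalg.norm(live - np.array(survey[loc], dtype=float)))

print(locate([-72, -52, -48]))   # -> "lab"
```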

The map is not the territory

An abstraction derived from something is not the thing itself, so the map representation will never be a true copy of the real world. How closely it needs to represent the real world changes from application to application.

Self-driving cars bypass the mapping problem by making use of detailed map data collected in advance. Essentially, such systems reduce SLAM to a simpler “just tell me where I am” localisation task. However, this works best on road networks. In unknown environments, particularly complex ones like construction sites, greater accuracy is essential.


How clever are your algorithms?

SLAM relies on algorithms to work smoothly and accurately. The vast amount of data generated by the various sensors requires efficient, low-latency processing.

Too much data

A large part of the challenge is the move towards handheld scanners. Inevitably, the ability to quickly take hand-held scans skyrockets the number of points to process, putting a dramatic strain on hardware and software.

In many SLAM solutions, small amounts of noise and measurement irregularity build up even when the software captures and processes highly accurate individual measurements. Over time, the estimated motion starts to diverge from the true motion, which is known as “drift error”. This is often observed as a slight bend in long corridors that are actually straight.
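A toy simulation makes the point, assuming a robot that walks a perfectly straight corridor while integrating a heading estimate that carries a tiny random error at each step: the individual errors are small, but they accumulate, so the estimated path bends.

```python
import numpy as np

# Toy illustration of drift: the robot walks a perfectly straight 100 m
# corridor, but each 1 m step integrates a heading estimate carrying a
# tiny random error. The small errors accumulate, so the estimated path
# slowly bends away from the true (straight) corridor.

rng = np.random.default_rng(42)
steps = 100
heading_noise_std = 0.002            # radians of heading error added per step

estimated_heading = 0.0              # the true heading stays at 0 the whole time
position = np.zeros(2)
path = [position.copy()]

for _ in range(steps):
    estimated_heading += rng.normal(0.0, heading_noise_std)  # error accumulates
    step = np.array([np.cos(estimated_heading), np.sin(estimated_heading)])
    position = position + step       # integrate a 1 m step forward
    path.append(position.copy())

path = np.array(path)
print(f"lateral drift at end of corridor: {path[-1, 1]:.2f} m")
```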

The difficulty with SLAM is that it takes a lot of processing power. This is too much for the kind of small, high-speed drone that could fly through complex environments like a forest, a warehouse or a cityscape: there simply is not enough on-board processing capacity to handle the complex calculations.

SLAM developments to watch

The popularity of SLAM will grow with the emergence of indoor mobile robotics. It will also offer an appealing alternative to user-built maps, giving robots the freedom to operate even without a pre-defined localisation infrastructure.

There is the potential to offload data processing to the cloud through real-time parallel computation. If coupled with more cloud-oriented algorithms, it could generate significant performance gains. We’ve already seen these developments with survey teams and standard point cloud processing — using advanced algorithms to front-load manual inputs and then rapidly align scans using scalable cloud computing infrastructure. 

The fact that modern point cloud registration algorithms can run on a distributed cloud unlocks access to potentially unbounded memory and CPU power. This will allow systems to scale and create maps of large, complex environments.
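As a small illustration of the pattern, the sketch below farms out pairwise scan alignments to parallel workers on a single machine; a cloud backend applies the same idea across many machines. The align_pair step is a trivial centroid-shift stand-in for a real registration algorithm such as ICP, and the scans are synthetic.

```python
from multiprocessing import Pool
import numpy as np

# Illustrative sketch of parallelising pairwise scan alignment, the same
# pattern a cloud backend can scale out across many machines. align_pair
# is a trivial centroid-shift stand-in for a real registration algorithm.

def align_pair(pair):
    """Return the translation that moves scan_b's centroid onto scan_a's."""
    scan_a, scan_b = pair
    return scan_a.mean(axis=0) - scan_b.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scans = [rng.random((10_000, 3)) + i for i in range(8)]  # synthetic scans
    pairs = list(zip(scans[:-1], scans[1:]))                 # consecutive pairs

    with Pool() as pool:                 # one worker per CPU core
        offsets = pool.map(align_pair, pairs)

    for i, offset in enumerate(offsets):
        print(f"scan {i + 1} -> scan {i} offset estimate: {offset.round(2)}")
```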

The coming decade will see dramatic changes in the industrial landscape — and that includes the built environment. Technologies such as AI, BIM, IoT and SLAM, together with network technologies such as 5G, will interact with and improve each other in complex ways, and not all levels of the robotics value chain are ready for it. There are great opportunities in software development yet to be explored, and SLAM will be a big part of them.

The bottom line: anyone involved in tasks which require the mapping of physical space should keep a keen eye on SLAM. A mobile scanning revolution is headed your way. 


Tags: SLAM