
Feature Extraction from Point Cloud Data

By Charlie Cropp MRICS
March 11, 2021

As reality capture technology advances, the number of viable applications increases. Surveyors are spending more time working in construction, manufacturing, mining and other industries — helping them plan more effectively using 3D models and cross-check those plans against real-world outputs.

A big trend impacting the utility of reality capture data is the application of AI/ML algorithms to extract objects and features from point cloud data sets — a process known as LiDAR data classification. This automates the contextualisation of data, making it far easier and more efficient to access the information required — opening new possibilities for cost-effective applications of LiDAR, SLAM and other reality capture techniques and technologies.

Here, we are going to look at the state of point cloud feature extraction in 2021, and what you can do to prepare for the opportunities this evolving technology brings to surveying in years to come. Let’s get started. 

Suggested reading: If you want to learn more about general improvements to point cloud registration, check out our ebook — Point Cloud Processing Has Changed.


What is point cloud feature extraction?

Simply put, point cloud feature extraction is the identification of features or objects within point clouds — e.g. a crack on a wall or the edges of a wall, a dip in the ground or a slope across a surface. Basically, it’s anything that is a “feature” within the scene. 

To understand point cloud feature extraction, it's easier to start with point cloud data classification. Data classification is object detection applied to point clouds: for example, assigning the object classification of a "chair" or a "road sign" to a cluster of points within the dataset. Feature extraction goes one step further and identifies the characteristics of those objects, adding another layer of detail to the classification process.
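The distinction above can be sketched in code. This is an illustrative toy only: the cluster structure, the height-based labelling rule and the deviation measure are assumptions for demonstration, not any real classifier — production systems use trained ML models, not hand-written rules.

```python
def classify(clusters):
    """Data classification: assign an object label to each point cluster.
    A stand-in rule labels clusters by average height; real systems use ML."""
    labels = {}
    for cluster_id, pts in clusters.items():
        avg_z = sum(p[2] for p in pts) / len(pts)
        labels[cluster_id] = "wall" if avg_z > 1.0 else "floor"
    return labels

def extract_features(clusters, labels):
    """Feature extraction: characterise each classified object, e.g. how far
    its points deviate from a flat surface (a crude roughness/crack cue)."""
    features = {}
    for cluster_id, pts in clusters.items():
        mean_z = sum(p[2] for p in pts) / len(pts)
        deviation = max(abs(p[2] - mean_z) for p in pts)
        features[cluster_id] = {"label": labels[cluster_id],
                                "max_deviation": deviation}
    return features

# Two toy clusters of (x, y, z) points.
clusters = {
    "c1": [(0, 0, 2.0), (1, 0, 2.1), (0, 1, 1.9)],   # wall-like cluster
    "c2": [(0, 0, 0.0), (1, 0, 0.05), (0, 1, 0.0)],  # floor-like cluster
}
labels = classify(clusters)
features = extract_features(clusters, labels)
```

Classification stops at the label ("wall"); feature extraction adds the characteristics of that wall, such as how far it deviates from flatness.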

Traditionally, feature extraction has relied on manual processes: for example, adding annotations to a registered point cloud by manually cross-referencing it with a photographic image of the scene. Automating this time-consuming process makes it far easier to apply the capability cost-effectively within a wide range of applications.

 

Why is feature extraction important in 2021?

Point clouds contain huge amounts of spatial information, but unless that information is processed and labelled, it can't be effectively used or even understood. Applying object recognition to point clouds (data classification) is a powerful first step towards harnessing point cloud data for more effective planning. Feature extraction goes one step further, focusing on the often-overlooked surface details of objects, and has long been invaluable for applications such as reverse engineering, object recognition and autonomous navigation.

Fundamentally, feature extraction, along with data classification, allows surveyors to interpret physical space rather than simply mapping it. This is the key to unlocking the true potential that reality capture technologies have to provide within a wide range of industries and applications. 


 

How effective is automated feature extraction today?

Automated feature extraction is still in its infancy. But, as with data classification, it is rapidly improving alongside developments in artificial intelligence (AI) and machine learning (ML).

Proposed improvements to feature extraction use segmentation algorithms that convert point clouds into structured depth images. Edge points and linear features are then automatically extracted from this 2D imagery, and the corresponding pixels are projected back onto the 3D map. Fundamentally, most feature extraction algorithms rely on classifying point cloud data, then cross-referencing that data with additional reality capture information to add feature annotations to the identified objects.
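The project-and-detect pipeline described above can be sketched minimally. This is a simplified illustration under assumed parameters (a unit grid, a hand-picked depth-jump threshold): real pipelines use calibrated camera projections and proper 2D edge detectors rather than this one-axis gradient check.

```python
def to_depth_image(points, width, height, cell=1.0):
    """Project (x, y, z) points into a structured depth image: each pixel
    keeps the smallest z (nearest depth) of the points that land in it."""
    img = [[None] * width for _ in range(height)]
    for x, y, z in points:
        col, row = int(x / cell), int(y / cell)
        if 0 <= row < height and 0 <= col < width:
            if img[row][col] is None or z < img[row][col]:
                img[row][col] = z
    return img

def edge_pixels(img, jump=0.5):
    """Mark pixels where depth changes sharply between horizontal
    neighbours: a crude stand-in for 2D edge extraction. The marked
    pixels can then be mapped back to the 3D points that produced them."""
    edges = set()
    for r, row in enumerate(img):
        for c in range(len(row) - 1):
            a, b = row[c], row[c + 1]
            if a is not None and b is not None and abs(a - b) > jump:
                edges.add((r, c))
    return edges

# Toy scene: two flat surfaces with a depth discontinuity between x=1 and x=2.
points = [(x + 0.5, y + 0.5, 1.0 if x < 2 else 3.0)
          for x in range(4) for y in range(4)]
img = to_depth_image(points, width=4, height=4)
edges = edge_pixels(img)
```

The depth discontinuity appears as a vertical line of edge pixels, which is exactly the kind of linear feature the 2D stage hands back to the 3D map.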

The path to truly automated feature extraction relies on a "trust-but-verify" approach, in which manual checks feed back into training the extraction algorithms so that results steadily improve. From this perspective, you can divide feature extraction processes into three broad categories:

    1. Fully manual: Manual processes are still the industry standard for labelling and identifying features. They also define the baseline: understanding where feature extraction stands today makes it possible, as automation matures, to keep the strengths of current processes while removing their pain points.
    2. Cross-checked: Most current feature extraction software involves manually cross-checking outputs for accuracy. Early-stage software might cross-check every single feature in order to train its algorithms. As the technology matures, users can instead be prompted to manually check only particular identifications, based on a confidence indicator — e.g. where certainty drops below 70% or 80%. Ideally, look for software that lets you set this confidence threshold yourself, so you can apply automation in line with the level of accuracy your project requires.
    3. Fully automated: As cross-checked systems improve, we move closer to fully automated outcomes. However, quality feature extraction software will retain the ability to alert users based on confidence indicators to ensure quality control.
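The confidence-based cross-checking in stages 2 and 3 amounts to routing each detection by a user-set threshold. A minimal sketch, assuming a hypothetical list of detections with a `confidence` field (no specific product's API is implied):

```python
def flag_for_review(detections, threshold=0.8):
    """Split automated detections into accepted results and those flagged
    for manual cross-checking, based on a user-set confidence threshold."""
    accepted, review = [], []
    for det in detections:
        if det["confidence"] >= threshold:
            accepted.append(det)
        else:
            review.append(det)
    return accepted, review

# Hypothetical detections from an extraction pass.
detections = [
    {"id": 1, "label": "crack", "confidence": 0.95},
    {"id": 2, "label": "edge",  "confidence": 0.72},
]
accepted, review = flag_for_review(detections, threshold=0.8)
```

Lowering the threshold automates more of the workflow; raising it routes more detections to a human, matching the accuracy the project demands.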

Fundamentally, software that keeps users in the loop will ensure quality outcomes while still accelerating workflows as we work towards fully automated processes. This ensures that minimal manual checks are required for the reliable mapping of features. Once mature, well-implemented and monitored software should even flag poor-quality data automatically, cross-referencing datasets to check feature accuracy at all times.

 

Evolving possibilities and applications

Feature extraction makes it possible to use point cloud data in far more effective ways. In the construction industry in particular, reality capture tools are being deployed at a far greater rate to inform planning and cross-check outcomes. Feature extraction, along with data classification, dramatically improves the value of these techniques.

BIM (Building Information Modelling) is now deployed in 70% of construction projects across the UK. Scan-to-BIM is the process of using reality capture inputs to inform BIM models and cross-check outputs against planning. Ultimately, this gives a clear view of changes made throughout a construction project, allowing for evaluation, accuracy and streamlined delivery. This enables teams to:

  1. Prefabricate materials: Off-site manufacturing can be done based on digital plans, and then outputs scanned to cross-check against planning before shipping materials to location. Feature extraction provides far greater quality control and identification of defects in prefabricated materials.   
  2. Cross-check site development with planning: Entire construction sites can be analysed using this same cross-checking process, and feature extraction allows for small details to be identified and flagged if necessary. 
  3. More effective deployment of robotics: Better understanding of construction sites enables more effective deployment of on-site robotics to automate construction itself. Autonomous robots can also be used to capture data for feature extraction, like we see in Doxel’s AI-led scanning system.
  4. Renovation assessments: For renovation projects, feature extraction applied to reality capture data enables the quick capture of huge amounts of information that can go into planning, and effectively improve visibility over existing assets.  
  5. In-life updates: The value of data capture extends far beyond construction to encompass the entire lifecycle of a building, and feature extraction allows for more details to be captured and information processed for assessment and review.    

The value of automated feature extraction in this environment is plain to see: as discussed, it provides the level of detail needed to interpret spaces rather than merely map them, even where multiple 3D datasets are concerned.

 

Make future-proofed software investment decisions

Automation is on the horizon, and it stands to change feature extraction and general point cloud data handling forever. This is incredibly exciting, and stands to transform surveying processes that currently lag behind the speeds clients expect.

To make sure that you're ready for this change, you need software at the cutting edge of these developments. Here at Vercator, we offer precisely that: cloud-based software that is already leading the way in point cloud processing. With updates built in, cloud integrations like this keep you at the forefront, enabling you to adopt automated feature extraction the moment it becomes a market norm. We believe that 2021 will mark a shift in automated feature extraction. There are exciting developments ahead, so stay tuned for more details.

 
