As reality capture technology advances, the number of viable applications increases. Surveyors are spending more time working in construction, manufacturing, mining and beyond, helping these industries plan more effectively using 3D models and cross-check plans against real-world outputs.
A big trend impacting the utility of reality capture data is the application of AI/ML algorithms to extract objects and features from point cloud datasets, a process known as LiDAR data classification. This automates the contextualisation of data, making it far easier and more efficient to access the information required, and opening new possibilities for cost-effective applications of LiDAR, SLAM and other reality capture techniques and technologies.
Here, we are going to look at the state of point cloud feature extraction in 2021, and what you can do to prepare for the opportunities this evolving technology brings to surveying in years to come. Let’s get started.
Suggested reading: If you want to learn more about general improvements to point cloud registration, check out our ebook — Point Cloud Processing Has Changed.
Simply put, point cloud feature extraction is the identification of features or objects within point clouds — e.g. a crack on a wall or the edges of a wall, a dip in the ground or a slope across a surface. Basically, it’s anything that is a “feature” within the scene.
To understand point cloud feature extraction, it's easier to start with point cloud data classification, which is essentially object detection applied to point clouds: for example, assigning the classification "chair" or "road sign" to a cluster of points within the dataset. Feature extraction goes one step further and identifies characteristics of those objects, adding another layer of detail to that classification.
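To make the distinction concrete, here is a minimal, hypothetical sketch in Python: classification attaches a label to a cluster of points, and feature extraction then derives surface characteristics of that labelled object. The sample data, function name and plane-fitting approach are illustrative assumptions, not a description of any particular product.

```python
import numpy as np

# Classification: a label attached to a cluster of points (hypothetical example).
points = np.random.rand(500, 3)                      # x, y, z coordinates of one cluster
classification = {"label": "wall", "point_indices": np.arange(500)}

def extract_planar_features(pts):
    """Feature extraction: fit a plane to the labelled cluster and report
    simple surface characteristics (an illustrative sketch, not a product API)."""
    centroid = pts.mean(axis=0)
    # The direction of least variance approximates the plane normal.
    _, _, vh = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vh[-1]
    roughness = float(np.abs((pts - centroid) @ normal).mean())  # mean distance from the fitted plane
    return {"normal": normal.tolist(), "mean_deviation_m": roughness}

features = extract_planar_features(points[classification["point_indices"]])
print(classification["label"], features)
```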
Traditionally, feature extraction has relied on manual processes that cross-reference point clouds and images the old-fashioned way: for example, adding annotations to a registered point cloud by hand after comparing it with a photographic image of the scene. Finding ways to automate this time-consuming process makes it far easier to apply the capability cost-effectively within a wide range of applications.
Point clouds contain huge amounts of spatial information, but unless that information is processed and labelled, it can't be effectively used or even understood. Applying object recognition to point clouds (data classification) is a powerful first step towards harnessing point cloud data for more effective planning. Feature extraction goes one step further, focusing on the often-overlooked surface details of objects, and has long been invaluable for applications such as reverse engineering, object recognition and autonomous navigation.
Fundamentally, feature extraction, along with data classification, allows surveyors to interpret physical space rather than simply mapping it. This is the key to unlocking the true potential of reality capture technologies across a wide range of industries and applications.
Automated feature extraction is still in its infancy. But, as with data classification, it is improving rapidly alongside developments in artificial intelligence (AI) and machine learning (ML).
Proposed methods for improving feature extraction use segmentation algorithms that convert point clouds into structured depth images. Edge points and linear features are then automatically extracted from this 2D imagery, and the corresponding pixels are projected back onto the 3D data. Fundamentally, most feature extraction algorithms rely on classifying point cloud data and then cross-referencing it with additional reality capture information to add feature annotations to identified objects.
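As a rough illustration of that pipeline, the sketch below projects a point cloud into a structured depth image, runs a standard 2D edge detector and maps the edge pixels back to 3D points. The camera intrinsics, image size and the use of OpenCV's Canny detector are assumptions made for the example; they are not drawn from any specific published method.

```python
import numpy as np
import cv2  # OpenCV, used here only for 2D edge detection

def edges_from_depth_projection(points, fx=500.0, fy=500.0, cx=320.0, cy=240.0,
                                width=640, height=480):
    """Sketch of the pipeline described above: project points into a structured
    depth image, detect edges in 2D, then map edge pixels back to 3D points.
    The intrinsics (fx, fy, cx, cy) and image size are illustrative placeholders."""
    depth = np.zeros((height, width), dtype=np.float32)
    index = -np.ones((height, width), dtype=np.int64)   # which point filled each pixel

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    valid = z > 0
    u = np.round(fx * x[valid] / z[valid] + cx).astype(int)
    v = np.round(fy * y[valid] / z[valid] + cy).astype(int)
    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    ids = np.flatnonzero(valid)[inside]
    depth[v[inside], u[inside]] = z[ids]      # simple overwrite; no occlusion handling
    index[v[inside], u[inside]] = ids

    # Edge detection on the normalised depth image.
    norm = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(norm, 50, 150)

    # Project the edge pixels back onto the 3D point cloud.
    edge_point_ids = index[(edges > 0) & (index >= 0)]
    return points[edge_point_ids]
```

A production pipeline would add occlusion handling, nearest-point selection per pixel and more robust edge criteria, but the basic 3D-to-2D-to-3D loop is the same.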
The path to truly automated feature extraction relies on a "trust-but-verify" approach, in which extraction algorithms are continually trained on verified results so that their output steadily improves. From this perspective, you can divide feature extraction processes into three broad categories:
Fundamentally, software that keeps users in the loop will ensure quality outcomes while still accelerating workflows as we work towards the goal of fully automated processes, so that only minimal manual checks are required for the reliable mapping of features. Once established, well-implemented and monitored software should even automatically flag poorly captured data and cross-reference datasets to check feature accuracy at all times.
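A "trust-but-verify" gate of this kind can be as simple as routing low-confidence extractions to a human reviewer. The sketch below shows one hypothetical way to triage results; the confidence threshold and record format are illustrative assumptions, not a real API.

```python
# Minimal sketch of a human-in-the-loop gate: accept high-confidence extractions
# automatically, queue low-confidence ones for manual review. Threshold and record
# layout are assumptions for illustration only.
REVIEW_THRESHOLD = 0.85

def triage_extractions(extractions):
    accepted, needs_review = [], []
    for item in extractions:                      # e.g. {"label": "door", "confidence": 0.91}
        if item["confidence"] >= REVIEW_THRESHOLD:
            accepted.append(item)
        else:
            needs_review.append(item)             # flagged for a surveyor to verify
    return accepted, needs_review
```

Verified items can then be fed back as training data, which is where the gradual improvement in automated results comes from.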
Feature extraction makes it possible to use point cloud data in far more effective ways. In the construction industry specifically, we have seen reality capture tools deployed at a far greater rate to inform planning and cross-check outcomes. Feature extraction, along with data classification, dramatically improves what these techniques can deliver.
BIM (Building Information Modelling) is now deployed in 70% of construction projects across the UK. Scan-to-BIM is the process of using reality capture inputs to inform BIM models and to cross-check outputs against plans. Ultimately, this aims to give a clear view of changes made throughout a construction project, supporting evaluation, accuracy and streamlined delivery. This enables teams to:
The value of automated feature extraction within this environment is plain to see, especially considering that, as discussed, feature extraction provides the level of detail needed to interpret spaces rather than merely map them, even where multiple 3D datasets are concerned.
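As a simple illustration of that kind of scan-versus-design cross-check, the hypothetical sketch below measures how far scanned points classified as a wall deviate from the planned wall plane in a design model, and flags the result against a tolerance. The function, its inputs and the tolerance value are assumptions for illustration only.

```python
import numpy as np

def check_wall_against_design(scan_points, plane_point, plane_normal, tolerance_m=0.01):
    """Compare as-built 'wall' points against the planned wall plane from a
    design model (an illustrative sketch, not a Scan-to-BIM product workflow)."""
    normal = plane_normal / np.linalg.norm(plane_normal)
    deviations = np.abs((scan_points - plane_point) @ normal)   # distance of each point from the plane
    return {
        "max_deviation_m": float(deviations.max()),
        "mean_deviation_m": float(deviations.mean()),
        "within_tolerance": bool(deviations.max() <= tolerance_m),
    }
```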
Automation is on the horizon, and it stands to change feature extraction and point cloud data handling in general forever. This is incredibly exciting, promising to transform surveying processes that currently lag behind the turnaround times clients expect.
To make sure that you're ready for this change, you need to adopt software that sits at the cutting edge of these developments. Here at Vercator, we offer precisely that: cloud-based software that is already leading the way in point cloud processing. With updates built in, cloud integrations like this keep you at the forefront, enabling you to adopt automated feature extraction the moment it becomes a market norm. We believe that 2021 will mark a shift in automated feature extraction. There are exciting developments ahead, so stay tuned for more details.