Vercator Blog - Insights from the Vercator Team

LiDAR Point Cloud Trends

Written by Charlie Cropp MRICS | Apr 1, 2021 1:30:32 PM

As point clouds move into the mainstream, there is a growing need to exploit their capability and share their outputs. The use of point clouds is moving beyond capturing physical space towards conveying the components of the scenes themselves. This is where point cloud classification and feature extraction take centre stage.

Classification of objects is the first component in addressing the complexity of large point cloud data sets. Having the ability to separate small subset elements from within the larger point cloud enables items to be categorised, counted, and attributed. Here, we look at some of the trends making this happen.

Trend 1: Increased automation

A point cloud is fundamentally a collection of XYZ data points. It's a large but sparse representation of a scene in three dimensions. By extracting objects and classifying them — whether by colouring points, applying data tags, or both — you make the data far easier to interpret and analyse.
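The idea above can be sketched in a few lines. This is a minimal illustration, not any particular vendor's format: the coordinates, the label scheme, and the class names are all assumptions made up for the example.

```python
import numpy as np

# A point cloud is, at its core, an N x 3 array of XYZ coordinates.
points = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 2.4],
    [5.2, 3.1, 0.0],
])

# Classification adds one label per point — here 0 = ground, 1 = wall
# (a hypothetical scheme chosen purely for illustration).
labels = np.array([0, 1, 0])

# Once labelled, separating a subset from the larger cloud is a
# simple boolean mask — the points can then be counted or attributed.
ground = points[labels == 0]
print(ground.shape)  # (2, 3)
```

The same per-point label array is what lets classified subsets be counted, coloured, or exported independently of the rest of the scene.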

While classification has always been seen as an essential step in a reality capture process, manual selection is too time-consuming for widespread application. As use cases and point cloud sizes grow, it becomes ever less practical. Fortunately, the problem can be addressed by automating LiDAR data classification. Robust algorithms, which will simplify the application of automation to point cloud data classification, are increasingly a reality.

What to look for:

Automation is growing across the board within point cloud processing. New software applications are improving classification and moving the process towards full automation. At Vercator, we've pioneered automation within point cloud registration and are applying the same technology to improve object classification. Vector-based and multi-stage registration algorithms can deliver a more robust registration that simplifies classification and enhances the quality of analysis.

Long-term, data classification algorithms can be trained to create a fully automated system. Eventually, this automatic verification will be good enough to assign classifications without the need for human intervention. Realistically, some manual cross-checking will be required for the foreseeable future, but how much will vary by project specification and the complexity (or novelty) of the data being analysed.

Trend 2: Trust-but-verify learning

Although full automation is on the horizon, it's not quite perfect yet. The best point cloud classification software should enable users to embrace both worlds — automated and verified.

Algorithms can provide a first pass at classification, which surveyors can then manually check for accuracy. Quality assurance of confidence assessments (and the possibility of a manual review) will be a significant part of any state-of-the-art automated point cloud classification software for the foreseeable future. The ability to set different certainty levels can enable users to trade speed for accuracy depending on the project's nature.

What to look for:

Just like standard object detection software, a confidence score should be assigned to each classification, and parameters set to trigger cross-check reviews based on that level of certainty — for example, anything below 80% confidence is flagged for manual checking.
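The trust-but-verify workflow described above reduces to a simple threshold filter. This is a hedged sketch: the detection tuples, class names, and the 80% cut-off are illustrative assumptions, not the output of any specific classifier.

```python
# Hypothetical classifier output: (object_id, class, confidence) tuples.
detections = [
    ("obj-1", "pipe", 0.96),
    ("obj-2", "valve", 0.74),
    ("obj-3", "wall", 0.88),
]

# Anything below this confidence is routed to a surveyor for review;
# raising or lowering it trades speed for accuracy per project.
REVIEW_THRESHOLD = 0.80

auto_accepted = [d for d in detections if d[2] >= REVIEW_THRESHOLD]
needs_review = [d for d in detections if d[2] < REVIEW_THRESHOLD]

print(len(auto_accepted), len(needs_review))  # 2 1
```

In a learning system, the surveyor's verdicts on the `needs_review` items would feed back into training, which is what gradually raises the threshold at which checks become unnecessary.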

As algorithms improve and learn, the number of classifications needing cross-check verification will reduce. Gradually, algorithms should remove the need for cross-checks during processing. Automated classification will use the verification inputs to improve the algorithm, helping teach the software how to improve accuracy and raise the threshold for checks.


Trend 3: "In the cloud" processing

Point cloud processing, in general, is moving to the cloud. The combination of processing power, scalability, resilient infrastructure and near-infinite storage capability makes cloud computing the natural home of any point cloud processing application.

More cloud-native applications will be developed that scale up and down on demand to get the job done quicker. The ability of point cloud processing software to parallelise tasks has already brought the scalability of resources needed to considerably accelerate the processing of large point cloud data sets.

Performance continually improves as cloud infrastructure gets faster. This is mainly due to:

  • Faster processing through the parallelisation of tasks
  • Immediate registration and processing, with direct transfer of files from the field to the cloud
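The parallelisation point can be sketched with Python's standard library. This is a toy stand-in — `process_chunk` here just sums numbers, where a real pipeline would register or classify a scan tile — but the fan-out pattern is the same one cloud workers exploit at scale.

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for per-chunk registration/classification work;
    # a real job would operate on a tile of the point cloud.
    return sum(chunk)

# A large data set split into independent chunks.
chunks = [[1, 2], [3, 4], [5, 6]]

# The chunks are processed concurrently; in the cloud, the pool of
# workers can scale out with demand instead of being fixed.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(process_chunk, chunks))

print(results)  # [3, 7, 11]
```

Because each chunk is independent, throughput scales roughly with the number of workers — which is why on-demand cloud resources suit large point cloud jobs so well.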

Data is also naturally drawn to where the rest of the information is already held — a phenomenon known as “data gravity”. Basically, it’s easier to move applications to the data rather than the other way round; an example of this is the use of cloud-based “Common Data Environment” with BIM (Building Information Modelling).

What to look for:

A cloud-based approach to point cloud processing, in general, can enable rapid take-up of solutions such as automated point cloud classification. The great things about a cloud software approach are that:

  • More innovative classification models will become part of existing software solutions.  
  • It will be easier to share with other stakeholders.
  • It will be simpler to integrate with other data sets.

By making a future-proofed, cloud-based investment today, you will improve the accessibility of all point cloud classification and feature extraction trends in the future.

Suggested reading: For more information on point cloud processing in the cloud, check out our ebook — Are You Ready For the Cloud?

Trend 4: Hybrid data sets

Reality capture will become increasingly dependent on the merging of multiple types of scan outputs. By combining scanning techniques, each of which has its own strengths and weaknesses, the aim is to maximise the efficiency and effectiveness of the data capture tools at your disposal. 

For example, terrestrial laser scanners excel at the accurate capture of data, such as the interior of buildings, industrial plants, and civil infrastructure. Reconstruction of 3D shape and appearance from unmanned aerial vehicle (UAV) based photographs enables rapid capture of exterior structures and their surroundings. SLAM (Simultaneous Localisation and Mapping) is automating rapid, lower-precision data acquisition for indoor and outdoor spaces. In the future, all of these data types will need to be stitched together and then utilised within point cloud classification processes.
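The merging step can be sketched as follows. This is a simplified illustration with made-up data: it assumes the scans are already registered into a common coordinate frame (in practice the hard part), and the source codes and precision ranking are hypothetical.

```python
import numpy as np

# Hypothetical scans from three capture methods, each an N x 3 XYZ
# array, assumed already registered into a shared coordinate frame.
tls_points = np.random.rand(100, 3)        # terrestrial laser scanner
uav_points = np.random.rand(50, 3) + 10.0  # UAV photogrammetry
slam_points = np.random.rand(30, 3) - 5.0  # SLAM capture

# Stitch the hybrid data set into one cloud.
merged = np.vstack([tls_points, uav_points, slam_points])

# Keep a per-point source tag so downstream classification can weight
# points by the precision of the method that captured them.
sources = np.concatenate([
    np.full(len(tls_points), 0),   # 0 = TLS (highest precision)
    np.full(len(uav_points), 1),   # 1 = UAV
    np.full(len(slam_points), 2),  # 2 = SLAM (lowest precision)
])

print(merged.shape, sources.shape)  # (180, 3) (180,)
```

Carrying the source tag through the pipeline is what lets a classifier play each technique to its strengths rather than treating every point as equally trustworthy.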

What to look for:

Future trends will see a blurring of lines between software and hardware, with greater interoperability and platforms able to receive inputs from a wide range of data sources and types. Realistically, this comes back to the cloud-first trends, and your ability to invest in future-proofed software able to help you capitalise on all of these trends without needing to continually change operations. 

Software (not hardware) is defining the future of reality capture

Fundamentally, increased automation and the use of trust-but-verify systems will improve classification automation while retaining quality control. For users that invest in cloud-first platforms, it should be possible to access these technologies as they emerge — stitching together capabilities from the best providers on-demand. 

Point cloud processing applications present boundless possibilities — from simple data sharing to simplifying complex and labour-intensive tasks. These software services will continuously improve and help teams extract intelligence from their point cloud data. Such tools are transforming a collection of points into detailed, valuable 3D models. Increased automation and accessibility of these capabilities are the major trends we expect to see, and your ability to deploy the right software is critical to harnessing these outcomes.

Suggested reading: 4 Reasons We Built Vercator Around the Cloud.