How and Why to Colourise Your Point Cloud
Point clouds are now embedded in design processes and workflows. Their accuracy and availability are the foundation for decision-making across many applications. Autonomous cars and robot navigation are two flashy examples, but the range of uses is far wider. The construction sector, for example, is rapidly adopting reality capture in conjunction with BIM (Building Information Modelling). Applications are numerous and increasing, with point clouds now being considered a vital digital asset.
The question is, how do we get more from these assets? The growth of point cloud applications will be dependent on extracting objects (including LiDAR data classification and feature extraction) or adding meaningful information such as colour. Here we will look at why adding colour to point clouds is essential and how it’s done. Let’s get started.
What is point cloud colourisation?
A point cloud is a set of data points in a three-dimensional coordinate system, with each point spatially defined by X, Y, Z coordinates. These are usually gathered through LiDAR (terrestrial laser scanning), photogrammetry, aerial LiDAR, mobile mapping and SLAM — and, increasingly, a combination of several scanning techniques.
A native point cloud does not include colour. You can add colour to point clouds in two ways:
- You can add true colours that correspond to the real environment.
- You can classify objects by assigning specific colours to simplify visual review of the models. For example, blue for buildings and brown for ground.
Adding true colours is about making a more accurate image of the data captured. False-colour makes the scan easier to interpret visually.
When you view a colourised point cloud, each point carries both its dimensional measurements and an RGB value. This makes the point cloud resemble a 3D photograph of the location, sampled at each point where the scanner measured. The main effect is enabling users (both new and experienced) to quickly and easily understand what they're looking at.
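To make the idea concrete, a colourised point cloud can be thought of as a table of per-point records pairing coordinates with colour. The minimal sketch below (plain Python with NumPy, using made-up values) shows that pairing; real formats such as LAS or E57 store the same information in a binary layout.

```python
import numpy as np

# A minimal colourised point cloud: one row per point.
# Columns: X, Y, Z in metres, followed by R, G, B as 0-255 values.
points = np.array([
    [12.40, 3.12, 1.05, 186, 142, 101],  # e.g. a point on a brick wall
    [12.41, 3.15, 1.07, 190, 145, 104],
    [ 8.02, 7.88, 0.02,  87,  94,  63],  # e.g. a point on grass
])

xyz = points[:, :3]                    # the spatial measurements
rgb = points[:, 3:].astype(np.uint8)   # the colour attached to each point
print(xyz.shape, rgb.shape)            # (3, 3) (3, 3)
```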
How to colourise point clouds
There is a growing number of methods for producing coloured point clouds, but they predominantly fall into four categories.
Option 1: Cross-referencing image data to colourise the point cloud
Cross-referencing image data can be carried out manually (the oldest approach) or automatically. Increasingly, point cloud data is colourised in this way using HD photography acquired onsite. The image capture may be integrated with the scanner (making alignment far simpler) or carried out separately. Each LiDAR data point is then associated with the colour of the closest corresponding image pixel.
The photographs can be overlaid across the entire point cloud, with each RGB value applied directly to the corresponding 3D point to produce a 'pixel perfect', fully colourised point cloud. Tools can use RGB (red, green, blue) data from a raster to colourise a point cloud file of the same location: each point in the point cloud is allocated the RGB value of the raster pixel at the same location.
Note: Although this can be done both manually and automatically, most automated software still contains some form of manual cross-checking and alignment to start the process and then verify the results.
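As a rough sketch of the automated step, the snippet below assigns each point the colour of the nearest image pixel using a simple pinhole camera model. It assumes the points have already been transformed into the camera's coordinate frame and that the intrinsics (fx, fy, cx, cy) are known from calibration; production tools also handle lens distortion, occlusion and blending between photographs.

```python
import numpy as np

def colourise_points(points_cam, image, fx, fy, cx, cy):
    """Assign each 3D point (in the camera frame) the RGB value of the
    nearest image pixel, using a basic pinhole projection."""
    X, Y, Z = points_cam.T                       # (N, 3) -> three length-N arrays
    in_front = Z > 0
    Zs = np.where(in_front, Z, 1.0)              # avoid divide-by-zero behind the camera
    u = np.round(fx * X / Zs + cx).astype(int)   # pixel column
    v = np.round(fy * Y / Zs + cy).astype(int)   # pixel row

    h, w, _ = image.shape
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    colours = np.zeros((points_cam.shape[0], 3), dtype=np.uint8)
    colours[valid] = image[v[valid], u[valid]]   # nearest-pixel RGB lookup
    return colours, valid
```

Points that fall outside the photograph (or behind the camera) are flagged so they can be coloured from another image or left uncoloured.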
Option 2: Using photogrammetry
Photogrammetry is a process of generating point clouds using photographic images. This forgoes the LiDAR component of the first option and creates a point cloud using multiple photos from different angles. This can be a great option for producing colourised point clouds, particularly for rough modelling. However, photogrammetry has a lower inherent accuracy rate than 3D LiDAR, meaning that it’s not always suitable for the job at hand.
Option 3: Colour-enabled LiDAR
Advanced LiDAR scanners can be equipped with red, green and blue laser diodes and matching avalanche photodetectors, illuminating the target in those colours during scanning and capturing the colour intensity that returns. These scanners also use standard infrared light to range targets, just like conventional LiDAR.
To work well, you need registration software that can produce output at the correct density and at a reasonable speed. Producing complete and accurate 3D point clouds requires multiple scans. The result can be dozens, if not hundreds or thousands, of independent scans. Using colour-enabled scanners accentuates the value of developments like multi-stage and vector-based point cloud registration, and of harnessing scalable cloud-based computing power to accelerate and simplify the registration process.
Suggested reading: To learn more about cloud-based processing, check out our free ebook — Are You Ready For The Cloud? A Surveyor’s Guide to the Future of 3D Laser Scanning.
Option 4: Colours (real or false) applied to classified objects
Sometimes you need colour to highlight specific areas without needing realism. Especially for outdoor scenes, the LAS file format includes fields for object classification and colour, so you can identify particular features — such as ground, pylons and vegetation — tag them, and add a defining colour.
False-colour can be deployed for any number of reasons, and partnered with cross-check and clash-detection algorithms in the context of site revision and BIM. Fundamentally, the process simply requires identifying discrete objects within a scene and then applying colour based on those objects. The colour doesn't have to be realistic; it can simply be used to pick out objects of interest more easily. This process hinges on the effectiveness of the data classification and feature extraction capabilities of the software you have deployed, and can be accomplished using both manual and automated processes.
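As an illustrative sketch of that workflow, the snippet below uses the open-source laspy library (assumed to be installed) to apply false colours to an already-classified LAS file, mapping standard ASPRS class codes to RGB values. The filename and colour choices are placeholders.

```python
import numpy as np
import laspy

# Standard ASPRS classification codes mapped to false colours (8-bit RGB).
CLASS_COLOURS = {
    2: (139, 69, 19),    # ground          -> brown
    5: (34, 139, 34),    # high vegetation -> green
    6: (70, 130, 180),   # building        -> blue
}

las = laspy.read("classified_scan.las")  # placeholder filename

# Make sure the point format can store RGB (e.g. point format 3).
if not {"red", "green", "blue"} <= set(las.point_format.dimension_names):
    las = laspy.convert(las, point_format_id=3)

red = np.zeros(len(las.points), dtype=np.uint16)
green = np.zeros(len(las.points), dtype=np.uint16)
blue = np.zeros(len(las.points), dtype=np.uint16)

classification = np.asarray(las.classification)
for code, (r, g, b) in CLASS_COLOURS.items():
    mask = classification == code
    # LAS stores colour as 16-bit per channel, so scale the 8-bit values up.
    red[mask], green[mask], blue[mask] = r * 256, g * 256, b * 256

las.red, las.green, las.blue = red, green, blue
las.write("classified_scan_coloured.las")
```

Any class not listed in the mapping simply stays black, which itself helps unclassified points stand out during review.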
Suggested reading: For more details about classification, check out our blog — How to Automate LiDAR Data Classification.
Why colourisation is important
Point clouds represent data. To get the most out of them, they need to be presented as models of the real world. Colourisation is an important step in that modelling. Colours bring more clarity in several scenarios:
- Analysis: As point cloud data appears in more sectors and applications, a wider range of stakeholders are required to give input. To avoid the "what am I looking at here?" question, adding colour aids in providing a "picture" that people can more readily understand.
- Comparison: Regular point cloud scans can identify anomalies during any project — acting as a means of quality and progress checking. This can be either from real-world to model (digital twins or in Scan-to-BIM) or from the model back to the real world (BIM-to-Field). By using colour to flag areas of overlap or potential conflict, actions can be taken quickly. Colours can also be used as confirmation that the required activities have been carried out successfully.
- Highlighting detail: Point cloud data sets can often span vast areas — with many features and structures potentially hidden within the data. Extraction can help pick out, label, and vectorise relevant characteristics such as hard/soft-breaks, assets, lane lines, solid/surface models. A feature extraction process combined with colourisation can also examine even more specific details, such as surface conditions, minimum clearances/widths, volumes and surface movement.
- Realism: During construction, it can be essential to provide colourised visualisation of the site. The ability to view the complete scene from each scan location can keep stakeholders up to date on progress with animated fly-throughs across the whole point cloud. For the engineering or architectural teams, the ability to remotely transition or walk between scan locations can help identify problem areas before they emerge.
Flexible software is central to future-proofed reality capture
Colourisation is one of many modelling features that will grow in importance as point cloud use cases expand. Software that can combine these data sets and add capabilities over time will become more and more critical.
Humans can't handle these large and complex volumes of information. Ensuring that your point cloud registration is fast and automated at this stage is a prerequisite to more effective experimentation and deployment of features such as object classification and colourisation.
Point cloud data must translate into more efficient modelling to open up new possibilities, support decision-making, and extract more useful information.
The ability to evolve and grow is one of many reasons that we built our own point cloud processing software in the cloud. If you want to learn more about Vercator and its many capabilities, check out our ebook — Point Cloud Processing Has Changed — or get in touch today.
3. SLAM and wearables
Reality capture is becoming even faster and more accessible, and SLAM (Simultaneous Localisation and Mapping) is an excellent example of this. Above all, SLAM is quick and easy. Compared to traditional static laser scanners, the mobile nature of SLAM-enabled scanners delivers far simpler and faster workflows.
A significant benefit of SLAM in 3D reality capture comes from repetition. Because it's fast and easy, it can be used repeatedly across the lifecycle of a project. The more often an area is mapped, the more current the data is, and the more valuable it is. Wearable scanners are hugely attractive for increasing the scanning rate, especially where there are safety, access or security issues to consider. Mobile scanning can even be deployed by autonomous robotics — for example, Doxel.
How to execute:
The challenge with SLAM is accuracy. Mobile scanning introduces an additional variable, and that limits the precision of the scans that can be produced. SLAM is most effectively deployed where rougher scans can be used, and structured measurements don’t need to be taken based on that data.
In order to effectively deploy SLAM (and mobile scanning more generally), you need software that’s able to combine mobile scanning outputs with the outputs of static scanners. Again, this is an area of significant development within the industry, and one that we are pioneering at Vercator.
Suggested reading: If you want to learn more about SLAM, and how it can be used in conjunction with other datasets, check out our ebook — How SLAM Enables the Evolution of Wearable Reality Capture Technologies.
What next?
"We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run" — Roy Amara
The future is nearer than we think. Automation and robotics are already impacting construction, and reality capture sensors are already generating vast quantities of data that can be turned into actionable information.
The key to success in reality capture is figuring out why you need data, what data you need, how best to capture this, and how you plan to use it. By deploying cloud-based software, you have the means to coordinate the complete range of scanning technologies to form a full interpretation of a site or environment. Check out our guide to 3D Laser Scanning Software to learn more.