Reality capture is transforming industries from construction to medical research. The term itself is relatively straightforward: reality capture technologies allow you to “capture reality”, replicating the physical world as a digital, virtual model.
Sounds simple? Getting there relies on some cutting-edge technologies: point clouds, photogrammetry and LiDAR, to name but a few. Each of these is fundamental to how different types of reality capture work, so we will look at each in a little more detail.
A crash course in any new subject must start with some definitions. What we will deliver is a basic understanding of the different options available in reality capture technology, and a sense of how this technology is not standing still. New developments are constantly improving performance, reducing processing time, delivering greater efficiency and lowering the cost of access. Armed with this knowledge, you should be in a better position to consider how to apply reality capture to your business and industry. Let’s start with laser scanning and LiDAR.
LiDAR stands for Light Detection and Ranging and is a remote sensing process which collects measurements used to create 3D models and maps of objects and environments. Using ultraviolet, visible or near-infrared light, LiDAR maps spatial relationships and shapes by measuring the time it takes for signals to bounce off objects and return to the scanner. A good analogy is the echolocation used by bats to determine where objects are and how far away they are.
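To make the time-of-flight principle concrete, here is a minimal Python sketch of the underlying arithmetic. The function name and example figures are our own, for illustration only; real scanners fold in many corrections that are ignored here.

```python
# Minimal sketch of LiDAR time-of-flight ranging; names are illustrative, not a vendor API.
SPEED_OF_LIGHT_M_PER_S = 299_792_458  # speed of light in a vacuum

def range_from_return_time(round_trip_time_s: float) -> float:
    """Distance to a target from a pulse's round-trip travel time.

    The pulse travels out to the object and back again, so the
    one-way distance is half of the total path length.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A return time of about 66.7 nanoseconds corresponds to a target roughly 10 m away.
print(f"{range_from_return_time(66.7e-9):.2f} m")
```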
LiDAR can target a wide range of materials, including non-metallic objects, rocks, rain, chemical compounds, aerosols, clouds and even single molecules. A narrow laser beam can map physical features at very high resolution; an aircraft, for example, can map terrain at 30 cm (12 in) resolution or better.
You can even capture detailed and accurate measurements of entire buildings, rooms and cities by combining multiple scans carried out by LiDAR scanners (often called laser scanners). An individual scan is formed of millions of individual measurements, one laser pulse at a time. Understanding this process is key to understanding LiDAR as a whole, because it covers everything involved in turning laser scans into a workable 3D model or virtual reality visualisation. Once individual readings are processed and organised, LiDAR data becomes point cloud data.
A ‘point cloud’ is an overarching term used to describe coordinate data; however, the source of that data can vary, with laser scanning and photogrammetry (both covered here) among the most common capture methods.
Point clouds are the foundation of 3D models used in the built environment sector. Typically, the points are rendered as pixels to create a highly accurate 3D model of an object. They are usually built from many scans to fully describe objects measuring just a few millimetres across, or objects as large as trees, buildings and even entire cities.
A point cloud is a collection of unique survey coordinates, with each point defined by its own XYZ position. Along with the coordinate value, an intensity value and (sometimes) a colour parameter are also recorded.
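As a simple illustration of what each record in a point cloud holds, here is a minimal sketch of a per-point data structure. The field names are our own assumptions; real formats such as LAS or E57 define their own, richer schemas.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CloudPoint:
    """One survey measurement in a point cloud (illustrative schema only)."""
    x: float  # position in the scan's coordinate system, e.g. metres
    y: float
    z: float
    intensity: float  # return-strength value recorded by the scanner
    rgb: Optional[Tuple[int, int, int]] = None  # colour is only sometimes captured

# A tiny three-point "cloud"; a real scan holds millions of such records.
cloud = [
    CloudPoint(12.031, 4.887, 1.202, intensity=0.71, rgb=(182, 174, 160)),
    CloudPoint(12.034, 4.891, 1.198, intensity=0.69),
    CloudPoint(12.040, 4.880, 1.210, intensity=0.74, rgb=(180, 171, 158)),
]
```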
The stage of knitting together the many scans and millions of points needed to create a 3D model is known as registration. Until recently, this has been both user-intensive and time-consuming. However, recent software developments, combining state-of-the-art algorithms, machine learning and cloud computing, have made the alignment of 3D point cloud data faster, easier and more robust. This in turn is lowering the barriers to LiDAR data capture, reducing the processing time for scan alignment and making downstream analysis and modelling more accessible.
It is worth understanding these registration processes in a bit more detail. Let’s consider a novel method: multi-stage/vector-based processing. This approach uses features in the 3D environment as natural alignment targets, recognising them and determining their location and orientation automatically. Typically, there are tens of millions of these natural targets in a scan, compared to the tens of artificial targets, or natural targets marked by eye, used in other approaches.
Using this mathematical technique, each point of laser scanner data is extrapolated into a vector (a vector has both magnitude and direction, usually shown as an arrow). Entire scans are then condensed into single points, creating ‘vector spheres’. These spheres are rotationally aligned by placing adjacent spheres within one another. Rapid 2D point density techniques then achieve alignment on the horizontal and vertical axes.
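The exact multi-stage/vector-based algorithm is not spelled out here, but as a loose, simplified illustration of the rotational-alignment idea, the Python sketch below condenses each scan’s per-point direction vectors into a histogram of horizontal headings and searches for the rotation about the vertical axis that best correlates two scans. All function names and design choices are our own assumptions, not the actual product implementation.

```python
import numpy as np

def heading_histogram(directions: np.ndarray, bins: int = 360) -> np.ndarray:
    """Distribution of horizontal headings (yaw angles) of per-point unit vectors.

    Condensing a scan's millions of direction vectors into a single
    distribution is a rough stand-in for the 'vector sphere' idea above.
    """
    yaw = np.degrees(np.arctan2(directions[:, 1], directions[:, 0])) % 360.0
    hist, _ = np.histogram(yaw, bins=bins, range=(0.0, 360.0))
    return hist / hist.sum()

def best_yaw_offset(dirs_a: np.ndarray, dirs_b: np.ndarray, bins: int = 360) -> float:
    """Rotation about the vertical axis (degrees) that best aligns scan B with scan A."""
    ha, hb = heading_histogram(dirs_a, bins), heading_histogram(dirs_b, bins)
    # Circular cross-correlation: score every candidate rotation of scan B's histogram.
    scores = [np.dot(ha, np.roll(hb, shift)) for shift in range(bins)]
    return float(np.argmax(scores)) * (360.0 / bins)
```

In practice, the direction vectors would come from estimated surface normals, which cluster around walls, floors and other planar features and so give each histogram distinctive peaks to match; the remaining translation would then be recovered separately, for example with the 2D point-density techniques mentioned above.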
Vector analysis, rather than simple point comparison, allows for more accurate scan pairing and faster methods of analysis, accelerating registration speeds by 40%-80% and removing many manual steps.
While LiDAR is a technology for making point clouds, not all point clouds are created using LiDAR. For example, point clouds can be made from images obtained from digital cameras, a technique known as photogrammetry.
As opposed to 3D scanning, photogrammetry uses photographs rather than laser pulses to gather data. Many photos have to be taken from different angles to capture the target geometry, with each photo overlapping the next, much like the scans taken via LiDAR.
The primary advantage of using photogrammetry for 3D models is its excellent ability to reproduce an object in full colour and texture. While 3D scanners can do this, photogrammetry lends itself to the purpose because photographs capture realism more readily.
Photogrammetry can also be more accessible, as cameras and photogrammetry software solutions are not as expensive as 3D scanning equipment. However, it is not as precise as laser scanning, and capturing the many photos required can prove a hassle without a multi-camera setup.
So how do you decide between 3D scanning and photogrammetry? Consider the size of the area you want to model and the level of accuracy you will need.
Landscapes are often rendered in 3D models for many applications, including interactive maps, archaeological surveys, film and video games. A realistic feel is usually more important in these cases than accuracy, and photogrammetry meets these demands best. There are still issues with weather and sharp shadows to overcome, but photogrammetry is the method of choice for recreating a topographical environment in a 3D model.
The reason many turn to LiDAR over photogrammetry is that photogrammetry has inherently lower accuracy than 3D laser scanning. This is caused by several factors, including camera resolution, calibration and the angles used. Put simply, if your chosen camera has a low resolution, or has not been calibrated precisely, your measurements will not be as accurate as they could be.
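To see why camera resolution and geometry matter, here is a minimal sketch using the standard ground sample distance relationship from aerial photogrammetry. The function and example figures are our own, for illustration; they show how pixel size, focal length and flying height set the smallest ground detail a photo can resolve.

```python
def ground_sample_distance_cm(flight_height_m: float,
                              sensor_pixel_size_um: float,
                              focal_length_mm: float) -> float:
    """Ground sample distance (cm per pixel) for a nadir aerial photo.

    Standard relationship: GSD = (flying height * pixel size) / focal length.
    A larger GSD means each pixel covers more ground, so fine detail
    (and therefore measurement accuracy) is lost.
    """
    pixel_size_m = sensor_pixel_size_um * 1e-6
    focal_length_m = focal_length_mm * 1e-3
    return flight_height_m * pixel_size_m / focal_length_m * 100.0

# Example: flying at 100 m with a 4.4 micron pixel and a 24 mm lens gives ~1.8 cm per pixel.
print(f"{ground_sample_distance_cm(100.0, 4.4, 24.0):.1f} cm/pixel")
```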
When designing a part, or renovating a building, however, accuracy matters most. For this high level of accuracy, you will need LiDAR scanning. With photogrammetry, even small inaccuracies can lead to big mistakes or skewed measurements overall.
There are also use cases that combine photogrammetry and LiDAR; 3D city models, archaeology and forestry are typical applications. Photogrammetry, in contrast to LiDAR, cannot penetrate a vegetation canopy, so matching digital aerial images with laser scans is proving a cost-effective and reliable solution. Combining the two in these cases makes perfect sense, and is also a strong indicator of how far and fast this technology is developing and expanding.
What is clear is that reality capture offers unique opportunities in a range of industries and sectors, and the technology is evolving rapidly. Ongoing adoption of automatic procedures to analyse and extract information from 3D point clouds will continue to deliver versatility, ease of use and low costs. This will make both LiDAR scanning and photogrammetry accessible to a wider range of use cases, including non-traditional users.
Advances in computer vision, artificial intelligence and machine learning are already adding new dimensions to reality capture. Even if some of the laser scanner hardware can be considered mature, we are still very much in the early days of the computational and software-based techniques that will form the basis of reality capture going forward. What will be important is partnering with the right surveyors and reality capture specialists to ensure you stay up to date with the exciting possibilities emerging in your sector.