Bringing digital twins into the world
A digital twin is a virtual replica of a physical product or asset. An asset can range in size from a component of an aircraft engine to a building or even a whole city.
What distinguishes a digital twin from any other digital model is that it is connected to its physical twin. The twin is updated in real time (or as regularly as possible) to match its real-world counterpart. This model can then be used for testing new designs, analysing results and optimising operation — seamlessly transitioning from the real world to the virtual one and back again.
According to NASA, one of the pioneers of digital twins, the goal is to create, test and build completely in a virtual environment. Physical manufacturing only begins once all requirements are met. The physical build remains tied back to its digital counterpart through sensors and scanners, enabling the digital twin to contain all the information that could have been gained by inspecting the real-world device.
Another pioneer, General Electric (GE), is applying digital twins to improve the efficiency of power turbines and aeroplane engines. As of the end of 2017, GE had over 500,000 digital twins corresponding to products, parts of products, processes and systems.
How twins are made
The lifecycle of a digital twin can be divided into three main stages: development, generation and operation. The development stage covers building the toolkit used to capture accurate as-is data, for example from CAD systems or via point clouds created using laser scanning.
The generation stage is where the output from these tools is used to create a unique digital twin for each physical asset. Once the initial version of the digital twin has been generated, it can be taken into operation: the twin's data feed is continuously enriched, with the raw data augmented by insight provided by modelling and simulations.
Digital twins rely on sensors to collect and combine data. However, there is more to digital twins than just sensors. Where there is a visual element, as in the built environment, scanning technology such as LiDAR and point cloud processing are central to creating the digital twin's model framework.
The digital twin of a machine is made up of several layers of digital information, ranging from the original CAD model to visual inspection data and detailed data on how that machine was manufactured. The goal is to simulate an individual machine deeply enough to predict how it will behave in particular situations or environments. As time passes, data is added to the machine's digital twin: maintenance records, prior sensor readings and breakdown history.
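To make that layering concrete, here is a minimal Python sketch of how a machine twin's information layers might be organised; the class and field names are hypothetical and not drawn from any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SensorReading:
    """A single timestamped measurement from the physical asset."""
    sensor_id: str
    value: float
    timestamp: datetime

@dataclass
class MachineTwin:
    """Layers of information that together describe one machine."""
    asset_id: str
    cad_model_path: str                                        # original design geometry
    inspection_scans: list[str] = field(default_factory=list)  # e.g. point cloud files
    manufacturing_data: dict = field(default_factory=dict)     # process parameters
    maintenance_log: list[str] = field(default_factory=list)   # service history
    sensor_history: list[SensorReading] = field(default_factory=list)

    def record(self, reading: SensorReading) -> None:
        """Append live telemetry so the twin keeps tracking its physical counterpart."""
        self.sensor_history.append(reading)
```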
How to implement a digital twin
From a technological point of view, digital twin technology is becoming more and more accessible. Cloud technology offers a means to process data and run simulations without expensive on-site servers. Microsoft already offers a growing range of Azure Digital Twins services to help organisations towards this goal, while Amazon Web Services (AWS) similarly offers a “Device Shadow” service as part of its AWS IoT lineup.
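As a rough illustration, a device shadow is a small JSON document holding the device's last reported state and, optionally, a desired state to push back down to it. The sketch below uses the boto3 AWS SDK; the thing name and sensor fields are hypothetical:

```python
import json
import boto3

# Client for the AWS IoT data plane; assumes AWS credentials and an IoT
# "thing" named "turbine-42" are already set up in your account.
client = boto3.client("iot-data")

# A shadow document pairs the device's last reported state with a desired
# state that AWS IoT will relay back to the device.
shadow = {
    "state": {
        "reported": {"rpm": 3012, "bearing_temp_c": 74.5},
        "desired": {"rpm": 2950},
    }
}

response = client.update_thing_shadow(
    thingName="turbine-42",
    payload=json.dumps(shadow).encode("utf-8"),
)
print(response["payload"].read())  # echoes the accepted shadow document
```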
One of the challenges of the digital twin concept is that the best results come from integrating many technologies, such as artificial intelligence and machine learning. Some applications also take advantage of developments in 3D laser scanning or IoT sensors, building on previous innovations.
Vector-based, multi-stage point cloud processing (or stitching) is making it far cheaper to produce and access the foundational data of digital twins. One of the unsung heroes of a future full of digital twins is the increasing accessibility of reality capture technology driven by advances in point cloud processing algorithms.
Advances in this type of survey technology mean that collecting the relevant data is no longer the main challenge of creating a digital twin. The challenge now lies in converting that scan data into something usable.
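As a sketch of what that conversion can involve, the snippet below uses the open-source Open3D library to register ("stitch") two overlapping scans with the Iterative Closest Point (ICP) algorithm, one common approach to pairwise alignment; the file names and distance threshold are placeholders:

```python
import numpy as np
import open3d as o3d  # open-source point cloud library; one option among many

# Hypothetical inputs: two overlapping laser scans of the same asset.
source = o3d.io.read_point_cloud("scan_a.ply")
target = o3d.io.read_point_cloud("scan_b.ply")

# ICP estimates the rigid transform that best aligns source onto target.
result = o3d.pipelines.registration.registration_icp(
    source,
    target,
    max_correspondence_distance=0.05,  # metres; tune to scan resolution
    init=np.identity(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# Apply the transform and merge the clouds into one "stitched" model.
source.transform(result.transformation)
o3d.io.write_point_cloud("stitched.ply", source + target)
print(f"alignment fitness: {result.fitness:.3f}")
```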
Even if designers have access to highly precise and detailed 3D information on a product, they can struggle to transform this information into digital twin data because of complexity and disconnected processes in the modelling phase of the project.
Fortunately, there are ongoing developments in making the process more efficient, using machine learning algorithms and vector analysis to establish new industry best practices. One example of this drive towards best practice for digital twins is the recently published Gemini Principles, a paper from the Centre for Digital Built Britain (CDBB) that sets out proposed principles to guide the national digital twin approach and the information management framework that will enable it.
The digital twin city
Digital twins have the potential to move far beyond the confines of pieces of equipment or machinery. The real future of the digital twin is one driven by scale.
The larger a system, the harder it can be to gain visibility over all of its components. This is why civil engineers are looking to apply digital twin technology to whole cities — delivering the information needed to optimise planning and resources.
Cities such as Paris, Cambridge and Toronto are actively developing digital twin city models as part of a smart city concept. Sensors are being installed to monitor as much data as possible, including traffic, environmental pollution levels, power demands and more. Once paired with machine learning algorithms, the digital twin can quickly test possible solutions and send the resulting control signals back to its physical counterpart.
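In outline, that loop looks something like the toy sketch below: candidate interventions are scored against the twin's model before any control signal reaches the street. Everything here, the surrogate model included, is purely illustrative:

```python
def predicted_congestion(green_seconds: float, demand_vph: float) -> float:
    """Toy surrogate model: longer green time clears more of the demand."""
    capacity = green_seconds * 30  # vehicles served per hour (made-up factor)
    return max(demand_vph - capacity, 0.0)

def choose_signal_timing(demand_vph: float, candidates: list[float]) -> float:
    """Evaluate each candidate timing on the twin and return the best one."""
    return min(candidates, key=lambda g: predicted_congestion(g, demand_vph))

# Live sensor reading from a junction in, control signal back out.
best = choose_signal_timing(demand_vph=1500.0, candidates=[20.0, 30.0, 45.0])
print(f"send green time of {best:.0f}s to the junction controller")
```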
Again, advances in survey technology are helping to create the bedrock models over which sensor data is overlaid.
Towards the future
We are at the beginning of the digital twin era. Critical to its success will be the delivery of more complex simulation applications and platforms that take advantage of digital twin data and make the model more complete.
BIM database-first models, for example, offer interesting pairing possibilities with real-time digital twin data. BIM, developed to enable greater collaboration in the construction industry, provides a single-source-of-truth platform that allows data to be viewed flexibly. The same kind of filtering capabilities could help digital twin models deliver the right information to the right people at the right time.
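A toy sketch of that idea: one shared set of records, projected into role-specific views so each audience sees only the fields it needs (all names here are hypothetical):

```python
# Single source of truth: every discipline reads the same records.
RECORDS = [
    {"element": "AHU-01", "zone": "L3", "temp_c": 21.4, "service_due": "2024-06"},
    {"element": "AHU-02", "zone": "L4", "temp_c": 27.9, "service_due": "2023-11"},
]

# Each role gets a filtered view rather than a separate copy of the data.
VIEWS = {
    "facilities": ["element", "zone", "service_due"],  # maintenance planning
    "energy":     ["element", "zone", "temp_c"],       # performance analysis
}

def view_for(role: str) -> list[dict]:
    """Project the shared records down to one role's fields."""
    fields = VIEWS[role]
    return [{k: r[k] for k in fields} for r in RECORDS]

print(view_for("facilities"))
```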
Beyond sensor data, business systems and visual and environmental context will be needed to provide an even greater pool of data to which simulation algorithms can be applied. Such platforms will also be key to setting up bi-directional communication and data transmission between the physical world and its systems and their digital counterparts.
As digital twins multiply across the enterprise, there will be the possibility of creating a network effect, with different twins communicating with each other. Orchestrating these “twin networks” could unlock many new opportunities, and both IDC and 451 Research highlight digital twin orchestration as an essential long-term capability.
The notion of machines talking to one another, reasoning and making decisions together will be transformative for how systems are operated and managed in the future. With this network effect in play, there is a good chance of yet another major internet-led revolution.
Digital simulations can help prevent real-world problems. Smart components connected to a cloud-based system could gather sensor data, allowing real-time status to be analysed and compared with historic performance. Artificial intelligence (AI) and software analytics would then update the digital twin as its physical twin changes.
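One simple form such a comparison could take is flagging live readings that drift too far from the sensor's own history, as in this illustrative sketch:

```python
import statistics

def deviates_from_history(live: float, history: list[float], z_max: float = 3.0) -> bool:
    """Flag a live reading more than z_max standard deviations from the
    mean of the sensor's historic readings (needs at least two samples)."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(live - mean) > z_max * stdev

# Hypothetical bearing-temperature history for one component, in Celsius.
history = [70.1, 71.3, 69.8, 70.6, 71.0, 70.4]
print(deviates_from_history(82.5, history))  # True: worth investigating
```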
We sit at an exciting juncture where digital and information technology interact with physical spaces. Advances in reality capture technology are helping to drive this change, and digital twins will play a central role in how this information is used in the future.
Tags: point clouds