June 26, 2018

The Promise and Challenges of Digital Twin

Stephen Goldberg
CEO & Co-Founder of HarperDB


The concept of Digital Twin is pretty straightforward:

  1. Take a physical entity - anything from a person to an airplane
  2. Blanket it with sensors - temperature, weight, wind speed, etc.
  3. Collect data from those sensors
  4. Create a digital replica of that entity in the virtual world for study (a minimal sketch of this flow follows the list)
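As a rough sketch of that flow, the example below models a digital twin as a small Python class that ingests sensor readings and keeps a virtual copy of the asset's state. The class, sensor names, and values are hypothetical and not drawn from any particular platform.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SensorReading:
    sensor_id: str      # e.g. "engine_temp_01" (hypothetical)
    metric: str         # e.g. "temperature_c"
    value: float
    timestamp: float    # Unix epoch seconds

@dataclass
class DigitalTwin:
    """Virtual replica of a physical asset, kept in sync from sensor data."""
    asset_id: str
    latest: Dict[str, float] = field(default_factory=dict)
    history: List[SensorReading] = field(default_factory=list)

    def ingest(self, reading: SensorReading) -> None:
        # Step 3: collect data from the sensors
        self.history.append(reading)
        # Step 4: update the virtual state so it mirrors the physical asset
        self.latest[reading.metric] = reading.value

twin = DigitalTwin(asset_id="airplane-42")
twin.ingest(SensorReading("engine_temp_01", "temperature_c", 412.7, 1_530_000_000.0))
print(twin.latest)  # {'temperature_c': 412.7}
```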

It’s an exciting concept because advances and investment in both IoT and machine learning are making digital twin increasingly realistic.

Imagine if you could track every important metric about a human being, from blood pressure to temperature to kidney function. You could then maintain a digital twin of that human and use AI and machine learning to model different courses of treatment. This could advance personalized medicine enormously.

Another scenario would be to track all of the important metrics of an airplane and use them to perform modeling for predictive maintenance. Rather than guessing when the aircraft needs service, you could have thousands of sensors blanketing the airplane and monitoring the health of its various components based on usage, wear and tear, weather conditions, and so on. That data could then be used to accurately predict when the airplane will need service.
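As a hedged illustration of that idea, the sketch below extrapolates a simple linear trend over hypothetical wear measurements to estimate when a component will cross a service threshold. The data, threshold, and model are invented for the example; real predictive maintenance uses far richer models.

```python
# Extrapolate a linear wear trend to estimate when service will be needed
# (illustrative only; real predictive maintenance models are far richer).
def service_due_hour(hours, wear, threshold):
    n = len(hours)
    mean_h = sum(hours) / n
    mean_w = sum(wear) / n
    slope = sum((h - mean_h) * (w - mean_w) for h, w in zip(hours, wear)) / sum(
        (h - mean_h) ** 2 for h in hours
    )
    intercept = mean_w - slope * mean_h
    return (threshold - intercept) / slope  # flight hours at which wear hits threshold

flight_hours = [0, 100, 200, 300, 400]
brake_wear_mm = [0.0, 0.8, 1.7, 2.4, 3.3]   # hypothetical wear measurements
print(service_due_hour(flight_hours, brake_wear_mm, threshold=10.0))  # ~1219 hours
```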

This same concept can be applied to thousands of scenarios that could provide great benefits, from cost savings to safety and much more. Why, then, are we not seeing this in practice today?

Hardware

Hardware is becoming the major bottleneck for many IoT initiatives, and digital twin is no different. Successfully implementing a digital twin project requires a significant number of sensors, which in many cases is cost prohibitive. Additionally, managing the deployment of so many sensors is complex and time consuming. Hardware startups in the IoT space are struggling to find manufacturing partners that can provide rapid prototyping and small batch runs to get their products to market. This is beginning to shift, but not to a degree significant enough to be meaningful to startups.

Over time, the cost of hardware will decline, as it always does. The demand for more flexibility on the manufacturing side will ultimately be met, either by existing players in the space or by a new player who disrupts it. Deployment will perhaps not get easier, but it will become more automated as IoT practices mature.

That said, hardware will most likely remain the long pole in the tent for digital twin to live up to its promise.   

Connectivity

Connectivity is another challenge for many digital twin concepts, primarily because physical entities, especially those that are interesting to study from a digital twin perspective, do not remain stationary. Providing connectivity to thousands or millions of sensors attached to an airplane, a car, or another asset that is constantly moving, or operating in areas with poor cellular reception, is challenging.

Furthermore, because most IoT architectural patterns currently rely on caching data on the edge and processing it in the cloud, the bandwidth required to gain value from a digital twin scenario, which could be processing billions of data points, is tremendous.
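A rough back-of-envelope calculation shows the scale; the sensor count, sampling rate, and payload size below are assumptions for illustration, not figures from this article.

```python
# Back-of-envelope bandwidth estimate for shipping raw sensor data to the cloud.
sensors = 10_000            # assumed sensors on one asset
samples_per_second = 10     # assumed sampling rate per sensor
bytes_per_sample = 100      # assumed payload incl. timestamp and metadata

bytes_per_day = sensors * samples_per_second * bytes_per_sample * 86_400
print(f"{bytes_per_day / 1e9:.1f} GB/day per asset")                      # ~864.0 GB/day
print(f"{bytes_per_day * 8 / 86_400 / 1e6:.0f} Mbit/s sustained uplink")  # ~80 Mbit/s
```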

In order to achieve digital twin successfully, projects need to adopt an intelligent edge architecture. Data needs to be analyzed and processed on the edge. The volumes of data needed to effectively analyze these scenarios are simply too high to depend on the cloud.

This is a problem that can be solved today by examining data management solutions that can run more effectively on the edge.    
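One way such an edge-first approach often plays out is to analyze raw readings locally and send only compact summaries upstream. The sketch below is a minimal, hypothetical example of that pattern; the window contents and summary fields are assumptions, not a prescribed design.

```python
import statistics

def summarize_window(readings):
    """Reduce a window of raw edge readings to a compact summary for the cloud."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": statistics.mean(readings),
    }

# Thousands of raw samples stay on the edge device...
raw_window = [72.1, 72.4, 71.9, 73.0, 72.6]  # hypothetical temperature readings
# ...and only a few bytes of summary cross the network.
print(summarize_window(raw_window))
```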

Edge Processing 

As mentioned above, moving data processing, analysis, and decision-making to the edge will be critical to avoid connectivity issues. Additionally, moving these functions to the edge ultimately lowers total cost, because the need for cloud resources is reduced. For example, an Arrow DragonBoard 410c has similar compute to an AWS t2.micro. Over the course of a year the EC2 instance costs $101.61, whereas the DragonBoard costs $75.00 one time. Assuming the DragonBoard lasts for two years, the two years of EC2 spend ($203.22) comes to roughly 270% of the DragonBoard's one-time cost. These are small figures, but when multiplied across thousands or millions of edge devices, and thousands of EC2 instances, the savings become massive.
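The arithmetic behind that comparison, using the prices quoted above, looks like this:

```python
# Cost comparison over a two-year horizon, using the prices quoted above.
ec2_per_year = 101.61          # AWS t2.micro, approximate annual on-demand cost
dragonboard_once = 75.00       # DragonBoard 410c, one-time hardware cost
years = 2

ec2_total = ec2_per_year * years             # $203.22
ratio = ec2_total / dragonboard_once         # ~2.71, i.e. roughly 270%
savings = ec2_total - dragonboard_once       # ~$128.22 per device

print(f"EC2 total: ${ec2_total:.2f}, ratio: {ratio:.0%}, savings per device: ${savings:.2f}")
# Multiply the per-device savings by thousands or millions of devices.
```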

The interesting part is that most projects already utilize these micro-computing devices for data collection and applications on the edge.  That said, the current strategy is to more heavily utilize the cloud, rather than take advantage of the compute resources already on the edge.   

The challenge really lies in the data management tools available on the edge. The majority of data management technologies do not provide distributed querying on the edge, peer-to-peer clustering and replication, or enterprise-grade database capabilities that can run on edge devices. Tools such as these will be required to move processing from the cloud to the edge, which in turn is what makes digital twin achievable in a cost-effective and efficient manner.
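To make that gap concrete, the sketch below uses SQLite purely as a stand-in for an embeddable edge database (it provides none of the distributed querying or peer-to-peer replication described above) to show a query running locally on the device rather than in the cloud.

```python
import sqlite3

# SQLite stands in here for an embeddable edge database; it lacks the
# distributed querying and peer-to-peer replication discussed above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor_id TEXT, metric TEXT, value REAL, ts REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?, ?)",
    [("engine_temp_01", "temperature_c", 410.2, 1.0),
     ("engine_temp_01", "temperature_c", 415.9, 2.0)],
)

# Query locally on the edge device and forward only the aggregate.
row = conn.execute(
    "SELECT metric, AVG(value), MAX(value) FROM readings GROUP BY metric"
).fetchone()
print(row)  # ('temperature_c', 413.05, 415.9)
```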

While you're here, learn about HarperDB, a breakthrough development platform with a database, applications, and streaming engine in one unified solution.

Check out HarperDB