Want to deploy digital twins as part of your predictive maintenance strategy? You should, for countless reasons that have been identified elsewhere. But like anything else worth doing in business, it’s a process – you need to know where your organization lands on the maturity curve, and where it’s going.

That’s why my most recent blog article explained the differences between digital shadows and digital twins, placing them squarely in the context of the AtkinsRéalis maturity model, a practical, widely used guide to digital twin maturity. If you need a quick refresher, it’s worth taking a moment to review that article.

For this article, we’ll take the next step to identify the most fundamental building blocks required to create a digital twin.

Before we go there, a quick reminder: No organization should take any of these steps without first identifying its business objectives and desired outcomes. Otherwise, your digital twin initiative will probably end up being an expensive, time-consuming science experiment. You probably don’t have the time or budget for that.

When your digital twin strategy is tightly linked with strategic goals (such as anticipating equipment failure, reducing downtime, and optimizing resource allocations), good things happen. Everyone gets on board more quickly because they know which benefits to expect. Your team makes smarter investments in the right tools and spends its time on the right tasks. That’s how you drive performance and value with digital twin initiatives.

Traits needed for digital twin building blocks

Composable Digital Twins (CDTs) represent a new approach to building digital twins faster and more efficiently. Think of them as a technological periodic table of elements, where the elements can be combined in different ways to address different needs. For these elements to operate together most effectively in digital twins, they need to possess three critical traits:

  • Composability: These systems are built from loosely coupled technologies, making them more flexible and scalable. For example, a machine learning model that detects anomalies in sensor-generated vibration data for predictive maintenance can be reused across different machines, accelerating deployment.
  • Executability: This enables digital twins to do more than just mimic physical objects in an offline, disconnected environment. They use real-time data to predict and simulate actions, even operationalizing complex models. For instance, a physics-based model requiring many hours of computation can be distilled into a reduced order model (ROM) that runs in real time. Converting the ROM into the Open Neural Network Exchange (ONNX) format makes it portable and cost-effective for real-time use in operational systems.
  • Adaptability: This lets digital twins evolve over time. By integrating real-time data, AI, and a feedback loop of business outcomes, they continuously improve their accuracy and performance. For example, a digital twin of a smart city can adjust traffic patterns dynamically based on real-time data and predicted trends.
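To make the composability trait concrete, here’s a minimal sketch of a reusable anomaly-detection component of the kind described above. The class, the baseline readings, and the threshold are all hypothetical illustrations, not part of any specific product: the point is that one loosely coupled element can be fitted to different machines and reused without modification.

```python
from statistics import mean, stdev

class VibrationAnomalyDetector:
    """Reusable component: flags readings more than `threshold`
    standard deviations away from a machine's own baseline."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.baseline_mean = None
        self.baseline_std = None

    def fit(self, baseline_readings):
        # Learn what "normal" looks like for this particular machine.
        self.baseline_mean = mean(baseline_readings)
        self.baseline_std = stdev(baseline_readings)
        return self

    def is_anomaly(self, reading):
        z = abs(reading - self.baseline_mean) / self.baseline_std
        return z > self.threshold

# The same component is composed into twins of two different machines,
# each fitted on that machine's own (hypothetical) baseline data.
pump_twin = VibrationAnomalyDetector().fit([2.0, 2.1, 1.9, 2.0, 2.2, 1.8])
fan_twin = VibrationAnomalyDetector().fit([5.0, 5.2, 4.8, 5.1, 4.9, 5.0])

print(pump_twin.is_anomaly(4.5))  # far outside the pump's baseline
print(fan_twin.is_anomaly(5.1))   # perfectly normal for the fan
```

A production detector would be a trained ML model rather than a z-score rule, but the composability principle is the same: the element is self-contained, so deploying it to a new asset is a matter of fitting, not rebuilding.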

A working list of building blocks – and what to look for

With these ideal characteristics in mind, what are the “building block” elements available to be applied in digital twin applications today? Many are already found in your organization. Others can be acquired from an ecosystem of providers. The result: A flexible, adaptable system that isn’t reliant on a single source. It’s an ideal approach for businesses that need to adjust their digital twins as requirements evolve.

This table from the Digital Twin Consortium offers a great view of the elements organizations can use to construct a digital twin.

As you can see, that’s a lot of elements. Don’t let them scare you. You can be up and running with a fully functional operational digital twin with as few as five of these elements. Your organization can add more as it gains experience and confidence in the digital twin arena, and as business users identify more areas where digital twins can help. Let’s take a closer look at them.

Core building block #1: IoT streaming analytics

Streaming data is essentially a live feed of information from IoT sensors on assets, systems, or facilities, capturing data on things like temperature, pressure, and vibration. This real-time data is what makes a digital twin more than just a static model—it's essential for predicting future behavior based on both the physical characteristics and external factors affecting an object. For a fully functional digital twin, this data is critical. It replaces assumptions with real insights, helping to plan and optimize system performance.
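A minimal sketch of the streaming idea, using invented temperature readings: the twin maintains a sliding window over the live feed and recomputes a rolling statistic on every arrival, so its view always reflects current conditions rather than a static snapshot.

```python
from collections import deque

class StreamingSensorMonitor:
    """Keeps a sliding window over a live sensor feed and returns
    a rolling average each time a new reading arrives."""

    def __init__(self, window_size=5):
        self.window = deque(maxlen=window_size)  # old readings fall off

    def ingest(self, reading):
        self.window.append(reading)
        return sum(self.window) / len(self.window)

# Hypothetical temperature feed; the final value averages only the
# three most recent readings, reflecting the asset's current state.
monitor = StreamingSensorMonitor(window_size=3)
feed = [70.0, 71.0, 75.0, 90.0, 92.0]
rolling = [monitor.ingest(r) for r in feed]
print(rolling[-1])
```

Real IoT streaming analytics platforms handle out-of-order events, windowing semantics, and far higher volumes, but the core pattern is the same: compute continuously over a moving window, not once over a stored batch.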

Core building block #2: Predictive analytics

Predictive analytics, powered by machine learning, drives data-driven decisions faster and more consistently than humans can on their own. Just as important, it can be scaled up easily and adapted to account for new data and business results. In a digital twin environment, AI processes the full range of sensor data to predict future outcomes and identify the best actions to take for a system or asset. This is what sets true digital twins apart from digital models. For example, AI can predict when equipment is likely to fail and schedule maintenance in advance to avoid unexpected downtime – all while continuously learning and improving its accuracy.
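As a toy illustration of failure prediction, the sketch below fits a trend line to a hypothetical bearing-temperature history and extrapolates when it will cross a failure limit. The data, the 90 °C threshold, and the function names are all invented for illustration; real predictive maintenance models are far richer, but the principle of projecting degradation forward is the same.

```python
def fit_trend(times, values):
    """Closed-form least-squares line fit; returns (slope, intercept)."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    slope = (sum((t - mt) * (v - mv) for t, v in zip(times, values))
             / sum((t - mt) ** 2 for t in times))
    return slope, mv - slope * mt

def hours_until_threshold(times, values, threshold):
    """Extrapolate the degradation trend to estimate when the
    signal will cross the failure limit (in the same time units)."""
    slope, intercept = fit_trend(times, values)
    return (threshold - intercept) / slope

# Hypothetical history: (hour, bearing temperature in °C), limit 90 °C.
hours = [0, 10, 20, 30, 40]
temps = [60.0, 63.0, 66.0, 69.0, 72.0]
print(hours_until_threshold(hours, temps, 90.0))  # roughly hour 100
```

With an estimate like this in hand, maintenance can be scheduled ahead of the predicted failure window instead of after an unplanned outage.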

Core building block #3: ModelOps

When you're deploying digital twin models at scale, effective management is crucial for ensuring ongoing accuracy and updated model deployments. There are several factors that contribute to effective management. Flexible deployment options enable models to run in different environments—from batch processing to real-time streaming, cloud, or edge setups. Continuous monitoring is required to track performance and pinpoint any data changes that might impact how the models function. With many models in play, having a searchable repository for easy comparison and version tracking is key. Finally, automating workflows and integrating systems through open APIs is vital for maintaining efficiency and consistency, keeping everything running smoothly.
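The continuous-monitoring piece of ModelOps can be sketched very simply: compare the distribution of live data against the distribution the model was trained on, and flag the model for retraining when they diverge. The score, the samples, and the threshold below are illustrative assumptions, not a standard from any particular ModelOps product.

```python
from statistics import mean, stdev

def drift_score(training_sample, live_sample):
    """Standardized shift between the live feature mean and the
    training mean: a crude but common data-drift signal."""
    return abs(mean(live_sample) - mean(training_sample)) / stdev(training_sample)

def needs_retraining(training_sample, live_sample, limit=2.0):
    # Flag the model when live data drifts well outside training conditions.
    return drift_score(training_sample, live_sample) > limit

# Hypothetical feature values seen at training time vs. in production.
train = [10.0, 10.5, 9.5, 10.2, 9.8, 10.0]
stable = [10.1, 9.9, 10.0, 10.2]
shifted = [14.0, 13.8, 14.2, 14.1]

print(needs_retraining(train, stable))   # distribution unchanged
print(needs_retraining(train, shifted))  # mean has drifted; retrain
```

Production ModelOps tooling layers versioned repositories, deployment pipelines, and open APIs on top of checks like this, but drift detection is the trigger that keeps deployed models accurate.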

Core building block #4: Decision science

Intelligent decision making contributes to more effective "human-in-the-loop" processes by using data and logic to automate decisions, like triggering alerts or actions when needed. These systems can also suggest ways to improve performance or prevent problems, like advising whether to repair or replace an asset based on predicted failures and cost considerations. It's crucial that this decision-making is always aligned with the digital twin's business goals, ensuring that any actions taken support the overall objectives.
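The repair-or-replace logic mentioned above can be sketched as an expected-cost comparison. All the dollar figures and probabilities here are hypothetical, and a real decision model would also weigh factors like residual failure risk after a repair; this is only a sketch of the pattern.

```python
def recommend_action(failure_probability, repair_cost, replace_cost,
                     downtime_cost_if_failure):
    """Pick the action with the lowest expected cost. Doing nothing
    carries the downtime cost weighted by the predicted failure risk."""
    expected_costs = {
        "no action": failure_probability * downtime_cost_if_failure,
        "repair": repair_cost,
        "replace": replace_cost,
    }
    return min(expected_costs, key=expected_costs.get)

# Hypothetical asset with an 80% predicted failure risk: a $5k repair
# beats both a $20k replacement and an expected $40k of downtime.
print(recommend_action(0.8, 5_000, 20_000, 50_000))
# At a 5% risk, the expected downtime cost ($2.5k) is the cheapest option.
print(recommend_action(0.05, 5_000, 20_000, 50_000))
```

Encoding the trade-off explicitly is also what keeps automated decisions aligned with business goals: the costs in the table are where those goals enter the loop.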

Core building block #5: Situational intelligence

An effective digital twin user interface delivers continuous situational intelligence, making it easier for humans to interpret and act on real-time recommendations and insights. With advanced visualization tools, you can better understand these insights and interact with digital twins, quickly assessing things like machine health, output quality, and process efficiency. This interface takes complex data and presents it in a clear, actionable way, making it easier for you to make informed decisions that align with your business goals.

When it all comes together…

Each capability provides a unique benefit. And when they are combined, they become more powerful as a framework for deploying digital twins across industries, contributing to measurable improvements in process efficiency, asset reliability, and product quality.

Digital twins based on composable, executable, adaptive elements – the same building blocks highlighted earlier in this article – are already at work in a growing number of industries. In the solar industry, where assets are subject to underperformance or failure due to weather, temperature, electrical problems, or mechanical fatigue, some companies are using streaming data provided by IoT assets to create digital twins. Using these models, they’re able to monitor and address emerging issues before they generate excessive downtime for solar equipment. Plus, since these solutions require little to no code development, they’re able to get operational digital twins up and running fast – think weeks, not months.

Also, stay tuned for the next installment of this in-depth series on digital twins, which will focus on how generative AI can turbocharge your digital twins.


About Author

Paul Venditti

Advisory Industry Consultant

Seasoned advisory industry consultant with over three decades of experience in application of quality methodologies and advanced analytics. Career spans services fieldwork, cutting-edge R&D and strategic partner ecosystem development. Recent contributions focus on AI, ML and streaming analytics from edge to cloud across asset and process industries, including energy, manufacturing, transportation, government and health and life sciences.

