Digital Twins and Edge Analytics: are we ready for change?

Looking forward to what might happen during the year now means scanning an ever-growing list of predictions. For a few weeks, every platform and portal has been full of forecasts for 2017, including trends in the Internet of Things (IoT). I usually take these forecasts with a pinch of salt and try to identify just one or two new buzzwords. For me, this year, there are two stand-out candidates: Digital Twins and Edge Analytics.

Digital Twins

Can it be a coincidence that digital twins have arrived in exactly the same year as the sequel to Ridley Scott's Blade Runner? I do not see any immediate future for personal digital copies of us all in android form. But for companies and organizations, digital doppelgangers may no longer be a matter of science fiction. With comparatively low-cost data processing and increasingly accurate sensor technology, I see huge potential, especially in the field of simulation. Following the law of large numbers (the larger the system and the data volume, the more accurate the forecasts), the Digital Twin particularly suits companies that have already done their homework in scalability, data quality, analytics, and decision management. A simulated twin would have undoubted advantages. Who would not appreciate a first-class crash test dummy for a wide range of experiments?

Edge Analytics

Edge Analytics is, if you like, the next level of streaming analytics. The basic idea remains the same: bring the algorithms to the data and not vice versa, like the prophet and the mountain. The advantage is a minimal time-to-decision. Whether you want to recommend a product, stop a transaction, or change a setting, you can do it directly, with minimal latency.

Of course, this is on the assumption that the decision has been made correctly. The challenge is to establish how a device with comparatively low computing power and without access to a comprehensive data history can make the same decisions as an analytics cluster based on all company data.

This may never be possible. But perhaps it does not have to be. It is important to distinguish between the development and the execution of an analytical model. For development, we need all the resources we can get: algorithms, data, and computing power. But once a good model has been developed, executing it takes far less effort. The model can be applied elsewhere, if not everywhere, at least until it has exceeded its shelf life and has to be updated.
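To make this split concrete, here is a minimal sketch in Python (hypothetical data and names, with scikit-learn assumed only on the central cluster): the model is developed centrally with full data and computing power, then reduced to a handful of coefficients that an edge device can score with plain Python and no analytics library at all.

```python
# Minimal sketch: develop centrally, execute at the edge.
# scikit-learn is assumed on the central cluster only;
# the edge device needs nothing but the standard library.
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- Central development: all data, all computing power ---
X_train = np.random.rand(10_000, 3)            # stand-in for the full data history
y_train = (X_train[:, 0] + X_train[:, 1] > 1).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Export only what execution needs: a few numbers, not the cluster.
edge_model = {"weights": model.coef_[0].tolist(),
              "bias": float(model.intercept_[0])}

# --- Edge execution: low compute, no data history, no ML library ---
import math

def score(features, m):
    """Apply the trained model to a single sensor reading."""
    z = m["bias"] + sum(w * x for w, x in zip(m["weights"], features))
    return 1 / (1 + math.exp(-z))              # probability of the event

print(score([0.7, 0.6, 0.1], edge_model))
```

The point is not the particular algorithm: whatever is learned on the cluster, the only thing that travels to the edge is the small artefact needed for execution.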

Decisions-as-a-Service

Models must therefore be administered and distributed automatically. This is not a technical hurdle but an organizational one. Even without IoT, many organizations maintain 1,000 or more analytical models, and Edge Analytics would multiply that number hugely. A mature deployment concept and appropriate model management (repository, training, validation, and monitoring) are therefore essential, as the sketch below suggests.
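Stripped to the bone, such model management might look like the following (all names hypothetical, a sketch rather than any product's API): a repository entry carries just enough metadata to validate a model before deployment and to flag it for retraining once it exceeds its shelf life.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ModelRecord:
    """One entry in a minimal model repository (hypothetical structure)."""
    name: str
    version: int
    weights: list
    validation_auc: float                     # checked before deployment
    trained_at: datetime
    shelf_life: timedelta = timedelta(days=30)

    def is_stale(self, now=None):
        """Monitoring hook: flag the model for retraining after its shelf life."""
        now = now or datetime.utcnow()
        return now - self.trained_at > self.shelf_life

registry = {}

def register(record, min_auc=0.8):
    """Only validated models enter the repository and reach the edge."""
    if record.validation_auc < min_auc:
        raise ValueError(f"{record.name} v{record.version} failed validation")
    registry[(record.name, record.version)] = record
```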

“Decisions-as-a-Service” requires a certain degree of maturity in analytics. But once you have achieved this, the service makes things relatively simple and provides a suitable engine for the execution of analytical models. In IoT environments, you should select a true event-based engine rather than a micro-batch one. A clear overview of the differences can be found here.
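The distinction is easy to caricature in a few lines (a toy sketch, not any vendor's engine): a micro-batch engine holds each event until the next batch boundary, so a decision can lag by up to the batch interval, while a true event-based engine scores every event the moment it arrives.

```python
import time

def micro_batch_engine(events, interval=1.0, decide=print):
    """Micro-batch: decisions wait for the next batch boundary."""
    batch = []
    deadline = time.monotonic() + interval
    for event in events:
        batch.append(event)
        if time.monotonic() >= deadline:
            for item in batch:        # each decision may be up to `interval` late
                decide(item)
            batch.clear()
            deadline = time.monotonic() + interval
    for item in batch:                # flush whatever remains at the end
        decide(item)

def event_based_engine(events, decide=print):
    """Event-based: every event is scored the instant it arrives."""
    for event in events:
        decide(event)                 # latency is only the scoring time itself
```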

Or, of course, you could just put everything on the SAS IoT platform. That way, you will be equipped for everything and can start tomorrow.

You can read more about SAS Analytics for IoT in this White Paper about Opportunities and Applications across Industries.

About Author

Rainer Sternecker

Head of DACH Cloud Solutions, Customer Advisory

Rainer Sternecker leads a team of customer advisory cloud architects in DACH, driving cultural change for cloud adoption, developing cloud architectures, and coordinating the adoption process for SAS customers.
