Embedding event stream analytics


In my last two posts, I introduced some opportunities that arise from integrating event stream processing (ESP) within the nodes of a distributed network. We considered one type of deployment, exemplified by the emerging Internet of Things (IoT) model, in which numerous end nodes monitor a set of sensors, perform some internal computations, and then generate data that gets pushed back into the network. Most scenarios assume these data streams are accumulated at a central server, which analyzes the data and subjects it to existing predictive and prescriptive analytical models. The models then generate notifications or trigger the desired automated actions.

The conclusion we came to, though, is that forcing all the decisions to be made at the central server imposes a heavier burden than is necessary, because this approach requires a full round trip for communication (from sensors to end node to network to central server, then back through the network to the end node and its controllers, for example). The question, then, is about the potential for embedding event stream analysis within the different types of nodes in the network: the end nodes, along with any intermediate nodes that manage a cohort of other nodes.

One of the benefits of the central server model is that those machines are rarely constrained by storage or memory. In a highly distributed IoT network, though, many of the end nodes are less powerful in every aspect of computing: they have less memory, little storage and slower processors. In other words, it is far more feasible to run a full-scale event stream processing engine at the central server than within an end node.

On the one hand, we are suggesting that analytics and event stream processing be provided along the edges of the network. On the other hand, we recognize the challenge of embedding this capability at those edge and end nodes. How can we resolve this challenge?

The first approach is to segregate the capabilities appropriately. We have the analytics capability, in which data streams are subjected to different types of discovery analytics to identify common patterns. And we have the event stream analysis capability, which applies the business rules discovered through the analytics process.

The analytics capability has high resource requirements: it needs access to large volumes of captured data and fast processors to perform heavy computational analysis. The event stream processing itself, on the other hand (under the right circumstances), is reminiscent of a much simpler finite state automaton (FSA) model: a finite number of states, an input stream, a specification of state transitions based on the current state and the input(s), and a set of events generated whenever there is a transition to a different state.
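To make that concrete, here is a minimal sketch in C of what such an automaton might look like on an end node. The states, threshold and sensor stream are all hypothetical illustrations, not drawn from any particular ESP product: the node tracks a temperature reading and emits an event only when the input drives a transition to a different state.

```c
#include <stdio.h>

/* Illustrative states for a hypothetical temperature-monitoring end node. */
typedef enum { STATE_NORMAL, STATE_HIGH } state_t;

/* Advance the automaton one step; emit an event only on a state change. */
static state_t step(state_t current, double reading, double threshold)
{
    state_t next = (reading > threshold) ? STATE_HIGH : STATE_NORMAL;
    if (next != current)
        printf("event: transition %d -> %d at reading %.1f\n",
               current, next, reading);
    return next;
}

int main(void)
{
    /* A made-up input stream of sensor readings. */
    double stream[] = { 20.5, 21.0, 30.2, 31.0, 19.8 };
    state_t state = STATE_NORMAL;

    for (size_t i = 0; i < sizeof stream / sizeof stream[0]; i++)
        state = step(state, stream[i], 25.0);
    return 0;
}
```

Note that the engine itself is just the loop and one comparison per reading; all the application-specific knowledge lives in the states and the transition rule.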

A good event stream processing tool will embody this simple model. A straightforward execution engine would manage the FSA paradigm, and the characterization of the business rules could be boiled down into compressed representations that fit into much smaller memory systems. The resulting execution model could be engineered to fit into a small memory image, and it could also be optimized to run efficiently on less-powerful CPUs.
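As a rough illustration of what such a compressed representation might look like (the encoding below is purely hypothetical), a rule set compiled into a transition table for three states and four input classes occupies just 12 bytes, and the engine advances through it with a single array lookup per input:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical compiled rule set: 3 states x 4 input classes.
   Each cell holds the next state, so the entire table is 12 bytes. */
enum { N_STATES = 3, N_INPUTS = 4 };
enum { IDLE, WARN, ALARM };                 /* states */
enum { IN_LOW, IN_OK, IN_HIGH, IN_FAULT }; /* input classes */

static const uint8_t next_state[N_STATES][N_INPUTS] = {
    /*            LOW    OK     HIGH   FAULT */
    /* IDLE  */ { IDLE,  IDLE,  WARN,  ALARM },
    /* WARN  */ { IDLE,  IDLE,  WARN,  ALARM },
    /* ALARM */ { ALARM, ALARM, ALARM, ALARM },  /* absorbing until reset */
};

int main(void)
{
    uint8_t state = IDLE;
    uint8_t inputs[] = { IN_OK, IN_HIGH, IN_FAULT };  /* sample stream */

    for (size_t i = 0; i < sizeof inputs; i++) {
        uint8_t next = next_state[state][inputs[i]];
        if (next != state)             /* a transition generates an event */
            printf("event: state %d -> %d\n", state, next);
        state = next;
    }
    return 0;
}
```

Because the table is constant data, on a microcontroller it could live in flash rather than RAM, which is exactly the kind of small-footprint execution model described above.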

In the absence of an existing standard for IoT application development frameworks, it is useful to consider scenarios that are adaptable to this model. Hopefully, event stream analysis will soon become a core component of standard IoT application development.
