Enable an antilatency cloud event processing architecture

Event-based applications cannot tolerate unreliable networks or latency. Consider an architecture that puts some event processing local to the source to alleviate cloud traffic.

Serverless can introduce processing latencies measured in hundreds of milliseconds, far too long for many applications. One answer to this latency problem is to put processing power local to where event data is generated, rather than send that data to a cloud host for processing. Though this approach is unconventional, it could reshape event processing, serverless and cloud computing in general.

The problems with cloud front ends

Latency is a primary problem with a cloud-hosted event processing architecture. While the cloud is accessible from virtually everywhere, a provider's servers reside in strategically located data centers. Users or IoT devices thousands of miles away from one of those data centers will experience transit delays in the tens of milliseconds when they send data for processing. For event-driven applications, this latency can make the control loop between event and response too long.
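The transit delay has a hard physical floor. A rough sketch of that floor, assuming light in fiber travels at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum) and ignoring queuing, routing and processing delays:

```python
def round_trip_ms(distance_km: float) -> float:
    """Minimum round-trip propagation delay over fiber, in milliseconds.

    Assumes ~200,000 km/s signal speed in glass; real paths are longer
    and add queuing and processing delay on top of this floor.
    """
    one_way_seconds = distance_km / 200_000
    return 2 * one_way_seconds * 1000

round_trip_ms(3000)  # -> 30.0 ms, before any other source of delay
```

A device 3,000 km from the nearest data center thus pays at least 30 ms per control-loop round trip no matter how fast the cloud application itself runs.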

Another problem is availability. To execute on event-based data, applications rely on a network connection to the cloud provider. Without it, the application can't receive information or deliver responses. For many applications, this complete loss of functionality is unacceptable.

Finally, the execution model of a cloud-hosted event processing application can itself create problems. Serverless computing is popular for event handling, but the pairing can add even more latency: serverless applications load on demand, and early users report that the delay to load and run them can reach hundreds of milliseconds.

Local event processing resolves complications

The answer to these problems is an event processing architecture that starts with local capabilities. This setup processes critical events on local IT resources and sends less time-sensitive events to cloud hosts. An application built with this architecture can also perform some event functions locally, and then provide the cloud-based application components with partially analyzed information. This split approach can reduce the cost of cloud event handling, lessen dependence on the network connection and address latency problems.
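The split described above amounts to a local dispatcher that handles time-critical events on site and queues the rest for the cloud. A minimal sketch, in which the priority field, the threshold and the queue are all hypothetical illustrations rather than part of any specific product:

```python
from dataclasses import dataclass, field

@dataclass
class SplitRouter:
    """Handle critical events locally; queue the rest for a cloud host."""
    cloud_queue: list = field(default_factory=list)
    local_log: list = field(default_factory=list)

    def handle(self, event: dict) -> str:
        # Hypothetical criterion: latency-sensitive events carry priority >= 8.
        if event.get("priority", 0) >= 8:
            # Local control loop: respond immediately, no network round trip.
            self.local_log.append(event)
            return "handled-locally"
        # Less time-sensitive events are batched and sent to the cloud later.
        self.cloud_queue.append(event)
        return "forwarded-to-cloud"

router = SplitRouter()
router.handle({"sensor": "door-1", "priority": 9})  # critical: stays local
router.handle({"sensor": "temp-4", "priority": 2})  # routine: goes to cloud
```

The design point is that only the second call generates cloud traffic and cloud charges; the first completes entirely on premises.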

To add local processing where there is no current event controller on site, either use on-premises servers and event software, or set up open source event processing platforms and tools with a cloud provider.

On-premises servers and event software

The Spark Streaming project is an example of a platform that offers features comparable to cloud-based serverless event handling, but provides those features on premises, next to the data source. Streaming event handlers process events in real time. Kafka Streams, a Java library, is a streaming event processor geared toward more traditional application structures. These tools focus on analyzing event streams, rather than on serverless execution.
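The core idea these streaming tools share is computing over events as they arrive, typically within a sliding window, rather than storing them for later batch analysis. A toy illustration of that pattern in plain Python (not using the Spark Streaming or Kafka Streams APIs):

```python
from collections import deque
from statistics import mean

def stream_average(events, window=3):
    """Yield a running average over the last `window` events as each arrives.

    Illustrates sliding-window stream processing; real streaming engines
    add partitioning, fault tolerance and time-based windows on top.
    """
    window_buf = deque(maxlen=window)
    for value in events:
        window_buf.append(value)
        yield mean(window_buf)

readings = [10.0, 12.0, 11.0, 30.0]
averages = list(stream_average(readings))  # one result per incoming event
```

Because each result is emitted as its event arrives, a local handler can react to the anomalous fourth reading immediately instead of after a batch job completes.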

Another viable on-premises approach for multi-event applications is a complex event processing (CEP) tool. With CEP, developers can create applications that correlate multiple events. For example, this CEP capability could apply to complicated smart building and smart city applications that pull data from multiple sensors and sensor types. Apache Flink provides CEP capabilities in open source form for applications that are too complex for basic event stream processing to support.
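What distinguishes CEP from simple stream handling is correlating events across sources and time. A minimal sketch of that idea, assuming a hypothetical smart-building feed in which a door-open event matters only if a motion sensor fired shortly before it (this is illustrative logic, not the Apache Flink CEP API):

```python
def correlate(events, window_seconds=5):
    """Return timestamps of door-open events within `window_seconds` of motion.

    `events` is a list of (timestamp_seconds, event_type) tuples from
    multiple sensor types; the correlation rule here is hypothetical.
    """
    motion_times = [t for t, kind in events if kind == "motion"]
    return [
        t
        for t, kind in events
        if kind == "door-open"
        and any(abs(t - m) <= window_seconds for m in motion_times)
    ]

# Two sensor types in one feed; only the first door event correlates.
feed = [(1, "motion"), (3, "door-open"), (20, "door-open")]
alerts = correlate(feed)  # the door-open at t=3 follows motion at t=1
```

Neither event is meaningful alone; the combination is what triggers an alert, which is exactly the multi-event correlation a CEP engine generalizes.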

To use event stream processing or CEP, select a rules or policies engine to author the handling of events or event combinations. Drools is an open source business rules tool popular for event front-end applications. A rules engine is also the critical element when transitioning from an on-premises front-end event handler to cloud-based event processing and IoT systems.
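At its simplest, a rules engine is a table of condition-action pairs evaluated against each event. A toy sketch of that structure (the temperature thresholds and action names are hypothetical; production tools such as Drools add rule languages, priorities and conflict resolution):

```python
# Each rule pairs a predicate over the event with the action it authorizes.
rules = [
    (lambda e: e["temp"] > 80, "trigger-cooling"),
    (lambda e: e["temp"] < 40, "trigger-heating"),
]

def evaluate(event: dict) -> list:
    """Return the actions whose conditions this event satisfies."""
    return [action for condition, action in rules if condition(event)]

evaluate({"temp": 85})  # matches the first rule only
```

Keeping the rules as data, separate from the dispatch code, is what lets the same engine author event handling on premises and then carry that logic into cloud-based event processing.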

Cloud provider and open source local event processing options

The second option for on-premises event handling relies on the cloud provider to manage the handoff between cloud and on-premises data handling. These edge-plus-cloud tools, such as AWS IoT Greengrass, run cloud IoT applications using local event processing elements. Greengrass makes on-premises event handling an edge extension of AWS Lambda and other event services. Similar choices include Microsoft's Azure IoT Edge and Google's Cloud IoT Edge.
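Edge runtimes of this kind invoke Lambda-style handler functions on local hardware. A sketch of what such a handler might look like, assuming a hypothetical payload with a `reading` field and an illustrative threshold; this is the general Lambda handler shape, not code from the Greengrass SDK:

```python
def function_handler(event, context=None):
    """Lambda-style handler an edge runtime could invoke on local hardware.

    The `reading` field and the 100 threshold are hypothetical,
    chosen only to illustrate edge-versus-cloud handling.
    """
    reading = event.get("reading", 0)
    if reading > 100:
        # Time-critical: act at the edge, with no cloud round trip.
        return {"action": "shutdown", "handled": "edge"}
    # Routine telemetry can be relayed to cloud-side services by the runtime.
    return {"action": "none", "handled": "cloud-relay"}

function_handler({"reading": 120})  # urgent reading, decided locally
```

The same function body can run in the cloud or at the edge, which is the symmetry these provider-integrated tools sell.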

There are advantages to a cloud provider-integrated event processing front end. One is that the application architecture is the same on site and in the cloud. Another is that the provider's event processing tools and features integrate automatically with the local event handling setup. The disadvantage is limited portability across clouds. Every provider's event and IoT tools differ somewhat, so to change providers or use multiple clouds, you must assign each site to a specific cloud provider and adopt a different application architecture for each one.

Event front-end tools, both open source and proprietary, promise easier multi-cloud support. However, you may still need to customize the events generated on premises to activate cloud-hosted processes. In general, an event processing architecture with a DIY front-end handler gives users more power and control, and lets them shift more work on premises, reducing event, IoT or serverless compute charges from the cloud provider.
