As organizations move from monolithic apps to numerous microservices, the number of dependencies grows quickly. Enterprises need to develop capabilities that let developers focus on adding value rather than triaging complex microservice communication problems.
"The biggest single challenge arises from the fact that, with microservices, the elements of business logic are connected by some sort of communication mechanism … rather than direct links within program code," said Randy Heffner, principal analyst at Forrester. This means there are more opportunities for errors in network and container configurations, malformed requests or responses, network blips, security misconfigurations and more. In other words, there are simply many more places where microservice communication can go wrong.
It's also much more challenging to debug application logic that flows across multiple microservices. In a monolithic app, a developer can embed multiple debug and trace statements in code that all automatically go into one log. With microservices, developers need to collect logs and other debugging output from each service, then correlate them into a single stream in order to debug a set of interacting microservices. This is even harder when multiple streams of testing are active in an integration testing environment.
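The correlation step described above can be sketched in a few lines. This is a minimal, hypothetical example: the log format, service names and `req-42` correlation ID are invented for illustration, and real systems would pull these entries from a log aggregator rather than in-memory lists.

```python
# Hypothetical per-service log streams; each entry carries a correlation ID
# so that one request's path can be stitched together across services.
ORDERS_LOG = [
    "2024-05-01T10:00:01Z req-42 orders received order",
    "2024-05-01T10:00:04Z req-42 orders order confirmed",
]
PAYMENTS_LOG = [
    "2024-05-01T10:00:02Z req-42 payments charging card",
    "2024-05-01T10:00:03Z req-42 payments charge approved",
]

def correlate(streams, correlation_id):
    """Merge one request's entries from several per-service logs
    into a single, time-ordered stream."""
    merged = [line for stream in streams for line in stream
              if f" {correlation_id} " in line]
    # ISO-8601 timestamps sort correctly as plain strings.
    return sorted(merged, key=lambda line: line.split()[0])

for line in correlate([ORDERS_LOG, PAYMENTS_LOG], "req-42"):
    print(line)
```

The key design point is the correlation ID: without a shared identifier injected at the edge and propagated on every downstream call, there is nothing to join the per-service streams on.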
Keep up with data integration complexity
Developers have traditionally found it difficult to monitor data integration across service-oriented architectures and, more recently, cloud services, said Torsten Volk, senior analyst at Enterprise Management Associates. This is a major reason developers still struggle to debug microservices and deal with microservice communication.
A number of approaches have emerged to help aggregate data from the various logs, performance metrics and other types of monitoring tools. Combining better monitoring data integration with real-time analytics can help identify the root cause of problems faster. It also helps developers surface the relevant events when containers are spun up and down in under a second.
"Today's monitoring trend that causes the most excitement is to leverage machine learning, deep learning and reinforcement learning to autonomously find relevant events, reduce unnecessary alerts and optimize operations cost," Volk said. However, this requires enterprise architects to address monitoring and debugging data integration as part of their application stack.
It is also a good practice to utilize standardized and fully automated unit, regression and integration testing as part of deploying microservices. This is critical because microservices often are developed by separate teams.
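A standardized test suite shipped alongside each microservice can be sketched as below. This is an illustrative assumption, not a prescribed framework: `get_order` is a stand-in for the service's real HTTP handler, and the assertions encode the response contract that other teams' services depend on.

```python
# get_order is a hypothetical stand-in for the service's request handler;
# in a real pipeline these checks would run against the deployed service.
def get_order(order_id):
    """Toy handler: validates input and returns a contract-shaped response."""
    if not isinstance(order_id, int) or order_id <= 0:
        return {"status": 400, "error": "invalid order id"}
    return {"status": 200, "order_id": order_id, "state": "confirmed"}

def test_valid_order_keeps_contract():
    resp = get_order(7)
    assert resp["status"] == 200
    # Consumer teams depend on these fields; dropping one is a breaking change.
    assert "order_id" in resp and "state" in resp

def test_invalid_order_rejected():
    assert get_order(-1)["status"] == 400

# Run directly so the checks also work without a test runner.
test_valid_order_keeps_contract()
test_invalid_order_rejected()
```

Because the tests travel with the service, any team can rerun them after a change, which is what makes them useful across team boundaries.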
Strategically use different tools
Developers can combine different mechanisms to triage problems that span microservices. These combinations could include application performance management tools like Dynatrace, log analytics tools like Splunk and distributed tracing tools like LightStep.
"Our approach to continuous deployment emphasizes outliers and other early warning signs of possible systemic problems with each release," said Ben Sigelman, CEO of LightStep Inc. Sigelman's team uses metrics to identify key symptoms, logs to analyze exceptional situations and tracing to do the heavy lifting of root-cause analysis.
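The tracing part of that division of labor rests on context propagation: each service reuses the trace ID it received (or starts a new one) and records a span, so the request's path can be reconstructed end to end. The sketch below is a toy tracer under assumed names (the `x-trace-id` header, the `checkout` and `payments` services); production systems would use a real tracing library rather than this hand-rolled version.

```python
import time
import uuid

SPANS = []  # in a real system, spans are exported to a tracing backend

def start_span(service, operation, headers):
    """Reuse the incoming trace ID if present, otherwise start a new trace,
    and record a span for this hop."""
    trace_id = headers.get("x-trace-id") or uuid.uuid4().hex
    span = {"trace_id": trace_id, "service": service,
            "operation": operation, "start": time.time()}
    SPANS.append(span)
    # Downstream calls must carry the same trace ID.
    return span, {"x-trace-id": trace_id}

def payment(headers):
    start_span("payments", "POST /charge", headers)

def checkout(headers):
    span, out_headers = start_span("checkout", "POST /checkout", headers)
    payment(out_headers)  # downstream call joins the same trace

checkout({})  # simulate an incoming request with no prior trace context
```

After the simulated request, both recorded spans share one trace ID, which is exactly the property that lets a tracing tool assemble a cross-service timeline for root-cause analysis.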
Heffner said there are basic practices that organizations should follow when they set up a microservice integration testing environment. Teams should create comprehensive and repeatable automated tests for each microservice, he said, and manage these tests as though they are a part of the microservice itself. This includes whatever test harnesses and data are necessary to run the test. He also said developers should attempt to instrument microservice code within the application code itself and design code for failure. They should also keep an eye on minimizing dependencies when moving to numerous microservices.