
Avoiding pitfalls of SOA application performance management

Building composite applications inside an SOA creates a new set of performance issues, requiring a deep understanding of service interdependencies and of how they change application management and monitoring.

As more and more IT departments deploy multi-tier middleware platforms and frameworks for the delivery of business-critical application services, they are expecting to garner the well-publicized benefits of a service-oriented architecture (SOA) approach: agility, flexibility, productivity and extensibility.

FIGURE 1: Expected Benefits of a Composite Application / SOA Approach

As illustrated in Figure 1, implementing composite applications in an SOA is expected to better align IT with the business and quickly adjust to change as competitive pressures arise. However, the legacy application management products currently supporting these critical applications are increasingly hindering organizations from achieving these benefits.

Management challenges of SOA-based composite apps

The loosely coupled nature of SOA enables IT organizations to develop composite applications by combining new code with existing applications, improving the ability to respond to business change. In SOA, existing software modules or applications are encapsulated and exposed publicly as services. Composite application developers then use these services to create new applications (a minimal sketch of this encapsulation pattern appears after the list below). Unfortunately, this shift in the application development paradigm creates new management challenges. The three most significant challenges are:

1. How to deal with complexity and change to composite applications and SOA infrastructure?
2. How to accurately characterize application performance and provide 24x7 production monitoring?
3. How to quickly diagnose and resolve problems when they occur?
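To make the encapsulation idea concrete, here is a minimal sketch in Java, assuming a hypothetical legacy credit-check module and standard JAX-WS annotations, of how an existing class might be wrapped and exposed as a service that composite applications can call. The names and scoring logic are illustrative only, not taken from any particular product.

import javax.jws.WebMethod;
import javax.jws.WebService;

// Hypothetical legacy module that already exists in the codebase.
class LegacyCreditChecker {
    int score(String customerId) {
        return 680; // placeholder scoring logic
    }
}

// The existing module is encapsulated and exposed publicly as a service;
// composite applications invoke the service rather than the class directly.
@WebService(name = "CreditCheckService")
public class CreditCheckEndpoint {

    private final LegacyCreditChecker legacyChecker = new LegacyCreditChecker();

    @WebMethod
    public boolean isCreditworthy(String customerId, double requestedAmount) {
        int requiredScore = requestedAmount > 10_000 ? 700 : 620;
        return legacyChecker.score(customerId) >= requiredScore;
    }
}

Several composite applications, such as an order-entry application and a loan-origination application, can now reuse the same running CreditCheckEndpoint, which is exactly the kind of sharing that creates the management challenges discussed next.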

This article will highlight the emerging array of application management challenges involved with SOA. In addition, it will define the concept of "application services" and explain how managing application performance at this layer provides necessary context and visibility to get the most from your SOA investment.

#1. Inverse correlation between flexibility and manageability

There is an inherent inverse correlation between flexibility and manageability. For example, as an automotive company increases the number of options offered on its vehicles, the difficulty associated with the management of its manufacturing process and inventory also increases. The same logic applies to SOA-based composite applications. As IT organizations adopt technology platforms and development paradigms to gain agility, management of these applications becomes increasingly difficult.

Moreover, infrastructure complexity increases significantly as the software platforms running composite applications become more network-centric and modular in design. This increase in complexity combined with a lack of SOA management expertise, methodology and tools presents a huge barrier to entry for IT organizations wishing to develop and deploy these applications.

#2. Performance metric pollution and its impact on 24x7 production monitoring

For years, software developers followed fundamental object-oriented programming (OOP) concepts -- such as inheritance, polymorphism, encapsulation, overriding, etc. -- to achieve effective software component reuse. Similarly, one of SOA's objectives is component reuse. The difference between OOP and SOA is that OOP reuses components at the source-code level and SOA reuses components at runtime. Software component reuse yields benefits, including increased developer productivity and improved software maintainability. Unfortunately, OOP- and SOA-centric software component reuse also contributes to "dirty" performance metrics throughout the enterprise.
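As a rough illustration of that difference, and assuming a hypothetical tax-calculation routine and an invented service endpoint, the sketch below contrasts source-level reuse, where the logic is compiled into each application, with runtime reuse, where every application calls the same deployed service instance.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReuseStyles {

    // OOP-style reuse: the shared logic is linked into this application at build time.
    static class TaxCalculator {
        double taxFor(double amount) {
            return amount * 0.07; // illustrative flat rate
        }
    }

    public static void main(String[] args) throws Exception {
        // Source-level reuse: each application carries its own copy of the code.
        System.out.println("Locally computed tax: " + new TaxCalculator().taxFor(100.0));

        // Runtime reuse: every application calls the same running service instance
        // (the endpoint URL is invented), so its performance metrics mix all callers.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://tax-service.example.com/tax?amount=100"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Service-computed tax: " + response.body());
    }
}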

Accurately measuring performance becomes a problem when multiple applications share a common software component to perform some tasks. Existing application performance management (APM) solutions measure the performance of this shared component at the Java Virtual Machine (JVM) level. This approach pollutes performance metrics because measurements taken at the JVM do not break out an individual composite application's impact on the shared component.

FIGURE 2 – Polluted metrics characterize performance without application context

In the scenario illustrated above, performance metric pollution is unavoidable if measurements are taken at the JVM level. Conventional APM approaches produce metrics that measure invocations and the average response time of various methods in the shared component. The method invocation counts and average response times are polluted because they capture the combined behavior of several composite applications interacting with the shared component. In other words, these metrics represent the performance of the shared component in the context of multiple composite applications combined, not of any one of them.
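A minimal sketch of such a JVM-level counter makes the problem visible. The class, callers and timings below are invented; the point is that the average is accurate for the JVM as a whole yet misleading for every individual composite application.

import java.util.concurrent.atomic.AtomicLong;

// One shared counter per method of the shared component, with no record of
// which composite application made each call.
public class SharedComponentMetric {

    private final AtomicLong invocations = new AtomicLong();
    private final AtomicLong totalMillis = new AtomicLong();

    public void record(long elapsedMillis) {
        invocations.incrementAndGet();
        totalMillis.addAndGet(elapsedMillis);
    }

    public double averageResponseTimeMillis() {
        long count = invocations.get();
        return count == 0 ? 0.0 : (double) totalMillis.get() / count;
    }

    public static void main(String[] args) {
        SharedComponentMetric metric = new SharedComponentMetric();
        metric.record(20);    // call made on behalf of a lightweight lookup application
        metric.record(1_800); // call made on behalf of a heavy batch application
        // Prints 910.0: a number that describes neither application's experience.
        System.out.println(metric.averageResponseTimeMillis());
    }
}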

Additionally, performance metric pollution negatively impacts an IT organization's ability to perform 24x7 production monitoring of its application, as inaccurate measurements trigger alerts and corresponding actions inappropriately. Time, effort and resources are wasted in dealing with false alerts. Finding a way to accurately characterize performance should be the highest priority for IT organizations wishing to establish an effective management system for their SOA environment.

#3. Inaccurate measurements slow problem diagnosis and resolution

Performance metric pollution also presents a serious problem for IT operations management: how to correctly determine the responsible party when a performance problem is identified in the shared component. To help diagnose the performance problem, owners of applications that use the shared component are dragged into a joint exercise of bottleneck hunting. Since the shared component can behave differently as part of various composite applications, using polluted measurements that do not break out performance characteristics by application service slows the process of problem diagnosis and resolution.

Eliminating performance metric pollution and obtaining accurate performance measurements in the context of the specific calling composite application are key to fast problem diagnosis and resolution.
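One way to picture what "accurate in context" means is to key the same counters by the calling application service, as in the sketch below. How the caller is identified (a transaction tag, a request header, instrumentation metadata) is deliberately left open here; supplying that identification automatically is the job of the management tooling.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Per-application counters for the same shared component, so averages are no
// longer blended across callers. Application names are illustrative.
public class PerApplicationMetrics {

    private static class Stats {
        final AtomicLong invocations = new AtomicLong();
        final AtomicLong totalMillis = new AtomicLong();
    }

    private final Map<String, Stats> byApplication = new ConcurrentHashMap<>();

    public void record(String applicationService, long elapsedMillis) {
        Stats stats = byApplication.computeIfAbsent(applicationService, k -> new Stats());
        stats.invocations.incrementAndGet();
        stats.totalMillis.addAndGet(elapsedMillis);
    }

    public double averageMillis(String applicationService) {
        Stats stats = byApplication.get(applicationService);
        if (stats == null || stats.invocations.get() == 0) {
            return 0.0;
        }
        return (double) stats.totalMillis.get() / stats.invocations.get();
    }

    public static void main(String[] args) {
        PerApplicationMetrics metrics = new PerApplicationMetrics();
        metrics.record("OrderStatus", 20);
        metrics.record("BatchBilling", 1_800);
        System.out.println(metrics.averageMillis("OrderStatus"));  // 20.0
        System.out.println(metrics.averageMillis("BatchBilling")); // 1800.0
    }
}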

A solution: application service management (ASM)

Multiple layers of abstraction, the need for specialized expertise and the constantly changing nature of SOA environments create an 'IT visibility gap' that cannot be addressed with traditional systems management and APM tools. These tools lack the ability to provide the business 'context' needed to quickly triage and resolve the issues that matter most to the business. Context is achieved by correlating an application service to the underlying shared code components that enable that service. This process is known as application service management (ASM). Through ASM, it is possible to intelligently identify monitoring points, automatically deploy agents and dynamically display role-based dashboards with relevant metrics in the context of the application services being delivered.

It is important to remember that the very purpose of an enterprise application is to provide business services. If a problem with the performance or availability of a particular application service supported by a composite application cannot be quickly triaged, the value of the application is soon overcome by the cost of outages, poor performance and the general maintenance chaos associated with supporting it. Additionally, IT organizations need a dependable means of performing impact analysis. For example, before a planned change is made, IT operations must be able to identify all of the application processes, application components, systems and other elements that will be impacted by the change. This adds significant value in a way that is not possible without the 'context' needed to bridge that IT visibility gap.
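Conceptually, that impact analysis is a walk over a dependency model: start from the element about to change and follow "depends on" edges upward until every affected application service has been found. The sketch below, with invented service and component names, shows the idea.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Walks "X depends on Y" relationships in reverse to list everything that a
// planned change to one element could impact.
public class ImpactAnalysis {

    // element -> the elements (services or components) that depend on it
    private final Map<String, List<String>> dependents = new HashMap<>();

    public void addDependency(String dependent, String dependsOn) {
        dependents.computeIfAbsent(dependsOn, k -> new ArrayList<>()).add(dependent);
    }

    public Set<String> impactedBy(String changedElement) {
        Set<String> impacted = new HashSet<>();
        Deque<String> toVisit = new ArrayDeque<>();
        toVisit.push(changedElement);
        while (!toVisit.isEmpty()) {
            for (String dependent : dependents.getOrDefault(toVisit.pop(), List.of())) {
                if (impacted.add(dependent)) {
                    toVisit.push(dependent);
                }
            }
        }
        return impacted;
    }

    public static void main(String[] args) {
        ImpactAnalysis model = new ImpactAnalysis();
        model.addDependency("OrderStatusService", "InventoryComponent");
        model.addDependency("CheckoutService", "InventoryComponent");
        model.addDependency("InventoryComponent", "InventoryDatabase");
        // A change to the database impacts the component and both services above it.
        System.out.println(model.impactedBy("InventoryDatabase"));
    }
}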

How ASM works

The first step in the ASM process is to dynamically build a topological model of the application architecture (see Figure 3).

FIGURE 3 – Dynamic Model Provides the 'Context'

By creating a model of a composite application and understanding how business transactions flow through the different layers, companies can automate the key steps of composite application management (setup, analysis, change management and service level reporting).
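A simplified way to picture that model is as a mapping from each application service to the components its transactions have been observed to flow through. The sketch below populates the mapping with explicit calls; an ASM product would discover the same relationships through instrumentation. All names are illustrative.

import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// A dynamically built topology: each observed hop of a business transaction
// adds a component to the calling application service's path.
public class ApplicationTopology {

    private final Map<String, Set<String>> componentsByService = new LinkedHashMap<>();

    public void observe(String applicationService, String component) {
        componentsByService
                .computeIfAbsent(applicationService, k -> new LinkedHashSet<>())
                .add(component);
    }

    public Set<String> componentsFor(String applicationService) {
        return componentsByService.getOrDefault(applicationService, Set.of());
    }

    public static void main(String[] args) {
        ApplicationTopology topology = new ApplicationTopology();
        topology.observe("OrderStatus", "WebTier");
        topology.observe("OrderStatus", "OrderService");
        topology.observe("OrderStatus", "InventoryComponent");
        topology.observe("BatchBilling", "BillingService");
        topology.observe("BatchBilling", "InventoryComponent");
        // The shared InventoryComponent now appears in the context of both services.
        System.out.println(topology.componentsFor("OrderStatus"));
    }
}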

Leveraging this application model, pre-production and production operations users can immediately identify the root cause of a performance problem on a particular application service and then determine which other application services might be affected. Moreover, ASM enables IT operations teams to set performance metrics at the application service level, at the code level or anywhere in between. The status of these metrics is automatically correlated and dynamically adjusts to relationship changes that occur in the application environment.

Conclusion

The combination of multiple layers of abstraction, the need for specialized expertise and constant change creates an 'IT visibility gap' that can only be addressed with application service management. ASM bridges the IT visibility gap inherent in today's complex composite applications. Without this context, it is effectively impossible to diagnose application service problems in enterprise applications that leverage multiple layers of middleware. An unmanageable application, no matter its sophistication or richness of services, provides little value to an organization. Further, an IT operations team must understand an application's logic to effectively conduct the impact analysis required to stay ahead of potential application problems.

In summary, by managing complex composite applications at the application services-level, IT organizations can maximize application performance levels, reduce the mean time to resolution and maximize productivity. An ASM approach, coupled with fully automated APM, will enable IT organizations to achieve the expected ROI from their composite application investments.

About Rob Greer

Rob Greer serves as VP of Product Management and Marketing at ClearApp Inc. Rob runs all aspects of product marketing and product management. Prior to joining ClearApp, Rob served as director of product marketing at Symantec where he managed all of the go-to-market, sales enablement, and awareness activities surrounding the application performance management (APM) and data center automation (DCA) product families. Before Symantec, Rob was vice president for Helius, where he managed international sales and pre-sales engineering. Additionally, Rob was vice president of worldwide systems engineering for SonicWALL. Rob joined SonicWALL through the company's acquisition of Ignyte Technology, where he was founder and chief executive officer. At Ignyte, Rob grew revenues from $1 million in year one to $12 million in year three and subsequently sold Ignyte to SonicWALL. Rob holds a bachelor's degree in business administration with a concentration in management information systems from San Jose State University.

