
Developing and migrating SOA applications in cloud infrastructures

In many cases, changes to optimize SOA applications for virtual and cloud resources are limited to the publication and orchestration processes.

There are some tricky bits to migrating service-oriented architecture (SOA) applications to virtual resource pools, including the cloud. Probably the largest problem in dealing with elastic resource pools in SOA applications is the mistaken view that the SOA publication/registration process handles resource issues fully, when in fact it can complicate things considerably.

In many cases, the changes needed to optimize SOA applications for virtual resources are limited to the publication/registration and orchestration processes, and application architects can plan for and drive them to prepare for the future. In special situations, such as cloud bursting or where component changes can alter state-keeping, special procedures may be in order. It's up to application architects to decide when virtual-resource accommodations are needed and how to optimize them.

At a high level, SOA applications bind the user of an application/service with the service and its components through a registry process. The goal is to loosely couple application components, even to the point of permitting applications to "shop" for components dynamically using Web Services Description Language (WSDL). Components "publish" to the registry process when they are hosted, and this registration has to be updated to reflect any changes in where a component is hosted -- if it is moved via a virtual machine (VM) mobility function, for example.

As resource allocation becomes more dynamic, the performance of the registration function becomes important -- not so much because it might impact application performance, but because rapid changes to a slow-acting registration process run the risk of a service request being processed mid-re-registration. This can destabilize an application by disconnecting one of its pieces from the workflow. The risk can be reduced by making sure the registration process handles requests quickly, and by registering a component as available in its new location before decommissioning (and deregistering) it in the old one.
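The register-before-deregister ordering can be sketched with a toy in-memory registry. This is purely illustrative -- real SOA registries (UDDI-style services, for example) expose publish and inquiry operations through their own APIs, and the names and endpoints below are hypothetical:

```python
import threading

class ServiceRegistry:
    """Toy in-memory registry; stands in for a real publish/inquiry service."""

    def __init__(self):
        self._lock = threading.Lock()
        self._endpoints = {}  # service name -> list of endpoint URLs

    def register(self, name, endpoint):
        with self._lock:
            self._endpoints.setdefault(name, []).append(endpoint)

    def deregister(self, name, endpoint):
        with self._lock:
            self._endpoints.get(name, []).remove(endpoint)

    def lookup(self, name):
        with self._lock:
            return list(self._endpoints.get(name, []))

def relocate(registry, name, old_endpoint, new_endpoint):
    """Announce the new location before retiring the old one, so a lookup
    arriving mid-move always finds at least one live endpoint."""
    registry.register(name, new_endpoint)    # 1. new host becomes visible
    # (production code would also drain in-flight requests here)
    registry.deregister(name, old_endpoint)  # 2. retire the old host

registry = ServiceRegistry()
registry.register("billing", "http://vm-a.example.com/billing")
relocate(registry, "billing",
         "http://vm-a.example.com/billing",
         "http://vm-b.example.com/billing")
print(registry.lookup("billing"))  # only the new endpoint remains
```

Done in the reverse order -- deregister first, then register -- a lookup landing between the two steps would find no endpoint at all, which is exactly the disconnection risk described above.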

Performance workflows and dynamic resources

Not all dynamic resources are created equal: in VM movement applications, it's typical to move a machine only within the data center, but in the cloud, a component's host might move thousands of miles. The problem with this type of elasticity lies in its impact on optimizing application workflows for performance.


In many applications involving service-bus orchestration and integration, message flow delays can accumulate to impact application performance if components of the application are hosted at a significant "network distance" from each other, meaning that a significant transit delay is likely because of geographic distance and network hops. This is almost never taken into account in WSDL, so applications trying to subscribe to components might select one too distant to meet quality-of-experience goals.

One solution to this is to adapt the WSDL to include at least geographic "zone" references for hosting so that applications running in a given geography will select component copies from that same geography. Another is to adapt the subscribe-side handling of the registry process to return components that match the geography of the requester. 
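The second approach -- filtering on the subscribe side -- can be sketched as a zone-aware lookup. The "zone" attribute and the entries below are hypothetical; a real deployment would carry this information in extended WSDL metadata or registry taxonomies:

```python
# Toy registry entries tagged with an assumed geographic "zone" attribute.
REGISTRY = [
    {"service": "billing", "endpoint": "http://us-east.example.com/billing",
     "zone": "us-east"},
    {"service": "billing", "endpoint": "http://eu-west.example.com/billing",
     "zone": "eu-west"},
]

def lookup(service, requester_zone):
    """Prefer component copies in the requester's own zone; fall back to
    any registered copy only when no local one exists."""
    matches = [e for e in REGISTRY if e["service"] == service]
    local = [e for e in matches if e["zone"] == requester_zone]
    return local or matches

# A requester in eu-west gets the nearby copy, not a transatlantic one.
print(lookup("billing", "eu-west")[0]["endpoint"])
```

The same filter could equally live in the WSDL-selection logic on the application side; which end owns it depends on how much of the registry process the architect controls, as discussed below.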

Which of these approaches is best will depend on the extent to which application architects can control the registry process; an open-source tool is easily modified, but a proprietary tool may not allow changes, leaving no course but the "add-location-to-WSDL" approach. If all else fails, application users can be directed to location-specific registries to ensure components too far out of area are never selected. All of this, of course, depends on having at least some knowledge of where a component is being placed when it's hosted. Check cloud provider management capabilities to see what can be done.

SOA orchestration and cloud bursting

Orchestration itself can be a complicating factor in SOA/virtual-resource applications. Because message flows coalesce on the service bus, the location of the point of orchestration can be a major factor in performance if the middleware imposes a truly centralized bus architecture. A more distributed form of message management that allows for direct passing of messages to successor components isn't practical for many SOA applications because they impose message-steering logic via business process execution language (BPEL) or other workflow languages. For these cases, picking where an application is orchestrated may be critical in adapting SOA to dynamic resources.

Load-balancing within the cloud or cloud bursting between data center resources and overflow cloud resources is an increasingly valuable application for businesses, but this will often mean spawning multiple copies of a component for work-sharing. When designing new SOA components, it's smart to consider this future need because many components will maintain state, or the context of work within multi-step transactions, and thus cannot be replaced during a transaction or state will be lost. 

The web or RESTful school of application design has long advocated having client systems maintain state, but in SOA, and even some middleware, components will actually do so. Middleware control of state is especially common when applications are created via BPEL at the middleware/orchestration level, and though the practice is generally denigrated by SOA architects, it may be more flexible in cloud bursting or load-balancing applications than if the components maintain state. When in doubt, client-side state management is best!
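The client-side pattern the RESTful school advocates can be illustrated with a stateless component that derives its result entirely from the context the caller passes in. This is a minimal sketch; the function and field names are invented for illustration:

```python
# Stateless multi-step transaction component: all per-transaction context
# arrives with the request and is returned to the caller, so any copy of
# this component -- in the data center or a burst cloud -- can handle any step.

def apply_step(state, step, amount):
    """Apply one transaction step and return the updated context."""
    new_state = dict(state)  # never mutate the caller's copy
    new_state["steps_done"] = state.get("steps_done", 0) + 1
    new_state["total"] = state.get("total", 0) + amount
    new_state["last_step"] = step
    return new_state

# The client threads the state through each call; between calls, the
# component instance could be replaced or relocated without losing anything.
state = {}
state = apply_step(state, "reserve", 100)
state = apply_step(state, "confirm", 25)
print(state["total"], state["steps_done"])  # 125 2
```

Had `apply_step` kept `total` in an instance variable instead, swapping in a fresh copy of the component mid-transaction would silently lose the context -- which is exactly the load-balancing and cloud-bursting hazard described above.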

State can be an issue even without load-balancing, in both cloud and virtualization applications, if a failover facility is added to take advantage of the dynamic nature of resources. Many architects forget that if state is maintained by the components themselves, it will be lost in a failure, so there is little to gain from pursuing a fast, dynamic failover. Check how state is maintained before you commit to a SOA application that includes dynamic failover, or you may waste valuable time and application lifecycle management (ALM) cycles preparing for something that won't work properly.

Registry control, orchestration and state are the issue trio of SOA dynamic resource optimization. Application architects who start their consideration with these three points are likely to get to all of the important issues quickly and manage them correctly. That's the best way of ensuring that SOA applications respond to cloud and virtualization trends and take their rightful place as cornerstones in the future of application design.

About the author
Tom Nolle is president of CIMI Corp., a strategic consulting firm specializing in telecommunications and data communications since 1982.


