Does the creation of enterprise mashup applications raise a concern for SOA governance? Should you look out for people who are creating mashups (either inside or outside of the organization) with your services and/or APIs?
Runtime SOA governance must be concerned not only with the creation of an enterprise mashup application, but also with the creation of any new service consumer. As an analogy, consider a DoS (denial of service) attack. Such an attack is nothing more than one or more unexpected "new" consumers flooding a service with requests. While an enterprise mashup application or another consumer probably doesn't have the malicious intent of a DoS attack, the analogy certainly highlights what can happen if you don't pay attention to new consumers that come on board.
This problem isn't a new one. Back in the early days of Web applications, we engineered creative ways to make mainframe functionality available to a Web application, such as screen scraping. These approaches can be dangerous, however. In the case of screen scraping, the risk is that the interface point was designed for human interaction, not system interaction. Your well-meaning Web application could wind up making hundreds of calls through this "screen," far more than was typical when actual users interacted directly with the mainframe. On top of that, the Web application may have been opened up directly to clients for self-service, whereas past interactions all went through a far smaller set of client support staff.
These same problems can occur in an enterprise mashup scenario. First, a mashup environment may promote a "do the best with what you have" approach, repurposing existing interfaces for interactions different from what the service designer intended. Second, even when consumers use the service as designed, if you don't account for the user load associated with a new consumer, whether a mashup application or otherwise, you run the risk of that consumer's users eating up all of your available capacity.
There are a few things you can do to prevent these problems in your enterprise. First, establish a defined on-boarding process for new consumers so the service manager can obtain expected loads from each new consumer and plan capacity accordingly.
Second, you must monitor service usage and look for changes in demand. The on-boarding process can only give you estimates, and there's no guarantee it will match reality.
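As a rough illustration of what that monitoring might look like, here is a minimal Python sketch that compares each interval's request count against a rolling baseline and flags a demand change. The class name, window size, and threshold are illustrative assumptions, not part of any particular monitoring product.

```python
from collections import deque


class DemandMonitor:
    """Flag intervals whose request count far exceeds the rolling baseline.

    The default window (24 intervals) and threshold (2x baseline) are
    illustrative assumptions; real values would come from your own SLAs.
    """

    def __init__(self, window: int = 24, threshold: float = 2.0):
        self.history: deque[int] = deque(maxlen=window)
        self.threshold = threshold

    def record(self, requests_in_interval: int) -> bool:
        """Record one interval's request count; return True if it looks anomalous."""
        alert = False
        if self.history:
            baseline = sum(self.history) / len(self.history)
            alert = requests_in_interval > baseline * self.threshold
        self.history.append(requests_in_interval)
        return alert
```

In practice you would feed this from your access logs or gateway metrics and route alerts to the service manager, so that a surprise consumer shows up as a demand-change alert rather than an outage.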
Third, consider throttling requests according to a service level agreement. To do this, however, you must be able to uniquely identify the source of incoming requests, and ensuring that each new consumer can be uniquely identified, without any one of them masquerading (accidentally or intentionally) as another, can be challenging.
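One common way to implement per-consumer throttling is a token bucket keyed by a unique consumer identifier, such as an API key. The Python sketch below is illustrative: the `throttle` function, its default rate and burst values, and the in-memory bucket store are assumptions, and real limits would come from each consumer's service level agreement.

```python
import time
from dataclasses import dataclass, field


@dataclass
class TokenBucket:
    rate: float      # sustained requests per second allowed by the SLA
    capacity: float  # maximum burst size
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, then spend one if available.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# One bucket per uniquely identified consumer (e.g., per API key).
_buckets: dict[str, TokenBucket] = {}


def throttle(consumer_id: str, rate: float = 5.0, burst: float = 10.0) -> bool:
    """Return True if this consumer's request should be admitted."""
    bucket = _buckets.setdefault(consumer_id, TokenBucket(rate, burst, tokens=burst))
    return bucket.allow()
```

Note that the whole scheme depends on the identification problem described above: if two consumers share a key, they share a bucket, and the SLA you think you are enforcing no longer matches reality.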
Finally, strive for elastic capacity for services where rapid growth, variable usage, public access, or other factors make it challenging to require a formal on-boarding process for every new consumer. Whether through a cloud provider or through automated provisioning of your own infrastructure, the ability to scale up or down to meet dynamic demand can prevent an outage. That doesn't eliminate the need for monitoring and governing runtime behavior, though. There's always room for optimization, and by monitoring production traffic regularly, you can identify opportunities to improve the service.
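The scaling decision behind such elasticity is often a simple proportional rule, similar in spirit to what a horizontal autoscaler applies: size the replica count by the ratio of observed to target utilization, clamped to configured bounds. This Python sketch is illustrative; the function name, target utilization, and bounds are assumptions rather than any provider's actual API.

```python
import math


def desired_replicas(current: int, utilization: float,
                     target: float = 0.6, min_r: int = 2, max_r: int = 20) -> int:
    """Proportional scaling rule: replicas needed = current * observed/target,
    rounded up and clamped to [min_r, max_r]. Thresholds are illustrative."""
    raw = math.ceil(current * utilization / target)
    return max(min_r, min(max_r, raw))
```

Running this against the utilization figures your monitoring already collects is what turns "watch the traffic" into "scale before the outage."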