Service-oriented architectures, and the systems built around them, are distributed by nature. SOA encourages the reuse of existing services, but these service assets are likely to be owned and managed by different distributed teams: some may be different teams within the same organization, while other assets belong to partners. This reality creates new challenges for organizations driving SOA adoption. Each of these services has its own lifecycle, independent of the others, causing dependency and synchronization issues. The business processes that execute on top of the SOA are only as good as the weakest component: if one of the dependencies has a defect or fails to meet its SLA (Service Level Agreement) requirements, then the process can break down regardless of how robust the rest of the system is.
This article is part one in a series of two pieces on tackling the quality challenges of SOA and multi-tiered systems. This first article explores the quality challenges at a high, organizational level and proposes an approach to minimizing them in terms of how the teams are structured. It also promotes a framework for collaboration and information sharing, as well as visibility from a quality perspective. The second piece in the series will explore the technical aspects of managing quality in a distributed system with service and Web UI components, and will address the kinds of quality policies and testing practices to enforce.
Distributed development: An example
Let's take an example of a consumer-facing Web application. You may have a Web application that provides the means for customers to sign up for products or services; a sales associate would receive these requests, evaluate them, then approve or deny them accordingly. This customer request/evaluation process can depend on back-end Web services to retrieve customer data to determine eligibility for a certain promotion. It is often the case that the group responsible for the Web front-end UI (with a focus on HTML design, page workflow, etc.) is different from the group--often known as an enterprise services group--that maintains the back-end Web services (with a focus on WSDLs, SOAP and XML).
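To make the example concrete, the sketch below shows what the Web UI tier's call into the back-end eligibility service might look like. The service name, XML namespace, and field names are hypothetical assumptions for illustration, not a real contract.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/customer-eligibility"  # assumed service namespace


def build_eligibility_request(customer_id: str) -> str:
    """Build the SOAP request the Web UI tier would send to the back end."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    req = ET.SubElement(body, f"{{{SVC_NS}}}CheckEligibility")
    ET.SubElement(req, f"{{{SVC_NS}}}CustomerId").text = customer_id
    return ET.tostring(envelope, encoding="unicode")


def parse_eligibility_response(xml_text: str) -> bool:
    """Extract the boolean eligibility flag from the service's response."""
    root = ET.fromstring(xml_text)
    flag = root.find(f".//{{{SVC_NS}}}Eligible")
    return flag is not None and flag.text == "true"
```

The point is that two distinct artifacts meet here: the UI team owns the code that builds and consumes these messages, while the enterprise services group owns the WSDL and the service behind it.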
Certainly, such separation of roles makes perfect sense in today's technology environment, where the diversity of technical domain expertise results in specialization and focus. Besides, these back-end services may very well be used by applications other than the customer Web site, which further necessitates the separation of roles. However, the quality of the overarching business process (such as a customer requesting a product online and waiting on a sales associate to approve the request), in terms of meeting both functional and nonfunctional requirements, needs to be ensured regardless of the distributed environment. Quality therefore needs to be managed holistically, because the business process is ultimately only as good as the weakest link in the system. A defect in any part can negatively influence the customer experience.
Defining organizational roles
Many organizations used to address this issue with a shared quality group that performed QA-related tasks, such as requirements validation and testing, for several different teams. However, this approach, especially when the QA role is stretched to include all test activities, tends to be infeasible in a SOA environment. The difficulty lies first in the different areas of expertise needed to validate the work of each team. Second, because the services are reused by different business processes, such a group can apply tests and verifications only to the user-facing components, without consideration of, or visibility into, critical technical quality specifics of the underlying service components, such as standards compliance and interoperability. Third, the cycle times that such a model consumes tend to overwhelm the project iterations--causing significant overhead and delays. Therefore, an alternative approach is to split the quality organization roles between two main levels:
1) A technical level at the individual teams, where each team is responsible for the quality-related tasks and activities of its own service or component. These teams are composed of engineers who understand the technical domain and the quality policies that need to be addressed at this level. In our example, the enterprise services group would enforce quality policies related to Web services, and the Web application group would enforce quality policies related to the UI front end.
2) A business process group, where a shared (but small) number of individuals focus on the end-user experience in the context of the various business workflows. This group works closely with the business analysts, and ensures that any end-to-end business processes are implemented and integrated by the underlying teams properly.
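The two levels translate into different kinds of automated checks. As a minimal sketch, the per-team policies below are assumptions chosen for illustration (a well-formedness gate on published WSDLs for the services group, a page-title rule for the UI group), not a standard policy catalog:

```python
import xml.etree.ElementTree as ET


def wsdl_is_well_formed(wsdl_text: str) -> bool:
    """Enterprise services policy: every published WSDL must parse as XML."""
    try:
        ET.fromstring(wsdl_text)
        return True
    except ET.ParseError:
        return False


def page_has_title(html_text: str) -> bool:
    """Web UI policy: every page must carry a non-empty <title> element."""
    lower = html_text.lower()
    start = lower.find("<title>")
    end = lower.find("</title>")
    return start != -1 and end > start + len("<title>")
```

Each team runs its own checks against its own assets; the business process group never needs to own or understand these implementation-level rules.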
Now, in order to prevent a situation where these two roles operate disjointedly, it is necessary that they can operate in a collaborative workflow. Here are some points to consider in achieving this workflow.
First is test artifact sharing. The technical quality engineers need to build and maintain a set of test artifacts for their piece of the system. That is, the enterprise services group evolves its assets by maintaining regression tests for its Web services and the related code assets, while the Web UI group maintains similar artifacts for the Web front end. These test assets need to be maintained in a shared quality repository, which the process quality group accesses in order to piece the assets together into end-to-end test and validation scenarios.
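One way to make such artifacts shareable is to store them as plain data rather than as code bound to one team's harness, so the process group can chain them into end-to-end scenarios. The case structure below is an assumption for illustration:

```python
# Hypothetical shared regression cases as they might live in the joint
# quality repository; field names are illustrative, not a real schema.
REGRESSION_CASES = [
    {"name": "eligible-customer", "customer_id": "C-100", "expected": True},
    {"name": "unknown-customer", "customer_id": "C-999", "expected": False},
]


def run_cases(check, cases):
    """Run shared cases against any implementation of the eligibility check.

    `check` can be the real service client, an emulated service, or an
    end-to-end scenario step -- the cases themselves stay team-neutral.
    """
    failures = []
    for case in cases:
        if check(case["customer_id"]) != case["expected"]:
            failures.append(case["name"])
    return failures
```

Because the cases are data, either group (or the process group) can replay them against whichever layer of the system it owns.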
Second is emulation. Both quality teams will require the use of emulated (virtualized) versions of the services or components that the other teams work on. This is needed because staging a test and development environment, or replicating it for collaborative work, is extremely difficult with today's systems. The availability of such an environment is also crucial for each group to make its pieces of the system testable. Emulation is also what isolates components, simplifies complexity, and increases test coverage of the system under various error conditions.
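An emulated service can be as simple as an in-process stub with injectable faults. The sketch below assumes the hypothetical eligibility contract from the earlier example; it lets the Web UI team test without the real back end and force error conditions (timeouts, faults) that a live service rarely produces on demand:

```python
class EmulatedEligibilityService:
    """In-process stand-in for the back-end eligibility service (assumed API)."""

    def __init__(self, fixtures, fail_with=None):
        self.fixtures = fixtures    # customer_id -> eligibility flag
        self.fail_with = fail_with  # optional exception to inject as a fault

    def check(self, customer_id):
        if self.fail_with is not None:
            raise self.fail_with    # simulate a timeout, SOAP fault, etc.
        return self.fixtures.get(customer_id, False)
```

Swapping `fail_with` in lets the UI team verify its error handling paths, which is exactly the coverage that is hard to achieve against a shared live environment.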
Third, automate regression test execution and provide visibility into results. Since the test assets are maintained and shared in a joint repository, they can be executed as automated regression tests on a schedule or whenever a change is made by one of the groups. Because each group does not own the entire system, and yet needs to work in the context of the entire system, access to test execution and results becomes necessary and critical. Problem resolution becomes faster because causes can be identified quickly; more importantly, problems introduced by one group would be caught because the other group's tests might expose them immediately, preventing firefighting later. Regression test and policy compliance results can be made visible, and failures can be assigned as tasks to the right team members based on the error origin.
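Assigning failures by error origin can be automated if ownership is encoded in the repository layout. The directory prefixes and team names below are assumptions for illustration:

```python
# Hypothetical mapping from repository path prefix to owning team.
OWNERSHIP = {
    "services/": "enterprise-services",
    "webui/": "web-ui",
    "e2e/": "business-process",
}


def assign_failure(test_path: str) -> str:
    """Route a failing test to the owning team based on where it lives."""
    for prefix, team in OWNERSHIP.items():
        if test_path.startswith(prefix):
            return team
    return "business-process"  # default owner for unclassified failures
```

With a rule like this, a nightly run can file failures directly against the right team instead of routing everything through a central triage step.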
Not only would these practices provide a working workflow for these distributed teams, but they would also ensure that the teams comply with and enforce quality policies in a consistent manner, each on their own assets, while visibility into the quality process is consolidated so that it can be measured and improved.
Part 2 in this series will address some specifics that each group would need to address, as well as examples of quality policies and how they can be enforced consistently on the various technical domains.