
Tackling quality challenges of SOA and multi-tiered systems: A technical perspective, part 2

Part 1 of this article discussed the distributed nature of systems when SOA is involved and some of the difficulties inherent in ensuring quality when multiple teams are involved. It suggested strategies for streamlining workflow, promoting reuse of work across teams, and identifying which parts of the system are responsible when issues arise. This part of the series shifts from an organizational perspective on the problem to a technical one. What are the technical challenges that arise in distributed SOA systems, and what quality policies and testing practices need to be put in place to address them?

Systems built using a SOA inherently depend on many different components, many of which are outside the immediate control of the teams using them. Two technical challenges arise directly from this. First, the quality of the system being built by one team depends on the quality of the components built by other teams, whether those teams are in the same organization or at a business partner. If a component has problems, quality issues will surface in every system built on top of it. Second, it is difficult to test a system that depends on external components the testing team does not control. If a component is not always available, or if a version of it has bugs, tests of the system that uses it will be noisy. It can also be difficult to test how the system behaves under valid but uncommon error conditions returned by the component.

A third challenge in these distributed systems is testing entire business process scenarios. Drawing on the example from Part 1 of this series, a process may involve a customer interacting with a Web interface and a customer sales associate approving the customer's request, thereby triggering a Web service. Multiple system interfaces must interact to carry out the entire process.

Ensuring quality across all system components

Defining consistent quality policies for each component in a system helps ensure the quality of the overall system. These policies govern how each component is written and what conditions it must satisfy. Security must be considered for all Web-based and SOA systems: policies on how code should be written can address most of the vulnerabilities discussed in the OWASP Top Ten. In addition, interoperability standards from the W3C, OASIS, and WS-I need to be considered for Web services. Many WS-* standards may be important for the services in your SOA; which ones apply depends on the requirements of the application. Web services must usually also satisfy service-level agreements (SLAs).
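
As a concrete illustration, consider a secure-coding policy targeting the injection flaws in the OWASP Top Ten: user input must be bound as query parameters, never concatenated into SQL. The following is a minimal Java sketch of policy-compliant code; the table and column names are hypothetical assumptions, not from any particular system.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CustomerLookup {

    // Policy-compliant lookup: user input is bound as a parameter rather
    // than concatenated into the SQL string, closing the injection hole.
    public String findCustomerName(Connection conn, String customerId)
            throws SQLException {
        String sql = "SELECT name FROM customers WHERE id = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        }
    }
}

A static analysis tool can flag string-concatenated queries automatically, turning this policy into an enforceable check rather than a guideline.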

The policies discussed so far affect how code and components are written. A different kind of policy enforces the types and number of automated functional test cases that must exist for each component. These functional tests may be unit tests, or they may be functional validation tests: for well-formedness of messages, for functionality in both success and failure cases, and for adherence to SLAs. Web interfaces may require other kinds of policies. Adherence to WCAG 2.0 may be mandated for Web applications that must be accessible to visually impaired users, and branding and content policies may govern the look and feel of the Web application.
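
For instance, a test-case policy might require every Web service component to ship with functional tests along the lines of the following JUnit sketch, which checks message well-formedness and a response-time SLA. CreditCheckClient, its checkCredit method, and the two-second SLA are hypothetical assumptions for illustration.

import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.junit.Test;

public class CreditCheckServiceTest {

    // CreditCheckClient is a hypothetical client for the service under test.
    private final CreditCheckClient client = new CreditCheckClient();

    @Test
    public void responseIsWellFormedXml() throws Exception {
        String response = client.checkCredit("customer-42");
        // DocumentBuilder.parse throws if the message is not well-formed XML.
        assertNotNull(DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                        response.getBytes(StandardCharsets.UTF_8))));
    }

    @Test
    public void responseMeetsTwoSecondSla() throws Exception {
        long start = System.currentTimeMillis();
        client.checkCredit("customer-42");
        long elapsedMillis = System.currentTimeMillis() - start;
        assertTrue("SLA exceeded: " + elapsedMillis + " ms",
                elapsedMillis < 2000);
    }
}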

Each of these policies can and should be enforced automatically, with results published automatically as well. This ensures adherence to the policies and provides visibility into whether they are being followed. Every team in an organization should be required to follow them, and negotiations with business partners should result in a requirement that they follow the critical policies your system mandates. All teams should publish the results of their testing activities. When this is done for each component in a system, the results can be aggregated to determine the overall health of the system that depends on those components.
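
One minimal sketch of such aggregation, assuming each team publishes a one-line CSV of the form "component,passed,failed" into a shared directory (a hypothetical convention, not a specific tool's format):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class SystemHealthReport {

    public static void main(String[] args) throws IOException {
        // Hypothetical convention: each team publishes a one-line CSV
        // ("component,passed,failed") into this shared directory.
        Path resultsDir = Paths.get("published-results");
        int totalPassed = 0;
        int totalFailed = 0;
        try (Stream<Path> files = Files.list(resultsDir)) {
            for (Path file : (Iterable<Path>) files::iterator) {
                String[] fields = Files.readAllLines(file).get(0).split(",");
                System.out.printf("%s: %s passed, %s failed%n",
                        fields[0], fields[1], fields[2]);
                totalPassed += Integer.parseInt(fields[1]);
                totalFailed += Integer.parseInt(fields[2]);
            }
        }
        System.out.printf("Overall system health: %d passed, %d failed%n",
                totalPassed, totalFailed);
    }
}

In practice a continuous integration server or dashboard usually plays this role; the point is that per-component results roll up into a single view of system health.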

Testing a system that depends on components outside the team's control

In our example of a customer-facing Web application, the customer submits a request that requires approval. The approval may require interacting with an external Web service for a credit check or some other kind of background check. The team writing the Web application does not control this external Web service, but they need to write unit or functional tests against the application. In some cases it may not be feasible to test against the real service because it uses sensitive data. In other cases the availability of the service may be sporadic, or a version of it may have bugs. Finally, the team needs to validate how the application behaves when the service returns valid but uncommon error conditions. How does the team write effective tests for their Web application in this scenario?

The answer is to build emulations (stubs) of the external services. These emulations can be built manually or generated automatically by testing tools. An emulation mimics the real behavior of the service but runs against a static data set. Once the emulation is in use, the problems caused by the availability or temporary instability of the service disappear. The emulation can be programmed to return the different error conditions the Web application must handle, so that repeatable tests can be run against those conditions. And as discussed in Part 1, these emulations can and should be published globally so they can be reused by every team in the organization that depends on the services they model.
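
A minimal hand-built emulation might look like the following Java sketch. The interface, result type, exception, and customer IDs are hypothetical stand-ins for whatever contract the real service exposes.

import java.util.HashMap;
import java.util.Map;

// All names below are hypothetical stand-ins for the real service contract.
enum CreditResult { APPROVED, DECLINED }

class ServiceUnavailableException extends Exception {
    ServiceUnavailableException(String message) { super(message); }
}

interface CreditCheckService {
    CreditResult check(String customerId) throws ServiceUnavailableException;
}

// The emulation: mimics the real service against a static data set and can
// be programmed to return error conditions the Web application must handle.
class CreditCheckServiceStub implements CreditCheckService {

    // Canned records in place of real (possibly sensitive) customer data.
    private final Map<String, CreditResult> cannedResults = new HashMap<>();
    private boolean simulateOutage = false;

    CreditCheckServiceStub() {
        cannedResults.put("customer-42", CreditResult.APPROVED);
        cannedResults.put("customer-13", CreditResult.DECLINED);
    }

    // Tests flip this flag to exercise failure handling repeatably.
    void setSimulateOutage(boolean simulateOutage) {
        this.simulateOutage = simulateOutage;
    }

    @Override
    public CreditResult check(String customerId)
            throws ServiceUnavailableException {
        if (simulateOutage) {
            throw new ServiceUnavailableException("simulated outage");
        }
        return cannedResults.getOrDefault(customerId, CreditResult.DECLINED);
    }
}

Because the Web application depends on the CreditCheckService interface rather than a concrete client, production code can be wired to the real service while tests inject the stub and call setSimulateOutage(true) to verify failure handling.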

Testing business process scenarios

In our example, the process involves a user interacting with a Web application to submit a request and an employee validating that request by interacting with a Web service. Once the request is validated, the result is typically stored in a database. Automating this scenario involves putting together a number of pieces. First, a scenario is recorded against the Web UI that replicates a customer submitting a request. Second, a test is created against the Web service (or its emulation) that simulates the approval process. Third, a test is written that validates that the correct data has been stored in the database. These tests are combined into a single scenario that can then be run, and its results published, automatically. As mentioned in Part 1, the tests may already have been created by the teams working on the individual components, so they can be reused in this larger scenario. Automated testing tools exist that make this process straightforward.
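
Put together, the scenario might be scripted as a single automated test along these lines. WebUiDriver, ApprovalServiceClient, the JDBC URL, and the table schema are all hypothetical assumptions for illustration; a commercial testing tool would typically supply these pieces.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.junit.Test;

public class RequestApprovalScenarioTest {

    @Test
    public void submittedRequestIsApprovedAndStored() throws Exception {
        // Step 1: replay the recorded UI scenario of a customer submitting
        // a request. WebUiDriver is a hypothetical wrapper around the
        // recorded browser script.
        String requestId = WebUiDriver.submitRequest("customer-42");

        // Step 2: simulate the employee's approval by invoking the approval
        // Web service (or its emulation). ApprovalServiceClient is also
        // hypothetical.
        ApprovalServiceClient.approve(requestId);

        // Step 3: validate that the approved request reached the database.
        // The JDBC URL and schema are assumptions for illustration.
        try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/requests");
             PreparedStatement stmt = conn.prepareStatement(
                    "SELECT status FROM requests WHERE id = ?")) {
            stmt.setString(1, requestId);
            try (ResultSet rs = stmt.executeQuery()) {
                assertTrue("request not found", rs.next());
                assertEquals("APPROVED", rs.getString("status"));
            }
        }
    }
}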

In summary, quality in distributed SOA systems can be achieved despite the inherent difficulties. The policies and practices discussed here will go a long way toward ensuring the success of a SOA.

