Test-driven SOA: A tool kit for comprehensive automated test coverage

In this post I am going to share some tools I find useful when developing components for the Oracle Service Bus - the same principles should apply to Integration Cloud Service as well.

If we are not programming test-first (or at least test-alongside), we are essentially programming debug-later (see "Physics of Test Driven Development").

If the enterprise service bus sits in the middle of an organisation's messaging and integration landscape, there are some key architectural principles that help in getting the best out of any service bus solution:
  • It is not the place for business logic but for integration logic, i.e. heavy on message transformation and often enrichment
  • Any operations, message flows or pipelines that the service bus exposes should ideally be stateless and without side effects. Much of this also depends on the backend services - they would ideally need to be idempotent.
  • Exposed interfaces must be designed to be canonical, while invoked endpoints are abstracted away so that calling systems are decoupled from the systems they call (there are also non-functional aspects of decoupling that the Service Bus can help achieve, such as via messaging - but this post is not about the value addition of Service Buses)
  • Like any other software, it must have comprehensive unit test coverage (no, not the platform, but what we have developed on it). I might be stating the obvious here, but I often find test coverage inadequate at many FMW customer sites.
Whatever transformations, validations or enrichment the service bus applies to incoming messages must have test coverage. Good test coverage makes the solution less prone to regression defects, easier to change and more agile overall (agility comes from good practices and tools, not from ceremonies with strange names like scrum).

Often I arrive at a customer site where important business data flows run on an ESB solution with complex data transformations. You never know how a change to some field, complex template or XPath expression might lead to an unrelated side effect. Needless to say, unless I find an exhaustive set of test cases (and surprisingly often I don't - maybe that is why they call me in the first place), the first thing I do is create some. This is the only way to ensure that the external interfaces to the system work the same before and after I make a change (apart from the specific change I intended, of course).
It is also invaluable when I have to make improvements to the system - such as refactoring old code.

Some technical scenarios that we can address (frequently seen in Service Bus implementations):
HTTP to file/JMS/HTTP/database, file to file, file to JMS, JMS to JMS, JMS to HTTP, and combinations thereof in more complex orchestrations, such as file to HTTP and then to JMS.
The data formats exchanged can also vary: native text, XML, JSON, binary.

Requirements from a test framework (from an ESB point of view):
* One click to run multiple test cases
* Visual indication of pass or failure
* Can be run with mainstream build/CI tools (such as the popular Maven)
* Ability to mock HTTP endpoints
* Ability to assert (equality, pattern matches)
* Ability to "diff" - i.e. identify differences between two pieces of text, but also between two XML or JSON documents

For unit testing, one can treat the Service Bus (or ICS) as a message-transformation black box and have the test framework interact only with its endpoints: filesystem locations, JMS destinations, inbound HTTP endpoints and mock HTTP endpoints.
Again, for unit testing I keep all endpoints on the local server (with invoked HTTP endpoints served by a mocking tool) and use an OSB customisation file specifically for the test instance (pointing to mock HTTP endpoints where required, in addition to the local/test JMS destinations, etc.)
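The black-box pattern above can be sketched in plain Java for the file-to-file case: the test only ever touches the inbound and outbound endpoints and asserts on what comes out. In this self-contained sketch a background thread stands in for the deployed Service Bus pipeline (here it simply uppercases the payload - an assumption purely for illustration); in a real test the thread disappears and the bus itself does the work.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FileEndpointTestSketch {
    public static void main(String[] args) throws Exception {
        Path in = Files.createTempDirectory("osb-in");
        Path out = Files.createTempDirectory("osb-out");

        // Stand-in for the Service Bus: a file-to-file "pipeline" that
        // uppercases the payload. In a real test the deployed OSB pipeline
        // does this work and this thread does not exist.
        Thread bus = new Thread(() -> {
            try {
                Path src = in.resolve("order.txt");
                while (!Files.exists(src)) Thread.sleep(50);
                String payload = new String(Files.readAllBytes(src));
                // Write to a temp name, then move, so the poller never
                // observes a half-written file.
                Path tmp = out.resolve("order.txt.tmp");
                Files.write(tmp, payload.toUpperCase().getBytes());
                Files.move(tmp, out.resolve("order.txt"), StandardCopyOption.ATOMIC_MOVE);
            } catch (IOException | InterruptedException e) {
                throw new RuntimeException(e);
            }
        });
        bus.start();

        // The test drops a message at the inbound endpoint (same temp-then-move trick)...
        Path inTmp = in.resolve("order.txt.tmp");
        Files.write(inTmp, "hello esb".getBytes());
        Files.move(inTmp, in.resolve("order.txt"), StandardCopyOption.ATOMIC_MOVE);

        // ...then polls the outbound endpoint and asserts on the result.
        Path result = out.resolve("order.txt");
        long deadline = System.currentTimeMillis() + 5000;
        while (!Files.exists(result) && System.currentTimeMillis() < deadline) {
            Thread.sleep(50);
        }
        String actual = new String(Files.readAllBytes(result));
        System.out.println("HELLO ESB".equals(actual) ? "PASS" : "FAIL: " + actual);
    }
}
```

The same poll-with-deadline shape works for JMS destinations and HTTP endpoints; only the endpoint-touching code changes.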

The tool kit that I have found most effective and have been using a lot lately:
1) JUnit - plain old, tried and tested, with all the power of Java at hand.
In the Fusion Middleware environment we get access to all the WebLogic client libraries (full and cut-down versions) to interact with JMS queues. I have made variants of this A-Team example for different scenarios, such as reading the specific number of messages I expect for a given input.

2) WireMock - easy to set up and effective. I only had to add this dependency to my Maven POM file, and with the WireMock import I was ready with my mock HTTP service (I have not tried the standalone downloadable JAR). For individual test cases I could reply with different XML or JSON responses carrying different data and statuses (success, failure).
Assertions can be performed at the level of a specific XPath (ensuring it contains the expected value) or at the full-document level.
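For reference, the WireMock dependency looks something like this - the version is only an example, and newer WireMock releases are published under the org.wiremock groupId instead:

```xml
<dependency>
    <groupId>com.github.tomakehurst</groupId>
    <artifactId>wiremock</artifactId>
    <version>2.27.2</version>
    <scope>test</scope>
</dependency>
```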

It is worth noting that the SOA composite test framework also lets us mock endpoints and run tests as part of a Maven build - but this post is focused on OSB.

3) XMLDiff - an API hidden away in one of the FMW libraries (the Oracle XML Parser).
For normal XML manipulation we often get by with the Java DOM/SAX APIs. However, I found XMLDiff very handy for comparing two XML documents, which we often need to do in test scenarios.
Think how you would compare an actual XML payload with an expected one - XMLDiff does it for us by identifying the specific XPath where it found differences.
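As a point of comparison: when all you need is a yes/no answer (rather than the XPath of the difference that XMLDiff gives you), the JDK's own DOM Level 3 API is enough. A minimal sketch using only standard-library classes:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlEqualitySketch {
    static Document parse(String xml) throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        return f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        Document expected = parse("<order><height>9.50</height></order>");
        Document same     = parse("<order><height>9.50</height></order>");
        Document changed  = parse("<order><height>10.00</height></order>");

        // isEqualNode does a deep, structural comparison (DOM Level 3);
        // unlike XMLDiff it only says equal/not-equal, with no pointer
        // to where the documents diverge.
        System.out.println(expected.isEqualNode(same));    // true
        System.out.println(expected.isEqualNode(changed)); // false
    }
}
```

For a failing test you usually want to know *where* the payloads differ, which is exactly the gap XMLDiff fills.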

Again, in an FMW environment you can add it as a library in JDeveloper, or add the following dependency to your Maven POM:
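The exact coordinates depend on your installation and Maven repository setup. One publicly available coordinate for the Oracle XML Parser is the xmlparserv2 artifact published on Maven Central alongside the JDBC artifacts (version shown is only an example); an FMW install typically also ships the same jar under oracle_common/modules:

```xml
<dependency>
    <groupId>com.oracle.database.xml</groupId>
    <artifactId>xmlparserv2</artifactId>
    <version>21.5.0.0</version>
</dependency>
```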

The output of many of the diff operations is another XML document listing the differences. If it contains no "append-node" or "delete-node" elements, the documents are identical.
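That pass/fail check can itself be done with plain JDK DOM code. The diff payload below is a hypothetical, simplified stand-in for XMLDiff output (the element names follow the append-node/delete-node convention described above, but the real document and its namespace are richer than this):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class DiffOutputCheckSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical diff document - for illustration only.
        String diff =
            "<xd:xdiff xmlns:xd='urn:example:xdiff'>"
          + "  <xd:delete-node xd:xpath='/order/height'/>"
          + "</xd:xdiff>";

        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true);
        Document doc = f.newDocumentBuilder()
                .parse(new ByteArrayInputStream(diff.getBytes(StandardCharsets.UTF_8)));

        // No append-node and no delete-node elements means the two
        // compared documents were identical.
        int changes = doc.getElementsByTagNameNS("*", "append-node").getLength()
                    + doc.getElementsByTagNameNS("*", "delete-node").getLength();
        System.out.println(changes == 0 ? "documents identical" : "differences: " + changes);
    }
}
```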

4) SoapUI - last but not least, of course - a no-brainer for initiating unit tests against exposed HTTP endpoints. Easily achieved by adding it as a plugin in your project POM.
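A plugin configuration along these lines runs a SoapUI project during the build (the version and project-file path are examples; depending on the version you may also need SmartBear's plugin repository in your POM):

```xml
<plugin>
    <groupId>com.smartbear.soapui</groupId>
    <artifactId>soapui-maven-plugin</artifactId>
    <version>5.7.0</version>
    <configuration>
        <projectFile>src/test/soapui/osb-tests-soapui-project.xml</projectFile>
    </configuration>
    <executions>
        <execution>
            <phase>integration-test</phase>
            <goals>
                <goal>test</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```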

These tools can then easily be extended to build repeatable, automated integration tests. Additional frameworks can add value where desirable (Citrus and Cucumber seem popular).

One final point: in addition to making code less prone to regression defects and more change-friendly (with the potential to allow more frequent releases), test cases also serve as a "source of truth" repository of the business rules actually implemented in the code - the more there are, the better.
Documents go out of sync, people leave and forget to update them, and there is always a semantic gap between documented language and code. If a test case says a field HEIGHT cannot exceed 9.99, then only a passing test can prove that it in fact doesn't.
So, given any business requirement, my priority is to write failing tests first to document the requirement, write code that fulfils it, accommodate all the "changes of mind" (whether genuine or ...) in a more agile way, and put everything into documentation once the dust settles.
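The HEIGHT rule above, written test-first, might look like this plain-Java sketch (the field and method names are invented for illustration; the assertions are written before the validation they exercise):

```java
public class HeightRuleSketch {
    static final double MAX_HEIGHT = 9.99;

    // Hypothetical validation, written after the checks below failed first.
    static boolean isValidHeight(double height) {
        return height > 0 && height <= MAX_HEIGHT;
    }

    public static void main(String[] args) {
        // These checks are the executable form of the business rule
        // "HEIGHT cannot exceed 9.99" - the living documentation.
        check(isValidHeight(9.99), "9.99 is the maximum allowed height");
        check(!isValidHeight(10.00), "10.00 must be rejected");
        check(!isValidHeight(-1.0), "negative heights must be rejected");
        System.out.println("all height-rule tests passed");
    }

    static void check(boolean condition, String rule) {
        if (!condition) throw new AssertionError("rule violated: " + rule);
    }
}
```

In a real project these would be JUnit test methods, but the principle is identical: the rule lives in an assertion that must pass on every build.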

Coming up: more sample code for testing Service Bus "code", fewer essays.
In the meantime, I can flaunt the EUnit tests I wrote the last time I tried my hand at Erlang. It is a small component of a larger programming assignment, and the assessment report said my software met the most requirements. I attribute this, hands down, to the adequate test coverage I had added right from the start.

Summary: TDD allows us to write more complex software while keeping it maintainable, change-friendly and responsive to change.
