Spending big on serious middleware

February 7, 2008 § 1 Comment

I once worked as part of a team with a clear mandate: select a middleware product that would dramatically improve our productivity. We were given a short list, pre-determined by a separate consulting company specialising in document management systems (DMS) and business process management (BPM) tools.

What were the motivations for doing so? Well, we had to develop a large greenfield application, consisting of many business processes and requirements for integrating with a multitude of third parties. The business employing us feared that we developers would get bogged down in building technical infrastructure – ‘plumbing’ was the word used. Instead, it was envisaged that with a serious tool on board we wouldn’t need to worry about all that stuff; we could rock on and start delivering business value sooner. Up to a million quid was reserved for this initial expenditure.

That’s obviously quite a crude summarisation of the reasons behind this middleware search. There were other factors at play. Large tools give the impression that the hard, well-known concerns have already been taken care of: security, data protection, audit trails and logging, scalability and performance, and transparency into the underlying business processes. Given these benefits, spending a million or so quid – a sum that can look relatively small in the grand scheme of things – could be seen as a favourable risk.

The arguments against are somewhat subtler, and when squared up to the proposed benefits pushed by a ferocious sales force, they can be seen in a somewhat dimmer light. I’ll attempt to bring some of them into clearer focus in this post.


Ease of testing

Testing a message bus or RESTful SOA has been done many times, and useful patterns and tools have arisen to make it a straightforward process. Trying to wrap an automated test harness around a vendor product can be much trickier. Why? There are some major hurdles. The first is getting hooks into the product to see what’s going in and coming out – for example, by using stubs that can be switched in and out automatically. Many middleware products come as a black box, which makes it difficult to see the processing going on inside them. As is often the case, the documentation is fragmented and hard to trawl, and the consultants provided by the middleware company simply direct challenging questions back to an HQ that is probably in a different country.

A common complaint is that vendors don’t get testing. That is to say, they haven’t thought through how automated testing of business processes could be applied, let alone how such testing could be integrated into a continuous build environment such as CruiseControl. Reliance is placed on the product being so simple and easy to use that manual testing will suffice. In reality, the complexity of some business processes cannot be avoided, and there will always be hazardous bugs that automated testing could catch.
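The stub-switching idea can be sketched in a few lines. This is a deliberately minimal, hypothetical harness (none of the names here belong to any vendor’s API): the endpoint a process talks to is injected as a parameter, which is exactly the hook a black-box product tends not to expose.

```python
class RecordingStub:
    """Stands in for a real third-party endpoint during automated tests,
    recording every message it receives so assertions can inspect the traffic."""

    def __init__(self):
        self.received = []

    def send(self, message):
        self.received.append(message)
        return {"status": "ok"}  # canned reply, as a stub would give


def run_order_process(order, endpoint):
    """A trivial stand-in for a 'business process': validate the order,
    then forward it to the injected endpoint. Because the endpoint is a
    parameter, a test can swap in RecordingStub; production code would
    pass the real client instead."""
    if order.get("quantity", 0) <= 0:
        raise ValueError("invalid order")
    return endpoint.send({"type": "order.placed", "order": order})


# In a test, the stub is switched in and the traffic asserted upon:
stub = RecordingStub()
reply = run_order_process({"id": 42, "quantity": 3}, endpoint=stub)
assert reply["status"] == "ok"
assert stub.received[0]["order"]["id"] == 42
```

The whole argument of this section is that when the product is a black box, there is simply no seam like the `endpoint` parameter above into which a stub can be slotted.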

Sometimes help is at hand. Open-source or third-party products can crop up around the testing of large vendor products – BTSUnit, which tests BizTalk, is one of many. The problem is that as the vendor product evolves, incompatibilities arise: the testing tools aren’t developed to keep pace, and they fall into disrepair. A littered cloud of debris ends up orbiting the “big name” products.

Ease of development

A big selling point of large middleware solutions is how quick they are to develop with. It is easy to find demos where, within a few clicks, someone has integrated with eBay or Amazon. It’s so easy! A few shapes on a diagramming tool, some directional process lines and the entry of some credentials is all it takes.

Troubles set in, however, when a development team with more than a few members starts to work on the tool in unison. If developers can’t each have their own locally installed instance to work with (for whatever reason – the lack of version control, say, or configuration that is too difficult to store), they may be forced to work on a centralised server. How the conflicts are managed depends on the tool. When I once used WebMethods, it was recommended that we each have our own ‘package’ (think of it as a sandbox) to work in, and that we manually copy our changes across into the production packages when finished.

Disheartened by this – continuous integration would have been a very big challenge indeed – we opted to write our own tool that sucked all the configuration out of WebMethods and stored it as an XML document which could be versioned; the build server could then configure a fresh WebMethods instance from that information. It was, however, a nightmare. We got it working effectively enough that a WebMethods consultant wanted to buy it, but it cost us time to develop and maintain, and it still had some frustrating issues.
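The core of that home-grown tool can be sketched as a round trip: pull configuration out as stable, diffable XML for version control, and parse it back so a build server can push it into a fresh instance. This is an illustrative reconstruction only – the real WebMethods APIs involved are not shown, and the flat name/value shape is an assumption for brevity.

```python
import xml.etree.ElementTree as ET


def config_to_xml(config: dict) -> str:
    """Serialise a flat name->value configuration mapping to XML text
    suitable for committing to version control."""
    root = ET.Element("configuration")
    for name in sorted(config):  # sorted keys give stable, diffable output
        setting = ET.SubElement(root, "setting", name=name)
        setting.text = config[name]
    return ET.tostring(root, encoding="unicode")


def xml_to_config(xml_text: str) -> dict:
    """Parse the versioned XML back into a mapping, ready for a build
    server to apply to a freshly installed middleware instance."""
    root = ET.fromstring(xml_text)
    return {s.get("name"): s.text for s in root.findall("setting")}


# Round trip: what goes into version control comes back out unchanged.
snapshot = config_to_xml({"endpoint": "http://example.test", "queue.size": "100"})
assert xml_to_config(snapshot)["endpoint"] == "http://example.test"
```

The hard part in practice was not this serialisation but extracting the configuration from the product in the first place – precisely the ‘plumbing’ the product was bought to avoid.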

The ironic thing is that the product was bought in order to prevent us spending time ‘plumbing’, yet by working alongside such a big middleware product, we got our hands far dirtier than expected.

Another portentous factor to consider is that big products might not do all they say they can do. For example: does it integrate well with SOAP? Yes – great. Which version? Oh, not 1.2? It’s a nice image that all the work can be done inside a single tool, yet if there are many third parties to integrate with, the chances are that some additional software will be needed. How it’s all glued together is an additional complexity introduced.
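That glue typically takes the form of small adapters wrapped around the product’s own clients. A minimal sketch, with entirely hypothetical class names standing in for “what the product ships” and “what the partner demands”:

```python
class LegacySoap11Client:
    """Hypothetical client bundled with the big product -- speaks only SOAP 1.1."""

    def call(self, action, body):
        return {"soap": "1.1", "action": action, "body": body}


class Soap12Adapter:
    """Glue code bolted on when a partner demands SOAP 1.2: delegate to the
    bundled client, then translate the envelope. (The translation here is a
    one-line placeholder; a real one would rewrite namespaces and faults.)
    Every such adapter is extra software the all-in-one pitch never mentioned."""

    def __init__(self, inner):
        self.inner = inner

    def call(self, action, body):
        reply = self.inner.call(action, body)
        reply["soap"] = "1.2"  # illustrative translation only
        return reply


client = Soap12Adapter(LegacySoap11Client())
assert client.call("getQuote", {"symbol": "XYZ"})["soap"] == "1.2"
```

One adapter is harmless; a dozen of them, each version-locked to the product underneath, is exactly the glue complexity the paragraph above warns about.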

So, even if a large tool is easy to write automated tests for and easy for a medium-to-large team to develop with, will it deliver the promised impact on productivity? How does it cope when large, complex business processes have to be entered into it?

Managing complexity

The art of BPM (business process management) tools is to make complex business processes look simple. This way they become easy to understand:

[Image: Singularity process builder]

The complexity, however, does not go away; it’s simply no longer fully in view. There is a real danger of editing the business process through a pictorial tool such as the one above without any awareness of the underlying code. A separate problem is that, without a highly skilled team, the underlying code may lose its coherence and simplicity, scattered as it is around the various parts of a wider orchestration of which it is not aware. If the team is not careful, the code will consist of procedural hacks and a series of work-arounds for logic that could not be clearly and simply expressed elsewhere.

New skills have to be learnt in order to develop cleanly and simply. It’ll be easy to do trivial work, but when a business process strays from the happy path and becomes ever more elaborate, it’ll be much more difficult to maintain the virtues of process visualisation and simplicity. As has happened on more than a few projects that ThoughtWorks has taken over, the complexity modelled in middleware solutions can spiral out of control.

There’s an underlying theme that these tools are aimed at business people, not developers. On the project I was part of, where a considerable amount of money was spent on WebMethods, the end result was that the product was sidelined in favour of a clean, well-tested code-based solution. Once the painful decision was made to discard the vendor product, we went many times faster.

  • http://lptf.blogspot.com matt m

    We get this all of the time in my consulting/contract work. All of the time. These huge product suites do not reduce risk. I have been listening to Eli Goldratt’s audio programme “Beyond the Goal”, and he has some great historical perspective on ERP systems in a similar light. I think middleware is even more insidious though, because, as you say, it is sold to business people, but doesn’t actually offer any features to them, as it is supposed to be “helping the developers”. Which, of course, it doesn’t!

You are currently reading Spending big on serious middleware at Pithering About.