Many of the benefits of SOA derive from service reuse. Software reuse as a concept has been promoted for many years and proposed for many different technologies. Services represent a greater level of abstraction for designing and implementing reusable software, and XML and Web services are easier to use than their predecessors. But is it still too hard?
This, I believe, is what’s at the heart of David Chappell’s recent paper on reuse.
He says that almost everyone he has talked with during the past two years about SOA has said that achieving reuse with services is almost as hard as it was with objects.
Because reuse was promised and not really achieved with objects, and because it’s not much easier with services, David’s conclusion from his conversations appears to be that the industry is going to fail again.
He cites the usual difficulties with creating reusable services, including cultural challenges for developers (like my colleague Steve has written about), political problems (in which one department is not motivated to share the cost of reuse with another), and the fact that a service published for reuse might not contain all the features and functions some consumers require.
I do not agree with his assertion that the industry adopted objects, and will adopt services, simply because the big vendors push these technologies onto their customers. If these technologies were not addressing real customer problems – which is where vendors get their ideas for new features – customers would not buy and use them, no matter how hard vendors might push. Customers generally have a low tolerance for useless features (anyone remember “Bob”?).
However, his conclusion is what bothers me, since he’s basically saying that reuse is still too hard. I wonder whether that’s true. Certainly I have heard a lot of success stories from our customers and from others at industry trade shows.
If it were true, it would be a serious disappointment. I certainly think that the current technologies for SOA – XML and Web services – represent a sufficient level of abstraction to achieve reuse.
Last week Oracle hosted a three-day meeting of the SCA policy working group to try to accelerate progress on the current draft.
Naturally the issue of simplicity arose since ultimately people will have to learn how to use the syntax and understand its semantics. Complexity is always a potential barrier.
Someone said something like “after all, simplicity is the main reason for doing SCA in the first place.”
I am not sure about that and said so. I think that the main reason for doing SCA is to define a better way to deploy Web services onto various runtimes.
It’s possible that we’re saying the same thing – certainly the Java APIs for Web services are too complicated (I have been saying this for several years), and using SCA metadata could simplify them.
But that perspective is somewhat limited. Enterprise software needs to be standardized, not just the Java APIs simplified, and Web services are the best potential solution to date.
Web services are not executable. Nor should they be – they are an abstraction of distributed computing concepts such as interfacing and interoperability expressed using XML – and by not tying them to any particular runtime they maintain the necessary independence to achieve standardization.
If we now want to work on the next-level problem – how to consistently map the abstraction to multiple runtimes – that is where SCA comes in, or should. This is a similar statement in a way: simplicity is an important characteristic for the adoption of any technology. But to me the most important thing is getting right the deployment characteristics necessary for the success of the abstraction.
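To make the mapping idea concrete, here is a minimal sketch of what SCA assembly metadata looks like: one composite binds an abstract service contract to a concrete runtime implementation. All the names here are hypothetical, and this is only an illustrative fragment of the SCA 1.0 assembly model, not a complete deployable composite.

```xml
<!-- Hypothetical example: a composite maps an abstract service to a runtime -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="OrderComposite">

  <!-- The implementation element is where the runtime binding happens;
       swapping implementation.java for another implementation type would
       change the runtime without touching the service contract. -->
  <component name="OrderServiceComponent">
    <implementation.java class="example.OrderServiceImpl"/>
  </component>

  <!-- The same component is exposed over a Web services binding,
       keeping the WSDL-level abstraction independent of the runtime. -->
  <service name="OrderService" promote="OrderServiceComponent">
    <binding.ws/>
  </service>

</composite>
```

The point of the example is the separation of concerns: the `binding.ws` element carries the runtime-independent Web services contract, while the `implementation.*` element carries the runtime-specific deployment decision.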
At one point during the meeting I made the joke, which comes up from time to time in these discussions (as you might imagine) that we should just get everyone to code everything in Java. Problem solved.
But of course we can’t do that. The next best solution is to ensure the abstraction layer contains the necessary functionality, and that the containers can interpret the abstractions correctly. And as simply as possible, of course.
I am writing this from the Eclipse Board meeting. We’re at the Omni hotel in the huge Las Colinas planned development, just outside of Dallas. It is pretty scenic, and since it was a nice day I went out for a fairly long walk – about 30 minutes – at lunchtime to try to get some exercise and look around.
The area is certainly interesting, and certainly somewhat successful. But I couldn’t help comparing it to some of the “natural” waterfront communities I’ve seen, like say, San Antonio, which has a thriving riverwalk area that just sort of naturally evolved. Las Colinas seemed like it couldn’t really make up its mind whether to front the street or the water, and I often found myself alone for long stretches of the walkway along the manufactured canals and lake.
Anyway, as usual a good bit of time during the board meeting centered around IPR, which reminded me about the recent news from Microsoft, called the Open Specification Promise.
This is a very important issue: basically, Microsoft is saying it will not try to enforce any patents it may own in connection with 35 Web services specifications (including most of the major ones – SOAP, WSDL, WS-Security, WS-ReliableMessaging, etc. – and some not yet submitted to a standards body, such as WS-MEX, but strangely excluding WS-BPEL and UDDI).
In my opinion, software patents are entirely broken in the first place – and I say this even though I have two pending applications. You can basically submit anything for a software patent, and trying to defend against potential infringement is what I would call a nightmare scenario for any software company.
So it is great that Microsoft is saying they will not try to enforce their Web services patents (at least most if not all of them – impossible to know the answer here I suppose) against other implementors of those specifications. I would in fact like to see IBM say the same thing.
Intellectual property rights disputes play a large role in standardization and open source. Everyone working on open source projects is keenly aware of the possibility of inadvertently adopting code that infringes on someone’s patent.
The start of the fragmentation in the Web services specification community was when WS-Security was submitted to OASIS. Until that time all Web services specifications were at W3C. It was right around the time W3C tightened up its IPR policy while OASIS still operated on a looser policy.
In those days also Microsoft and IBM often included licensing terms and conditions on the specifications they published. Anyone who implemented one of them, WS-Addressing for example, needed a license from each of the specification authors.
Microsoft and IBM created a workshop process that included a mechanism for non-authors to provide feedback, in person or in writing, on the various specifications. Again for the purpose of maintaining clean IPR on the specifications, anyone providing feedback had to sign an agreement relinquishing all IPR to the specification authors.
However well intentioned this was, it created a double bind for specification implementors who were not also authors – if any feedback was adopted and incorporated into a new version of the spec, the feedback submitter could end up in the position of having to pay (license terms were not disclosed) to implement his or her own idea.
To their credit, when I pointed this out on the W3C email list, they went back and fixed it, changing the licensing text on their specifications. When the WS-Transactions specifications finally went into OASIS last year, the IPR terms and conditions were significantly improved. The only big remaining potential issue was around patents, and last week they cleared that up – or at least significantly improved that as well.
And recently IBM and Microsoft – albeit after a long delay – did submit WS-Policy to W3C under their strict policy. So this is great progress.
IPR issues are, by the way, also behind much of the disagreement among Java vendors with respect to how Sun is running the JCP. One bit of encouraging news here recently however was Sun’s joining SCA and the discussion there about merging SCA with JBI.
Personally, I believe that standards are not really the right area to think about when considering the IPR to patent. What’s in the specifications should be what everyone in the industry agrees needs to be standardized. Competition should be based on the implementation of the standard rather than what’s in the spec.
I also often point to the success of the World Wide Web, which I believe results in no small part from the inventor’s decision not to patent the core enabling technologies. This decision led to a very low cost of entry for participating – think how cheap it is to send email over the Internet, how cheap it is to put up a Web page, or update a Web site frequently.
On the other hand it’s interesting to think what would have happened had the Web been invented by Microsoft or IBM. Would they have sought to patent their invention? Would that have increased the cost of entry and created a barrier of adoption that would have inhibited or prevented its amazing success?
Well, at least there’s good progress recently.
For more information look here for a Microsoft blog entry with a lot of links and comments.
For the record, the Eclipse IPR policy
And why certain types of gambling, such as government lotteries, are actually helpful!
This hilarious clip from the Daily Show comes to me via Mark Little’s blog.
All I can say is that this really illustrates why so many people take an interest in these kinds of news shows.
Peter Kriens blogged about the results of the OSGi Enterprise Workshop on Monday, and more information is available, including this list of prioritized requirements and draft charter.
I was pretty impressed that we covered so much ground in one day. The working style of the OSGi Alliance was very informal, low pressure, but keep-things-moving and get-it-done. (Does that sound too much like Larry the Cable Guy?) We went from stating requirements to brainstorming to prioritizing to drafting a charter…for a standards organization that is pretty good progress.
Of course a key question here is how would an “OSGi Enterprise Edition” relate to other enterprise software initiatives out there, such as Web services, SCA, CORBA, or J2EE? How well would OSGi be adopted in the enterprise, especially given its origins in a Java runtime environment for mobile devices?
One answer can be found within the recent OSGi-Spring mapping. This project allows Spring components to be delivered as OSGi bundles and creates proxies between Spring beans and OSGi services.
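The proxy mechanism mentioned above can be illustrated in plain Java. This is a toy sketch of the general idea, not the actual Spring–OSGi code: the consumer holds a `java.lang.reflect.Proxy` implementing the service interface, and the invocation handler looks up the current backing object on every call, so the target can be replaced (as OSGi services come and go) without the consumer noticing. The `Greeter` interface and all names are hypothetical.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ServiceProxyDemo {

    // hypothetical service interface
    public interface Greeter {
        String greet(String name);
    }

    // stand-in for the OSGi service registry: holds the current target
    static volatile Greeter currentTarget = name -> "Hello, " + name;

    /** Returns a proxy that re-resolves the backing service on every call. */
    public static Greeter proxy() {
        InvocationHandler handler = (Object p, Method m, Object[] args) -> {
            Greeter target = currentTarget;   // fresh lookup per call
            if (target == null) {
                throw new IllegalStateException("service unavailable");
            }
            return m.invoke(target, args);
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                handler);
    }

    public static void main(String[] args) {
        Greeter g = proxy();
        System.out.println(g.greet("world"));
        currentTarget = name -> "Howdy, " + name;  // service swapped out
        System.out.println(g.greet("world"));      // same proxy, new target
    }
}
```

The design point is that the proxy’s indirection is what lets a dynamic environment like OSGi swap service implementations underneath a stable consumer-side reference.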
Another answer appears to be OSGi’s willingness to reuse and adopt whatever makes sense, without regard to any industry marketing/hype campaign.
Other interesting aspects…
As of 6.1, IBM ships WebSphere as a collection of OSGi bundles. (And by the way, a Google search for Maven OSGi turns up evidence that OSGi support is already underway there.)
The OSGi framework provides the foundation for the Eclipse runtime.
Open source and commercial projects and products are starting to emerge that deliver OSGi compliant servers for enterprise applications.
Workshop attendees described in detail a large-scale enterprise integration project in the airline industry, based on OSGi, that is already underway, with another one potentially to follow soon. In fact this project was the source for many of the requirements mentioned during the morning.
OSGi was chosen for this project instead of existing enterprise software technologies. J2EE, for example, was described as too heavyweight and complex. What they needed was basically a network of lightweight service containers talking directly to each other, without any broker, hub, or server in between.
Is this a good idea? Absolutely. Artix works exactly this way. Its distributed microkernel container with configurable plugins already handles large numbers of users, applications, and transactions across many customer deployments in production today. This approach works – in fact, I would say it is the best way to do SOA.
But what about SCA? IBM was present in force at the OSGi workshop (where aren’t they?), and they are also the main drivers behind SCA. The response was that the design center of SCA is clearly distributed, large-scale systems, whereas the design center of OSGi is services running in a single JVM on mobile or embedded systems. Also, OSGi is Java-only while SCA is multi-language.
But guess what two of the main requirements are for enterprise OSGi? Distributed computing support (i.e. multi JVM/multi-process) and multi-language compatibility, of course.
But maybe also there’s a good way to put all this together?
And maybe the OSGi Alliance is a good place to do it?
All in all, a very interesting set of questions.
As the board members present at the workshop said, OSGi simply noticed a groundswell of interest in applying OSGi to enterprise software problems, and thought they should probably look into starting an expert group to focus the conversation.
Dana Gardner posted a great entry about his meeting with some Ionaians (must have been Oisin and Debbie) last week at Eclipse World.
He hits the nail on the head about what we are doing, and about the potential for open source to meet SOA infrastructure requirements.
Search Web Services also did a nice preview and write-up of Oisin’s session on the SOA Tools Platform project.
If you think about Web services, as I do, as interfacing and interoperability technologies that are independent of execution environment, it’s important to maintain separation of Web services descriptions and messages from any language or platform specific features. This separation is where they deliver most of their benefit, and from which they derive most of their power.
However there needs to be a mapping from this independent description and messaging (plus ‘quality of service’ services of course) to an execution environment. That’s what SCA does. SCA is a component model, yes, and the reason we need it is because Web services are not executable (by definition they are independent of execution environment).
And of course on top of that we need tools.
And as I mentioned in the recent article in Enterprise Open Source Journal (sorry it is a large file) about enterprise open source, it makes sense to think about SOA infrastructure in terms of open source.
The software industry has been mainly proposing SOA infrastructure in terms of traditional commercial software paradigms. However, since SOA is basically an improvement layer on existing systems – a style of design or blueprint that encourages reusability and better integration – the kind of innovation we are starting to see in open source projects seems like a perfect fit.
I celebrated my return from vacation Monday by getting on a plane to Seattle for the three-day Web Services Transactions technical committee face to face.
Bad weather in Chicago caused a total of 5 hours’ delay on the way out, and I missed my connection. My bag, which I checked only because of the new restrictions, ended up on a different plane than I did, which cost me one of those hours. On the way back I decided it wouldn’t happen again, so before leaving the hotel I threw away all the toothpaste, shaving gel, and cologne in my toilet kit. Of course the weather was fine Friday and the delays were minimal…
Anyway from Tuesday through Thursday we made some good progress on the WS-Transactions specifications. These are important specifications because some applications of Web services and SOA require transactional integrity, and because the coordination spec (WS-C) and compensation spec (WS-BA) lay the foundation for some of the advanced transaction models we will need for large scale, loosely-coupled, asynchronous applications of SOA.
Tuesday and Wednesday we resolved all outstanding issues with current drafts of WS-Coordination and WS-AtomicTransaction. We confirmed successful interop testing of them, and voted them to Public Review, which is the next step toward their becoming OASIS standards. (The new drafts will be posted soon on the TC website.)
Wednesday evening Ian Robinson, my co-chair, had the idea of going to the restaurant in the Space Needle. Not everyone went, but those of us who did had a great view of the sunset. As is typical for Seattle the day had been cloudy with a bit of rain but by evening it started to clear.
After a Long Day of Processing Transaction Processing Issues
A lot of the discussion on Thursday was given over to issues on the WS-BusinessActivity specification.
The two-phase commit protocol defined in WS-AT hasn’t fundamentally changed in 20 years (or more), and has been standardized at least four times previously (OSI TP, TX-RPC, OTS/JTS, and TIP). And implementations of this specification interoperate pretty smoothly, and the spec is pretty stable.
Of course you might ask whether two-phase commit is appropriate for Web services and SOA, but the answer is, like anything else, that it depends what you’re doing, and it sometimes is.
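For readers who haven’t worked with it, the two-phase commit protocol behind WS-AT can be sketched in a few lines. This is an illustration only – real coordinators add durable logging, timeouts, and recovery, and WS-AT carries these exchanges as SOAP messages rather than local calls. The interface and class names are hypothetical.

```java
import java.util.List;

// Bare-bones sketch of two-phase commit: phase 1 collects votes,
// phase 2 delivers a single uniform outcome to every participant.
public class TwoPhaseCommit {

    public interface Participant {
        boolean prepare();   // phase 1: vote yes (true) or no (false)
        void commit();       // phase 2: outcome if all voted yes
        void rollback();     // phase 2: outcome if any voted no
    }

    /** Runs both phases; returns true if the transaction committed. */
    public static boolean run(List<Participant> participants) {
        // Phase 1: ask every participant to prepare; any "no" aborts all
        boolean allPrepared = true;
        for (Participant p : participants) {
            if (!p.prepare()) {
                allPrepared = false;
                break;
            }
        }
        // Phase 2: every participant gets the same outcome, which is
        // what gives the protocol its all-or-nothing guarantee
        for (Participant p : participants) {
            if (allPrepared) p.commit(); else p.rollback();
        }
        return allPrepared;
    }
}
```

The essential property is the one the prose above relies on: no participant commits until every participant has promised it can, which is also why the protocol requires participants to hold locks between the two phases – the source of its cost in loosely-coupled settings.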
Anyway the bigger, more open discussion is on alternative protocol models, which are more appropriate for asynchronous or long-running applications, such as WS-BA. Less is known about these from implementation experience.
WS-BA is an “open nested transaction” protocol, which means that “nested” transactions, or subtransactions, can commit without the overall transaction committing. Compensation actions are then required to undo any changes should the overall transaction have to roll back.
In this context the overall transaction can be thought of as a WS-BPEL script with multiple service invocations, and in fact one of the design criteria, if not the main one, for WS-BA is that it is a good match for WS-BPEL compensation scopes.
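The compensation idea can be sketched with a toy example: each completed step registers an “undo” action, and if a later step fails, the completed steps are compensated in reverse order. This is only an illustration of the pattern – the actual WS-BA protocol coordinates this through messages between a coordinator and participants – and the names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Toy sketch of compensation-based recovery: completed steps are
// undone in reverse order if a later step fails.
public class CompensationDemo {

    public interface Step {
        void execute() throws Exception;  // does the work (may fail)
        void compensate();                // undoes already-completed work
    }

    /** Returns true if all steps completed; otherwise compensates and returns false. */
    public static boolean run(List<Step> steps) {
        Deque<Step> completed = new ArrayDeque<>();
        for (Step step : steps) {
            try {
                step.execute();
                completed.push(step);   // remember for potential undo
            } catch (Exception e) {
                // a step failed: compensate completed steps, newest first
                while (!completed.isEmpty()) {
                    completed.pop().compensate();
                }
                return false;
            }
        }
        return true;
    }
}
```

Note the contrast with two-phase commit: here each step commits immediately and locks are released, at the price of needing an application-defined compensation (which may itself be visible to other parties) rather than a transparent rollback. That trade-off is what makes this model suitable for long-running, loosely-coupled work.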
Other advanced transaction models and protocol mappings are possible on top of WS-C, but these are outside the scope of the current TC. We are still hoping to complete the TC’s work within about a year of its initial meeting last November, and at the moment we are probably on track for completion in early 2007.