Category Archives: Software Standardization

Second Edition of TP Book out Today

It’s hard to believe, but the second edition of Principles of Transaction Processing is finally available. The simple table of contents is here, and the entire front section, including the dedication, contents, introduction and details of what’s changed, is here. The first chapter is available here as a free sample.

Photo of an early press run copy

It definitely turned out to be a lot more work than we expected when we created the proposal for the second edition almost four years ago. And of course we originally expected to finish the project about two years sooner.

But the benefit of the delay is that we were able to include more new products and technologies, such as EJB3, JPA, Spring, the .NET Framework’s WCF and System.Transactions API, SOA, AJAX, REST/HTTP, and ADO.NET, even though it meant a lot more writing and rewriting.

The first edition was basically organized around the three-tier TP application architecture widely adopted at the time, using TP monitor products for examples of the implementations of the principles and concepts covered in the book. Then as now, we make sure what we describe is based on practical, real-world techniques, although we do mention a few topics more of academic interest.

The value of this book is that it explains how the world’s largest TP applications work – how they use techniques such as caching, remote communications (synchronous as well as asynchronous), replication, partitioning, persistence, queuing, database recovery, ACID transactions, long-running transactions, performance and scalability techniques, locking, threading, business process management, and state management to process up to tens of thousands of transactions per second with high levels of reliability and availability. We explain the techniques in detail and show how they are programmed.
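
To give a flavor of the kind of code the book walks through (this particular sketch is mine, not an excerpt), here is roughly what ACID transaction demarcation looks like in an EJB3 session bean with JPA; the Account entity and transfer operation are hypothetical:

    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    // Hypothetical JPA entity, shown inline for brevity.
    @Entity
    class Account {
        @Id long id;
        long balance;
        protected Account() {}
        void debit(long amount)  { balance -= amount; }
        void credit(long amount) { balance += amount; }
    }

    // Container-managed ACID transaction in an EJB3 session bean: the
    // container begins a transaction on entry and commits on return; a
    // runtime exception rolls back both updates as a group.
    @Stateless
    public class TransferBean {

        @PersistenceContext
        private EntityManager em;

        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void transfer(long fromId, long toId, long amount) {
            Account from = em.find(Account.class, fromId);
            Account to   = em.find(Account.class, toId);
            from.debit(amount);   // both changes commit together...
            to.credit(amount);    // ...or neither does
        }
    }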

These techniques are used in airline reservation systems, stock trading systems, large Web sites, and in operational computing supporting virtually every sector of the economy. We primarily use Java EE-compliant application servers and Microsoft’s .NET Framework for product and code examples, but we also cover popular persistence abstraction mechanisms, Web services and REST/HTTP-based SOA, Spring, integration with legacy TP monitors (the ones still in use), and popular TP standards.

We also took the opportunity to look forward and include a few words about the potential impact on TP applications of current trends toward cloud computing, solid state memory, streams and event processing, and the changing design assumptions in the software systems used to power large Web sites.

Personally this has been a great project to work on, despite its challenges, complexities, and pressures. It could not have been done without the really exceptional assistance from 35 reviewers who so generously contributed their expertise on a wide variety of topics. And it has been really great to work with Phil again.

Finally, the book is dedicated to Jim Gray, who was so helpful to us in the first edition, reviewed the second edition proposal, and still generally serves as an inspiration to all of us who work in the field of transaction processing.

The Problem with SCA

David Chappell recently published his Introducing SCA whitepaper, and it is a very good introduction to SCA. I recommend it to anyone interested in getting a handle on SCA.

In his summary of the effort in his blog he notes the major difficulty he encountered: SCA participants seem to have different opinions about what’s important about SCA.

David has blogged about this before, based on his experience chairing an SCA panel at Java One. He has also argued (and continues to take this view) that the new Java programming model is what’s important.

My view is that the service assembly model is the most important thing, and I guess it’s fair to say that IONA as an SCA vendor will emphasize that view as we incorporate SCA into our SOA infrastructure product line.

I don’t think the world needs another Java programming model, and although I understand the comparison David makes with WCF, I don’t think it makes as much sense for the Java world. In fact the Java world appears fragmented enough already.

I was at Tech Ed ’03 when WCF was announced, and I remember clearly hearing the objections from some of the developers in attendance when they discovered that Microsoft was asking them to change how they developed their Web services. I also agree that WCF is a nice piece of work, and has a great architecture (very similar to IONA’s architecture, BTW).

Our view, and we did express this during the SCA meetings (and we were not alone), was that the metadata should be incorporated into the assembly spec as much as possible, and the metadata remaining in the Java annotations should be minimized.
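
To make the metadata question concrete, here is a minimal sketch of the SCA Java programming model using the osoa annotations (the service contracts and names are hypothetical). Annotation-borne metadata like this is exactly what we argued should move into the assembly (composite) file wherever possible:

    import org.osoa.sca.annotations.Reference;
    import org.osoa.sca.annotations.Service;

    // Hypothetical service contracts, shown inline for brevity.
    interface QuoteService   { double getQuote(String symbol); }
    interface PricingService { double lookupPrice(String symbol); }

    // In the SCA Java programming model, component metadata rides along
    // in annotations such as @Service and @Reference.
    @Service(QuoteService.class)
    public class QuoteComponent implements QuoteService {

        // What this reference is wired to can be declared here in the
        // code, or left to the composite (assembly) file; keeping it in
        // the composite keeps deployment detail out of the Java source.
        @Reference
        protected PricingService pricing;

        public double getQuote(String symbol) {
            return pricing.lookupPrice(symbol);
        }
    }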

I suppose it is not surprising that the committee work ended up placing more or less equal emphasis on both the assembly model and on the Java programming model, since the participants in the meetings also represented the division of opinion David encountered at his Java One panel. But this division of opinion continues to be a problem for SCA.

OSGi for the Enterprise – update

One of the great things about co-chairing the OSGi EEG is getting to work with guys like Adrian Colyer, who was interviewed on InfoQ recently about Spring-OSGi. This is a very interesting interview, especially if you are, like me, interested in understanding the proposed Spring-OSGi marriage better.
Adrian is a great guy and really sharp, and he is also a great contributor to the EEG – and he is by no means alone in that category by the way. We are really lucky to have a great EG, and fortunate to have Adrian’s help editing the Spring-OSGi Request for Proposal (RFP).
I do have a small issue with Adrian’s portrayal of Spring and OSGi, however, and I would guess he’s probably familiar with it from some of the things I’ve said during the meetings. Basically Adrian makes it sound like Spring and OSGi can (or will) do everything anyone needs.
As my colleague David Bosschaert says, this is a bit like saying you can do everything in Java, since it basically amounts to being able to develop a new application to do anything you want in any given programming language. Sure, the combination of Spring and OSGi is very powerful, and great stuff, but I hope it’s clear by now that we are all living in a heterogeneous world, and one in which interworking is essential.
Spring and OSGi is an important part, but just one part of the overall picture. One of the really big benefits of OSGi is that it’s capable of supporting multiple programming models.
I am sure that Adrian did not intend to imply otherwise, but I just wanted to highlight it, in no small part because the RFP I’m editing includes the requirements for interworking with existing and non-OSGi systems (which may of course also be non-Spring systems).
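To illustrate what I mean about multiple programming models, here is a minimal sketch (with a hypothetical Greeter service) of registering a service using the plain OSGi API, with no Spring involved at all:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceRegistration;

    // Hypothetical service contract.
    interface Greeter { String greet(String name); }

    // Plain OSGi: a bundle registers a service directly with the
    // framework's service registry. No Spring (or any other programming
    // model) is required; Spring-OSGi is one convenient way to do this,
    // not the only one.
    public class GreeterActivator implements BundleActivator {

        private ServiceRegistration registration;

        public void start(BundleContext context) {
            Greeter impl = new Greeter() {
                public String greet(String name) { return "Hello, " + name; }
            };
            // null = no service properties
            registration = context.registerService(
                    Greeter.class.getName(), impl, null);
        }

        public void stop(BundleContext context) {
            registration.unregister();  // clean up when the bundle stops
        }
    }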
If you want to find out more about OSGi overall check the Website and consider attending the upcoming OSGi Community Event in Munich. Hope to see you there!
The next EEG meeting actually takes place the day after the community event (June 28), where I hope my co-chair Tim Diekmann and I will be able to send off for approval the majority of the RFPs we’ve all been working on.
In OSGi the Expert Groups first create requirements documents, then move to the design phase, then to the specification phase, and finally to the reference implementation and conformance test kit phase. So I am hoping we can close out the requirements phase soon, and start on the really fun design discussions!
We have received about 15 RFPs so far (some have been combined), including the Spring-OSGi RFP that Adrian edited and the external systems RFP David and I edited, and RFPs for mapping various parts of JEE to OSGi (such as naming, database access, RMI, etc.), SCA-to-OSGi mapping (mostly to be done by the SCA community, we hope), a distributed registry, database access, how to run system-developed components (i.e. parts of JEE) alongside application-developed components, marshaling and classloading, Web application support for the enterprise, management, and “universal” or multilanguage OSGi.
Stay tuned!

Report on Workshop on Web of Services for Enterprise Computing

The report is finally out for the Workshop on Web of Services for Enterprise Computing.
First, my apologies for the delay in getting this out. I think it could still probably use some work, but hopefully it includes most of the good stuff that came out in the Workshop.
I’d also really like to thank, once again, everyone who participated. Speaking for myself, the Workshop was a really great experience and I learned a lot. It was a privilege to be in the same room with the folks who attended.
Now I’m going to go out on a bit of a limb here and say that there’s been a bit of a detente recently in the REST vs. SOAP debates, and I’d like to think that the Workshop has played a role in that.
I am seeing a lot of former WS-* advocates starting to give REST its due, and a good number of the “RESTafarians” starting to acknowledge the fact that people are getting value out of WS-*.
Whether the Workshop has really had a hand in that or not, I am pleased to see the debate being conducted on a more reasonable and rational basis.
Of course someone can always prove me wrong about that 😉 But I truly think it’s important that the industry as a whole understands the pros and cons of each approach.

WS-Transactions moves to OASIS Standard

I am a little tardy in saying so, but I am nonetheless very pleased to report that the WS-Transactions specifications (WS-AT, WS-BA, and WS-C) have been adopted as OASIS standards.
My co-chair, Ian Robinson, has written an excellent summary of the history behind the development of the specifications, including a link to Mark Little’s write up on InfoQ earlier this year.
I have to say it was really great working with Ian, and Mark, and Tom, and Max, and Colleen, and Bob, and I’m going to miss everyone. I really had a great co-chair and we really had a great TC.
At one point someone asked me how many TP interop standards I’ve worked on now – in other words, which number is WS-Transactions?
Well, let’s see – it’s either 4th or 5th, depending on how you count 😉
The first was the Multivendor Integration Architecture (MIA) transactional RPC, back in 1990, which was essentially a combination of DCE RPC and OSI TP. This was submitted to X/Open somewhere around 1992, I believe, and modified (among the modifications was Transarc taking credit for writing it – some things never change ;-). So that’s either 1 or 2, depending on how you count it. This eventually became the X/Open TxRPC specification.
Sidenote: One of the interesting things about the X/Open DTP work, looking back on it, is the “basic compromise” that allowed the work to progress. The vendors could not all agree on a single interoperability standard, so they agreed to create three: TxRPC (basically DCE RPC), CPI-C (basically LU6.2), and XATMI (basically Tuxedo’s ATMI). All three shared a mapping to OSI TP for the remote 2PC. However, it was most likely this compromise that got in the way of X/Open DTP solving the problem, since these specifications were each implemented by only one vendor (maybe two for TxRPC), and all that remains in practice of this body of work is the XA spec (which is still used).
[I actually didn’t know that the TP book Phil and I wrote is on books.google.com till I googled for X/Open DTP just now – see link at top of previous paragraph!]
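Since the XA spec is the survivor, it may be worth a minimal sketch of what it amounts to in Java terms: the transaction manager drives each resource manager through two-phase commit via the javax.transaction.xa.XAResource interface. How the XAResource and Xid are obtained is product-specific and omitted here; this is just the shape of the protocol, not any particular product’s code.

    import javax.transaction.xa.XAException;
    import javax.transaction.xa.XAResource;
    import javax.transaction.xa.Xid;

    // Minimal sketch of the XA contract that survives from X/Open DTP:
    // a transaction manager drives each resource manager (database,
    // queue, etc.) through two-phase commit via XAResource.
    public class TwoPhaseSketch {

        static void commitWork(XAResource rm, Xid xid) throws XAException {
            rm.start(xid, XAResource.TMNOFLAGS);  // associate work with the branch
            // ... the application does its transactional reads/writes here ...
            rm.end(xid, XAResource.TMSUCCESS);    // delimit the unit of work

            // Phase 1: ask the resource to vote.
            int vote = rm.prepare(xid);

            // Phase 2: commit if prepared; a real TM logs its decision
            // first so it can recover after a crash.
            if (vote == XAResource.XA_OK) {
                rm.commit(xid, false);  // false = two-phase, not one-phase
            }
            // XA_RDONLY means nothing to commit; errors mean rollback.
        }
    }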
In the late 90s I worked on the Transaction Internet Protocol (TIP) 2PC interop specification, which was my only experience working at the IETF. This was adopted by Microsoft for a while as their 2PC interoperability solution (in OLE Transactions, at least in the betas), and implemented by Digital and Tandem, but I don’t think it ever became a product.
After joining IONA I chaired OTS V2, which was also the interop spec adopted in EJB2. I also worked on the additional structuring mechanism for OTS around the same time, which is how I met Mark and Ian.
So that is also 4 or 5, depending on how you count (one OTS or two?).
One of the things that bugs me, after all this time, is that people still tend to look at the use of transactions either outside of their applicable area, or in a kind of black and white context in which you either always should use them or always should not.
In the first case, I often find people talking about client initiated transactions. To me a transaction does not exist unless and until there’s an operation on data. If the client has a transactional resource, then fine, it can initiate a transaction. If it doesn’t, it can’t. The server initiates the transaction. And a transaction cannot exist without a transactional resource, since by definition that’s what actually starts the transaction (you know, a database, file, or queue with a transaction manager capable of journaling, committing, and rolling back multiple changes as a group).
In the second case I hear things like “don’t ever use transactions in an SOA!” As if this statement had any practical value. As with any other technology, transactions have their applications. I can remember (unfortunately 😉) database coding before transactions were widely available. Believe me, you don’t want to be in the situation of telling the customer service department to stop taking orders while you track down the partial updates in the database and remove them by hand, and of course figure out where they should restart entering orders once the system comes back up…
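For anyone lucky enough never to have lived through that, here is a minimal JDBC sketch of what transactions buy you (the connection URL, tables, and columns are hypothetical): the order insert and the inventory update commit or roll back as a group, so a crash partway through never leaves partial updates to hunt down by hand.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // The two updates below commit or roll back as a group, so a crash
    // mid-way never leaves a partial order in the database.
    public class AtomicOrderUpdate {

        public static void recordOrder(String url, int orderId, int qty)
                throws SQLException {
            Connection con = DriverManager.getConnection(url);
            try {
                con.setAutoCommit(false);  // group the statements manually

                try (PreparedStatement ins = con.prepareStatement(
                         "INSERT INTO orders (id, qty) VALUES (?, ?)");
                     PreparedStatement upd = con.prepareStatement(
                         "UPDATE inventory SET on_hand = on_hand - ? WHERE item = ?")) {
                    ins.setInt(1, orderId);
                    ins.setInt(2, qty);
                    ins.executeUpdate();

                    upd.setInt(1, qty);
                    upd.setString(2, "widget");
                    upd.executeUpdate();
                }
                con.commit();      // both changes become durable together
            } catch (SQLException e) {
                con.rollback();    // or neither takes effect
                throw e;
            } finally {
                con.close();
            }
        }
    }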
So yes, there’s a cost, and if you are doing distributed 2PC across a network, there is definitely an extra impact to the application’s performance. But sometimes the ability to automatically recover to a known point following a crash is worth it.
Yes, make it a conscious choice. You don’t always need it. But when you do, there’s no substitute. This is not something you want to try to code yourself and get right, always, and without fail.

WS-Context approved as OASIS standard

I’m very glad to say that WS-Context was approved as an OASIS standard.
(Catching up a bit here…;-) (See also Mark Little’s InfoQ article.)
Red Hat contributed a good presentation on it to the WSEC Workshop in February, and the minutes from that reflect the discussion around the open issue of state management for Web services. There was also a bit of praise for WS-Context from Nick Gall, who called out its support for both RESTful and SOAPy interactions.
(By the way, we are still working on the workshop report, which we hope to complete soon. I am somewhat embarrassed to admit that I am the holdup at this point, but I am expecting to set aside some time over the weekend to catch up on it.)
On the Web, cookies are used to share persistent state between the browser and server, and several times it was proposed to rename “WS-Context” to “WS-Cookie.” But cookies aren’t compatible with XML (and therefore Web services) and, more to the point, aren’t suitable because they are opaque and owned by the server (i.e. not visible or updatable by the client).
Many WS-* specifications define their own context mechanisms for their shared state management, including WS-Security, WS-Transactions, WS-ReliableMessaging, WS-Trust, WS-SecureConversation, and WS-Federation (I may have missed some).
Although it’s possible to make an argument that this is a good thing (i.e. each context type is designed and tuned for each spec’s specific requirements and purpose) in practice I don’t believe it is.
Most distributed computing systems end up using a common mechanism to propagate multiple types of shared session context, including security credentials, transaction context, user and device information, database connection information, and so on, because it’s more efficient and easier to maintain than having to figure out which kind of context goes into which kind of mechanism.
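To show the shape of the idea (a sketch of the pattern only, not WS-Context’s actual schema; the namespace and element names are hypothetical), a single JAX-WS handler can stamp every outgoing message with one shared context header, instead of each specification inventing its own:

    import java.util.Collections;
    import java.util.Set;
    import javax.xml.namespace.QName;
    import javax.xml.soap.SOAPEnvelope;
    import javax.xml.soap.SOAPHeader;
    import javax.xml.ws.handler.MessageContext;
    import javax.xml.ws.handler.soap.SOAPHandler;
    import javax.xml.ws.handler.soap.SOAPMessageContext;

    // One handler adds a single shared context header to every outgoing
    // message; security, transactions, etc. could all ride in it.
    public class ContextHandler implements SOAPHandler<SOAPMessageContext> {

        private static final QName CONTEXT =
                new QName("http://example.org/context", "session-context");

        public boolean handleMessage(SOAPMessageContext ctx) {
            boolean outbound = (Boolean) ctx.get(
                    MessageContext.MESSAGE_OUTBOUND_PROPERTY);
            if (outbound) {
                try {
                    SOAPEnvelope env =
                            ctx.getMessage().getSOAPPart().getEnvelope();
                    SOAPHeader header = env.getHeader();
                    if (header == null) header = env.addHeader();
                    // A visible, updatable context identifier for the
                    // session; contrast with an opaque, server-owned cookie.
                    header.addChildElement(CONTEXT)
                          .addTextNode("urn:example:session:42");
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
            return true;  // continue the handler chain
        }

        public boolean handleFault(SOAPMessageContext ctx) { return true; }
        public void close(MessageContext ctx) { }
        public Set<QName> getHeaders() { return Collections.emptySet(); }
    }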
It has been a long haul to get to this point. I’d like to thank everyone who contributed to the WS-CAF TC and to the various interop demos and POCs along the way.
I am hoping CXF will pick this up – certainly there’s been a discussion about this on the dev list – as a future release feature.
Once WS-Context starts to get some use, its advantages are likely to be recognized, and Web services will have its common mechanism for shared session state management.

Ajax World presentation on OSGi

During the past week or so I’ve been doing lots of interesting stuff that I’ve been trying to find time to blog about, including Eclipse/OSGi Con, the Open Source Think Tank, and the Microsoft MVP Summit.
And now I’ve found that I have yet another interesting thing to do – speak about OSGi and Eclipse at Ajax World next Monday.
(I am happy to help out, but I just wish Jeremy would use that new photo I sent 😉)

Workshop summary and observations (2)

To continue the summary and observations on the Web of Services Workshop last week…
Not a lot of blogging afterward yet, but here’s an entry from Pete and one from Jonathan.
Adoption
It was clear that both the Web (REST) and Web services (WS-*) are being successfully used in production applications today.
The usage patterns seem to substantiate my view that Web services are more often used in enterprise IT environments that predate the Web. Most of the users who attended the workshop – and they tended to represent large, established organizations – said that they are using Web services in what I’d call mission-critical or operational applications.
It was also very clear that many of the same organizations are successfully using Web technologies in mission critical/operational applications.
Some of us – myself included – took the position that both technologies can and should co-exist, and that it would be good if the W3C could help define how this could and should happen.
I thought Noah’s paper was really great, and probably the best presentation as well. (Though of course my reaction was perhaps predisposed by my thinking that a hybrid or combined solution is what the industry needs.)
Although he is employed by IBM, Noah attended the workshop to represent the W3C’s Technical Architecture Group (TAG).
He said the TAG’s view is that people are getting value out of both Web and Web services technologies. His presentation and paper include some very good pros-and-cons points, and describe what I thought was a great approach to using the two in combination.
One unfortunate part is that what Noah presented relies to a large extent on parts of the specifications that aren’t well implemented, or are potentially misusable.
The SOAP 1.2 GET feature, WSDL 2.0, and WS-Addressing endpoint reference mechanisms are integral to Noah’s recommendation, but not yet widely implemented (with the exception of WS-Addressing EPRs, but this is a variation on the issue that I’ll explain later).
This is directly related to part of the discussion during the Workshop, about how the issues the Web community has with Web services relate more to how the specifications have been implemented (or not implemented, as the case may be) than with the specifications themselves.
Achieving an impact in this area may be somewhat challenging, but it may be one of the things to evaluate going forward.
The WS-A EPR issue relates to the fact that as specified, they resemble cookies inasmuch as they are intended to carry opaque data. The data in EPRs may contain identifiers, but if they do they risk “breaking” the Web in that they could be using a format other than a URI to identify an endpoint.
More later…

Web of Services Workshop Summary

It’s not often I get to go to something like this, never mind the privilege of co-chairing it.
I think we had a really great discussion, and I certainly learned a lot. From what others said, I think that was a pretty general impression. And I think everyone really did maintain a spirit of cooperation and pitched in.
We had a great mix of users, WS-* folks, and REST folks, with a couple of industry analysts thrown in, and experts on a variety of topics.
I think in the end we came up with a few good ideas for improving software standards for the enterprise, and some good suggestions for how to better join the Web services (WS-*) and Web (REST) communities.
Of course, we have yet to see what will really happen. But for the past two days we had everyone in the same room, and I would say each side started to acknowledge the other’s viewpoints. I even heard Mark Baker say he thought one of the WS-* companies was making pretty good progress 😉
There’s a lot to say, more than I can get to today. There will also be a formal report, including any actionable items and recommendations. (Also, the program now has all the presentations, in case you want to take a look.)
I’ll start recording a few interesting thoughts, in no particular order. I’ve also got a few photos that I’ll upload.
The innovator’s dilemma
Also known as why it’s difficult to recognize the effects of a disruptive innovation such as the Web.
I remember in the early days, when we’d talk with customers about SOAP, they’d say “well, that’s fine, but I can’t use it until it has better security,” or reliable delivery, or transactions, etc.
We had an example during the workshop, when one of the users said something like “I need a lot more capability in Web services before I can use it to replace WebSphere MQ.”
I just think that’s the wrong way to think about it, but it’s very natural. Customers rely on these kinds of “enterprisey” features every day, and when thinking about adopting new technology they look for feature compatibility.
But I think the question isn’t really “do Web services features offer equivalence with MQ” but “can I meet my application requirements using Web services”?
Yet we keep thinking about Web services in terms of the past, an evolution of the current solution, rather than as a completely new approach, an adaptation of middleware concepts to Web technologies.
The comment also was made a few times that it isn’t the specs as much as how they’ve been implemented that creates the difficulties for Web developers, and the complexities for which WS-* gets criticized.
Start with the Web
If Web based businesses such as Yahoo, eBay, Google, and Amazon.com can handle hundreds of millions of users and thousands of messages a second, petabytes of data, etc. and with good response time in a browser — you know it can be done.
So for anything new, consider using Web based technologies, or at least consider following the architectural principles of the Web in your design.
(Unless, of course, your business has absolutely nothing to do with the Web, and will never have to scale or change very much. There are many reasons not to consider using Web technologies. But I am thinking about the general case of a large, distributed enterprise application.)
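To be concrete about what following the architectural principles of the Web can mean at the code level, here is a minimal sketch (the resource URI is hypothetical): resources are named by URIs and manipulated through a small uniform interface, so any client, proxy, or cache along the way can handle them the same way.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // The Web's uniform interface: any resource, named by a URI, can be
    // fetched (GET), replaced (PUT), created (POST), or removed (DELETE)
    // the same way, and a GET is safe and cacheable by intermediaries.
    public class WebClient {

        public static String get(String uri) throws Exception {
            HttpURLConnection con =
                    (HttpURLConnection) new URL(uri).openConnection();
            con.setRequestMethod("GET");  // safe, cacheable read
            con.setRequestProperty("Accept", "application/xml");

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) body.append(line);
                return body.toString();
            }
        }

        public static void main(String[] args) throws Exception {
            // Hypothetical resource URI.
            System.out.println(get("http://example.org/orders/42"));
        }
    }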
Now, however, if you have a bunch of old systems – or if your IT environment was created before the Web and (like many) is a mess of heterogeneous stuff – you are also going to need to tackle the problem of rationalizing or standardizing it. Here an SOA using Web services is a great approach, and one that is growing in adoption.
If you want to join up your old stuff to the new stuff, Web services seem like the way to go.
Yahoo was among the companies represented and they mentioned that they still like to create their own infrastructure – these “mega sites” use quite a bit of custom code – because they can’t really buy what they need from the vendors.
Not too long ago this was the case more generally. I remember a lot of financial services organizations inventing their own middleware and TP monitors, because their requirements were further advanced than the features of generalized products. So this seems cyclical, a pattern that substantiates the disruptive-innovation concept.
This was the reason I proposed a hybrid solution – use the Web for new applications, and adapt (or interface) existing applications using Web services.
This was the subject of some debate, and two or three ideas were offered on how best to accomplish it.
More later…

Jon’s Conversation with Steve about the Workshop

Jon Udell, who unfortunately isn’t able to attend next week’s workshop, apparently did the next best thing by interviewing Steve Vinoski for this week’s podcast.
As Steve said, it’s exactly this whole issue of “existing enterprise IT systems are not going away any time soon” that is why we are having the workshop. I remember visiting an automobile factory in the Netherlands about 10 years ago, back when I was working for Digital. I was there as part of an effort to help them convert their transaction processing systems from the VAX to Alpha.
During a tour of the factory floor I noticed an old VAX model controlling part of the production line and asked, “How about that, do you plan to replace that one too?”
They said, “We plan to replace that the day before it fails.” This was a joke of course, but it was their way of saying that the production line was working just fine, thank you, and unless they absolutely needed to mess with it, they were going to just leave it alone as long as they could. There are literally thousands of systems like that, running businesses all over the world.
Recently the workshop seems to be gaining some momentum. This past week we received some very good additional papers, including one from Pete Lacey, a long-promised one from Boeing, and a surprise from Westinghouse Rail (this via encouragement from presenters Mark Baker and Mark Nottingham – so thanks, Marks!).