WS-Transactions moves to OASIS Standard

I am a little tardy in doing so, but I am nonetheless very pleased to say that the WS-Transactions set of specifications (WS-AT, WS-BA, and WS-C) have been adopted as OASIS standards.
My co-chair, Ian Robinson, has written an excellent summary of the history behind the development of the specifications, including a link to Mark Little’s write up on InfoQ earlier this year.
I have to say it was really great working with Ian, and Mark, and Tom, and Max, and Colleen, and Bob, and I’m going to miss everyone. I really had a great co-chair and we really had a great TC.
At one point someone asked me how many TP interop standards I’ve worked on now: which number is WS-Transactions?
Well, let’s see – it’s either 4th or 5th, depending on how you count 😉
The first was the Multivendor Integration Architecture (MIA) transactional RPC, back in 1990. This was essentially a combination of DCE RPC and OSI TP. This was submitted to X/Open somewhere around 1992, I believe, and modified (among the modifications was Transarc taking credit for writing it – some things never change 😉). So that’s either 1 or 2, depending on how you count it. This eventually became the X/Open TxRPC specification.
Sidenote: One of the interesting things about the X/Open DTP work, looking back on it, is the “basic compromise” that allowed the work to progress. The vendors could not all agree on a single interoperability standard, so they agreed to create three: TxRPC (basically DCE RPC), CPI-C (basically LU6.2), and XATMI (basically Tuxedo’s ATMI). All three shared a mapping to OSI TP for the remote 2PC. However, it was most likely this compromise that got in the way of X/Open DTP solving the problem, since these specifications were each implemented by only one vendor (maybe two for TxRPC), and all that remains in practice of this body of work is the XA spec (which is still used).
[I actually didn’t know that the TP book Phil and I wrote is on till I googled for X/Open DTP just now – see link at top of previous paragraph!]
In the late 90s I worked on the Transaction Internet Protocol (TIP) 2PC interop specification, which was my only experience working at the IETF. This was adopted by Microsoft as their 2PC interoperability solution for a while (in OLE Transactions, at least in the betas), and implemented by Digital and Tandem, but I don’t think it ever became a product.
After joining IONA I chaired OTS V2, which was also the interop spec adopted in EJB2. I also worked on the additional structuring mechanism for OTS around the same time, which is how I met Mark and Ian.
So that is also 4 or 5, depending on how you count (one OTS or two?).
One of the things that bugs me, after all this time, is that people still tend to look at the use of transactions either outside of their applicable area, or in a kind of black and white context in which you either always should use them or always should not.
In the first case, I often find people talking about client initiated transactions. To me a transaction does not exist unless and until there’s an operation on data. If the client has a transactional resource, then fine, it can initiate a transaction. If it doesn’t, it can’t. The server initiates the transaction. And a transaction cannot exist without a transactional resource, since by definition that’s what actually starts the transaction (you know, a database, file, or queue with a transaction manager capable of journaling, committing, and rolling back multiple changes as a group).
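To make that definition concrete, here’s a toy sketch (the class and method names are mine, purely illustrative – this is not any product’s API) of the minimum a resource needs to be “transactional”: it journals changes so a group of updates can be committed or rolled back as a unit. Without something like this underneath, there is nothing for a “client-initiated transaction” to act on.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy transactional resource: a key/value store with an undo journal.
// begin() starts recording, commit() keeps the changes, rollback()
// replays the journal in reverse to restore the prior state.
public class JournaledStore {
    private final Map<String, String> data = new HashMap<>();
    private final List<Runnable> undoLog = new ArrayList<>();
    private boolean inTx = false;

    public void begin() {
        inTx = true;
        undoLog.clear();
    }

    public void put(String key, String value) {
        if (inTx) {
            final String old = data.get(key);
            // journal the inverse operation so rollback can restore it
            undoLog.add(old == null ? () -> data.remove(key)
                                    : () -> data.put(key, old));
        }
        data.put(key, value);
    }

    public void commit() {
        inTx = false;
        undoLog.clear();
    }

    public void rollback() {
        // apply undo records in reverse order of the original changes
        for (int i = undoLog.size() - 1; i >= 0; i--) {
            undoLog.get(i).run();
        }
        inTx = false;
        undoLog.clear();
    }

    public String get(String key) {
        return data.get(key);
    }
}
```

A real database, file system, or queue manager does the same thing with a durable log, but the principle is the same: the journal is what makes “multiple changes as a group” possible.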
In the second case I hear things like “don’t ever use transactions in an SOA!” As if this statement had any practical value. As with any other technology, transactions have their applications. I can remember (unfortunately 😉) database coding before transactions were widely available. Believe me, you don’t want to be in the situation of telling the customer service department to stop taking orders while you track down the partial updates in the database and remove them by hand, and of course figure out where they should restart entering orders once the system comes back up…
So yes, there’s a cost, and if you are doing distributed 2PC across a network, there is definitely an extra impact to the application’s performance. But sometimes the ability to automatically recover to a known point following a crash is worth it.
Yes, make it a conscious choice. You don’t always need it. But when you do, there’s no substitute. This is not something you want to try to code yourself and get right, always, and without fail.
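For readers who haven’t seen it spelled out, the distributed 2PC I’m referring to boils down to a very simple decision procedure. This is a minimal sketch of the coordinator’s logic only (interface and names are illustrative, not taken from WS-AT or any product, and real coordinators also log their decision durably so they can recover after a crash): phase one asks every participant to prepare, i.e. make the work durable but still undoable; only if all vote yes does phase two commit.

```java
import java.util.List;

// Minimal two-phase commit coordinator sketch.
public class TwoPhaseCommit {
    public interface Participant {
        boolean prepare();   // vote: can you make this work durable?
        void commit();       // phase two, positive outcome
        void rollback();     // phase two, negative outcome
    }

    // Returns true if the transaction committed, false if it rolled back.
    public static boolean run(List<Participant> participants) {
        // phase 1: collect votes; any "no" vote dooms the transaction
        boolean allPrepared = true;
        for (Participant p : participants) {
            if (!p.prepare()) {
                allPrepared = false;
                break;
            }
        }
        // phase 2: everyone commits, or everyone rolls back
        for (Participant p : participants) {
            if (allPrepared) {
                p.commit();
            } else {
                p.rollback();
            }
        }
        return allPrepared;
    }
}
```

The extra network round trip for the prepare phase is exactly the performance cost mentioned above; the all-or-nothing outcome is what you’re buying with it.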


5 responses to “WS-Transactions moves to OASIS Standard”

  1. As usual Eric, it was a pleasure to work with you too. First the CORBA Activity Service, then the Java Activity Service, then WS-CAF (in all of its incarnations) and now this. It’s been a long ride and I hope it’s not over yet!

  2. And don’t forget OTS V2! We had some good discussions there, especially rollback in the case of network failure 😉
    Thanks very much. Yes, I am definitely looking forward to the next time.

  3. WS-TX 1.1 moves to a standard

  4. Nice writeup, Eric, and congratulations are in order for all those that managed to get this work done!
    I would like to argue that there is an additional reason for using transactions over those you mention, namely, coordination of access to in-memory data structures.
    Jim Johnson of Microsoft describes this much better than me, so see the articles he has written on the topic.
    It is true, I think, that one big challenge in developing software is that of synchronization of access to ‘things’, something that transactions take care of very well.
    So, if one can get a transaction manager running fast enough, why not use it for this and other purposes, including those of files, databases, queues and so forth?
    No, there is no commit or rollback in the strictest sense, but all sorts of methods, e.g., shared read, protected write, exclusive write, for allowing access to ‘things’ in memory whilst the transaction is running.
    Cheers, john

  5. Yes, very interesting. I also have heard some discussion about the potential for transactional models to help ensure correct behavior in multithreaded environments.
    An interesting aspect of the debate about transactions is whether their cost is worth their benefit.
    Certainly if you can lower their cost you can change the results of that equation.
    One very interesting aspect of Jim’s post is the design that minimizes the involvement of the coordinator – or maybe better said, the coordinator only gets involved when it’s really needed.
    At Digital we had a design that would allow you (the programmer) to initiate a single phase transaction (i.e. single database) and then if you were to access another resource (second database, transactional file, or queue), the transaction manager would automatically promote the transaction from a single phase to a two phase protocol.
    This capability is not supported by some transaction managers, which is (in my opinion) why it was not included in OTS/JTS. The result was the very strange “one phase commit optimization,” which is not really much of an optimization: if all you’re doing is a one phase transaction, there’s no need for the transaction coordinator to get involved at all.
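    The promotion idea can be sketched in a few lines. This is a hedged illustration of the concept only – the names are mine, not Digital’s actual API, and a real manager would do the promotion lazily at enlistment time with durable logging: while exactly one resource is enlisted, the manager commits it directly in one phase with no coordinator; a second enlistment promotes the transaction to the two-phase protocol.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative transaction manager that picks the commit protocol
// based on how many resources were enlisted.
public class PromotingTxManager {
    public interface Resource {
        boolean prepare();
        void commit();
        void rollback();
    }

    private final List<Resource> enlisted = new ArrayList<>();

    public void enlist(Resource r) {
        enlisted.add(r);
    }

    // Returns the protocol actually used, for illustration.
    public String commit() {
        if (enlisted.size() == 1) {
            // single resource: no coordinator, no prepare phase needed
            enlisted.get(0).commit();
            return "one-phase";
        }
        // two or more resources: promoted to full two-phase commit
        boolean allPrepared = true;
        for (Resource r : enlisted) {
            if (!r.prepare()) {
                allPrepared = false;
                break;
            }
        }
        for (Resource r : enlisted) {
            if (allPrepared) {
                r.commit();
            } else {
                r.rollback();
            }
        }
        return allPrepared ? "two-phase" : "rolled-back";
    }
}
```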
