Monthly Archives: February 2007

Jon’s Conversation with Steve about the Workshop

Jon Udell, who unfortunately isn’t able to attend next week’s workshop, apparently did the next best thing by interviewing Steve Vinoski for this week’s podcast.
As Steve said, it’s exactly this issue of “existing enterprise IT systems are not going away any time soon” that’s the reason we are having the workshop. I remember visiting an automobile factory in the Netherlands about 10 years ago, back when I was working for Digital. I was there as part of an effort to help them convert their transaction processing systems from the VAX to Alpha.
During a tour of the factory floor I noticed an old VAX model controlling part of the production line and asked, “How about that, do you plan to replace that one too?”
They said, “We plan to replace that the day before it fails.” This was a joke of course, but it was their way of saying that the production line was working just fine, thank you, and unless they absolutely needed to mess with it, they were going to just leave it alone as long as they could. There are literally thousands of systems like that, running businesses all over the world.
Recently the workshop seems to be gaining some momentum. This past week we received some very good additional papers, including one from Pete Lacey, a long-promised one from Boeing, and a surprise from Westinghouse Rail (the latter via encouragement from presenters Mark Baker and Mark Nottingham – so thanks, Marks!).

Goodbye Steve, and good luck

Update 2/16: See also William Henry’s post.
It’s with mixed emotions that I write about Steve’s departure yesterday.
I am really going to miss him. It really hasn’t sunk in yet. No more Steve!
But I really also want to wish him the best. I envy his opportunity – as he said, things like this come along once in a lifetime.
I know that Steve loved working for IONA, just as I do. It really is a great company, full of exceptional people. I know how hard it was for him to leave. I got the impression he was just trying to pull the band-aid off quickly, if you know what I mean.
I first met him when I was working for Compaq (by way of being at Digital when Compaq bought them). I was in the enterprise server group, and we were evaluating technology for potential partnerships, including IONA’s.
A couple of nights ago I was sitting on the window ledge beside Steve’s cube, chatting while he was going through his stuff. He pulled out a business card I’d given him in those days and we reminisced a bit. As usual, he ended up giving me one of his familiar self-deprecating jokes: “Yeah, the technology turned out to be ok, but what about the bozo they sent over?”
I am really going to miss that.
One time he “shamelessly” promoted his new book on CORBA. So of course I answered him right back by shamelessly promoting my book on TP. I learned a lot from that book, and from him.
Shortly after joining IONA I went on a customer visit where we were both scheduled to speak. But the signs put up all over the halls said: “Steve Vinoski is coming!” As I discovered, in the CORBA world (and rightly so) Steve is God. (I mean this in the Eric Clapton sense, of course. 😉)
Steve is one of those multi-talented guys – he can play the blues harp well enough to sit in with a house band. He can throw a frisbee well enough to compete internationally in freestyle – and I would say golf too (having played against him, throwing from the women’s tee to try to keep up).
He can code, write, speak, and think well, and understands how to contribute meaningfully, and in multiple ways, to a company’s goals. But he also cultivated a larger role in the industry, and as anyone who’s followed his career knows, he’s contributed significantly, whether by writing journal articles, serving on myriad conference program committees and standards committees, or, of course, blogging.
He also has a wicked eye for fashion, as can be seen in this photo.
MeAndSteve.jpg
A Hawaiian Shirt Day
Come to think of it, there are some things I am not going to miss…
But seriously, during his 10 years here he led the most challenging engineering projects, including the project that created what are really our “crown jewels” – the Adaptive Runtime Technology (ART) on which our modern CORBA and SOA products are based.
Steve characteristically recognized the value in things like REST before many of us and would not hesitate to challenge the prevailing wisdom, whether internally or externally, when he saw something that made sense. His most recent column is a good example of the kind of clear thinking and clear writing he’s capable of.
No one around here feels bad about this, although we are really going to miss him. Steve left on the best of terms, to pursue his dreams. He gave us his all for more than ten years, and a pile of great memories.
Good luck!

WS-* vs REST is not the question

Update 2/13 5:10 pm. Apparently I initially missed this post from Jerome Louvel on the reconciliation idea…
Update 2/16. See also Stefan Tilkov’s summary of the workshop papers.
It’s nice to see the Workshop get some attention in the blogosphere (hope I constructed those Google and Technorati searches ok).
For a while there I wasn’t sure it was going to happen.
But it’s a bit disconcerting to see how much of the interest is sparked by the very critical papers that take a kind of “anti-WS-*” or “pro-REST” position. Not much has been blogged about the papers that take a more moderate, middle ground view, or those from the user side that state requirements.
In putting the workshop together we tried to solicit as many position papers from the user side as we could, since it is hard to really evaluate the fitness for purpose of any given technology unless you have a good requirements statement.
The big question isn’t whether WS-* is too complex, or REST is better. The question is “for what?” In the context of this workshop, the “what” is enterprise software standardization.
My original statement to the W3C was about the importance of enterprise software standardization, and the question that eventually led to the Workshop is whether anyone is really doing anything about this problem. You could of course just buy all your software from one supplier, but I am not sure that’s such a great idea ;-).
By way of contrast, anyone can see how well standardization is working out for the Web. (And royalty-free standards, I should add, since Tim famously decided not to patent his invention.)
Here’s a diagram of the problem I’m trying to find a good standards-based solution for.
Diagram2.png
Typical Back Office Spaghetti Mess
Most medium and large companies have some version of this diagram.
The reason for it is pretty simple: companies just went out and developed the applications they needed to run the business, more or less one by one as they went, using whatever technology was available, whatever someone decided to buy (*ERP*), or whatever language the developers on the team knew. The result is a lot of “stovepiping,” simply because most folks were not thinking about IT as an enterprise-level asset – they were thinking about the ROI of the next manual process they wanted to automate.
So what do we propose to solve this problem?
Tim Bray’s blog entry includes an excerpt from Paul Downey’s workshop paper, which I actually took to be fairly neutral (although Tim of course, and famously, is not).
Tim also focuses on the WCF reference implementation idea. Certainly there’s a lot of truth to this, given the large number of Windows developers. WCF may very well come to define what WS-* means, at least for many, many developers. David Chappell has said something similar.
In an interview published last year, Tim O’Reilly gives the tools argument. It’s very possible that an overly strong focus on tools contributes to the difficulty of innovating here and really addressing the problem. For one thing, what is easiest for the developer is not necessarily the best thing for the enterprise. If the goal is to make a service easily reusable, for example, this might be harder on the developer than simply implementing the function in a standalone application.
Among the SOAP bashers there’s very little talk of alternatives. Nelson is one of the few to even bring up this topic.
So where are we on the enterprise software standardization question? WS-* has a lot of critics but no good proposals for alternatives.
This will no doubt be among the more important topics to be discussed at the Workshop. Mark Baker’s paper takes one view of it, and I certainly hope we can find plenty of time for it.
One perspective is that WS-* exists for the purpose of adding back into HTTP what was intentionally omitted, and of course this is a major reason for RESTians to object.
Consider the strange situation of WS-ReliableMessaging. Web services specifications are designed to be independent of the underlying communications transport. But WS-RM only makes sense over an unreliable protocol such as HTTP.
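To make that concrete, here is a toy sketch (in Python, with made-up message names and a simulated lossy transport – it is not WS-RM itself) of the acknowledge-and-retransmit pattern that a reliable-messaging layer adds on top of a transport that can silently drop messages:

```python
import random

def unreliable_send(message):
    """Toy stand-in for a transport that may silently drop a message."""
    return random.random() > 0.3  # True means the receiver acknowledged it

def send_reliably(messages, max_retries=5):
    """Number each message and resend it until acknowledged (at-least-once)."""
    for seq, msg in enumerate(messages, start=1):
        for attempt in range(1, max_retries + 1):
            if unreliable_send((seq, msg)):
                print(f"message {seq} acknowledged on attempt {attempt}")
                break
        else:
            raise RuntimeError(f"message {seq} was never acknowledged")

send_reliably(["order created", "order shipped", "order invoiced"])
```

The real spec layers sequence lifecycle, acknowledgement ranges, and delivery-assurance policies on top of this, but the essence is the same: number the messages and resend until they are acknowledged – reliability the transport itself doesn’t promise.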
So where are we? The adoption of Web services is increasing year over year, and yet the opposition voices grow louder. Why is that?
An interesting Forrester report just came out that indicates very few vendors agree on what WS-* is. In fact only the basic specs are widely implemented. Maybe the basic specs are enough?
Also we have the news that the use of REST is on the rise for AJAX developers. Is that a surprise?
And what about this diagram?
DiagramSVC2.png
Services “Standardize” or “Commoditize” the Problem
Adding services to the existing technologies can help solve the spaghetti/stovepipe problem by providing a standard interface to any application, so any application can be accessed the same way, regardless of its implementation technology. Can’t it?
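As a rough illustration of what I mean by a standard interface, here is a hypothetical sketch (in Python, with invented names, standing in for a service contract such as a WSDL definition): callers program against one interface, and each existing application sits behind an adapter.

```python
from abc import ABC, abstractmethod

class CustomerService(ABC):
    """One standard interface, however each back-end system is implemented."""

    @abstractmethod
    def get_customer(self, customer_id: str) -> dict:
        ...

class MainframeCustomerAdapter(CustomerService):
    """Hypothetical adapter in front of a legacy back-office application."""

    def get_customer(self, customer_id: str) -> dict:
        # In real life this would bridge to the legacy system (MQ, screen scraping, etc.).
        return {"id": customer_id, "source": "mainframe"}

class ErpCustomerAdapter(CustomerService):
    """Hypothetical adapter in front of a packaged ERP application."""

    def get_customer(self, customer_id: str) -> dict:
        return {"id": customer_id, "source": "erp"}

# Callers depend only on the shared interface, not on any implementation technology.
for service in (MainframeCustomerAdapter(), ErpCustomerAdapter()):
    print(service.get_customer("42"))
```

The point of the sketch is that the calling code never changes when the back end does, which is exactly the property we want a standard service definition to give us across the spaghetti diagram above.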
Can the definition of Web services be influenced to solve this problem? Or is the Web a better place to start (again)?

Thinking about Jim Gray (again)

Yesterday I wrote “Thinking about Jim Gray” because I had been thinking about him on and off for most of the week. It looked like the search was over, and I wanted to say something.
Today I find myself unable to stop thinking about him. It’s partly because I’ve been working on updating the TP book, and that keeps him in my thoughts because of his relationship to the book, but it’s also because updating the book involves a lot of tedious work, and my mind tends to wander off.
So much is out there about him. I take breaks from the manuscript and search the news and the blogs. It’s unbelievable. Now the search is continuing through private efforts and by searching photos on the Web.
I did go to the Amazon site, and I went through some of the photos. It seemed like searching for the proverbial needle in a haystack, but I guess you never know what might help. Can you imagine if one of us finds him that way? It is already becoming a phenomenon.
A good place to find out what’s going on is the Tenacious search blog. It summarizes what everyone is doing — computer analysis, postering, analyzing cell phone records, shipping records, Web cams, reports from the family, etc.
You can also see some of the photos here in a different format from how they’re presented on Amazon.
Some folks suggest Jim might just have kept on sailing to Mexico or somewhere else across the Pacific…
If we knew what happened to him, that would be one thing. For example, I do not want to write in the past tense, not yet. Although the news isn’t good, there is still hope.
I guess what mainly strikes me is the huge amount of interest. Everyone who works with him says what a great guy he is, and it’s amazing how much he’s contributed to computer science. Everyone seems to feel about him the way I do — as a friend, but more — someone to really look up to.
As I work on the book I find myself thinking about him and the example he set.
Wherever you are, Jim, I hope you can sense some of what’s going on – and see how you have truly touched so many lives. So many of us thinking about you, and still hoping you are well.

Thinking of Jim Gray

Update Feb 3. Help find Jim. Also: Blog tracking search progress. (Doesn’t require logging into Amazon.)
He is not only one of the smartest guys I ever met, he’s also one of the nicest.
I was lucky enough to be working at Digital in the late 80s when we started hiring the big brains in transaction processing — including Jim — to help us compete against IBM. Never mind that we should probably have been focused on personal computers instead. For someone whose first job was converting batch applications into online TP applications, and who loves TP as much as I do, it was a great time.
He visited Digital’s TP headquarters in Littleton, MA (where I worked) often, and his presence was always felt throughout the building.
I last heard from him about a year ago, when he sent comments on the proposal for the second edition of the TP book Phil and I wrote (we are working on that second edition right now in fact).
Although it might have been easy for him to consider our effort to be in competition with his book, he actually gave us the most thorough review of the manuscript and some very helpful comments, and agreed to write the foreword.
The tremendous industry reaction, as seen in the many articles and blog entries, is completely understandable. Jim invented, or helped invent, many technologies fundamental to the way the world works every day, including the relational database, high availability and fault tolerance mechanisms, scalability algorithms, transaction processing mechanisms, and many, many others. Yet you would never hear him boast about it. He always preferred to think of himself as part of the team.
When he won the Turing Award I sent him a congratulatory email and received back a characteristically nice and humble message.
I cannot be too sad, because he is still just missing, but that news was a great shock, and the fact that it has been almost a week now is not good.