Monthly Archives: June 2005

Open Source Summit

On Friday the Mass Software Council held its Open Source Summit and Open Source SIG Kickoff meeting at Babson College. (See the Dan Bricklin link for additional notes and links to the MP3s.)
Because of our Celtix announcement, Bob Zurek invited me to join a panel discussion on open source business models and strategies, along with Nick Carr of Red Hat, Al Campa of JasperSoft, Douglas Heintzman of IBM, and Bob Lefkowitz of Optaros.
The highlight of the day was Marc Fleury’s keynote over lunch, during which he crossed swords with the audience, and in particular with Doug Heintzman.
Nat Friedman and Miguel de Icaza of Novell provided some very timely advice about how to create a successful open source project, which was useful since we’re just starting out.
You know, this open source stuff is really great. We should have made the jump a long time ago. And it really does seem like a community – at least judging from this event – like a group of people interested in cooperating toward a common goal. That feeling is not as present in Java, or Web services, or even the CORBA community for that matter.
Anyway, the day started with a session about open source licensing. Karen Copenhaver of Black Duck made the point that it is impossible in today’s world to avoid open source. All of us, whether we know it or not, rely on open source software modules every day, such as the Apache Web server (with something like 75% market share) and open source components embedded within commercial software. Therefore licensing is critical to understand and deal with appropriately. She and Ira Heffan of Goodwin Procter reinforced the importance of being careful about using open source in our commercial products, and why the LGPL is the best license choice for Celtix.
Dan Bricklin then asked my panel several questions about our business models and strategies. I answered, as did the guys from IBM and JasperSoft, that open source is one part of an overall business strategy in which we will continue to offer closed source products as well. It represents a new line of business for us, a way to reach out to the developer community, and to help drive the adoption of the ESB as SOA infrastructure.
One thing I was pleased to hear about from IBM was the importance of a pluggable architecture (mentioned in the context of Eclipse), which is exactly what we are creating in the Celtix project. This was also reinforced by Nat Friedman in the next session: a pluggable architecture makes it easier for developers to contribute, and to find a good place to add value above the commodity line.
Marc Fleury, on the other hand, despite having heard from other software vendors that the blended model can and does work, continued to argue that all software licenses should be free. Period.
This is actually a great and grand idea, and perhaps someday in the future we may see such a world. But in the current software business the licensing model is very well established, very widely implemented, and working well. Even if the world were eventually to adopt the license-free model for all software (which seems unlikely), it would be a long time before that happens, and for the foreseeable future the two business models are going to continue to co-exist.
At one point Marc asked, “How will IBM compete with that?” (Referring to the world in which software licenses are free.)
“By buying Gluecode,” said Doug from the back of the room.
IBM’s more serious parry to the JBoss thrust, however, went essentially unanswered. IBM acknowledges the commoditization of certain aspects of software infrastructure while building added value “up the stack.” It was easy for Doug to tick off several added value features for Gluecode from the huge WebSphere family of products.
Marc said he agreed, but he did not provide any specifics about the kind of innovation he might have in mind for JBoss (other than aspects, which is not really an innovation at the next level).
All in all a great event and a great kick off for the Open Source SIG!

Open Source ESB

It’s become clear the world needs an open source Enterprise Service Bus.
IONA is starting an open source ESB project together with the ObjectWeb consortium, best known for its JOnAS application server.
The need for readily available SOA infrastructure is critical. An SOA approach solves the most pressing IT problems, and creates a foundation for agile and responsive business.
IT departments are constantly challenged to do more with less. With core ESB functionality available at commodity cost levels, this essential computing paradigm will more quickly achieve mass adoption and help address important industrial and economic issues.
We will be starting the project this summer, and intend to have the first product available by the end of the year.

It’s the XML, Stupid!

In the early days of SOAP and WSDL, while doing the conference rounds I would often be asked, “What’s the most important thing about Web services?” I would always say “XML, XML, and XML.”
Not everyone agrees with me, but this very fundamental question has recently come up again in the important new paper about the shortcomings of JAX-RPC and other related topics. (If you haven’t read it yet I strongly encourage you to do so.) One of the authors, Edmund Smith, elaborated on some of the thinking behind the article in this interview posted yesterday.
Edmund is correct that it is not as convenient for programmers to deal with the XML as a separate and distinct language, but this is inevitably the right approach. It may seem like a benefit to view XML entirely through a Java (or any other language) filter, using native data types and generated classes, but the XML is what’s important, not the Java code that processes it.
The world has changed, get over it!
A sea of virtual ink has been spilled in the blogosphere over this debate, and over a related debate about the suitability of WSDL as a description language. The complaint in this area seems to be more about tool support for WSDL, and how classes and objects are generated from it, than about WSDL itself (which is, after all, just XML).
Don Box is among those who have reinforced the main thrust of the article, adding that the problem needs to be fixed at the endpoints.
A lot is said about endpoints, and moving integration logic toward the endpoints away from the hub is indeed the correct direction. But what is an endpoint? It’s an application to which a bit of code has been added that understands XML, and how to map (or transform) the XML into and out of it. It is not an application server, EAI broker, or even a Web server.
Tim Ewald, who is listed in the paper’s acknowledgements as one of its reviewers, adds the point that unless Java gets its treatment of XML right, other languages are likely to come to the fore. It’s already happening.
To see where things seem to go wrong, let’s start with the definition of a Web service. It’s the action of sending an XML document to a receiving system that understands how to parse the XML and map it to an underlying execution environment, and optionally receiving a reply, also in the form of an XML document.
I don’t usually tend to quote myself, but on page 2 of Understanding Web Services, I say that:

Web services are Extensible Markup Language (XML) applications mapped to programs, objects, or databases or to comprehensive business functions. Using an XML document created in the form of a message, a program sends a request to a Web service across the network, and, optionally, receives a reply, also in the form of an XML document.
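This definition can be sketched very roughly in code. The handler name, element names, and price value below are all hypothetical, chosen only to illustrate the shape of the idea: an endpoint receives an XML document, uses the document itself to find the program it maps to, and optionally replies with another XML document.

```python
import xml.etree.ElementTree as ET

# Hypothetical handler for illustration: processes a getQuote document
# and builds a reply document.
def handle_get_quote(doc):
    symbol = doc.findtext("symbol")
    reply = ET.Element("getQuoteResponse")
    ET.SubElement(reply, "symbol").text = symbol
    ET.SubElement(reply, "price").text = "42.50"
    return reply

# The mapping step: the root element of the incoming document
# identifies the program that processes it.
HANDLERS = {"getQuote": handle_get_quote}

def endpoint(xml_text):
    """Receive an XML document, map it to a program, and reply in XML."""
    doc = ET.fromstring(xml_text)
    handler = HANDLERS[doc.tag]
    return ET.tostring(handler(doc), encoding="unicode")

request = "<getQuote><symbol>IONA</symbol></getQuote>"
reply = endpoint(request)
```

Note that nothing here depends on how `handle_get_quote` is implemented; the XML documents are the contract, and the handler behind them could be swapped for one in any other language.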

On page 11 I introduce the RPC-oriented and document-oriented interaction styles found in SOAP and WSDL. Originally I had placed these in the reverse order; that is, I wanted to explain document-oriented Web services first, because the technology is a much better fit for asynchronous messaging systems than for RPC-based systems. One of the manuscript reviewers pointed out that the RPC-oriented style is more popular, and therefore I should explain it first. I said OK, but in retrospect I shouldn’t have. The document-oriented style is much more important, and my acquiescence has probably added to the confusion.
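To make the contrast between the two styles concrete, here is a small sketch. The namespaces, element names, and values are hypothetical, and only the SOAP Body payloads are shown: the RPC-oriented body mirrors a method call signature, while the document-oriented body is a self-describing business document that the receiver can queue and process on its own terms.

```python
import xml.etree.ElementTree as ET

# RPC-oriented body: mirrors a call such as getLastTradePrice("IONA").
rpc_body = """<m:getLastTradePrice xmlns:m="urn:example:quotes">
  <symbol>IONA</symbol>
</m:getLastTradePrice>"""

# Document-oriented body: a self-describing business document; the
# receiver decides how (and when) to process it, which is a natural
# fit for asynchronous messaging.
doc_body = """<po:purchaseOrder xmlns:po="urn:example:orders" orderDate="2005-06-20">
  <po:item sku="ESB-1" quantity="2"/>
</po:purchaseOrder>"""

rpc = ET.fromstring(rpc_body)
doc = ET.fromstring(doc_body)
# On the wire both are just XML; only the interpretation differs.
```

The point is that the document-oriented payload carries no trace of any method signature, so it does not tie either end of the exchange to a particular language's view of the data.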
In the couple of years I spent as an editor of the W3C Web services architecture specification the working group spent a lot of time discussing the definition of a Web service. In the end we produced something that I would characterize as vaguely interesting but not really that useful (at least not as the independent, industry-wide definition of a Web service and its associated architecture I was hoping we’d produce).
One thing I managed to gain consensus on however was separating the concept of the Web service from its execution, as in Section 1.4.1:

A Web service is an abstract notion that must be implemented by a concrete agent.
. . . The agent is the concrete piece of software or hardware that
sends and receives messages, while the service is the resource characterized
by the abstract set of functionality that is provided. To illustrate this
distinction, you might implement a particular Web service using one agent one
day (perhaps written in one programming language), and a different agent the
next day (perhaps written in a different programming language) with the same
functionality. Although the agent may have changed, the Web service remains
the same.

In other words, the XML applications comprising a Web service exist independently of any programming language that processes them. A Web service must exist independently of any programming language’s view of it. If it didn’t, we would not achieve the benefit of universal interoperability. And Web services are not RPCs, despite the fact that it is possible to represent and interpret an XML document as an RPC signature.
As the simplest and most abstract form of Web services, the document-oriented style has the advantage of preserving more of the characteristics that make XML so helpful: its independence from any one particular programming language, and its unique type system. Because XML’s type system is independent of any language, it can be used to solve one of the biggest problems in integration and interoperability: data type compatibility.
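A small illustration of why the independent type system matters, using a hypothetical xsd:decimal value. The lexical value on the wire is exact; a consumer that eagerly binds it to a native binary floating-point type silently changes it, because most decimal fractions have no exact binary representation.

```python
from decimal import Decimal
import xml.etree.ElementTree as ET

# A hypothetical xsd:decimal value as it travels on the wire.
doc = ET.fromstring("<price>10.10</price>")
lexical = doc.text  # the wire value, exactly as written: "10.10"

# A consumer that keeps the XML (lexical) value loses nothing:
exact = Decimal(lexical)

# A consumer that binds straight to a native binary float changes the
# value, since 10.10 has no exact binary representation:
rounded = Decimal(float(lexical))
changed = exact != rounded  # True: the eager binding altered the data
```

This is the kind of mismatch that gets worse as types get more complex, which is consistent with what the early interop tests showed.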
Of course those of you working to transparently map XML data types to language specific data types will no doubt mention how hard this is (reference again the “Rethinking” paper for why it isn’t a good idea in the first place). But again this is missing the point of XML. And of course the early interop tests from SOAPBuilders (some of the result links at the bottom of the page are still valid) showed that the more complex the data types, the harder interoperability is to achieve, especially for the RPC style.
The document-oriented style is more abstract and preserves the most important characteristics of XML, meaning that although it takes a bit more effort to deal with, the benefits of doing so are proportionately greater as well.
Don’t get me wrong about JAX-RPC. We support JAX-RPC in our products and it is a useful spec, especially for the Java to XML mappings it provides. And it looks like the next version will be a big improvement.
But until the whole industry learns that document oriented Web services are where it’s at, we will continue to struggle to achieve the value of Web services, and we will continue to wrestle with fundamentally unresolvable incompatibilities with existing RPC technologies.
Nor can I accept the REST proposition of fixed interfaces as the solution, since I need to know what program or queue to hand the XML message to for processing when I receive it. That’s part of the definition of a Web service: the endpoint knows how to find the program the data maps into and out of, and you need a custom interface (or queue) name to do it efficiently.
And yes, WSDL is hard to read and understand. It was really the hardest chapter to write, and I’m still not sure I got it right. But WSDL is not really broken, either (except for the include function I guess). It can stand some improvement, but it is doing the job.
WSDL is (like a SOAP message) XML. And you can parse XML. You can transform XML. You can interpret XML and aggregate XML and split up XML, and of course process only what you can understand (and if there’s something you don’t understand just ignore it).
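Here is a minimal sketch of that last point, with hypothetical element names: a consumer built for one version of a message can keep working when a newer sender adds elements, simply by processing what it understands and ignoring the rest.

```python
import xml.etree.ElementTree as ET

# A hypothetical "version 2" message containing an element
# (volatilityIndex) that a version 1 consumer has never seen.
message = """<quote>
  <symbol>IONA</symbol>
  <price>42.50</price>
  <volatilityIndex>0.18</volatilityIndex>
</quote>"""

# The elements this consumer was built to understand.
KNOWN = {"symbol", "price"}

def consume(xml_text):
    doc = ET.fromstring(xml_text)
    # Process only what we understand; ignore anything we don't.
    return {child.tag: child.text for child in doc if child.tag in KNOWN}

result = consume(message)
```

No regeneration of classes, no broken signatures: the unknown element simply passes through unprocessed.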
The whole thing really has to start and end with the XML, not the Java or the C# or the Perl or the Python or the COBOL, SQL, or whatever. Until we all learn this we are going to continue to endure these endless debates.

TP Never Dies: The ACMS/DECtp Reunion

About 80 of us showed up for the first (and hopefully not the last) ACMS/DECtp reunion this past weekend in Groton, Mass. Some folks traveled from Baltimore, Virginia, Detroit, Ohio, Seattle, and Houston to get there. We had a great time.
ACMS is the TP monitor Digital developed for the VMS operating system. The project started in 1979, and during its 26-year history the product has seen a lot of highs and lows, much like Digital itself. A great ride, but definitely a bit of a roller-coaster.
Along the way ACMS pioneered many innovations, contributed to the development of several TP standards, and very nearly took over the world. The things I learned during my 15 or so years connected with ACMS are things I still use every day of my professional life. A lot of people at the reunion said the same, or similar. Above all, it was really a great group of people to work with, and it was tremendous seeing so many of them again.
In early 1990 Nippon Telegraph and Telephone selected ACMS and we were swept up into a much larger program called the Multivendor Integration Architecture, the goal of which was no less than standardizing IT software, including TP. And we achieved it back then, too, demonstrating 7 independent implementations at Telecom ’95 in Geneva. At that time NTT had submitted the MIA specs to SPIRIT, and gained the support of major telecom companies around the world.
As part of this effort we redesigned ACMS and produced a standards-based, multiplatform implementation called ACMSxp. But like many legacy replacement products on Unix and Windows (anyone remember CICS 6000?), it never did as well as the original.
Around this time ACMS also became part of DECtp, which was a big program related to the “Mainframe VAX” (aka the VAX 9000) initiative, since TP is so important to the mainframe market. We even had our own building, with an official ribbon cutting ceremony and everything. Digital started aggressively hiring the best and the brightest TP talent, much like Microsoft did about six or seven years later.
DECtp was one of the turning points in Digital’s history, since while we were trying to compete with IBM at the high end of the market, we were more or less ignoring the PC and the low end of the market.
Along the way ACMS pioneered remote procedure call based transaction processing, the three-tier architecture so widely used in modern application servers today, the interface definition language used in CORBA (and other places), and the idea of abstracting implementation details out of the “container” now found in COM+/.NET and EJB. Yes, all of these things were there when the product was first released in 1984, and yes, sometimes we had a very difficult time selling it (although the customers who understood it really loved it).
It’s amazing what bits and pieces of MIA and ACMSxp are still around.
Through the support of the telecom companies in charge of MIA and then TMF/SPIRIT, the ACMS approach came close to taking over the world, giving CICS a real run for its money. The Structured Transaction Definition Language (STDL) based on ACMS was nothing less than an attempt to provide the equivalent of SQL for the TP world. But (and perhaps predictably since it wasn’t based on their products) IBM and other TP vendors were strongly opposed. The specification was approved by X/Open only because of user pressure for a portable TP language.
With object orientation on the horizon along with CORBA/OTS, COM+, and even EJBs, STDL eventually became an interesting step in the evolution of TP standards.
Today we are seeing a new approach to the problem of software standardization, in the form of a text-based solution built on XML, SOAP, and WSDL, with even more flexibility gained by interpreting an even higher-level language. And perhaps the industry is finally mature enough for standardization, including TP. The lessons of ACMS and DECtp continue to serve us well, and the people are all still great people.

Answer a Negative Blog?

Steve Vinoski has provided an excellent rebuttal to Dave Linthicum’s recent post about IONA.
The only thing left to do is wonder what could have prompted Dave to post a negative entry in the first place, and whether or not I should respond.
Never mind Dave’s inaccuracies and his clear misunderstanding of SOA; the question is what this could possibly be an appropriate response to. I cannot imagine what it might be.
Yes, the software industry has been going through some hard times during the past few years. But this blog entry doesn’t seem like it helps anything, and may even reflect badly on the author and the practice of blogging. And any response might also reflect badly on me.
This is, by the way, something I would never consider doing. And perhaps it is worse to respond at all, since it just serves to call further attention to the original post and dignifies it with a reply.

Occasional Music Post

The other day a friend of mine sent me an email containing links to MP3s of the Cream reunion concert. Great stuff. I went right out the next day and bought the remastered edition of Wheels of Fire and the recently released BBC sessions. I wonder whether the record companies have noticed an increase in worldwide Cream CD sales?
It also turns out there are a lot of bootlegs on the Web of Cream concerts from 1967-68. These are really incredible sessions. Cream was one of the best jam bands and no doubt influenced many modern bands. All three of them would start improvising together, and just when it seemed like they must be playing as hard as they possibly could, one of them would kick into another gear and the others would respond. I don’t think there’s ever been another band like it, with three top musicians playing together so well. The best thing about the reunion was they didn’t try to recreate the original sound but instead produced an updated edition, jamming together again at their current levels. The guys can still play!
This reminds me of the old discussion I used to have with my college friend, Phil, who was a drummer, about who the best drummers were. Our top three list included Ginger Baker, Keith Moon, and Robert Wyatt. Wyatt may seem like a strange name to have on the list, but just listen to Soft Machine 1 through 4 and you will see what we mean. In those days I was just glad to meet someone else who was a fan.
Today it seems like they’ve issued every possible live Soft Machine recording, whether terrible or not. I have bought almost all of them. Soft Machine was another trio (at least originally) that played incredible jams, so again, every live recording is (in theory anyway) worth having since you get a variation of the tunes and sometimes a completely different interpretation.
This seems to challenge a recent assertion in a New Yorker review of some new books on the effect of recording on music. The main idea is that the availability of a recording device changed forever how musicians play and how people view music. Instead of being a social event that included a human aspect (i.e., enthusiastic playing, mistakes and all), recording transformed music into something mechanical, and either caused musicians to self-consciously strive for perfection or to add things in the studio that could not be recreated live. And people would no longer view music as a moment in time to be enjoyed, but rather as a captured performance to be played over and over.
Among the statements in the article is that live music has suffered because of a decline in spontaneity, although the reviewer, Alex Ross, does admit the same isn’t true for pop music; spontaneity is certainly much in evidence on the Cream and Soft Machine recordings.