Monthly Archives: October 2004

The Red Sox

It’s often hard to explain to someone who didn’t grow up with it. What it means to win the World Series, that is.
Eighty-six years is a lifetime, and most Red Sox fans figured it wasn’t going to happen in theirs. It is very hard to explain what it meant for the ball to go through Buckner’s legs, for Bucky Dent and Aaron Boone to hit their game-winning home runs, or for Bob Gibson to go 3-0 in 1967 unless you lived through it.
Boston is really a baseball town. Sure, the Patriots have won the Super Bowl. But those celebrations will look like nothing compared to what we will see now.
There is a tradition, a history, and a love for the game here unmatched in other cities. The rivalry with the Yankees is one of sports’ greatest and longest. If you grew up a Red Sox fan you knew, absolutely for certain, that they would let you down in the end.
They would hook you with early success and get you to believe they might win, only to let you down. The early winning streak, being in first place early on, winning the first few games of a playoff series – all of it just set you up for the letdown. It was like Murphy’s Law: the Red Sox could only win in order to build up false hope. This victory shifts the balance in the universe.
Somehow we were ready for it this time. The bar in San Francisco where I was when they won – it was mostly Red Sox fans. The same guy came in two nights in a row to try to sell caps with “World Series Champs” on them. But no one would buy them, not till the Red Sox had actually won.
Yes, now we can finally believe.

Wrong Reaction on Policy Initiative

The common reaction to any proposed WS-* spec these days is perhaps rightly skeptical: there are already so many of them that it’s very hard to tell whether anything new is warranted.
But Paul Krill mistakenly applies this reaction to the news of last week’s Constraints and Capabilities Workshop sponsored by the W3C.
The Workshop wasn’t aimed at creating new specs. Its purpose was to discuss whether or not the industry needed an open process (possibly initiated by the W3C) for additional work on the aspect of Web services description commonly known as “policy.” Several WS-* policy specs already exist, as does a proposal from the XACML group at OASIS called the Web Services Policy Language.
Workshop attendees included the authors of these specifications and the authors of Rei, a “universal” policy language still at the research and prototype stage. The variety of efforts underway shows that policy is widely recognized as a critical aspect of Web services metadata definition and management.
So this was a case of exploring the various aspects of the problem in order to identify the best candidate solution and bring the community together, not of creating yet another specification.
It’s clear that the WS-* specification complexity issue has reached the point at which we all need to be concerned about it. But we should also be careful not to blow it out of proportion, to the point where even efforts to simplify the situation in a given area (such as policy) are viewed as contributing to the problem, when the opposite is in fact true.

W3C Web Services Constraints and Capabilities Workshop

Someone who said he was new to the W3C put the question best: “Why are we even talking about the Semantic Web in a workshop on Web services?”
The answer, of course, is that this is the W3C, and that’s how things work. The Semantic Web represents a significant activity within the W3C, and the Workshop presentations and discussions included significant input from folks involved in that activity. (The constraints and capabilities workshop is also known as the policy workshop.)
The trouble with the Semantic Web, however, as another attendee put it, is that it hasn’t achieved industrial acceptance. As elegant and appropriate as a solution based on RDF might be for expressing policy, Web services software vendors do not implement RDF and could not propose it to their customers.
Thus the discussion tended to clearly identify the positions of those in favor of starting a working group based on the WS-Policy set of specifications (most Web services software vendors) and of those in favor of expanding the definition of policy and considering expressions of policy in languages other than XML (that is, the Semantic Web proponents).
Under W3C process, the W3C team will evaluate the results of the Workshop discussion as the consortium considers initiating a new area of work.
It seemed as if everyone could understand and support the requirement for standardizing metadata expressions of policy for the so-called “qualities of service,” including such traditional middleware features as security, reliability, and transactions. Some difference of opinion emerged over what were called “higher level” or business policies, such as “if the bank account was opened in 2003, apply one set of rules; if it was opened in 2004, another,” and so on. This more or less boils down to a difference of opinion over whether policy should be expressed as simple declarative statements or as complex rules.
In the end, everyone who attended seemed to agree that policy represents a significant area of work that the W3C should undertake. I think so, too, as Rebecca Bergersen said in our position paper and presentation yesterday.
What I worry about, though, is that we’ll end up discussing the same issues raised in the Workshop over and over again in any possible working group, despite the likely starting point of WS-Policy.

Complexity Summary

I’d like to sum up the discussion on the complexity topic, including the SOAP and REST comparison, before moving on to other topics for the blog.
I also still think it would be interesting to try out an XML example, and I hope to get to that soon.
I’d like to thank everyone who participated in the discussion by posting comments. I wanted a discussion, and I got one ;-).
Ok, so what have I learned?
— I still have a lot to learn about REST (and apologies again for the mistakes)
— REST can indeed be useful for program-to-program communications, and therefore it makes sense to include it under the “Web services” umbrella, as they do at Amazon.com
— REST has advantages over SOAP for simple applications and is also a lower-cost solution for many applications
— SOAP has advantages over REST in terms of tool support, an interface definition language (WSDL), and enterprise qualities of service (reliability, security, and transactions)
— Combinations of REST- and SOAP-based solutions seem very plausible and practical for applications that span simple and complex requirements
— A lot of resentment still exists toward software companies for over-hyping Web services, and the proliferation of WS-* specs isn’t helping (and of course we had another one today, WS-Management)
A lot of discussion tends to get generated over “what can technology x do” or “what is technology y capable of.” What I’m interested in is the best use of technology x or technology y, not just what’s possible. As I like to say, it’s possible to build a phone using string and tin cans, but that doesn’t mean it’s the best technology for the application.

Complexity in REST and SOAP

Ok, so let’s take a stab at the main question Pete raised in his comment on the previous blog entry, which has generated some further comments. Hopefully this will generate further good and helpful discussion…
The main question as I understand it is: what are the use cases for Web services as distinguished from the use cases for REST, and why would you use one or the other? And when is the added complexity of Web services justified?
This is probably a bit of an oversimplification, but to me the best summary is that you use REST when you want to display XML data in a browser, and you use SOAP when you want to pass XML data to a program.
Last year, 80% of the developers using Amazon.com’s Web services used the XML-over-HTTP version and 20% used the SOAP version. This makes sense because most of Amazon.com’s users are interested in displaying the data in a browser in virtual storefronts. However, Amazon.com includes both flavors in its “Web services” since it also has developers interested in passing the data to programs. (For clarity, I’ll adopt Amazon’s current terminology: REST and SOAP Web services.)
One way to look at REST is that it basically means putting Web service requests into the URL.
This is the URL I get when I type “Skateboots” into Google.com’s search window:
http://www.google.com/search?hl=en&ie=UTF-8&q=%22Skateboots%22&btnG=Google+Search
Google basically takes what I typed into their Web form and appends it to their URL as parameters, along with some other information about the character set and the operation to be performed (i.e. “Search”); this executes their service and returns the results to my browser. Google also asks me if I meant “Skate Boots” (i.e. two words instead of one). This is the URL generated when I clicked on that link:
http://www.google.com/search?hl=en&ie=UTF-8&q=%22Skate+boots%22&spell=1
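To make the mechanics concrete, here is a minimal Python sketch of that encoding step. The parameter names are simply copied from the URLs above, not taken from any documented Google API:

    import urllib.parse

    # Encode the form input as URL query parameters, the way the
    # browser does when the search form is submitted.
    params = {
        "hl": "en",           # interface language
        "ie": "UTF-8",        # input character encoding
        "q": '"Skateboots"',  # the quoted search term
        "btnG": "Google Search",
    }
    url = "http://www.google.com/search?" + urllib.parse.urlencode(params)
    print(url)
    # http://www.google.com/search?hl=en&ie=UTF-8&q=%22Skateboots%22&btnG=Google+Search

The whole request is just that one string: the encoding does all the work.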
The recently released (August 2004) Web services V4 from Amazon.com continues to provide both REST and SOAP requests for its Web services. Now, some people would argue that REST and Web services are different things, while others would argue that you can do everything with REST that you can do with Web services. To me they seem like different formats for essentially the same operations.
The difference seems to boil down to whether or not you want to use a browser to interpret the result data. If you do, then you might as well use the REST requests (i.e. place the parameters in the URL). However, if you are interpreting the data in a program, then you should probably use the SOAP requests since the parameters are carried within the XML message (instead of in the URL).
This is the format Amazon gives for its REST-based Web services requests:
http://webservices.amazon.com/onca/xml?Service=AWSECommerceService
&SubscriptionId=[your subscription ID here]
&Operation=ItemSearch
&SearchIndex=Books
&Keywords=dog
&ResponseGroup=Request,Small
After you get a subscription ID (I’m sorry, but I didn’t go so far as to obtain one) you can simply type the request parameters into the address box of your Firefox (or other) browser, hit return to execute it, and view the XML results. Nothing could be simpler.
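For completeness, here is a rough Python sketch of what the do-it-yourself client code looks like when you consume that same REST response from a program rather than a browser. The subscription ID is a placeholder, and the “Title” element name in the last loop is illustrative, not taken from Amazon’s actual response schema:

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    # Build the REST request: every parameter travels in the URL.
    params = {
        "Service": "AWSECommerceService",
        "SubscriptionId": "YOUR-SUBSCRIPTION-ID",  # placeholder
        "Operation": "ItemSearch",
        "SearchIndex": "Books",
        "Keywords": "dog",
        "ResponseGroup": "Request,Small",
    }
    url = "http://webservices.amazon.com/onca/xml?" + urllib.parse.urlencode(params)

    # Fetch the response and parse the XML.
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)

    # Walk the results (the element name here is illustrative).
    for title in tree.iter("Title"):
        print(title.text)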
But if you wanted to send the XML data to a program, possibly using another transport, the SOAP request format will do the job better since it’s designed for that, and everything the message needs is within the envelope (i.e. no dependence on transport features such as fixed interfaces and URL parameters). With the REST request it’s necessary to write code yourself for everything other than displaying the XML in the browser. And when you add features important to enterprise applications, such as security, reliability, and transactions, you also have to figure out how to write the code for that. The code could get pretty complex pretty quickly.
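For comparison, here is a sketch of roughly what the equivalent SOAP request looks like when built by hand. The envelope structure is standard SOAP 1.1, but the namespace, endpoint, and element names are my assumptions; a real client would take them from Amazon’s WSDL:

    import urllib.request

    # A hand-built SOAP 1.1 envelope carrying the same parameters the REST
    # version put in the URL. The namespace, endpoint, and element names
    # are assumptions; a real client would take them from Amazon's WSDL.
    envelope = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <ItemSearch xmlns="http://webservices.amazon.com/AWSECommerceService">
          <SubscriptionId>YOUR-SUBSCRIPTION-ID</SubscriptionId>
          <Request>
            <SearchIndex>Books</SearchIndex>
            <Keywords>dog</Keywords>
            <ResponseGroup>Request,Small</ResponseGroup>
          </Request>
        </ItemSearch>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        "http://webservices.amazon.com/onca/soap",  # endpoint is an assumption
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": ""},
    )
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))

Notice that nothing essential rides in the URL anymore: the envelope is self-contained, which is exactly what lets it travel over a transport other than HTTP.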
Now the next part of the discussion normally revolves around development tools, since they are pretty important in the use of SOAP (and WSDL for that matter). The basic question here is whether tools like XMLSpy, Visual Studio, WebLogic Workshop, or Artix Developer make life easier or harder. According to Jeff Webb, SOAP is harder if you want to change the auto-generated VB classes. If you just want to use the SOAP request unmodified and take advantage of the generated code, however, that is probably the simplest and most straightforward way to handle your XML.
And there’s the question of WSDL, which REST requests don’t have. Publishing a WSDL file on the Web makes your Web service part of a larger “ecosystem,” since most Web services toolkits can import WSDL and automatically generate not only the SOAP message but usually also the proxies and stubs to process it. Many tools are also available to generate WSDL from existing application metadata, including CORBA IDL, COBOL copybooks, C++ and Java classes, EJBs, and COM objects, to name a few.
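To give a feel for the toolkit side, here’s a sketch using SOAPpy, a Python SOAP library current as of this writing, which builds a proxy at runtime from a WSDL file. The WSDL location is a guess, and the operation call is commented out because I haven’t verified Amazon’s exact interface:

    # SOAPpy reads a WSDL file and generates a proxy at runtime;
    # other toolkits generate static stubs at build time instead.
    from SOAPpy import WSDL

    # The WSDL location is an assumption, not a verified URL.
    proxy = WSDL.Proxy("http://webservices.amazon.com/AWSECommerceService.wsdl")

    # List the operations the toolkit discovered in the WSDL.
    print(proxy.methods.keys())

    # An illustrative call; the argument structure would come from the WSDL:
    # result = proxy.ItemSearch(SubscriptionId="YOUR-SUBSCRIPTION-ID",
    #                           SearchIndex="Books", Keywords="dog")

The point is that the developer never writes the envelope by hand: the WSDL drives everything.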
If you are already developing applications using an IDE such as Visual Studio or Eclipse, Web services support is already built in. And as more of the WS-* specifications gain adoption, they are added to the tools. For example, IONA’s Artix already supports WS-Security and automatically generates the SOAP headers for it. I think the principle of “as simple as possible, but no simpler” definitely applies here, and the reverse is true too: as complex as necessary, but no more.
I think that as the Web evolves “beyond the browser” it naturally encounters more complexity as it reaches enterprise applications. It would be ideal if everything could be accomplished using simple HTTP interfaces and URL parameters. But the IT world beyond the Web needs reliability guarantees and transactional integrity for legal and business reasons – ensuring that stock trades are carried out in the order the messages were received, for example, or verifying the identity of someone who sends a large purchase order for tons of steel. And the IT world is not going to change, not for a long time. It’s as simple as that: the Web needs to adapt to the IT world and incorporate some of the complexity already there.
Ok – so let’s have some comments, and a good discussion!