Monthly Archives: July 2007

Crossroads Guitar Festival 2007: Pretty Good Concert ;-)

It was long, and sometimes hot, and the ground was covered with some hard plastic tiles, but it was about as good a day of guitar music as anyone could hope for.

MSN has some clips available (although strangely not of Steve Winwood’s “Dear Mr. Fantasy”).

The official summary was very positive, of course… but there was really not much to complain about other than a few sound system glitches.

The Tribune blog reprinted the reviewer’s final take with a somewhat more balanced view, although still full of deserved superlatives, and it included something that John, Brett, and I picked up on – who was that unbelievable young bass player with Jeff Beck?

The details from the Tribune blog, on the other hand, highlighted B.B. King passing the torch. I would have said it was more like Eric Clapton passing the torch… to Derek Trucks. I would also quibble with his opinion that Susan Tedeschi “nearly stole the show” from Derek Trucks. (Sorry, but Susan is not Derek, no matter how well she sings!)

Some good photos on the Rolling Stone site.

Additional info here, along with a couple of video clips.


The photo John took of me and Brett.


The photo I took of John.

More photos and notes on Flickr.

(If I get my video clips uploaded I’ll update the blog.)

Update, first video uploaded:

Update 2: someone posted a Clapton/Derek Trucks duet.

Update 3, Aug 3

Rest of the videos finally uploaded:

John McLaughlin

Derek Trucks Band

Susan Tedeschi

Derek Trucks & Susan Tedeschi

B.B. King

Eric Clapton

Robbie Robertson

Steve Winwood

Eric Clapton & Steve Winwood

Steve Winwood & Eric Clapton

Steve Winwood – guitar solo

Buddy Guy et al


Off to Chicago

This afternoon I’m heading to Chicago for Eric Clapton’s Crossroads Guitar Festival tomorrow. I’m going to meet my friend Brett from New Zealand there, and we’ll be going down to Toyota Park with my friend John from Chicago.

I’m really looking forward to this – the festival has probably the best lineup of guitar players on one stage anywhere. It looks like they will be broadcasting it on MSN.

By sheer coincidence, my son Alex is heading out tomorrow for the Rock the Bells festival in New York, where he’ll finally have a chance to see one of his favorite bands, Rage Against the Machine.

Last year when my brother and I were at the Cream reunion in New York, Alex was at the Audioslave concert in Virginia, near his college.

Brett and I are also going to see Walter Trout at the House of Blues tonight.

And of course I need to find time to go visit the Jazz Record Mart while I’m in town – always manage to spend too much there and somehow never seem to have enough blues CDs…

Should be a great weekend!

The Problem with SCA

David Chappell recently published his Introducing SCA whitepaper, and it is a very good introduction to SCA. I recommend it to anyone interested in getting a handle on SCA.

In his summary of the effort on his blog, he notes the major difficulty he encountered: SCA participants seem to have different opinions about what’s important about SCA.

David has blogged about this before, based on his experience chairing an SCA panel at Java One. He has also argued (and continues to take this view) that the new Java programming model is what’s important.

My view is that the service assembly model is the most important thing, and I guess it’s fair to say that IONA as an SCA vendor will emphasize that view as we incorporate SCA into our SOA infrastructure product line.

I don’t think the world needs another Java programming model, and although I understand the comparison David makes with WCF, I don’t think it makes as much sense for the Java world. In fact the Java world appears fragmented enough already.

I was at Tech Ed ’03 when WCF was announced, and I clearly remember hearing the objections from some of the developers in attendance when they discovered that Microsoft was asking them to change how they developed their Web services. And I do agree that WCF is a nice piece of work, with a great architecture (very similar to IONA’s architecture, BTW).

Our view, and we did express this during the SCA meetings (and we were not alone), was that the metadata should be incorporated into the assembly spec as much as possible, and the metadata remaining in the Java annotations should be minimized.
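To make the distinction concrete, here is a rough sketch of the annotation-centric style, using the @Service and @Reference annotations from the SCA Java spec (the QuoteService names here are made up for illustration):

    // Annotation-centric style: the service contract and the wiring
    // dependency are both declared in the Java code itself.
    import org.osoa.sca.annotations.Reference;
    import org.osoa.sca.annotations.Service;

    @Service(QuoteService.class)
    public class QuoteServiceImpl implements QuoteService {

        private PricingService pricing;

        @Reference  // this dependency must be satisfied by the assembly
        public void setPricing(PricingService pricing) {
            this.pricing = pricing;
        }

        public double quote(String symbol) {
            return pricing.price(symbol);
        }
    }

In the assembly-centric view, the same class would stay as close to plain Java as possible, and the facts that it is exposed as a service, that pricing is a reference, and what that reference is wired to (and under which policies) would live in the .composite file instead, so the component could be reassembled and redeployed without touching the code.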

I suppose it is not surprising that the committee work ended up placing more or less equal emphasis on both the assembly model and on the Java programming model, since the participants in the meetings also represented the division of opinion David encountered at his Java One panel. But this division of opinion continues to be a problem for SCA.

Open Source Update

Last week Darryl Taft asked me to comment on IONA’s open source announcement, for his eWeek article on open source SOA.

I am always glad to hear from Darryl. He’s one of the best in the business. And it’s not unusual for him to pick out a few sentences or paragraphs that best fit the story, and omit some other things I say. There’s no way I can complain about what he included, and I think he did a great job on the story. But this time something interesting got left out.

I told Darryl that we thought of the LogicBlaze guys as having been more successful at open source than we were (the part about open source being a challenge for a commercial software company did make it into the story), although we have of course made excellent progress with CXF and STP.

The concern after the acquisition – and I think it’s fair to say this was felt on both sides – was that the larger, commercial license oriented company (IONA), might try to direct the LogicBlaze folks and therefore somehow diminish or interfere with their success. I am very pleased that this did not turn out to be the case – things have gone as well as I could have hoped in that department.

By coincidence we held an extended management meeting this week, and prior to it our CEO forwarded, for background reading, an article by Clayton Christensen and Michael Overdorf (sorry, but I could not find any good link for Michael) entitled “Meeting the Challenge of Disruptive Change.”

(I looked for a free link to the article but could not find one.)

The article describes the challenges that established companies face in embracing disruptive change (like, for example, open source SOA). One is organizational: an organization that institutionalizes a capability in one area (say, commercial software) typically creates a corresponding disability in another, perhaps related, area (say, open source), and that disability inhibits its ability to embrace necessary change.

One of the solutions involves gaining new capabilities through acquisition, which is exactly what we’ve just done. But, Christensen and Overdorf say, this does not work if the acquiring company tries to integrate the new folks into the existing organization’s practices. So the trick is to take on board the new capabilities and enhance, rather than inhibit, their successful characteristics.

What we did was work through our new, combined strategy together (this strategy is described in the eWeek piece and on our website) and although there have been some inevitable compromises and difficult decisions, the result is something stronger than before.

All of this comes down to the synergy we’ve managed to establish in the short time since the acquisition, and one of the main reasons for it is that we avoided telling people what to do when they already knew what to do, focusing instead on building a cooperative spirit. That cooperative spirit is going to make IONA hard to beat in open source SOA.

Importance of OSGi for the Enterprise

Last Friday’s flight back to Boston ended a crazy string of travel – a different European city each week of June. I will post some photos soon. I am really glad that’s over!

Last week’s OSGi Enterprise Expert Group (EEG) meeting in Munich went very well, making a fitting end to the crazy travel period, and giving me some idea that it was all worthwhile… Certainly last week’s trip was.

The EEG meeting was held the day after the OSGi Community Event, which illustrated pretty well the growing interest in OSGi. Lots of great presentations, good information, and very interesting people in attendance.

(Here’s a good summary from someone with a home automation perspective – home automation was actually the original purpose of OSGi. You can see the top of my head behind Peter Kriens in the third photo ;-)

During my presentation on the EEG I summarized where we are in the process, listed the RFPs we’ve received, and tried to highlight some of the discussion points. More about that in a moment.

First, I am very glad to say that during the EEG meeting we successfully passed from the requirements phase into the design phase, with the recommended approval of seven of the 11 RFPs under discussion (two of the 13 mentioned in the presentation have been referred to the management EG).

So now the real fun can begin! We can debate whether or not mapping existing enterprise technologies onto OSGi is sufficient. We can, in the words of a community event participant, ascertain whether or not any of the current “failed” distributed computing models should be adopted, or whether anything new may be needed.

In general, it looks like the EEG will be spending a lot of time on mapping existing technologies to OSGi, and drawing requirements from those mappings to potentially extend and enhance Release 4. These include Spring, large parts of JEE, SCA, and some existing enterprise technologies (CORBA, Tuxedo, Tibco, WebSphere MQ, Web services, etc.), at least for interworking, and probably JBI as well (some interest has already been expressed here – it may need to be formalized). Many, if not all, of the original requirements coming out of the initial workshop and refined in subsequent meetings may be met this way.

However, OSGi already has a well-documented and widely used programming model based around services. Peter Kriens and BJ Hargrave gave a great “best practices” presentation — they said they also gave it at Java One — that illustrates the benefits of this programming model pretty well.
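For anyone who hasn’t seen that model, the heart of it is just the service registry: one bundle registers a plain Java object under an interface name, and another bundle looks it up. A minimal sketch (the Greeter interface and class names here are mine, not from their presentation):

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;
    import org.osgi.framework.ServiceReference;

    // A plain Java service interface - nothing OSGi-specific about it.
    public interface Greeter {
        void greet(String name);
    }

    // A trivial implementation.
    public class GreeterImpl implements Greeter {
        public void greet(String name) {
            System.out.println("Hello, " + name);
        }
    }

    // Provider bundle: publish the implementation in the service registry.
    public class GreeterActivator implements BundleActivator {
        public void start(BundleContext context) {
            context.registerService(Greeter.class.getName(), new GreeterImpl(), null);
        }
        public void stop(BundleContext context) {
            // the framework unregisters the service when the bundle stops
        }
    }

    // Consumer bundle: look the service up by interface name and use it.
    public class GreeterClient implements BundleActivator {
        public void start(BundleContext context) {
            ServiceReference ref = context.getServiceReference(Greeter.class.getName());
            if (ref != null) {
                Greeter greeter = (Greeter) context.getService(ref);
                greeter.greet("OSGi");
                context.ungetService(ref);
            }
        }
        public void stop(BundleContext context) { }
    }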

Some of us – including me – are suggesting that the OSGi programming model can be very simply and minimally extended to meet some of the very basic enterprise requirements, such as access to a remote OSGi service (one deployed in a remote JVM), or interworking with existing enterprise technologies. OSGi developers should not have to learn another API and/or programming model in order to accomplish some simple, but important, enterprise tasks.
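To be concrete about what “simply and minimally” might mean – and this is purely my own hypothetical sketch, not something that has been proposed or agreed in the EEG – the provider above might flag its registration with an extra property, and the consumer code would not change at all. The property name below is invented for illustration:

    import java.util.Hashtable;
    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Hypothetical: the same Greeter service, flagged so that whatever
    // enterprise runtime the EEG eventually designs may expose it to a
    // remote JVM. "enterprise.remote" is an invented property name.
    public class RemoteGreeterActivator implements BundleActivator {
        public void start(BundleContext context) {
            Hashtable props = new Hashtable();
            props.put("enterprise.remote", Boolean.TRUE);
            context.registerService(Greeter.class.getName(), new GreeterImpl(), props);
        }
        public void stop(BundleContext context) { }
    }

The point is only the shape of the idea: the business code and the basic registry idiom stay exactly as they are, and any enterprise behavior is layered on through properties, configuration, or middleware underneath.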

To be very clear, this is a point of current discussion within the EEG. Nothing has been decided. We are just starting the design stage. As co-chairs it will be our responsibility to bring these discussions to closure through consensus (hopefully!) or vote (if necessary) in the upcoming meetings.

One very interesting point is raised in one of the RFPs (for security enhancements), which already recognizes in its requirements statements the need for vendor-supplied and user-developed code to co-exist in the same OSGi deployment (while maintaining security, of course). If you accept this view – and this does seem likely – the cat is already out of the bag (i.e. people are already using the OSGi programming model and will want to continue).

Imagine a situation sometime in the future in which an enterprise developer can develop basic services using OSGi by itself and, as required, configure in parts of other technologies such as JEE (security, transactions, JNDI, JCA?), SCA (components, composites, wires, policies?), JBI (deployment container for BPEL?), Spring (well, this actually may not be the same kind of thing, since the Spring RFP basically marries OSGi and Spring configuration), and external systems (CORBA/IIOP, WebSphere MQ, Tuxedo?). This could create an environment in which developers do not have to choose upfront among these various “competing” enterprise technologies and programming models, but instead can think about using them when and as necessary, in a kind of continuum.

By the way, whenever anyone mentions the idea that OSGi might be extended for distributed computing, a number of people seem to jump automatically to the conclusion that we mean “doing everything,” and that if we “do anything we’ll be forced to do everything,” i.e. define an entirely new distributed computing approach, a new communication protocol, a new data serialization format, and so on.

I can assure you that this is not at all the case. The EEG has progressed carefully and cautiously from the beginning, focusing on gathering and refining requirements. And we will continue to progress carefully during the design phase, evaluating proposed solutions against those requirements. Let the fun begin!