Monthly Archives: April 2008

Conference Season – Upcoming Speaking Engagements

The snow is finally gone and the new, yellow-green leaves are starting to emerge… Yes, that’s right, it’s conference season again!

Here’s a list of conferences at which I’ll be speaking during the next couple of months, followed by a summary of the topics:

I hope to see you at one or more of these events.

Testing topic

Next Wednesday at the Practical Software Quality and Testing conference I’ll be giving the opening keynote on “Meeting New Challenges in Testing Service Oriented Architectures.” We have been doing a lot of work with our customers recently on SOA testing strategies, and in the course of that work have built some technologies to help, in particular interface simulation tooling.

The challenge in moving to an SOA environment is to find a good way to validate service contracts. A good service contract is key to a successful SOA, and consequently the focus of SOA governance (and I mean this in the logical sense, as in the desired result of a successful governance effort).

One of the great benefits of a good service contract agreement is that the work to implement the contract can be divided between the team developing the application requesting the service and the team developing the service being requested (especially when such a service gets reused). But then it’s necessary to ensure that all of the teams involved in such a distributed or divided development effort interpret the service contract the same way. This is the goal of SOA testing, and it sums up the additional testing challenge SOA adds to an IT environment: getting the all-important service contract from definition to successful deployment.

As with any testing strategy, the sooner errors can be caught, the easier it is to fix them. Because an SOA environment introduces new artifacts, and requires new techniques for developing services (especially reusable services), it also introduces new requirements for testing systems.
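To make the interface simulation idea a bit more concrete, here’s a rough sketch in plain Java (the interface and class names are invented for the example, and a real SOA contract would of course be a WSDL rather than a Java interface): the requesting team codes and tests against a simulated provider that implements the agreed contract, so mismatched interpretations surface before the real service is deployed.

```java
// Hypothetical contract shared by the requesting and providing teams (names invented).
interface QuoteService {
    double getQuote(String symbol);
}

// Interface simulation: a stand-in provider with canned behaviour, so the requesting
// team can exercise its interpretation of the contract before the real service exists.
class SimulatedQuoteService implements QuoteService {
    public double getQuote(String symbol) {
        if (symbol == null || symbol.length() == 0) {
            throw new IllegalArgumentException("the contract requires a non-empty symbol");
        }
        return 42.0; // canned value standing in for the real implementation
    }
}

public class ContractSmokeTest {
    public static void main(String[] args) {
        QuoteService service = new SimulatedQuoteService();
        double quote = service.getQuote("IONA");
        // Check that the requester's expectations about the contract hold against the simulation.
        if (quote <= 0) {
            throw new AssertionError("contract promises a positive quote");
        }
        System.out.println("Quote received: " + quote);
    }
}
```

The point is simply that both sides build against the same contract, and the simulated provider lets errors in interpreting it be caught early, when they are cheapest to fix.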

Security topic

Tuesday, May 13 I’ll be giving the opening keynote at the Web Services and SOA Security Conference on “Handling Multiple Credentials in a Heterogeneous SOA Environment,” based on this article that I wrote with Fred Dushin.

An interesting aspect of an SOA environment is that it can abstract away many of the differences among heterogeneous IT environments using a common interfacing technology (e.g. IDL or WSDL). Lots of companies have Java, Microsoft, mainframe, and other types of systems that need to be brought together in new applications.

Such heterogeneity represents a challenge for security technologies, since each environment typically has a different approach to security – and in some cases more than one. When a service request message touches multiple technologies, it usually means encountering multiple security domains, which have to be federated and mapped in order to implement effective solutions for single sign-on and authorization.

A valuable tool for dealing with this situation is a data structure, associated with a service request, that can be used to pass along multiple credentials in whatever format they happen to appear, endorsing them if they arrive from a secure source (such as an encrypted communication channel). This way the service provider not only has all the security-related information it needs to call out to a security server; the application can also tell which of the credentials arrived from a trusted source. This can help resolve questions about the relative significance of a credential when multiple credentials are in play.
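To make that a bit more concrete, here’s a minimal sketch of such a structure in Java. The class and field names are my own, chosen for illustration, not the design from the article:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: a request-scoped container for credentials of different formats,
// each marked with whether it arrived from a source the receiver trusts.
class Credential {
    final String format;    // e.g. "X.509", "SAML", "username-token"
    final Object token;     // the raw credential, in whatever form it arrived
    final boolean endorsed; // true if received over a secure, authenticated channel

    Credential(String format, Object token, boolean endorsed) {
        this.format = format;
        this.token = token;
        this.endorsed = endorsed;
    }
}

class SecurityContext {
    private final List<Credential> credentials = new ArrayList<Credential>();

    void add(Credential credential) {
        credentials.add(credential);
    }

    // The provider can give endorsed credentials more weight when several are present.
    List<Credential> endorsedCredentials() {
        List<Credential> result = new ArrayList<Credential>();
        for (Credential c : credentials) {
            if (c.endorsed) {
                result.add(c);
            }
        }
        return result;
    }
}
```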

Another interesting question in the world of security is whether it should be possible for a policy-unaware requester to interact with a policy-aware provider. I mean, if the requester does not specify any security policy, but the provider does, should the services still be allowed to interoperate?

Middleware topic

At SOA World on June 24th (Tuesday, 2:30 pm) I’ll be talking about IONA’s view of the middleware world. We have recently packaged an interoperability solution which some of us are calling “middleware for middleware” and that others will call our “universal adapter.”

The basic concept is that people have enough middleware already; they don’t need more of it if all they really need is to get their existing middleware-based applications and systems to work together. Instead, they need just enough software to service-enable the existing middleware, reusing that middleware’s communications protocol, data format, and qualities of service as much as possible. The IONA solution is configurable for small footprint and high performance, and supports multiple deployment options (in the same address space as the existing application, or in a different address space at the requester side, the provider side, or in the middle) for C++ or Java (and COBOL and PL/I on the mainframe). And best of all it is priced accordingly – you just pay for the plug-ins you need.

I delivered a version of this for a Webcast last year, the link to which you can find on our Webcast page. You can also check out the presentation.

A configurable, micro-kernel based solution implementing the call-chain interceptor pattern just seems like the best approach to SOA infrastructure. It offers the best match to the widest variety of requirements and does not impose any of the architectural constraints that come with a hub-and-spoke, server-based, or mid-tier solution.
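For anyone not familiar with the pattern, here’s a bare-bones sketch of a call-chain interceptor in Java – the names are illustrative only, not any product API. Each capability is just another interceptor plugged into the chain, which is why the approach adapts so well to small footprints and varied deployment options:

```java
import java.util.ArrayList;
import java.util.List;

// Bare-bones call-chain interceptor sketch; names are illustrative, not any product API.
interface Interceptor {
    String handle(String message, Chain chain);
}

class Chain {
    private final List<Interceptor> interceptors;
    private int position = 0;

    Chain(List<Interceptor> interceptors) {
        this.interceptors = interceptors;
    }

    // Hand the message to the next interceptor, or return it once the chain is exhausted.
    String proceed(String message) {
        if (position >= interceptors.size()) {
            return message;
        }
        return interceptors.get(position++).handle(message, this);
    }
}

public class ChainDemo {
    public static void main(String[] args) {
        List<Interceptor> plugins = new ArrayList<Interceptor>();
        // Each capability (logging, security, transformation, ...) is just another plug-in.
        plugins.add((message, chain) -> chain.proceed("[logged] " + message));
        plugins.add((message, chain) -> chain.proceed("[signed] " + message));
        System.out.println(new Chain(plugins).proceed("order #123"));
        // Prints: [signed] [logged] order #123
    }
}
```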

And yes, it supports SOA testing and security federation…


More on the Software Assembly Question - Do Patterns Help?

Since I posted the initial entry questioning the validity of the Henry Ford analogy for improving software productivity through interface standardization, there have been some good posts by Hal and Richard, and some good feedback on the Sys Con site that syndicated the entry.

While I have to say I think the posts and comments make excellent points about the value of design, and about the differences between mass-producing hard goods and creating individual applications, I am not sure any clear recommendation is emerging for how to improve the software development process. So now I am wondering whether we can get at this problem through patterns.

One aspect of the debate over software productivity and assembly is whether or not visual tools can help. I think they do – visual abstractions can be very meaningful – but I do not know of any visual system that actually solves the complete problem (i.e. none has solved the customization/round-trip problem). UML tools are, furthermore, too object-oriented for some applications, such as services and REST, although of course I will get an argument from the UML (and MDA?) folks that models are the way to go anyway, and that UML and MDA are being changed to be more data and document oriented (sequence diagrams, for example, could be improved in this direction).

I admit I am not up to date with the latest in UML and MDA. But I also don’t know of any reason to change my view that they do not provide the answer. I have yet to see any graphical system entirely able to replace a human-oriented language, and I do not think programming languages are any different. People still need text, even when the graphics and icons are superb.

So, noting the growing adoption of software patterns, including integration patterns and SOA patterns, and observing that software systems such as Apache Camel are starting to be built around them, I can’t help wondering whether the solution might be found there.

The fundamental issue seems to be identifying the right abstractions. Software is the way people have of telling computers what to do, and it is still too hard, requiring way too much work.

In the Henry Ford analogy, the API (or interface) is seen as the right abstraction. As long as the interface to a program is standardized, its implementation can contain any code. With a standardized interface, programs can be assembled with predictable results (i.e. other programs know what to expect when invoking the interface). This led to the idea of reuse, of libraries of components, objects, and services that someone could sell and others could use in building applications. And this has happened to some extent, but there are also many unfulfilled promises in this area (as David Chappell, among others, has pointed out).

Now if we look at patterns, and how Camel represents them in software, we see a different type of abstraction being used – basically a variation on the theme of domain-specific languages, the domain in this case being integration, and the realization of integration patterns in particular.
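As a small example, Camel’s Java DSL expresses the classic content-based router pattern roughly like this (the endpoint URIs and header names here are placeholders I picked for illustration; a real route would point at JMS queues, files, web services, and so on):

```java
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// A small example of Camel's Java DSL: the content-based router pattern.
// In-memory "direct" and "log" endpoints are used so the sketch runs with camel-core alone.
public class OrderRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Inspect each incoming message and send it to the matching destination.
        from("direct:incomingOrders")
            .choice()
                .when(header("orderType").isEqualTo("widget"))
                    .to("log:widgetOrders")
                .otherwise()
                    .to("log:otherOrders");
    }

    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new OrderRoutes());
        context.start();
        // Push one test message through the route.
        ProducerTemplate template = context.createProducerTemplate();
        template.sendBodyAndHeader("direct:incomingOrders", "order #1", "orderType", "widget");
        context.stop();
    }
}
```

The interesting thing is that the pattern itself is right there in the language, rather than hidden behind an interface or a diagram.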

One of the challenges of DSLs is, in fact, integration – that is, how do you join together programs written using different DSLs into a system or application? It sounds like a crazy idea, but what if we were to use integration techniques, such as patterns, themselves implemented using DSLs, to join together programs written using other DSLs?

Would we have the abstractions right? I.e. in the language instead of in pictures or interfaces? And would we be able to assemble programs together quickly and easily? Maybe we need some patterns specifically for application assembly?

The Artix Connect for WCF Beta Experience

A couple of days ago we announced the Artix Connect for WCF product, and posted a beta on our Website. Today I finally got around to downloading it and trying it out with VS 2005. I am very pleased to say that it worked the first time! 😉

The kit comes with a sample project that uses two connections: one to a CORBA-based application and another to a JMS-based application. The CORBA software comes in the kit, and you can use just about any JMS provider; the default is FUSE Message Broker (a supported version of Apache ActiveMQ), which is open source and freely available. You can run everything on the same machine, per the instructions, but to use Connect in a multi-machine environment you would just reconfigure the network addresses of the CORBA and JMS software systems.

The way I usually talk about this is that WCF is for connecting to all things Windows, and Artix is for connecting to everything else. More precisely, Artix Connect for WCF is a Java-* interoperability tool that can be used from line of business adapters in Visual Studio 2005 and BizTalk Server 2006. One of the things you can connect to is Artix ESB, which connects to Java and native Tibco, Tuxedo, WebSphere MQ, C++ applications, etc. You can also connect to Artix Mainframe for accessing IMS and CICS based applications. And finally, Artix ESB can also be used to Web service enable all these existing systems and more, so if you are a WCF developer you have a lot of options for connecting to virtually anything non-Windows, while still coding as if you were using WCF.

The user’s guide takes you step by step through how to set up the CORBA and JMS servers, configure the line of business adapter in Visual Studio, uncomment a few lines of C# code, and build and run the project. And there you go. WCF talking to CORBA and JMS. It’s pretty fast, too, once it’s all up and running.

This is pretty exciting. I’ve been briefing reporters about it. I have a lot of friends at Microsoft (including on the WCF team), and have been blogging and talking about the recent interoperability announcements from Microsoft. Some folks have taken a “glass half empty” view, but I am definitely in the “half full” camp. I think these are very positive changes in direction for Microsoft, and I am hopeful that Artix Connect will be warmly embraced by the Microsoft community.

Anyway, if you get a chance to try it out, let us know what you think. We have a month or two before GA, so there’s still time to change things (and yes, I know, the EJB connector still needs to be finished). I had a good experience with it, but I am very curious to know what others think.