The snow is finally gone and the new, yellow-green leaves are starting to emerge… Yes, that’s right, it’s conference season again!
Here’s a list of conferences at which I’ll be speaking during the next couple of months, followed by a summary of the topics:
- May 7, Practical Software Quality and Testing, Las Vegas
- May 13, Web Services Security and SOA Conference, Baltimore
- June 24, SOA World, New York
I hope to see you at one or more of these events.
Next Wednesday at the Practical Software Quality and Testing conference I’ll be giving the opening keynote on “Meeting New Challenges in Testing Service Oriented Architectures.” We have been doing a lot of work with our customers recently on SOA testing strategies, and in the course of that work have built some technologies to help, in particular interface simulation tooling.
The challenge in moving to an SOA environment is to find a good way to validate service contracts. A good service contract is key to a successful SOA, and consequently the focus of SOA governance (and I mean this in the logical sense, as in the desired result of a successful governance effort).
One of the great benefits of a good service contract agreement is that the work to implement the contract can be divided between the team developing the application that requests the service and the team developing the service being requested (especially when that service gets reused). But then it’s necessary to ensure that all of the teams involved in such a distributed or divided development effort interpret the service contract the same way. This is the goal of SOA testing, and it summarizes the additional testing challenge SOA adds to an IT environment: getting the all-important service contract from definition to successful deployment.
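One way to keep divided teams aligned is to make the contract machine-readable and have both sides validate their messages against the same copy. Here’s a minimal sketch of the idea; the contract format and field names (`ORDER_CONTRACT`, `validate`) are hypothetical illustrations, not any particular WSDL tooling:

```python
# Both the requester team and the provider team validate messages against
# the same shared contract, so divergent interpretations surface early.

ORDER_CONTRACT = {
    "request": {"order_id": str, "quantity": int},
    "response": {"status": str, "total": float},
}

def validate(message: dict, spec: dict) -> list[str]:
    """Return a list of contract violations (empty means conformant)."""
    errors = []
    for fld, expected_type in spec.items():
        if fld not in message:
            errors.append(f"missing field: {fld}")
        elif not isinstance(message[fld], expected_type):
            errors.append(f"{fld}: expected {expected_type.__name__}, "
                          f"got {type(message[fld]).__name__}")
    for fld in message:
        if fld not in spec:
            errors.append(f"unexpected field: {fld}")
    return errors

# The requester checks what it sends; the provider checks what it receives.
request = {"order_id": "A-100", "quantity": "2"}  # quantity sent as a string
print(validate(request, ORDER_CONTRACT["request"]))
# → ['quantity: expected int, got str']
```

Because both sides run the same check, a mismatch like the string-typed `quantity` above is caught at development time rather than at integration time.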
As with any testing strategy, the sooner errors can be caught, the easier it is to fix them. Because an SOA environment introduces new artifacts, and requires new techniques for developing services (especially reusable services), it also introduces new requirements for testing systems.
Tuesday, May 13 I’ll be giving the opening keynote at the Web Services Security and SOA Conference on “Handling Multiple Credentials in a Heterogeneous SOA Environment,” based on this article that I wrote with Fred Dushin.
An interesting aspect of an SOA environment is that it can abstract away many of the differences among heterogeneous IT environments using a common interfacing technology (e.g., IDL or WSDL). Lots of companies have Java, Microsoft, mainframes, and other types of systems that need to be brought together in new applications.
Such heterogeneity represents a challenge for security technologies, since each environment typically has a different approach to security, and in some cases more than one. When a service request message touches multiple technologies, it usually encounters multiple security domains, which have to be federated and mapped in order to implement effective solutions for single sign-on and authorization.
A valuable tool for dealing with this situation is a data structure, associated with a service request, that can be used to pass along multiple credentials in whatever format they happen to appear, endorsing them if they arrive from a secure source (such as an encrypted communication channel). This way the service provider not only has all the security-related information needed to call out to a security server; the application can also tell which of the credentials arrived from a trusted source. This can help resolve questions about the relative significance of a credential when multiple are in play.
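A minimal sketch of such a credential collection, assuming hypothetical names of my own (`Credential`, `SecurityContext`, `endorsed`), not the structure from the article itself:

```python
# Each credential keeps its native format plus a flag recording whether it
# arrived over a trusted channel, so the provider can weigh them accordingly.

from dataclasses import dataclass, field

@dataclass
class Credential:
    fmt: str        # e.g. "X.509", "Kerberos", "SAML", "username-token"
    token: bytes    # the credential in its original encoding
    endorsed: bool  # True if received from a secure source

@dataclass
class SecurityContext:
    credentials: list = field(default_factory=list)

    def add(self, fmt: str, token: bytes, via_secure_channel: bool):
        self.credentials.append(Credential(fmt, token, via_secure_channel))

    def endorsed_only(self):
        """The subset of credentials the provider may weight more heavily."""
        return [c for c in self.credentials if c.endorsed]

ctx = SecurityContext()
ctx.add("username-token", b"alice:...", via_secure_channel=False)
ctx.add("X.509", b"<der-bytes>", via_secure_channel=True)
print([c.fmt for c in ctx.endorsed_only()])
# → ['X.509']
```

Keeping the tokens opaque (`bytes` in their original encoding) is the point: the structure carries them across security domains without forcing a premature conversion.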
Another interesting question in the world of security is whether it should be possible for a policy-unaware requester to interact with a policy-aware provider. That is, if the requester does not specify any security policy but the provider does, should the services still be allowed to interoperate?
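One way to see the shape of this question is to make the admission decision explicit and configurable at the provider side. This is a hypothetical sketch of my own (the function and flag names are illustrative), not a claim about what the answer should be:

```python
# The provider decides whether to admit a request given the two policies;
# whether policy-unaware requesters are allowed becomes a deployment choice.

def admit_request(requester_policy, provider_policy,
                  allow_policy_unaware: bool) -> bool:
    """Decide whether a request may proceed given both sides' policies."""
    if requester_policy is None:        # policy-unaware requester
        return allow_policy_unaware
    # Naive exact match for illustration; real systems intersect policies.
    return requester_policy == provider_policy

# A strict deployment rejects policy-unaware requesters...
print(admit_request(None, {"sign": True}, allow_policy_unaware=False))  # → False
# ...while a permissive one lets the provider's own policy govern the exchange.
print(admit_request(None, {"sign": True}, allow_policy_unaware=True))   # → True
```

Framed this way, the interoperability question becomes a governance knob rather than a yes/no property of the technology.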
At SOA World on June 24th (Tuesday, 2:30 pm) I’ll be talking about IONA’s view of the middleware world. We have recently packaged an interoperability solution that some of us are calling “middleware for middleware” and that others will call our “universal adapter.”
The basic concept is that people have enough middleware already; they don’t need more of it if all they need is to get their existing middleware-based applications and systems to work together. Instead, they need just the right amount of software to service-enable the existing middleware, reusing that middleware’s communications protocol, data format, and qualities of service as much as possible. The IONA solution is configurable for small footprint and high performance, and supports multiple deployment options (in the same address space as the existing application, or in a different address space on the requester side, on the provider side, or in the middle) for C++ or Java (and COBOL and PL/I on the mainframe). And best of all, it is priced accordingly: you just pay for the plug-ins you need.
A configurable, micro-kernel-based solution implementing the call-chain interceptor pattern just seems like the best approach to SOA infrastructure. It offers the best match to the widest variety of requirements and does not impose on the solution any of the architectural constraints you get with a hub-and-spoke, server-based, or mid-tier solution.
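For readers unfamiliar with the pattern, here is a rough sketch of a call-chain interceptor in miniature. The interceptor names are illustrative, and this is of course a toy, not IONA’s implementation:

```python
# Call-chain interceptor pattern: a request passes through a configurable
# chain of small interceptors (logging, security, transformation), each of
# which may act on the request and then delegate to the next link.

from typing import Callable

Handler = Callable[[dict], dict]

def logging_interceptor(next_handler: Handler) -> Handler:
    def handle(request: dict) -> dict:
        request.setdefault("trace", []).append("logged")
        return next_handler(request)
    return handle

def security_interceptor(next_handler: Handler) -> Handler:
    def handle(request: dict) -> dict:
        if not request.get("credential"):
            raise PermissionError("no credential")
        request["trace"].append("authenticated")
        return next_handler(request)
    return handle

def terminal_service(request: dict) -> dict:
    return {"status": "ok", "trace": request["trace"]}

# The chain is assembled from configuration; only the interceptors you
# actually deploy are loaded, which is what keeps the footprint small.
chain = logging_interceptor(security_interceptor(terminal_service))
print(chain({"credential": "x"}))
# → {'status': 'ok', 'trace': ['logged', 'authenticated']}
```

Because each interceptor is independent of the others, the same chain machinery can host testing hooks and security federation steps alongside ordinary request processing, which is what the closing line below alludes to.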
And yes, it supports SOA testing and security federation…