Why SOA is different

What I should say is: what is different now that makes SOA popular, when it has been around for 10 years or more?
I’ve given this rap a couple of times now during the past couple of weeks, and I think I’m starting to get it down 😉
I believe the reason SOA is popular now is that the IT software industry has reached a critical turning point in its history. I think this is what’s different.
30 or 40 years ago the big deal was which application to automate next. The idea of using computers to help run a business was new (never mind the idea that some businesses could be completely based on computers).
So everyone was trying to figure out which investments to make in which applications. Many of those applications automated manual procedures and sped up the flow of information, which in turn helped a business achieve a return on those investments.
No one was really thinking about what you do once everything is automated. But that’s more or less where the industry finds itself now. Enterprises pretty much have all the IT they need – certainly all the features and functions: languages, operating systems, packaged applications, database management systems, application servers, EAI brokers, and so on.
The next challenge – for any industry in this position – is how to improve and refine what has already been done. And that’s why SOA is so popular, and why we at IONA have developed our unique approach to SOA infrastructure.
It’s not about building new applications or integrating ERP systems anymore. It’s about refining and improving what everyone has so it works better together – so it all supports the same level of abstraction and standardization, and all the existing applications we’ve worked so hard on during the past few decades can participate in SOA-based solutions.
So we are now in a unique position – for the software industry at least – of looking backward to see what we did and how to improve it, getting more out of the investments we’ve already made, instead of looking forward to the next big feature or function, the next big release.
The industry is different, and needs a different approach to enterprise software – specifically, one that supports the reasons SOA is so popular now and takes into account the different stage of industry evolution in which we find ourselves.


6 responses to “Why SOA is different”

  1. Most SOA principles have been around for decades, without the least exaggeration. The difference is that now it is possible to put them into practice at an architectural level, while before it was not. And this is so thanks mainly to one thing we have that we did not have until now: INTEROPERABILITY in practice. Before, we had good interoperability only if everyone supported the same specifications you did – which never happened. But now everybody supports HTTP and XML, and virtually everybody supports SOAP and WSDL. You can invoke any useful functionality available, no matter who implemented it, or how, or where, easily and with fair support from tools and knowledge. No previous service (or middleware) technology has had this feature.
    HTTP, XML, SOAP and WSDL are the key to this. Of course you can have an SOA without them – but then it is going to be somewhat isolated and lose the functionality, tools and knowledge out there. I hope they become the basis of the standard platform of the future, so that wherever you try to develop or deploy new functionality, they are available, and the new functionality can leverage what is already there, and vice versa. Of course more things are needed: runtime registry, transactions, reliability, …
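    A minimal sketch of what “invoke any functionality, no matter how it is implemented” looks like in practice: a hand-rolled SOAP 1.1 call using only the Python standard library, no WSDL toolkit required. The service name, namespace and endpoint below are hypothetical placeholders, not a real service.

    ```python
    # Hand-rolled SOAP 1.1 request using only the Python standard library.
    # The operation, namespace and endpoint URL are hypothetical placeholders.
    import urllib.request
    import xml.etree.ElementTree as ET

    SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

    def build_envelope(operation, params, ns="http://example.com/stockquote"):
        """Wrap an operation and its parameters in a SOAP 1.1 envelope."""
        ET.register_namespace("soap", SOAP_ENV)
        envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
        body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
        op = ET.SubElement(body, f"{{{ns}}}{operation}")
        for name, value in params.items():
            child = ET.SubElement(op, f"{{{ns}}}{name}")
            child.text = value
        return ET.tostring(envelope, encoding="unicode")

    def call(endpoint, soap_action, operation, params):
        """POST the envelope; any HTTP endpoint that speaks SOAP can answer."""
        payload = build_envelope(operation, params).encode("utf-8")
        request = urllib.request.Request(
            endpoint, data=payload,
            headers={"Content-Type": "text/xml; charset=utf-8",
                     "SOAPAction": soap_action})
        with urllib.request.urlopen(request) as response:
            return response.read().decode("utf-8")

    # Build the envelope for a hypothetical stock-quote operation:
    print(build_envelope("GetQuote", {"symbol": "IONA"}))
    ```

    The point of the sketch is exactly the commenter’s: nothing here depends on the platform, language or vendor behind the endpoint – only on HTTP plus XML.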

  2. Very nice description of where SOA is today and why it is finally ‘successful’.
    I believe that another piece in the puzzle is that the price/performance ratio of all hardware is now such that people can afford to use these new integration technologies and standards. Not long ago, nobody believed that we’d be watching movies online!
    Despite all the advances in software, acceptance and so forth, I do not believe we would be where we are now if hardware had not made the increases it has.
    So, the challenge for me now is: with all this great hardware and software, where are the new applications going to come from and who is going to build them?
    By that I do not mean things like Second Life, YouTube, Flickr – they are amazing things, I agree. However, what I’m thinking of are more ‘mundane’ things like general ledger, production planning, business process management. In other words, applications without which we would literally have nothing to eat, no light, no heat, or whatever.
    If we could harvest the talent that is there building things like Second Life toward these more ‘mundane’ things, then, I think, we’d really be making progress.
    A quick look at what some of the people are doing with the Amazon Web Services suite (http://www.amazon.com/AWS-home-page-Money/b/104-1944210-0945565?ie=UTF8&node=3435361) sort of illustrates my point, but not enough.
    Oh well, more in my blog when I get round to writing it.

  3. Yes, absolutely XML and Web services, and advances of hardware are critical factors.
    But I’d say the problem of interoperability has been solved at least twice before, if not more often.
    DCE solved it in the early 90s and then CORBA solved it again in the late 90s. Why didn’t these technologies catch on?
    Conventional wisdom is that they were (a) too complex (b) not endorsed by everyone. There is some truth to both of these ideas, but DCE interoperated just fine with DCOM (since DCOM’s communications protocol and data format were implementations of the DCE specification). And CORBA works on every platform, including browsers, and there are plenty of open source implementations available. Maybe the market is controlled by the big vendors, but I don’t think that’s the whole story, either.
    DCE was basically procedure oriented, not object oriented. At that stage of IT software history, DCE represented a kind of fundamental innovation – the ability to communicate across hardware/operating system platforms. In those days most IT solutions were developed using a single vendor’s hardware and software. The concept of an independent software vendor was definitely gaining ground, but in the enterprise most solutions were still single-vendor hardware and software.
    CORBA did suffer from a lot of political fighting, and from the fact that interoperability was not among its V1 specifications. The results of the interoperability fight during the V2 specification effort definitely resulted in some lingering hard feelings, especially after the anti-Microsoft crowd won out, and DCE was rejected.
    However CORBA was also defined in advance of some other fundamental technology shifts, including Java and the Web. Most CORBA deployments – of any significant size anyway – are based on C++. This is part of the reason it’s seen as a legacy technology.
    Another reason is that although IIOP was proposed once as an Internet standard to IETF, it was never adopted by them, apparently because (at least in part) of an unresolved dispute between IETF and OMG over editorial control.
    Certainly after all this change a new interoperability solution seemed as if it would be welcome, but the original goal of SOAP was to carry XML documents across the Web, not interoperability among enterprise IT systems. In those days there must have been at least 15 proposals for an XML protocol for business to business applications, or program to program communications across the Web. SOAP won out, and yes, its endorsement by Sun after nearly everyone else was on board really meant that for the first time the industry had achieved public support around a single specification.
    Implementations, by the way, have been another story: as Web services moved into the enterprise, vendors adapted them to their existing technologies, and no one independently tests or certifies compatibility across those vendor implementations. You see things like JMS-specific namespaces added into the logical part of a WSDL, for example, by vendors who are not thinking abstractly when incorporating Web services into their product lines.
    So all of this is correct, and definitely the widespread adoption of XML and Web services plays a big part in modern SOA adoption, as does the constant improvement in hardware. I also remember when we all thought the Web was too slow to render an HTML page, never mind stream audio and video.
    But in one way the adoption of XML and Web services also substantiates the point I want to make, which is that these technologies are developed assuming a heterogeneous environment. CORBA, on the other hand, defined an entire interface system and execution environment, including language mappings, on the assumption that CORBA would be adopted and used ubiquitously as the single and only interoperability solution. Java was originally proposed as the single general-purpose language in which all programs would be written.
    Today we do not think that way anymore. We know that a single programming language or execution environment is not going to emerge to tackle every problem. It’s a multi-language, multi-protocol, multi-data-format world, and it’s going to stay that way. We are done with a significant phase of the software industry’s evolution and are moving to the next stage, in which XML, Web services, and SOA-based applications using them serve as an abstraction of existing systems and technologies, rather than a replacement for them.
    In this way the modern trend toward SOA recognizes the turning point in the history of the industry and improves and refines what we already have, rather than defining a new execution environment.

  4. Gregor Hohpe’s JavaZone presentation on InfoQ

    I have a lot of blogging topics to catch up on, but I just watched Gregor Hohpe’s presentation on InfoQ and wanted to write about it while it’s fresh in my mind. I really thought it hit home on a lot of very important points about SOA, especially thing…

  5. I completely agree that changes in how enterprises design, deploy and use their existing IT infrastructure have made SOA more popular than ever before.
    As a complementary topic, where do we see network devices going? I, for one, feel that these devices, which speak IP for a living, are not too proficient with what’s happening at Layer 6 and above. This gap needs to close, and I do see that happening.

  6. Yes, network devices are getting more powerful and capable. I know there’s been a lot of discussion about whether services need to be specifically designed for those devices, or whether device power will simply grow enough for them to deploy “regular” services.
    This is one of the interesting topics that comes up at OSGi’s Enterprise Expert Group. Can you have a common architectural approach for both resource-constrained and unconstrained environments? The working assumption is yes, but we really only started the EEG work a few months ago.
    I have heard some of the phone vendors say that they view the new mobile platforms as servers, capable of hosting services for other devices or clients. If this is the case then mobile devices can participate in an SOA environment just like a PC or UNIX server or mainframe…
