Category Archives: Services and SOA

APIs and Alan Moore Characters

It was originally intended to be a straightforward conversation about APIs, but thanks to Ian Murphy of Enterprise Times several Alan Moore characters managed to enter the discussion. Magic is certainly something we can all use in organizing our APIs and automating our processes.

Failing that, it’s important to cut ties with the past and implement best practices for an API-first design approach, to ensure the APIs provide the service the consumer needs and wants. Just wrapping existing code with an API is a tempting shortcut, but it’s unlikely to cut it. Generally in this area, you get what you pay for.

It’s also a lot better to take security into account from the start, rather than having to redo things and remediate potential vulnerabilities just as you are about to deploy to production.

And it’s very interesting that we are basing our APIs on REST now – understandable, of course, but interesting that the best practices for HTTP (which embodies RESTful principles) evolved from a completely different stream of technology development than WSDL-based Web services and traditional middleware.

As unexpected as it might seem, Alan Moore’s D.R. & Quinch characters are a good fit for the kind of chaos-monkey style of resiliency testing that helps improve the stability of APIs.
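To make the chaos-monkey idea concrete, here’s a minimal Python sketch – all names are my invention, not from any real tool – of injecting random faults into an API call and checking that the client copes by retrying:

```python
import random

def chaos(call, failure_rate=0.3, rng=random.Random(42)):
    """Wrap an API call so it randomly fails, chaos-monkey style.
    Exercising clients against the wrapper reveals how well they
    retry, time out, and degrade."""
    def wrapped(*args, **kwargs):
        if rng.random() < failure_rate:
            raise ConnectionError("injected fault")
        return call(*args, **kwargs)
    return wrapped

def resilient_get(call, retries=3):
    """A client that tolerates injected faults by retrying."""
    for attempt in range(retries):
        try:
            return call()
        except ConnectionError:
            continue
    raise RuntimeError("service unavailable after retries")

flaky = chaos(lambda: "200 OK", failure_rate=0.5)
print(resilient_get(flaky))  # prints 200 OK
```

The same wrapper can be pointed at any callable, so a test suite can dial the failure rate up until the client’s retry and timeout behavior breaks.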

And – thanks for this one, Ian – it was Ian who first thought of Mina Murray as the best one to impress upon independent-minded developers the need to follow a common purpose, just as she did for the League of Extraordinary Gentlemen. I’m sure she would be capable of it, and just as sure that some API dev teams would benefit from such leadership. This is a constant challenge with dev teams, of course. I’m just not sure everyone really needs a vampire to lend a hand.

Anyway, it was great fun to approach the topic this way, and I hope Ian won’t mind if I mention that he told me he has actually seen one of Alan’s spoken word performances. I’m sure that was really cool, and potentially disturbing.

Now, I only wish old Moore titles would increase in value the way DC and Marvel titles recently have, since he is by far the best writer in the field today. Maybe this has something to do with movies…

What everyone gets wrong about microservices

Martin Fowler has collected a lot of good information about microservices, but does not understand their origin as the “right size” unit of work for commodity data center infrastructure. This is apparent from his advice to start with a monolith and decompose it, as if the target deployment environment had no bearing on design.

I suppose that’s the way he and many others would naturally approach it, given the frame of reference is the traditional 3-tier application server model and the abstraction of Java from the hardware and operating system it runs on. This leads him and others to a view of microservices as an evolution of development practices to facilitate rapid change, rather than a fundamental shift to design for appropriately sized units of work.

I give Martin a lot of credit for identifying and defining development best practices, including helping establish the agile method. But microservices did not originate as a development best practice in the 3-tier app server environment. Nor did it become popular because people were deconstructing monoliths. Microservices are a creature of the infrastructure they were designed for – the commodity data center.

I don’t think the view Martin and others hold of microservices as a development trend takes into account the strong relationship of the deployment infrastructure to the origin of the microservices design pattern. I give Martin a lot of credit for including Stefan Tilkov’s clear rebuttal to the monolith-first silliness, though.

Google invented the commodity data center infrastructure about 20 years ago, and this has become the de facto infrastructure for IT since then. It is the most cost effective IT infrastructure ever built. Pretty much all Web companies use it, and most pre-Web companies are planning to adopt it. Their original servers are in the computer history museum for this reason (photos below).

Commodity data center infrastructure offers a compelling range of benefits in addition to cost-effectiveness, such as auto-scale, large data set capacity, and automatic resiliency, among others. In order to achieve those benefits, though, software has to be engineered specifically to run in this environment, which is basically where microservices come from. Some core design assumptions allow a very high tolerance for hardware and software failures, but these assumptions are very different from the assumptions on which traditional IT infrastructure is based: applications have to be broken into smaller units of work suitable for deployment on PCs, and connected via the network into larger units of work.
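A tiny sketch of that core assumption – individual nodes fail often, so a unit of work is sized to be cheap to re-run elsewhere. The node functions here are hypothetical stand-ins for service replicas:

```python
def run_with_failover(task, replicas):
    """Try the same small unit of work on each replica until one succeeds.
    Commodity-infrastructure design assumes any node may fail, so work
    is broken into units small enough to retry on another machine."""
    errors = []
    for node in replicas:
        try:
            return node(task)
        except OSError as e:
            errors.append(e)
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

# Hypothetical nodes: the first is down, the second healthy.
def dead_node(task):
    raise OSError("node unreachable")

def healthy_node(task):
    return task * 2

print(run_with_failover(21, [dead_node, healthy_node]))  # prints 42
```

The point is that the unit of work, not the machine, is the thing the design treats as reliable.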

I can understand a view of development best practices unconnected to deployment infrastructure considerations – after all, the traditional IT industry has been on a path for years to “write once, run anywhere” and it’s easy to assume that language and operating system abstractions will take care of hardware infrastructure mapping considerations.

But in the case of microservices, it is a big miss to ignore the target deployment infrastructure as the origin of the design pattern, since both the hardware and the software on which they are intended to run have such different characteristics.

Second Edition of TP Book out Today

It’s hard to believe, but the second edition of Principles of Transaction Processing is finally available. The simple table of contents is here, and the entire front section, including the dedication, contents, introduction and details of what’s changed, is here. The first chapter is available here as a free sample.

Photo of an early press run copy

It definitely turned out to be a lot more work than we expected when we created the proposal for the second edition almost four years ago. And of course we originally expected to finish the project much sooner, about two years sooner.

But the benefit of the delay is that we were able to include more new products and technologies, such as EJB3, JPA, Spring, the .NET Framework’s WCF and System.Transactions API, SOA, AJAX, REST/HTTP, and ADO.NET, even though it meant a lot more writing and rewriting.

The first edition was basically organized around the three-tier TP application architecture widely adopted at the time, using TP monitor products for examples of the implementations of the principles and concepts covered in the book. Then as now, we make sure what we describe is based on practical, real-world techniques, although we do mention a few topics more of academic interest.

The value of this book is that it explains how the world’s largest TP applications work – how they use techniques such as caching, remote communications (synchronous as well as asynchronous), replication, partitioning, persistence, queuing, database recovery, ACID transactions, long running transactions, performance and scalability techniques, locking, threading, business process management, and state management to process up to tens of thousands of transactions per second with high levels of reliability and availability. We explain the techniques in detail and show how they are programmed.
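As a small illustration of the ACID part of that list – this is my own hedged sketch, not an example from the book – here are two data operations treated as one unit, with the half-done work erased if a failure lands between them:

```python
import sqlite3

# One real-world transaction = two data operations that must be atomic.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(conn, src, dst, amount, crash=False):
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        if crash:  # simulated failure between the two operations
            raise RuntimeError("system failed mid-transaction")
        conn.execute("UPDATE account SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()
    except Exception:
        conn.rollback()   # recovery: the half-done debit is erased
        raise

try:
    transfer(conn, "alice", "bob", 30, crash=True)
except RuntimeError:
    pass

# The debit was rolled back, so the system of record still matches reality.
print(conn.execute("SELECT balance FROM account WHERE name='alice'")
          .fetchone()[0])  # prints 100
```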

These techniques are used in airline reservation systems, stock trading systems, large Web sites, and in operational computing supporting virtually every sector of the economy. We primarily use Java EE-compliant application servers and Microsoft’s .NET Framework for product and code examples, but we also cover popular persistence abstraction mechanisms, Web services and REST/HTTP based SOA, Spring, integration with legacy TP monitors (the ones still in use), and popular TP standards.

We also took the opportunity to look forward and include a few words about the potential impact on TP applications of current trends toward cloud computing, solid state memory, streams and event processing, and the changing design assumptions in the software systems used to power large Web sites.

Personally this has been a great project to work on, despite its challenges, complexities, and pressures. It could not have been done without the really exceptional assistance from 35 reviewers who so generously contributed their expertise on a wide variety of topics. And it has been really great to work with Phil again.

Finally, the book is dedicated to Jim Gray, who was so helpful to us in the first edition, reviewed the second edition proposal, and still generally serves as an inspiration to all of us who work in the field of transaction processing.

Artix and Fuse updates

Update 12/14/2007 – A great article from Rich Seeley on the hybrid approach initiated with this week’s releases.

My favorite part of this week’s announcements around Artix and FUSE is the support for enterprise integration patterns (EIP).

When I was in Tokyo a couple of weeks ago I heard that a former colleague, now working for Accenture, has recently been giving presentations about the applicability of these patterns to addressing various enterprise IT problems. It actually makes a lot of sense that people would want to have their software products directly support the development and deployment of common patterns – whether in the integration space or not.

EIP support in both Artix and FUSE is derived from the Apache Camel project. As we say around IONA, and as I hope everyone knows, a camel is superior to a mule… 😉


Illustration of some of the Integration Patterns Now in Artix

I also think it’s great that Camel is using the domain specific language (DSL) approach, since I’ve been a fan of DSL for a long time, although two years ago I was characterizing DSLs in opposition to UML/MDA. Since then I believe MDA has turned more toward modeling than executable code, which is good, and annotations and aspects have kind of arisen to take their place.
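To show what the fluent-DSL style looks like, here’s a toy routing DSL in Python – the method names are my invention for illustration, not Camel’s actual API:

```python
class Route:
    """A toy fluent routing DSL in the spirit of integration-pattern
    route builders (hypothetical names, not a real product's API)."""
    def __init__(self):
        self.steps = []

    def from_(self, source):
        self.source = source
        return self          # returning self is what makes it fluent

    def filter(self, predicate):
        self.steps.append(("filter", predicate))
        return self

    def transform(self, fn):
        self.steps.append(("transform", fn))
        return self

    def to(self, sink):
        self.sink = sink
        return self

    def run(self):
        for msg in self.source:
            keep = True
            for kind, fn in self.steps:
                if kind == "filter" and not fn(msg):
                    keep = False
                    break
                if kind == "transform":
                    msg = fn(msg)
            if keep:
                self.sink.append(msg)
        return self.sink

inbox = [{"type": "order", "qty": 2}, {"type": "spam"}]
orders = []
(Route().from_(inbox)
        .filter(lambda m: m["type"] == "order")      # content-based router
        .transform(lambda m: {**m, "routed": True})  # message translator
        .to(orders)
        .run())
print(orders)  # prints [{'type': 'order', 'qty': 2, 'routed': True}]
```

The appeal of the DSL approach is visible even in the toy: the route reads as the pattern it implements.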

Eclipse tooling snapshot for EIP

At the Eclipse Board meeting here in San Francisco I just presented an update on the SOA Tools Platform Project, including the snapshot accessible via the above link that illustrates what we’re working on in that project to generically support EIP through open source tooling.

I may be wrong here but I think the convergence of these two trends is going to be huge – the identification, characterization, and codification of EIP – and the specialization of DSLs to solve specific computing problems.

SOA in China

In late October I visited China to discuss SOA with several telecom customers. I am a little behind on my blogging, so bear with me. Like they say, sometimes it’s hard blogging about the things you do when you are so busy doing the things you should be blogging about… 😉

Saturday morning, just before flying home, I had a chance to visit the Forbidden City. Lulu from IONA’s Beijing office was kind enough to accompany me so I wouldn’t get lost. I really recommend it if you are in Beijing, it’s a great old palace with lots of temples and great museums of clocks, manuscripts, bronzes, and other things.


Me in front of a ceremonial hill at the back of the Forbidden City

It’s amazing how quickly Beijing changes. I was last there about a year and a half ago, and since that time three subway lines have opened, and an entire new wing was added to the hotel where I stay. The big question of course is whether the city will be ready for the 2008 Olympics. I think it will be a close call, there will be some problems, but they will pull it off all right.

And they are apparently keeping up with the latest IT trends. Anyway, SOA and Web services are pretty hot in the telecom area right now, especially within the new Service Delivery Platforms (SDP). Just as they are in other parts of the world, in China service orientation and Web services are seen as techniques and tools for improving the efficiency of delivering new telecommunication services to market.

Sometimes you hear things like “the Chinese market lags behind Europe and the U.S.” but I’m not sure that’s true. For one thing, that’s a very broad generalization that implies European and U.S. companies are all at the same stage of advanced thinking, or that no Chinese companies are thinking about advanced topics.

I spoke at an SDP conference this past June in Budapest – I was asked to give an introduction to Web services for the telecom industry. And I was able to use most of the same material for the customer presentations in China.

It may sound a little strange to still be giving an introduction since Web services have been around for about 7 or 8 years. But it has really been the business systems folks — the back office systems for billing, inventory management, order management — who have been investigating SOA and Web services. The network management folks (the ones who deliver the calls, troubleshoot the network, and manage the new services) are just now getting started.

WSDL’s unique ability to abstract multiple protocols (IIOP, HTTP, JMS, etc.), data formats (XML, CDR, ASN.1/BER, fixed format, etc.), and categories of software systems (i.e. message queuing, application servers, database management systems, and packaged applications) is a big part of the solution, since CORBA is used in most existing telecom network management products, and lots of telco-specific formats and protocols are in use.
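A rough sketch of that abstraction idea – one abstract service contract, multiple swappable concrete bindings. Class and method names here are hypothetical, just to show the shape:

```python
from abc import ABC, abstractmethod

class Transport(ABC):
    """Stand-in for the WSDL idea: one abstract contract,
    many concrete bindings (protocol + wire format)."""
    @abstractmethod
    def send(self, operation, payload): ...

class HttpXmlTransport(Transport):
    def send(self, operation, payload):
        return f"POST /{operation} <xml>{payload}</xml>"

class QueueBinaryTransport(Transport):
    def send(self, operation, payload):
        return f"ENQUEUE {operation}:{payload!r}"

def get_alarms(transport: Transport):
    # The caller codes to the contract; the binding is swappable,
    # the way one WSDL portType can carry multiple bindings.
    return transport.send("getAlarms", "switch-7")

print(get_alarms(HttpXmlTransport()))
print(get_alarms(QueueBinaryTransport()))
```

The network-management caller never changes; only the binding underneath it does.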

Maybe because they’ve been using a lot of CORBA, including notification for event handling for switch failure and other types of alerts, I got a lot of questions about performance and the impact of the additional overhead of XML processing. In fact I got this so many times I started to think that another vendor was going around saying that their Web services products didn’t perform very well! In our case we certainly do also see the usual additional overhead involved in sending lots of XML text, but if someone needs it to go faster we just change the configuration to run SOAP over IIOP instead.

As promising as it sounds for telecom network management applications, no one really knows exactly what the definition of an SDP is. TMForum is working on this in its SDF initiative. But there’s also activity in IEEE around NGSN and quite a few other similar initiatives.

What I heard at the conference in Budapest is that the telco carriers are worried about competition from Skype, Google, Yahoo, and other companies delivering innovative new products and services over the Internet. The carriers still have “carrier grade” networks, and if they can open them up using services, perhaps they will be able to attract some of that innovation. At least that’s what I heard one of them say.

But of course the larger issue is developing and delivering new telecom services such as Voice over IP and TV over IP, TV on cellphones, expanded multimedia delivery to the home, etc. And here is where SOA and Web services can play a key role – especially if we can get them to go fast enough (and I believe we can ;-).

Assembly Required

Some of the greatest advantages of SOA derive from adopting interface contracts. (Of course the benefit of interface contracts isn’t limited to SOA, but they are definitely applied to good advantage in an SOA environment).

By the way the definition of SOA that I use is that it’s a style of design, not a technology. The design is implemented using one or more technologies.

One of the crucial aspects of any SOA design is the contract between the service requester and the service provider, which is typically expressed (at least in part) using an interface definition language such as IDL and/or WSDL.

Among the many benefits of designing contracts and defining interfaces is the ability to divide up the work. Once the contract is in place (at a minimum this would be the definition of the data to be shared, often expressed using XML) separate teams can go off and work in parallel to develop the service provider and requester.

This approach can scale effectively as the library of reusable services grows, dividing up the work across multiple development teams, perhaps in different geographies. And this approach also creates the possibility that the work can be divided across companies, too. And that it could more easily be outsourced and/or offshored.

But with all the great potential for dividing up the labor involved in IT development comes an equally great responsibility to ensure that things go as planned when the divided up work gets assembled. Someone has to ensure that the interface definitions are interpreted and implemented consistently, and preferably before the whole application is deployed.

The folks at Business Management recently interviewed me about this. We have found, through sometimes painful experience, that it is best to validate individual interfaces as they are created rather than wait until final assembly. Many of our customers are starting to realize they need to change their approach to application assembly when using an SOA based approach.
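A minimal sketch of validating against the contract early – the contract shape and field names here are made up for illustration:

```python
# Hypothetical contract: the agreed shape of the shared data.
ORDER_CONTRACT = {"order_id": int, "sku": str, "quantity": int}

def validate(message, contract):
    """Check a message against the contract as soon as a team produces
    one, instead of discovering mismatches at final assembly."""
    errors = []
    for field, ftype in contract.items():
        if field not in message:
            errors.append(f"missing field: {field}")
        elif not isinstance(message[field], ftype):
            errors.append(f"{field}: expected {ftype.__name__}, "
                          f"got {type(message[field]).__name__}")
    for field in message:
        if field not in contract:
            errors.append(f"unexpected field: {field}")
    return errors

good = {"order_id": 7, "sku": "ABC-1", "quantity": 3}
bad = {"order_id": "7", "sku": "ABC-1"}   # wrong type, missing field
print(validate(good, ORDER_CONTRACT))     # prints []
print(validate(bad, ORDER_CONTRACT))
```

In practice this role is played by XML Schema or WSDL validation, but the principle is the same: each team’s output is checked against the shared contract as it is created, not at assembly time.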

Gregor Hohpe’s JavaZone presentation on Infoq

I have a lot of blogging topics to catch up on, but I just watched Gregor Hohpe’s presentation on Infoq and wanted to write about it while it’s fresh in my mind.
I really thought it hit home on a lot of very important points about SOA, especially things that developers (as well as the industry) need to think about.
I also would like to take him to task on a few items, including the fact that Starbucks does not need two-phase commit (I think this propagates a misunderstanding about transactions rather than illustrating how best to use them), but more about that later.
I really like Gregor’s book, and was in the same room with him once (the 2005 Microsoft MVP summit, since he and I are both Architecture MVPs) but failed to introduce myself, something I still regret. He has a good, clear way of understanding integration issues and recommending how to deal with them. He does a great job with the presentation, clear, not too serious, and with a great perspective that acknowledges why things are the way they are even when saying that they are not exactly right.
I would definitely encourage everyone interested in development issues around SOA to sit through this presentation. It’s only about an hour – a little less – but Gregor covers a lot of very good territory, and raises a lot of good issues. Although I have to say that Gregor’s focus is on integration, while the main reason for SOA is reuse (not that these are necessarily in conflict, it’s probably more a matter of emphasis).
Ok, back to the Starbucks thing. Of course Starbucks doesn’t use two-phase commit – that would be like hammering a nail with a screwdriver – wrong tool for the job in other words. I completely understand the good advice not to use 2PC for this kind of scenario – what mystifies me is that someone would think it’s a good idea in the first place.
Transactions exist to coordinate multiple operations on data in order to record a real world transaction. No data operations, no transactions. Many argue that a retrieve operation is not a transaction, since the data isn’t changed.
And recording a real world transaction almost always involves multiple operations on data that need to be treated as a unit. Take the Starbucks example – the store has to record the payment and the fact that the drink was delivered. If either the payment isn’t given or the drink isn’t delivered the transaction didn’t happen and the record of it needs to be erased.
Computers being touchy electrical devices (as opposed to say, pen and paper), they can easily (and predictably, according to Murphy and everyone else) fail between data operations, and transactions are what ensures that the system of record (i.e. the database) matches the real world (i.e. if part of the real world transaction doesn’t complete, the database transaction rolls back the changes).
Starbucks no doubt uses some kind of database to record real world transactions. Therefore they no doubt also use software transactions, since it’s very hard to buy a database these days that does not support transactions, and even if you could you wouldn’t want to (personally I remember coding my own locking and doing manual recovery on an old HP Image database circa 1980, pre-transactions, but that’s another story).
Two phase commit (aka distributed transactions) is intended to be used when data operations involving multiple databases (aka transactional resources) need to be coordinated. I know that Gregor knows this, and is trying to illustrate the point that you should not automatically assume that you should use 2PC even if you can, but I for one think he could come up with a better analogy. In particular it would be helpful to illustrate when 2PC is worth the cost, and not just say “never use it.” In talking about most of the other topics he does usually get into a discussion about tradeoffs and when to use what, but in this case the advice is too simplistic for my taste.
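For what it’s worth, here is a toy sketch of the two-phase protocol itself, which is worth its cost exactly when one real-world change must land atomically in more than one independent resource. All names are hypothetical:

```python
class Resource:
    """Toy transactional resource with prepare/commit/rollback --
    the participant half of two-phase commit."""
    def __init__(self, name, can_prepare=True):
        self.name, self.can_prepare = name, can_prepare
        self.state = "idle"
    def prepare(self):
        if not self.can_prepare:
            raise RuntimeError(f"{self.name}: cannot guarantee commit")
        self.state = "prepared"
    def commit(self):
        self.state = "committed"
    def rollback(self):
        self.state = "rolled back"

def two_phase_commit(resources):
    """Coordinator: phase 1 collects votes, phase 2 acts unanimously."""
    try:
        for r in resources:      # phase 1: every participant votes
            r.prepare()
    except RuntimeError:
        for r in resources:      # any "no" vote aborts everyone
            r.rollback()
        return "aborted"
    for r in resources:          # unanimous "yes": commit everywhere
        r.commit()
    return "committed"

db, queue = Resource("orders-db"), Resource("billing-queue")
print(two_phase_commit([db, queue]))                 # prints committed

db2 = Resource("orders-db")
bad = Resource("billing-queue", can_prepare=False)
print(two_phase_commit([db2, bad]))                  # prints aborted
print(db2.state)                                     # prints rolled back
```

With a single database (the Starbucks case) the coordinator adds nothing, which is the point: the tradeoff only pays off with two or more independent resources.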
Update 5/21 – this seems too long so I’m splitting it here…


Why SOA is different

What I should say is: what is different that makes SOA popular now, when it has been around for 10 years or more?
I’ve given this rap a couple of times now during the past couple of weeks, and I think I’m starting to get it down 😉
I believe the reason SOA is popular now is that the IT software industry has reached a critical turning point in its history. I think this is what’s different.
30 or 40 years ago the big deal was which application to automate next. The idea of using computers to help run a business was new (never mind the idea that some businesses could be completely based on computers).
So everyone was trying to figure out which investments to make in which applications, many of which were automating manual procedures, and speeding up the flow of information, which in turn helped a business to achieve a return on those investments.
No one was really thinking about what you do once everything is automated. But that’s more or less where the industry finds itself now. Enterprises pretty much have all the IT they need – certainly all the features and functions, languages, operating systems, packaged applications, database management systems, application servers, EAI brokers, etc.
The next challenge – for any industry in this position – is how to improve and refine what has already been done. And that’s why SOA is so popular, and why we at IONA have developed our unique approach to SOA infrastructure.
It’s not about building new applications or integrating ERP systems anymore. It’s about refining and improving what everyone has so it works better together. So it all supports the same level of abstraction and standardization, and all the existing applications we’ve worked so hard on during the past few decades can participate in SOA based solutions.
So we are now in a unique position – for the software industry at least – of looking backward to see what we did and how to improve it, and get more out of the investments we’ve already made – instead of looking forward to the next big feature/function, the next big release.
The industry is different, and needs a different approach to enterprise software. Specifically to support the reasons why SOA is so popular now, and takes into account the different stage of industry evolution in which we find ourselves.

Why Artix Registry/Repository is different

Update 4/4: see also Chris Horn’s thoughtful and interesting related entry about the concept of the “SOA Guardian,” drawing an analogy to government in general, referring to Cicero’s ultimate law as the welfare of the people – here of course we are talking about the role of the SOA reg/rep in ensuring the welfare of IT.
William has already written about this announcement, and Frank gives a good perspective and a comment about how some of the media reports have missed the point (some did not, to be fair).
I’d like to add some more perspective about why the Artix Registry/Repository is different. I often get asked why IONA is entering this market – isn’t it already sewn up? Aren’t we late to a crowded party? The answer is that the current solutions do not meet customer requirements.
Today’s registry/repository solutions are what I’d call passive. That means you store metadata in them, such as WSDL files, WS-Policy assertions, and other attributes descriptive of parts of the SOA design. And then, like any database application, you look up things about what you stored.
Sometimes you can even store things related to management, such as service level agreements and policy enforcement points. But you can’t translate that into deployment configurations for your runtime container.
Sure, there’s even a notification system to alert interested parties when a service changes. But you are still stuck using manual procedures to create and update configuration and dependency information for your runtime container, using other tools.
Yes, you can use today’s registries to find services, and bind to services, notify someone of a change to a service, but you cannot do anything with the runtime implementation of the service. For that you need another tool or set of tools.
Current registry/repository solutions are not active, at least not in the sense that they allow you to actually do anything with the implementation of the services. They are completely and totally disconnected from the SOA runtime.
Now part of the reason for this is that the industry is still hung up trying to use yesterday’s containers for today’s problem. Configurable, lightweight runtimes are gaining popularity as the complexity of JEE based application servers continues to inhibit SOA adoption. Same for EAI hub and spoke based solutions.
This is why all of a sudden we have vendors like BEA and Tibco out there announcing their new “micro” containers. IONA has been shipping a lightweight configurable container for 7 years.
We know that’s what you need for SOA. We have been doing SOA for 10 years.
So there’s been lots of talk recently about the obvious shortcoming in current registry/repository products, about developing active registry/repository solutions, but all that the other vendors have done is to propose a way to combine the passive registry/repository model with existing system management (and sometimes service management) solutions. That doesn’t really cut it. This is still assuming that things like application servers and EAI hubs are valid implementations of SOA infrastructure – which they are not.
Today’s passive registry/repository solutions just don’t give you the capability to use the metadata to configure your runtime and push the change out to the SOA infrastructure.
Now sure, I’ve mixed up the topics a bit here, between the obvious shortcoming of today’s passive registry/repository solutions, and the fact that most vendors still promote yesterday’s platforms as the answer to today’s requirements.
But I really think these are related. The industry is moving toward lightweight, configurable containers, via Spring and OSGi. A lightweight, distributed runtime is much better suited to SOA. Services can directly find and interact with each other, without having to go through a central application server or EAI hub.
The world of SOA infrastructure is adopting these lightweight, configurable runtimes, because you get more for your money, and software that’s better suited to the purpose. And as you do, the requirements of your SOA infrastructure will also be better served by an active registry/repository. One that’s able to deal with the composition and configuration implications of changes in service descriptions (i.e. the deployment implications).
When you change a policy, you want to be able to push that right out to the SOA infrastructure, and you don’t want to have to do a lot of manual recoding, recompiling, configuration, and redeployment using yesterday’s tools.
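A small sketch of what “active” could mean in practice – a registry that pushes policy changes straight to subscribed runtime containers. The names are my invention for illustration, not the Artix API:

```python
class ActiveRegistry:
    """Sketch of an 'active' registry/repository: metadata changes are
    pushed straight to subscribed runtimes, with no manual redeploy step.
    (All names hypothetical.)"""
    def __init__(self):
        self.policies = {}
        self.subscribers = []

    def subscribe(self, container):
        self.subscribers.append(container)

    def set_policy(self, service, policy):
        self.policies[service] = policy     # the passive part: store it
        for c in self.subscribers:          # the active part: push it
            c.reconfigure(service, policy)

class Container:
    """Lightweight configurable runtime that accepts live config."""
    def __init__(self, name):
        self.name, self.config = name, {}

    def reconfigure(self, service, policy):
        self.config[service] = policy       # no recompiling, no redeploy

reg = ActiveRegistry()
node = Container("edge-1")
reg.subscribe(node)
reg.set_policy("getQuote", {"timeout_ms": 500, "auth": "required"})
print(node.config["getQuote"])  # prints {'timeout_ms': 500, 'auth': 'required'}
```

The passive model stops at `set_policy` storing the metadata; the active model is the push loop that follows it.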
Update 4/2: One of my colleagues pointed out that I never came back to the point about the crowded market. SOA registry/repository is a very new market segment, and therefore by definition nowhere near sewn up or crowded.

Future of Radio?

Friday I had the pleasure of appearing on My Technology Lawyer Radio.
I’m not sure it’s the future of radio, as Scott suggested after we were done – a kind of hybrid between AM talk radio and Podcasting, with commercials. But one thing’s for sure, I had a great time doing it. And even if I do say so myself, I think it came out pretty good.
One of the best things was chatting with Scott during the commercial breaks and afterward. I certainly believe that the informal style he’s using is really the future of communication, whether in blogs (like this) or on the radio.
You can find the replay here – look for 3/16 and when it starts move the ball about halfway down the slider (I was in the third half of the show, to quote Tom and Ray).
(Maybe Jon Udell can tell me how to create a link to the middle of the stream. He did it before with a SYS-CON TV appearance.)
Actually I’m listening to the replay while I’m typing this. Good informal discussion about blogging, IONA, distributed computing, distributed SOA, and open source. Maybe it came out good because it was fun doing it, and because of Scott’s relaxed approach.
In fact as an icebreaker before we got started he asked my opinion of several famous older ladies, would I want to go out on a date with one of them if I could, that sort of thing. Well, I guess several have been in the news lately, so a lot of guys must be thinking about this. I said Helen Mirren, but Scott said for him it’s Nancy Pelosi, and only Nancy Pelosi.
Suffice it to say that the ice was broken by laughter 😉
Anyway, I hope you get a chance to listen to the show, and check out some of Scott’s other shows. If you do, please let me know what you think!