Category Archives: Microsoft Software

Do You Need to Develop for the Cloud?

And other notes from last week’s IASA NE cloud computing roundtable (more about the meeting format later – also note the Web site currently does not have up-to-date information – we are volunteers after all ;-).

We had good attendance – I counted 26 during the session, including folks from Pegasystems, Microsoft, Lattix, Kronos, SAP, The Hartford, Juniper, Tufts, Citizens Bank, Reliable Software, Waters, and Progress Software, among others (I am sure I missed some), providing an interesting cross-section of views.

The major points were that no one is really thinking about the cloud as a place for accessing hosted functionality, and everyone is wondering whether or not they should be thinking about developing applications specifically for the cloud.

We touched on the landscape of various cloud computing offerings, highlighting the differences among Google, Salesforce.com, Microsoft, and Amazon.com. Cloud vendors often seem to have started with trying to sell what they already had – Google has developed an extensive (and proprietary) infrastructure for highly available and scalable computing that they offer as Google App Engine (the idea is that someone developing their own Web app can plug into the Google infrastructure and achieve immediate “web scale”).

And Salesforce.com had developed a complex database and functionality infrastructure for supporting multiple tenants for their hosted application, including their own Java-like language, which they offer to potential cloud customers as Force.com.

Microsoft’s Azure offering seems to be aiming for a sort of middle ground – MSDN for years has operated a Web site of comparable size and complexity to any of the others, but Microsoft also supplies one of the industry’s leading application development environments (the .NET Framework). The goal of Azure is to supply the services you need to develop applications that run in the cloud.

However, the people in the room seemed most interested in the idea of being able to set up internal and external clouds of generic compute capacity (like Amazon EC2) that could be related, perhaps using virtual machines, and having the “elasticity” to add and remove capacity as needed. This seemed to be the most attractive aspect of the various approaches to cloud computing out there. VMware was mentioned a few times since some of the attendees were already using VMware for internal provisioning and could easily imagine an “elastic” scenario if VMware were also available in the cloud in a way that would allow application provisioning to be seamless across internal and external hosts.

This brought the discussion back to the programming model, as in what would you have to do (if anything) to your applications to enable this kind of elasticity in deployment?

Cloud sites offering various bits of “functionality” typically also offer a specific programming model for that functionality (again, Google App Engine and Force.com are examples, as is Microsoft’s Azure). The Microsoft folks in the room said that a future version of Azure would include the full SQL Server, to help support the goal of reusing existing applications (something a lot of customers apparently have been asking about).
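
To make the reuse argument a bit more concrete: the appeal of having the full SQL Server available in the cloud is that existing data access code should not need to change – in principle only the connection string moves. Here is a minimal sketch of what I mean, using plain ADO.NET with an invented server name and schema (this is my own illustration, not anything Microsoft showed):

```csharp
using System;
using System.Data.SqlClient;

class PhonebookLookup
{
    static void Main()
    {
        // Hypothetical connection string – the only piece that would change
        // if the same database moved from an internal server to a cloud host.
        const string connectionString =
            "Server=phonebook.cloudhost.example.com;Database=Phonebook;User Id=app;Password=secret;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT name, extension FROM employees WHERE dept = @d", conn))
        {
            cmd.Parameters.AddWithValue("@d", "Engineering");
            conn.Open();

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("{0}\t{1}", reader["name"], reader["extension"]);
            }
        }
    }
}
```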

The fact that cloud computing services may constrain what an application can do raises the question of whether we should be thinking about developing applications specifically for the cloud.

The controversy about cloud computing standards was noted, but we did not spend much time on it. The usual common-wisdom comments were made – that it’s still too early for standards, and that the various proposals lack major vendor backing – and we moved on.

We did spend some time talking about security and service level agreements, and it was suggested that certain types of applications might be better suited to deployment in the cloud than others, especially as these issues get sorted out. For example, company phonebook applications don’t typically have the same availability and security requirements that a stock trading or medical records processing application might have.

Certification would be another significant sign of cloud computing maturity – meaning certification that providers can meet the kinds of service level agreements companies look for in transactional applications.

And who does the data in the cloud belong to? What if the cloud is physically hosted in a different country? Legal issues may dictate that data belonging to citizens of a given country be physically stored within the geographical boundary of that country.

And what about proximity of data to its processing? Jim Gray’s research was cited to the effect that it’s always cheaper to compute where the data is than to move the data around in order to process it.

Speaking of sending data around, however, what’s the real difference between moving data between the cloud and a local data center, and moving data between a company’s own remote data center and that same local data center?

And finally, this meeting was my first experience with a fishbowl-style session. We used four chairs, and it seemed to work well. This is also sometimes called the “anti-meeting” style of meeting, and it feels a little like a “user-generated content” approach. No formal PPT. At the end everyone said they had learned a lot and enjoyed the discussion. So apparently it worked!

Stay tuned for news of our next IASA NE meeting.

Second Edition of TP Book out Today

It’s hard to believe, but the second edition of Principles of Transaction Processing is finally available. The simple table of contents is here, and the entire front section, including the dedication, contents, introduction and details of what’s changed, is here. The first chapter is available here as a free sample.

Photo of an early press run copy

It definitely turned out to be a lot more work than we expected when we created the proposal for the second edition almost four years ago.  And of course we originally expected to finish the project much sooner, about two years sooner.

But the benefit of the delay is that we were able to include more new products and technologies, such as EJB3, JPA, Spring, the .NET Framework’s WCF and system.transactions API, SOA, AJAX, REST/HTTP, and ADO.NET, even though it meant a lot more writing and rewriting.
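
To give a flavor of one of those additions: the System.Transactions API lets multiple resource operations commit or roll back together under an ambient transaction. This is just my own minimal sketch (invented table and connection string), not an excerpt from the book:

```csharp
using System;
using System.Data.SqlClient;
using System.Transactions;

class TransferExample
{
    static void Main()
    {
        const string connectionString = "Server=.;Database=Bank;Integrated Security=true;";

        // Both updates enlist in the same ambient transaction; if Complete()
        // is never called, disposing the scope rolls everything back.
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            new SqlCommand("UPDATE accounts SET balance = balance - 100 WHERE id = 1", conn)
                .ExecuteNonQuery();
            new SqlCommand("UPDATE accounts SET balance = balance + 100 WHERE id = 2", conn)
                .ExecuteNonQuery();

            scope.Complete();   // commit the ACID transaction
        }
    }
}
```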

The first edition was basically organized around the three-tier TP application architecture widely adopted at the time, using TP monitor products for examples of the implementations of the principles and concepts covered in the book. Then as now, we make sure what we describe is based on practical, real-world techniques, although we do mention a few topics more of academic interest.

The value of this book is that it explains how the world’s largest TP applications work – how they use techniques such as caching, remote communications (synchronous as well as asynchronous), replication, partitioning, persistence, queuing, database recovery, ACID transactions, long running transactions, performance and scalability techniques, locking, threading, business process management, and state management to process up to tens of thousands of transactions per second with high levels of reliability and availability. We explain the techniques in detail and show how they are programmed.
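
As a small illustration of one of those techniques – transactional queuing – the classic move is to enqueue a request under a transaction so the message only becomes visible if the surrounding work commits. A rough sketch using MSMQ’s System.Messaging API, with a made-up queue path (and assuming the queue already exists as a transactional queue):

```csharp
using System.Messaging;

class QueuedRequestExample
{
    static void Main()
    {
        // Hypothetical transactional private queue; create it ahead of time,
        // for example with MessageQueue.Create(queuePath, true).
        const string queuePath = @".\Private$\orders";

        using (var queue = new MessageQueue(queuePath))
        using (var tx = new MessageQueueTransaction())
        {
            tx.Begin();
            queue.Send("debit account 1; credit account 2", tx);
            // ... other work that should commit or abort atomically ...
            tx.Commit();   // only now does the message become visible to consumers
        }
    }
}
```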

These techniques are used in airline reservation systems, stock trading systems, large Web sites, and in operational computing supporting virtually every sector of the economy. We primarily use Java EE-compliant application servers and Microsoft’s .NET Framework for product and code examples, but we also cover popular persistence abstraction mechanisms, Web services and REST/HTTP based SOA, Spring,  integration with legacy TP monitors (the ones still in use), and popular TP standards.

We also took the opportunity to look forward and include a few words about the potential impact on TP applications of current trends toward cloud computing, solid state memory, streams and event processing, and the changing design assumptions in the software systems used to power large Web sites.

Personally this has been a great project to work on, despite its challenges, complexities, and pressures. It could not have been done without the really exceptional assistance from 35 reviewers who so generously contributed their expertise on a wide variety of topics. And it has been really great to work with Phil again.

Finally, the book is dedicated to Jim Gray, who was so helpful to us in the first edition, reviewed the second edition proposal, and still generally serves as an inspiration to all of us who work in the field of transaction processing.

The Artix Connect for WCF Beta Experience

A couple of days ago we announced the Artix Connect for WCF product, and posted a beta on our Web site. Today I finally got around to downloading it and trying it out with VS 2005. I am very pleased to say that it worked the first time! 😉

The kit comes with a sample project that uses two connections: one to a CORBA-based application and another to a JMS-based application. The CORBA software comes in the kit, and you can use just about any JMS provider – the default is FUSE Message Broker (a supported version of Apache ActiveMQ), which is open source and freely available. You can run everything on the same machine, per the instructions, but to use Connect in a multi-machine environment you would just reconfigure the network addresses of the CORBA and JMS software systems.

The way I usually talk about this is that WCF is for connecting to all things Windows, and Artix is for connecting to everything else. More precisely, Artix Connect for WCF is a Java-* interoperability tool that can be used from line of business adapters in Visual Studio 2005 and BizTalk Server 2006. One of the things you can connect to is Artix ESB, which connects to Java and native Tibco, Tuxedo, WebSphere MQ, C++ applications, etc. You can also connect to Artix Mainframe for accessing IMS and CICS based applications. And finally, Artix ESB can also be used to Web service enable all these existing systems and more, so if you are a WCF developer you have a lot of options for connecting to virtually anything non-Windows, while still coding as if you were using WCF.

The user’s guide takes you step by step through how to set up the CORBA and JMS servers, configure the line of business adapter in Visual Studio, uncomment a few lines of C# code, and build and run the project. And there you go. WCF talking to CORBA and JMS. It’s pretty fast, too, once it’s all up and running.
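
I won’t reproduce the sample’s code here, but the point is that the calling code on the Windows side is just ordinary WCF. Something along these lines – with the contract, operation, and endpoint below invented for illustration rather than taken from the kit (the adapter generates the real contract and supplies the actual binding and address):

```csharp
using System;
using System.ServiceModel;

// Invented contract for illustration; in the real sample the interface is
// generated for you by the line of business adapter in Visual Studio.
[ServiceContract]
public interface ICustomerLookup
{
    [OperationContract]
    string GetCustomerName(int customerId);
}

class Client
{
    static void Main()
    {
        // Plain WCF client plumbing; the binding and address here are placeholders.
        var factory = new ChannelFactory<ICustomerLookup>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8080/customerLookup"));

        ICustomerLookup proxy = factory.CreateChannel();
        Console.WriteLine(proxy.GetCustomerName(42));

        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}
```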

This is pretty exciting. I’ve been briefing reporters about it. I have a lot of friends at Microsoft (including on the WCF team), and have been blogging and talking about the recent interoperability announcements from Microsoft. Some folks have taken a “glass half empty” view, but I am definitely in the “half full” camp. I think these are very positive changes in direction for Microsoft, and I am very hopeful that Artix Connect will be embraced by the Microsoft community.

Anyway, if you get a chance to try it out, let us know what you think. We have a month or two before GA, so there’s still time to change things (and yes, I know, the EJB connector still needs to be finished). I had a good experience with it, but I am very curious to know what others think.

Microsoft Interoperability and Open Source

At EclipseCon this week Microsoft announced cooperation with Eclipse, including, among other things, support for the SWT technologies on Vista’s presentation framework – effectively allowing Eclipse developers to generate GUIs for Vista.

As with many of their recent announcements concerning interoperability and open source, some observers are enthusiastic while others criticize the fact that they didn’t go further and suggest they never will. However, to me this continues to be a glass-half-full situation, in which I take Microsoft’s efforts in the context of their culture and history. These are big steps for them, and I think they represent a serious and significant change.

Last fall I attended an ISV event at Redmond, to which we were invited because of our interoperability solutions between Microsoft, Java, and other environments. I couldn’t help but notice that Ray Ozzie’s name was mentioned several times by the presenters. That’s why I made the 2008 prediction for Sys-Con that I did about Microsoft and the enterprise. It seemed to me as if Ray Ozzie’s influence is starting to be felt. Burton Group analyst Peter O’Kelly was also quoted as saying so in the InfoWorld follow-up article.

Does this mean that Microsoft is starting to become more serious about their interests in interoperability and open source? In the past I always got the impression that their efforts were hampered by the fact that going further would imply recognizing the legitimacy of a platform other than Windows. Perhaps Ray Ozzie is able to bring a helpful external perspective. Perhaps reality is sinking in that the world of heterogeneous platforms is unlikely to change.

The main news for those of us offering interoperability solutions is that Microsoft is opening up some of its internal APIs and publishing its proprietary extensions to standards, which will make it easier to integrate with its products.

They are also offering “reasonable licensing” terms on their patents – I’m not sure how much benefit this is, but they have also loosened up the terms and conditions under which other vendors can develop products that “infringe” on a Microsoft patent – i.e., they don’t have to get a license up front now, but instead only have to negotiate a “reasonable” fee when they ship.

The recent steps toward improved interoperability support and an improved relationship with open source communities may strike some as insufficient or incomplete, but to me they represent a significant change in tone and strategy for Microsoft.

Microsoft Memos

Well, everyone else is talking about this today – might as well chime in!

I just finished the Ozzie memo. I had read the Gates email yesterday. I thought the email was right on target. It is a clear recognition of a significant new industry trend and of the need to change and adapt yet again. But I wonder whether it will be possible this time. They have spent the past few years institutionalizing their very successful culture, and this change represents much more than a change in technology direction – it is a fundamental business model and culture change.

I do not see evidence in Ozzie’s memo of any specific plan of attack. The description of the problem seems very good, but there’s nothing about a solution, or about how exactly he expects Microsoft folks to go about implementing the proposed changes. He’s asking for a study – for each division to respond with a proposal to solve the problems he describes – and he has created a process within which he will be assigning people to work on solving them. But again, nothing concrete about how it’s going to happen, or what it will mean in terms of products and technologies. We all know the problem; what we are looking for is what Microsoft will do about it, and there’s nothing here about that.

Of course, the email and memo were very likely designed to be released to the public, which is pretty clever if so, since a large part of the change they’re talking about involves innovating in the open, as Google has been doing, for example. Google releases its works in progress publicly, as does the open source community. Traditionally, Microsoft’s innovation is performed behind closed doors, and even if you sign an NDA you may still not get access to everything they’re doing or thinking. So leaking the memo and email would be a clever move in this direction, even though they don’t really contain any specific details about their plans.

Unlike the earlier efforts they cite as precedent, such as the famous Internet memo and the bet-the-company move to .NET and XML/Web services, there’s no concrete action here other than a call to work on proposals and processes for solving the problem. No solutions are proposed.

Furthermore, this comes at a time when Microsoft is heads down working to fulfill its commitments and promises around Vista, Office 12, Communications Framework, Workflow Foundation, a new SQL Server, a new Visual Studio, etc. These are all scheduled to ship around this time next year, and I can’t imagine everything is ready. I have more of a picture of the folks in Redmond sweating out the final days of testing, bug fixing, etc. in a big rush toward the release date, as always happens in large software projects, rather than sitting around contemplating solutions to new problems.

Bill Gates says he wants quick action, but Ozzie says this initiative is intended to start after this next generation of products ships and people start to free up. That means it will be nearly a year before the company can really start working on this initiative.

Beyond the technical challenges, which amount to turning around in mid-stream, this also represents a cultural challenge, since Microsoft has built its entire business around the shrink-wrapped license. A recent Gartner survey showed that Office 2003’s biggest competitor is Office 2000. The article about upgrading to Office 2003 shows that the majority of users are still on Office 2000; in fact, a higher percentage are on Office 97 than on Office 2003. And now we are talking about introducing Office 12, and working on services at the same time (or after the release of Office 12, when staff will be freed up), while customers are still not moving to Office 2003.

In other words, Microsoft’s fundamental business model – the one that drives all the cash flow – is under threat because people are not upgrading: people do not seem to need new features in Office or Windows, preferring what they have. At the same time, they are trying to change the business model of how they make money on software by turning to advertising. They do not have a proposed solution for that, just the problem statement, and they are still struggling to execute their current plans.

Finally, I think the recent reorg into divisions will work against this proposed change to services. Ozzie is asking each of the divisions to come up with plans and proposals to address the problems he outlines, but they are as likely to compete with each other as to cooperate – as he points out, they are already doing so in several areas.