Category Archives: Cloud computing

Mergers and acquisitions – Don’t be like Salesforce

The success of a merger or acquisition depends quite a bit on the integration of the IT systems of both companies.

In a recent article published by Data Center Dynamics, I offer some perspective on four major acquisitions from 2020:

– Actifio by Google

– Slack by Salesforce

– Kustomer by Facebook

– Wondery by Amazon

Of Google, Amazon, Facebook, and Salesforce, only Salesforce does not run on modern, commodity server infrastructure – meaning both hardware and software. This means the integration of their traditional CRM application with Slack’s stack will be much less straightforward.

This by itself doesn’t make or break an acquisition, but because Salesforce has its eye on competing with Microsoft, among others, in offering integrated capabilities, it’s definitely going to present a challenge.

If you are considering an acquisition, or just looking to improve your chances of integrating with another IT environment, it will really help to build a foundation of abstract APIs and – if you can – migrate to cloud native infrastructure.
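To make that concrete, here’s a minimal sketch of what a “foundation of abstract APIs” can look like in code. The class and field names are my own invention, not any vendor’s actual SDK; the point is simply that the rest of the application codes against an abstract interface, so integrating a second system after an acquisition means writing one new adapter rather than touching every caller.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical record type; a real CRM object would carry many more fields.
@dataclass
class Contact:
    id: str
    name: str
    email: str

# The abstract API the rest of the application codes against.
class CrmBackend(ABC):
    @abstractmethod
    def get_contact(self, contact_id: str) -> Contact: ...

    @abstractmethod
    def upsert_contact(self, contact: Contact) -> None: ...

# One adapter per underlying system; only these classes know vendor details.
class LegacyCrmAdapter(CrmBackend):
    def get_contact(self, contact_id: str) -> Contact:
        ...  # call the legacy CRM's API here

    def upsert_contact(self, contact: Contact) -> None:
        ...  # map fields and call the legacy CRM's API here

class AcquiredSystemAdapter(CrmBackend):
    def get_contact(self, contact_id: str) -> Contact:
        ...  # call the acquired company's API here

    def upsert_contact(self, contact: Contact) -> None:
        ...  # map fields and call the acquired company's API here
```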

It’s not just about the pizza

Before I begin, let me note that I’ve broken another rule of blogging: I promised to write a new entry soon, and then didn’t. Anyway, here is a new entry. I won’t say anything about the next one 😉

In the world of microservices, the term “two-pizza team” has a specific meaning: the right size for a dev team is one you can feed with two pizzas. The structure and responsibilities of development teams are, if anything, more important than the technology they use.

In a previous blog post, I wrote about the influence of commodity server infrastructure on system software and application design, in particular how microservices evolved as the best way to design applications for deployment on modern IT infrastructure.

But the common misconceptions about microservices go beyond that. It’s not just about new technology and techniques. It’s also about changing the development culture. And of course the pizza that fuels it.

I couldn’t find Jeff Bezos’s original memo anywhere, but I found this post that contains it and describes a lot of what the Amazon developers had to learn to implement SOA successfully. Bezos’s 2002 memo is famous for mandating SOA, or API-first development, for the Amazon Web site – a course Amazon has successfully followed in building a platform and creating a website that can be updated literally hundreds of times a day.

In Jeff Bezos’s world, the two-pizza team is a key principle of productivity: it constrains the size of a development team, on the premise that larger teams are less productive. Marry this with the SOA and API-first directive and you end up with many small teams building individual components of the application and deploying code without disturbing other teams, since everyone adheres to the SOA principle of strongly governed interfaces (basically, you don’t deploy incompatible changes to an interface without getting agreement from the other teams that depend on it).
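One lightweight way to see what “strongly governed interfaces” can mean in practice is a consumer-driven contract test: the consuming team pins the parts of the provider’s response it relies on, and any incompatible change fails the build. This is just a sketch under my own assumptions – the URL and field names are made up, and Amazon’s actual governance machinery is certainly more elaborate.

```python
# Sketch of interface governance as a test. The endpoint and field names are
# hypothetical; the idea is that consumers encode what they depend on, so a
# provider team can't ship an incompatible interface change unnoticed.
import requests

REQUIRED_FIELDS = {"order_id", "status", "total_cents"}  # fields this consumer uses

def test_order_service_contract():
    resp = requests.get("https://orders.internal.example.com/v1/orders/123")
    assert resp.status_code == 200
    body = resp.json()
    missing = REQUIRED_FIELDS - set(body)
    # Adding new fields is backward compatible; removing or renaming these is not.
    assert not missing, f"breaking interface change, missing: {missing}"
```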

From talking with Amazon developers, I understand the dev teams take complete responsibility for the function (or functions) they deliver to the Web site in the form of microservices, including support: the dev team wears beepers in case a production problem occurs.

Part of this is because of the way commodity computing works. It’s the “pets vs cattle” analogy, among other things. When you treat servers as “cattle” you have to automate the system administration function; it’s simply not possible to manually administer hundreds of thousands of PC-based servers. AWS has posted a good definition of DevOps as the merger of the dev and ops functions. This represents a significant culture change for traditional “scale up” IT environments, since there is no longer a separate Ops team and Dev has to provision all the infrastructure (data stores, messaging, networks, etc.) using APIs.
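Here’s a minimal sketch of what “provisioning infrastructure using APIs” looks like, using AWS’s boto3 SDK as an example. The AMI ID and tag values are placeholders, and in practice a team would more likely use a declarative tool (CloudFormation, Terraform) than raw SDK calls, but the point stands: capacity comes from an API call, not a ticket to a separate Ops group.

```python
# Sketch: a dev team provisions its own compute through an API call.
# Assumes AWS credentials are already configured; the AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder image
    InstanceType="t3.micro",
    MinCount=2,
    MaxCount=2,                        # "cattle": identical, disposable instances
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "team", "Value": "checkout-service"}],  # hypothetical tag
    }],
)

print([i["InstanceId"] for i in response["Instances"]])
```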

Even MORE APIs, you say! Yes. So let’s conclude by revisiting a key part of this discussion – API governance. Why haven’t more companies followed Amazon’s lead? Amazon has proven the API-first SOA design approach to be very successful, if not essential, in modern computing and digitization.

Mark Settle in Forbes argues it’s because of the governance, or lack thereof. Yes, it’s incredibly hard to change the development culture to two-pizza teams developing microservices with strongly managed interfaces. You need someone like Jeff Bezos to mandate it. Or at least have the organizational will to change the development culture.

The Amazon Web site “Death Star” diagram from 2008, essentially showing the microservices landscape.

What everyone gets wrong about microservices

Martin Fowler has collected a lot of good information about microservices, but does not understand their origin as the “right size” unit of work for commodity data center infrastructure. This is apparent from his advice to start with a monolith and decompose it, as if the target deployment environment had no bearing on design.

I suppose that’s the way he and many others would naturally approach it, given that the frame of reference is the traditional 3-tier application server model and the abstraction of Java from the hardware and operating system it runs on. This leads him and others to view microservices as an evolution of development practices to facilitate rapid change, rather than a fundamental shift toward designing for appropriately sized units of work.

I give Martin a lot of credit for identifying and defining development best practices, including helping establish the agile method. But microservices did not originate as a development best practice in the 3-tier app server environment, nor did they become popular because people were deconstructing monoliths. Microservices are a creature of the infrastructure they were designed for – the commodity data center.

I don’t think Martin and others who view microservices as a development trend take into account the strong relationship between the deployment infrastructure and the origin of the microservices design pattern. I give Martin a lot of credit for including Stefan Tilkov’s clear rebuttal to the monolith-first silliness, though.

Google invented the commodity data center infrastructure about 20 years ago, and it has become the de facto infrastructure for IT since then. It is the most cost-effective IT infrastructure ever built. Pretty much all Web companies use it, and most pre-Web companies are planning to adopt it. Google’s original servers are in the Computer History Museum for this reason (photos below).

Commodity data center infrastructure offers a compelling range of benefits in addition to cost-effectiveness, such as auto-scaling, large data set capacity, and automatic resiliency. To achieve those benefits, though, software has to be engineered specifically to run in this environment, which is basically where microservices come from. The core design assumptions allow a very high tolerance for hardware and software failures, but they are very different from the assumptions on which traditional IT infrastructure is based: applications have to be broken into smaller units of work suitable for deployment on PCs, then connected over the network into larger units of work.
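As a concrete (and deliberately tiny) illustration of such a unit of work, here is a sketch of a stateless microservice written with Flask. The endpoints and names are invented; what matters is that it owns one narrow function, keeps no local state, and can be killed, restarted, or replicated on any commodity box without losing anything.

```python
# Sketch of a stateless microservice: one narrow responsibility, no local state,
# safe to run as many identical copies as needed on commodity hardware.
# Endpoint and service names are invented for illustration.
from flask import Flask, jsonify

app = Flask(__name__)

@app.get("/healthz")
def health():
    # Lets the platform detect failed instances and replace them automatically.
    return jsonify(status="ok")

@app.get("/price/<sku>")
def price(sku: str):
    # A real service would read from a shared data store rather than memory,
    # so any instance can answer any request.
    return jsonify(sku=sku, price_cents=1999)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```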

I can understand a view of development best practices unconnected to deployment infrastructure considerations – after all, the traditional IT industry has been on a path for years toward “write once, run anywhere,” and it’s easy to assume that language and operating system abstractions will take care of hardware infrastructure mapping considerations.

But in the case of microservices, it is a big miss to ignore the target deployment infrastructure as the origin of the design pattern, since both the hardware and the software on which they are intended to run have such different characteristics.

Do You Need to Develop for the Cloud?

And other notes from last week’s IASA NE cloud computing roundtable (more about the meeting format later; also note the Web site currently does not have up-to-date information – we are volunteers, after all ;-).

We had good attendance – I counted 26 during the session, including folks from Pegasystems, Microsoft, Lattix, Kronos, SAP, The Hartford, Juniper, Tufts, Citizens Bank, Reliable Software, Waters, and Progress Software,  among others (I am sure I missed some), providing an interesting cross-section of views.

The major points were that no one is really thinking about the cloud as a place for accessing hosted functionality, and everyone is wondering whether or not they should be thinking about developing applications specifically for the cloud.

We touched on the landscape of various cloud computing offerings, highlighting the differences among Google, Salesforce.com, Microsoft, and Amazon.com. Cloud vendors often seem to have started by trying to sell what they already had – Google has developed an extensive (and proprietary) infrastructure for highly available and scalable computing that it offers as Google App Engine (the idea is that someone developing their own Web app can plug into the Google infrastructure and achieve immediate “web scale”).

And Salesforce.com has developed a complex database and functionality infrastructure for supporting multiple tenants of its hosted application, including its own Java-like language, which it offers to potential cloud customers as Force.com.

Microsoft’s Azure offering seems to be aiming for a sort of middle ground – MSDN for years has operated a Web site of comparable size and complexity to any of the others, but Microsoft also supplies one of the industry’s leading application development environments (the .NET Framework). The goal of Azure is to supply the services you need to develop applications that run in the cloud.

However, the people in the room seemed most interested in the idea of being able to set up internal and external clouds of generic compute capacity (like Amazon EC2) that could be linked, perhaps using virtual machines, with the “elasticity” to add and remove capacity as needed. This seemed to be the most attractive aspect of the various approaches to cloud computing out there. VMware was mentioned a few times, since some of the attendees were already using VMware for internal provisioning and could easily imagine an “elastic” scenario if VMware were also available in the cloud in a way that would allow application provisioning to be seamless across internal and external hosts.

This brought the discussion back to the programming model – as in, what would you have to do (if anything) to your applications to enable this kind of elasticity in deployment?

Cloud sites offering various bits of “functionality” typically also offer a specific programming model for that functionality (again, Google App Engine and Force.com are examples, as is Microsoft’s Azure). The Microsoft folks in the room said that a future version of Azure would include the full SQL Server, to help support the goal of reusing existing applications (something a lot of customers apparently have been asking about).

The fact that cloud computing services may constrain what an application can do raises the question of whether we should be thinking about developing applications specifically for the cloud.

The controversy about cloud computing standards was noted, but we did not spend much time on it. The common-wisdom comments were made – that it’s too early for standards and that the various proposals lack major vendor backing – and we moved on.

We did spend some time talking about security and service-level agreements, and it was suggested that certain types of applications might be better suited to deployment in the cloud than others, especially as these issues get sorted out. For example, a company phonebook application doesn’t typically have the same availability and security requirements that a stock trading or medical records processing application might have.

Certification would be another significant sign of cloud computing maturity – meaning certification against the kinds of service-level agreements companies look for in transactional applications.

And who does the data in the cloud belong to? What if the cloud is physically hosted in a different country? Legal issues may dictate that data belonging to citizens of a given country be physically stored within the geographical boundary of that country.

And what about proximity of data to its processing? Jim Gray‘s research was cited to say that it’s always cheaper to compute where the data is than to move the data around in order to process it.
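A quick back-of-the-envelope calculation (my own numbers, not something presented at the meeting) shows why: just moving a modest data set can take the better part of a day before any processing starts.

```python
# Back-of-the-envelope: how long does it take just to move the data?
# The data set size and link speed below are illustrative assumptions.
data_tb = 10        # data set size in terabytes
link_gbps = 1.0     # sustained throughput in gigabits per second

bits = data_tb * 8 * 10**12              # terabytes -> bits (decimal units)
seconds = bits / (link_gbps * 10**9)
print(f"{seconds / 3600:.1f} hours just to transfer")   # -> 22.2 hours
```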

Speaking of sending data around, though, what’s the real difference between moving data between the cloud and a local data center, and moving data between a company’s own data centers?

And finally, this meeting was my first experience with a fishbowl-style session. We used four chairs, and it seemed to work well. This is also sometimes called the “anti-meeting” style of meeting, and seems a little like a “user-generated content” style of meeting. No formal PPT. At the end, everyone said they had learned a lot and enjoyed the discussion. So apparently it worked!

Stay tuned for news of our next IASA NE meeting.