On the occasion of Wm S. B’s 100th birthday

Below is the brief interview with William S. Burroughs I published in the May-June 1981 issue of Newcomers magazine.


I got him to agree to do it by giving him a photocopy of an Orgone Energy Bulletin (published by Wilhelm Reich, whose work I discovered reading The Job: Interviews with William S. Burroughs).

The issue I gave him had an article about the Orgone Motor, which sounded a lot like a radiometer since it supposedly worked off of free atmospheric energy (“orgone” energy, or biological energy – the centerpiece of Reich’s later work).

The Orgone Energy Bulletin was a hard thing to get in those days, although now of course it’s all online.

I had obtained my photocopies through interlibrary loan from the Library of Congress by requesting them from the Antioch College Library (where I went to college and organized an independent study on Reich).

Anyway, I went up to Burroughs after his reading (he was in Chicago promoting “Cities of the Red Night”) and, knowing of his interest in orgone energy, gave him one of the photocopies in return for a promise to respond to a brief set of interview questions for my magazine.

Unfortunately, the Orgone Energy Bulletin did not disclose the critical “factor Y” that made the Orgone motor actually work (and give free energy to the world). We were supposed to find out when Reich’s lab at Rangeley, Maine, was unsealed in 2007, 50 years after his death. But I don’t think we did.

I sent the questions off and the answers arrived a few days later, all typed on a single sheet of paper. Burroughs was living in Kansas City at the time. In the magazine I also reported on his book signing appearance at Barbara’s and his reading at Tuts, and reviewed Cities of the Red Night. I will post those articles another time.

The Interview

So here, in honor of William S. Burroughs’s 100th birthday, is the brief interview:

1. What do you say when someone asks you, “What is Cities of the Red Night about”?

It’s about a remake of history and a second chance. Sooner or later for every species time runs out. Mutate or die. This is not a religious or moral but a biologic imperative. The human species is not designed to remain in its present state any more than a tadpole is designed to remain a tadpole.

2. What did you think about coming to Chicago on a publicity tour like a normal author?

I felt normal. All my reading tours have been publicity tours and I have given more than a hundred readings in the past six years. One thing authors have in common: they are in the business of writing and selling books.

3. What has been the reaction so far to Cities of the Red Night?

Critical reaction has been mixed, two good reviews to one bad. Word of mouth has been unanimously enthusiastic and positive.

4. Why is there no mention of the word love in Cities of the Red Night, though there is ample opportunity for it?

The word love has been so vulgarized and loaded with sentimental connotations that I prefer not to use it. In this book the characters are working for a common end which they take for granted. Many of them experience the mixture of liking and sexual attraction that is as close as I can come to a definition of love. It is not necessary to state the obvious.

Remembering Martin Luther King, Jr

I remember Martin Luther King Jr very clearly, although I do not remember meeting him, as I’m told I did. In the early ’60s my father worked at Brown University in the chaplain’s office. A protégé of William Sloane Coffin’s from Yale Divinity School, from which he received his Doctor of Divinity degree, my father was responsible in those days for inviting speakers to visit Brown. Many civil rights activists were among them, MLK Jr and Malcolm X included.

Almost always after the events, speakers were invited to a reception at our house, or someone else’s house, close to the campus. I remember that time mostly as a time of a lot of people coming and going. I was 6 years old when MLK visited Brown University in November 1960. My father and MLK were the same age. 

The famous “I Have a Dream” speech occurred exactly on my 9th birthday – August 28, 1963. My father was in Washington, DC to attend, along with a couple of busloads he had organized of residents of Middletown, CT and faculty from Wesleyan University, where he was by then chaplain.
I watched the whole event, hoping to catch a glimpse of my father, and listening to all the speeches, hymns, and songs. I remember MLK Jr’s speech as an excellent one, but it was more or less expected in those days that he would give a great speech. He always seemed to. We (my mother and I) always watched his speeches, and very often my father was there. The “Dream” speech was considered as one of his best, to be sure, but at the time it did not have the stature it has today. 
I remember most clearly my father’s eulogy in Wesleyan Chapel after Dr. King was assassinated in Memphis five years later. He could barely get the words out between the tears. I was crying, too. The whole place was. 
When my father traveled to Mississippi and Alabama as a Freedom Rider in the mid-’60s, marching from Selma to Montgomery and organizing food and clothing drives for Belzoni, we of course worried that he might not come back. My mother put a brave face on it for us, but several Northerners, including clergymen, had been murdered in cold blood in those days for trying to intervene in segregated society. The Middletown Press ran an article at the time recounting my father’s squaring off with a Mississippi policeman who was trying to shut down a march. A witness in the party was quoted as saying he came dangerously close to being beaten with a nightstick and arrested, as so many were. But he continued to go, and we were proud of him, especially after President Johnson signed the Civil Rights Act of 1964 to end segregation. 
It is hard to believe how bad things were in the late ’50s and early ’60s for black people in the South, or how many brave black and white people were injured or gave their lives, a threat always in the air, in the struggle for equality. So much of that struggle was led by Martin Luther King, Jr. Yes, there were many, many others, including my father, who did their part and took their chances. But he was indisputably the leader, not only in speeches but also in nonviolent action. 
I remember the big struggle to get MLK Jr day declared a national holiday. I still see the nasty, ignorant, and hateful comments on Facebook and other Web pages about him and the day. Back then, it was literally a matter of life and death, and of serious injury, jail time, and financial loss to stand up for equal rights. 
He should never have had to do what he did. Equal rights for black people should never have been an issue. But it was, and he fixed it, along with my father and thousands of other brave souls. Let’s celebrate them all today, and clearly remember what inspired us all to act in the name of justice for all. 

Vertical Epic tasting results, full set, Dec 28

Results from the Stone Vertical Epic tasting Dec. 28.

Overall, Stone Brewing pulled it off. All the beers were drinkable, and most were exceptionally good. Each was a different style, by design. So it was not a true vertical in the sense of the same beer produced each year and tasted as it aged, as might be the case with Orval, for example. But it was something no other brewer has done, and you have to hand it to them.

We had a great time with the event, tasting from oldest to newest, and comparing one to another, using the homebrew recipes for detailed information about the brews. One of these days I need to try making one, probably the ’02. The first and best (at least in our scoring)…

On a scale of 1 to 5 mugs, completely subjective, in order from top to bottom (best to worst) here is the average of scores from Kirk Searle, Brian Kelly, Helen Grembowicz, and myself:

4.3    2002
4.2    2006
4.175  2005
4.175  2009
3.725  2003
3.55   2008
3.5    2007
3.375  2004
3.2    2012
2.3    2011
1.375  2010

We had printouts of the homebrew recipes for each (excepting 2012, which hasn’t been published) and checked the lists of ingredients for spices, hops, sugars, malt mixtures, etc. The 2010 suffered most from being unusual, I suppose, with its Muscat grape juice and chamomile unexpectedly cloying.

Some other comments:

2002: Like honey mead
Doesn’t taste like a wheat beer (40% wheat)
Like angels dancing on my tongue

2003: More alcoholic than the 2002, more malt forward

2004: Starting to taste what I don’t like about Belgians

2005: Like Ommegang (dubbel)

2006: This is more of a thirst quencher. Could drink a bottle of this [not sure of the earlier beers in other words]

2007: Peachy nose

2008: Getting towards tasting like the bottom of the stale vegetable crisper

2009: Like a porter

Little bit of a burnt nose

2010: [Muscat and chamomile detected by astute tasters]
This is everything I detest in a Belgian beer, all rolled into one

2012: Pretty good example of the Christmas spicy beer thing

I worked in Santa Clara from early 2000 to mid-2002, and looking through the local beer selections I discovered Stone’s Arrogant Bastard, which soon became a staple of the fridge and of parties at the apartment complex where we lived (we being employees of IONA, where I worked at the time, sent out to assist the transition of the Netfish acquisition).

When I moved back to MA, I looked for Stone brews and found the 03-03-03 Vertical Epic. Following the instructions on the label I bought one of them to keep every year after that, waiting for the release of the 12-12-12.

Because I did not have an ’02, after a couple of years of indecision following the prices on eBay, I finally decided to go for it, and was lucky enough to get a good bottle. As it turned out, it was the star of the event. What a great event!

Homebrew recipes (see bottom of blog for complete list)

The 11 bottles and tasting glasses laid out on the kitchen table below Santa say it all.

Letterman looks like he’s grabbing for the remainder of the VE tasting glasses as the evening comes to a happy close.

Kirk, Helen, and Nataly (who wasn’t drinking)

Nataly, Jane (who wasn’t drinking beer), and Brian

Spectrum Road

Cindy Blackman killed it all night. On the two Cream tunes (Politician and Sunshine of Your Love) I did not miss Ginger Baker. Vernon Reid seemed a bit shy about the Eric Clapton parts. Not that he couldn’t play them. He can play anything. But he seemed uncomfortable about it. Cindy just kept on hammering the kit, as she did all night long. I think she’s my new favorite drummer. Great show!

Do You Need to Develop for the Cloud?

And other notes from last week’s IASA NE cloud computing roundtable (more about the meeting format later – also note the Web site currently does not have up to date information – we are volunteers after all ;-).

We had good attendance – I counted 26 during the session, including folks from Pegasystems, Microsoft, Lattix, Kronos, SAP, The Hartford, Juniper, Tufts, Citizens Bank, Reliable Software, Waters, and Progress Software,  among others (I am sure I missed some), providing an interesting cross-section of views.

The major points were that no one is really thinking about the cloud as a place for accessing hosted functionality, and everyone is wondering whether or not they should be thinking about developing applications specifically for the cloud.

We touched on the landscape of various cloud computing offerings, highlighting the differences among Google, SalesForce.com, Microsoft, and Amazon.com.  Cloud vendors often seem to have started with trying to sell what they already had – Google has developed an extensive (and proprietary) infrastructure for highly available and scalable computing that they offer as Google App Engine (the idea is that someone developing their own Web app can plug into the Google infrastructure and achieve immediate “web scale”).

And Salesforce.com had developed a complex database and functionality infrastructure for supporting multiple tenants for their hosted application, including their own Java-like language, which they offer to potential cloud customers as Force.com.

Microsoft’s Azure offering seems to be aiming for a sort of middle ground – MSDN for years has operated a Web site of comparable size and complexity to any of the others, but Microsoft also supplies one of the industry’s leading application development environments (the .NET Framework). The goal of Azure is to supply the services you need to develop applications that run in the cloud.

However, the people in the room seemed most interested in the idea of being able to set up internal and external clouds of generic compute capacity (like Amazon EC2) that could be related, perhaps using virtual machines, and having the “elasticity” to add and remove capacity as needed. This seemed to be the most attractive aspect of the various approaches to cloud computing out there. VMware was mentioned a few times since some of the attendees were already using VMware for internal provisioning and could easily imagine an “elastic” scenario if VMware were also available in the cloud in a way that would allow application provisioning to be seamless across internal and external hosts.

This brought the discussion back to the programming model, as in: what would you have to do (if anything) to your applications to enable this kind of elasticity in deployment?

Cloud sites offering various bits of  “functionality” typically also offer a specific programming model for that functionality (again Google App Engine and Force.com are examples, as is Microsoft’s Azure). The Microsoft folks in the room said that a future version of Azure would include the entire SQL Server, to help support the goal of reusing existing applications (which a lot of customers apparently have been asking about).

The fact that cloud computing services may constrain what an application can do raises the question of whether we should be thinking about developing applications specifically for the cloud.

The controversy about cloud computing standards was noted, but we did not spend much time on it. The common wisdom comments were made about being too early for standards, and about the various proposals lacking major vendor backing, and we moved on.

We did spend some time talking about security, and service level agreements, and it was suggested that certain types of applications might be better suited to deployment in the cloud than others, especially as these issues get sorted out. For example, company phonebook applications don’t typically have the same availability and security requirements that a stock trading or medical records processing application might have.

Certification would be another significant sign of cloud computing maturity, meaning certification for the kinds of service level agreements companies look for in transactional applications.

And who does the data in the cloud belong to? What if the cloud is physically hosted in a different country?  Legal issues may dictate data belonging to citizens of a given country be physically stored within the geographical boundary of that country.

And what about proximity of data to its processing? Jim Gray’s research was cited to say that it’s always cheaper to compute where the data is than to move the data around in order to process it.

Speaking of sending data around, however, what’s the real difference between moving data between the cloud and a local data center, and moving it between a company’s local and remote data centers?

And finally, this meeting was my first experience with a fishbowl-style session. We used four chairs, and it seemed to work well. This is also sometimes called the “anti-meeting” style of meeting, and seems a little like a “user-generated content” style of meeting. No formal PPT. At the end everyone said they had learned a lot and enjoyed the discussion. So apparently it worked!

Stay tuned for news of our next IASA NE meeting.

Second Edition of TP Book out Today

It’s hard to believe, but the second edition of Principles of Transaction Processing is finally available. The simple table of contents is here, and the entire front section, including the dedication, contents, introduction and details of what’s changed, is here. The first chapter is available here as a free sample.

Photo of an early press run copy

Photo of an early press run copy

It definitely turned out to be a lot more work than we expected when we created the proposal for the second edition almost four years ago.  And of course we originally expected to finish the project much sooner, about two years sooner.

But the benefit of the delay is that we were able to include more new products and technologies, such as EJB3, JPA, Spring, the .NET Framework’s WCF and System.Transactions API, SOA, AJAX, REST/HTTP, and ADO.NET, even though it meant a lot more writing and rewriting.

The first edition was basically organized around the three-tier TP application architecture widely adopted at the time, using TP monitor products for examples of the implementations of the principles and concepts covered in the book. Then as now, we make sure what we describe is based on practical, real-world techniques, although we do mention a few topics more of academic interest.

The value of this book is that it explains how the world’s largest TP applications work – how they use techniques such as caching, remote communications (synchronous as well as asynchronous), replication, partitioning, persistence, queuing, database recovery, ACID transactions, long-running transactions, performance and scalability techniques, locking, threading, business process management, and state management to process up to tens of thousands of transactions per second with high levels of reliability and availability. We explain the techniques in detail and show how they are programmed.

These techniques are used in airline reservation systems, stock trading systems, large Web sites, and in operational computing supporting virtually every sector of the economy. We primarily use Java EE-compliant application servers and Microsoft’s .NET Framework for product and code examples, but we also cover popular persistence abstraction mechanisms, Web services and REST/HTTP based SOA, Spring,  integration with legacy TP monitors (the ones still in use), and popular TP standards.

We also took the opportunity to look forward and include a few words about the potential impact on TP applications of current trends toward cloud computing, solid state memory, streams and event processing, and the changing design assumptions in the software systems used to power large Web sites.

Personally this has been a great project to work on, despite its challenges, complexities, and pressures. It could not have been done without the really exceptional assistance from 35 reviewers who so generously contributed their expertise on a wide variety of topics. And it has been really great to work with Phil again.

Finally, the book is dedicated to Jim Gray, who was so helpful to us in the first edition, reviewed the second edition proposal, and still generally serves as an inspiration to all of us who work in the field of transaction processing.

IASA NE Roundtable June 23 on Cloud Computing

I just wanted to pass along the notice for the June 23 meeting of the IASA NE Chapter, which will be a roundtable on “cloud computing” hosted by Microsoft but chaired by Michael Stiefel of Reliable Software.

Details and registration

(No, you do not need to be a member of IASA although of course we encourage that. Basic membership is free, and full membership is only $35. )

What should be interesting this time is that everyone seems to be doing something slightly different around cloud computing, whether it’s Amazon, Microsoft, Google, VMware, SalesForce, etc. Cloud computing is definitely an exciting new area, but like many other new and exciting areas it is subject to considerable hype and exaggeration. Good questions include:

  • What exactly can you do in the cloud?
  • What are the restrictions, if any, on the kind of programs and data you can put “in the cloud”?
  • Can you set up a private cloud if you want to?

I think the trend toward cloud computing is closely related to the trend toward commodity data centers, since you kind of need one of those to offer a cloud service in the first place. Like the ones James Hamilton describes in this PowerPoint presentation, which I heard him present at HPTS 2007. (Looks like James has left Microsoft and joined Amazon BTW.)

I would expect a lot of heated discussion from the folks who usually attend the IASA meetings. Attendance has been steadily increasing since we founded the local chapter about a year ago, so I would hope for and expect a very lively discussion.

As usual, the event includes networking time and food & drinks (not usually beer though – have to work on that I guess 😉

Please be sure to register in advance so Microsoft knows how much food to buy.

Thanks & hope to see you there!


What we Learned Writing the Second Edition of the TP Book

After proofing the index last week, Phil and I are finally done with the second edition of the TP book!

A lot has changed since the first edition came out in 1997.

For one thing, the TP monitor is no longer the main product type used to construct TP applications. Many components formerly found only within TP monitors — such as forms systems for GUIs, databases, system administration tools, and communication protocols — have evolved to become independent products.

And although we can reasonably claim that the .NET Framework and Java EE compliant application servers are the preeminent development and production environments for TP applications, it seems as if the three-tier application architecture around which we were able to structure the first edition has evolved into a multitier architecture.

Another big change is represented by the emergence of “scale out” designs that are replacing “scale up” designs at large Web sites. The scale out designs tend to rely on different mechanisms than the scale up designs for implementing transaction processing features and functions – for example, they rely much more on stateless and asynchronous communication protocols.
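
One hallmark of the “scale out” approach can be sketched in a few lines: handlers that keep no session state and treat every request as idempotent, so any replica can process a message (or a retry of it) delivered over an unreliable asynchronous channel. This is my own illustrative Python sketch, not an example from the book; the names and the in-memory dictionaries are stand-ins for what would really be shared, durable storage.

```python
# Idempotent, stateless request handling: each request carries a unique id
# and all the context needed to process it, so any replica can handle it,
# and duplicate deliveries (retries) are harmless.

processed = {}                 # request_id -> prior result (would be durable/shared)
balances = {"acct-1": 100}     # toy application state

def apply_transfer(request_id, account, amount):
    """Apply a request at most once, no matter how many times it is delivered."""
    if request_id in processed:
        return processed[request_id]          # duplicate delivery: replay prior result
    balances[account] = balances.get(account, 0) + amount
    processed[request_id] = balances[account]
    return processed[request_id]

print(apply_transfer("req-42", "acct-1", 25))  # first delivery applies the change
print(apply_transfer("req-42", "acct-1", 25))  # retry returns the same result, no double-apply
```

With at-least-once delivery from a queue, the duplicate-suppression table is what makes retries safe without any distributed locking.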

By mid-2007 I had started to think it would be interesting to center the second edition around these new scale out architectures, like those implemented by large scale Web sites such as Amazon.com, eBay, PayPal, SalesForce, BetFair, etc. Phil and I had a great opportunity to learn about what these companies were doing at HPTS in October of ’07. Unfortunately, it was impossible to identify sufficiently common patterns to do so, since each of the large Web sites has created a different way to implement a “scale out” architecture.

(BTW this was a fascinating conference, since the room was full of people who basically created the application servers, TP monitors, and relational databases successfully used in most “scale up” style TP applications. But they had to sit there and hear, over and over again, that these products just didn’t meet the requirements of large Web sites.)

Nonetheless we managed to fit everything in the book – how caching is done, replication, how replicas synchronize, descriptions of compensating and nested transactions, locking, two-phase commit, synchronous and asynchronous communications, RESTful transactions, and so on.

As in the first edition, we have kept the focus on what’s really being used in production environments (with the help of our many generous reviewers). We completely rewrote the product information to focus on current Java and .NET products and TP standards.

And finally, we identify the future trends toward commodity data centers, cloud computing, solid state disks, and multi-core processors, among others, which are going to significantly impact and influence the industry during the next decade or so.

One of the most interesting things I learned about in doing the book was how to design a transaction as a RESTful resource (see RESTful Web Services for more details). But once again, many of the familiar principles and concepts still apply – such as “pseudo-conversations” that have been used in TP applications for a long time to avoid holding locks during interactions with users.
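
The pseudo-conversation technique mentioned above amounts to optimistic, version-checked updates: read the record and end the transaction, let the user take as long as they like, then apply a conditional update that fails if anyone else changed the record in the meantime. Here is a minimal sketch in Python with SQLite; the seats table and its columns are my own illustration, not an example from the book.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (id INTEGER PRIMARY KEY, holder TEXT, version INTEGER)")
conn.execute("INSERT INTO seats VALUES (1, NULL, 0)")
conn.commit()

def read_seat(seat_id):
    """Step 1: read the record and its version, then end the transaction."""
    row = conn.execute(
        "SELECT holder, version FROM seats WHERE id = ?", (seat_id,)).fetchone()
    return row  # (holder, version) -- no lock is held after this returns

def reserve_seat(seat_id, holder, expected_version):
    """Step 2 (possibly minutes later): conditional update.
    Succeeds only if no other conversation changed the row meanwhile."""
    cur = conn.execute(
        "UPDATE seats SET holder = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (holder, seat_id, expected_version))
    conn.commit()
    return cur.rowcount == 1  # True only if our earlier read was still current

_, v = read_seat(1)
assert reserve_seat(1, "alice", v)      # first writer wins
assert not reserve_seat(1, "bob", v)    # stale version is rejected
```

No lock is held between read_seat and reserve_seat, so many users can “converse” with the system concurrently; a stale update simply fails, and the client re-reads and retries.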

Fascinating to me are the different assumptions people make in these two worlds: the REST/HTTP “scale out” designs and the mainframe-derived “scale up” designs. This is likely to remain an area of continual evolution.

IASA NE gaining momentum – April meeting set for 23rd

After nearly a year, it is starting to look like IASA New England is gaining some momentum. For those of you in the area, please register here for the next meeting (April 23), at which we’ll be hearing about Intuit’s SaaS/cloud initiative from their QuickBase architect, Jim Salem.

Personally, I’m looking forward to hearing the details of their active-active load balancing…

I was sorry to have to miss the March meeting due to attending EclipseCon / OSGi DevCon, but I heard it went very well and that Hub did a great job.

At the meeting we also announced that Intuit and IBM were joining in and sponsoring the April and May meetings, respectively.  This is excellent news for the NE architect community since it means we’ll have more support and access to additional excellent speakers for the meetings.

We also announced a panel discussion on cloud computing for June, a social event for July, and a regional event for this fall.  Things are really starting to fall into place!

I’m happy about this since it will be great to have an active community of software architects in the NE area. I personally always learn something new at the meetings and have a great time discussing topics with the other members.  Hope to see you on the 23rd!

IBM/Sun Post: I Forgot About Solaris

When I wrote about IBM’s potential interest in acquiring Sun to gain control of Java, I forgot about the Solaris factor. But this was mentioned in yesterday’s Times article about the acquisition, and I have seen it mentioned other places as well.

What I forgot about was Red Hat and Linux. IBM sells a lot of Red Hat Linux. After Red Hat acquired JBoss in 2006 they began competing with IBM’s WebSphere division, which must have put strain on their partnership around Linux. IBM started hedging its bets with Novell’s SUSE Linux, but open sourcing Solaris would give IBM its own alternative to Red Hat Linux.

Add that to the potential for gaining control of Java and you have two pretty compelling reasons for IBM to acquire Sun.  Of course there are probably any number of other factors, but these strike me as the most strategic.