Let me switch from governance to technology to establish another thread for this blog.

It is not very profound to state that cloud computing is a big deal. What I will try to do in this thread is to explain why it is a big deal from several different angles. My hypothesis rests on three contentions:

  1. That very cheap compute, plus elastic compute, fundamentally changes how we think about building software.
  2. That very inexpensive storage fundamentally changes how we manage data.
  3. That the accelerating pace of technological change, plus cheap compute, plus cheap storage, fundamentally changes how we architect the enterprise.

I’ll introduce these ideas now and go deeper as the blog progresses.

Way back, the amount of available computing power constrained how we thought about software. We used assembler languages to build small applications that could squeeze into small memory footprints. We counted instructions to utilize the costly CPU as efficiently as possible. We made applications tightly coupled to reduce the overhead of crossing application and network boundaries. We deferred processing to batch jobs run off-shift to preserve scarce resources during prime hours. The result was a set of very efficient business applications that required very skilled programmers to maintain those efficiencies.

Further, prior to the 1990s, the business requirement for compute often outpaced the growth in capacity described by Moore’s Law. This led to ever-tighter development and tuning of business applications [a little aside… mainframers would have you believe that the mainframe is mega-efficient… IMO this is not true. It is the applications we built in those constrained early days that are mega-efficient. The von Neumann architecture is the same for both mainframes and microprocessors]. These efficient applications were, and are, hard to maintain and hard to extend.

We made a trade-off: we accepted applications that were hard to extend in exchange for efficiency. There really was no choice.

The world is very different today. The power of a single microprocessor exceeds that of the biggest mainframe of 1990. The ability to lash microprocessors together in a large server made compute even more inexpensive and accessible. Today, we lash servers together into clouds of compute where a single business application can utilize thousands of servers. Further, these servers are equipped with terabytes of memory, so both the compute and the memory constraints are more or less removed.

But these giant clusters of compute and memory were first assembled 15-20 years ago. They are not new. It is not the giant cluster infrastructure that has made cloud computing real; it is the maturity of the tools for developing applications on that infrastructure. Writing sophisticated code for these large clusters has become easy as pie. Today, a programmer with no previous cloud experience can log in, compose a program in an easy-to-understand language like Python, deploy it in a container, and scale to thousands of machines in less than an hour.
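
To make that concrete, here is a minimal sketch of the kind of stateless service such a programmer might write, using only the Python standard library. The port number is an illustrative assumption; the container packaging and the replica count are left to whatever platform runs it, which can stamp out as many identical copies as the load demands.

```python
# A minimal stateless web service: every replica behaves identically,
# so an orchestrator can run one copy or thousands behind a load balancer.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # No local state is read or written, so any replica can answer
        # any request interchangeably.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from one of many identical containers\n")

if __name__ == "__main__":
    # Listen on a fixed port (8080 is an assumed convention here); the
    # platform maps it and decides how many replicas to run.
    HTTPServer(("0.0.0.0", 8080), HelloHandler).serve_forever()
```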

In addition, this cloud infrastructure provides not just compute that scales up… it also scales down. This means that instead of buying a set of on-premises servers sized for the maximum load, you can rent as many servers as you require, scale up in real time… then scale back down, paying only for what you use. This provides a different, maybe more compelling, efficiency.
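
A back-of-the-envelope comparison shows why paying for use can beat owning capacity sized for peak. Every number below is an illustrative assumption, not a quote from any provider:

```python
# Owning servers sized for peak load vs. renting elastically by the hour.
# All figures here are assumed, illustrative numbers.
PEAK_SERVERS = 100                 # servers needed at the busiest hour
AVG_SERVERS = 12                   # average need across the whole month
HOURS_PER_MONTH = 730
OWN_COST_PER_SERVER_HOUR = 0.50    # amortized hardware + power + ops
RENT_COST_PER_SERVER_HOUR = 0.90   # assumed cloud hourly premium

# On-premises: you pay for peak capacity around the clock.
on_prem = PEAK_SERVERS * HOURS_PER_MONTH * OWN_COST_PER_SERVER_HOUR
# Elastic: you pay the (higher) hourly rate, but only for average use.
elastic = AVG_SERVERS * HOURS_PER_MONTH * RENT_COST_PER_SERVER_HOUR

print(f"on-premises, sized for peak:   ${on_prem:,.0f}/month")   # $36,500
print(f"elastic, pay for what you use: ${elastic:,.0f}/month")   # $7,884
```

Even at nearly twice the assumed hourly rate, renting wins whenever demand is much spikier than its average, which is the shape of many business workloads.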

The trade-off we made in the first forty years of programming, efficiency over maintainability, is no longer required. Still, writing inefficient code has no merit on its own, and even with the built-in efficiency of an elastically scalable cloud, we might ask: why not build very efficient cloud-native code?

In the next post, Part 2, we’ll explore a new imperative that has made efficiency not just less important, but actually an obstacle.