In Part 1 of this thread, we described how constraints on memory and compute forced legacy programmers to develop extremely efficient, tight business applications. These constraints have been removed on the modern large clusters we call the cloud.

But storage has also become cheap and available. The 100MB large-format disk drives of the 1990s have been replaced by multi-terabyte drives today… an improvement of more than four orders of magnitude at a fraction of the price. Further, the introduction of solid state devices, driven by Moore’s Law, has made extremely high performance I/O generally available. In other words, another constraint has been removed, and the programming effort required to work around slow I/O is no longer such an issue.

I want to say again… efficiency is a good thing… until it becomes an impediment. Describing the impediment is just a few paragraphs away.

When the world wide web emerged, programmers were faced with huge scale-up issues. Millions of users might be accessing multiple terabytes of data, and there was no way to share a single copy of those terabytes across all of the servers required to support them. The solution was often to replicate the data thousands of times so that all of those users could access the data individually. Sharing was accomplished by replication, and very sophisticated replication techniques were baked into the systems infrastructure to make this easy.

Today, distributing applications across the cloud with replicated data underneath is not a difficult problem to solve. We understand the limits of the CAP Theorem and build systems within the constraints of the theorem. It is just not difficult to share data through replication. It is built into the cloud.
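To make that trade-off concrete, here is a toy sketch (Python, illustrative only… not a real datastore, and the names are made up for this example) in which reads are always served from a replica: the read stays available, but it can return stale data until replication catches up. That is exactly the kind of compromise the CAP Theorem forces you to choose deliberately.

```python
# Toy illustration of the consistency/availability trade-off: reads are
# served from replicas that may lag the primary, so a read stays available
# but can return stale data.

import random


class ReplicatedStore:
    def __init__(self, replica_count=3):
        self.primary = {}
        self.replicas = [{} for _ in range(replica_count)]

    def write(self, key, value):
        # Writes land on the primary immediately...
        self.primary[key] = value

    def replicate(self):
        # ...and reach the replicas later, when replication runs.
        for replica in self.replicas:
            replica.update(self.primary)

    def read(self, key):
        # Serve the read from any replica: always available, possibly stale.
        replica = random.choice(self.replicas)
        return replica.get(key)


store = ReplicatedStore()
store.write("price", 100)
print(store.read("price"))   # None (stale) until replication catches up
store.replicate()
print(store.read("price"))   # 100 once the replicas have converged
```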

The rapid growth of new technology fueled the rise of several very large technology companies, and the scale of those companies fed the technology boom. The availability of this new technology then enabled still more new companies, and the race we are in the middle of now took off. Today technology advances so fast that the shelf life of advanced software is no more than five years… and often less. Keeping up with the pace of change is the new imperative for software developers.

The way we keep up is to build software that is optimized for change. There are means to this end.

We now develop software in small chunks that are insulated such that they can be changed without impacting the chunks around them. We deploy software in insulated modules with published interfaces that can easily be replaced as long as the interfaces do not change. We reuse chunks and modules as often as possible when we find newer versions that have been changed for the better by someone else. These chunks and modules are published and shared as open source in a very large open community that further accelerates the change. Applications are composed of the chunks and modules, and these are published as well. All of this is deployed on modern cloud infrastructure so that scalable hardware can be used to solve the performance problems we used to spend so much time working around.
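As a small illustration of what a published interface buys you, here is a minimal Python sketch. The names (PaymentGateway, LegacyGateway, NewGateway, checkout) are hypothetical, invented only for this example: the caller depends on the interface alone, so either implementation can be swapped in without touching the code around it.

```python
# A minimal sketch of a "published interface" whose implementations can be
# swapped without changing the callers.

from abc import ABC, abstractmethod


class PaymentGateway(ABC):
    """The published interface: callers depend only on this contract."""

    @abstractmethod
    def charge(self, account_id: str, cents: int) -> bool:
        ...


class LegacyGateway(PaymentGateway):
    def charge(self, account_id: str, cents: int) -> bool:
        print(f"legacy provider charging {cents} cents to {account_id}")
        return True


class NewGateway(PaymentGateway):
    def charge(self, account_id: str, cents: int) -> bool:
        print(f"new provider charging {cents} cents to {account_id}")
        return True


def checkout(gateway: PaymentGateway, account_id: str, cents: int) -> None:
    # This caller never changes when the implementation behind the
    # interface is replaced.
    if gateway.charge(account_id, cents):
        print("order confirmed")


checkout(LegacyGateway(), "acct-42", 1999)
checkout(NewGateway(), "acct-42", 1999)   # swapped in, caller untouched
```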

This is not to say that there is no code optimization happening… but it is focused and shared. An explicit decision is made to optimize for performance… an investment is made to optimize for performance… and this happens only after a performance issue has been identified. The ability to move fast, to change, trumps performance for the vast majority of code developed today. This realization sets me up for the next part of this thread.
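In practice, "optimize only after a performance issue is identified" starts with measurement. Here is a minimal sketch using nothing beyond Python’s standard profiler; the naive find_duplicates function is a made-up stand-in for the kind of code you leave alone until the numbers say otherwise.

```python
# A minimal sketch of "measure before you optimize": profile the code,
# see where the time actually goes, and only then invest in tuning.

import cProfile
import pstats


def find_duplicates(items):
    # Deliberately naive O(n^2) scan, the kind of code you leave alone
    # until profiling shows it actually matters.
    dups = []
    for i, a in enumerate(items):
        if a in items[i + 1:] and a not in dups:
            dups.append(a)
    return dups


profiler = cProfile.Profile()
profiler.enable()
find_duplicates(list(range(1000)) * 2)
profiler.disable()

# Print the few places where the time is concentrated; optimize those,
# and only those, once the numbers say it is worth the effort.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```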

Evolution is the best method for managing change. Intelligent design, where our intelligence is somewhat less capable than the intelligence usually associated with that phrase, will not effectively get us to an unknown future place. We have to evolve our software as new technology presents itself. We have to let software architecture emerge and let enterprise architecture emerge. The next post will consider this.

Summing up: the cloud has evolved into an extremely capable distributed and elastic infrastructure where it is easy for developers to build modern scalable software. The cloud has removed the constraints on computation, memory, and storage. As a result, developers can focus on solving business problems as fast as possible. Performance is no longer the primary concern, especially for the relatively small-scale business applications required by even large government agencies. Today we optimize for change instead of optimizing for performance. Performance comes easily and inexpensively in the cloud.

One last thought. I might consider letting an extremely capable architect define my enterprise, knowing that if we get it right there will be a big payoff. But here is my test: show me the architect who predicted, ten years ago, where we are today with mobile technology, with machine learning, with the cloud, with a ubiquitous internet, and I will give that person the authority to develop an architecture for the next ten years. If your architects are not that prescient, then consider the emerging architecture approach I’ll discuss next.