The Inevitability of Exponential – Revisiting Roadkill

This is the first essay in a short series based on a document, written by Nathan Myhrvold in 1993, predicting the future of computing.

The value of intelligent prescience

Occasionally we are gifted, via a variety of media, with the thoughts of unusually clever people, thoughts that, in retrospect, seem stunningly prescient. Not that we should necessarily be surprised. If anyone were to be stunningly prescient, we should most certainly expect it to be the unusually clever people.

Sometimes what seems like an uncanny ability to forecast the future is also the revelation that some people, well connected and talented people certainly, had information available to them that was not widely available to the rest of us ordinary folk: not at the time they wrote their predictions, and often not in the later years when we finally get to read those predictions. By definition these are often the leaders and innovators of our society, and nowadays, largely courtesy of the internet, we are able to read exactly what they once said or wrote for ears and eyes other than our own more quickly than ever before.

This speed of access makes the experience particularly interesting. The events described, even though they are definitively historic (as in descriptive of the past), are also events that we have lived through. Moreover, because the original document is predictive, where it is accurate we get the value of viewpoints that are untainted by the actual implementation of the things they describe.

This reduces (although I dislike the pejorative association of the word ‘reduce’ in this context) the observations to the academic or cerebral, and in so doing gives us the opportunity to more easily strip out the commentary of actual political and social contexts, victories and disasters, thereby enabling new insights and perspectives on exactly those contexts, victories and disasters.

Using the lessons of history to make sense of the present is not a new idea. The cycle over which we can extract this value, however, is now much shorter than ever before, increasing both the frequency with which this rich seam of derivative wisdom can be mined and the number of people exposed to it.

One such document is this one, written by Nathan Myhrvold, formerly the Chief Technology Officer at Microsoft, in September 1993.

The work is not done for you; it is a document that inspires your own thinking. It’s an internal memo written, at least in part, on a plane (according to the text), but one which I also suspect was the product of some deep introspection and sustained thought over time. It’s long too, coming in at a little over 20,000 words, so it takes time to read, but it still comes with a level of recommendation that justifies the time investment required.

I am left wondering exactly who was on the cc list for such a monstrous internal email. I suspect it was the more senior end of the spectrum, although I would be most impressed if it had had a wider distribution.

The memo, titled “Roadkill on the Information Highway”, is written in 3 broad sections. He starts with a deep examination of the technical state of play of computing in 1993 and demonstrates with some flair why the key factor of the last 40 years has been the exponential growth of computing power, as represented by the price/performance ratio.

The second section examines how this power of the exponential is going to affect various societal and sociological institutions.

Finally Myhrvold examines the entities that he considers will end up as Roadkill, those that for various reasons will not keep pace with the exponential growth of computing.

Sometimes it’s difficult to remember that this was written 20 years ago, as so much of it seems particularly relevant to today’s challenges. I’m going to summarise some of the ideas and observations that stood out for me, and also add in some other ideas suggested by the text, thereby mixing Myhrvold’s predictions with other thinkers’ observations and theories.

Finally I am going to look at some of the big things that didn’t get into Myhrvold’s stake in the ground and how what he observed and predicted has influenced the big players of today’s landscape. Myhrvold wrote this memo for his business colleagues in a world that was pretty much Microsoft’s and which hadn’t conceived of either Facebook or Google. The 1993 Myhrvold didn’t forecast either of them but he does describe the fundamental changes and innovations that led to them both. It is fascinating stuff.

Carlota Perez and Installation

The timing of the memo is also interesting, coming as it did in 1993. It affords us a look at the thinking, on an industry-wide scale, of the dominant tech player in the years leading up to the dotcom bubble of 1997 – 2000 and its subsequent burst. That the key topic of the memo is the evolution, or perhaps more fundamentally the oncoming extinction, of various sectors of the tech market also adds a certain frisson.

Microsoft survived the bubble. Others clearly didn’t, but with the exception of Novell, which in 1993 was Microsoft’s main rival, no companies are mentioned in depth. Instead Myhrvold examines the underlying tech, business and customer needs and trends, whether those customers are the enterprise or simply individuals.

The timing of the memo in relation to the bubble burst is made more poignant still in light of the theories of Carlota Perez, which show that there are significant (long) technology cycles that define the structures of society at large. She has identified 5 such cycles dating back to 1771:

1771: The industrial revolution, machines, factories and canals

1829: The age of steam, coal, iron and railways

1875: The age of steel and heavy engineering (electrical, chemical, civil and naval)

1908: The age of the automobile, oil, petrochemicals and mass production

1971: The age of information technology…

Her theories identify 2 phases in each cycle, separated by a financial crash: installation and deployment. During the installation phase, political will incentivises financial capital through de-regulation.

This achieves 2 significant goals. Firstly, making it easy to move capital enables a huge swathe of, often failed, experimentation. In each cycle the dominant technology idea is already somewhat defined, so experimentation is not about inventing the paradigm so much as inventing the utilities, functions and infrastructure of the new technology. There is typically a lot of activity and much of it is adversarial. The second goal of the installation phase, therefore, is to select the winners: those businesses that will be the unstated fundamental backbone of the next 30 or 40 years. This is somewhat achieved via the Darwinian reality of de-regulated, finance-led capital. The market is made king and, as such, selects the winners.

All of this leads us to a phase, post financial meltdown, in which the installed technology, through the selected business winners, is deployed in new markets. This is achieved via a fundamental change in capital’s dominance brought about by changes in regulation, which is another reason why the phases are separated by a financial crash. Deployment is characterised by the creation of new infrastructure, often a long-term venture and a job much better suited to the more patient production-capital paradigm. These changes do not happen quickly. Indeed the last cycle, which delivered the age of the automobile, was characterised by a 13-year gap, and the small matter of a world war, between the financial crash of the early 1930s and the massive “rebuilding” projects of the post-war West. Whereas Europe needed rebuilding after the war, the US did not suffer the same critical destruction at all; nonetheless, ‘rebuilding’ is what it got.

The current cycle’s changeover should have been triggered by the dotcom crash but, speculatively, thanks to cheap oil and cheap labour (China), the existing structures (finance capital) maintained their economic hegemony. We should now be looking for that change as we watch the 2008 financial crash continue to shake the world via, amongst other problems, the big issues still unresolved in the Eurozone.

These issues could be the subject of a whole essay of their own, but in this context we can simply treat the information in Myhrvold’s memo as a very timely ‘state of play’ of technology at an unheralded, but no less fundamental, point in the pertinent tech cycle.

The Inevitability of Exponential

“In the last 20 years the overall improvement in the price/performance ratio of computing has been about a factor of one million.  There is every reason to believe that this will continue for the next 20 years; in fact, the technological road map appears reasonably clear.  This will yield another factor of one million by 2013”

“In order to put this into perspective, a factor of one million reduces a year of computing time to just 30 seconds”
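
Myhrvold’s arithmetic here is easy to sanity-check. Below is a minimal back-of-envelope sketch in Python (my own illustration, assuming nothing more exotic than a 365-day year):

    # Check of "a factor of one million reduces a year of computing to ~30 seconds"
    SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # roughly 31.5 million seconds
    SPEEDUP = 1_000_000                     # the factor-of-a-million improvement

    print(SECONDS_PER_YEAR / SPEEDUP)       # ~31.5, i.e. just over 30 seconds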

Aside from a fairly simple reading and repetition of commonplace Moore’s law arguments, which equate the business growth of the sector and of individual companies (such as Microsoft) since the early 70s with the unprecedented growth in computing power, Myhrvold also makes a number of more insightful observations derived from that fundamental reality. It is also intriguing to read that this relationship was not widely understood, even amongst the market makers of modern technology. He makes the point that knowing the reality of consistent performance improvement is distinct from believing that it meant everything would keep changing. Considering that there are still huge swathes of industry and business that do not accept this truth today, perhaps we should not be surprised.

With the fundamental driver of the market being the underlying growth in the performance/price ratio, Myhrvold defines the key metric for market share as share of CPU cycles. This perhaps seems a little odd to us today. However, if you remember that Myhrvold was CTO, perhaps it isn’t surprising that he would think in terms of technical metrics, as opposed to business metrics, even when considering business markers such as share of market. He does offer an extrapolation that at some point this metric should expand to include share of total data transmitted, owing to the impending ‘information highway’ – remember the internet was not commercialised until 1995 and Myhrvold is talking about computing, not the internet (this is also, as an aside, an interesting marker of how intertwined the concept of the internet has become with the concept of computing today – they were once quite distinctly separate).

The improvements in hardware leading to ever faster computing therefore needed to be matched by ever increasing use of the newly available CPU cycles in order to maintain or increase market share. Sounds straightforward, right? It’s a simple precept for sure, and one which is easy to accept from the perspective of 2013, where we have the advantage of knowing that all the increase in computing power has been eagerly hoovered up by both the consumer and the enterprise.

The computing power required to handle the evolution of increasingly rich media types has, in reality, been very manageable. We know this from experience today, but Myhrvold gives us a rundown of the potential impact of the evolution of both inputs and outputs from the perspective of 1993, looking at the loads that might be created by using different human senses as input devices and by the clearly foreseen move to considerably heavier use of video and richer media experiences. Myhrvold even postulates a measure of touch roughly equivalent to a pixel, the ‘touchel’, and shows that computing, aided by the inevitable march of increasing power, will easily surpass the limits of the human senses as both input and output.

This may all sound interesting yet trivial. However, the combination of the business need to maintain share of CPU cycles (and eventually of data transmitted) with this demonstrated ability to outstrip the computing requirements of the human senses (and the inevitability of exponential) leads to a more profound observation: that the future, for a business in the business of computing, lies in software, not hardware.

“If the mainframe folks had stopped to make the exponential extrapolation – and acted upon it – then it naturally follows that microprocessor based systems would deliver computing to the masses, that they would ultimately surpass mainframes and minis, that hardware should be decoupled from software because the driving forces are different, and finally that software would be a central locus of value.”

We can see quite easily that, as a business proposition, the more flexible nature of software makes it better adapted to an ever changing environment, the absence of a manufacturing infrastructure being just one reason. However, if computing power is to exceed human sensory requirements then, aside from complex reasoning challenges (NP problems), computing must extend to the masses, and the masses will be using computers for more trivial functions. These functions will be dealt with by software, not hardware. Or rather, the innovation that creates these functions will be led, and enabled, by software, not hardware.

Networking

We are now left with a question. If the business future in 1993 was to be software, not hardware, how can we explain the continuing presence of Apple within the market, much less their business success seemingly built around the perceived superiority of their hardware? Well, the memo gives us an answer for that too, one that also explains a core reality of the Microsoft dominance of the last 30 years.

“The only hardware companies that have ever made significant money are those that managed to create an asset – the hardware architecture – which was above the fray of individual implementations and thus could enjoy a longer life span.  Software is able to do the same trick in an even better fashion.  Like a hardware architecture it lives for a long time – more than a dozen years so far for MS DOS – but the tooling cost is far lower.”

Apple lives, very successfully, today because of Mac OS and now iOS. Moreover, this also somewhat explains the rigid control issues of the Apple paradigm. By exerting such strong control over ‘user experience’, Apple has an excuse for acting as sole gatekeeper to both its software architecture and its hardware architecture, reducing any moves by competitors to innovate away the lifespan of its products: a neat double lock that helps hedge against the unrelenting tide of performance growth and drives superb margins.

Myhrvold’s examination of the exponential yields some other profound observations.

The universe of exponential growth covers more than just clock speed and CPU cycles; it also affects the increasing density of storage. This gives us an interesting principle of forecasting, a tool if you will. In 1993, if you were in the upper echelons of Microsoft, it seems that things such as video on demand were an accepted inevitability. And they were right: we see that today. This inevitability in one area can provide fuel for forecasting in other areas, because demand from the one drives the cost curve of the other.

“….In fact, it might cost even less because video on demand systems used to replace Blockbuster and other video rental stores will dramatically increase the market for storage and should dramatically drive the price learning curve”

But more powerful than this is the following assertion.

“The key issue behind this point is that computing is on a very fast exponential growth curve.  Anything which isn’t exponential in growth, or which is exponential but with a slower growth rate, will quickly and inexorably be overwhelmed”

“The trick for the next decade of the computing industry will hinge on being very smart about recognizing which things scale with or faster than computing and what things do not.”

Does this still hold today? Myhrvold’s timespan is a decade, measured from 1993, so we are long past that. I think there are 2 more pieces of data in this part of his memo that help answer that question in the affirmative.

Firstly, Myhrvold shows us, through a fairly amusing (his phrase) measure of latency, that size matters and that the computers of the future must be smaller. There is a distinction here that is quite important: he is not saying that computers of the future can be smaller; he is saying that they must be smaller. Part of the latency equation is the distance that the signal must travel.

“If you have a computer with a femtosecond cycle time, then it takes about 1 million CPU cycles for a signal to travel one foot.   As a point of comparison, a hot processor of 1993 with a 100 MHz clock rate (10 nanosecond cycle time) will [sic] would have a similar relative wait time in terms of clock cycles if it was sending a signal about 1860 miles”
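
The numbers in that comparison can be checked against nothing more than the speed of light. Here is a rough sketch of the calculation (my own, assuming signals travel at roughly light speed in a vacuum; real interconnects are slower, which only strengthens the point):

    # Rough check of the latency comparison in the quote above
    C = 3.0e8        # approximate signal speed in metres per second
    FOOT = 0.3048    # one foot in metres

    # A femtosecond-cycle (1e-15 s) machine waiting for a signal to cross one foot
    cycles_to_cross_a_foot = (FOOT / C) / 1e-15
    print(cycles_to_cross_a_foot)            # ~1.0e6 cycles, i.e. about a million

    # The same number of 10-nanosecond (100 MHz) cycles buys this much signal travel
    distance_miles = (1e6 * 10e-9 * C) / 1609.34
    print(distance_miles)                    # ~1864 miles, close to Myhrvold's 1860

Read either way, the only route to keeping latency in step with the clock is to shrink the physical size of the machine.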

He also states categorically that they will be extremely cheap, as any manufacturing process that can operate at this scale will, needs must, be one of intense replication.

Alongside these observations he also projects a future where the inevitability of exponential leads to the usurpation of PCs.

“One aspect of the price/performance trend discussed above is that a PC class machine will get amazingly powerful, but an equal consequence is that extremely cheap consumer computing devices will emerge with the same or higher computing power than today’s PCs, but with far higher volume.”

He didn’t manage to combine these trends, considerable reduction in size and cheap, commonplace, high-powered computing, to reach a conclusion of smartphones and mobile computing, but with the benefit of 20/20 hindsight we can see that he had identified some of the preconditions that would lead to the cycle of computing devices we are experiencing today.

Clayton M. Christensen’s theory of disruptive innovation is often slightly misunderstood. It is often read as a new, superior technology arriving and disrupting the dominance of an incumbent. However, it is really about the arrival of a new business model. The shift from desktop to mobile is a classic example. Although there is clear invention and innovation in the technology that enables today’s smartphones, what we actually have is the disruption of the PC/laptop market by the arrival of computing devices that are decidedly inferior in scope and performance but massively cheaper (often zero cost, depending on the contract). Whereas the computing power in your phone is often described in marvellous comparisons to the computing power that put man on the moon, it is clearly less than that in your desktop PC/Mac. Yet still it is widely accepted that mobile is the new platform, and there really is no reason to disagree.

So, should we still be looking for innovations and ideas that scale faster than computing? Yes, we most certainly should. The transfer to mobile has opened a whole new world of opportunity for innovation. Alongside this, as we shall see in the second section of Myhrvold’s memo (and the second part of this essay series), we should perhaps also consider whether the growth of networking, or distribution as Myhrvold describes it, has stolen the crown from computing, and from pure clock speed, as the growth vector against which to measure new innovation.

The Limits of Moore’s Law

Finally, and almost as an aside, this whole discussion leads to one key, almost scary question. What happens when Moore’s law stops delivering ever faster clock speeds? And when will this happen? Beyond the implications for computing, it seems to me that this has significant potential implications for our economies too.

Michio Kaku’s book, Physics of the Future, provides some indications. The speed of computing can ultimately be reduced to the size of the switches inside the chips. Today they are etched using UV light. The wavelength of UV light can be tuned tighter and tighter, but there is a limit, which turns out to be transistors (switches) about 30 atoms across. Even pending new manufacturing processes, we get into the realm of quantum uncertainties at about 5 atoms’ breadth.
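
To put those atom counts into the units the industry usually quotes, here is a very rough conversion (my own illustration, assuming a silicon atom is on the order of 0.2 nm across, a ballpark figure rather than a precise lattice measurement):

    # Very rough translation of the atom-count limits into feature sizes
    ATOM_NM = 0.2    # assumed atomic diameter in nanometres (ballpark)

    for atoms in (30, 5):
        print(atoms, "atoms is roughly", atoms * ATOM_NM, "nm")
    # 30 atoms -> ~6 nm: around where the UV etching limit described above bites
    # 5 atoms  -> ~1 nm: the scale at which quantum uncertainty dominates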

Kaku is clear that we will reach these limits shortly after 2020. That’s not very far away. Again, this could be the subject of a whole essay and so will not be dealt with here in any level of detail. Suffice it to say that the march of performance will still drive change in society for an interesting number of years yet.

Part 2 to follow…
