The Innovator’s Dilemma
Posted: April 9, 2013
Sometimes it’s really worth watching long non-fiction videos. This YouTube video of Clayton Christensen’s keynote Pullias Lecture at USC last year is one such. The video is 90 minutes long, but if you are interested in how new technologies disrupt older ones then I can’t recommend it highly enough; it’s fantastic.
Clayton Christensen has, quite literally, written the book on disruptive innovation. If you haven’t heard his ideas and think you know how new ideas disrupt old ones, then I’m pretty sure there are some surprises (pleasant ones) ahead for you.
This particular keynote is about the disruption that technology is currently visiting on the field of education, one of the last big industries to fall under the spell of such disruptive cycles. Without dwelling on education itself, although it will get some mention, the four ideas from this talk that I want to highlight are…
- Disruption always happens at the bottom of the market with inferior technology
- The new technology gets a hold by competing against non-consumption
- Customers are eventually sucked from the old businesses to the new, instead of the technology moving from the new businesses to the old
- The metrics which define success will change for the new technology
Disruption occurs at the bottom of the market
This is the first bit of Dr Christensen’s thinking that grabbed my attention, actually some time before I watched this video. It’s easy to think of disruptive technology as being driven by superior functionality, blowing away competitors’ existing products and bringing unprecedented value to the consumer. It allows us to put change into an easy framework: “…this technology will dominate because it is better”. It is, however, rarely what actually happens.
We are, at heart, simple creatures, and the truth is that change is almost always driven by improvements in price and accessibility (itself often a function of price). These price advantages are delivered by trading off quality, even after factoring in the value of the new technology.
Dr Christensen begins his talk with the example of the global steel industry and the arrival, in the 1960s, of the mini mill electric furnace (yes, this does sound like a Dr Evil plot), a method of making steel that had a 20% price advantage over the incumbent big integrated steel mills. The trade-off for this huge price differential was a similarly huge differential in quality. Mini mills were small and easy to set up, melting down scrap to reform it as low-grade steel. At first their output was only good enough to be sold in the rebar market (the steel reinforcing bars that get sunk into concrete), the lowest grade of commercially useful steel.
The cost advantage of the mini mills, driving prices down in the bottom tiers of the market, eventually forced the incumbents to depart from those same tiers, often with a huge sigh of relief, believing that they were abandoning low-margin, unprofitable businesses, which they were.
The factor that hadn’t been incorporated into these decisions was that as soon as the last incumbent player left, the price, and hence that 20% advantage, collapsed. Price, of course, like all market differentiators, is relative. This put profit pressure on the mini mill operators, who found that their 20% price advantage needed an expensive competitor to exist. In short, they had been commoditised.
However, because these low-tier markets had enabled good profits for the new technology entrants (enjoying significant cost advantages), and because the incumbents took some time to leave, the march of technological improvement meant that the mini mill process could soon be applied to the next production grade up, and the whole cycle started again. Eventually, from what was an inferior and completely unthreatening position, the new technology disrupted the whole industry.
It turns out that there are four production grades in the steel industry: rebar, angle iron, structural steel and sheet steel. Mini mills have, over time, disrupted every level, one after the other, with the incumbents both seeing the threat and, by making what seemed like sensible plays (getting out of the low-margin sectors of their markets and increasing profitability), enabling it. What makes this talk so fascinating is that we get to see how intelligent people, making seemingly intelligent decisions, (eventually) kill their own businesses.
Competing against non-consumption
Sometimes the incumbents do try to incorporate the new technology, and still don’t manage to lead the changes in the market. Consider vacuum tubes and transistors. When transistors first became available they weren’t capable of handling the power requirements and heat production of the consumer electronics of the day (imagine for a moment the huge television sets of the 1950s).
Even though all the main TV manufacturers of the time bought licences to this new technology, and invested significantly in trying to overcome its initial limitations, the actual impetus for the spread of the transistor came from a different sphere. The first consumer products to use them were hearing aids, an area where the size advantage actually played in their favour (a hearing aid made with vacuum tubes would have been large, unwieldy and simply untenable).
However, more valuable than the size advantage was the fact that the makers of transistor-driven hearing aids were competing against absolutely no one. It was an entirely new market, and as such the development of this new customer base meant that the owners of the incumbent vacuum tube technology felt no pain.
Competing against non-consumption isn’t always a matter of new product categories. Take the new pocket radios made with transistors, led by Sony. In terms of product quality, compared to the bigger, better-engineered family ‘wireless’, they were appalling, and as a direct result they weren’t purchased by adults. They were, however, hugely popular with teenagers. Cheap and portable, they enabled a whole generation to discover the demon that was rock and roll out of the earshot of their parents. The quality didn’t matter because they were competing against non-consumption; they were “infinitely better than nothing!”
Sony eventually produced and sold portable televisions, seemingly taking the incumbent producers head on. However, even here the principle of competing against non-consumption held. The old players felt no pain from Sony’s expansion into TV. Sony’s products were substantially inferior to the large and expensive TVs that were the mainstay of the market. The old companies felt no pain because Sony wasn’t selling to their market; the new, attractive price point brought vast hordes of new customers in, but didn’t ruffle those already playing. After all, these new TVs weren’t actually very good, were they?
Incumbents rarely keep hold of their customers
All the while Sony was selling huge numbers of TVs to an ever-increasing market, the transistor technology was getting better and better, until it got to the point where the solid-state product was just as good as the old vacuum tube products.
And then the customers simply deserted the old TV makers and bought the new, cheaper, but just as good transistor models. All the vacuum tube companies died, despite investing more in the technology (the transistor technology, that is) and despite seeing the new technology coming long before disruptors like Sony had.
This is the truly perverse reality of the nature of this form of disruption. Seemingly there is nothing you can do to ward off the looming threat of being made redundant. Could RCA et al have developed the technology to prevent Sony from taking their market away from them? They might have been able to, but only at the risk of hurting their brand. They would have had to market, for a significant period of time, inferior products in markets that they had no experience in. They would have had no advantage from their installed customer base, affluent people with high expectations. It is perhaps instructive to remember (or imagine, if you are young enough to have no choice) just how bad the early portable televisions were.
Would you risk your existing high margin, market leading positioning and profitability to enter a completely unproven market? It’s just never going to happen. New market entrants, with their hands on some new technology? Well, that’s a different proposition.
The result of this is that the technology never moves back towards the old incumbent market leaders, not even once it’s mature and functional enough for their core customer base. Instead the core customer base just goes and buys the new technology made by the new market leaders.
It’s just not good enough!
Part of the problem, of course, is that a lot of the time the incumbent industries don’t even try to integrate the new technology. Often the threat is completely disregarded, and even when the new way of doing things starts to get a serious grip, the old guard will decry the quality of the threat and again dismiss it from first principles. Funnily enough, this particular attitude is very visible in the debate about the disruption of education today.
There is a concept, noted by Dr Christensen, that helps to perpetuate this behaviour, and it’s best demonstrated using education as the example.
Put simply, the metrics change. The systemic way in which the quality of the old and the new is judged changes. This allows the old guard to feel that their point of view is robust, as the new challenger clearly and unequivocally does not compete on the metrics that framed the old paradigm, while the new players proceed ‘under the radar’, comfortable that the new way will prove itself in the end.
Take university education.
How do the elite universities compete against each other? Generally speaking, there are two areas of competition: the faculty and the graduation rate. The quality of the faculty is judged on the quality of the research conducted, the number of papers published and the resulting prominence of the institution in wider society. A hat is doffed to the aspirations of the students via the graduation rate, but this is at least partially (if not dominantly) explained by the nature and quality of the student intake itself. The elite universities don’t tend to accept inept students in the first place.
How do the new competitors in education measure themselves? It’s first interesting to ask who their customers are, because they are a very different group. True to form, these new providers are competing against non-consumption: their customers are adults in their late twenties to forties, with obligations, looking to learn new skills that will lead to employment.
The most important success metric, therefore, is the volume of people in new employment, and because this can only happen if they effectively learn new skills, the actual measure of these new universities is how good they are at teaching. There is no faculty to speak of, and the graduation rate is appalling. But, and this is where the incumbents’ structural understanding of the situation lets them down, this just isn’t important, because the metrics have changed to match the new paradigm; the quality of the faculty just doesn’t matter.
There is so much more that needs to be said to flesh out these ideas. This short, 2,000-word summary of one keynote talk is so painfully weak an introduction to a small section of Dr Christensen’s work that I can only ask: if any of this was interesting to you, please, please watch the video. Dr Christensen will do a much better job than I can of explaining his ideas.
If you find yourself settling in to watch a TV movie, probably something like The Matrix that you’ve seen maybe four times already, or Miss Congeniality, because there really is nothing else on, then please fire up YouTube instead and watch this delightful talk.