The Goat Library

Talent as a distribution channel, AI and eternal talent

When the world started writing, en masse, about the explosive arrival of generative AI, many journalists made predictions about who would be the AI winners and who would be the losers. The inevitable fears that any new valuable and scalable technology will bring mass unemployment were tempered by the equally abundant observations that such innovation cycles usually see a transfer of jobs to newly required skills, rather than a simple destruction of jobs at an aggregate level.

The transfer argument, while historically accurate, rather glosses over the fact that the new jobs do not necessarily go to those who have lost their old ones. The common accusation thrown at those who decry such innovation, that they are Luddites, is in fact quite a distortion of the actual complaints of the Luddites. They most certainly did destroy the new technology as it appeared in ‘their’ factories, but the complaint was not about the arrival of new, efficient technology; rather it was that they were to be cut out of the value it would generate and left with no way to support their families and communities. A fear that played out just as they suggested it would.

It is, therefore, reasonable to ask who the losers will be, because there will be some.

One sector that got an early mention in much of the coverage is the voice acting industry.

Apple, for example, has already offered AI voices to creators for audiobooks. It seems inevitable that AI voice will eventually shrink the established voice market that is currently served by human actors.

Sadly, there is a bigger problem for the voice community, one which will inevitably hit other creative sectors too. Eventually, and it will take a while longer, a library of Goats, a library of the greatest voices of all time, will establish itself, killing the capacity for new market entry for all except a small number of generational talents.

Think about how you build a career in sport. Maybe you are a gifted 14-year-old and you dominate in school, have trials for the regional team and then maybe even the national team, all while you’re still 14, playing for the under-15s. Eventually, after a stellar school career, you look to the professional game, but now you are competing not just against your own age cadre but against all the credible adult players too. Now you are competing against all the kids who were 15, 16, 17 and 18 when you were 14, and everyone else already in professional employment. Depending on the sport that might mean playing against opponents up to age 28 or, as is the case in Formula One today, fighting for your place against 40-year-olds as well.

The sporting world has one big advantage over much of the creative world because sport must occur in real time. Sure, you can watch old matches or archival footage of the greats, but the live experience is treasured for obvious reasons. Indeed, it is the primary value proposition of any contest as entertainment: who will win? The advantage that sport has in this context is that, eventually, all the Goats will stop playing and new generations will step up to replace them. Even if we could digitally capture and transfer the skills of the greatest, we still hold to the fundamental concept of zero artificial enhancement in sport, which means we would baulk at the idea of implanting today’s youth with, for example, Pele’s footballing chops. That is not the case with our creative industries. If you are a voice actor in the future you will be competing for work against the greatest voices of all recent history, living and dead. Voice will become a tangible post-mortem asset. That wasn’t true until this current technology cycle, but it is now.

A counter-argument in favour of the status quo is that the capacity of human voice actors to improvise, or to understand and translate the emotional nuance of a piece and then reflect that in their performance, cannot be replaced by AI. There is something in this, but even here there might be a Goat library correction. Such interpretation is currently executed by those who have the vocal characteristics for the piece in question, today’s voice actors. But once those uniquely held vocal skills are replaced by the Goat library, a smaller group of specialists, using their natural voices, will be able to offer their interpretive and improvisational services across a much larger range of work; the actual vocal characteristics for the work will be transplanted onto the architecture created by their natural vocal performance. David Attenborough might still be the voice of nature documentaries well into the next century or more.

There’s no reason to think that a Goat library could only apply to voice acting.

For some time now CGI has been used to replace actors who have passed away, either before a sequel is planned (Peter Cushing in Rogue One, more than 20 years after his death) or as a result of an untimely passing before a movie can be completed (Oliver Reed in Gladiator). There is already a specialist field of work for CGI actors, skilled no doubt, but cheaper than the box office stars, living or dead, that they replace.

Google and Universal Music have just announced a partnership to produce tools that allow fans to create AI-generated music but that also enable the musicians to receive payment in recognition of their intellectual property being appropriated, either their voice or their musical style.

Grimes has offered a partnership agreement to musical creators who want to use her vocals on their music, original or AI produced, as long as she gets a 50% cut of the royalties.

This is a tech continuation of an already established trend in the music business whereby big names will only sing on music written by others if they get a share of the songwriting credit, because that’s where the money is. The fame and following of established musicians has become a distribution channel in and of itself. Talent itself has become a distribution channel, and a powerful one at that.

The impact of social media is a key component of the talent-as-distribution dynamic

Today’s digital social media dynamics have already skewed the money distribution of creativity, creating a steep power-law distribution with a smaller number of bigger winners and an ever-lengthening long tail (more artists, less money). Taylor Swift is a massively talented musician but she is also a master of the social domain; both of these talents contribute to her economic reality. It is quite possible that AI will deepen this power distribution, not only for artists who are alive but by extending the effect to some of the dead ones too.

Did you write a great song? Can you get Elvis to sing it for you? Tupac? Maybe, and maybe it’s not so far away that this might be the most effective distribution strategy available to you.

The Hollywood actors’ and writers’ strike is also about digital distribution via captured talent, even if the headlines are mostly about AI and job loss. The studios are trying to lock in this new economic surplus to their advantage and, consequently, to the disadvantage of the creative talent.

Like all assets, the economic power of creative talent lies partly in its scarcity. Today that economic power is shared between the industry and the talent itself. In a world full of Goat libraries the economic power share will change because the talent itself, not the person, will be tradeable post-mortem, as a tangible asset. An asset not inextricably attached to a living human being with agency. The talent scarcity balance will shift towards a greater abundance allied, quite possibly, with wider access to that abundance. The industry will have options it never had before. Abundance it never had before.

Years ago I was writing, speculating, about the business model for content on the internet. At the time a much-used adage was “Content is King”. I came across an adaptation of that saying: “Content is King, but Distribution is King Kong”. The observation was that ownership of distribution, the theatres, cinemas, TV channels, cable channels and then the internet, was more economically powerful than artistic genius. It’s obvious once noted, but it wasn’t widely acknowledged, and still isn’t. The real takeaway, though, is that ideally you are both King and Kong, talent and distribution in one package.

If today’s big artists are to become distribution channels then a simple reading suggests that this change should empower them, King and Kong. But this dynamic will eventually, if not immediately, turn in favour of the industry as they alone will be able to afford to build Goat libraries alongside the business architecture required to monetise them.

There are at least 3 factors working in favour of the commercial side of the entertainment industry.

  1. Social media favours a power-law distribution, rewarding fewer big names, often more handsomely, and extending the long tail for everyone else.
  2. AI will extend its capacity to extract the ‘essence’ of an artist’s unique gift. Vocal characteristics are an obvious example but it can and will go further. We can already use generative AI, today, to create images in the style of Caravaggio, for example. This has 2 impacts. First, the Goat libraries will favour those with a significant canon of work, or the time and resources to produce one (training data); second, it reduces the scarcity value of an artist’s gift. Your voice may be stunning and unique, but will it outsell Elvis on the same song? Will someone take the gamble when they believe Elvis is a banker?
  3. The scarcity value of living talent will be commercially devalued if we can call on the talent of the dead.

The long tail will get longer and the established big-name stars will take more of the money while they live. After they are dead the industry will consolidate its control of both the economic and artistic sides of the business. The incentive for new entrants will diminish, but it won’t disappear, while the industry’s power to gatekeep opportunity will be enhanced. A significant degradation, but not a demolition.

I do try to look for silver linings. I don’t subscribe to the AI doom cults even if I worry about what stupid humans will do with ever more powerful technology. If my logic here turns out to be valid then maybe there can be an upside, even if it’s a small one. Maybe the long tail will inspire, or force, more localised musical community, a return to the idea that expression is sometimes best consumed as a live performance in small settings. If that sounds ludicrous, ask yourself if you would be excited to see your favourite superstar stadium performer in a small club playing to 150 fans. They’ll still play the stadium but, much like ABBA, they might not be there when they do. Widespread musical engagement may become more intimate again.


Linkdump Feb 2018 (2015 vintage)

Bruce Sterling on cognition and computing, in reference to AI, shows that conflating the 2 ideas can be misleading and restricting. A different perspective through which to view AI.

This is 2+ years old now but shows, very effectively, that the malware/botnet/ad-fraud space is complex. Google’s efforts with spider.io

Lots of short, pithy bits of advice about writing, from Paul Graham. Really worth reading.

Working in top end restaurants can be like this.

Quantum field theory explained in a children’s picture book kind of way.

 


r/pics June 2017


4 from 2014 (links)

I periodically revisit the links at the bottom of my reading list (managed by Pocket) to see if I can either dump them from the list unread, leave them there for reading at an unspecified future point or, ideally, read them and push them to my archive.

Having done this recently, these 4 links from 2014 stood out. All of them talk about the way the world is today, and they are particularly interesting to me as they could all just as easily have been published this year (with some small changes, no doubt).

The Economist challenges the principle of driving shareholder value and notes that since it became a corporate mantra the value of a CEO’s remuneration has increased 8-fold, which most certainly was not the point. Please remember the quote that follows is from 2014, when no-one thought Brexit or Trump likely to come to pass…

And that trend, which has spilled out to the rest of the developed world, is leading to the growing anger of voters and the resistance to globalisation which may eventually cause even more damage to business and investors.

This link is a guide to Shenzhen, almost certainly out of date in some regard (I’m no Shenzhen expert), but the reality of the Chinese tech markets is fascinating, and although it took me nearly 3 years to read this I’m still glad I did.

A good overview from the Guardian about the progress made by ‘robot’ writers. The primary example here is a quick news story about an earthquake, in this case some gentle tremors. Published and approved by a human, but written by a computer. This was in 2014; I can’t help but imagine this is more widespread today.

Hammond was in the limelight recently, having claimed that by 2025 90% of the news read by the general public would be generated by computers. “That doesn’t mean that robots will be replacing 90% of all journalists, simply that the volume of published material will massively increase,” he explains. “Take the example of small amateur baseball games. They don’t interest the media, but several dozen people follow each one. Quill collates data on thousands of these games and can produce thousands of articles almost instantaneously, one for each match, in a style similar to sportswriters, who are easy to imitate.”

The last link is about the nature of the global clothing industry, although the article concentrates on the Korean impact on LA’s jobber market.

How did this neighborhood become what it is? The answer lies in a 50-year process of migration and generational progress—one that has recently reached a kind of critical mass.

A well-written, thorough exploration that reminds us that even in these times of incredible and fast change, the slower-moving trends driven by demographics and migration still hold some sway.

 

 


How will brand advertising work? (Disintermediation and the curious case of digital advertising continued)

This is the third installment in my series exploring why the internet has so far failed to disintermediate digital advertising, yet at the end of this essay we still won’t have arrived at a solution that describes a fully disintermediated model for digital advertising. That’s because I’m still trying to work it out.

This blog is chiefly written, selfishly, to force me to examine and understand things. This particular topic has been no different and I have found my perspectives changing quite a lot as I dig further into the issues. Writing this particular essay has taken some time; I published the last one in March, 8 months ago, and a lot has changed in the ad tech landscape in that time. I’m sure ad tech is here to stay, but I’m also sure that the problems caused by the current ad tech deployment are still problems.

This exploration is primarily about advertising, and therefore approaches these problems solely through the advertising filter, even though the issues relating to data ubiquity and misuse affect more than just advertising paradigms.

From the point of view of an advertiser, the biggest problem with ad tech (programmatic, as it’s called by advertisers) is that it, and the internet at large, is not currently set up to deliver brand advertising. At all.

The goal of this essay, therefore, is to examine what needs to happen to mould the internet publishing space into one that can support brand advertising goals as well as direct response or performance goals, something that I believe is very important.

Somehow, somewhere along the way, we are going to have to fix the revenue side of modern digital publishing without recourse to clickbait and lists, which have their place I guess, but should not be the only revenue lifeline for publishers. Similarly, the trend towards ever more intrusive and creepy data work is unlikely to end well.

In my opinion the rescue of revenue models for premium publishing output is important for lots of reasons but it is particularly important for the future of brand advertising.

Metrics are more than just scorecards – a quick recent history of brand and DR media process

I started working for a global media agency in early 2000, as a direct response (DR) specialist on the account of a global technology/computing brand, working across their European markets.

At my interview I was asked if I was familiar with direct response metrics, which at the time meant cost per response (CPR) and cost per acquisition (CPA), with a little ROI (return on investment) thrown in. In late 1999, when that interview took place, we didn’t have cost per click on the dashboard, because the use of digital media as an advertising medium was still very young, and where it was getting attention it was being treated as a branding medium.

I remember being quite taken aback by that line of questioning, not offended, just surprised. I had thought it was an entry-level concept to have an understanding of the very basic metrics CPR and CPA. What, after all, is there to understand? It is worth pointing out that the question wasn’t asking about attribution techniques, nor was it hinting at deeper subtleties about measurement in terms of statistical validity, longer-term lifetime value or brand halo effects.
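
For anyone who hasn’t met these metrics, the arithmetic really is as basic as it sounds. A minimal sketch, with invented campaign figures purely for illustration:

```python
# Illustrative only: invented campaign figures, not real data.
media_cost = 50_000.00    # total media spend on the campaign
responses = 2_500         # enquiries generated (calls, coupons, clicks)
acquisitions = 400        # responders who became paying customers
revenue = 120_000.00      # revenue attributed to those acquisitions

cpr = media_cost / responses               # cost per response
cpa = media_cost / acquisitions            # cost per acquisition
roi = (revenue - media_cost) / media_cost  # return on investment

print(f"CPR: {cpr:.2f}  CPA: {cpa:.2f}  ROI: {roi:.0%}")
# CPR: 20.00  CPA: 125.00  ROI: 140%
```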

The questioning was as basic as could be simply because my interviewers were not in any way familiar with these metrics. They were, for the record, both very sharp people, but they had no experience of direct response marketing, and certainly no experience of the management of media within direct response operational models.

Somehow or other I must have covered my bemusement well enough because I was eventually offered the job.

This not entirely exciting anecdote points out something much bigger than the scope of understanding that 3 people held in late 1999 about DR metrics. What it showed was that the wider media agency world’s intellectual alignment in 1999/2000 was almost exclusively focused on the metrics, methodologies and theories of how to build brands.

There were some agencies that focused on direct response as a discipline but they were specialists, and in comparison to the brand companies in the media agency space, they were very small. When I was working client side in financial services we used one of these specialist agencies. As a result I was as unaware of the branding metrics, reach and frequency, as my interviewers were of CPR and CPA. During that strange interview worlds were colliding. If the roles had been reversed my interviewers would have been as bemused as I was.

Today it is all very different.

There are no media agencies that do not offer a DR product (in the extreme cases only because they will at the very least offer digital media buying) and the DR product is no longer the idiot brother of brand. Back in 1999 it was the home of junk mail and “sausage machine” decision making driven by spreadsheets (as difficult as it is to imagine, in 1999/2000 spreadsheet skills were rare in media planning circles); nobody in my agency was looking to take my job or even to join my team. DR was considered very much the low-rent discipline to brand’s shining pinnacle.

The advertising cognoscenti of the day would have been in revolt if they had understood just what the advent of digital was going to do to the ascendancy of direct marketing (the fact that those very same people now lead the DR/ad tech frontier says a lot about how trends propagate within advertising agencies, but that’s a topic for another day).

True to their heritage the old school brand agencies, which for good reason are the biggest agencies out there, have re-branded the discipline of direct response marketing. The DR department is now called the performance department and the job titles are all branded with the new sobriquet – performance planners, performance strategists, performance directors.

There is something slightly distasteful, yet ultimately revealing, about being re-branded by the advertising community in order to be recognised and ultimately assimilated by them. Let no-one tell you that the agency community doesn’t do irony, even if they don’t recognise it as such themselves.

Clearly the factor that changed everything was the rise of digital. Or, more specifically, the uptake of digital behaviour among our populations, which in turn led to a transfer of where those populations spent their media-consuming hours.

At its most simplistic core, media planning involves understanding one simple idea: we establish where people spend their time, and then we follow them into those spaces. The finest of campaigns go beyond such basic thoughts to generate greater value and resonant emotional impact, but they never lose sight of the core need to find the right people and put the message in front of them.

So, when media consumption increased in digital channels, digital advertising was sure to follow. At first the community in the brand agencies maintained the belief that this was a brand medium, not to be sullied by the techniques of direct marketing. I had many a good-natured discussion (argument) with my colleagues in the digital team about where the channel was ultimately going.

Some of this perverse position was driven by a general resistance to change but it was also fuelled by the realities of the agency business model and our relationships with our clients. If 95% of the money you spend on behalf of a client comes from the clients with responsibility for branding, then it makes sense that you tell them this new channel, this new frontier is great for their brand. Which is what we did. Which is also why, finally, when the client communities caught up with reality, so did the agency models.

If for this reason alone, I think the generally assumed idea that frontiers in advertising are driven by the agency community rather than the client community is complete hogwash. Agencies sell what clients will buy, or they go out of business; there is little real room for leadership from the agency community (at least not while margins are tight).

Slowly, through the 2000s, almost all of the top media agencies built out and incorporated cross-channel DR teams and gradually allowed the digital channel to be planned against the tsunami of new performance data being generated every day. The development of Google’s search product played no small part, effectively removing any space for an ad planner (of the day) to see brand theory being deployed through the small, data-driven text ads on the side of Google’s results page.

Where was the brand man?

Nowhere. Nowhere at all. Largely because the other big factor in this story is the failure of branding metrics to be operationally useful in the new medium. Whereas DR looks to identify and count the number of actions that a consumer must pass through on their way to a purchase (served the ad, clicked the ad, arrived at the site, put item in basket, paid for item) and then uses these data points to plan further activity, brand advertising is planned against metrics that are much less precise. This is a feature, not a bug.

Reach and frequency are the planning staples of brand advertising. What percentage of your target audience saw your advert (reach), and on average how many times did they see it (frequency)? You will notice that these metrics are conspicuously absent of any reference to business value; there is no mention of cost, and no reference to sales or indeed any consumer commercial activity. These metrics are measured against the total population too, the full circulation of a newspaper for example, and not the smaller percentage of the circulation that looked at every ad placed there. Who reads every single page of a newspaper? Some do, but clearly not all.
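
To make the contrast with the DR arithmetic concrete, here is the same kind of sketch for the brand staples. The figures are again invented, and note that neither output makes any reference to cost or sales:

```python
# Illustrative only: invented audience figures.
target_audience = 1_000_000     # people in the target group
people_reached = 650_000        # saw the ad at least once
total_impressions = 1_950_000   # exposures across everyone reached

reach = people_reached / target_audience        # share of the audience who saw the ad
frequency = total_impressions / people_reached  # average exposures per person reached

print(f"Reach: {reach:.0%}  Frequency: {frequency:.1f}")
# Reach: 65%  Frequency: 3.0
```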

Understanding whether a brand campaign was successful or not was a matter for econometric modelling after the fact, which in turn fed the heuristics that we used for planning (effective frequency of 3, anyone?). But mostly it just didn’t happen very often. It was not without some foundation that those of us in the DR world often joked that the key skill of a brand planner was a bottomless ability to post-rationalise failure (although to be fair, and even though that wasn’t an entirely accurate observation, it was undoubtedly a useful skill for an agency planner to possess).

Nonetheless, and for all my DR snark, there is a good reason behind this fluffiness. Contrary to opinion in certain quarters, the theories that drive advertising, both direct and brand, are derived from a great deal of experience and knowledge.

The reason for this vague set of metrics lies at the heart of brand planning media strategy. Broadly speaking, brand advertising is about making the target feel something, whereas DR advertising is about making the target do something, and something very specific at that (call this number, click this ad). The following couplets are not absolute statements, there will be many exceptions to almost all of them, but they still round out a good basic approach for understanding the different tasks the 2 disciplines can undertake:

  • Brand generates demand, DR harvests it.
  • Brand operates almost exclusively in the field of emotional response, DR provides the seemingly rational spur to action (price, product features, short term offers, superior financing, a free pen).
  • Brand often works in the periphery of your perception, DR demands your full attention.
  • Brand is built on conspicuous waste, DR is all about efficiency (it is no accident that almost all DR metrics have cost as a key component).

The idea of conspicuous waste is very important; it is the same idea that explains the peacock’s feathers… “I am healthy, I can waste resource on this fanciful display”. In earlier times this was incompletely characterised by the phrase, often seen in print ads, “as seen on TV”, implying that a company that could afford to be on TV was a substantial and trustworthy entity. The dynamics of today’s TV market may, in some cases, make a mockery of such a thought, but at the time it was valid and valuable.

The landscape may have changed, but the human psychology that these ideas feed on has not. Establishing worthiness through conspicuous waste is still important; it’s just a little harder to do, and you have to be accurate in more spaces than you did 30 or 40 years ago. Mass media, social media, design, interface quality and brand/product experience are now all major contributors to a brand’s intrinsic health.

The theory that encompasses this idea of conspicuous waste, this peacock’s feather, is called signalling. Humans and other animals use signalling throughout their lives, and so do brands. Brands signal in many ways, as mentioned above, but advertising remains one of the most controllable tactical and strategic weapons we have, and as such retains a particularly valuable place in the business tools arsenal. It is one of the very few things that can be turned on or off tomorrow and that makes it very important.

Rory Sutherland, Vice Chairman of the Ogilvy Group in the UK and ex-president of the Institute of Practitioners in Advertising, has been talking about signalling for years. He offered these thoughts during a talk he gave at the Business of Software conference in 2011.

Specifically, on the subject of peacock’s feathers (my emphasis added).

Now arguably the Ferrari is effectively the sort of you know, the human male equivalent. It also interestingly has to be slightly pointless or wasteful to have meaning, it has to be a handicap you know, if woman (sic) were merely attracted to man (sic) with expensive vehicles they’d all chase truck drivers but the truck doesn’t serve a useful signalling purpose because it’s actually useful

Sutherland translates this idea to the challenge of building brands. The context of this next quote was the idea of buying an engagement ring (signalling as a biological theory is almost entirely about mating).

What this is saying, what this upfront investment is saying is ‘I am probably, because this is expensive, and to have meaning signalling sometimes has to cost money, I am probably playing the long game’, OK? And a brand is doing the same. It has taken me 15 to 20 years to build this brand, it has cost me an enormous amount of money in terms of advertising and reputation building, therefore it is probably not in my interest to make a quick buck by selling you something that is rubbish.

Anti-signalling

On the other side of the same coin, Don Marti does a great job of showing that highly targeted DR advertising can’t send effective signals. Don’s observation here is particularly pertinent for businesses that only engage in DR advertising.

Let’s use Lakeland’s example of carpet. I can go carpet shopping at the store that’s been paying Little League teams to wear its name for 20 years, or I can listen to the door-to-door guy who shows up in my driveway and says he has a great roll of carpet that’s perfect for my house, and can cut me a deal.

A sufficiently well-targeted ad is just the online version of the guy in the driveway. And the customer is left just as skeptical.

He also points to the famous Man in the Chair advert (via Eaon Pritchard)

[Image: the “Man in the Chair” advert]

If you take all these quotes and ideas together you come to a simple conclusion, one of many perhaps, but one that I want to highlight here. Targeting, the idea of targeting, suggests a spectrum of quality from high-precision accuracy to missing by a mile. In the world of weapons we can understand the different uses of a shotgun and a sniper’s rifle, and the world of advertising is the same. We have DR targeting, which in its purest expression is the sniper’s rifle, and we have brand advertising, which is more like a shotgun: lots of the pellets are wasted (by design) but we still manage to shoot the bastard.

So brand uses quite a lax targeting regime while DR is built entirely around the concept of very accurate targeting. The maxim that drives DR is “right message, to the right person, at the right time”, which today operationally translates to the high level of targeting that has been hyper-realised by big data in the online space.

Brand, at the other end of the spectrum, works with targeting that is considerably more aggregated than that. This is part of the handicap that Rory Sutherland refers to: the difference between the Ferrari and the truck, for advertising, is the ability to advertise in high-value, high-cost environments without worrying about wastage and efficiency, without worrying if everyone there is a potential customer. Knowing that the value of the signal this delivers is worth it.

Money

Right now I can hear sceptical marketers and media buyers thinking “Get out of town. If that was true why do we bother to negotiate so aggressively for the media space we run brand campaigns in?”

On the surface that is a good enough question, but only on the surface. After Sutherland’s statements on handicapping and utility you might be forgiven for thinking that the price of the media buy is what matters. But it isn’t. The cost of the media buy is largely, if not entirely, invisible to the potential customer. Only in certain marquee scenarios is the cost issue widely acknowledged, like the Super Bowl half-time ads.

The Super Bowl ads are a great example of signalling through excess. For sure there is a big audience, and live sport is one of the few remaining ways to access big audiences in one hit, but the real value of the Super Bowl spots is admission into a select group of businesses, businesses so healthy they can afford these insane amounts of money simply for access to your eyeballs for 60 or 120 seconds. And of course, it’s not just the media buy that is expensive; the quality of the creative is also premium. This is the venue where the big idea is launched, the really memorable, high-production-value, expensive-to-make advert. Super Bowl advertisers are widely acknowledged to pull out all the stops.

As long as a premium environment is accessed, the impression of cost, of conspicuous waste, is created, regardless of what was actually paid behind the scenes. This is why media agencies still seek to negotiate cost savings for these deals. We don’t refund the cost saving to the advertiser; we buy more TV spots instead. It would be insane to do anything else.

What I am rather clumsily building towards is that in brand advertising the excessive value of the media buy, Sutherland’s handicap, is signalled through the environment that the ad is placed in (and the quality of the creative execution) and the dominance that the ad has in that place. Brands are not built in the gutter.

One last thing before I get back to my main theme. There is a place for both brand and DR advertising; they are both, when used well, effective business models. There is another spectrum at play here, and it can be quite legitimate for any particular business to be placed at any position along that spectrum depending on its unique attributes and objectives. I am not arguing for brand and against DR; I am suggesting that the evolution of the digital channel as an advertising medium has massively favoured DR to the significant detriment of brand, and that this needs to be rectified. For the good of all.

Part of the adjustment required is a scaling back of some of the evolving trends, which includes some of what ad tech is doing, and some of the adjustment is about building new support for the revenue models of high-value premium advertising environments, which also, partly, has something to do with what ad tech is doing.

So, with all that said I can finally link back to the main purpose of this essay.

The digital medium, as it is today, has a number of fundamental weaknesses when it is used for signalling via premium editorial environments. We need to start changing that or we will lose these premium editorial environments. Here are 3 weaknesses to start us off.

  • The transition from print to digital has not been kind to publishers. The business models are still being developed and the consumer behaviours are still being tested and stretched. Ad tech, in particular, has offered a helping hand to reduce operating costs for inventory management and remnant inventory sales while increasing the scope of hyper-targeting. It is also driving prices into the ground. None of that helps brand revenues for premium-environment publishers.
  • Page dominance is also under fire. No-one gets annoyed by a full-page ad in a print vehicle. GQ, Vogue and all the upmarket style magazines run a bundle of full-page ads right up front in the most important positions, before you get anywhere near the actual content you bought. In digital channels, however, popups, full-page takeovers, auto-run video and other types of ad unit that gain dominance by intrusion irritate the user and make them angry. This is not good for brand advertisers, who, even though they do want to provoke a strong emotional reaction, rarely wish to be associated with anger, and certainly not anger directed at them. Meanwhile regular ad units vie for our attention, competing against editorial, and are often placed to garner maximum impression volumes instead of maximum dwell time. In a cluttered environment dwell time is one of the few positive compensating factors, but it doesn’t currently feature as a planning metric.
  • There is no ad break. Commercial TV has trained us that every 15 or 20 minutes our consumption will be interrupted and the screen will be 100% handed over to the advertisers. For sure, many of us stand up and make a cup of tea or find all kinds of other ad avoidances, but enough people just sit there, or forget to fast forward, or simply don’t mind. We are aware that the ads are part of the price we pay for the content and we accept that. The web, however, does not have a similar mechanic and the consumer offers the publishers no such largesse, even though the same mechanic (ads = free/cheaper content) is widely understood.

This is not a good place to find yourself if you want to use advertising to help build brands over the next 10 or 20 or 30 years. And let’s be clear here, that is exactly the time frame you should be considering if you are trying to build a brand. You can certainly get further along the path today faster than you could 20 years ago, but that is no reason not to plan for the same longer-term time frames; brands are by definition long-term economic plays.

Let’s focus on the most theoretical part of the argument. It’s a broad brush concept, but I think it is fundamentally the key to the whole show.

Because advertising is unashamedly an economic activity that, therefore, responds to incentives and is framed by supply and demand, we should start by looking at some of the economic frames and ask what we can do to change them.

The obvious issue, as highlighted in the last essay, is the uneasy relationship web publishers have with their quasi-infinite stock of inventory. It is economically easy to introduce new advertising stock in digital channels in ways that cannot be replicated in other media. This, combined with what ad tech can do for the selling and trading of this new inventory, causes great problems for the premium publisher in particular.

Premium content by definition needs to be relatively scarce. The more there is, relative to demand, the faster the slide to commoditisation, the horrors of ever-declining prices and the loss of signalling value. High-value editorial, which in turn becomes the location of high-value advertising environments, must be driven at least partially by scarcity. The Super Bowl is the ultimate demonstration of this idea.

Premium publishers must, therefore, somehow re-assert the scarcity of their premium editorial product and, as a result, re-assert the high pricing that premium editorial content should garner. This is a lot easier to say than to do, as it will need the combined and aligned efforts of the other 2 parts of the system, the agencies and the advertisers, to succeed.

Ad tech, as currently structured, doesn’t help either.

While we have publishers using ad tech to sell cheap inventory for tiny CPMs, we must recognise that this inventory almost always has to be served alongside cheaper, less valuable editorial product. If ad tech is seen as the future of publishing revenue models, which is the case at the moment, then premium editorial, in particular, will be threatened even more. Because good editorial costs a lot more than weak editorial, ad tech, which drives down average CPMs, cannot help us with editorial quality.
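
The arithmetic of that squeeze is worth spelling out. A CPM is the cost of a thousand impressions, so a fixed revenue target dictates how many impressions, and therefore how much content, a publisher must produce. A sketch with invented numbers:

```python
# Illustrative only: invented figures.
revenue_target = 100_000.00  # monthly ad revenue a publisher needs

def impressions_needed(cpm: float) -> int:
    """Ad impressions that must be served to hit the target at a given CPM."""
    return int(revenue_target / cpm * 1_000)

# Hypothetical rates: premium direct-sold vs mid-market vs remnant/ad tech.
for cpm in (20.00, 2.00, 0.50):
    print(f"At a {cpm:.2f} CPM: {impressions_needed(cpm):,} impressions")
# At a 20.00 CPM: 5,000,000 impressions
# At a 2.00 CPM: 50,000,000 impressions
# At a 0.50 CPM: 200,000,000 impressions
```

A 40-fold fall in price demands a 40-fold rise in served inventory, and all that extra inventory has to live on pages; it will not be wrapped in expensive editorial.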

Even more troubling is the practice of using ad tech to target premium audiences (because we know they have visited a premium site, or via other demographic or behavioural markers) within cheaper, trashier, low-value ad environments. This is the retargeting strategy, currently so beloved of the agency digital planner. In the face of the need to restore some kind of peak revenue levels this device is especially pernicious, as it doesn’t simply absorb ad dollars in an adversarial fashion; it actively converts actual premium audiences into commoditised and ineffective cheap buys, delivering all the revenue to media businesses that did not even create the original premium content in the first place. It hurts the future of digital brand advertising harder than any other single ad tech ‘innovation’ currently out there.

So, for their part, agencies must stop selling cheap CPMs via ad tech as the prime market innovation. This in turn would stop incentivising the creation of vast acres of cheap, dreck-filled, click-fest media properties with so-called ‘attractive’ pricing.

For their part, advertisers must rediscover the idea of brand signalling and realise that it can’t be achieved in 50-cents-a-thousand shitholes surrounded by 24 pictures of cats called Burt.

So, to clarify: in order to reassert effective scarcity into the premium editorial market and resurrect premium CPMs, publishers need to restrict their premium inventory, agencies need to stop pushing ad tech as the answer to every digital advertising challenge, and the brand advertisers themselves need to rediscover the theory of brand signalling to give structure and purpose to these changes.

But how does this happen?  

For all 3 sectors the easiest tool, perhaps the only tool that synchronises across all 3 and that can mechanically drive these changes, is to recast the metrics we use. Metrics define how we plan and what we buy. This seemingly starts with the publishers and agencies (sellers and buyers), but it can only start if these changes are accepted and driven by the advertisers. Because, ultimately, the advertisers are the only source of revenue in this ecosystem, they hold the keys to change. As is ever the case, money talks.

Who moves first?

It should be the advertisers (and might still be) but there is a difficulty in targeting the advertisers as the agents of change. Advertisers are intensely adversarial by definition, much more so than agencies and publishers (who at least have industry-wide representative bodies and conflict management processes), and hence they are the most disaggregated and least aligned of the 3.

Agencies? Theoretically the ownership of best practice in advertising sits with the agencies, which should put them in the prime position to steward the industry through these challenging times. Rather sadly, this is a charge they have resolutely relinquished. Agencies are the most passionate cheerleaders of ad tech and programmatic advertising, either because they truly believe in it or because (if I am feeling charitable) they are making a series of weak moves to protect their market from technology players by adopting a re-seller status. Both are short-term positions and neither enables a market leadership stance of any kind, let alone one that can re-position the academic theories that drive the operational practice of brand advertising.

Publishers? They mostly need someone to take the foot off their necks. It’s hard to imagine they will be able to lead these changes, which they could only do by adopting a tough trading position. That needs deeper pockets than they have.

The fates of all 3 stakeholders here are firmly intertwined, so we need to look at the responsibilities of these 3 parties in more detail, one by one.

Publishers

These guys are having a hard time; they are at the thinnest end of the thinnest wedge. Broadly speaking, unless their model is built around surfacing commercial intention from their readers’ behaviour (à la Google), it is difficult to see what the market has done for them in 10 years except squeeze, and squeeze ever harder.

The broad idea, restoring the scarcity of premium content and hence premium advertising environments, starts with the publishers as they alone control this stuff. Of course the decisions they make are heavily influenced by market dynamics and the needs/wants of their customers, but at the end of the day those decisions are still theirs and theirs alone.

Publishers aren’t in this situation because they are stupid or lazy. They face some hard and brave decisions that will only pay off in the medium to long term, and that will also see some of them (maybe even a lot of them) fail. That is the reality of less inventory, less publication. The painful truth that we simply can’t get away from is that restoring scarcity means less published content, not more. That’s a trend that might be reversible once healthier revenue dynamics and models are restored, but for the moment at least, less = more. It should be noted that historically, and within channels, the amount of money spent on building brands tends to dwarf that spent on direct response.

From the perspective of the publishers, we need to change the supply dynamics of valuable human attention. Instead of selling clicks they need to sell a consumer’s attention. The metrics that we use to communicate the value of a media buy, therefore, need to reflect actual consumer attention, not clicks. This is not a new idea.

Time magazine recently published an essay, “What you think you know about the web is wrong”, which surfaced some of the bigger misconceptions that we, as a whole industry, have held in regard to digital ad measurement and performance data.

The killer point is in the first paragraph.

We confuse what people have clicked on for what they’ve read

The article is written by Tony Haile, the CEO of Chartbeat, a web analytics company, and he shares the findings of analysis covering user behaviour data for over 2,000 websites. He tackles 4 myths by showing what the data actually tells us, which is often quite contrary to the concepts we have been using to plan advertising campaigns. I’m going to focus here on just 2 of these myths.

Myth 1: We read what we’ve clicked on

…All the topics above got roughly the same amount of traffic, but the best performers captured approximately 5 times the attention of the worst performers. Editors might say that as long as those topics are generating clicks, they are doing their job, but that’s if the only value we see in content is the traffic, any traffic, that lands on that page.

The focus on clicking is our collective original sin. From this original formulation almost all the other problems were born. Once you start to think about this, there is a chance that you will wonder why we need data to make the point; it stands on an intuitive basis also.

We have all, on many an occasion, arrived at a webpage and immediately left it again, or moved on somewhere else after a quick scan of the headline and maybe one paragraph. This is not uncommon behaviour. The old habit of spending an hour with the Sunday newspaper has never been replicated online. We spend an hour online reading, perhaps, but rarely on just one site.

We scan much more than we consume.

The web makes the ability to move on so incredibly easy. And if the main copy doesn’t contain links to redirect us (more clicks, more impressions, more CPM, less attention), the chances are that the sidebar(s) will, as will the footer and any ads that are running. All in all it’s an environment built to foster additional click-throughs, generated at the expense of actual quality consumption measured in time.

Webpages are designed like this directly and precisely because the success of an advertiser campaign is measured in clicks. Publishers want advertisers to succeed, so that they in turn buy more ads.

What should a successful brand ad want?

For a brand campaign the value should be twofold. For sure, some people will click on an ad and as a result visit the brand homepage and experience a deeper, more complete brand message; it would be peculiar to turn click-through functions off just because it’s a brand ad. This is good, but it generally asks more of people than they commonly wish to give. It asks someone to stop what they are doing, reading a great little article, and submerge themselves in our brand ‘experience’.

We might kid ourselves that consumers are excited to consume our cleverly designed brand experiences, but in the main they just aren’t. Common sense suggests that if they were queuing up for such things then the whole ad industry simply would not exist. We have to buy our ad space; people don’t volunteer to watch ads. The entire media agency product is based on making people watch/read something they don’t really want to watch or read at all. As Doc Searls says, “there is no demand for messaging”.

Of course you can tempt people in with competitions or other giveaways, but the role of the advertising in such a scenario is DR; the branding value, if it comes at all, comes from the post-advertising experience, the homepage.

The key value for a brand advertising campaign, therefore, should be the quality of the signalling, not navigating users to a brand experience hosted elsewhere. Driving awareness of a brand, aligning that brand with various positive attributes and gaining mind space through quality environment association is where the long term brand building comes from. So whereas you will not be unhappy if your brand campaign does generate clicks, that should not be the goal of the campaign. The goal must be to drive brand awareness, association and purchase consideration through the placement itself.

Such a goal needs to be reinforced by the consumer experience delivered by the publishers.

Brand advertisers, therefore, should be looking to change the definition of campaign success and move the discussion away from clicks. While success is defined by clicks we cannot expect publishing environments to deliver optimal experiences for any other factor. DR tactics optimise to drive clicks; brand tactics should not need to.

Because of this focus on clicks we tend to believe myth 4 also.

Myth 4: Banner ads don’t work

…Here’s the skinny, 66% of attention on a normal media page is spent below the fold. That leaderboard at the top of the page? People scroll right past that and spend their time where the content not the cruft is. Yet most agency media planners will still demand that their ads run in the places where people aren’t and will ignore the places where they are.

Now we are really getting somewhere.

[Image: Below the fold]

To drive association between a brand and a media environment, the key factor is consumer time spent in the locations where that association is being made, maybe over one exposure, more likely over several. The long-standing brand metric that helps understand this idea of association is effective frequency (frequency, of course, is not only about media association; it is also about messaging legibility when consumption is often peripheral). Effective frequency is the minimum number of times someone should see your ad for them to absorb from it what you want them to absorb.

It is not too much of a jump to transfer the ethos behind effective frequency to actual time spent on a page. A brand ad has a much greater chance of being consumed by either your conscious or subconscious brain if it spends more time in front of your eyeballs.

This was always understood in old-school print media planning, even if we couldn’t measure it. The front sections of magazines were, and still are, the prime positions, not only because of the signalling value but also because ads have a much better chance of being seen in those positions. Similarly, facing editorial was a concern because good editorial drove dwell time on the page, which in turn also increased the ad’s consumption value.

In this regard the only difference between print and digital is that in digital we can measure that attention metric; we can be more informed about where we can capture that valuable attention.

But we don’t. Yet. Although we will eventually. And at that point we will have turned an incredibly important corner. It is an easy action point. Let’s plan our brand campaigns against a metric that is already available, but that allows us to give value to brand advertising instead of DR. Time spent.
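
What might planning against time spent look like? A minimal sketch, with invented placement data and a hypothetical planning metric of my own, cost per attention-hour:

```python
# Illustrative only: invented placement data. "Cost per attention-hour" is a
# hypothetical planning metric used for this sketch, not an industry standard.
placements = [
    # (name, cost, impressions, average seconds of attention per impression)
    ("Premium editorial, below the fold", 10_000.00, 500_000, 35.0),
    ("Cheap remnant, above the fold", 10_000.00, 20_000_000, 0.5),
]

for name, cost, impressions, avg_seconds in placements:
    cpm = cost / impressions * 1_000                     # the metric we buy on today
    attention_hours = impressions * avg_seconds / 3_600  # total human attention bought
    print(f"{name}: CPM {cpm:.2f}, {cost / attention_hours:.2f} per attention-hour")
# Premium editorial, below the fold: CPM 20.00, 2.06 per attention-hour
# Cheap remnant, above the fold: CPM 0.50, 3.60 per attention-hour
```

Bought on impressions, the remnant placement looks 40 times cheaper; bought on attention, the premium placement wins.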

Why would a media planner buy above the fold when it’s obvious that most attention will be below the fold? It’s because even though more attention is below the fold, 100% of the eyeballs arriving on a page see the content that is above the fold, but not all of them go below. So the argument is that placements below the fold are seen by fewer people, which is true. This makes sense for a pure numbers game like DR, measured and bought as cost-efficient impressions and clicks, but not for brand advertising, which should be measured against ad and brand recall and planned against reach and frequency/attention.

You do need to read the whole Time article, by the way; it is very good. The problems highlighted and the solutions advocated are all good for brand marketing, but not so good for DR, which, after all, wants that click above anything else.

Bundling and verticals 

Time spent also offers support for publisher revenues in tight verticals. What was once seen as a horrible new reality for newspapers, transferring an existing print franchise to the digital domain (and it was), can now be the strength of a savvy publisher. Unbundling.

The commercial idea of a content bundle, and the Sunday papers are the most obvious example of this, is to build environments that are suitable for vertical advertising sectors. The reason that the Sunday papers are so large is the extra free time (dwell time) that we have on the weekend, which enables the habit (now being lost) of reading the Sunday papers as a specific leisurely pursuit. Hence the motoring section was a great place for car manufacturers to advertise, the travel section a great place for tourism, holiday destinations and travel agents, the garden section for gardening products, and the financial section for pensions, life assurance and savings/investment products.

It was a very effective revenue model for the news industry. But it was utterly destroyed by the internet, because the internet is an unbundling machine. If you like the Times news reporting but prefer the travel section of the Guardian, then you can simply consume only those sections of the relevant titles. More likely, however, you are going to get your motoring fix from a digital property that has nothing to do with either the Times or the Guardian. The chances are that you will find a site dedicated to motoring which fulfils your needs.

This is bad news for the news industry.

Or at least it is if they don’t start to compete in those vertical spaces. With brands separate from their existing news brands.

What is the first thing that would happen? Traffic would drop. There is no doubt that the current structure of branded bundles is used to drive traffic through different sections of the site. The masthead at the top of a news site’s front page contains links to all the subsections for a good reason. This traffic loss is bad for a click revenue model.

What is the second thing that would happen? Quality. The content in a standalone vertical has to be good. If you are writing for people with either a strong topic affinity or a current topic need, then the writing needs to be good. If it isn’t, then it will fail. Which, as I’m arguing that we need to trim the overall volume of premium content to restore some level of scarcity, is not such a bad thing.

To build a regular audience for, say, a motoring vertical would need the capture of respected, popular writers, and a reputation for being a solid home of quality writing. The second factor is often the tool that attracts the better writers. It would also, over time, need to become a brand in its own right.

The third thing to happen? Engaged readers spend lots of time reading the content. Real, valuable consumer attention. And even though smaller audiences might be likely, if we can shift the metrics from clicks to attention then we can still look to maintain, or possibly even increase, ad revenues.

So in summary there are 2 broad areas for development that sit with publishers.

Firstly, collect and disseminate attention metrics. Today. There is no need to wait for the agencies and advertisers to ask for these data points; make them available now and drive the debate.

Secondly, build out editorial strategies that thrive in an attention model. I’ve talked in a very brief fashion about only one idea, the incentives that could accompany a move into a range of verticals, but I’m sure there must be other ways to design for a revenue model driven by attention metrics.

Agencies

Quietly, the agency world, the media agency world at least, is preparing for its exit, although they would never admit such a thing (maybe they would call it a pivot). The current media agency business model is under fire, holed below the waterline, and has no chance of recovery. It is likely to be a long exit, there are revenues to be made for some time yet, but if anyone thinks this business has a 20+ year future in its current guise then they aren’t thinking things through properly.

The writing was on the wall the moment Google explained that they wouldn’t be paying agency commission for paid search, but we hurt ourselves a little while before that, when media became an independent service separated out from our creative agency cousins. The move to independence laid bare the business model, and sure enough, over the years we have cannibalised the 15% commission model, buying clients with promises of commission rebates. What was once 15% is now often only 1 or 2%, and sometimes even less than that. Money talks and we’ve been giving it away.

Until quite recently I had been more confused by the strategic plays being made by media agencies than by any other business in advertising. They are also the businesses closest to my heart and where I have spent the vast majority of my career. I have heard digital planners and evangelists in agencies talk with delighted and excited passion about marketing automation for the best part of 10 years. From the perspective of a business model that provides outsourced media market participation already (what else is a media agency?) I always thought it was a ludicrous place to go. Why would the agency world want to package up everything they do and put it in a black box, to literally outsource, via technology, what they are already paid to do on an outsourced basis?

The first stirrings of marketing automation and media management black-boxing were driven by a weak strategic vision for digital, which left the nascent frontier in the hands of clever people with little experience, or people with much experience and little knowledge (of digital).

10 years ago neither of these constituencies had seen what was coming. The operational end, the young clever people who actually ran the campaigns could see how automation could help them and how it could streamline process and add insight. They weren’t thinking about the next 10 years, they were thinking about next week. Which was what they had been hired to do.

The people who were supposed to think about the next 10 years, the experienced strategic managers, agency heads, CEOs and FDs didn’t have enough instinctive knowledge of the landscape, which to be fair was being built in real time around them, to see what was coming either.

So, we had a revolution built, innocently enough, from within. When the real leadership did wake up to the realities of a technology-driven advertising future they made the only decision they could. Acknowledging the destruction of the commission revenue model, the media agency sector has been forced to accept the old playground adage: if you can’t beat them, join them.

The media agency world is falling over itself to become an ad technology sector.

As individuals they are lining up to be employed by Google.

As businesses they are either moving into the territory of technology re-sellers, or they are (depending on how big they are) buying into the technology market directly either through development or acquisition. That’s a cold rational assessment of their futures, but it is cohesive.

The agency position, as it stands, is extremely problematic because we can’t develop a sane basis for digital brand advertising alongside a future based on the current view of ad technology, dominated by DR/performance and hence clicks, when in fact the brand model almost certainly needs to dominate the DR model.

It is chiefly problematic because the agency model is likely to remain in a precarious position even once it moves over to a tech footing. This rather means that agencies aren’t looking to develop a long term rational advertising model, which they should be. They should be wowing advertisers with wise and intelligent stewardship of the academic needs of advertising, and proving the need to pay them for their thinking and not just their buying. Instead they are just trying to grab the nearest life raft. They have little choice.

The concern is that even if we, rather charitably, dismiss the role played by agencies in the rise of programmatic ad tech, they face an even trickier challenge laid out in the structural development of this ad buying paradigm.

Programmatic, like Google’s search ads, is a 1 to 1 auction. Each individual buy is one consumer on one website being bought by one advertiser. For a business model built entirely on the promise of scale (“we represent 30% of the market, therefore we can buy media cheaper than anyone else”) a 1 to 1 purchase, which by definition pays no respect to scale whatsoever, is an incontrovertible harbinger of doom. No scale, no business model.
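To make that concrete, here is a minimal sketch of a per-impression auction. The second-price mechanic, the names and the numbers are illustrative assumptions, not any real exchange’s implementation, but they show why aggregate buying power earns nothing in a 1 to 1 market.

```python
# Minimal sketch of a per-impression programmatic auction (second-price style).
# Every impression is auctioned individually, so an agency's aggregate buying
# power earns no volume discount: the clearing price depends only on the bids
# submitted for that single impression.

def run_auction(bids):
    """bids: dict of buyer -> bid (in currency units) for ONE impression."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    # Second-price: the winner pays a penny above the runner-up's bid.
    clearing_price = ranked[1][1] + 0.01 if len(ranked) > 1 else top_bid
    return winner, round(clearing_price, 2)

# A "30% of the market" agency and a tiny advertiser bid on the same impression.
print(run_auction({"big_agency": 2.50, "small_advertiser": 2.80}))
# -> ('small_advertiser', 2.51): scale bought the agency precisely nothing.
```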

The guys at the top understand this. They are no longer sleeping on shift. There are 2 things happening as a result.

Firstly they are spreading fear, uncertainty and doubt (FUD) liberally through the marketing/advertising ecosystem about how difficult it is to run programmatic (ad tech) campaigns, the only sensible take out (from their perspective) being that advertisers can’t possibly run these new operational models without the wise and knowledgeable help of agencies. They have seen their ultimate disintermediation on the horizon and are buying time.

Secondly, they are looking to reassert scale in the market by becoming the principal on media buying contracts via PMPs, private marketplaces that are accessed using the programmatic tech stack.

PMPs have been heralded by the ad tech evangelists as the solution for brand advertising, and they can be, if the buyers understand the issues I have discussed in this essay and trade in the metrics I have suggested instead of clicks.

The logic is that these artificial closed gardens within the programmatic universe (each PMP belongs to a publisher or a publishing group and can be accessed by obtaining the correct digital token) deliver a price protection mechanic via a pricing floor, and can also deliver a more effective approach to fraud management through much tighter inventory controls and visibility. Much of the discussion has focused on how publishers can more comfortably release their prime inventory to programmatic buying processes through PMPs without destroying revenue.
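As a rough sketch of that mechanic, here is a toy model of the two gates a PMP adds on top of the open auction: the deal token and the price floor. The deal names, buyer lists and field layout are invented for illustration; real PMPs negotiate this through deal IDs carried in the bid stream.

```python
# Simplified model of a private marketplace (PMP) gate: access requires the
# right deal token, and bids must clear the publisher's price floor.
# All names and numbers here are illustrative.

DEALS = {
    "motoring-vertical-prime": {"floor": 12.00, "buyers": {"agency_a", "brand_x"}},
}

def pmp_accepts(deal_id, buyer, bid):
    deal = DEALS.get(deal_id)
    if deal is None or buyer not in deal["buyers"]:
        return False             # no token, no access to the inventory
    return bid >= deal["floor"]  # the floor protects the publisher's price

print(pmp_accepts("motoring-vertical-prime", "agency_a", 12.50))   # True
print(pmp_accepts("motoring-vertical-prime", "random_dsp", 15.00)) # False: not on the deal
```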

So, on the surface, I draw some hope from the emergence of PMPs, although it should be clear that such optimism depends on which metrics the buyers choose to trade in.

There is a problem, however, that must be overcome. PMPs offer agencies a last chance to chase off the disintermediation horror story that they face, but in taking it they may introduce something worse again.

Agencies as principals – no thanks

As mentioned, media agencies have begun to buy up PMP inventory as the principal to the contract, instead of buying the same inventory on behalf of their advertiser clients, who would historically be the principal. Once they have purchased these impressions they are adding a mark-up and then reselling them to advertisers, as well as charging for their time (via the agency team first, then through the agency group trading desk fees): a classic double dip. Actually a triple dip, if you add in the mark-up on the resold inventory.
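A back-of-envelope illustration of how the dips stack up, using invented percentages (actual contracts vary widely):

```python
# Hypothetical illustration of the triple dip on a 100,000 PMP buy.
# All percentages are invented for the example.
media_cost = 100_000        # what the agency pays the publisher as principal

resale_markup = 0.20        # dip 1: mark-up on the resold inventory
team_fee = 0.02             # dip 2: agency team fee on billings
trading_desk_fee = 0.05     # dip 3: group trading desk fee

client_pays = media_cost * (1 + resale_markup)
client_pays += client_pays * (team_fee + trading_desk_fee)

agency_take = client_pays - media_cost
print(f"client pays {client_pays:,.0f}, agency keeps {agency_take:,.0f}")
# -> client pays 128,400, agency keeps 28,400
# Compare that with the 1,000-2,000 a 1-2% commission model would have earned.
```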

Not so many years ago such a strategy involved a very significant risk: if the inventory could not be resold the loss could be too heavy to bear. Now that same unsold inventory can simply be dumped back out onto the networks, via the programmatic stack, and at worst deliver a small loss or a break-even.

The big shiny goal here is the already mentioned reassertion of scale. So, whereas once an agency would offer you advantageous pricing through their buying scale being brought to bear in negotiation with the media owners, they may now seek to offer you advantageous access to inventory, instead of or potentially alongside price, by being the biggest purchaser of inventory via PMPs. Scale is important under this new paradigm as a wider base of clients offers much greater opportunity to resell, which in turn offers both a greater inventory holding and a more effective risk mitigation position.

It’s a fundamentally different proposition, half opportunity and half threat, one that pitches them into both a client/service position with their customers and an adversarial/negotiating position with those same customers.

In other words a desperate play.

We are getting close to the business end of the disintermediation challenge now, and, as a result, a possible way out of some of the problems that this whole journey has created.

Of the 3 players in the ecosystem (advertisers, agencies and publishers) the intermediaries are clearly the agencies, and, as harsh as it may be, these will be the businesses that eventually face the fate of modern digital/internet-inspired disintermediation. The technology is already here that spells out the operational and intellectual future of advertising media planning and buying, and there is no role in it for modern media agencies.

So, it all comes down to the advertisers themselves

If you haven’t twigged where this is going, we are now very close to the end. And I apologise: if you have stayed with me all this way, this will feel slightly anticlimactic.

The role of the advertisers in this story is to take the programmatic ad technology stack in house and thereby remove the agency from the picture. By placing the operational structure in the hands of the advertisers we remove one of the economic players from the conundrum that seeks to perpetuate this destructive cycle. By removing that player we can more easily re-balance the whole ecosystem.

It is tempting to seek a future that does not involve the programmatic buying of media space. But as much as the liberal-minded, privacy-respecting individual in me wishes it so, I simply cannot see the media industry moving back to a human-heavy operational model, much less so once we pass peak TV (the point when a sufficient percentage of TV impressions are delivered over internet protocols to make the programmatic buying of TV, or more accurately of audio visual ads, an inevitability that encompasses all AV ads, over the air or IP).

If we accept that programmatic is with us to stay, in some form or another, then we need to seek a way to make it work properly. Of all the players in the advertising ecosystem only the advertisers themselves lose if we blindly destroy the ability to use advertising to build brands.

If we don’t manage to change the current model then agencies still get to buy media, whether it is DR or brand media, and publishers still get to run advertising whether it is DR or brand advertising. They can still operate effective revenue models on either basis.

The players at risk are the advertisers, the businesses that depend on trust and value and long term positive connection between their users and their products, the businesses that need a brand, particularly those in adversarial sectors where brand saliency is key. Advertisers don’t succeed because they advertise, they succeed because they sell a huge diverse range of products at a profit and they use a wide range of business tools to do that. Most advertisers (although not all of them) are best served by having brand advertising in the tool box even if they don’t always use it.

The only option here is for the advertisers to grab control of their own destiny, to just go ahead and build what they need without worrying about the agencies. It may not be clear to a sector that is built on the binary of agency and advertiser but right now media agencies’ needs are not aligned with the advertisers’ needs.

We aren’t going to get a sensible resolution to this mess by examining it from the side-lines and academically “resolving” the conflicts. There are too many powerful, entrenched interests for that to work. Somewhere along the line advertisers themselves are going to have to take control of the tech and the operational model, and by doing so create a business opportunity for the publishers that builds an incentive for them to properly organise their publications to better serve brand advertising.

Here’s how I hope it plays out.

There are a number of advertisers that have already brought their ad tech in house, and many of them are doing well. A lot of that value is being driven by clever 1st party data work, not by control of brand advertising destiny, but in this 1st instance I just hope to see the business case for in house programmatic being verified. To be clear, there is always going to be a business role for DR, data-led advertising via the programmatic stack; I am chiefly trying to find a way to engender a significant rebalance between brand and DR, as the internet is currently not fit for purpose as a brand advertising medium.

If it does come to pass that the business case for in house can be verified, then I hope to see a tipping point: the precursor to a wholesale operational change, with digital ad buying moving client side and the 1st phase of disintermediation finding a foothold.

Following on from this fairly fundamental change in the ecosystem, we will need to bring pressure to bear on the publisher community to prepare their sites to drive optimisation of the metrics that demonstrate real, valuable human attention. Not shares, or likes, or tweets, or other metrics that enable contrary interpretation, but simple, straightforward, unequivocal data such as dwell time and demographics. This can be achieved by building (re-building) the business case for brand advertising in digital media, and also, as a natural externality of that business case, by the withholding of budget from recalcitrant publishers.
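Dwell time has the added virtue of being cheap and simple to measure. A minimal sketch, assuming a hypothetical event format (real analytics pipelines differ in the details):

```python
# Minimal sketch: derive dwell time per page view from timestamped events.
# The event format here is invented for illustration.
from collections import defaultdict

events = [
    ("view_1", "page_open", 0.0),
    ("view_1", "page_close", 184.0),  # ~3 minutes of genuine attention
    ("view_2", "page_open", 10.0),
    ("view_2", "page_close", 12.5),   # a bounce: 2.5 seconds
]

spans = defaultdict(dict)
for view_id, event, ts in events:
    spans[view_id][event] = ts

for view_id, span in spans.items():
    dwell = span["page_close"] - span["page_open"]
    print(f"{view_id}: {dwell:.1f}s dwell")
```

A click counts both of those page views identically; a dwell metric separates the reader from the bounce.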

Once these data points are commonly made available by publishers, then we can use the programmatic stack to execute campaigns against these brand metrics, which by definition do not need to be intrusive or personal or disrespectful of social norms, and which do not need to be derived from vast, detailed profiling tools. The relative price points of these campaigns will rise, but so will the scarcity of genuine premium quality environments and therefore so will the signalling value that they represent. Sorted.

Well, maybe not sorted. Not quite as easy as that. But some food for thought hopefully.


links again, 4 of ’em

Earlier this year Netflix and Comcast had a little contretemps about their peering agreement. Unless you spend time trying to keep up with the various layers, and the players therein, that make up the technical infrastructure of the internet, that statement potentially means absolutely nothing to you whatsoever, but actually it’s all quite important.

Peering is the name given to the agreements that cover the terms for transferring internet traffic between networks, and these agreements are a fundamental cornerstone of the modern internet. Because there is not one single network that covers the whole world it is important that traffic can be exchanged, as required by the needs of the user, with the least possible friction. Historically these agreements were made between engineers, and each network simply agreed to open up to the others as required, no money involved. Comcast decided that they wanted to buck that history and demanded payment from Netflix. Netflix suggested that prior to this negotiation Comcast were deliberately allowing congestion in certain parts of their network to negatively affect Netflix’s customers (something Comcast denied) and that they had no choice but to pay the ransom.

Level 3 is one of the big global internet backbone companies that carry enormous amounts of web traffic. They are one of the companies that pay for, own and maintain the cables that run under the oceans. This post on their blog lays out some of the dynamics that go into peering agreements, and even though it doesn’t deal directly with the situation between Netflix and Comcast it should probably give you enough information to understand who is playing the shithead and who isn’t.

Level 3

 

I never did try playing Go, the ancient Chinese strategy game, and after reading this article I’m starting to think that that is no bad thing. Like chess it’s a 2 player war game, but unlike chess it’s a game (possibly the only game left) that retains an unbeaten crown in human vs computer match-ups. Machines beat humans at checkers in 1994 and chess was added to the list in 1997. Now, 17 years later, Go still holds out, and holds out with comfort. Every year the University of Electro-Communications in Tokyo hosts the UEC cup, where computer programs compete against each other for the opportunity to compete against a Go sage, one of the world’s top Go players. The challenge is not one of brute force computing power; it’s more about strategic understanding, and the fact that by all accounts we don’t really understand what goes into being a great Go player, human or otherwise.

Wired

 

Ever wondered why the airwaves are licensed by the government? If you think about it a little, the chances are you will come to what seems like a very simple and straightforward conclusion, which is that the airwaves are licensed so that broadcasting signals don’t interfere with each other. To ensure that when you tune in to 97-99 FM here in the UK, you will receive Radio 1, not some other outfit broadcasting on the same frequency. Which is all well and good, except that electromagnetic waves simply don’t interfere with each other. This concept of interference seems to imply that there is only so much space on any particular frequency that can carry signals, that there is only so much spectrum available. Colours sit on the same spectrum as radio waves, separated only by their different frequencies; to suggest that spectrum is limited is to suggest that there is only so much red to go around, which is clearly a farcical concept.

All of that, which is sound and uncontroversial 6th form physics, does raise some interesting questions about our radios and TVs and mobile phones, all of which broadcast across licensed electromagnetic frequencies. It turns out that the problem of interference is a problem of the broadcasting and receiving equipment not the natural scarcity of the airwaves. We have a system that limits access to frequencies because we are still using a technology base optimised to an old technical paradigm.

This piece published in Salon gives you the full detail and quotes extensively from the work of David Reed, an important and prominent scientist from MIT, famous for co-writing the text that nailed the modern architecture of the internet. Understanding what is actually going on here turns out to be entertaining and enlightening.

Salon – The myth of interference

 

I don’t know whether this last link is being serious or not, and that alone might be the best observation I have to make about the state of modern economics.

Alex Tabarrok is a right-leaning economist who authors the blog Marginal Revolution with Tyler Cowen; both are professors at George Mason University in Virginia. This short post, one of many from the right responding to the fuss being made by Piketty’s Capital, offers “2 surefire solutions to inequality”. One is to increase fertility among the rich, diluting inheritances and reducing capital concentration. The other surefire way? To reduce fertility among the rich! The author of the post puts a lot more detail into this position than I do here. I’m leaning towards the opinion that he is simply having a laugh, but then as he is an economist I’m really not so sure.

Marginal Revolution

 

 


Links – High frequency trading, poison and Jimmy Carter

I’ve had my eye on high frequency trading (HFT) for a little while. I was initially fascinated by the overwhelming need for connectivity speed driving up the cost of real estate near the exchanges. If you could locate your trading operation next to the exchange, the tiny fractions of a second saved by geographical proximity were worth huge amounts in naked profit.

The whole thing struck me as another example of short term profiteering taking precedence over the more sensible longer term allocation of risk and investment capital (which is after all what stock markets are supposed to do).

The long term trend of shareholding periods has been in decline since the 1960s, when shares were held on average for 8 years. The following 2 links disagree about what the average holding period was in 2012, but whichever data point you prefer, 5 days according to Business Insider or 22 seconds according to London’s Daily Telegraph, it is clear that we are using the mechanism that is the stock market in a radically different way than we have ever done before.

Part of the overarching societal philosophy of share ownership was that the long term incentives held by investors, via shareholding, would act as a brake on the incentives for larceny held by the professional management class. But those long term incentives can only act as a brake if the shares are held in a long term pattern, and today that is not the case. To be fair this is not a triumph of the managerial class over the investor class; this is driven by the profiteering of Wall Street and the City of London too. Simplistically, more trading generates more trading income, but there are other factors acting here too.

Firstly we have a global capital investment paradigm that is focused on the needs of finance capital, not production capital. This so-called casino approach to capitalism, and its relationship to production capital, is best understood through the analysis of Carlota Perez in her stunning, and surprisingly accessible, Technological Revolutions and Financial Capital. This is a very quick summary of her ideas.

Secondly, and something that has only surfaced into popular discourse recently, it appears that there is a fraudulent mechanic in widespread use as a result of HFT. Michael Lewis has penned the exposé in his book Flash Boys.

In essence the advantage bestowed on certain traders as a result of their proximity to certain exchanges generates more than quicker sight of price movements. The original story of HFT was that this small advantage generated thousands of small arbitrage trades, but the window of opportunity to make these trades was so small that it could only be accomplished by computers acting autonomously via algorithms.

Among other things Lewis has shown that the various routings around the multiple exchanges illegally skirt regulations that are designed to present the same price on all exchanges at the same time. This means that a trading outfit operating on multiple exchanges can make trades based on the market intelligence of a received order. If a client instructs the trader to buy 10,000 shares at the listed price, those shares will need to be bought from multiple exchanges, but the trader’s ability to communicate with the furthest exchanges faster than the official price channels sees them instruct their machines at those locations to buy what is available first. The end result is that the original client will only receive, say, 4,000 of the shares they wanted. The remaining 6,000 shares are now owned by the trader, but only because of the value generated by knowledge of the instruction for the original 10,000-share transaction.

Interestingly a key component here is the difficulty of verification.

This lack of human insight into what is occurring across these technical networks obscures what is happening, so much so that fraud can flourish. In this regard it is very similar to the fraud rife in the ad tech networks for digital display advertising.

This link from the Washington Post sets the scene very well, explaining the nature of the wider problem.

This piece, published by the NYT, is actually an adaptation from Lewis’s book and details how the situation was eventually understood and the actions being taken to rectify the problem. It’s long but really good.

 

 

In 2006 Alexander Litvinenko was poisoned in London with the lethally radioactive element polonium. If you recall the incident at the time you will remember that it took a long time before anyone was able to understand what was happening to him, what the poisoning agent was and where he was poisoned. The original location was believed to be the Itsu sushi restaurant in Piccadilly, although it eventually turned out to be the Pine Bar in the Millennium Hotel in Grosvenor Square.

The story is much more involved than any 5 minute news segment could possibly have hoped to portray, which I guess is not a surprise. Litvinenko himself was no innocent, with a history in the feared FSB and the KGB before that. He took a stand in favour of Boris Berezovsky over Vladimir Putin, something that no ordinary man would do, and one that may ultimately have contributed to the situation he found himself in.

This article is very long, but if you enjoy a spy story, fictional or otherwise, then you will certainly enjoy this. Something I hadn’t understood was how much concern was raised by the presence of polonium in London. The decontamination program employed a staggering 3,000 people and operated across 50 different sites; 2 simple numbers, but they invoke images of a Men in Black style clean-up operation happening under our noses with no obvious sign whatsoever that it was happening.

The article also describes the strange world of the Russian oligarchs, and as a result tells part of the recent history of the post-Cold War Russian experience.

It’s worth finding the time to read this. For fun.

How radioactive poison became the assassin’s weapon of choice

 

 

This next piece is also a pleasure to read, if only because it is so refreshing to hear an American president (ex) talking so candidly about the role and position of America in the world today, and about issues within American domestic politics as well. Jimmy Carter never was like all the rest; whether or not you agreed with him, whether you are a natural Republican or a natural Democrat, it is difficult to fault his sincerity and his integrity. There aren’t many world leaders that it’s easy to say that about.

On the topic of the religious persecution of women’s rights:

Well, they actually find these verses in the Bible. You know, I can look through the New Testament, which I teach every Sunday, and I can find verses that are written by Paul that tell women that they shouldn’t speak in church, they shouldn’t adorn themselves and so forth. But I also find verses from the same author, Paul, that say all people are created equal in the eyes of God. That men and women are the same before God; that masters and slaves are the same and that Jews and Gentiles are the same. There’s no difference between people in the eyes of God. And I also know that Paul wrote the 16th chapter of Romans to that church and he pointed out about 25 people who had been heroes in the very early church — and about half of them are women. So, you know, you could find verses, but as far as Jesus Christ is concerned, he was unanimously and always the champion of women’s rights. He never deviated from that standard. And in fact he was the most prominent champion of human rights that lived in his time and I think there’s been no one more committed to that ideal than he is.

I’m not a religious man but I do wish there were more prominent leaders who hold a strong faith, willing to call out the contradictions within their religious texts and simply choose the side that is kind, caring and decent instead of aggressive and divisive.

Jimmy Carter

 


4 videos about technology

This is a fun way to look at the concept of lag. What we have here is an Oculus Rift, headphones, a webcam and a Raspberry Pi all put together to increase the lag from a third of a second to 3 seconds. It’s actually an advert for a Swedish fibre ISP selling fast connectivity, but that is easy to forgive as the message is well delivered and amusing. It’s not about virtual reality at all, but it does nonetheless bring home how important latency is in the VR experience.

 

Ok, so this next video is VR as VR, as in it doesn’t show the Oculus interacting with the real world. Still great though. It’s early days for the Rift but this is a good start, a taster if you will.

 

This video combines a few different development memes from the computing world: the internet of things, interface development, home automation, office automation, projection. It also shows some of the development directions that I discussed in my recent post about mobile. It’s a table, 3 lights, a wall and a computer that can hear and see. Very interesting.

 

Finally a small change of direction. This is a group of kids reacting to Walkmans. There is lots in here and it is a joy to watch (particularly the one kid smarter than the others who seems to know his own mind: most are appalled at the observation that a Walkman cost $200; he calmly observes that an iPad costs $700). Perhaps the biggest take out is that the paradigm of mechanical, not computational, functionality is just beyond them. They poke the buttons gently, as you would with a touchscreen; there is no concept of opening the device, no expectation that another item is required (the cassette itself), or even headphones, and horror that a cassette might only hold 16 songs! There is some succour here for the older among us: if you have ever been confused by a new technology, or watched a parent in a similar situation, then this video will have some ringing similarities. Clearly, misunderstanding technology goes both ways.


Small phones, mobile computing, innovative disruption and speculation


Apps apps apps

I don’t like getting my content from apps. I use the mobile web instead if I have to, but the majority of my web access is on a laptop with a nice big screen, which means my prejudice doesn’t hurt me too much.

I know that that sets me apart from the majority of smartphone users, but as I said before, my primary web access point isn’t my phone so the limitations of the mobile web that have enabled app culture don’t really hurt me.

I also believe that it is a culture with a short shelf life.

It is a system with an unusual degree of friction in comparison to an easily imagined alternative. The whole native apps versus web apps argument that has been running for years now is largely decided, at any one point in time, by the state of play with regard to mobile technology and the user requirements of said technology. I’m pretty certain the technology will improve faster than our user requirements will complicate.

That easily imagined alternative is pretty straightforward too. A single platform, open and accessible to all, built on open protocols, that can support good enough access to hardware sensors, is served by effective search functionality and doesn’t force content to live within walled gardens.

The web, in other words. Not fit for purpose today (on mobile), but it will be soon enough.

Monetisation possibly remains a challenge, but I’m also pretty sure we will eventually learn to pay for the web services that are worth paying for. Meaningful payment paradigms will arrive; the profitless start-up/subsidised media model (which is what almost all these apps are) can’t last. We will get over ourselves and move beyond the expectation of free.

Should everything be on the web? No, not at all. For me there is a fuzzy line between functional apps and content apps. The line is fuzzy because some content apps contain some really functional features, and vice versa. Similarly some apps provide function specifically when a network connection is unavailable (Pocket, for example). The line is drawn between content and function only because I strongly believe that the effective searchability of content, via search engines or via a surfing schema, and the portability of that content via hyperlinks, is the heart of what the web does so well and generates so much of its value. If all our content gets hidden away in silos and walled gardens then it simply won’t be read as much as if it were on the open web. And that would be a big, perverse shame.

Why can’t we just publish to the stream? Because it is a weak consumption architecture, it is widely unsearchable (although I suspect this can be changed, once we move beyond the infinite scroll), and because you don’t own it (don’t build your house on someone else’s land). That’s another whole essay and not the focus of this one.

I’ve felt this way for a long time so I am happy to say that finally some important people are weighing in with concerns in the same area. They aren’t worried about the same specific things as me, but I can live with that if it puts a small dent in the acceleration of app culture.

Chris Dixon recently published this piece calling out the decline of the mobile web. I think he is overstating the issue actually, but only because I think the data doesn’t distinguish effectively between apps that I believe should be web based and those I am ambivalent about. He cites 2 key data sources.

[Chart: comScore, desktop vs mobile internet users, 2014]

The ComScore data for total web users vs total mobile users is solid enough, showing that mobile overtook desktop in the early part of this year. Interestingly the graphic also shows that desktop user volumes are still rising, even if their rate of growth lags behind mobile’s. Not many people are taking this on board in the rush to fall in love with the so-called post-PC era. It’s important because to really understand the second data point, provided by Flurry, you have to look at the very different reasons people are using their mobile computers.

Flurry tells us that the amount of time spent in apps, as a % of time on the phone/internet itself, rose from 80% in 2013 to 86% (so far) in 2014. That’s a lot of time spent in apps at the cost of time spent on the mobile web.

But the more detailed report shows that that headline number is less impressive than it at first seems. All that time in apps, it turns out, isn’t actually at the cost of time spent on the web; not all of it, anyway.

[Chart: Flurry, time spent on mobile by app category]

Within that 86%, gaming accounts for 32% of total mobile time, messaging 9.5%, utilities 8% and productivity 4%. That totals 53.5% of all mobile time, and 62% of the time spent in apps specifically. The fact that gaming now occurs within networked apps on mobile devices, rather than as standalone software on desktops, only tells us that the app universe has grown to include new functionality not historically associated with the desktop web. It tells us that this new functionality has not been removed from the mobile web, that in this case the mobile web has not declined. As I said, I agree with the thrust of the post; I just think the situation is less acute.
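The arithmetic behind those two numbers, spelled out:

```python
# Back-of-envelope check on the Flurry figures quoted above.
time_in_apps = 0.86  # share of all mobile time spent in apps

# Categories not historically part of the desktop web: gaming, messaging,
# utilities, productivity (shares of total mobile time).
non_web_categories = 0.32 + 0.095 + 0.08 + 0.04

print(f"{non_web_categories:.1%} of all mobile time")          # 53.5%
print(f"{non_web_categories / time_in_apps:.0%} of app time")  # 62%
```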

Chris Dixon’s main concern is for the health of his business (he’s a partner at Andreessen Horowitz, a leading tech VC): he sees the rise of 2 great gatekeepers, Google and Apple, the owners of the 2 app stores that matter, as a problematic barrier to innovation, or rather a barrier to the success of innovation (there is clearly no shortage of innovation itself).

He’s not alone; Mark Cuban has made a similar set of observations.

Fred Wilson too, although he is heading in an altogether different and more exciting direction again, which I will follow up in another post (hint: blockchain, distributed consensus and personal data sovereignty). Still, this quote from Fred’s post sums up the situation rather well from the VC point of view.

My partner Brad hypothesized that it had something with the rise of native mobile apps as the dominant go to market strategy for large networks in the past four to five years. So Brian pulled out his iPhone and I pulled out my Android and we took at trip through the top 200 apps on our respective app stores. And there were mighty few venture backed businesses that were started in the past three years on those lists. It has gotten harder, not easier, to innovate on the Internet with the smartphone emerging as the platform of choice vs the desktop browser.

The app stores are indeed problems in this regard, I don’t disagree and will leave it to Chris and Mark’s posts to explain why. I am less concerned about this aspect partly because it’s not my game, I don’t make my money funding app building or building apps. But mostly I’m unconcerned because I really believe that (native) app culture has to be an adolescent phase (adolescent in as much as the internet itself is a youngster, not that the people buying and using apps are youngsters).

If I am right and this is a passing phase can we make some meaningful guesses as to what might change along the way?

I’ve been looking at mobile through the eyes of Clay Christensen’s innovator’s dilemma recently, which I find to be a useful filter for examining the human factors in technology disruptions.

Clearly if mobile is disrupting anything it is disrupting desktop computing, and true to form the mobile computing experience is deeply inferior to that available on the desktop or laptop. There are, actually, lots of technical inferiorities in mobile computing, but the one that I want to focus on is the simplest of all: size. The mobile screen is small.

Small small small small. I don’t think it gets enough attention, this smallness.

If it’s not clear where I’m coming from, small is an inferior experience in many ways, although it is, of course, a prerequisite for the kind of mobility we associate with our phones.

Ever wondered why we still call what are actually small computers, phones? Why we continually, semantically place these extremely clever devices in the world of phones and not computers? After all the phone itself is really not very important, it’s a tiny fraction of the overall functionality. Smartphones? That’s the bridge phrasing, still tightly tied to the world of phones not computers. We know they are computers of course, it’s just been easier to drive their adoption through the marketing semantic of telephony rather than computing. It also becomes easier to dismiss their limitations.

Mobile computers actually predate smartphones by decades; they were, and still are, called laptops.

Anyway, I digress. For now let’s run with the problems caused by small computers. There are 2 big issues, which manifest in a few different ways: screen size and the user interface.

Let’s start with screen size (which of course is actually one of the main constraints that hampers the user interface). Screen size is really important for watching TV, or YouTube, or Netflix, etc.

We all have a friend, or 2, that gets very proud of the size of their TV. The development trend is always towards bigger and bigger TV screens. The experience of watching sport, or a movie, on a big TV is clearly a lot better than on a small one (and cinema is bigger again). I don’t need to go on, it’s a straightforward observation.

No-one thinks that the mobile viewing experience is fantastic. We think it’s fantastic that we can watch a movie on our phones (I once watched the first half of Wales v Argentina on a train as I was not able to be at home; I was stoked), but that is very different from thinking it’s a great viewing experience. The tech is impressive, but then so were those tiny 6 inch by 6 inch black and white portable TVs that started to appear in the ’80s.

Eventually all TV (all audio visual) content will be delivered over IP protocols. There are already lots of preliminary moves in this area (smart TVs, Roku, Chromecast, Apple TV) and not just from the incumbent TV and internet players. The console gaming industry is also extremely interested in owning TV eyeballs as they fight to remain relevant in the living room. Once we watch all our AV content via internet protocols the hardware in the front room is reduced to a computer and a screen, not a TV and not a gaming console. If the consoles don’t win the fight for the living room they lose the fight with the PC gamers.

Don’t believe me? Watch this highlights reel from the Xbox One launch… TV TV TV TV TV (and a little bit of Halo and COD).

The device that the family watches moving images on will eventually be nothing but a screen. A big dumb screen.

The smart functionality, the computing functionality that navigates the protocols to find and deliver the requisite content, well, that will be provided by any number of computing devices. The screen will be completely agnostic, an unthinking receptor.

The computer could be your mobile device, but it probably won’t be. As much as I have derided the phone part of our smartphones, they are increasingly our main form of telephony. As attractive as the thought is, we are unlikely to want to take our phones effectively offline every time we watch the telly. Even if at some stage mobile works out effective multi-tasking, we will still likely keep our personal devices available for personal use.

What about advertising? That’s another area where bigger is just better. I’ve written about this before: all advertising channels trend to larger, dominant experiences when they can. We charge more for the full page press ad, the homepage takeover, the enormous poster, than for the little high street navigational signs telling you that B&Q is around the corner. Advertising craves attention and uses a number of devices to grab/force it. Size is one of the most important.

Mobile is the smallest channel we have ever tried to push advertising into. It won’t end well.

The Flurry report tries to paint a strong picture for mobile advertising, but it’s not doing a very good job; the data just doesn’t help them out. They start with the (sound) idea that ad spend follows consumption, i.e. where we spend time as consumers of media is where advertisers buy media, in roughly similar proportions. But the only part of the mobile data they present that follows this rule is Facebook, the only real success story in the field of mobile display advertising. The clear winner, as with all things digital, is Google, taking away the market quite disproportionately with their intention-based ad product delivered via search.

[Chart: media consumption vs ad spend, by channel]

There are clouds gathering in the Facebook mobile advertising story too. Brands have discovered that it is harder to get content into their followers’ newsfeeds. The Facebook solution? Buy access to your followers.

That’s a bait and switch. A lot of these brands are significant contributors to the whole Facebook story itself. Every single TV ad, TV program, media columnist or local store that spent money to devote some part of their communication real estate to the line “follow us on Facebook” helped to build the Facebook brand, helped to make it the sticky honeypot that it is today.

And now if you want to leverage those communities, you have to pay again. Ha.

Oh, but there’s value in having a loyal, active base of followers, I hear the acolytes say; Facebook is just the wrapper. Ok, then in that case ask them all for an email address and be guaranteed that 100% of your communications will reach them. Think that will work? Computer says no.

Buying Facebook friends? That’s starting to whiff too.

That leaves Google/search. The only widely implemented current revenue model that has a future on the mobile web. Do you know what would make the Google model even more effective? A bigger screen, and a content universe on the open web.

To be fair the idea that we must pursue pay models is starting to take hold in the communities that matter. This is good news.

Read this piece. It’s as honest a view as you will find within the advertising community, and it still doesn’t get that the mobile medium is fundamentally flawed simply because it is small. There is no get out of jail free card for mobile, unless it gets bigger.

Many of the businesses we met have a mobile core. A lot of them make apps providing a subscription service, some manage and analyze big data, and others are designed to improve our health care system. The hitch? None of them rely on advertising to generate revenue. A few years ago, before the rise of mobile, every new startup had a line item on its business plan that assumed some advertising revenue.

What did you think about the Facebook WhatsApp purchase? It can’t have been buying subs; it’s inconceivable that most of the 450m WhatsApp users weren’t already on Facebook. Similarly, even though there is an emerging markets theme to the WhatsApp universe, Facebook hasn’t been struggling to pick up adoption in new markets. My money is on Zuckerberg having bought a very rare cadre of digital consumer: consumers willing to pay for a service that can be found in myriad other places for free. WhatsApp charges $1 a year; a number of commenters have noted this and talked of Zuckerberg purchasing a revenue line. I think they have missed the wood for the trees. That $1 a year is good money when you have 450m users, for sure. It’s even better when you have nearly a billion and many of them are getting fed up with spammy newsfeeds. What will you do if you get an email that says “Facebook, no ads ever again, $5”? He doesn’t just get money, he also gets to concentrate on re-building the user experience around consumers, not advertisers.
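The back-of-envelope sums behind that thought; the $5 offer is my own hypothetical, not anything Facebook has proposed:

```python
# Rough revenue arithmetic for the WhatsApp thought experiment above.
whatsapp_subs = 450_000_000
print(f"${whatsapp_subs * 1:,} a year at $1 per user")  # $450,000,000

facebook_users = 1_000_000_000  # "nearly a billion"
ad_free_price = 5               # hypothetical "no ads ever again" offer
print(f"${facebook_users * ad_free_price:,} a year if everyone took it")
# -> $5,000,000,000
```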

The WhatsApp purchase is the first time I have tipped my metaphorical hat in Zuckerberg’s direction. The rest is morally dubious crud that fell into his lap; this is a decently impressive strategic move. I still won’t be signing up anytime soon though.

Ok. Small. Bad for advertising. What else?

Small devices also have necessarily limited user interfaces. The qwerty keyboard, for all of its random derivation, works really well. There is no other method that I use that is as flexible and as speedy for transferring words and numbers into a computer.

The touchscreen is impressive though: the device that, for my money, transformed the market via the iPhone. My first thought when I saw one was that I would be able to actually access web pages and make that experience useful through the pinch and zoom functionality. I had a BlackBerry at the time with the little scroll wheel on the side. Ugh. But even that wonder passed, now that we optimise the mobile web to fit on the small screens as efficiently as possible. No need to pinch and zoom, just scroll up and down, rich consumption experience be damned.

Touchscreens are great, they just aren’t as flexible as a keyboard. As part of an interface suite alongside a keyboard they are brilliant. As an interim interface to make the mobile experience ‘good enough’, again brilliant. As my primary computing interface … no thanks.

When the iPad arrived the creative director at my agency, excited like a small child as he held his tablet for the first time, decided to test its ability as a primary computing interface. He got the IT department to set up a host of VPNs so that he could access his networked folders via the internet (no storage on the iPad) and then went on a tour of Eastern Europe without his MacBook.

He never did it again. He was honest about his experience, it simply didn’t work.

What about non text creativity? Well, if you can, have a look inside the creative department at an advertising agency. It will be wall to wall Apple. But they will also be the biggest Macs you have ever seen. Huge glorious screens, everywhere.

You can do creative things on a phone, or a tablet but it is not the optimised form factor. It seems silly to even have to type that out.

If small is bad, and if mobile plays out according to Christensen’s theories (and remember this is just speculation), then eventually mobile has to get bigger. Considering that we have already reduced the acceptable size of tablets in favour of enhanced mobility this seems like some claim.

For my money the mobile experience goes broadly in one of two ways.

In both scenarios it becomes the primary home of deeply personal functionality and storage: your phone, your email (which is almost entirely web accessible these days), your messenger tool, maps, documents, music, reminders, life logging tools (how many steps did you take today?) as well as trivial (in comparison) entertainments such as the kind of games we play on the commute home.

The fork is derived from what happens with desktop computing: the computing of creation and the computing of deeper, better consumption experiences.

We either end up in a world with such cheap access to computing power that everyone has access to both mobile computing and desktop computing, via myriad access points, and the division of labour between those two is derived purely on which platform best suits a task. Each household might have 15 different computers (and I’m not even thinking of wearables at this stage) for 15 different tasks. This is a bit like the world we have today except that the desktop access is much more available and cheap, and not always on a desk.

Alternatively, the mobile platform does mature to become our primary computing device, a part of almost everything we do. This is much more exciting. This is the world of dumb screens that I mentioned earlier. If mobile is to fulfil its promise and become the dominant internet access point then it must transcend its smallness. There are ways this could happen.

We already have prosthetics. Every clip on keyboard for a tablet is a prosthetic.

We already have plugins. Every docking station for a smartphone or a tablet is a plugin.

The quality of these will get better but they still destroy the idea of portability. If you have to haul around all the elements of a laptop in order to have effective access to computing you may as well put your laptop in your bag.

I can see (hold for the sci-fi wishlist) projection solutions for both keyboards and screens, where the keyboard is projected onto a horizontal surface, the screen onto a vertical surface. Or what about augmented and virtual reality? Google Glass is a player, although the input interface is a dead duck for full scale computing, but how about an optimised VR headset that isn’t the size of a shoebox? Zuckerberg might have been thinking about the metaverse when he bought Oculus, but there will potentially be other interim uses before humanity decides to migrate to the virtual à la Ready Player One; VR could be a part of that.

The more plausible call is a plugin / cheap access model hybrid. Some computing will always be use-case driven. I think it’s likely that a household’s primary AV entertainment screen, for example, the big one in the living room, will have its own box, maybe controlled by a mobile interface but configured such that your mobile platform isn’t integral to media consumption. The wider world, though, I think will see dumb screens and intelligent mobile devices working together everywhere. Sit down, slot in your mobile: nice big screen, all your files, all your preferences, a proper keyboard (the dumb screen is largely static, of course, so it comes with a keyboard) and a touchscreen too, of course; all in all a nice computing experience.

There are worlds of development to get there, of course. Storage, security and battery life, among other areas, will all need a pick-up, but if we have learnt anything since 1974 it’s that computers do get better.

I really woke up to Christensen’s theory with the example of the transistor radio, which was deeply inferior to the big old valve table tops that dominated a family’s front room, often in pride of place. The transistor radio was so poor that the kids that used them needed to tightly align the receivers with the broadcast towers. If you were facing south when you needed to face north, you were only going to get static. But that experience was good enough. Not for mum and dad, who still listened to the big radio at home, but for the teenagers this was the only way they could hear the songs they wanted to hear, without adults hovering. All of those limitations were transcended eventually, and in the end everyone had their own radio. One in the kitchen, one in every teenager’s room, the car, the garage, portable and not portable. Radios eventually got to be everywhere, but none of them were built around valves like those big table top sets.

Mobile computing is a teenager’s first wholly owned computing experience (I’m not talking about owning the hardware). It happens when they want it to, wherever they are. It’s also been many an older person’s first wholly owned computing experience, and it’s certainly been the first truly mobile computing experience for anyone. I might have made the trivial observation that laptops were the first mobile computers, and they were, but you couldn’t pull one out of your pocket to get directions to the supermarket.

Christensen’s theory makes many observations. Among them are that the technology always improves and ends up replacing the incumbent that it originally disrupted. The users of the old technology eventually move to the new technology; the old never wins.

We aren’t going to lose desktop computing though. Or at least we aren’t going to lose the need states that are best served by desktop computing; if you want to write a 2,000 word essay, your phone is always going to be a painful tool to choose. But you might be able to plug it in to an easily available dumb workstation and write away in comfort instead.

At least let’s hope so.


Speaking Truth to Power

Quinn Norton is a name that hovered on the edge of my consciousness for a chunk of time. She dated Aaron Swartz for 3 years, and in the furore after his death her name became part of that story as it emerged that prosecutors had put pressure on her to offer testimony against him, something she did not do.

At this year’s Chaos Communication Congress she spoke with Eleanor Saitta on the topic “No Neutral Ground in a Burning World”, a talk that gave an historical perspective on governmental surveillance and how technology is affecting the state’s need for legibility.

 

 

Then finally I found this piece, “A Day of Speaking Truth to Power, Visiting the ODNI”. The ODNI is the Office of the Director of National Intelligence, spook central, a scary place for anyone, let alone a journalist with a reputation for speaking ill of the intelligence community (IC). It’s a great read for so many reasons. It paints a picture of the IC that is both confirming and revelatory, and it also surfaces a number of interesting ideas that could have been brought up in a completely different context but found their way into this conversation instead.

When I read longer form essays I often take one quote, sometimes two, and send them to my email for recycling into a post (credited of course), or to remind myself to dig deeper into a topic, or just to add a highlight to a particular piece, to make it stand out against everything else I am reading.

With this essay I sent myself 9 different quotes; I’ve listed 5 of them below. Please read all of it though, it’s fascinating.

 

It was clear to me part of this mess we’re in arose from the IC feeling that if everyone was giving away their privacy, then what they, the Good Guys, did with it could not be that big of a deal.

 

I said the end of antibiotics and the rise of diseases like resistant tuberculosis would mean that tech would end at our skin, and we’d be limited in our access to public assembly. It was probably the darkest thing I said all day, and one of my table mates said (not unkindly) “I don’t like your future.”

 

I realized that one of the problems is that being at the top of the heap, which the security services definitely are, makes the future an abomination and terror. There is no possible great change where the top doesn’t lose, even if it’s just control, rather than position. The future can only be worse than the present, and a future without fear is a future to fear.

 

They were mostly pleasant and thoughtful people, and they listened as best they could. One told me I made her feel like Cheney, “And I’m not used to feeling like Cheney.”

 

When we learn to live online we will want to be full humans again. When, as one of the outside experts put it, the Eternal September is over, we will learn once again to mind our own fucking business, and we will expect others to, as well.

 

That last quote is something we are going to visit again and again over the next 20, 30 or 50 years (it will likely be a hard slog). The truth is that being online is such a young concept that we haven’t a clue yet what it means in human terms, social terms, societal terms. In the same way that we expect privacy at a coffee table in a public coffee shop, even though it is possible for someone to listen to our conversation, we will find a social code developing that covers the digital in ways we will find reminiscent of the analogue.