How will brand advertising work? (Disintermediation and the curious case of digital advertising continued)

This is the third installment in my series exploring why the internet has so far failed to disintermediate digital advertising. Even at the end of this essay we still won’t have arrived at a solution that describes a fully disintermediated model for digital advertising. That’s because I’m still trying to work it out.

This blog is chiefly written, selfishly, to force me to examine and understand things. This particular topic has been no different, and I have found my perspectives changing quite a lot as I dig further into the issues. Writing this particular essay has taken some time. I published the last one in March, 8 months ago, and a lot has changed in the ad tech landscape since then. I’m sure ad tech is here to stay, but I’m also sure that the problems caused by the current ad tech deployment are still problems.

This exploration is primarily about advertising, and therefore approaches these problems solely through the advertising filter, even though the issues relating to data ubiquity and misuse affect more than just advertising paradigms.

From the point of view of an advertiser, the biggest problem with ad tech (programmatic, as advertisers call it) is that it, and the internet at large, is not currently set up to deliver brand advertising. At all.

The goal of this essay, therefore, is to examine what needs to happen to mould the internet publishing space into one that can support brand advertising goals as well as direct response or performance goals, something that I believe is very important.

Somehow, somewhere along the way, we are going to have to fix the revenue side of modern digital publishing without recourse to clickbait and lists, which have their place for sure, but should not be the only revenue lifeline for publishers. Similarly, the trend towards ever more intrusive and creepy data work is unlikely to end well.

In my opinion the rescue of revenue models for premium publishing output is important for lots of reasons but it is particularly important for the future of brand advertising.

Metrics are more than just scorecards – a quick recent history of brand and DR media process

I started working for a global media agency in early 2000, as a direct response (DR) specialist working on the account of a global technology/computing brand, working across their European markets.

At my interview I was asked if I was familiar with direct response metrics, which at the time meant cost per response (CPR) and cost per acquisition (CPA) with a little ROI (return on investment) thrown in. In late 1999, when that interview took place, we didn’t have cost per click on the dashboard, because the use of digital media as an advertising medium was still very young and because where it was getting attention it was being treated as a branding medium.

I remember being quite taken aback at that line of questioning, not offended just surprised. I had thought it was an entry level concept, to have an understanding of the very basic metrics CPR and CPA. What after all is there to understand? It is worth pointing out that the question wasn’t asking about attribution techniques, nor was it hinting at deeper subtleties about measurement in terms of statistical validity, longer term lifetime value or brand halo effects.
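To show just how entry level these metrics are, here is a sketch of CPR, CPA and ROI in Python. All figures are invented for illustration, and the function names are mine, not any industry standard:

```python
# Toy illustration of the basic DR metrics discussed above.
# Every number here is made up for demonstration purposes.

def cpr(spend, responses):
    """Cost per response: what each enquiry or lead cost us."""
    return spend / responses

def cpa(spend, acquisitions):
    """Cost per acquisition: what each new customer cost us."""
    return spend / acquisitions

def roi(revenue, spend):
    """Return on investment, as a ratio of profit to spend."""
    return (revenue - spend) / spend

spend = 50_000.0      # media spend for the campaign
responses = 2_000     # people who responded (called, posted a coupon...)
acquisitions = 400    # responders who actually became customers
revenue = 120_000.0   # revenue attributed to those customers

print(f"CPR: {cpr(spend, responses):.2f}")     # 25.00 per response
print(f"CPA: {cpa(spend, acquisitions):.2f}")  # 125.00 per acquisition
print(f"ROI: {roi(revenue, spend):.2f}")       # 1.40, i.e. 140%
```

As the essay says, there is not much here to understand: division of spend by outcomes, with all the hard questions (attribution, lifetime value, halo effects) left out.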

The questioning was as basic as could be simply because my interviewers were not in any way familiar with these metrics. They were, for the record, both very sharp people, but they had no experience of direct response marketing, and certainly no experience of managing media within direct response operational models.

Somehow or other I must have covered my bemusement well enough because I was eventually offered the job.

This not entirely exciting anecdote points out something much bigger than the scope of understanding that 3 people held in late 1999 about DR metrics. What it showed was that the wider media agency world’s intellectual alignment in 1999/2000 was almost exclusively focused on the metrics, methodologies and theories of how to build brands.

There were some agencies that focused on direct response as a discipline but they were specialists, and in comparison to the brand companies in the media agency space, they were very small. When I was working client side in financial services we used one of these specialist agencies. As a result I was as unaware of the branding metrics, reach and frequency, as my interviewers were of CPR and CPA. During that strange interview worlds were colliding. If the roles had been reversed my interviewers would have been as bemused as I was.

Today it is all very different.

There are no media agencies that do not offer a DR product (in the extreme cases only because they will at the very least offer digital media buying), and the DR product is no longer the idiot brother of brand. Back in 1999 it was the home of junk mail and “sausage machine” decision making driven by spreadsheets (as difficult as it is to imagine, in 1999/2000 spreadsheet skills were rare in media planning circles). Nobody in my agency was looking to take my job or even to join my team. DR was considered very much the low rent discipline to brand’s shining pinnacle.

The advertising cognoscenti of the day would have been in revolt if they had understood just what the advent of digital was going to do to the ascendancy of direct marketing (the fact that those very same people now lead the DR/ad tech frontier says a lot about how trends propagate within advertising agencies, but that’s a topic for another day).

True to their heritage the old school brand agencies, which for good reason are the biggest agencies out there, have re-branded the discipline of direct response marketing. The DR department is now called the performance department and the job titles are all branded with the new sobriquet – performance planners, performance strategists, performance directors.

There is something slightly distasteful, yet ultimately revealing, about being re-branded by the advertising community in order to be recognised and ultimately assimilated by them. Let no-one tell you that the agency community doesn’t do irony, even if they don’t recognise it as such themselves.

Clearly the factor that changed everything was the rise of digital. Or more specifically, the uptake of digital behaviour among our populations, which in turn led to a transfer of where those populations spent their media consuming hours.

At its very most simplistic core, media planning involves understanding this simple idea. We establish where people spend their time, and then we follow them into those spaces. The finest of campaigns go beyond such basic thoughts to generate greater value and resonant emotional impact, but they never lose sight of the core need to find the right people and put the message in front of them.

So, when media consumption increased in digital channels, digital advertising was sure to follow. At first the community in the brand agencies maintained the belief that this was a brand medium, not to be sullied by the techniques of direct marketing. I had many a good natured discussion (argument) with my colleagues in the digital team about where the channel was ultimately going.

Some of this perverse position was driven by a general resistance to change but it was also fuelled by the realities of the agency business model and our relationships with our clients. If 95% of the money you spend on behalf of a client comes from the clients with responsibility for branding, then it makes sense that you tell them this new channel, this new frontier is great for their brand. Which is what we did. Which is also why, finally, when the client communities caught up with reality, so did the agency models.

For this reason alone, I think the generally assumed idea that frontiers in advertising are driven by the agency community rather than the client community is complete hogwash. Agencies sell what clients will buy, or they go out of business; there is little real room for leadership from the agency community (at least not while margins are tight).

Slowly, through the 2000s, almost all of the top media agencies built out and incorporated cross channel DR teams and gradually allowed the digital channel to be planned against the tsunami of new performance data being generated every day. The development of Google’s search product played no small part, effectively removing any space for an ad planner of the day to see brand theory deployed through the small, data driven text ads beside Google’s results page.

Where was the brand man?

Nowhere. Nowhere at all. Largely because the other big factor in this story is the failure of branding metrics to be operationally useful in the new medium. Whereas DR looks to identify and count the actions a consumer must pass through on the way to a purchase (served the ad, clicked the ad, arrived at the site, put item in basket, paid for item) and then uses these data points to plan further activity, brand advertising is planned against metrics that are much less precise. This is a feature, not a bug.

Reach and frequency are the planning staples of brand advertising. What percentage of your target audience saw your advert (reach), and on average how many times did they see it (frequency). You will notice that these metrics are conspicuously free of any reference to business value: there is no mention of cost, and no reference to sales or indeed any consumer commercial activity. These metrics are measured against total population too, the full circulation of a newspaper for example, and not the smaller percentage of the circulation that looked at every ad placed there. Who reads every single page of a newspaper? Some do, but clearly not all.
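For symmetry with the DR metrics earlier, reach and frequency are just as mechanically simple. A minimal sketch, assuming we have a per-person impression log (the log, the names and the audience size below are all invented):

```python
# Toy reach/frequency calculation from an invented impression log.
from collections import Counter

audience_size = 1_000                    # total target audience (invented)

impressions = Counter({                  # person_id -> times the ad was seen
    "p1": 3, "p2": 1, "p3": 5, "p4": 2,  # ...imagine many more such entries
})

reached = len(impressions)               # unique people who saw the ad at all
total_impressions = sum(impressions.values())

reach_pct = 100 * reached / audience_size    # % of audience exposed
avg_frequency = total_impressions / reached  # mean exposures per reached person

print(f"Reach: {reach_pct:.1f}%, frequency: {avg_frequency:.2f}")
```

Note what is absent, exactly as the paragraph above says: no cost, no sales, no consumer action of any kind. The metrics describe exposure, nothing more.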

Understanding whether a brand campaign was successful or not was a matter for econometric modelling after the fact, which in turn then fed the heuristics that we used for planning (effective frequency of 3 anyone?). But mostly it just didn’t happen very often. It was not without some foundation that those of us in the DR world often joked that the key skill of a brand planner was a bottomless ability to post rationalise failure (although to be fair, and even though that wasn’t an entirely accurate observation, it was undoubtedly a useful skill for an agency planner to possess).

Nonetheless, and for all my DR snark, there is a good reason behind this fluffiness. Contrary to opinion in certain quarters the theories that drive advertising, both direct and brand, are derived from a great deal of experience and knowledge.

The reason for this vague set of metrics lies at the heart of brand media strategy. Broadly speaking, brand advertising is about making the target feel something, whereas DR advertising is about making the target do something, and something very specific at that (call this number, click this ad). The following couplets are not absolute statements, and there will be many exceptions to almost all of them, but they still round out a good basic approach for understanding the different tasks the 2 disciplines can undertake:

  • Brand generates demand, DR harvests it.
  • Brand operates almost exclusively in the field of emotional response, DR provides the, seemingly, rational spur to action (price, product features, short term offers, superior financing, a free pen).
  • Brand often works in the periphery of your perception, DR demands your full attention.
  • Brand is built on conspicuous waste, DR is all about efficiency (it is no accident that almost all DR metrics have cost as a key component).

The idea of conspicuous waste is very important, it is the same idea that explains the peacock’s feathers…“I am healthy, I can waste resource on this fanciful display”. In earlier times this was incompletely characterised by the phrase, often seen in print ads, “as seen on TV”, implying that a company that could afford to be on TV was a substantial and trustworthy entity. The dynamics of today’s TV market may, in some cases, make a mockery of such a thought but at the time it was valid and valuable.

The landscape may have changed, but the human psychology that these ideas feed on has not. Establishing worthiness through conspicuous waste is still important, it’s just a little harder to do, you have to be more accurate in more spaces than you did 30 or 40 years ago. Mass media, social media, design, interface quality and brand/product experience are now all major contributors to a brand’s intrinsic health.

The theory that encompasses this idea of conspicuous waste, this peacock’s feather, is called signalling. Humans and other animals use signalling throughout their lives, and so do brands. Brands signal in many ways, as mentioned above, but advertising remains one of the most controllable tactical and strategic weapons we have, and as such retains a particularly valuable place in the business tools arsenal. It is one of the very few things that can be turned on or off tomorrow and that makes it very important.

Rory Sutherland, Vice Chairman of the Ogilvy Group in the UK and ex-president of the Institute of Practitioners in Advertising, has been talking about signalling for years. He offered these thoughts during a talk he gave at the Business of Software conference in 2011.

Specifically, on the subject of peacocks’ feathers (my emphasis added).

Now arguably the Ferrari is effectively the sort of you know, the human male equivalent. It also interestingly has to be slightly pointless or wasteful to have meaning, it has to be a handicap you know, if woman (sic) were merely attracted to man (sic) with expensive vehicles they’d all chase truck drivers but the truck doesn’t serve a useful signalling purpose because it’s actually useful

Sutherland translates this idea to the challenge of building brands. The context of this next quote was the idea of buying an engagement ring (signalling as a biological theory is almost entirely about mating).

What this is saying, what this upfront investment is saying is ‘I am probably, because this is expensive, and to have meaning signalling sometimes has to cost money, I am probably playing the long game’, OK? And a brand is doing the same. It has taken me 15 to 20 years to build this brand, it has cost me an enormous amount of money in terms of advertising and reputation building, therefore it is probably not in my interest to make a quick buck by selling you something that is rubbish.

Anti-signalling

On the other side of the same coin Don Marti does a great job of showing that highly targeted DR advertising can’t send effective signals. Don’s observation here is particularly poignant for businesses that only engage in DR advertising.

Let’s use Lakeland’s example of carpet. I can go carpet shopping at the store that’s been paying Little League teams to wear its name for 20 years, or I can listen to the door-to-door guy who shows up in my driveway and says he has a great roll of carpet that’s perfect for my house, and can cut me a deal.

A sufficiently well-targeted ad is just the online version of the guy in the driveway. And the customer is left just as skeptical.

He also points to the famous Man in the Chair advert (via Eaon Pritchard)

[Image: The man in the chair]

If you take all these quotes and ideas together you come to a simple conclusion, one of many perhaps, but one that I want to highlight here. Targeting, the idea of targeting, suggests a spectrum of quality from high precision accuracy to missing by a mile. In the world of weapons we can understand the different uses of a shotgun and a sniper’s rifle, and the world of advertising is the same. We have DR targeting, which in its purest expression is the sniper’s rifle, and we have brand advertising, which is more like a shotgun: much of the shot is wasted (by design) but we still manage to shoot the bastard.

So brand uses quite a lax targeting regime while DR is built entirely around the concept of very accurate targeting. The maxim that drives DR is “right message, to the right person, at the right time”, which today operationally translates to the high level of targeting that has been hyper realised by big data in the online space.

Brand, at the other end of the spectrum, works with targeting that is considerably more aggregated than that. This is part of the handicap that Rory Sutherland refers to. The difference between the Ferrari and the truck, for advertising, is the ability to advertise in high value, high cost environments without worrying about wastage and efficiency, without worrying whether everyone there is a potential customer. Knowing that the value of the signal this delivers is worth it.

Money

Right now I can hear sceptical marketers and media buyers thinking “Get out of town. If that was true why do we bother to negotiate so aggressively for the media space we run brand campaigns in?”

On the surface that is a good enough question, but only on the surface. After Sutherland’s statements on handicapping and utility you might be forgiven for thinking that the price of the media buy is what matters. But it isn’t. The cost of the media buy is largely, if not entirely, invisible to the potential customer. Only in certain marquee scenarios is the cost issue widely acknowledged, like the Superbowl half time ads.

The Superbowl ads are a great example of signalling through excess. For sure there is a big audience, and live sport is one of the few remaining ways to access big audiences in one hit, but the real value of the Superbowl spots is admission into a select group of businesses, businesses so healthy they can afford these insane amounts of money, simply for access to your eyeballs for 60 or 120 seconds. And of course, it’s not just the media buy that is expensive, the quality of the creative is also premium. This is the venue that the big idea is launched in, the really memorable, high production value, expensive to make advert. Superbowl advertisers are widely acknowledged to pull out all the stops.

As long as a premium environment is accessed the impression of cost, of conspicuous waste, is created, regardless of what was actually paid behind the scenes. This is why media agencies still seek to negotiate cost savings for these deals. We don’t refund the cost saving to the advertiser, we buy more TV spots instead. It would be insane to do anything else.

What I am rather clumsily building towards is this: in brand advertising the excessive value of the media buy, Sutherland’s handicap, is signalled through the environment the ad is placed in (and the quality of the creative execution) and the dominance the ad has in that place. Brands are not built in the gutter.

One last thing before I get back to my main theme. There is a place for both brand and DR advertising; both, when used well, are effective business models. There is another spectrum at play here, and it can be quite legitimate for any particular business to sit at any position along it depending on their unique attributes and objectives. I am not arguing for brand and against DR. I am suggesting that the evolution of the digital channel as an advertising medium has massively favoured DR to the significant detriment of brand, and that this needs to be rectified. For the good of all.

Part of the adjustment required is a scaling back of some of the evolving trends, this includes some of what ad tech is doing, and some of the adjustment is about building new support for the revenue models of high value premium advertising environments, which also, partly, has something to do with what ad tech is doing.

So, with all that said I can finally link back to the main purpose of this essay.

The digital medium, as it is today, has a number of fundamental weaknesses when it is used for signalling via premium editorial environments. We need to start changing that or we will lose these premium editorial environments. Here are 3 weaknesses to start us off.

  • The transition from print to digital has not been kind to publishers. The business models are still being developed and the consumer behaviours are still being tested and stretched. Ad tech, in particular, has offered a helping hand to reduce operating costs for inventory management and remnant inventory sales while increasing the scope of hyper targeting. It is also driving prices into the ground. None of that helps brand revenues for premium environment publishers.
  • Page dominance is also under fire. No-one gets annoyed by a full page ad in a print vehicle. GQ, Vogue and all the upmarket style magazines run a bundle of full page ads right up front in the most important positions, before you get anywhere near the actual content you bought. In digital channels however, popups, full page takeovers, auto run video and other types of ad unit that gain dominance by intrusion irritate the user and make them angry. This is not good for brand advertisers, who, even though they do want to provoke a strong emotional reaction, rarely wish to be associated with anger, and certainly not anger directed at them. Meanwhile regular ad units vie for our attention competing against editorial and are often placed to garner maximum impression volumes instead of maximum dwell time. In a cluttered environment dwell time is one of the few positive compensating factors but doesn’t currently feature as a planning metric.
  • There is no ad break. Commercial TV has trained us that every 15 or 20 minutes our consumption will be interrupted and the screen will be 100% handed over to the advertisers. For sure, many of us stand up and make a cup of tea and all kinds of other ad avoidances, but enough people just sit there, or forget to fast forward or simply don’t mind. We are aware that the ads are part of the price we pay for the content and we accept that. The web, however, does not have a similar mechanic and the consumer offers the publishers no such largesse even though the same mechanic (ads = free/cheaper content) is widely understood.

This is not a good place to find yourself in if you want to use advertising to help build brands over the next 10 or 20 or 30 years. And let’s be clear here, that is exactly the time frame you should be considering if you are trying to build a brand. You can certainly get further along the path today faster than you could 20 years ago, but that is no reason not to plan for the same longer term time frames, brands are by definition long term economic plays.

Let’s focus on the most theoretical part of the argument. It’s a broad brush concept, but I think it is fundamentally the key to the whole show.

Because advertising is unashamedly an economic activity that, therefore, responds to incentives and is framed by supply and demand, we should start by looking at some of the economic frames and ask what we can do to change them.

The obvious issue, as highlighted in the last essay, is the uneasy relationship web publishers have with their quasi-infinite stock of inventory. It is economically easy to introduce new advertising stock in digital channels in ways that cannot be replicated in other media. This, combined with what ad tech can do for the selling and trading of this new inventory, causes great problems for the premium publisher in particular.

Premium content by definition needs to be relatively scarce. The more there is, relative to demand, the faster the slide to commoditisation, the horrors of ever declining prices and the loss of signalling value. High value editorial, which in turn becomes the location of high value advertising environments must be driven at least partially by scarcity. The Superbowl is the ultimate demonstration of this idea.

Premium publishers must, therefore, somehow re-assert the scarcity of their premium editorial product and as a result re-assert the high pricing that premium editorial content should garner. This is a lot easier to say than to do as it will need the combined and aligned efforts of the other 2 parts of the system, the agencies and advertisers, to succeed.

Ad tech, as currently structured, doesn’t help either.

While we have publishers using ad tech to sell cheap inventory for tiny CPMs, we must recognise that this inventory almost always has to be served with cheaper, less valuable editorial product. If ad tech is seen as the future of publishing revenue models, which is the case at the moment, then premium editorial in particular will be threatened even more. Because good editorial costs a lot more than weak editorial, ad tech, which drives down average CPMs, cannot help us with editorial quality.

Even more troubling is the practice of using ad tech to target premium audiences (because we know they have visited a premium site, or via other demographic or behavioural markers) within cheaper, trashier, low value ad environments. This is the retargeting strategy, currently so beloved of the agency digital planner. In the face of the need to restore some kind of peak revenue levels this device is especially pernicious: it doesn’t simply absorb ad dollars in an adversarial fashion, it actively converts premium audiences into commoditised and ineffective cheap buys, delivering all the revenue to media businesses that did not even create the original premium content in the first place. It hurts the future of digital brand advertising harder than any other single ad tech ‘innovation’ currently out there.

So, for their part, agencies must stop selling cheap CPMs via ad tech as the prime market innovation. This in turn would stop incentivising the creation of vast acres of cheap, dreck filled, click fest media properties with so called ‘attractive’ pricing.

For their part advertisers must rediscover the idea of brand signalling and realise that it can’t be achieved in 50 cents a thousand shitholes surrounded by 24 pictures of cats called Burt.

So to clarify, in order to reassert effective scarcity into the premium editorial market and resurrect premium CPMs, publishers need to restrict their premium inventory, agencies need to stop pushing ad tech as the answer to every digital advertising challenge and the brand advertisers themselves need to rediscover the theory of brand signalling to give a structure and purpose to these changes.

But how does this happen?  

For all 3 sectors the easiest tool, perhaps the only tool that synchronises across all 3, that can mechanically drive these changes, is to recast the metrics we use. Metrics define how we plan and what we buy. This seemingly starts with the publishers and agencies (sellers and buyers), but can only start if these changes are accepted/driven by the advertisers. Because, ultimately, the advertisers are the only source of revenue in this ecosystem, they hold the keys to change. As is ever the case money talks.

Who moves first?

It should be the advertisers (and might still be) but there is a difficulty in targeting the advertisers as the agents of change. Advertisers are intensely adversarial by definition, much more so than agencies and publishers (who at least have industry wide representative bodies and conflict management processes), and hence they are the most disaggregated and least aligned of the 3.

Agencies? Theoretically the ownership of best practice in advertising sits with the agencies, which should put them in the prime position to steward the industry through these challenging times. Rather sadly, this is a charge they have resolutely relinquished. Agencies are the most passionate cheerleaders of ad tech and programmatic advertising because they either truly believe in it or (if I am feeling charitable) are making a series of weak moves to protect their market from technology players by adopting a re-seller status. Both are short term positions, and neither enables a market leadership stance of any kind, let alone one that can re-position the academic theories that drive the operational practice of brand advertising.

Publishers? They mostly need someone to take the foot off their necks. It’s hard to imagine they will be able to lead these changes, which they could only do by adopting a tough trading position. That needs deeper pockets than they have.

The fate of all 3 stakeholders here is firmly intertwined so we need to look at the responsibilities of these 3 parties in more detail, one by one.

Publishers

These guys are having a hard time; they are at the thinnest end of the thinnest wedge. Broadly speaking, unless their model is built around surfacing commercial intention from their readers’ behaviour (à la Google), it is difficult to see what the market has done for them in 10 years except squeeze, and squeeze ever harder.

The broad idea, restoring the scarcity of premium content and hence premium advertising environments, starts with the publishers as they alone control this stuff. Of course the decisions they make are heavily influenced by market dynamics and the needs/wants of their customers, but at the end of the day those decisions are still theirs and theirs alone.

Publishers aren’t in this situation because they are stupid or lazy. They face some hard and brave decisions that will only pay off in the medium to long term, and that will also see some of them (maybe even a lot of them) fail. That is the reality: less inventory, less publication. The painful truth that we simply can’t get away from is that restoring scarcity means less published content, not more. That trend might be reversible once healthier revenue dynamics and models are restored, but for the moment at least, less = more. It should be noted that historically, and within channels, the amount of money spent on building brands tends to dwarf that spent on direct response.

From the perspective of the publishers we need to change the supply dynamics of valuable human attention. Instead of selling clicks they need to sell a consumer’s attention. The metrics that we use to communicate the values of a media buy, therefore, need to reflect actual consumer attention, not clicks. This is not a new idea.

Time magazine recently published an essay, “What you think you know about the web is wrong”, which surfaced some of the bigger misconceptions that we, as a whole industry, have held in regard to digital ad measurement and performance data.

The killer point is in the first paragraph.

We confuse what people have clicked on for what they’ve read

The article is written by Tony Haile, the CEO of Chartbeat, a web analytics company, and he shares learnings from analysis of user behaviour data covering over 2,000 websites. He tackles 4 myths by showing what the data actually tells us, which is often quite contrary to the concepts we have been using to plan advertising campaigns. I’m going to focus here on just 2 of these myths.

Myth 1: We read what we’ve clicked on

…All the topics above got roughly the same amount of traffic, but the best performers captured approximately 5 times the attention of the worst performers. Editors might say that as long as those topics are generating clicks, they are doing their job, but that’s if the only value we see in content is the traffic, any traffic, that lands on that page.

The focus on clicking is our collective original sin. From this original formulation almost all the other problems were born. Once you start to think about this there is a chance that you will wonder why we need data to make the point, it stands on an intuitive basis also.

We have all, on many an occasion, arrived at a webpage and immediately left it again, or moved on somewhere else after a quick scan of the headline and perhaps a paragraph. This is not uncommon behaviour. The old habit of spending an hour with the Sunday newspaper has never been replicated online. We may spend an hour online reading, but rarely on just one site.

We scan much more than we consume.

The web makes the ability to move on so incredibly easy. And if the main copy doesn’t contain links to redirect us (more clicks, more impressions, more CPM, less attention) the chances are that the sidebar(s) will, as will the footer and any ads that are running. All in all it’s an environment built to foster additional click-throughs at the expense of actual quality consumption measured in time.

Webpages are designed like this directly and precisely because the success of an advertiser campaign is measured in clicks. Publishers want advertisers to succeed, so that they in turn buy more ads.

What should a successful brand ad want?

For a brand campaign the value should be twofold. For sure some people will click on an ad and as a result visit the brand homepage and experience a deeper, more complete brand message; it would be peculiar to turn click-through functions off just because it’s a brand ad. This is good but generally asks more of people than they commonly wish to give. It asks someone to stop what they are doing, reading a great little article, and submerge themselves in our brand ‘experience’.

We might kid ourselves that consumers are excited to consume our cleverly designed brand experiences but in the main they just aren’t. Common sense suggests that if they were queuing up for such things then the whole ad industry simply would not exist. We have to buy our ad space, people don’t volunteer to watch ads. The entire media agency product is based on making people watch/read something they don’t really want to watch or read at all. As Doc Searls says, “there is no demand for messaging”.

Of course you can tempt people in with competitions or other giveaways for sure, but the role of the advertising in such a scenario is DR, the branding value, if it comes at all, comes from the post advertising experience, the homepage.

The key value for a brand advertising campaign, therefore, should be the quality of the signalling, not navigating users to a brand experience hosted elsewhere. Driving awareness of a brand, aligning that brand with various positive attributes and gaining mind space through quality environment association is where the long term brand building comes from. So whereas you will not be unhappy if your brand campaign does generate clicks, that should not be the goal of the campaign. The goal must be to drive brand awareness, association and purchase consideration through the placement itself.

Such a goal needs to be reinforced by the consumer experience delivered by the publishers.

Brand advertisers therefore should be looking to change the definition of campaign success and to move the discussion away from clicks. While success is defined by clicks we cannot expect publishing environments to deliver optimal experiences for any other factor. DR tactics optimise to drive clicks; brand tactics should not need to.

Because of this focus on clicks we tend to believe myth 4 also.

Myth 4: Banner ads don’t work

…Here’s the skinny, 66% of attention on a normal media page is spent below the fold. That leaderboard at the top of the page? People scroll right past that and spend their time where the content not the cruft is. Yet most agency media planners will still demand that their ads run in the places where people aren’t and will ignore the places where they are.

Now we are really getting somewhere.

Below the fold

To drive association between a brand and a media environment the key factor is consumer time spent in the locations where that association is being made, maybe over one exposure, more likely over several. The long-standing brand metric that helps understand this idea of association is effective frequency (frequency, of course, is not only about media association; it is also about messaging legibility when consumption is often peripheral). Effective frequency is the minimum number of times someone should see your ad for them to absorb from it what you want them to absorb.

It is not too much of a jump to transfer the ethos behind effective frequency to actual time spent on a page. A brand ad has a much greater chance of being consumed by either your conscious or subconscious brain if it spends more time in front of your eyeballs.

This was always understood in old school print media planning even if we couldn’t measure it. The front sections of magazines were, and still are, the prime positions not only because of the signalling value but also because ads have a much better chance of being seen in those positions. Similarly facing editorial was a concern because good editorial drove dwell time on the page, which in turn also increased the ad’s consumption value.

In this regard the only difference between print and digital is that in digital we can measure that attention metric, we can be more informed about where we can capture that valuable attention.

But we don’t. Yet. Although we will eventually. And at that point we will have turned an incredibly important corner. It is an easy action point: let’s plan our brand campaigns against a metric that is already available and that allows us to give value to brand advertising instead of DR. Time spent.
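As a sketch of what planning against that metric might look like, here is a toy comparison of two placements, one that wins on clicks and one that wins on dwell time. Every inventory number below is invented purely for illustration; only the metric itself (cost per hour of attention) is the point.

```python
# Compare placements by cost per click vs cost per attention-hour.
# All figures are hypothetical illustrations, not real inventory data.

placements = [
    # (name, impressions, clicks, avg dwell seconds per impression, cost £)
    ("clickbait list page", 1_000_000, 20_000, 8, 2_000),
    ("vertical long-read", 200_000, 1_000, 95, 2_000),
]

for name, imps, clicks, dwell_sec, cost in placements:
    cpc = cost / clicks                       # the DR planner's metric
    attention_hours = imps * dwell_sec / 3600  # total human attention bought
    cost_per_hour = cost / attention_hours     # the brand planner's metric
    print(f"{name}: CPC={cpc:.2f}, cost per attention-hour={cost_per_hour:.2f}")
```

On clicks the clickbait page looks 20 times cheaper; on attention the long-read delivers more than twice the value per pound, which is exactly the inversion the essay is arguing for.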

Why would a media planner buy above the fold when it’s obvious that most attention will be below the fold? It’s because even though more attention is below the fold, 100% of the eyeballs arriving on a page see the content that is above the fold, but not all of them go below. So the argument is that placements below the fold are seen by fewer people, which is true. This makes sense for a pure numbers game like DR, measured and bought as cost efficient impressions and clicks, but not for brand advertising that should be measured against ad and brand recall and planned against reach and frequency/attention.
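The trade-off can be put into rough numbers. The 66% below-the-fold attention share is the figure cited from the Time article; the visitor count, scroll rate and average visit time are invented assumptions for the sketch.

```python
# Reach vs attention for above- and below-the-fold placements.
# Only the 66% attention split comes from the article; the rest is assumed.

visitors = 1000                       # people landing on the page
scroll_rate = 0.75                    # assumed share who scroll below the fold
attention_below = 0.66                # share of page attention spent below the fold
total_attention_sec = visitors * 30   # assume 30 seconds average per visit

above = {
    "reach": visitors,                # everyone sees above the fold
    "attention_sec": total_attention_sec * (1 - attention_below),
}
below = {
    "reach": int(visitors * scroll_rate),  # only the scrollers see it
    "attention_sec": total_attention_sec * attention_below,
}

for name, p in (("above the fold", above), ("below the fold", below)):
    per_viewer = p["attention_sec"] / p["reach"]
    print(f"{name}: reach={p['reach']}, attention per viewer={per_viewer:.1f}s")
```

Under these assumptions the above-the-fold slot wins on reach, but each person who does see the below-the-fold slot gives it well over twice the attention, which is the quantity a brand buy should care about.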

You do need to read the whole Time article by the way, it is very good. The problems highlighted, and solutions advocated are all good for brand marketing, but not so good for DR which, after all, wants that click above anything else.

Bundling and verticals 

Time spent also offers support for publisher revenues in tight verticals. What was once seen as a horrible new reality for newspapers, transferring an existing print franchise to the digital domain (and it was), can now be the strength of a savvy publisher. Unbundling.

The commercial idea of a content bundle, and the Sunday papers are the most obvious example of this, is to build environments that are suitable for vertical advertising sectors. The reason that the Sunday papers are so large is because of the extra free time (dwell time) that we have on the weekend which enables the habit (now being lost) of reading the Sunday papers as a specific leisurely pursuit. Hence the motoring section was a great place for car manufacturers to advertise, the travel section a great place for tourism and holiday destinations and travel agents, the garden section for gardening products and the financial section for pensions, life assurance and savings/ investment products.

It was a very effective revenue model for the news industry. But it was utterly destroyed by the internet because the internet is an unbundling machine. If you like the Times news reporting, but prefer the travel section of the Guardian, then you can simply consume only those sections of the relevant titles. More likely, however, you are probably going to get your motoring fix from a digital property that has nothing to do with either the Times or the Guardian. The chances are that you will find a site dedicated to motoring which fulfils your needs.

This is bad news for the news industry.

Or at least it is if they don’t start to compete in those vertical spaces. With brands separate from their existing news brands.

What is the first thing that would happen? Traffic would drop. There is no doubt that the current structure of branded bundles is used to drive traffic through different sections of the site. The masthead at the top of a news site’s front page contains links to all the subsections for a good reason. This traffic loss is bad for a click revenue model.

What is the second thing that would happen? Quality. The content in a standalone vertical has to be good. If you are writing for people with either a strong topic affinity or a current topic need, then the writing needs to be good. If it isn’t then it will fail. Which, given that I’m arguing we need to trim the overall volume of premium content to restore some level of scarcity, is not such a bad thing.

To build a regular audience for, say, a motoring vertical would require capturing respected, popular writers and a reputation for being a solid home of quality writing, the second factor often being the tool that attracts the better writers. It would also, over time, need to become a brand in its own right.

The third thing to happen? Engaged readers spend lots of time reading the content. Real, valuable consumer attention. And even though audiences might be smaller, if we can shift the metrics from clicks to attention then we can still look to maintain, or possibly even increase, ad revenues.

So in summary there are 2 broad areas for development that sit with publishers.

Firstly, collect and disseminate attention metrics. Today. There is no need to wait for the agencies and advertisers to ask for these data points; make them available now and drive the debate.

Secondly, build out editorial strategies that thrive in an attention model. I’ve talked, very briefly, about only one idea, the incentives that could accompany a move into a range of verticals, but I’m sure there must be other ways to design for a revenue model driven by attention metrics.

Agencies

Quietly the agency world, the media agency world at least, is preparing for its exit, although they would never admit such a thing (maybe they would call it a pivot). The current media agency business model is under fire, holed below the water line and has no chance of recovery. It is likely to be a long exit, there are revenues to be made for some time yet, but if anyone thinks this business has a 20+ year future in its current guise then they aren’t thinking things through properly.

The writing was on the wall the moment Google explained that they wouldn’t be paying agency commission for paid search, but we hurt ourselves a little while before that when media became an independent service separated out from our creative agency cousins. The move to independence laid bare the business model, and sure enough over the years we have cannibalised the 15% commission model, buying clients with promises of commission rebates. What was once 15% is now often only 1 or 2% and sometimes even less than that. Money talks and we’ve been giving it away.

Until quite recently I had been more confused by the strategic plays being made by media agencies than by any other business in advertising. They are also the businesses closest to my heart and where I have spent the vast majority of my career. I have heard digital planners and evangelists in agencies talk with delighted and excited passion about marketing automation for the best part of 10 years. From the perspective of a business model that provides outsourced media market participation already (what else is a media agency?) I always thought it was a ludicrous place to go. Why would the agency world want to package up everything they do and put it in a black box, to literally outsource, via technology, what they are already paid to do on an outsourced basis?

The first stirrings of marketing automation and media management black boxing was driven by a weak strategic vision for digital which left the nascent frontier in the hands of clever people with little experience, or people with much experience and little knowledge (of digital).

10 years ago neither of these constituencies had seen what was coming. The operational end, the young clever people who actually ran the campaigns, could see how automation could help them and how it could streamline process and add insight. They weren’t thinking about the next 10 years; they were thinking about next week. Which was what they had been hired to do.

The people who were supposed to think about the next 10 years, the experienced strategic managers, agency heads, CEOs and FDs didn’t have enough instinctive knowledge of the landscape, which to be fair was being built in real time around them, to see what was coming either.

So, we had a revolution built, innocently enough, from within. When the real leadership did wake up to the realities of a technology driven advertising future they made the only decision they could. Acknowledging the destruction of the commission revenue model the media agency sector has been forced to accept the old playground adage…If you can’t beat them, join them.

The media agency world is falling over itself to become an ad technology sector.

As individuals they are lining up to be employed by Google.

As businesses they are either moving into the territory of technology re-sellers, or they are (depending on how big they are) buying into the technology market directly either through development or acquisition. That’s a cold rational assessment of their futures, but it is cohesive.

The agency position, as it stands, is extremely problematic because we can’t develop a sane basis for digital brand advertising alongside a future based on the current view of ad technology, dominated by DR/performance and hence clicks, when in fact the brand model almost certainly needs to dominate the DR model.

It is chiefly problematic because the agency model is likely to remain in a precarious position even once they move over to a tech footing. This rather means that they aren’t looking to develop a long term rational advertising model, which they should be. They should be wowing advertisers with wise and intelligent stewardship of the academic needs of advertising, and proving the need to pay them for their thinking and not just their buying. Instead they are just trying to grab the nearest life raft. They have little choice.

The concern is that even if we, rather charitably, dismiss the role played by agencies in the rise of programmatic ad tech they face an even trickier challenge laid out in the structural development of this ad buying paradigm.

Programmatic, like Google’s search ads, is a 1 to 1 auction. Each individual buy is one consumer on a website being bought by one advertiser. For a business model built entirely on the promise of scale (“we represent 30% of the market, therefore we can buy media cheaper than anyone else”) a 1 to 1 purchase, which by definition pays no respect to scale whatsoever, is an incontrovertible harbinger of doom. No scale, no business model.
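A minimal sketch makes the point concrete: a per-impression auction only ever sees individual bids, so the size of the buyer’s overall book is simply not an input to the price. The buyer names and bid values here are invented, and the clearing rule is a simplified second-price style mechanic, not any particular exchange’s exact logic.

```python
# One impression, one auction: the winner's total buying volume plays no part.
# Buyers and bids are hypothetical; clearing is simplified second-price.

def run_auction(bids):
    """bids: {buyer_name: bid CPM}. Returns (winner, clearing_price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    # winner pays the runner-up's bid (or their own if unopposed)
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# A giant agency trading desk and a tiny in-house team bid on one impression.
bids = {"mega-agency desk": 4.10, "small in-house team": 4.50}
winner, price = run_auction(bids)
print(winner, price)  # the small buyer wins; scale confers no discount
```

Nothing in `run_auction` could even express "this buyer represents 30% of the market", which is the structural problem for a scale-based business model.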

The guys at the top understand this. They are no longer sleeping on shift. There are 2 things happening as a result.

Firstly they are spreading fear, uncertainty and doubt (FUD) liberally through the marketing/advertising ecosystem about how difficult it is to run programmatic (ad tech) campaigns, the only sensible takeaway (from their perspective) being that advertisers can’t possibly run these new operational models without the wise and knowledgeable help of agencies. They have seen their ultimate disintermediation on the horizon and are buying time.

Secondly they are looking to reassert scale in the market by becoming the principal on media buying contracts via PMPs, private marketplaces, that are accessed using the programmatic tech stack.

PMPs have been heralded as the solution for brand advertising by the ad tech evangelists, which they can be, if they understand the issues I have discussed in this essay and trade in the metrics I have suggested instead of clicks.

The logic is that these artificial closed gardens within the programmatic universe (each PMP belongs to a publisher or a publishing group and can be accessed via obtaining the correct digital token) deliver a price protection mechanic via a pricing floor, and can also deliver a more effective approach to fraud management through much tighter inventory controls and visibility. Much of the discussion has focused on how publishers can more comfortably release their prime inventory to programmatic buying processes through PMPs without destroying revenue.

So, on the surface I draw some hope from the emergence of PMPs, although it should be clear that such optimism depends on a correct choice of which metrics they trade in.

There is a problem, however, that must be overcome. PMPs offer the last chance for agencies to chase off the disintermediation horror story that they face, but by doing so they may introduce something even worse.

Agencies as principals – no thanks

As mentioned, media agencies have begun to buy up PMP inventory as the principal to the contract, instead of buying the same inventory on behalf of their advertiser clients, who would historically be the principal. Once they have purchased these impressions they are adding a mark-up and then reselling them to advertisers, as well as charging for their time (via the agency team first, then through the agency group trading desk fees – a classic double dip. Actually a triple dip if you add in the mark-up on the resold inventory).
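To see how those three dips could stack up on a single resold impression, here is a back-of-envelope illustration. Every rate below is a hypothetical assumption for the sketch, not a quoted market figure.

```python
# How layered agency fees could compound on a resold PMP impression.
# All rates are invented for illustration, not real market figures.

publisher_cpm = 10.00     # price the agency pays the publisher as principal

resale_markup = 0.20      # mark-up on the resold inventory (dip 1)
agency_fee = 0.05         # agency team fee on billings (dip 2)
trading_desk_fee = 0.15   # group trading desk fee (dip 3)

resold_cpm = publisher_cpm * (1 + resale_markup)
client_cpm = resold_cpm * (1 + agency_fee) * (1 + trading_desk_fee)

print(f"publisher receives: {publisher_cpm:.2f} CPM")
print(f"advertiser pays:    {client_cpm:.2f} CPM")
print(f"intermediary take:  {(client_cpm - publisher_cpm) / client_cpm:.0%}")
```

Even with these modest assumed rates, roughly a third of the advertiser’s spend never reaches the publisher, which is the nub of the conflict of interest described above.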

Not so many years ago such a strategy involved a very significant risk. If the inventory could not be resold the loss could be too heavy to bear. Now that same unsold inventory can just be dumped back out onto the networks, via the programmatic stack, and at worst deliver a small loss or break even.

The big shiny goal here is the already mentioned reassertion of scale. So, whereas once an agency would offer you advantageous pricing through their buying scale being brought to bear in negotiation with the media owners, they may now seek to offer you advantageous access to inventory, instead of or potentially alongside price, by being the biggest purchaser of inventory via PMPs. Scale is important under this new paradigm as a wider base of clients offers much greater opportunity to resell, which in turn offers both a greater inventory holding and a more effective risk mitigation position.

It’s a fundamentally different proposition, half opportunity and half threat, one that pitches them into both a client/service position with their customers and an adversarial/negotiating position with those same customers.

In other words a desperate play.

We are getting close to the business end of the disintermediation challenge now, and as a result a possible way out for some of the problems that this whole journey has created.

Of the 3 players in the ecosystem (advertisers, agencies and publishers) the intermediaries are clearly the agencies, and as harsh as it may be these will be the businesses that eventually face the fate of modern digital/internet inspired disintermediation. The technology is already here that spells out the operational and intellectual future of advertising media planning and buying, and there is no role in it for modern media agencies.

So, it all comes down to the advertisers themselves

If you haven’t twigged where this is going, we are now very close to the end. And, I apologise, if you have stayed with me all this way, this will feel slightly anticlimactic.

The role of the advertisers in this story is to take the programmatic ad technology stack in house and thereby remove the agency from the picture. By placing the operational structure in the hands of the advertisers we remove one of the economic players from the conundrum that seeks to perpetuate this destructive cycle. By removing that player we can more easily re-balance the whole ecosystem.

It is tempting to seek a future that does not involve the programmatic buying of media space. But as much as the liberal minded privacy respecting individual in me wishes it so, I simply cannot see the media industry moving back to a human heavy operational model, much less so when we pass peak TV (when a sufficient percentage of TV impressions are delivered over internet protocols as to make the programmatic buying of TV, or more accurately audio visual ads, an inevitability that encompasses all AV ads, over the air or IP).

If we accept that programmatic is with us to stay, in some form or another, then we need to seek a way to make it work properly. Of all the players in the advertising ecosystem only the advertisers themselves lose if we blindly destroy the ability to use advertising to build brands.

If we don’t manage to change the current model then agencies still get to buy media, whether it is DR or brand media, and publishers still get to run advertising whether it is DR or brand advertising. They can still operate effective revenue models on either basis.

The players at risk are the advertisers, the businesses that depend on trust and value and long term positive connection between their users and their products, the businesses that need a brand, particularly those in adversarial sectors where brand saliency is key. Advertisers don’t succeed because they advertise, they succeed because they sell a huge diverse range of products at a profit and they use a wide range of business tools to do that. Most advertisers (although not all of them) are best served by having brand advertising in the tool box even if they don’t always use it.

The only option here is for the advertisers to grab control of their own destiny, to just go ahead and build what they need without worrying about the agencies. It may not be clear to a sector that is built on the binary of agency and advertiser but right now media agencies’ needs are not aligned with the advertisers’ needs.

We aren’t going to get a sensible resolution to this mess by examining it from the side-lines and academically “resolving” the conflicts. There are too many powerful entrenched interests for that to work. Somewhere along the line advertisers themselves are going to have to take control of the tech and the operational model, and by doing so create a business opportunity for the publishers that builds an incentive for them to properly organise their publications to better serve brand advertising.

Here’s how I hope it plays out.

There are a number of advertisers that have already brought their ad tech in house, and many of them are doing well. A lot of that value is being driven by clever 1st party data work, not by control of brand advertising destiny, but in this 1st instance I just hope to see the business case for in house programmatic being verified. To be clear, there is always going to be a business role for DR, data led advertising via the programmatic stack; I am chiefly trying to find a way to engender a significant rebalance between brand and DR, as the internet is currently not fit for purpose as a brand advertising medium.

If it does come to pass that the business case for in house can be verified then I hope to see a tipping point which is the precursor to a wholesale operational change with digital ad buying moving client side and the 1st phase of disintermediation finding a foothold.

Following from this fairly fundamental change in the ecosystem we will need to bring pressure to bear on the publisher community to prepare their sites to drive optimisation of the metrics that demonstrate real valuable human attention. Not shares, or likes, or tweets or other metrics that enable contrary interpretation, but simple straightforward unequivocal data such as dwell time and demographics. This can be achieved by building (re-building) the business case for brand advertising in digital media, and also as a natural externality of that business case, the withholding of budget from recalcitrant publishers.

Once these data points are commonly made available by publishers then we can use the programmatic stack to execute campaigns against these brand metrics, which by definition do not need to be intrusive or personal or disrespectful of social norms and which do not need to be derived from vast detailed profiling tools. The relative price points of these campaigns will rise, but so will the scarcity of genuine premium quality environments and therefore so will the signalling value that they represent. Sorted.

Well, maybe not sorted. Not quite as easy as that. But some food for thought hopefully.


Mixed bag – 4 links

This is just a really pleasant read. I was aware of the Burning Man festival when it was still a fairly unknown event in the freak calendar. I was young enough, although not terribly young to be honest, to imagine I would go. Not this year, of course, next year, next year would be the year I’d get it together and get out there. The rather ironic desire to get organised enough to get out to Burning Man was not something that hit me with the same laconic amusement that it does right now.

This rather wonderful recounting of a trip to Burning Man tells the tale of Wells Tower, his 69-year-old father, and 2 of their friends, and what they encountered when they got there. Tower senior, an economics professor with a voting record that included support for George W Bush, and his son are not the kind of people you might expect to see at such a hippie mecca, but that combined with the simple fact of the enthusiastic immersion of Tower senior, and the nervous, sceptical but sincere engagement of Tower junior makes this a true delight.

It also reassures me that I’ve still got plenty of time to book my own Black Rock experience. Next year. I’ll be organised enough next year.

Burning Man

This is a short interview with Michael Wolff taking a small, but forensic and caustic, sledgehammer to the state of play for digital media in 2014. He doesn’t go into depth here but this is a quick read that should give you pause for thought when you consider the challenges that lie ahead for the health of digital publishing. I think we can turn the corner, that we can find a way to retain some quality in the landscape (quality publishing environments…as Wolff points out, we aren’t suffering for quality of information) but Wolff holds a bleaker vision than I do, even though we overlap on a great deal of what is happening.

Harsh words

Ian Bogost had been somewhat critical of the world, as he perceived it, of social games: the push for monetisation, the unthinking assumed ownership of a player’s time (he observes that deadlines driven by absolute time, not game time, force a user to play through the imposition of dread. Miss that harvest point…no sir, you don’t want to do that), among other things. Anyway, he was challenged, not unreasonably, to experience the developer’s side of the experience. And so, part game, part art and fully satirical, he created Cow Clicker, a game where you are allowed to click your cow every 6 hours. It was never a huge success by social game standards but it still pulled 50,000 users at its peak. This article gives us the detail.

Cow Clicker

Finally a long exploration of what happens when opiates hit a community and what subsequently happens when those opiates are taken away again through invasive and draconian law enforcement. This is the story of Subutex, and what happened when it took over the drug scene in Georgia (Europe not the US), and then when a deliberately aggressive government program more or less eradicated its usage.

There are no folksy tales here that either side of the drug debate can take solace from. Those that advocate for a loosening of drug regulation must surely be appalled by the widespread adoption of Subutex abuse prior to the legal crackdown, while those that seek zero tolerance regimes must acknowledge that a safer (but not safe) drug that really doesn’t kill many people (even compared to the likes of methadone) has now been replaced by the sheer horror of krokodil.

Instead of retiring his syringe, he injected krokodil, a homebrew so vile that I had to ask him twice to repeat the recipe. It is simple. First get codeine from a pharmacy. Then mix it with toilet-cleaner, red phosphorus (the strike-strips on matchbooks are a good source), and lighter fluid. Voilà, your krokodil is served.

You did read that right.

Subutex


Small phones, mobile computing, innovative disruption and speculation


Apps apps apps

I don’t like getting my content from apps. I use the mobile web instead if I have to, but the majority of my web access is on a laptop with a nice big screen, which means my prejudice doesn’t hurt me too much.

I know that that sets me apart from the majority of smartphone users, but as I said before, my primary web access point isn’t my phone so the limitations of the mobile web that have enabled app culture don’t really hurt me.

I also believe that it is a culture with a short shelf life.

It is a system with an unusual degree of friction in comparison to an easily imagined alternative. The whole native apps versus web apps argument that has been running for years now is largely decided, at any one point in time, by the state of play with regard to mobile technology and the user requirements of said technology. I’m pretty certain the technology will improve faster than our user requirements will complicate.

That easily imagined alternative is pretty straightforward too. A single platform, open and accessible to all, built on open protocols, that can support good enough access to hardware sensors, is served by effective search functionality and doesn’t force content to live within walled gardens.

The web, in other words. Not fit for purpose today (on mobile), but it will be soon enough.

Monetisation possibly remains a challenge but I’m also pretty sure we will eventually learn to pay for the web services that are worth paying for. Meaningful payment paradigms will arrive; the profitless start-up model/subsidised media (which is what almost all these apps are) can’t last. We will get over ourselves and move beyond the expectation of free.

Should everything be on the web? No, not at all. For me there is a fuzzy line between functional apps and content apps. The line is fuzzy because some content apps contain some really functional features, and vice versa. Similarly some apps provide function specifically when a network connection is unavailable (Pocket, for example). The line is drawn between content and function only because I strongly believe that the effective searchability of content, via search engines or via a surfing schema, and the portability of that content via hyperlinks is the heart of what the web does so well and generates so much of its value. If all our content gets hidden away in silos and walled gardens then it simply won’t be read as much as if it was on the open web. And that would be a big perverse shame.

Why can’t we just publish to the stream? Because it is a weak consumption architecture, it is widely unsearchable (although I suspect this can be changed, though we need to move beyond the infinite scroll real quick) and because you don’t own it (don’t build your house on someone else’s land). That’s another whole essay and not the focus of this one.

I’ve felt this way for a long time so I am happy to say that finally some important people are weighing in with concerns in the same area. They aren’t worried about the same specific things as me, but I can live with that if it puts a small dent in the acceleration of app culture.

Chris Dixon recently published this piece calling out the decline of the mobile web. I think he is overstating the issue actually, but only because I think the data doesn’t distinguish effectively between apps that I believe should be web based and those I am ambivalent about. He cites 2 key data sources.

[Chart: ComScore, mobile users vs desktop users, 2014]

The ComScore data for total desktop users vs total mobile users is solid enough, showing that mobile overtook desktop in the early part of this year. Interestingly the graphic also shows that desktop user volumes are still rising, even if their rate of growth lags behind mobile’s. Not many people are taking this on board in the rush to fall in love with the so-called post-PC era. It’s important because to really understand the second data point, provided by Flurry, you have to look at the very different reasons people are using their mobile computers.

Flurry tells us that the amount of time spent in apps, as a % of time on the phone/internet itself, rose from 80% in 2013 to 86% (so far) in 2014. That’s a lot of time spent in apps at the cost of time spent on the mobile web.

But the more detailed report shows that the headline number is less impressive than it at first seems. All that time in apps, it turns out, isn’t actually at the cost of time spent on the web, not all of it anyway.

[Chart: Flurry, time spent on mobile by app category]

Of the 86%, 32% is gaming, 9.5% is messaging, 8% is utilities and 4% productivity. That totals 53.5% of total mobile time, and 62% of all time in apps specifically. The fact that gaming now occurs within networked apps on mobile devices, rather than as standalone software on desktops, only tells us that the app universe has grown to include new functionality not historically associated with the desktop web. None of that time has been taken away from the mobile web, so in this respect the mobile web has not declined. As I said, I agree with the thrust of the post, I just think the situation is less acute.
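For the sceptical, that 62% figure is simple arithmetic on the Flurry numbers quoted above, sketched here in a few lines of Python (the category shares are percentages of total mobile time, and 86% of total mobile time is spent in apps):

```python
# Back-of-the-envelope check on the Flurry figures quoted in the text.
# These categories never lived on the desktop web, so time spent in them
# is not time "taken away" from the mobile web.
non_web_app_categories = {
    "gaming": 32.0,
    "messaging": 9.5,
    "utilities": 8.0,
    "productivity": 4.0,
}

time_in_apps = 86.0  # % of total mobile time spent in apps (Flurry, 2014)

non_web_share_of_total = sum(non_web_app_categories.values())
non_web_share_of_apps = non_web_share_of_total / time_in_apps * 100

print(f"{non_web_share_of_total}% of total mobile time")       # 53.5
print(f"{non_web_share_of_apps:.0f}% of time spent in apps")   # 62
```

Strip those categories out and the app time that genuinely competes with the mobile web is around 32.5% of mobile time, a long way short of the headline 86%.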

Chris Dixon’s main concern is for the health of his business, he’s a partner at Andreessen Horowitz (a leading tech VC), and he is seeing the rise of 2 great gatekeepers in Google and Apple, the owners of the 2 app stores that matter, as a problematic barrier to innovation, or rather a barrier to the success of innovation (there is clearly no shortage of innovation itself).

He’s not alone; Mark Cuban has made a similar set of observations.

Fred Wilson too, although he is heading in an altogether different and more exciting direction again, which I will follow up in a different post (hint: blockchain, distributed consensus and personal data sovereignty). Still this quote from Fred’s post sums up the situation rather well from the VC point of view.

My partner Brad hypothesized that it had something to do with the rise of native mobile apps as the dominant go to market strategy for large networks in the past four to five years. So Brian pulled out his iPhone and I pulled out my Android and we took a trip through the top 200 apps on our respective app stores. And there were mighty few venture backed businesses that were started in the past three years on those lists. It has gotten harder, not easier, to innovate on the Internet with the smartphone emerging as the platform of choice vs the desktop browser.

The app stores are indeed problems in this regard, I don’t disagree and will leave it to Chris and Mark’s posts to explain why. I am less concerned about this aspect partly because it’s not my game, I don’t make my money funding app building or building apps. But mostly I’m unconcerned because I really believe that (native) app culture has to be an adolescent phase (adolescent in as much as the internet itself is a youngster, not that the people buying and using apps are youngsters).

If I am right and this is a passing phase can we make some meaningful guesses as to what might change along the way?

I’ve been looking at mobile through the eyes of Clay Christensen’s innovator’s dilemma recently, which I find to be a useful filter for examining the human factors in technology disruptions.

Clearly if mobile is disrupting anything it is disrupting desktop computing, and true to form the mobile computing experience is deeply inferior to that available on the desktop or laptop. There are, actually, lots of technical inferiorities in mobile computing but the one that I want to focus on is the simplest one of all, size: the mobile screen is small.

Small small small small. I don’t think it gets enough attention, this smallness.

If it’s not clear where I’m coming from, small is an inferior experience in many ways, although it is, of course a prerequisite for the kind of mobility we associate with our phones.

Ever wondered why we still call what are actually small computers, phones? Why we continually, semantically place these extremely clever devices in the world of phones and not computers? After all the phone itself is really not very important, it’s a tiny fraction of the overall functionality. Smartphones? That’s the bridge phrasing, still tightly tied to the world of phones not computers. We know they are computers of course, it’s just been easier to drive their adoption through the marketing semantic of telephony rather than computing. It also becomes easier to dismiss their limitations.

Mobile computers actually predate smartphones by decades; they were, and still are, called laptops.

Anyway, I digress. For now let’s run with the problems caused by small computers. There are 2 big issues that manifest in a few different ways: screen size and the user interface.

Let’s start with screen size (which of course is actually one of the main constraints that hampers the user interface). Screen size is really important for watching TV, or YouTube, or Netflix etc.

We all have a friend, or 2, that gets very proud of the size of their TV. The development trend is always towards bigger and bigger TV screens. The experience of watching sport, or a movie, on a big TV is clearly a lot better than on a small one (and cinema is bigger again). I don’t need to go on, it’s a straightforward observation.

No-one thinks that the mobile viewing experience is fantastic. We think it’s fantastic that we can watch a movie on our phones (I once watched the first half of Wales v Argentina on a train as I was not able to be at home, I was stoked), but that is very different from thinking it’s a great viewing experience. The tech is impressive, but then so were those tiny 6 inch by 6 inch black and white portable TVs that started to appear in the ’80s.

Eventually all TV (all audio visual) content will be delivered over IP protocols. There are already lots of preliminary moves in this area (smart TVs, Roku, Chromecast, Apple TV) and not just from the incumbent TV and internet players. The console gaming industry is also extremely interested in owning TV eyeballs as they fight to remain relevant in the living room. Once we watch all our AV content via internet protocols the hardware in the front room is reduced to a computer and a screen, not a TV and not a gaming console. If the consoles don’t win the fight for the living room they lose the fight with the PC gamers.

Don’t believe me? Watch this highlights reel from the Xbox One launch… TV TV TV TV TV (and a little bit of Halo and COD)

The device that the family watches moving images on will eventually be nothing but a screen. A big dumb screen.

The Smart functionality, the computing functionality, that navigates the protocols to find and deliver the requisite content, well that will be provided by any number of computing devices. The screen will be completely agnostic, an unthinking receptor.

The computer could be your mobile device but it probably won’t be. As much as I have derided the phone part of our smartphones they are increasingly our main form of telephony. As much as it is an attractive thought we are unlikely to want to take our phones effectively offline every time we watch the telly. Even if at some stage mobile works out effective multi-tasking we will still likely keep our personal devices available for personal use.

What about advertising? That’s another area where bigger is just better. I’ve written about this before, all advertising channels trend to larger dominant experiences when they can. We charge more for the full page press ad, the homepage takeover, the enormous poster over the little high street navigational signs telling you that B&Q is around the corner. Advertising craves attention and uses a number of devices to grab/force it. Size is one of the most important.

Mobile is the smallest channel we have ever tried to push advertising into. It won’t end well.

The Flurry report tries to paint a strong picture for mobile advertising, but it’s not doing a very good job, the data just doesn’t help them out. They start with the (sound) idea that ad spend follows consumption, i.e. where we spend time as consumers of media is where advertisers buy media in roughly similar proportions. But the only part of the mobile data they present that follows this rule is Facebook, the only real success story in the field of mobile display advertising. The clear winner, as with all things digital, is Google, taking away the market quite disproportionately with their intention based ad product delivered via search.

[Chart: media consumption vs ad spend by channel]

There are clouds gathering in the Facebook mobile advertising story too. Brands have discovered that it is harder to get content into their followers’ newsfeeds. The Facebook solution? Buy access to your followers.

That’s a bait and switch. A lot of these brands are significant contributors to the whole Facebook story itself. Every single TV ad, TV program, media columnist or local store that spent money to devote some part of their communication real estate to the line “follow us on Facebook” helped to build the Facebook brand, helped to make it the sticky honeypot that it is today.

And now if you want to leverage those communities, you have to pay again. Ha.

Oh but there’s value in having a loyal active base of followers I hear the acolytes say. Facebook is just the wrapper. Ok, then in that case ask them all for an email address and be guaranteed that 100% of your communications will reach them. Think that will work? Computer says no.

Buying Facebook friends? That’s starting to whiff too.

That leaves Google/search. The only widely implemented current revenue model that has a future on the mobile web. Do you know what would make the Google model even more effective? A bigger screen, and a content universe on the open web.

To be fair the idea that we must pursue pay models is starting to take hold in the communities that matter. This is good news.

Read this piece. It’s as honest a view as you will find within the advertising community and it still doesn’t get that the mobile medium is fundamentally flawed simply because it is small. There is no get out of jail free card for mobile, unless it gets bigger.

Many of the businesses we met have a mobile core. A lot of them make apps providing a subscription service, some manage and analyze big data, and others are designed to improve our health care system. The hitch? None of them rely on advertising to generate revenue. A few years ago, before the rise of mobile, every new startup had a line item on its business plan that assumed some advertising revenue.

What did you think about the Facebook WhatsApp purchase? It can’t have been buying subs, it’s inconceivable that most of the 450m WhatsApp users weren’t already on Facebook. Similarly, even though there is an emerging markets theme to the WhatsApp universe, Facebook hasn’t been struggling to pick up adoption in new markets. My money is that Zuckerberg bought a very rare cadre of digital consumer: consumers willing to pay for a service that can be found in myriad other places for free. WhatsApp charges $1 a year, and a number of commenters have noted this and talked of Zuckerberg purchasing a revenue line. I think they have missed the wood for the trees. That $1 a year is good money when you have 450m users, for sure. It’s even better when you have nearly a billion and many of them are getting fed up with spammy newsfeeds. What will you do if you get an email that says “Facebook, no ads ever again, $5”? He doesn’t just get money, he also gets to concentrate on re-building the user experience around consumers, not advertisers.
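The back-of-the-envelope arithmetic behind that speculation is easy to sketch. To be clear, the $5 ad-free offer is my hypothetical, not anything Facebook has announced:

```python
# Rough revenue sketch using the speculative numbers from the text above.
whatsapp_users = 450_000_000
whatsapp_fee = 1                 # $ per user per year, WhatsApp's actual price

facebook_users = 1_000_000_000   # "nearly a billion"
ad_free_fee = 5                  # $ per user per year, the hypothetical offer

print(f"WhatsApp as-is: ${whatsapp_users * whatsapp_fee / 1e6:.0f}m a year")
print(f"Hypothetical ad-free Facebook: ${facebook_users * ad_free_fee / 1e9:.0f}bn a year")
```

Even at these crude numbers the subscription line is material, and that is before you count what an ad-free experience might do for retention.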

The WhatsApp purchase is the first time I have tipped my metaphorical hat in Zuckerberg’s direction. The rest is morally dubious crud that fell into his lap, this is a decently impressive strategic move. I still won’t be signing up anytime soon though.

Ok. Small. Bad for advertising. What else?

Small devices also have necessarily limited user interfaces. The qwerty keyboard for all of its random derivation works really well. There is no other method that I use that is as flexible and as speedy for transferring words and numbers into a computer.

The touchscreen is impressive though, the device that, for my money, transformed the market via the iPhone. My first thought when I saw one was that I would be able to actually access web pages and make that experience useful through the pinch and zoom functionality. I had a BlackBerry at the time with the little scroll wheel on the side. Ugh. But even that wonder passed, now that we optimise the mobile web to fit on the small screens as efficiently as possible. No need to pinch and zoom, just scroll up and down, rich consumption experience be damned.

Touchscreens are great, they just aren’t as flexible as a keyboard. As part of an interface suite alongside a keyboard they are brilliant. As an interim interface to make the mobile experience ‘good enough’, again brilliant. As my primary computing interface … no thanks.

When the iPad arrived the creative director at my agency, excited like a small child as he held his tablet for the first time, decided to test its ability as a primary computing interface. He got the IT department to set up a host of VPNs so that he could access his networked folders via the internet (no storage on the iPad) and then went on a tour of Eastern Europe without his MacBook.

He never did it again. He was honest about his experience, it simply didn’t work.

What about non text creativity? Well, if you can, have a look inside the creative department at an advertising agency. It will be wall to wall Apple. But they will also be the biggest Macs you have ever seen. Huge glorious screens, everywhere.

You can do creative things on a phone, or a tablet but it is not the optimised form factor. It seems silly to even have to type that out.

If small is bad, and if mobile plays out according to Christensen’s theories (and remember this is just speculation), then eventually mobile has to get bigger. Considering that we have already reduced the acceptable size of tablets in favour of enhanced mobility this seems like some claim.

For my money the mobile experience goes broadly in one of two ways.

In both scenarios it becomes the primary home of deeply personal functionality and storage: your phone, your email (which is almost entirely web accessible these days), your messenger tool, maps, documents, music, reminders, life logging tools (how many steps did you take today) as well as trivial (in comparison) entertainments such as the kind of games we play on the commute home.

The fork is derived from what happens with desktop computing. The computing of creation and the computing of deeper better consumption experiences.

We either end up in a world with such cheap access to computing power that everyone has access to both mobile computing and desktop computing, via myriad access points, and the division of labour between those two is derived purely on which platform best suits a task. Each household might have 15 different computers (and I’m not even thinking of wearables at this stage) for 15 different tasks. This is a bit like the world we have today except that the desktop access is much more available and cheap, and not always on a desk.

Alternatively, the mobile platform does mature to become our primary computing device, a part of almost everything we do. This is much more exciting. This is the world of dumb screens that I mentioned earlier. If mobile is to fulfil its promise and become the dominant internet access point then it must transcend its smallness. There are ways this could happen.

We already have prosthetics. Every clip on keyboard for a tablet is a prosthetic.

We already have plugins. Every docking station for a smartphone or a tablet is a plugin.

The quality of these will get better but they still destroy the idea of portability. If you have to haul around all the elements of a laptop in order to have effective access to computing you may as well put your laptop in your bag.

I can see (hold for the sci-fi wishlist) projection solutions, for both keyboards and screens, where the keyboard is projected onto a horizontal surface and the screen onto a vertical surface. Or what about augmented and virtual reality? Google Glass is a player, although the input interface is a dead duck for full scale computing, but how about an optimised VR headset that isn’t the size of a shoebox? Zuckerberg might have been thinking about the metaverse when he bought Oculus, but there will potentially be other interim uses before humanity decides to migrate to the virtual à la Ready Player One; VR could be a part of that.

The more plausible call is a plugin / cheap access model hybrid. Some computing will always be use-case specific. I think it’s likely that a household’s primary AV entertainment screen, for example, the big one in the living room, will have its own box, maybe controlled by a mobile interface but configured such that your mobile platform isn’t integral to media consumption. The wider world, though, I think will see dumb screens and intelligent mobile devices working together everywhere. Sit down, slot in your mobile: nice big screen, all your files, all your preferences, a proper keyboard (the dumb screen is largely static of course, so it comes with a keyboard) and a touchscreen too of course, all in all a nice computing experience.

There are worlds of development to get there of course. Storage, security and battery life among other areas will all need a pick up, but if we have learnt anything since 1974 it’s that computers do get better.

I really woke up to Christensen’s theory with the example of the transistor radio, which was deeply inferior to the big old valve table tops that dominated a family’s front room, often in pride of place. The transistor radio was so poor that the kids that used them needed to tightly align the receivers with the broadcast towers. If you were facing South when you needed to face North, you were only going to get static. But that experience was good enough. Not for mum and dad who still listened to the big radio at home, but for the teenagers this was the only way they could hear the songs they wanted to hear, without adults hovering. All of those limitations were transcended eventually and in the end everyone had their own radio. One in the kitchen, one in every teenager’s room, the car, the garage, portable and not portable. Radios eventually got to be everywhere, but none of them were built around valves like those big table top sets.

Mobile computing is a teenager’s first wholly owned computing experience (I’m not talking about owning the hardware). It happens when they want it to, wherever they are. It’s also been many an older person’s first wholly owned computing experience, and it’s certainly been the first truly mobile computing experience for anyone. I might have made the trivial observation that laptops were the first mobile computers, and they were, but you couldn’t pull one out of your pocket to get directions to the supermarket.

Christensen’s theory makes many observations. Among them are that the technology always improves and ends up replacing the incumbent that it originally disrupted. The users of the old technology eventually move to the new technology, the old never wins.

We aren’t going to lose desktop computing though. Or at least we aren’t going to lose the need states that are best served by desktop computing; if you want to write a 2000 word essay, your phone is always going to be a painful tool to choose. But you might be able to plug it into an easily available dumb workstation and write away in comfort instead.

At least let’s hope so.


Links Links Links

Out of all the main social media channels the only one which really left me confused was Snapchat, the service that deletes the message 10 seconds after it’s been read. I didn’t buy the argument that its usage was a response to governmental or parental surveillance. The former because I don’t think teenagers worry about the government, even if they should, and the latter because it’s obscure enough to still fly under most parental radars and I think teens know that. Danah Boyd, who has recently published “It’s Complicated”, about how teens use social media (which is brilliant and needs to be widely read), explains the phenomenon. It turns out that the attraction is one of scarcity in a world of persistence and abundance. The Snapchat message is cherished beyond others directly because of its ephemerality. In a world of message saturation, the one that disappears is consumed with the recipient’s full attention, even if that means waiting for a suitable moment.

Why Snapchat is valuable

 

Here’s a solid and thorough analysis that overturns a few of the ‘truths’ that are widely accepted by large portions of the publisher, agency and advertiser universes. If you work in any of those industries this should go straight into your reading list.

A widespread assumption is that the more content is liked or shared, the more engaging it must be, the more willing people are to devote their attention to it. However, the data doesn’t back that up. We looked at 10,000 socially-shared articles and found that there is no relationship whatsoever between the amount a piece of content is shared and the amount of attention an average reader will give that content.

What you think you know about the web is wrong

 

This is a fascinating read about Camden, New Jersey, an American city which has had an interesting recent history. Camden is an economically depressed and dangerous place. In 2011, in response to huge reductions in their state subsidies, they made 168 police officers redundant from a force of 368. It was pretty lawless before that happened, and it got a lot worse afterwards. The move was calculated to break the union hold over the police department, which had seen a buildup of unacceptable largesse pushed through in light of the acknowledged danger of the job. It was a successful gambit, and now that the police are structured the way the Governor wants they have seen a resurgence of support from the state coffers and are taking control back almost street by street (it seems it was that bad). Camden is where the white suburbanite drug users from Philadelphia buy their heroin, with a daily and steady stream of buyers arriving to keep the Camden drug market vibrant. It’s incidental to the main narrative but I found this small, but perverse, anecdote very interesting…

In fact, when Camden made the papers a few years back after a batch of Fentanyl-laced heroin caused a series of fatalities here, it attracted dope fiends from hundreds of miles away. “People were like, ‘Wow, I’ve gotta try that,'” says Adrian, a recovering addict from nearby Logan Township who used to come in from the suburbs to score every day and is now here to visit a nearby methadone clinic. “Yeah,” says her friend Adam, another suburban white methadone commuter. “If someone dies at a dope set, that’s where you want to get your dope.”

Rolling Stone

 

Recently Microsoft used their reserved right to read their users’ Hotmail messages to get to the bottom of a criminal software leak. It was an action that was enabled by their terms of service, but did not need a warrant or the involvement of police or the legal profession. They caught a great deal of flak for it, largely because it would likely have been possible to gain the same access by following due process. They have recently announced that in similar circumstances in the future they will indeed resolve the issue through the police. Rather sadly, as the quote below makes plain, this really should not be a ‘concession’. The days of boilerplate terms of service that dismiss obvious standard behaviours need to end.

Smith asks a seemingly rhetorical question: “What is the best way to strike the balance in other circumstances that involve, on the one hand, consumer privacy interests, and on the other hand, protecting people and the security of Internet services they use?” That is indeed a fascinating question, but in the specific case of Hotmail, I feel like it has a pretty obvious answer: change your terms of service so that you promise not to read your customers’ email without a court order. Then, if you think there’s a situation that warrants invading your customers’ privacy, get a court order. This is just basic rule-of-law stuff, and it’s the kind of thing you’d hope Microsoft’s General Counsel would find obvious.

Rule of law

 

Paul Graham is one of the co-founders of Y Combinator, one of Silicon Valley’s most prominent tech accelerators. In this interview he covers a lot of ground in a fairly short amount of time. He also makes one observation which backs up the thought that the new swathe of entrepreneurs are in fact the new labour class, not the management or owner class that perhaps they imagine themselves to be, or at least that they are working to achieve. Tales of founders being replaced at the whim of VCs have never sat well with me. This quote from Graham, regarding the qualities they look for in a startup/idea, certainly sounds like the words of an employer not an investor assessing a specific business.

Mostly, we look at the people. If they seem determined, flexible, and energetic, and their ideas are not just flamingly terrible, then we’ll fund them. We thought Airbnb was a bad idea.

Paul Graham interview


Disintermediation and the curious case of digital marketing – revisited

[Chart: LUMAscape, the display advertising landscape]

This essay is the follow up, long overdue, to this small observation, posted in February 2013.

The whole issue of disintermediation is one of the key phenomena of the internet age, yet for some reason digital advertising seems to have missed out. In fact, somewhat perversely, digital advertising has instead managed to go in entirely the other direction, filling up with a whole new class of mediators, the adtech companies.

I’m going to argue that this is a situation that cannot last long (years not months, probably at least 5). King Canute’s original command that the tide stop rising was never going to be a great business model.

Although the common telling of the Canute story has him drowning against the onslaught of the waves, there is evidence that he instead adapted to the new knowledge and changed his practices.

..he commanded that his chair should be set on the shore, when the tide began to rise. And then he spoke to the rising sea saying “You are part of my dominion, and the ground that I am seated upon is mine, nor has anyone disobeyed my orders with impunity. Therefore, I order you not to rise onto my land, nor to wet the clothes or body of your Lord”. But the sea carried on rising as usual without any reverence for his person, and soaked his feet and legs. Then he moving away said:  “All the inhabitants of the world should know that the power of kings is vain and trivial, and that none is worthy the name of king but He whose command the heaven, earth and sea obey by eternal laws”. Therefore King Cnut never afterwards placed the crown on his head, but above a picture of the Lord nailed to the cross, turning it forever into a means to praise God, the great king.

I will cover the detail in a separate post, but I also believe that this could be the moment when digital advertising and digital marketing is forced to make some short term and slightly painful changes that will eventually open up a whole new vista of opportunity and the chance to truly lead the media landscape.

Before going into the depth of the essay it might be worth taking a quick 2 minute look at this link which explains the core ideas behind how the adtech model works. It’s a clever and accurate explanation and is easy to digest, an informative visualisation of a highly complex system. It will also provide clarity regarding exactly which part of the adtech universe I am talking about.

Mediators and Disintermediators

Digital advertising hasn’t been disintermediated yet because, firstly, it is in and of itself a mediator (all advertising is) and, like all incumbent mediators, is not keen to be disintermediated. Simply moving advertising from the offline realm to the digital is not enough. It might seem that web platforms force disintermediation all on their own, but they don’t. Other factors need to align as well.

For a start there needs to be a new model that can replace the current one, and that new model also needs to deliver enough consumer/user benefit to overcome the ever present inertias.

Amazon, for example, took the intermediation costs out of the book market but replaced the incumbent model with a different way to sell books. From the consumer perspective the loss of the mediators was painless, almost invisible, they just bought their books in a different shop that happened to be located on the internet instead of the high street, and for significant cost savings too.

That Amazon could offer digital shopping, that their technology existed and was functional, was the substantial factor that enabled the new market dynamics to take hold.

It’s not the same in advertising today.

With regard to advertising, the consumers in question are the advertisers, not the people who buy the advertised products.

Businesses buy their advertising from a large (and getting larger) ecosystem of multi-skilled providers. Very few businesses create their advertising in-house and generally do not own the resources required to do so (copywriters, art directors, studios, media planners, media buyers, systems, relationships….etc). As a result marketing services agencies and advertiser marketing teams are currently deeply co-dependent.

Disintermediation, by definition would seek to weaken that relationship and few people, on either side, can see how that could function. In short the advertisers are not using their market position to force disintermediation in the marketing services ecosystem, and hence their access to customers remains firmly managed.

Part of the reason for this co-dependency is the burgeoning complexity in the use of data.

The second and more problematic reason that we have mediation instead of disintermediation, therefore, is related to what’s happening with all this data. Or rather, not so much what is happening with the data, but why so much effort is being concentrated on the data. The companies that do things with data are in the marketing services sector, for the reasons stated above. And whereas advertisers’ markets grow if they sell their products or services (via marketing or sales initiatives, or whatever), the marketing services sector grows when it sells more services to the advertisers.

The most important, acknowledged, growth opportunity in the marketing services industry today is in data management and data targeting; the enabler of such things is adtech.

This is the prime economic incentive driving growth in advertising at large and it has the simultaneous benefit of minimising macro level change too, as the focus remains on trading eyeballs even while the industry appears to be innovating.

On the surface it seems that everyone is a winner. Even the publishers.

The short answer, then, is that digital advertising is currently avoiding disintermediation due to the absence of a plausible replacement and the dynamics/politics of the growth opportunities for marketing services. The advertisers, who could credibly drive the need for such a replacement, via market forces, are currently heavily co-dependent on a vibrant marketing services world, and in thrall to its innovations in data. As a result they are not putting any pressure on their suppliers for anything except more of the same.

But the short answer is deeply unsatisfying. If adtech was the correct way to go, then the adtech market itself would be consolidating not exploding.

As it happens there really is something deeply wrong with the adtech model.

It’s all about eyeballs (and data)

The publishing industry has always been about eyeballs, because eyeballs generate their revenue. Much of the tech sector is going the same way. In the absence of subscription revenue much technology innovation is built on ad supported models.

Google is eyeballs and data. Facebook is eyeballs and data. Everything is eyeballs and data.

But the traditional home of disruptive companies, the kind that have generated the internet’s reputation for disintermediation is the technology start up sector itself isn’t it? So what’s going on?

It’s not that the startup world has inexplicably passed digital advertising by; it’s that in this particular sector the short term incentives have lined up to foster a glut of mediation instead of disintermediation.

Ironic certainly, but the question that needs to be answered is, at what point does it become a problem?

I think we are already there and the reason why I say that lies in the wider mechanics of today’s digital publishing business model.

It is worth pointing out that I do not intend to set out a vision here for the whole of the publishing industry’s business future. What I want to do instead is to pull on just one loose thread, one that hopefully might reveal a feasible possible future, as the cardigan it belongs to starts to disintegrate.

The loose thread is this.

As a publishing medium the web has no effective inventory limit imposed by physics, cost or performance.

  • Adding pages to a printed medium requires capacity at the printing press, plus the cost of creating relevant content that has commercial worth to both consumers and advertisers
  • The number of worthwhile sites to place outdoor posters is limited and generally fully explored
  • TV is bounded by production and distribution costs and the fact that there are 24 hours in a day
  • Radio has a finite audience, as does cinema, which is not growing
  • Direct marketing messages are bounded by the size of target populations

The internet, on the other hand, offers trivial extension costs, a seemingly infinite audience and some unique content creation opportunities.

But what about worthwhile content, right? That surely establishes some kind of significant commercial boundary. Well, yes and no. Some parts of the publisher universe have to pay for their content, but the ability to originate all your own content is no longer an entry level requirement.

Take, for example, something like Buzzfeed or the Huffington Post, both very legitimate publications. Some of their content is provided free by the writers; some is given value by the quality of the data optimisation (multiple headlines plus automatic analysis equals loads of eyeballs for only one writer’s cheque, and a relatively small one at that, as the writer isn’t the most important part of the equation); and some of it, a small part, might be remunerated at traditional levels.

Some sites simply steal content, while generating ad revenues (whiffy publishers), and some sites use a variety of legitimate but infuriating tactics to up the impression count (3000 word articles split over 8 pages, for example), let alone the sites that stuff footers with 1×1 pixels.

These are all troubling practices but more worrisome is the rise of retargeting, via tracking technology and the buying of audience profiles through ad exchanges. Retargeting and profiling are more worrisome because they are totally legitimised by the industry at large, are vulnerable to significant fraud (the biggest problem) and are educating a generation of advertisers (client and agency) to disregard huge swathes of wisdom that we have, as an industry over many years, learned about advertising.

We have forgotten about the value of context and the quality of the ad environment.

This has happened because of the huge volume of useless inventory ‘magically’ masquerading as prime media, distorting the economics of the marketplace. It doesn’t help that misaligned incentives across the whole executional chain (publisher, agency, advertiser) are hampering the efforts to clean up the situation.

Useless Inventory

I should first explain why this inventory is useless and to be fair some of it isn’t totally useless (although it’s not far off).

Some of it is just low grade inventory being sold as something better because of the addition of insight from data analysis. This is the retargeting strategy: the idea that because I can track you from a premium environment to a low grade environment, there is no need to pay the higher cost of the quality placement. After all, you’re the same person, aren’t you? Why wouldn’t you be just as persuaded consuming an advert on, say, uselessshite.com as on GQ.com?

To be fair the idea of buying an audience, which is what we are seeing here, is not alien to advertising, it is after all similar to the way we buy TV. However, it is not a good like for like comparison, however much we want it to be. This is, again, because one medium has a finite inventory (TV), while the other is not even close to finding its market defined ceiling.

What changes in this case are the dynamics of verification.

A TV buyer knows if the network is dumping its weak spots into one buy, while the digital buyer knows nothing about the contextual quality of their purchase. In both circumstances the buyer will get the audience they purchased, but the TV buyer can exercise more control over the environment, and hence the value, into which that audience is delivered.

I dislike retargeting strategies because I think they are theoretically weak: environment quality is important and should not be disregarded. But retargeting is at least derived from a legitimate concept, an attempt to put the ad in front of the right person. If that seems like a low bar for acceptance, it is.

Sadly it gets a lot worse from here.

The second class of useless inventory is the inventory that isn’t even human. This is the bigger issue. Bot traffic is running at between 30% and 46% of online display impressions, depending on which set of stats you use.

You did read that right. Roughly 1 in 3 of the digital display impressions being bought today isn’t even human.
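It is worth making the arithmetic of that statistic explicit, because a bot share is effectively a hidden surcharge on every buy. A minimal sketch in Python (the $2.00 quoted CPM is a hypothetical figure for illustration; the bot shares are the 30-46% range cited above):

```python
# What a bot-inflated buy really costs per *human* impression.
# The $2.00 quoted CPM is hypothetical; the bot shares are the
# 30-46% range cited in the text.

def effective_human_cpm(quoted_cpm: float, bot_share: float) -> float:
    """Cost per thousand human impressions when a share of the
    impressions bought are actually served to bots."""
    return quoted_cpm / (1.0 - bot_share)

for bot_share in (0.30, 0.35, 0.46):
    print(f"bot share {bot_share:.0%}: "
          f"${effective_human_cpm(2.00, bot_share):.2f} per 1,000 humans")
```

At a 35% bot share, a $2.00 CPM buy really costs about $3.08 per thousand human impressions: an invisible surcharge of over 50%.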

There is no argument that can possibly turn these into good media tactics. If no humans are seeing these adverts then it follows, without controversy, that they can’t influence a human to purchase a product. Yet the industry at large is making these buys, seemingly with full knowledge every day.

This is not an unreported phenomenon, by the way, I am not breaking news here.

It’s very important to understand that media planners and buyers aren’t idiots. Not by a long margin. There must, therefore, be reasons that can help explain this behaviour.

First, as previously mentioned, the commercial incentives really don’t help.

These conditions might explain why some parts of the industry are making these mistakes, but they aren’t sufficient to explain why the whole industry is making these mistakes. If this was all that was going on the only brands doing this would be those in urgent need of cost control. The successful top quartile, at least, would still be executing against ‘premium’ inventory. But they aren’t, so something even more foundational must be at work.

To understand what that is we need to investigate how this inventory gets into circulation. The question is a harsh one: how can an industry that has operated for so long suddenly be duped into buying campaigns on such a flawed basis?

This is where the impact of adtech becomes a nightmare.

Adtech has created automated networks that operate in real time on either side of the publisher. A publisher can buy part of its audience from one network, and then sell advertising against that exact same purchased visitor through another.

If you can manage the money such that the buy costs less than the sell, even by a tiny amount, then over vast volumes of impressions riches await. Moreover, if 30-40% of the traffic being sold is created by bots, those costs can be incredibly low, which in turn reduces the price needed on the ad networks. Which means the advertisers are picking up impressions at stunningly low prices too.
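The mechanics of that arbitrage can be sketched in a few lines. All figures here are hypothetical and purely illustrative, but they show why a sliver of margin over huge volumes is enough:

```python
# A toy model of the buy-low/sell-high traffic arbitrage described
# above. All figures are hypothetical.

def arbitrage_profit(impressions: int, buy_cpm: float,
                     sell_cpm: float) -> float:
    """Profit from acquiring visitors at buy_cpm and reselling the same
    impressions to advertisers at sell_cpm (both in $ per thousand)."""
    return impressions * (sell_cpm - buy_cpm) / 1000.0

# Buying (partly bot) traffic at a $0.10 CPM and reselling it at $0.35
# is only a $0.25 margin per thousand impressions, but at a billion
# impressions a month the margin compounds into serious money.
monthly = arbitrage_profit(1_000_000_000, 0.10, 0.35)
print(f"${monthly:,.0f} per month")
```

A quarter of a cent per impression, a billion times over: that is the whole business model.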

And that’s what’s happening, no joke.

There is a veil of respectability over buying these cheap impressions because the kosher media properties, who are looking to establish high CPMs in exchange for real advertising value (with their premium stock), are using these exchanges to generate incremental revenue on their remnant inventory too. Along with the impressions sold in old-fashioned deals between humans, this is what makes up the 60% of inventory that is actually put in front of real people.

Because there is too much inventory out there, thanks to the economics of digital real estate already noted, this massive cost saving can be comfortably excused as simple supply and demand dynamics.

However, as also already noted, none of this is a secret. The industry is aware. It is therefore a very fair question to ask how this situation could possibly be tolerated, let alone enthusiastically adopted.

Part of the question is answered by the intense fervour generated around the technology itself. The technology is clever and complex; there is much to admire in engineering terms. However, this very complexity enables a troubling but comfortable ignorance. It means that buyers can’t vouch for the veracity of their buys even if they wanted to. As things stand they can’t be held accountable.

If that sounds hyperbolic I must point out that this complex machinery isn’t lamented for the separation of knowledge it creates, at all, quite the opposite in fact.

John Battelle just loves adtech

To make my case I would point you towards the following three links. They are all connected to, or written by, John Battelle, an important and intelligent champion of the adtech/digital advertising landscape. An industry leader, he is no fool.

First up is the highly instructive visualisation I linked to in the first part of this essay, which shows the technology I have been talking about in action. It is well constructed and explains, better than a wall of text ever could, how this all works. Even if you look at none of the others, please do look at this link; it shows how the modern digital market operates more clearly than anything else I have seen. Battelle is very proud of it, as he should be.

I am keen to highlight this explanation of the adtech model because I don’t want anyone to think that I have misunderstood what is being sold. This explanation is provided not by sceptics like me, but by enthusiastic cheerleaders.

Near the end we are triumphantly told that,

Dozens of sophisticated servers can be involved in a single ad placement, which takes less than a quarter of a second.

I’m really not sure that is something any media planner should be happy reading, to be honest, knowing as we do that at least a third of impressions aren’t even human.

Secondly we have Battelle’s homage to the deep importance of adtech. He is almost universally positive about it, although I sense a few moments in this ‘love letter’ that suggest a few notes of caution have made themselves known to him. If you think I am being snarky in describing this link as an homage and a ‘love letter’ I would point out in riposte that the title (seemingly without irony) of the post is “Why the banner ad is heroic and adtech is our greatest technology artefact”.

He proudly heralds the LUMAscapes as evidence of success rather than inane profligacy, champions the technology that allows “a pair of shoes to chase you across the web” as heroic, instead of creepy, but pays nothing but dismissive lip service to the deep seated concerns raised by several respected leaders of today’s digital world such as Lawrence Lessig, Jonathan Zittrain and Tim Wu, none of whom are uninformed luddites.

OK. Let’s step back for a second. When you think of this infrastructure, are you concerned? Good. Because it’s imperative that we consider the choices we make as we engage with such a portentous creation…

What are the architectural constraints of the infrastructure which processes that information? What values do we build into it? Can it be audited? Is it based on principles of openness, or is it driven by business rules and data-structures which favor closed platforms?

These questions have been raised, and continue to be well articulated, by Lessig, Zittrain, Wu, and many others. But we’re entering a new, more urgent era of this conversation. Many of these authors’ works warned of a world where code will eventually augur early lock down in political and social conventions. That time is no longer in the future. It’s now. And I believe as goes adtech, so goes our social code.

My emphasis added. He acknowledges concerns, big concerns, but is alarmingly sanguine about trying to solve them. If adtech is our social code as he suggests, then our future is damningly bleak.

However the 3rd link is the one I struggle with the most.

This is where we learn that Battelle has been appointed by the IAB as co-chair of the “traffic of good intent” task force. Good intent, in this context, means traffic that is actually worth buying. In their words, to,

“more effectively address the negative impacts” of bots and other “non-intentional” traffic.

Which quite frankly is a weak mission statement when the actual job required is the eradication of the massive fraud at the heart of digital advertising.

Still, what can we expect? The co-chair of this task force is the same man who believes that

Every retail store you visit, every automobile you drive (or are driven by), every single interaction of value in this world can and will become data that interacts with this programmatic infrastructure.

Right. Time to step back and return to my narrative. I need to explain why Battelle cannot idolise this technology and also solve the fraud problem without destroying privacy.

[I must state, at this juncture, that my concerns, the reason I am writing this, are not rooted in a fear of a world with no privacy, although I do not wish to live in that world. In truth my concerns are for the veracity of my trade, I work in advertising, it’s what I do, I would prefer it be intelligent and effective not automated and fraudulent.]

Battelle does a great job of demonstrating and lauding the very complexity that totally obfuscates the ability to examine the veracity of the media buys. It is this complexity that forces a critical compromise on us.

Pick two

Don Marti lays the compromises bare, as a 3 way play. “Ad tech, privacy, fraud control: pick two?”

He makes a strong case.

Remember, ad tech is being gamed by big volumes of fraudulent non-human ad traffic being passed off as the real thing. As Battelle’s graphic points out, the data work that fuels this market is all profile based and anonymous. No-one is following around named individuals; instead they are following a 35-year-old male who plays golf and was recently browsing for a new pair of shoes. They know a lot about you, as it goes, but not your name (apparently).

Under this system there is enough space for 40% of the traffic to be spoofed without getting caught.

The only way to resolve the fraud, then, and maintain the adtech structure, is to deliver complete visibility of who is who, where and when.

Or to put it another way: answering the question of whether the impression that was just purchased is a bot or a human requires that you, the human, forgo your desire, your right, to remain an anonymous data point. There is nothing partial about this solution; the only way to really kill the fraud is to always identify the humans. And not in a binary human/not-human way, since a simple flag is too easy for a bot to overcome or spoof. To really kill fraud and maintain the adtech structure we will have to relinquish our digital privacy in totality.

All of a sudden this line from Battelle becomes more worryingly profound,

Every retail store you visit, every automobile you drive (or are driven by), every single interaction of value in this world can and will become data that interacts with this programmatic infrastructure.

The italic emphasis in that quote is his by the way, not mine.

So, which couplet do you fancy most?

  1. Ad tech + privacy = lots of fraud
  2. Ad tech + fraud control = no privacy
  3. Fraud control + privacy = no ad tech

Option 1 should be totally unacceptable to the whole advertising industry (publishers, agencies and advertisers) but it’s actually where we find ourselves today.

Option 2 is, ultimately, in my opinion, outside the influence of the advertising industry. Even though Facebook has revealed significant comfort with the idea of hugely reduced digital privacy among great swathes of the population, there are big signs that this trend is not supportable long into the future. Digital natives are aware of the role of online personas and spend as much time curating the ‘correct’ image on the mainstream tools as they do behaving like teenagers on the networks that Mum, Dad, aunty Glad and uncle Bill don’t know about. The Snowden/NSA revelations are not going to help reverse that trend.

Moreover the world at large needs a level of achievable anonymity to function. Cities were the first such artefact, providing the young with an arena within which to grow and find themselves, outside of the gaze of those who ‘know’ them best. There is a reason much innovation is rooted in the city, not the farm.

Option 3, on the other hand, makes sense for a lot of reasons, not just in terms of privacy. For a start it would force digital advertising to re-assess some of the more human aspects of driving useful commercial interactions instead of ceding more and more decision making to misunderstood algorithms. We might even start to understand how best to use the digital medium as a branding channel, something that has been glossed over in the data revolution.

It could also be part of the evolution of digital publishing into a fiscally secure enterprise. Rampant content theft by rogue commercial entities and the eroding of premium rates for quality placements would by definition be somewhat stymied. It would also start to create an economic boundary for the quasi infinite inventory problem by reintroducing meaningful performance limitations.

A good result all round no?

So how do we get there?

If we want to clean up this situation it is painfully obvious what needs to happen: we simply take adtech out of the picture and, in its absence, return to first principles. The reasons why advertising works, why it sells products, haven’t changed because of the internet. We can still function without adtech.

The trickier part of the equation is getting to this inflection point, getting the industry, at large, to choose to move past adtech. That’s a lot harder and the nature of that journey will be influenced by whose outrage is powerful enough to drive change, consumer outrage, advertiser outrage or a bit of both.

It’s true that consumers are concerned about privacy today in ways that have been lacking in recent years, inspired by Snowden and the NSA rather than by creepy adtech. Yet here in the UK there still doesn’t seem to be huge concern. This strange silence, this peculiar lack of British outrage, is not mirrored across the Atlantic though, and what is finally adopted in the States will soon become the standard here too. Similarly, our European neighbours seem to have more concerns in this regard than we do; certainly the EU seems more committed to consumer protections in this area than the British government or population. So even if the US doesn’t move, I suspect the EU will.

The connection between commercial surveillance and state surveillance is robust and real; there can be no reform of the one without the other. So I fully expect regulatory responses to the Snowden scandal to impact commercial data practice.

Finally, there is the market-force influence of consumers seeking privacy friendly solutions to account for, although in my opinion the scale of that influence is not likely to prove substantial in the shorter term.

These factors, stemming from consumer outrage, are actually relatively weak as agents of the kind of change I’m advocating. However, they do suggest that the commercial surveillance machinery, adtech, will struggle to deliver the total surveillance advocated by Battelle. And if that is true, then the ability to stem the tide of impression fraud is severely compromised.

The real force for change will likely be advertiser outrage and that will come from a different direction.

If total digital transparency doesn’t arrive to solve the fraud issue, then it will not take long for advertisers to start asking the hard questions they should really be asking today. Eventually they will refuse to pay a 40% surcharge for their media. The pervasive reality of useless inventory is not a secret; it can only become known to a wider audience, not a smaller one. Soon enough the advertisers’ CFOs will cotton on.

At that point the advertisers will have to insist on fraud control, and the only option that will be able to deliver it, is turning off the adtech machine. You can imagine a discussion between the CMO and the CFO that goes a little bit like this.

CFO: Why are you buying inventory that you can’t verify?

CMO: Well, if we stop using exchanges we won’t be able to hit our reach and cost targets.

CFO: But you aren’t hitting them today. 40% of your traffic is robots!

CMO: Well, that’s true, but the only way to buy volume that cheaply is via exchanges. We can’t ignore digital advertising, we have to be there; it’s a cost of doing business.

CFO: OK, well can you show me data that proves that these buys are effective? That it’s a cost of business worth paying?

CMO: Ah, no, not really. You’ll have to trust me.

And here we come to one of the genuinely hidden aspects of all this, the effectiveness question. It’s very important and is a big part of the reason why this situation persists. You see, both the CFO and the CMO are talking some sense here, as strange as that sounds.

Measurement and Effectiveness – the Great Big Stinky Elephant in the Room

If you don’t work in advertising or marketing it might seem crazy, but it has long been difficult to conclusively prove which sections of an ad campaign were successful. Analysis is conducted mostly at the level of the media channels used, chiefly because this is how the money is committed.

So, for example, the CMO wants to know if £10 million is best spent on print or TV or radio or digital. More precisely, she wants to know what the optimal mix of those channels should be; the way channels work together is crucial, particularly with respect to digital. It’s not easy to do, and in some cases not possible at all.

Direct marketing, the stuff that asks you to phone up and buy something (or visit a website), was always more measurable. Will the call centre get 1000 calls tonight? What is the target conversion rate, 15%? Did it happen? Both the calls and the sales can be counted, so those questions can be answered.

That worked well enough for a long while, but it wasn’t perfect. Direct marketers always liked to send out their ads during a peak in brand awareness (often coinciding with the big TV campaign), even though the cost of that brand campaign was rarely incorporated into their ROI analysis.

Most ad spend wasn’t in direct marketing though; it was in brand marketing. Go back even just 20 years and the media landscape was more concentrated and more innately understandable, and a big chunk of the advertising dollars were intended to push shoppers to retail outlets. As a result, proxy metrics like brand awareness were the best we had; technology simply couldn’t follow you from the ad to the purchase. These proxy metrics favoured big spends in the classic mass mediums, with TV considered king of the hill.

Although it is an oversimplification, there is some truth to the idea that whether or not you advertised on TV was mostly a function of the size of your overall budget. If you could afford to do TV, accepted wisdom was that you should. TV is both cheap (a low relative cost per thousand impressions) and expensive (a substantial total capital outlay as the cost of entry); shifting brand metrics costs millions of pounds. There were lots of exceptions of course, but the general principle stood up to intuitive examination, and when good econometric studies were done (not often enough) these big spends were usually identified as the most likely to shift the metrics. TV still dominates media spend today, by the way. This accepted wisdom has not yet been supplanted.

Nonetheless, when digital advertising came of age it was thought that we might be entering an era of eminently more measurable advertising. As it turned out, nothing could have been further from the truth.

Digital is an innately direct medium. Every interaction can be counted: if your computer sees my ad, the ad serving tools make me aware of that. If you click on my ad, that too is a solid data point that can be collected.

Even though digital was a direct medium, it was never really ceded to the direct marketers; instead it became part of the brand marketers’ playbook.

The resulting problem wasn’t one of competence; the counting methods at the heart of direct response were only ever a challenge to the ego (spreadsheets, moi?) and easily overcome. The problem was one of competing evaluation methodologies.

To fit digital into econometrics (the method of choice for evaluating brand campaigns) it needed to be assessed against the same cost metrics as the other channels: cost per rating point (a coarse, profiled audience buying metric) plus reach and frequency. This wasn’t possible, as the volatile digital landscape debarred effective reach and frequency measurement (a percentage based metric), while absolute cost metrics (price per point) were replaced with performance metrics (cost per click). The absolute cost metrics existed, of course, but they weren’t driving buying decisions.

There is nothing wrong with either body of cost management, but you simply cannot use them interchangeably and still facilitate a meaningful analysis. This is evidenced by the industry’s ongoing inability to effectively quantify the effect of TV aircover on digital acquisition campaigns. It stands to reason that there will be a relationship, but quickly measuring and quantifying that effect once the adverts have run and the business results are in… we haven’t got there yet.
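The incompatibility is easy to demonstrate. The two vocabularies can be converted mechanically, but only via an assumption (click-through rate) that brand planners and performance buyers do not plan against in the same way. A hypothetical sketch, with made-up figures:

```python
# Converting a performance metric (cost per click) into an absolute
# cost metric (cost per thousand impressions). The arithmetic is
# trivial, but it hinges on the click-through rate, a number the
# brand-side evaluation never plans against. All figures hypothetical.

def cpc_to_effective_cpm(cpc: float, ctr: float) -> float:
    """Implied CPM of a cost-per-click buy at a given click-through rate."""
    clicks_per_thousand_impressions = ctr * 1000
    return cpc * clicks_per_thousand_impressions

# The same $0.50 CPC implies very different absolute costs:
print(f"${cpc_to_effective_cpm(0.50, 0.001):.2f} CPM at 0.10% CTR")
print(f"${cpc_to_effective_cpm(0.50, 0.0005):.2f} CPM at 0.05% CTR")
```

The same buy looks twice as expensive, or half as expensive, in the econometrician’s terms, depending on a behavioural metric she has no visibility of, which is exactly why the two bodies of cost management cannot be used interchangeably.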

The real takeout here? If you manage a reasonably sized advertising media budget, you still can’t accurately and quickly understand which part of your spend works well and which part doesn’t.

That was a long explanation of the vagaries of measurement, and I do apologise, but it is this inability to measure that explains why the CFO and the CMO can both be partially right.

It’s a madness to willingly spend 40% of your digital budget serving ads to robots. The CFO is therefore right to ask why.

But it’s also true that an advertiser with a decent budget these days is likely to be best served by a multi-channel strategy, and that digital display should almost certainly be a part of that. And while the industry at large is accessing the remnant market (via exchanges), it would be a very brave CMO who decided to take a stand all on their lonesome.

In truth the CMO probably should stop buying on the network exchanges, but her costs would rise significantly, since most advertisers very sensibly take advantage of remnant markets to control costs (in most channels, not just digital). However right that might be academically, there are few finance departments that would be comfortable with such an overnight increase in relative cost.

So change will likely only come when the CFO gets wind of the adtech fraud. Money talks and the CFO is the ultimate money man. Until then the CMO is unlikely to highlight the issue, and subsequently will be unable to explain the increase in media costs and the reduction in reach.

The pressures that will lead to a solution are therefore essentially political: the politics between CFOs and CMOs, and between digital tech evangelists and more widely obligated marketers (client side and agency side). The regulatory politics that play out in response to the NSA debacle and consumer privacy concerns will also play their part.

When your ability to understand what is working is impaired you must return to first principles. So, we kill the adtech, and we once again plan ad campaigns according to the hard won knowledge of 100 years of advertising experience.

Easy… (sarcasm).

Back to disintermediation?

You might have noted that I still haven’t explained why there is no disintermediation in digital advertising. What I have done, instead, is show why the sector has been flooded with surplus mediation and proposed a sequence of events that might lead us to a position where digital mediates between advertisers and customers in the same way that offline advertising does.

However, the possibility that the internet can do to advertising what it did to book selling is more than a pipe dream. A lot of work is being done to deliver what is called vendor relationship management, whereby the innate value of data insight is harnessed to bring efficiency to the commercial marketplace, but with the data controlled by the consumer, not by mediators.

More to follow in another post.


Tracking ideas not people

 

In 2003 I developed a concept I called AISS, which stands for All Ideas Start Somewhere. It was just a series of thoughts, nothing was ever built.

By today’s standards it isn’t anything to make you look twice. It was simply an idea to track what I foresaw would be vast publicly available reaches of content, and then to understand how that content moved through the various technical layers that presented it to us, the public.

It was clever for its time, but I still couldn’t get anyone to spend money to put it together. It is, of course, a lot easier 10 years later to claim, with hindsight, that it was a good idea.

Since then, of course, various parts of the concept have come to pass, completely independently of me, and none in the exact format that I was suggesting. My logic was not unique, just early and unconnected to working capital.

That said, I still think an opportunity has been missed. The thing that is still absent is best thought of as a missed turning: a right/left option that, when taken, led into a murky forest instead of an angelic dale or meadow.

In 2003 I was suggesting that the unit of tracking should be the content itself, and that we needed to monitor the movement of the content through our digital ecosystems.

Instead we track people.

The internet is a vast looking glass that hardly anyone uses to its fullest potential. Instead of understanding what people want and giving it to them, we instead understand where people are and give them what we want them to have.

As everything changes, nothing changes.

Marketing has spent most of its life pushing crafted messages at vast demographics, hoping for response rates that are typically a fraction of a percentage point. Brand marketing has hitched its wagon to any number of perceived positive memes, hoping some of that positivity will attach to the brand itself. It’s a logical idea, but I’m at a loss as to why no-one seems to use the internet’s salient visibility to track the ideas or memes that are naturally liked by the target demographics.

Instead we have built a people tracking infrastructure so vast and sophisticated that it was co-opted by the NSA and GCHQ. This isn’t about the Snowden revelations, or at least not about their morality; it is instead a short note to point out that we have built such a great and vast tracking panopticon that even the spies wanted in.

And in return? Well, we still don’t get response rates that beat what we could get 20 years ago, while we are failing to build trust with our customers over the use of their data. And the key words in that last sentence? …their data… It might sit on company databases today, but it is still their (the consumer’s, the prospect’s, the customer’s) data. It should be treated as such.


Disintermediation and the curious case of digital marketing

From Wikipedia

 

In economics, disintermediation is the removal of intermediaries in a supply chain, or “cutting out the middleman”. Instead of going through traditional distribution channels, which had some type of intermediary (such as a distributor, wholesaler, broker, or agent), companies may now deal with every customer directly, for example via the Internet.

Meanwhile, from LUMA Partners, we have the LUMAscapes: pretty impressive visualisations of a number of different parts of the current digital marketing landscape (display, search, video, mobile, social, commerce, digital capital and gaming). Here’s the one for display advertising.

[Image: the Display Advertising LUMAscape]

This is going to be a short post.

I thought I’d just make the observation, with very little in the way of commentary (none, in fact), that despite the internet’s reputation for disintermediating everything it touches, there doesn’t seem to be much disintermediation going on in digital marketing. That’s all.

 

Update: I have now tried to answer the question properly, here