It seems to be an unavoidable truth that just before economic bubbles pop, trading confidence and bullish sentiment are at a significant high. While some of this is undoubtedly definitional (a bubble can’t build without aggressive bullish sentiment), it always leaves me asking why people don’t see the inevitable coming. The short answer is that they do, or at least some people do. Provenance is important here. There are commentators who do nothing but forecast doom-laden futures, and they spend a large amount of their time being wrong (even if they claim it is only the timing they are missing, it still feels to me like the stopped clock that tells the time correctly twice a day). When the warning signs are being broadcast by the bankers themselves, however, I feel we should take note. It won’t change any behaviours, of course; bullish pre-burst behaviour is as much about herd tendencies as anything else, so these charts are for those of us without our fingers on the so-called pulse.
Apparently 30 miles up from the surface of Venus is a sweet spot for human habitation, with Mediterranean temperatures and ideal barometric pressures. This article outlines the discussion about the real-world feasibility of colonising Venus with floating cities. It’s a somewhat Hemingway-esque life that’s described, where men really are men. Not for me, but I’d love to see the movie.
Venus is not for the timid, or people too afraid to shove a fat bird out the airlock and let the harsh laws of thermodynamics do the work. Venus is for men. Men who like to eat meat – cooked in fire and acid and seasoned with the Devil’s own mix of volatiles boiled up from the pits of hell.
Finally, back to more mundane earthly matters, we have this interesting, recently discovered feature of Google Maps. In parts of the world where there are visceral border disputes, Google shows different boundaries depending on where the map request is made from. Does this fuzziness promote more comfort for those involved in the dispute? I must conclude that it is the only sensible option Google can take, delivering the map each territory expects to see. In this regard it is no different to the maps that would have prevailed 200 years ago. The really interesting thing, though, comes from imagining what would happen if Google decided not to bother and simply presented one version, by such a process taking on the role of arbiter of national boundaries. They don’t do that today, but it is the kind of thing that could one day fall under the remit of “organising all the world’s information”.
Earlier this year Netflix and Comcast had a little contretemps about their peering agreement. Unless you spend time trying to keep up with the various layers, and the players therein, that make up the technical infrastructure of the internet, that statement potentially means absolutely nothing to you whatsoever, but actually it’s all quite important.
Peering is the name given to the agreements that cover the terms for transferring internet traffic between networks, and these agreements are a fundamental cornerstone of the modern internet. Because no single network covers the whole world, it is important that traffic can be exchanged, as required by the needs of the user, with the least possible friction. Historically these agreements were made between engineers, and each network simply agreed to open up to the others as required, with no money involved. Comcast decided that they wanted to buck that history and demanded payment from Netflix. Netflix suggested that prior to this negotiation Comcast had been deliberately allowing congestion in certain parts of their network to negatively affect Netflix’s customers (something Comcast denied) and that they had no choice but to pay the ransom.
Level 3 is one of the big global internet backbone companies that carry enormous amounts of web traffic. They are one of the companies that pay for, own and maintain the cables that run under the oceans. This post on their blog lays out some of the dynamics that go into peering agreements, and even though it doesn’t deal directly with the situation between Netflix and Comcast it should give you enough information to work out who is playing the shithead and who isn’t.
I never did try playing Go, the ancient Chinese strategy game, and after reading this article I’m starting to think that that is no bad thing. Like chess it’s a two-player war game, but unlike chess it’s a game (possibly the only game left) that retains an unbeaten crown in human vs computer match-ups. Machines beat humans at checkers in 1994, and chess was added to the list in 1997. Now, 17 years later, Go still holds out, and holds out with comfort. Every year the University of Electro-Communications in Tokyo hosts the UEC Cup, where computer programs compete against each other for the opportunity to play against a Go sage, one of the world’s top Go players. The challenge is not one of brute-force computing power; it’s more about strategic understanding, and the fact that by all accounts we don’t really understand what goes into being a great Go player, human or otherwise.
Ever wondered why the airwaves are licensed by the government? If you think about it a little, the chances are you will come to what seems like a very simple and straightforward conclusion: that the airwaves are licensed so that broadcasting signals don’t interfere with each other. To ensure that when you tune in to 97–99 FM here in the UK, you will receive Radio 1, not some other outfit broadcasting on the same frequency. Which is all well and good, except that electromagnetic waves simply don’t interfere with each other. This concept of interference seems to imply that there is only so much space on any particular frequency that can carry signals, that there is only so much spectrum available. Colours are on the same spectrum as radio waves, separated only by their different frequencies; to suggest that spectrum is limited is to suggest that there is only so much red to go around, which is clearly a farcical concept.
All of that, which is sound and uncontroversial 6th form physics, does raise some interesting questions about our radios and TVs and mobile phones, all of which broadcast across licensed electromagnetic frequencies. It turns out that the problem of interference is a problem of the broadcasting and receiving equipment not the natural scarcity of the airwaves. We have a system that limits access to frequencies because we are still using a technology base optimised to an old technical paradigm.
This piece, published in Salon, gives you the full detail and quotes extensively from the work of David Reed, an important and prominent scientist from MIT, famous for co-writing the paper that nailed down the modern architecture of the internet. Understanding what is actually going on here turns out to be entertaining and enlightening.
I don’t know whether this last link is being serious or not, and that alone might be the best observation I have to make about the state of modern economics.
Alex Tabarrok is a right-leaning economist who authors the blog Marginal Revolution with Tyler Cowen; both are professors at George Mason University in Virginia. This short post, one of many from the right responding to the fuss being made by Piketty’s Capital, offers “2 surefire solutions to inequality”. One is to increase fertility among the rich, diluting inheritances and reducing capital concentration. The other surefire way? To reduce fertility among the rich! The author of the post puts a lot more detail into this position than I do here. I’m leaning towards the opinion that he is simply having a laugh, but then, as he is an economist, I’m really not so sure.
Right kids. Me and your Mum have decided it’s time to buy a new TV. Exciting eh.
Yes, yes, it’ll be a good one, we’ve decided to get a good one. It might not be the biggest, it still has to fit in the corner, but it will be a good one, lots better than the one we’ve got now.
OK, but you know there are some decisions we have to make, right. We’re going to have to decide which channels we want.
Well, yes, it would be nice if we could get them all, but different TVs show different channels, don’t they, you know that.
Why? Well that’s just the way it is.
What? Well yes we could just give them more money, you’d have thought so wouldn’t you, but actually we can’t. They don’t operate like that.
That doesn’t make sense? No, I guess it doesn’t does it.
Aren’t they trying to make money? Well yes they are. That’s why…. that’s why…
Look, you see, the TV people, they have deals with the people who make the programs, and the people who make the programs, well, they only want you to watch their programs on certain types of TV.
That doesn’t make sense either? …….No I guess it doesn’t does it.
Well yes, you would have thought that the people who make the programs want as many people as possible to watch.
Yes, if it was available on all the TVs then more people could watch it. That’s true.
Look kids, you’re just going to have to trust me on this one. We just can’t get all the channels, it’s not possible, they won’t sell it to us like that. We’re going to have to choose which ones we want.
No, it doesn’t make sense. I agree.
Sorry, what was that, I didn’t hear you?
Please stop mumbling, say that again.
Adults are stupid. Ah…ok
CONTEXT: Please imagine this conversation happening anytime from 1980 up until 2003 (…or so). The link below describes the real situation as it occurs in today’s world (a very good read), and there’s a YouTube clip to identify Joyce Grenfell for those who have never heard of her.
This essay is the follow up, long overdue, to this small observation, posted in February 2013.
The whole issue of disintermediation is one of the key phenomena of the internet age, yet for some reason digital advertising seems to have missed out. In fact, somewhat perversely, digital advertising has instead managed to go in entirely the other direction, filling up with a whole new class of mediators, the adtech companies.
I’m going to argue that this is a situation that cannot last long (years not months, probably at least 5). King Canute’s original command that the tide stop rising was never going to be a great business model.
Although the common telling of the Canute story has him drowning against the onslaught of the waves, there is evidence that he instead adapted to the new knowledge and changed his practices.
..he commanded that his chair should be set on the shore, when the tide began to rise. And then he spoke to the rising sea saying “You are part of my dominion, and the ground that I am seated upon is mine, nor has anyone disobeyed my orders with impunity. Therefore, I order you not to rise onto my land, nor to wet the clothes or body of your Lord”. But the sea carried on rising as usual without any reverence for his person, and soaked his feet and legs. Then he moving away said: “All the inhabitants of the world should know that the power of kings is vain and trivial, and that none is worthy the name of king but He whose command the heaven, earth and sea obey by eternal laws”. Therefore King Cnut never afterwards placed the crown on his head, but above a picture of the Lord nailed to the cross, turning it forever into a means to praise God, the great king.
I will cover the detail in a separate post, but I also believe that this could be the moment when digital advertising and digital marketing are forced to make some short-term and slightly painful changes, changes that will eventually open up a whole new vista of opportunity and the chance to truly lead the media landscape.
Before going into the depth of the essay it might be worth taking a quick 2 minute look at this link which explains the core ideas behind how the adtech model works. It’s a clever and accurate explanation and is easy to digest, an informative visualisation of a highly complex system. It will also provide clarity regarding exactly which part of the adtech universe I am talking about.
Mediators and Disintermediators
Digital advertising hasn’t been disintermediated yet because, firstly, it is in and of itself a mediator (all advertising is) and, like all incumbent mediators, is not keen to be disintermediated. Simply moving advertising from the offline realm to the digital one is not enough. It might seem that web platforms force disintermediation all on their own, but they don’t. Other factors need to align as well.
For a start there needs to be a new model that can replace the current one, and that new model also needs to deliver enough consumer/user benefit to overcome the ever-present inertia.
Amazon, for example, took the intermediation costs out of the book market by replacing the incumbent model with a different way to sell books. From the consumer’s perspective the loss of the mediators was painless, almost invisible: they just bought their books in a different shop that happened to be located on the internet instead of the high street, and at a significant cost saving too.
That Amazon could offer digital shopping, that their technology existed and was functional, was the substantial factor that enabled the new market dynamics to take hold.
It’s not the same in advertising today.
With regard to advertising, the consumers in question are the advertisers, not the people who buy the advertised products.
Businesses buy their advertising from a large (and getting larger) ecosystem of multi-skilled providers. Very few businesses create their advertising in-house and generally do not own the resources required to do so (copywriters, art directors, studios, media planners, media buyers, systems, relationships….etc). As a result marketing services agencies and advertiser marketing teams are currently deeply co-dependent.
Disintermediation, by definition, would seek to weaken that relationship, and few people on either side can see how that could function. In short, advertisers are not using their market position to force disintermediation in the marketing services ecosystem, and hence their access to customers remains firmly managed.
Part of the reason for this co-dependency is the burgeoning complexity in the use of data.
The second, more problematic reason that we have mediation instead of disintermediation is related to what’s happening with all this data. Or rather, not so much what is happening with the data as why so much effort is being concentrated on it. The companies that do things with data sit in the marketing services sector, for the reasons stated above, and whereas advertisers’ markets grow when they sell their products or services (via marketing or sales initiatives, or whatever), the marketing services sector grows when it sells more services to the advertisers.
The most important acknowledged growth opportunity in the marketing services industry today is in data management and data targeting, and the enabler of both is adtech.
This is the prime economic incentive driving growth in advertising at large and it has the simultaneous benefit of minimising macro level change too, as the focus remains on trading eyeballs even while the industry appears to be innovating.
On the surface it seems that everyone is a winner. Even the publishers.
The short answer, then, is that digital advertising is currently avoiding disintermediation due to the absence of a plausible replacement and the dynamics/politics of the growth opportunities for marketing services. The advertisers, who could credibly drive the need for such a replacement, via market forces, are currently heavily co-dependent on a vibrant marketing services world, and in thrall to its innovations in data. As a result they are not putting any pressure on their suppliers for anything except more of the same.
But the short answer is deeply unsatisfying. If adtech was the correct way to go, then the adtech market itself would be consolidating not exploding.
As it happens there really is something deeply wrong with the adtech model.
It’s all about eyeballs (and data)
The publishing industry has always been about eyeballs, because eyeballs generate their revenue. Much of the tech sector is going the same way. In the absence of subscription revenue much technology innovation is built on ad supported models.
Google is eyeballs and data. Facebook is eyeballs and data. Everything is eyeballs and data.
But the traditional home of disruptive companies, the kind that have generated the internet’s reputation for disintermediation, is the technology startup sector itself, isn’t it? So what’s going on?
It’s not that the startup world has incomprehensibly passed digital advertising by; it’s that in this particular sector the short-term incentives have lined up to foster a glut of mediation instead of disintermediation.
Ironic certainly, but the question that needs to be answered is, at what point does it become a problem?
I think we are already there and the reason why I say that lies in the wider mechanics of today’s digital publishing business model.
It is worth pointing out that I do not intend to set out a vision here for the whole of the publishing industry’s business future. What I want to do instead is to pull on just one loose thread, one that hopefully might reveal a feasible possible future, as the cardigan it belongs to starts to disintegrate.
The loose thread is this.
As a publishing medium, the web has no effective inventory limit imposed by physics, cost or performance.
- Adding pages to a printed medium requires capacity at the printing press, plus the cost of adding relevant content that has commercial worth to both consumers and advertisers
- The number of worthwhile sites to place outdoor posters is limited and generally fully explored
- TV is bounded by production and distribution costs and the fact that there are 24 hours in a day
- Radio has a finite audience, as does cinema, which is not growing
- Direct marketing messages are bounded by the size of target populations
The internet, on the other hand, offers trivial extension costs, a seemingly infinite audience and some unique content creation opportunities.
But what about worthwhile content, right? That surely establishes some kind of significant commercial boundary. Well, yes and no. Some parts of the publisher universe have to pay for their content, but the ability to originate all your own content is no longer an entry level requirement.
Take, for example, something like Buzzfeed or the Huffington Post, both very legitimate publications. Some of their content is provided free by the writers; some of it is given value by the quality of the data optimisation (multiple headlines and automatic analysis mean loads of eyeballs for only one writer’s cheque, and a relatively small one at that, as the writer isn’t the most important part of the equation); and some of it, a small part, might be remunerated at traditional levels.
Some sites simply steal content while generating ad revenues (whiffy publishers), and some sites use a variety of legitimate but infuriating tactics to inflate the impression count (3,000-word articles split over eight pages, for example), to say nothing of the sites that stuff footers with 1×1 pixels.
These are all troubling practices, but more worrisome is the rise of retargeting, via tracking technology, and the buying of audience profiles through ad exchanges. Retargeting and profiling are more worrisome because they are totally legitimised by the industry at large, are vulnerable to significant fraud (the biggest problem), and are educating a generation of advertisers (client and agency) to disregard huge swathes of the wisdom that we, as an industry, have learned about advertising over many years.
We have forgotten about the value of context and the quality of the ad environment.
This has happened because of the huge volume of useless inventory ‘magically’ masquerading as prime media, distorting the economics of the marketplace. It doesn’t help that misaligned incentives across the whole executional chain (publisher, agency, advertiser) are hampering the efforts to clean up the situation.
I should first explain why this inventory is useless, although to be fair some of it isn’t totally useless (it’s not far off).
Some of it is just low grade, sold as something better because of the addition of insight from data analysis. This is the retargeting strategy: the idea that because I can track you from a premium environment to a low-grade environment, there is no need to pay the higher cost of the quality placement. After all, you’re the same person, aren’t you? Why wouldn’t you be just as persuaded consuming an advert on GQ.com as on, say, uselessshite.com?
To be fair, the idea of buying an audience, which is what we are seeing here, is not alien to advertising; it is, after all, similar to the way we buy TV. But it is not a good like-for-like comparison, however much we want it to be. This is, again, because one medium (TV) has a finite inventory, while the other is not even close to finding its market-defined ceiling.
In this case it changes the dynamics of verification.
A TV buyer is aware if the network is dumping its weak spots in one buy, while the digital buyer doesn’t know anything about the contextual quality of their purchase. In both circumstances the buyer will get the audience they purchased but the TV buyer can exercise more control over the environment, and hence the value, that their audience is delivered into.
I dislike retargeting strategies because I think they are theoretically weak: environment quality is important and should not be disregarded. But retargeting is at least derived from a legitimate concept, trying to put the ad in front of the right person. If that seems like a low bar for acceptance, it is.
Sadly it gets a lot worse from here.
The second class of useless inventory, is the inventory that isn’t even human. This is the bigger issue. Bot traffic is running between 30% and 46% of online display impressions, depending on which set of stats you want to use.
You did read that right. Roughly 1 in 3 digital display impressions being bought today aren’t even human.
There is no argument that can possibly turn these into good media tactics. If no humans are seeing these adverts then it follows, without controversy, that they can’t influence a human to purchase a product. Yet the industry at large is making these buys, seemingly with full knowledge every day.
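To make the waste concrete, here is a back-of-envelope sketch using the 30–46% bot range quoted above. The budget figure is entirely made up for illustration.

```python
# Back-of-envelope: how much display spend never reaches a human,
# using the 30-46% bot-traffic range quoted in the text.
# The budget figure is purely illustrative.

def wasted_spend(budget, bot_fraction):
    """Split a display budget into (wasted, human) portions,
    assuming bot impressions are priced the same as human ones."""
    wasted = budget * bot_fraction
    return wasted, budget - wasted

budget = 1_000_000  # hypothetical annual display budget

for bot_fraction in (0.30, 0.46):
    wasted, human = wasted_spend(budget, bot_fraction)
    print(f"bot share {bot_fraction:.0%}: "
          f"{wasted:,.0f} wasted, {human:,.0f} in front of people")
```

Even at the bottom of the range, nearly a third of the money buys nothing at all.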
This is not an unreported phenomenon, by the way, I am not breaking news here.
It’s very important to understand that media planners and buyers aren’t idiots. Not by a long margin. There must, therefore, be reasons that can help explain this behaviour.
First, as previously mentioned, the commercial incentives really don’t help:
- clients are charged with hitting cost and reach targets
- agencies need to help them hit those targets, as that is the job they are hired to do, but also their business models are significantly based on trading volume
- publishers need to squeeze every penny they can from the market at the lowest executional cost possible
These conditions might explain why some parts of the industry are making these mistakes, but they aren’t sufficient to explain why the whole industry is making these mistakes. If this was all that was going on the only brands doing this would be those in urgent need of cost control. The successful top quartile, at least, would still be executing against ‘premium’ inventory. But they aren’t, so something even more foundational must be at work.
To understand what that is we need to investigate how this inventory is getting into circulation. The question is a harsh one: how can an industry that has operated for so long suddenly be duped into buying campaigns on such a flawed basis?
This is where the impact of adtech becomes a nightmare.
Adtech has created automated networks that operate, in real time on either side of the publisher. Publishers can buy part of their audience from a network, and then they can sell advertising against that exact same purchased visitor, through another network.
If you can manage the money such that the buy costs less than the sell, even by a tiny amount, then over vast volumes of impressions the riches await. Moreover, if 30–40% of the traffic being sold is created by bots, those costs can be incredibly low, which in turn reduces the price needed on the ad networks. Which means that the advertisers are picking up impressions at a stunningly low price too.
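A hedged sketch of that arbitrage, with entirely invented prices (CPM being the cost per thousand impressions): the margin on any one impression is negligible, but it compounds over volume.

```python
# Sketch of the buy-cheap/sell-dear impression arbitrage described above.
# All CPM prices here are invented for illustration only.

def arbitrage_profit(impressions, buy_cpm, sell_cpm):
    """Profit from buying traffic at buy_cpm and selling ad
    impressions against it at sell_cpm (both priced per 1,000)."""
    cost = impressions / 1000 * buy_cpm
    revenue = impressions / 1000 * sell_cpm
    return revenue - cost

# A tiny margin per thousand impressions...
per_thousand = arbitrage_profit(1_000, buy_cpm=0.10, sell_cpm=0.15)

# ...becomes real money at bot-assisted volumes.
at_volume = arbitrage_profit(500_000_000, buy_cpm=0.10, sell_cpm=0.15)

print(f"margin per 1,000 impressions: {per_thousand:.2f}")
print(f"profit on 500m impressions:   {at_volume:,.0f}")
```

The point of the sketch is simply that when the bought traffic is bot-generated and therefore nearly free, almost any sell price clears a profit.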
And that’s what’s happening, no joke.
There is a veil of respectability in buying these cheap impressions because the kosher media properties, who are looking to establish high CPMs in exchange for real advertising value (with their premium stock), are using these exchanges to generate incremental revenue on their remnant inventory too. Along with the impressions sold in old-fashioned deals between humans, this is what makes up the 60% of inventory that is actually put in front of real people.
Because there is too much inventory out there, as explained by the economics of digital real estate noted above, this massive cost saving can be comfortably excused as simple supply-and-demand dynamics.
However, as also already noted, none of this is a secret. The industry is aware. It’s therefore a very fair question to ask how this situation could possibly be tolerated, let alone enthusiastically adopted.
Part of the question is answered by the intense fervour generated around the technology itself. The technology is clever and complex; there is much to be admired in engineering terms. However, this very complexity enables a troubling but comfortable ignorance: it means the buyers can’t vouch for the veracity of their buys even if they wanted to. As things stand, they can’t be held accountable.
If that sounds hyperbolic, I should point out that this complex machinery isn’t lamented for the separation of knowledge it creates, at all; quite the opposite, in fact.
John Battelle just loves adtech
To make my case I would point you towards the following three links. They are all connected to, or written by, John Battelle, an important and intelligent champion of the adtech/digital advertising/digital landscape. An industry leader, he is no fool.
First up is the highly instructive visualisation I linked to in the first part of this essay, which shows the technology I have been talking about in action. It’s well constructed and explains, better than a wall of text, how this all works. If you haven’t already, please do look at this link, even if you look at none of the others; it shows how the modern digital market operates more clearly than anything else I have seen. Battelle is very proud of it, as he should be.
I am keen to highlight this explanation of the adtech model because I don’t want anyone to think that I have misunderstood what is being sold. This explanation is provided not by sceptics like me, but by enthusiastic cheerleaders.
Near the end we are triumphantly told that,
Dozens of sophisticated servers can be involved in a single ad placement, which takes less than a quarter of a second.
I’m really not sure that is something any media planner should be happy reading, to be honest, knowing as we do that at least a third of impressions aren’t even human.
Secondly we have Battelle’s homage to the deep importance of adtech. He is almost universally positive about it, although I sense a few moments in this ‘love letter’ that suggest a few notes of caution have made themselves known to him. If you think I am being snarky in describing this link as an homage and a ‘love letter’ I would point out in riposte that the title (seemingly without irony) of the post is “Why the banner ad is heroic and adtech is our greatest technology artefact”.
He proudly heralds the LUMAscapes as evidence of success rather than inane profligacy, champions the technology that allows “a pair of shoes to chase you across the web” as heroic, instead of creepy, but pays nothing but dismissive lip service to the deep seated concerns raised by several respected leaders of today’s digital world such as Lawrence Lessig, Jonathan Zittrain and Tim Wu, none of whom are uninformed luddites.
OK. Let’s step back for a second. When you think of this infrastructure, are you concerned? Good. Because it’s imperative that we consider the choices we make as we engage with such a portentous creation…..
What are the architectural constraints of the infrastructure which processes that information? What values do we build into it? Can it be audited? Is it based on principles of openness, or is it driven by business rules and data-structures which favor closed platforms?
These questions have been raised, and continue to be well articulated, by Lessig, Zittrain, Wu, and many others. But we’re entering a new, more urgent era of this conversation. Many of these authors’ works warned of a world where code will eventually augur early lock down in political and social conventions. That time is no longer in the future. It’s now. And I believe as goes adtech, so goes our social code.
My emphasis added. He acknowledges concerns, big concerns, but is alarmingly sanguine about trying to solve them. If adtech is our social code as he suggests, then our future is damningly bleak.
However the 3rd link is the one I struggle with the most.
This is where we learn that Battelle has been appointed by the IAB as co-chair of the “traffic of good intent” task force. Good intent in this context means traffic that is actually worth buying. In their words, the task force exists to,
“more effectively address the negative impacts” of bots and other “non-intentional” traffic.
Which, quite frankly, is a weak mission statement when the actual job required is the eradication of the massive fraud at the heart of digital advertising.
Still, what can we expect? The co-chair of this task force is the same man who believes that
Every retail store you visit, every automobile you drive (or are driven by), every single interaction of value in this world can and will become data that interacts with this programmatic infrastructure.
Right. Time to step back and return to my narrative. I need to explain why Battelle cannot idolise this technology and also solve the fraud problem without destroying privacy.
[I must state, at this juncture, that my concerns, the reason I am writing this, are not rooted in a fear of a world with no privacy, although I do not wish to live in that world. In truth my concerns are for the veracity of my trade, I work in advertising, it’s what I do, I would prefer it be intelligent and effective not automated and fraudulent.]
Battelle does a great job of demonstrating and lauding the very complexity that totally obfuscates the ability to examine the veracity of the media buys. It is this complexity that forces a critical compromise on us.
Don Marti lays the compromises bare, as a 3 way play. “Ad tech, privacy, fraud control: pick two?”
He makes a strong case.
Remember, ad tech is being gamed by big volumes of fraudulent non-human ad traffic being passed off as the real thing. As Battelle’s graphic points out, the data work that fuels this market is all profile-based and anonymous. No one is following around named individuals; instead they are following a 35-year-old male who plays golf and was recently browsing for a new pair of shoes. They know a lot about you, as it goes, but not your name (apparently).
Under this system there is enough space for 40% of the traffic to be spoofed without getting caught.
The only way to resolve the fraud, then, and maintain the adtech structure, is to deliver complete visibility of who is who, where and when.
Or, to put it another way: to answer the question of whether that impression that was just purchased is a bot or a human requires that you, the human, forgo your desire, your right, to remain an anonymous data point. There is nothing partial about this solution; the only way to really kill the fraud is to always identify the humans. And not in a binary human/not-human way; a simple flag is too easy for a bot to overcome or spoof. To really kill fraud and maintain the adtech structure, we will have to relinquish our digital privacy in totality.
All of a sudden this line from Battelle becomes worryingly profound:
Every retail store you visit, every automobile you drive (or are driven by), every single interaction of value in this world can and will become data that interacts with this programmatic infrastructure.
The italic emphasis in that quote is his by the way, not mine.
So, which couplet do you fancy most?
- Ad tech + privacy = lots of fraud
- Ad tech + fraud control = no privacy
- Fraud control + privacy = no ad tech
Option 1 should be totally unacceptable to the whole advertising industry (publishers, agencies and advertisers) but it’s actually where we find ourselves today.
Option 2 is, in my opinion, ultimately outside the influence of the advertising industry. Facebook may have revealed a significant comfort with hugely reduced digital privacy among great swathes of the population, but there are big signs that this trend is not fully supportable long into the future. Digital natives are aware of the role of online personas and spend as much time curating the ‘correct’ image on the mainstream tools as they do behaving like teenagers on the networks that Mum, Dad, aunty Glad and uncle Bill don’t know about. The Snowden/NSA revelations are not going to help reverse that trend.
Moreover the world at large needs a level of achievable anonymity to function. Cities were the first such artefact, providing the young with an arena within which to grow and find themselves, outside of the gaze of those who ‘know’ them best. There is a reason much innovation is rooted in the city, not the farm.
Option 3, on the other hand, makes sense for a lot of reasons, not just in terms of privacy. For a start it would force digital advertising to re-assess some of the more human aspects of driving useful commercial interactions instead of ceding more and more decision making to misunderstood algorithms. We might even start to understand how best to use the digital medium as a branding channel, something that has been glossed over in the data revolution.
It could also be part of the evolution of digital publishing into a fiscally secure enterprise. Rampant content theft by rogue commercial entities, and the erosion of premium rates for quality placements, would by definition be somewhat stymied. It would also start to create an economic boundary for the quasi-infinite inventory problem by reintroducing meaningful performance limitations.
A good result all round no?
So how do we get there?
If we want to clean up this situation it is painfully obvious what needs to happen: we simply take adtech out of the picture and, in its absence, return to first principles. The reasons why advertising works, why it sells products, haven’t changed because of the internet. We can still function without adtech.
The trickier part of the equation is getting to this inflection point, getting the industry at large to choose to move past adtech. That’s a lot harder, and the nature of that journey will be influenced by whose outrage is powerful enough to drive change: consumer outrage, advertiser outrage, or a bit of both.
It’s true that the consumer is concerned about privacy today in ways that have been lacking in recent years. This is inspired by Snowden and the NSA, as opposed to creepy adtech. Yet here in the UK there still doesn’t seem to be a huge concern. This strange silence, this peculiar lack of British outrage, is not mirrored across the Atlantic though, and what is finally adopted in the States will soon become the standard here too. Similarly our European neighbours seem to have more concerns in this regard than we do, certainly the EU seems more committed to consumer protections in this area than the British government or population. So, even if the US doesn’t move I suspect the EU will.
The connection between commercial surveillance and state surveillance is robust and real, there can be no reform in the one without the other. So I fully imagine that regulatory responses to the Snowden scandal will impact on commercial data practice.
Finally, there is the influence of market forces, consumers seeking out privacy-friendly solutions, to account for, although the scale of that influence is not likely to prove substantial in the shorter term, in my opinion.
These factors, stemming from consumer outrage are actually relatively weak as agents of the kind of change I’m advocating. However they do suggest that the commercial surveillance machinery, adtech, will struggle to deliver the total surveillance advocated by Battelle. And if that is true then the ability to stem the tide of impression fraud is severely compromised.
The real force for change will likely be advertiser outrage and that will come from a different direction.
If total digital transparency doesn’t arrive to solve the fraud issue, then it will not take long for advertisers to start asking the hard questions they should really be asking today. Eventually they will refuse to pay a 40% surcharge for their media. The pervasive reality of useless inventory is not a secret; it can only become knowledge to a wider audience, not a smaller one. Soon enough the advertisers’ CFOs will cotton on.
At that point the advertisers will have to insist on fraud control, and the only option that will be able to deliver it, is turning off the adtech machine. You can imagine a discussion between the CMO and the CFO that goes a little bit like this.
CFO: Why are you buying inventory that you can’t verify?
CMO: Well, if we stop using exchanges we won’t be able to hit our reach and cost targets.
CFO: But you aren’t hitting them today. 40% of your traffic is robots!
CMO: Well, that’s true, but the only way to buy volume that cheaply is via exchanges. We can’t ignore digital advertising, we have to be there, it’s a cost of doing business.
CFO: OK, well can you show me data that proves these buys are effective? That it’s a cost of business worth paying?
CMO: Ah, no, not really. You’ll have to trust me.
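The CFO’s arithmetic is worth making explicit. A minimal sketch, using an assumed rate card price (the only figure taken from the post is the ~40% bot share):

```python
def effective_cpm(paid_cpm: float, fraud_rate: float) -> float:
    """Cost per thousand *human* impressions when a share of the
    purchased impressions is actually served to bots."""
    return paid_cpm / (1.0 - fraud_rate)

paid = 2.00   # assumed price per 1,000 purchased impressions (illustrative)
fraud = 0.40  # the ~40% bot share discussed above
real = effective_cpm(paid, fraud)
print(f"Paid CPM: £{paid:.2f}, effective human CPM: £{real:.2f}")
# With 40% bots, every pound of media buys only 60p of human audience,
# so the true cost per human impression is roughly 67% above the rate card.
```

Which is why “40% surcharge” arguably understates the problem: the premium on genuine impressions is nearer two-thirds.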
And here we come to one of the genuinely hidden aspects of all this, the effectiveness question. It’s very important and is a big part of the reason why this situation persists. You see, both the CFO and the CMO are talking some sense here, as strange as that sounds.
Measurement and Effectiveness – the Great Big Stinky Elephant in the Room
If you don’t work in advertising or marketing it might seem crazy, but it has long been difficult to conclusively prove which parts of an ad campaign were successful. Analysis is conducted mostly at the level of the media channels used, chiefly because this is how the money is committed.
So, for example, the CMO wants to know if £10 million is best spent on print or TV or radio or digital. More precisely actually, she wants to know what the optimal mix of those channels should be, the way channels work together is crucial, particularly with respect to digital. It’s not easy to do, and in some cases not possible at all.
Direct marketing, the stuff that asks you to phone up and buy something (or visit a website), was always more measurable. Should the call centre get 1,000 calls tonight? What is the target conversion rate: 15%? Did it happen? Both the calls and the sales can be counted, so those questions can be answered.
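That counting logic is simple enough to sketch. The call and conversion figures below are assumed for illustration, not numbers from any real campaign:

```python
def hit_targets(calls: int, sales: int,
                calls_target: int = 1000,
                conv_target: float = 0.15) -> bool:
    """Did tonight's response meet both the call volume target
    and the conversion rate target?"""
    conversion = sales / calls if calls else 0.0
    return calls >= calls_target and conversion >= conv_target

# 1,080 calls but only 140 sales: volume target met,
# conversion (~13%) falls short of the 15% target.
print(hit_targets(calls=1080, sales=140))
```

It is exactly this kind of closed-loop count, inputs and outcomes both observable, that brand marketing historically lacked.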
That worked well enough for a long while but it wasn’t perfect. Direct marketers would always like to send out their ads while there was a peak in brand awareness (often coinciding with the big TV campaign), even though the cost of that brand campaign was rarely incorporated into their ROI analysis.
Most ad spend wasn’t in direct marketing though; it was in brand marketing. Go back just 20 years and the media landscape was more concentrated and more innately understandable, and a big chunk of the advertising dollars were intended to push shoppers to retail outlets. As a result, proxy metrics like brand awareness were the best we had; technology simply couldn’t follow you from the ad to the purchase. These proxy metrics favoured big spends in the classic mass mediums, with TV considered king of the hill.
Although it is an oversimplification, there is some truth to the idea that whether or not you advertised on TV was mostly a function of the size of your overall budget. If you could afford to do TV, accepted wisdom was that you should. TV is both cheap (a low relative cost per thousand impressions) and expensive (a substantial cost of entry in total capital outlay); to shift brand metrics costs millions of pounds. There were lots of exceptions of course, but the general principle stood up to intuitive examination, and when good econometric studies were done (not often enough) these big spends were usually identified as most likely to shift the metrics. TV still dominates media spends today, by the way. This accepted wisdom has not yet been supplanted.
Nonetheless, when digital advertising came of age it was thought that we might be entering an era of eminently more measurable advertising. As it turned out, nothing could have been further from the truth.
Digital is an innately direct medium. Every interaction can be counted: if your computer sees my ad, the ad serving tools make me aware of it. If you click on my ad, that too is a solid data point that can be collected.
Even though digital was a direct medium it was never really ceded to the direct marketers; instead it became a part of the brand marketer’s playbook.
The resulting problem wasn’t one of competence; the counting methods at the heart of direct response were only ever a challenge to the ego (spreadsheets, moi?) and easily overcome. The problem was one of competing evaluation methodologies.
To fit digital into econometrics (the method of choice for evaluating brand campaigns) it needed to be assessed against the same cost metrics as the other channels: cost per rating point (a coarse, profiled audience buying metric) plus reach and frequency. This wasn’t possible, as the volatile digital landscape debarred effective reach and frequency measurement (a percentage-based metric), while absolute cost metrics (price per point) were replaced with performance metrics (cost per click). The absolute cost metrics existed of course, but they weren’t driving buying decisions.
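The incompatibility can be seen in a small sketch. A performance price (cost per click) only implies a per-impression price once you fix a click-through rate, and CTRs swing wildly; all numbers below are illustrative assumptions, not figures from the post:

```python
def implied_cpm(cpc: float, ctr: float) -> float:
    """CPM implied by a CPC buy: cost per click multiplied by
    the clicks generated per 1,000 impressions."""
    return cpc * ctr * 1000

cpc = 0.50  # assumed cost per click
for ctr in (0.001, 0.005, 0.02):  # click-through rates vary enormously
    print(f"CTR {ctr:.1%} -> implied CPM £{implied_cpm(cpc, ctr):.2f}")
# The same £0.50 CPC implies a CPM anywhere from £0.50 to £10.00,
# so a CPC-priced buy cannot be slotted into an econometric model
# built on stable per-impression prices.
```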
There is nothing wrong with either body of cost management, but you simply cannot use them interchangeably and still facilitate a meaningful analysis. This is evidenced by the industry’s ongoing inability to effectively quantify the effect of TV air cover on digital acquisition campaigns. It stands to reason that there will be a relationship, but quickly measuring and quantifying that effect after the adverts have run and the business results are in… we haven’t got there yet.
The real takeout here? If you manage a reasonably sized advertising media budget, you still can’t accurately and quickly understand which part of your spend works well and which part doesn’t.
That was a long explanation of the vagaries of measurement, and I do apologise, but it is this inability to measure that is the reason why the CFO and the CMO can both be partially right.
It’s a madness to willingly spend 40% of your digital budget serving ads to robots. The CFO is therefore right to ask why.
But it’s also true that an advertiser with a decent budget these days is likely to be best served by a multi-channel strategy, and that digital display should almost certainly be a part of it. And while the industry at large is accessing the remnant market (via exchanges), it would be a very brave CMO who decided to take a stand all on their lonesome.
In truth the CMO probably should stop buying on the network exchanges, but her costs would rise significantly, as most advertisers very sensibly take advantage of remnant markets to control costs (in most channels, not just digital). And whereas stopping might well be the right thing to do academically, there are few finance departments that would be comfortable with such an overnight increase in relative cost.
So change will likely only come when the CFO gets wind of the adtech fraud. Money talks and the CFO is the ultimate money man. Until then the CMO is unlikely to highlight the issue, and subsequently will be unable to explain the increase in media costs and the reduction in reach.
The pressures that will lead to a solution are therefore essentially political: the politics between CFOs and CMOs, and between digital tech evangelists and more widely obligated marketers (client side and agency side). The regulatory politics playing out in response to the NSA debacle and consumer privacy concerns will also play their part.
When your ability to understand what is working is impaired you must return to first principles. So, we kill the adtech, and we once again plan ad campaigns according to the hard won knowledge of 100 years of advertising experience.
Back to disintermediation?
You might have noted that I still haven’t explained why there is no disintermediation in digital advertising. What I have done, instead, is show why the sector has been flooded with surplus mediation and proposed a sequence of events that might lead us to a position where digital mediates between advertisers and customers in the same way that offline advertising does.
However, the possibility that the internet can do to advertising what it did to book selling is more than a pipe dream. There is a lot of work being done to deliver what is called vendor relationship management, whereby the innate value of data insight is harnessed to bring efficiency to the commercial marketplace. But that data will be controlled by the consumer, not mediators.
More to follow in another post.
Ever found yourself in an office, several floors up, looking out of a large window that stretches from floor to ceiling? I have, quite a few times actually (not that it’s anything special). On a few occasions I have leaned against the glass, trusting that it would be secure, and the fact that this is being written tells you, of course, that it was. On a few other occasions I have pondered the idea of testing the glass with something more substantial than a simple lean. It’s a very easy idea to walk away from: I am never going to run at the glass to test that it is strong enough to prevent me falling to my death. If only Garry Hoy had felt the same way he wouldn’t be dead. Running at the glass was Garry’s party trick, until one day the whole pane popped out of its frame and that was the end of him.
Then there is Sir Adrian Paul Ghislain Carton de Wiart, an officer in the British army and possibly the hardest man who ever lived. From his Wikipedia entry:
He served in the Boer War, First World War, and Second World War; was shot in the face, head, stomach, ankle, leg, hip, and ear; survived two plane crashes; tunnelled out of a POW camp; and bit off his own fingers when a doctor refused to amputate them. Describing his experiences in World War I, he wrote, “Frankly I had enjoyed the war.”
Meet J002E3, an object found to be orbiting Earth in 2002. This was a mystery, as it had long been accepted wisdom that the moon was the only substantial body orbiting in our skies. Originally it was assumed to be an asteroid, but eventual analysis showed that its electromagnetic spectrum was consistent with the paint used by NASA on the Saturn V rockets. It’s no longer flying around us, as the moon helped to slingshot it out of our orbit, although it is expected to return some time in the 2040s. The first link shows its orbital path as it flirted with hitting us.
I have recently found a number of Wikipedia entries that serve as simple nodes for crowd-sourced lists, a nice dynamic resource. Here are three of them: objects in the solar system by size, science in 2013 by month, and emerging technologies by sector.