This is just a really pleasant read. I was aware of the Burning Man festival when it was still a fairly unknown event in the freak calendar. I was young enough, although not terribly young to be honest, to imagine I would go. Not this year, of course, next year; next year would be the year I’d get it together and get out there. The rather ironic desire to get organised enough to get out to Burning Man didn’t hit me then with the same laconic amusement that it does right now.
This rather wonderful recounting of a trip to Burning Man tells the tale of Wells Tower, his 69-year-old father, and two of their friends, and what they encountered when they got there. Tower senior, an economics professor with a voting record that included support for George W. Bush, and his son are not the kind of people you might expect to see at such a hippie mecca. But that fact, combined with the enthusiastic immersion of Tower senior and the nervous, sceptical but sincere engagement of Tower junior, makes this a true delight.
It also reassures me that I’ve still got plenty of time to book my own Black Rock experience. Next year. I’ll be organised enough next year.
This is a short interview with Michael Wolff taking a small, but forensic and caustic, sledgehammer to the state of play for digital media in 2014. He doesn’t go into depth here, but this is a quick read that should give you pause for thought when you consider the challenges that lie ahead for the health of digital publishing. I think we can turn the corner, that we can find a way to retain some quality in the landscape (quality publishing environments… as Wolff points out, we aren’t suffering for quality of information), but Wolff holds a bleaker vision than I do, even though we overlap on a great deal of what is happening.
Ian Bogost had been somewhat critical of the world, as he perceived it, of social games: the push for monetisation, the unthinking assumed ownership of a player’s time (he observes that deadlines driven by absolute time, not game time, force a user to play through the imposition of dread. Miss that harvest point… no sir, you don’t want to do that), among other things. Anyway, he was challenged, not unreasonably, to experience the developer’s side of things. And so, part game, part art and fully satirical, he created Cow Clicker, a game where you are allowed to click your cow every six hours. It was never a huge success by social game standards but it still pulled 50,000 users at its peak. This article gives us the detail.
Finally, a long exploration of what happens when opiates hit a community, and what subsequently happens when those opiates are taken away again through invasive and draconian law enforcement. This is the story of Subutex: what happened when it took over the drug scene in Georgia (the country, not the US state), and then when a deliberately aggressive government programme more or less eradicated its usage.
There are no folksy tales here that either side of the drug debate can take solace from. Those that advocate for a loosening of drug regulation must surely be appalled by the widespread adoption of Subutex abuse prior to the legal crackdown, while those that seek zero tolerance regimes must acknowledge that a safer (but not safe) drug that really doesn’t kill many people (even compared to the likes of methadone) has now been replaced by the sheer horror of krokodil.
Instead of retiring his syringe, he injected krokodil, a homebrew so vile that I had to ask him twice to repeat the recipe. It is simple. First get codeine from a pharmacy. Then mix it with toilet-cleaner, red phosphorus (the strike-strips on matchbooks are a good source), and lighter fluid. Voilà, your krokodil is served.
You did read that right.
I don’t like getting my content from apps. I use the mobile web instead if I have to, but the majority of my web access is on a laptop with a nice big screen, which means my prejudice doesn’t hurt me too much.
I know that that sets me apart from the majority of smartphone users, but as I said before, my primary web access point isn’t my phone so the limitations of the mobile web that have enabled app culture don’t really hurt me.
I also believe that it is a culture with a short shelf life.
It is a system with an unusual degree of friction in comparison to an easily imagined alternative. The whole native apps versus web apps argument that has been running for years now is largely decided, at any one point in time, by the state of play with regard to mobile technology and the user requirements of said technology. I’m pretty certain the technology will improve faster than our user requirements will complicate.
That easily imagined alternative is pretty straightforward too. A single platform, open and accessible to all, built on open protocols, that can support good enough access to hardware sensors, is served by effective search functionality and doesn’t force content to live within walled gardens.
The web, in other words. Not fit for purpose today (on mobile), but it will be soon enough.
Monetisation possibly remains a challenge, but I’m also pretty sure we will eventually learn to pay for the web services that are worth paying for. Meaningful payment paradigms will arrive; the profitless start-up/subsidised-media model (which is what almost all these apps are) can’t last. We will get over ourselves and move beyond the expectation of free.
Should everything be on the web? No, not at all. For me there is a fuzzy line between functional apps and content apps. The line is fuzzy because some content apps contain some really functional features, and vice versa. Similarly, some apps provide function specifically when a network connection is unavailable (Pocket, for example). The line is drawn between content and function only because I strongly believe that the effective searchability of content, via search engines or via a surfing schema, and the portability of that content via hyperlinks, are the heart of what the web does so well, and generate so much of its value. If all our content gets hidden away in silos and walled gardens then it simply won’t be read as much as if it were on the open web. And that would be a big, perverse shame.
Why can’t we just publish to the stream? Because it is a weak consumption architecture, because it is widely unsearchable (though I suspect that can be changed, provided we move beyond the infinite scroll pretty quickly), and because you don’t own it (don’t build your house on someone else’s land). That’s another whole essay and not the focus of this one.
I’ve felt this way for a long time so I am happy to say that finally some important people are weighing in with concerns in the same area. They aren’t worried about the same specific things as me, but I can live with that if it puts a small dent in the acceleration of app culture.
Chris Dixon recently published this piece calling out the decline of the mobile web. I think he is overstating the issue, actually, but only because I think the data doesn’t distinguish effectively between apps that I believe should be web based and those I am ambivalent about. He cites two key data sources.
The ComScore data for total desktop users vs total mobile users is solid enough, showing that mobile overtook desktop in the early part of this year. Interestingly, the graphic also shows that desktop user volumes are still rising, even if their rate of growth lags behind mobile. Not many people are taking this on board in the rush to fall in love with the so-called post-PC era. It’s important, because to really understand the second data point, provided by Flurry, you have to look at the very different reasons people are using their mobile computers.
Flurry tells us that the amount of time spent in apps, as a % of time on the phone/internet itself, rose from 80% in 2013 to 86% (so far) in 2014. That’s a lot of time spent in apps at the cost of time spent on the mobile web.
But the more detailed report shows that that headline number is less impressive than it at first seems. All that time in apps, it turns out, isn’t actually at the cost of time spent on the web; not all of it, anyway.
Of the 86%, 32% is gaming, 9.5% is messaging, 8% is utilities and 4% is productivity. That totals 53.5% of total mobile time, and 62% of all time spent in apps specifically. The fact that gaming now occurs within networked apps on mobile devices, rather than as standalone software on desktops, only tells us that the app universe has grown to include new functionality not historically associated with the desktop web. It does not tell us that this functionality has been taken away from the mobile web; in this case the mobile web has not declined. As I said, I agree with the thrust of the post, I just think the situation is less acute.
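To sanity-check those figures, a quick back-of-envelope calculation (the category percentages are the ones quoted above from the Flurry report):

```python
# Flurry 2014 figures quoted above: 86% of total mobile time is spent in apps.
# Category shares of *total* mobile time for the non-web-like activities:
non_web_categories = {
    "gaming": 32.0,
    "messaging": 9.5,
    "utilities": 8.0,
    "productivity": 4.0,
}

time_in_apps = 86.0  # % of total mobile time

non_web_share = sum(non_web_categories.values())        # share of total mobile time
share_of_app_time = non_web_share / time_in_apps * 100  # share of in-app time only

print(f"{non_web_share}% of total mobile time")        # 53.5% of total mobile time
print(f"{share_of_app_time:.0f}% of all in-app time")  # 62% of all in-app time
```

So roughly three fifths of the headline in-app time is activity that was never part of the desktop web to begin with.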
Chris Dixon’s main concern is for the health of his business, he’s a partner at Andreessen Horowitz (a leading tech VC), and he is seeing the rise of two great gatekeepers in Google and Apple, the owners of the two app stores that matter, as a problematic barrier to innovation, or rather a barrier to the success of innovation (there is clearly no shortage of innovation itself).
He’s not alone: Mark Cuban has made a similar set of observations.
Fred Wilson too, although he is heading in an altogether different and more exciting direction again, which I will follow up in a different post (hint: blockchain, distributed consensus and personal data sovereignty). Still, this quote from Fred’s post sums up the situation rather well from the VC point of view.
My partner Brad hypothesized that it had something to do with the rise of native mobile apps as the dominant go to market strategy for large networks in the past four to five years. So Brian pulled out his iPhone and I pulled out my Android and we took a trip through the top 200 apps on our respective app stores. And there were mighty few venture backed businesses that were started in the past three years on those lists. It has gotten harder, not easier, to innovate on the Internet with the smartphone emerging as the platform of choice vs the desktop browser.
The app stores are indeed problems in this regard; I don’t disagree, and will leave it to Chris and Mark’s posts to explain why. I am less concerned about this aspect partly because it’s not my game: I don’t make my money funding app building or building apps. But mostly I’m unconcerned because I really believe that (native) app culture has to be an adolescent phase (adolescent inasmuch as the internet itself is a youngster, not that the people buying and using apps are youngsters).
If I am right and this is a passing phase can we make some meaningful guesses as to what might change along the way?
I’ve been looking at mobile through the eyes of Clay Christensen’s innovator’s dilemma recently, which I find to be a useful filter for examining the human factors in technology disruptions.
Clearly, if mobile is disrupting anything it is disrupting desktop computing, and true to form the mobile computing experience is deeply inferior to that available on the desktop or laptop. There are, actually, lots of technical inferiorities in mobile computing, but the one I want to focus on is the simplest of all: size. The mobile screen is small.
Small small small small. I don’t think it gets enough attention, this smallness.
If it’s not clear where I’m coming from: small is an inferior experience in many ways, although it is, of course, a prerequisite for the kind of mobility we associate with our phones.
Ever wondered why we still call what are actually small computers, phones? Why we continually, semantically place these extremely clever devices in the world of phones and not computers? After all the phone itself is really not very important, it’s a tiny fraction of the overall functionality. Smartphones? That’s the bridge phrasing, still tightly tied to the world of phones not computers. We know they are computers of course, it’s just been easier to drive their adoption through the marketing semantic of telephony rather than computing. It also becomes easier to dismiss their limitations.
Mobile computers actually predate smartphones by decades; they were, and still are, called laptops.
Anyway, I digress. For now let’s run with the problems caused by small computers. There are two big issues, which manifest in a few different ways: screen size and the user interface.
Let’s start with screen size (which, of course, is actually one of the main constraints that hampers the user interface). Screen size is really important for watching TV, or YouTube, or Netflix, etc.
We all have a friend, or two, who gets very proud of the size of their TV. The development trend is always towards bigger and bigger TV screens. The experience of watching sport, or a movie, on a big TV is clearly a lot better than on a small one (and cinema is bigger again). I don’t need to go on, it’s a straightforward observation.
No-one thinks that the mobile viewing experience is fantastic. We think it’s fantastic that we can watch a movie on our phones (I once watched the first half of Wales v Argentina on a train as I was not able to be at home, I was stoked), but that is very different from thinking it’s a great viewing experience. The tech is impressive, but then so were those tiny 6-inch black and white portable TVs that started to appear in the ’80s.
Eventually all TV (all audio visual) content will be delivered over IP protocols. There are already lots of preliminary moves in this area (smart TVs, Roku, Chromecast, Apple TV) and not just from the incumbent TV and internet players. The console gaming industry is also extremely interested in owning TV eyeballs as they fight to remain relevant in the living room. Once we watch all our AV content via internet protocols the hardware in the front room is reduced to a computer and a screen, not a TV and not a gaming console. If the consoles don’t win the fight for the living room they lose the fight with the PC gamers.
Don’t believe me? Watch this highlights reel from the Xbox One launch… TV TV TV TV TV (and a little bit of Halo and COD).
The device that the family watches moving images on will eventually be nothing but a screen. A big dumb screen.
The Smart functionality, the computing functionality, that navigates the protocols to find and deliver the requisite content, well that will be provided by any number of computing devices. The screen will be completely agnostic, an unthinking receptor.
The computer could be your mobile device, but it probably won’t be. As much as I have derided the phone part of our smartphones, they are increasingly our main form of telephony, and as attractive as the thought is, we are unlikely to want to take our phones effectively offline every time we watch the telly. Even if at some stage mobile works out effective multitasking, we will still likely keep our personal devices available for personal use.
What about advertising? That’s another area where bigger is just better. I’ve written about this before, all advertising channels trend to larger dominant experiences when they can. We charge more for the full page press ad, the homepage takeover, the enormous poster over the little high street navigational signs telling you that B&Q is around the corner. Advertising craves attention and uses a number of devices to grab/force it. Size is one of the most important.
Mobile is the smallest channel we have ever tried to push advertising into. It won’t end well.
The Flurry report tries to paint a strong picture for mobile advertising, but it doesn’t do a very good job; the data just doesn’t help them out. They start with the (sound) idea that ad spend follows consumption, i.e. where we spend time as consumers of media is where advertisers buy media, in roughly similar proportions. But the only part of the mobile data they present that follows this rule is Facebook, the only real success story in the field of mobile display advertising. The clear winner, as with all things digital, is Google, quite disproportionately taking away the market with their intention-based ad product delivered via search.
There are clouds gathering in the Facebook mobile advertising story too. Brands have discovered that it is harder to get content into their followers’ newsfeeds. The Facebook solution? Buy access to your followers.
That’s a bait and switch. A lot of these brands are significant contributors to the whole Facebook story itself. Every single TV ad, TV program, media columnist or local store that spent money to devote some part of their communication real estate to the line “follow us on Facebook” helped to build the Facebook brand, helped to make it the sticky honeypot that it is today.
And now if you want to leverage those communities, you have to pay again. Ha.
Oh, but there’s value in having a loyal, active base of followers, I hear the acolytes say. Facebook is just the wrapper. OK, in that case ask them all for an email address and be guaranteed that 100% of your communications will reach them. Think that will work? Computer says no.
Buying Facebook friends? That’s starting to whiff too.
That leaves Google/search. The only widely implemented current revenue model that has a future on the mobile web. Do you know what would make the Google model even more effective? A bigger screen, and a content universe on the open web.
To be fair the idea that we must pursue pay models is starting to take hold in the communities that matter. This is good news.
Read this piece. It’s as honest a view as you will find within the advertising community, and it still doesn’t get that the mobile medium is fundamentally flawed simply because it is small. There is no get-out-of-jail-free card for mobile, unless it gets bigger.
Many of the businesses we met have a mobile core. A lot of them make apps providing a subscription service, some manage and analyze big data, and others are designed to improve our health care system. The hitch? None of them rely on advertising to generate revenue. A few years ago, before the rise of mobile, every new startup had a line item on its business plan that assumed some advertising revenue.
What did you think about the Facebook WhatsApp purchase? It can’t have been about buying subs; it’s inconceivable that most of the 450m WhatsApp users weren’t already on Facebook. Similarly, even though there is an emerging-markets theme to the WhatsApp universe, Facebook hasn’t been struggling to pick up adoption in new markets. My money is that Zuckerberg bought a very rare cadre of digital consumer: consumers willing to pay for a service that can be found in myriad other places for free. WhatsApp charges $1 a year; a number of commenters have noted this and talked of Zuckerberg purchasing a revenue line. I think they have missed the wood for the trees. That $1 a year is good money when you have 450m users, for sure. It’s even better when you have nearly a billion and many of them are getting fed up with spammy newsfeeds. What will you do if you get an email that says “Facebook, no ads ever again, $5”? He doesn’t just get money, he also gets to concentrate on rebuilding the user experience around consumers, not advertisers.
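As a back-of-envelope illustration of why that $1 line matters (the $5 figure is the hypothetical no-ads price floated above, and the 10% uptake rate is pure guesswork on my part, not anything from Facebook):

```python
# Rough, illustrative numbers only: the user counts come from the discussion
# above; the $5 no-ads price is hypothetical and the uptake rate is invented.
whatsapp_users = 450_000_000
facebook_users = 1_000_000_000  # "nearly a billion"

# WhatsApp's existing revenue line at $1 per user per year.
whatsapp_revenue = whatsapp_users * 1
print(f"${whatsapp_revenue / 1e6:.0f}m/year")  # $450m/year

# A hypothetical "$5, no ads ever again" offer, guessing 10% of users take it.
assumed_uptake = 0.10
no_ads_revenue = facebook_users * assumed_uptake * 5
print(f"${no_ads_revenue / 1e6:.0f}m/year")    # $500m/year
```

Even at a modest guessed uptake, a paid no-ads tier across Facebook’s user base is in the same ballpark as the entire WhatsApp subscription line, which is the point: the purchase looks like buying a proven willingness to pay, not the $450m itself.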
The WhatsApp purchase is the first time I have tipped my metaphorical hat in Zuckerberg’s direction. The rest is morally dubious crud that fell into his lap, this is a decently impressive strategic move. I still won’t be signing up anytime soon though.
Ok. Small. Bad for advertising. What else?
Small devices also have necessarily limited user interfaces. The qwerty keyboard for all of its random derivation works really well. There is no other method that I use that is as flexible and as speedy for transferring words and numbers into a computer.
The touchscreen is impressive, though; the innovation that, for my money, transformed the market via the iPhone. My first thought when I saw one was that I would be able to actually access web pages and make that experience useful through the pinch-and-zoom functionality. I had a BlackBerry at the time, with the little scroll wheel on the side. Ugh. But even that wonder passed, now that we optimise the mobile web to fit on small screens as efficiently as possible. No need to pinch and zoom, just scroll up and down, rich consumption experience be damned.
Touchscreens are great, they just aren’t as flexible as a keyboard. As part of an interface suite alongside a keyboard they are brilliant. As an interim interface to make the mobile experience ‘good enough’, again brilliant. As my primary computing interface … no thanks.
When the iPad arrived, the creative director at my agency, excited like a small child as he held his tablet for the first time, decided to test its ability as a primary computing interface. He got the IT department to set up a host of VPNs so that he could access his networked folders via the internet (no storage on the iPad) and then went on a tour of Eastern Europe without his MacBook.
He never did it again. He was honest about his experience, it simply didn’t work.
What about non-text creativity? Well, if you can, have a look inside the creative department of an advertising agency. It will be wall-to-wall Apple. But they will also be the biggest Macs you have ever seen. Huge, glorious screens, everywhere.
You can do creative things on a phone or a tablet, but it is not the optimised form factor. It seems silly to even have to type that out.
If small is bad, and if mobile plays out according to Christensen’s theories (and remember, this is just speculation), then eventually mobile has to get bigger. Considering that we have already reduced the acceptable size of tablets in favour of enhanced mobility, this seems like some claim.
For my money the mobile experience goes broadly in one of two ways.
In both scenarios it becomes the primary home of deeply personal functionality and storage: your phone, your email (which is almost entirely web accessible these days), your messenger tool, maps, documents, music, reminders, life-logging tools (how many steps did you step today?), as well as (in comparison) trivial entertainments such as the kind of games we play on the commute home.
The fork is derived from what happens with desktop computing. The computing of creation and the computing of deeper better consumption experiences.
We either end up in a world with such cheap access to computing power that everyone has access to both mobile computing and desktop computing, via myriad access points, and the division of labour between the two is derived purely from which platform best suits a task. Each household might have 15 different computers (and I’m not even thinking of wearables at this stage) for 15 different tasks. This is a bit like the world we have today, except that desktop access is much more available and cheap, and not always on a desk.
Alternatively, the mobile platform does mature to become our primary computing device, a part of almost everything we do. This is much more exciting. This is the world of dumb screens that I mentioned earlier. If mobile is to fulfil its promise and become the dominant internet access point then it must transcend its smallness. There are ways this could happen.
We already have prosthetics. Every clip on keyboard for a tablet is a prosthetic.
We already have plugins. Every docking station for a smartphone or a tablet is a plugin.
The quality of these will get better but they still destroy the idea of portability. If you have to haul around all the elements of a laptop in order to have effective access to computing you may as well put your laptop in your bag.
I can see (hold on for the sci-fi wishlist) projection solutions for both keyboards and screens, where the keyboard is projected onto a horizontal surface, the screen onto a vertical surface. Or what about augmented and virtual reality? Google Glass is a player, although the input interface is a dead duck for full-scale computing, but how about an optimised VR headset that isn’t the size of a shoebox? Zuckerberg might have been thinking about the metaverse when he bought Oculus, but there will potentially be other interim uses before humanity decides to migrate to the virtual à la Ready Player One; VR could be a part of that.
The more plausible call is a plugin/cheap-access hybrid model. Some computing will always be use-case specific. I think it’s likely that a household’s primary AV entertainment screen, for example, the big one in the living room, will have its own box, maybe controlled by a mobile interface but configured such that your mobile platform isn’t integral to media consumption. The wider world, though, I think will see dumb screens and intelligent mobile devices working together everywhere. Sit down, slot in your mobile: a nice big screen, all your files, all your preferences, a proper keyboard (the dumb screen is largely static, of course, so it comes with a keyboard) and a touchscreen too, all in all a nice computing experience.
There are worlds of development to get there, of course. Storage, security and battery life, among other areas, will all need a pick-up, but if we have learnt anything since 1974 it’s that computers do get better.
I really woke up to Christensen’s theory with the example of the transistor radio, which was deeply inferior to the big old valve tabletops that dominated a family’s front room, often in pride of place. The transistor radio was so poor that the kids who used them needed to tightly align the receivers with the broadcast towers; if you were facing south when you needed to face north, you were only going to get static. But that experience was good enough. Not for mum and dad, who still listened to the big radio at home, but for the teenagers this was the only way they could hear the songs they wanted to hear without adults hovering. All of those limitations were transcended eventually, and in the end everyone had their own radio. One in the kitchen, one in every teenager’s room, the car, the garage, portable and not portable. Radios eventually got to be everywhere, but none of them were built around valves like those big tabletop sets.
Mobile computing is a teenager’s first wholly owned computing experience (I’m not talking about owning the hardware). It happens when they want it to, wherever they are. It’s also been many an older person’s first wholly owned computing experience, and it’s certainly been the first truly mobile computing experience for anyone. I might have made the trivial observation that laptops were the first mobile computers, and they were, but you couldn’t pull one out of your pocket to get directions to the supermarket.
Christensen’s theory makes many observations. Among them are that the technology always improves and ends up replacing the incumbent that it originally disrupted. The users of the old technology eventually move to the new technology, the old never wins.
We aren’t going to lose desktop computing, though. Or at least we aren’t going to lose the need states that are best served by desktop computing: if you want to write a 2,000-word essay, your phone is always going to be a painful tool to choose. But you might be able to plug it into an easily available dumb workstation and write away in comfort instead.
At least let’s hope so.
This essay is the follow up, long overdue, to this small observation, posted in February 2013.
The whole issue of disintermediation is one of the key phenomena of the internet age, yet for some reason digital advertising seems to have missed out. In fact, somewhat perversely, digital advertising has instead managed to go in entirely the other direction, filling up with a whole new class of mediators, the adtech companies.
I’m going to argue that this is a situation that cannot last long (years not months, probably at least five). King Canute’s original command that the tide stop rising was never going to be a great business model.
Although the common telling of the Canute story has him drowning against the onslaught of the waves, there is evidence that he instead adapted to the new knowledge and changed his practices.
…he commanded that his chair should be set on the shore, when the tide began to rise. And then he spoke to the rising sea saying “You are part of my dominion, and the ground that I am seated upon is mine, nor has anyone disobeyed my orders with impunity. Therefore, I order you not to rise onto my land, nor to wet the clothes or body of your Lord”. But the sea carried on rising as usual without any reverence for his person, and soaked his feet and legs. Then he moving away said: “All the inhabitants of the world should know that the power of kings is vain and trivial, and that none is worthy the name of king but He whose command the heaven, earth and sea obey by eternal laws”. Therefore King Cnut never afterwards placed the crown on his head, but above a picture of the Lord nailed to the cross, turning it forever into a means to praise God, the great king.
I will cover the detail in a separate post, but I also believe that this could be the moment when digital advertising and digital marketing are forced to make some short-term and slightly painful changes that will eventually open up a whole new vista of opportunity and the chance to truly lead the media landscape.
Before going into the depth of the essay it might be worth taking a quick two-minute look at this link, which explains the core ideas behind how the adtech model works. It’s a clever and accurate explanation and is easy to digest, an informative visualisation of a highly complex system. It will also provide clarity regarding exactly which part of the adtech universe I am talking about.
Mediators and Disintermediators
Digital advertising hasn’t been disintermediated yet because, firstly, it is in and of itself a mediator (all advertising is) and, like all incumbent mediators, is not keen to be disintermediated. Simply moving advertising from the offline realm to the digital is not enough. It might seem that web platforms force disintermediation all on their own, but they don’t. Other factors need to align as well.
For a start there needs to be a new model that can replace the current one, and that new model also needs to deliver enough consumer/user benefit to overcome the ever present inertias.
Amazon, for example, took the intermediation costs out of the book market but replaced the incumbent model with a different way to sell books. From the consumer perspective the loss of the mediators was painless, almost invisible, they just bought their books in a different shop that happened to be located on the internet instead of the high street, and for significant cost savings too.
That Amazon could offer digital shopping, that their technology existed and was functional, was the substantial factor that enabled the new market dynamics to take hold.
It’s not the same in advertising today.
With regard to advertising, the consumers in question are the advertisers, not the people who buy the advertised products.
Businesses buy their advertising from a large (and getting larger) ecosystem of multi-skilled providers. Very few businesses create their advertising in-house and generally do not own the resources required to do so (copywriters, art directors, studios, media planners, media buyers, systems, relationships….etc). As a result marketing services agencies and advertiser marketing teams are currently deeply co-dependent.
Disintermediation, by definition, would seek to weaken that relationship, and few people on either side can see how that could function. In short, the advertisers are not using their market position to force disintermediation in the marketing services ecosystem, and hence their access to customers remains firmly managed.
Part of the reason for this co-dependency is the burgeoning complexity in the use of data.
The second and more problematic reason that we have mediation instead of disintermediation, therefore, is related to what’s happening with all this data. Or rather, not so much what is happening with the data as why so much effort is being concentrated on it. The companies that do things with data are in the marketing services sector, for the reasons stated above, and whereas advertisers’ markets grow if they sell their products or services (via marketing or sales initiatives, or whatever), the marketing services sector grows when it sells more services to the advertisers.
The most important acknowledged growth opportunity in the marketing services industry today is in data management and data targeting, and the enabler of both is adtech.
This is the prime economic incentive driving growth in advertising at large and it has the simultaneous benefit of minimising macro level change too, as the focus remains on trading eyeballs even while the industry appears to be innovating.
On the surface it seems that everyone is a winner. Even the publishers.
The short answer, then, is that digital advertising is currently avoiding disintermediation due to the absence of a plausible replacement and the dynamics/politics of the growth opportunities for marketing services. The advertisers, who could credibly drive the need for such a replacement, via market forces, are currently heavily co-dependent on a vibrant marketing services world, and in thrall to its innovations in data. As a result they are not putting any pressure on their suppliers for anything except more of the same.
But the short answer is deeply unsatisfying. If adtech were the correct way to go, the adtech market itself would be consolidating, not exploding.
As it happens there really is something deeply wrong with the adtech model.
It’s all about eyeballs (and data)
The publishing industry has always been about eyeballs, because eyeballs generate their revenue. Much of the tech sector is going the same way. In the absence of subscription revenue much technology innovation is built on ad supported models.
Google is eyeballs and data. Facebook is eyeballs and data. Everything is eyeballs and data.
But the traditional home of disruptive companies, the kind that have generated the internet’s reputation for disintermediation, is the technology startup sector itself, isn’t it? So what’s going on?
It’s not that the startup world has inexplicably passed digital advertising by; it’s that in this particular sector the short-term incentives have lined up to foster a glut of mediation instead of disintermediation.
Ironic certainly, but the question that needs to be answered is, at what point does it become a problem?
I think we are already there and the reason why I say that lies in the wider mechanics of today’s digital publishing business model.
It is worth pointing out that I do not intend to set out a vision here for the whole of the publishing industry’s business future. What I want to do instead is pull on just one loose thread, one that might reveal a feasible future as the cardigan it belongs to starts to disintegrate.
The loose thread is this.
As a publishing medium, the web has no inventory limit: unlike every other medium, it is not effectively bounded by physics, cost or performance.
- Adding pages to a printed medium requires capacity at the printing press, plus the cost of creating relevant content that has commercial worth to both consumers and advertisers
- The number of worthwhile sites to place outdoor posters is limited and generally fully explored
- TV is bounded by production and distribution costs and the fact that there are 24 hours in a day
- Radio has a finite audience, as does cinema, which is not growing
- Direct marketing messages are bounded by the size of target populations
The internet, on the other hand, offers trivial extension costs, a seemingly infinite audience and some unique content creation opportunities.
But what about worthwhile content, right? That surely establishes some kind of significant commercial boundary. Well, yes and no. Some parts of the publisher universe have to pay for their content, but the ability to originate all your own content is no longer an entry level requirement.
Take, for example, Buzzfeed or the Huffington Post, both very legitimate publications. Some of their content is provided free by its writers; some is given value by the quality of the data optimisation (multiple headlines plus automatic analysis equals loads of eyeballs for only one writer’s cheque, and a relatively small cheque at that, since the writer isn’t the most important part of the equation); and some of it, a small part, might be remunerated at traditional levels.
Some sites simply steal content, while generating ad revenues (whiffy publishers), and some sites use a variety of legitimate but infuriating tactics to up the impression count (3000 word articles split over 8 pages, for example), let alone the sites that stuff footers with 1×1 pixels.
These are all troubling practices but more worrisome is the rise of retargeting, via tracking technology and the buying of audience profiles through ad exchanges. Retargeting and profiling are more worrisome because they are totally legitimised by the industry at large, are vulnerable to significant fraud (the biggest problem) and are educating a generation of advertisers (client and agency) to disregard huge swathes of wisdom that we have, as an industry over many years, learned about advertising.
We have forgotten about the value of context and the quality of the ad environment.
This has happened because of the huge volume of useless inventory ‘magically’ masquerading as prime media, distorting the economics of the marketplace. It doesn’t help that misaligned incentives across the whole executional chain (publisher, agency, advertiser) are hampering the efforts to clean up the situation.
I should first explain why this inventory is useless, though to be fair some of it isn’t totally useless (it’s not far off).
Some of it is just low grade inventory being sold as something better because of the addition of insight from data analysis: this is the retargeting strategy. The idea is that because I can track you from a premium environment to a low grade one, there is no need to pay the higher cost of the quality placement. After all, you’re the same person, aren’t you? Why wouldn’t you be just as persuaded consuming an advert on uselessshite.com as on, say, GQ.com?
To be fair, the idea of buying an audience, which is what we are seeing here, is not alien to advertising; it is, after all, similar to the way we buy TV. But it is not a good like-for-like comparison, however much we want it to be, because one medium (TV) has a finite inventory while the other is nowhere near its market-defined ceiling.
In this case it changes the dynamics of verification.
A TV buyer knows if the network is dumping its weak spots into a buy; the digital buyer knows nothing about the contextual quality of their purchase. In both circumstances the buyer will get the audience they purchased, but the TV buyer can exercise more control over the environment, and hence the value, that their audience is delivered into.
I dislike retargeting strategies because I think they are theoretically weak: environment quality is important and should not be disregarded. But retargeting is at least derived from a legitimate concept, that of putting the ad in front of the right person. If that seems like a low bar for acceptance, it is.
Sadly it gets a lot worse from here.
The second class of useless inventory is the inventory that isn’t even human. This is the bigger issue. Bot traffic is running at between 30% and 46% of online display impressions, depending on which set of stats you use.
You did read that right. Roughly 1 in 3 digital display impressions being bought today aren’t even human.
There is no argument that can possibly turn these into good media tactics. If no humans are seeing these adverts then it follows, without controversy, that they can’t influence a human to purchase a product. Yet the industry at large is making these buys, seemingly with full knowledge every day.
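To put those bot numbers in cost terms, here is a back-of-the-envelope sketch. The bot fractions are the estimates quoted above; the £2.00 CPM figure and the function name are mine, purely for illustration. If a fraction of the impressions you buy are served to robots, your true cost per thousand human impressions is inflated by 1/(1 − bot fraction):

```python
def human_cpm(paid_cpm: float, bot_fraction: float) -> float:
    """Effective cost per thousand *human* impressions when some
    fraction of the impressions bought are served to bots."""
    if not 0 <= bot_fraction < 1:
        raise ValueError("bot_fraction must be in [0, 1)")
    return paid_cpm / (1 - bot_fraction)

# At the 30-46% bot estimates, a nominal £2.00 CPM really costs:
for bots in (0.30, 0.46):
    print(f"{bots:.0%} bots -> £{human_cpm(2.00, bots):.2f} per 1,000 humans")
```

In other words, a buyer who believes they are paying £2 per thousand people is really paying somewhere between £2.86 and £3.70.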
This is not an unreported phenomenon, by the way, I am not breaking news here.
It’s very important to understand that media planners and buyers aren’t idiots. Not by a long margin. There must, therefore, be reasons that can help explain this behaviour.
First, as previously mentioned, the commercial incentives really don’t help:
- clients are charged with hitting cost and reach targets
- agencies need to help them hit those targets, as that is the job they are hired to do, but also their business models are significantly based on trading volume
- publishers need to squeeze every penny they can from the market at the lowest executional cost possible
These conditions might explain why some parts of the industry are making these mistakes, but they aren’t sufficient to explain why the whole industry is making these mistakes. If this was all that was going on the only brands doing this would be those in urgent need of cost control. The successful top quartile, at least, would still be executing against ‘premium’ inventory. But they aren’t, so something even more foundational must be at work.
To understand what that is we need to investigate how this inventory gets into circulation. The question is a harsh one: how can an industry that has operated for so long suddenly be duped into buying campaigns on such a flawed basis?
This is where the impact of adtech becomes a nightmare.
Adtech has created automated networks that operate in real time on either side of the publisher. A publisher can buy part of its audience from one network, and then sell advertising against that exact same purchased visitor through another.
If you can manage the money such that the buy costs less than the sell, even by a tiny amount, then over vast volumes of impressions riches await. Moreover, if 30-40% of the traffic being sold is created by bots, those costs can be incredibly low, which in turn reduces the price needed on the ad networks. Which means that advertisers are picking up impressions at a stunningly low price too.
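The arbitrage being described is simple enough to sketch. The figures below are hypothetical and the function name is mine; the point is only that a sliver of margin per impression, multiplied across billions of impressions, is a real business:

```python
def arbitrage_profit(impressions: int, buy_cpm: float, sell_cpm: float) -> float:
    """Profit from buying traffic at one CPM and reselling ad impressions
    against those same visitors at a higher CPM."""
    return impressions / 1000 * (sell_cpm - buy_cpm)

# Buy (partly bot) traffic at a £0.10 CPM and sell the impressions on an
# exchange at £0.60: only £0.50 of margin per thousand impressions, yet
# over a billion impressions it is roughly half a million pounds.
print(round(arbitrage_profit(1_000_000_000, buy_cpm=0.10, sell_cpm=0.60)))
```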
And that’s what’s happening, no joke.
There is a veil of respectability over these cheap impressions because the kosher media properties, who look to establish high CPMs in exchange for real advertising value (with their premium stock), use these exchanges to generate incremental revenue on their remnant inventory too. Along with the impressions sold in traditional deals between humans, this makes up the 60% of inventory that is actually put in front of real people.
Because there is too much inventory out there (the economics of digital real estate, as already noted), this massive cost saving can be comfortably excused as simple supply and demand dynamics.
However, as also noted, none of this is a secret. The industry is aware. It is therefore entirely fair to ask how this situation could be tolerated, let alone enthusiastically adopted.
Part of the question is answered by the intense fervour generated around the technology itself. This technology is clever and complex; there is much to be admired in engineering terms. However, that very complexity enables a troubling but comfortable ignorance: buyers can’t vouch for the veracity of their buys even if they wanted to. As things stand they can’t be held accountable.
If that sounds hyperbolic, I should point out that this complex machinery isn’t lamented for the separation of knowledge it creates; quite the opposite, in fact.
John Battelle just loves adtech
To make my case I would point you towards the following three links. They are all connected to, or written by, John Battelle, an important and intelligent champion of the adtech/digital advertising landscape. An industry leader, he is no fool.
First up is the highly instructive visualisation I linked to in the first part of this essay, which shows the technology I have been describing in action. It is well constructed and explains better than a wall of text how this all works. Even if you look at none of the others, please do look at this link; it shows how the modern digital market operates more clearly than anything else I have seen. Battelle is very proud of it, as he should be.
I am keen to highlight this explanation of the adtech model because I don’t want anyone to think that I have misunderstood what is being sold. This explanation is provided not by sceptics like me, but by enthusiastic cheerleaders.
Near the end we are triumphantly told that,
Dozens of sophisticated servers can be involved in a single ad placement, which takes less than a quarter of a second.
I’m really not sure that is something any media planner should be happy reading, to be honest, knowing as we do that at least a third of impressions aren’t even human.
Secondly we have Battelle’s homage to the deep importance of adtech. He is almost universally positive about it, although I sense moments in this ‘love letter’ where a few notes of caution have made themselves known to him. If you think I am being snarky in describing the piece as an homage and a ‘love letter’, I would point out in riposte that its title (seemingly without irony) is “Why the banner ad is heroic and adtech is our greatest technology artefact”.
He proudly heralds the LUMAscapes as evidence of success rather than inane profligacy, champions the technology that allows “a pair of shoes to chase you across the web” as heroic, instead of creepy, but pays nothing but dismissive lip service to the deep seated concerns raised by several respected leaders of today’s digital world such as Lawrence Lessig, Jonathan Zittrain and Tim Wu, none of whom are uninformed luddites.
OK. Let’s step back for a second. When you think of this infrastructure, are you concerned? Good. Because it’s imperative that we consider the choices we make as we engage with such a portentous creation…..
What are the architectural constraints of the infrastructure which processes that information? What values do we build into it? Can it be audited? Is it based on principles of openness, or is it driven by business rules and data-structures which favor closed platforms?
These questions have been raised, and continue to be well articulated, by Lessig, Zittrain, Wu, and many others. But we’re entering a new, more urgent era of this conversation. Many of these authors’ works warned of a world where code will eventually augur early lock down in political and social conventions. That time is no longer in the future. It’s now. And I believe as goes adtech, so goes our social code.
My emphasis added. He acknowledges concerns, big concerns, but is alarmingly sanguine about trying to solve them. If adtech is our social code as he suggests, then our future is damningly bleak.
However, the third link is the one I struggle with the most.
This is where we learn that Battelle has been appointed by the IAB as co-chair of the “traffic of good intent” task force. Good intent in this context means traffic that is actually worth buying. In their words, the task force exists to
“more effectively address the negative impacts” of bots and other “non-intentional” traffic.
Which quite frankly is a weak mission statement when the actual job required is the eradication of the massive fraud at the heart of digital advertising.
Still, what can we expect? The co-chair of this task force is the same man who believes that
Every retail store you visit, every automobile you drive (or are driven by), every single interaction of value in this world can and will become data that interacts with this programmatic infrastructure.
Right. Time to step back and return to my narrative. I need to explain why Battelle cannot idolise this technology and also solve the fraud problem without destroying privacy.
[I must state, at this juncture, that my concerns, the reason I am writing this, are not rooted in a fear of a world with no privacy, although I do not wish to live in that world. In truth my concerns are for the veracity of my trade, I work in advertising, it’s what I do, I would prefer it be intelligent and effective not automated and fraudulent.]
Battelle does a great job of demonstrating and lauding the very complexity that totally obfuscates the ability to examine the veracity of the media buys. It is this complexity that forces a critical compromise on us.
Don Marti lays the compromises bare, as a three-way play: “Ad tech, privacy, fraud control: pick two?”
He makes a strong case.
Remember, ad tech is being gamed by big volumes of fraudulent non-human ad traffic being passed off as the real thing. As Battelle’s graphic points out, the data work that fuels this market is all profile-based and anonymous. No-one is following named individuals around; instead they are following a 35-year-old male who plays golf and was recently browsing for a new pair of shoes. They know a lot about you, as it goes, but not your name (apparently).
Under this system there is enough space for 40% of the traffic to be spoofed without getting caught.
The only way to resolve the fraud, then, while maintaining the adtech structure, is to deliver complete visibility of who is who, where and when.
Or, to put it another way: answering the question of whether the impression you just purchased went to a bot or a human requires that you, the human, forgo your desire, your right, to remain an anonymous data point. There is nothing partial about this solution. A simple binary human/not-human flag is too easy for a bot to overcome or spoof; to really kill fraud and maintain the adtech structure we will have to relinquish our digital privacy in its totality.
All of a sudden this line from Battelle becomes more worryingly profound,
Every retail store you visit, every automobile you drive (or are driven by), every single interaction of value in this world can and will become data that interacts with this programmatic infrastructure.
The italic emphasis in that quote is his by the way, not mine.
So, which couplet do you fancy most?
- Ad tech + privacy = lots of fraud
- Ad tech + fraud control = no privacy
- Fraud control + privacy = no ad tech
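Marti’s trilemma can be stated mechanically: of the three properties, any system keeps at most two, and choosing a pair names the casualty. A trivial sketch (the structure and names are mine, purely illustrative):

```python
from itertools import combinations

PROPERTIES = ("ad tech", "privacy", "fraud control")

def casualty(kept: set) -> str:
    """Given the two properties you keep, return the one you lose."""
    (lost,) = set(PROPERTIES) - kept
    return lost

# Enumerate the three couplets and what each one costs you:
for pair in combinations(PROPERTIES, 2):
    print(f"{pair[0]} + {pair[1]} -> no {casualty(set(pair))}")
```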
Option 1 should be totally unacceptable to the whole advertising industry (publishers, agencies and advertisers) but it’s actually where we find ourselves today.
Option 2 is, ultimately, in my opinion, outside the influence of the advertising industry. Even though Facebook has revealed significant comfort, among great swathes of the population, with the idea of hugely reduced digital privacy, there are big signs that this trend will not be supportable long into the future. Digital natives are aware of the role of online personas and spend as much time spoofing the ‘correct’ image on the mainstream tools as they do behaving like teenagers on the networks that Mum, Dad, aunty Glad and uncle Bill don’t know about. The Snowden/NSA revelations are not going to reverse that trend.
Moreover the world at large needs a level of achievable anonymity to function. Cities were the first such artefact, providing the young with an arena within which to grow and find themselves, outside of the gaze of those who ‘know’ them best. There is a reason much innovation is rooted in the city, not the farm.
Option 3, on the other hand, makes sense for a lot of reasons, not just in terms of privacy. For a start it would force digital advertising to re-assess some of the more human aspects of driving useful commercial interactions instead of ceding more and more decision making to misunderstood algorithms. We might even start to understand how best to use the digital medium as a branding channel, something that has been glossed over in the data revolution.
It could also be part of the evolution of digital publishing into a fiscally secure enterprise. Rampant content theft by rogue commercial entities and the eroding of premium rates for quality placements would by definition be somewhat stymied. It would also start to create an economic boundary for the quasi infinite inventory problem by reintroducing meaningful performance limitations.
A good result all round, no?
So how do we get there?
If we want to clean up this situation, it is painfully obvious what needs to happen: we take adtech out of the picture and, in its absence, return to first principles. The reasons why advertising works, why it sells products, haven’t changed because of the internet; we can still function without adtech.
The trickier part of the equation is getting to this inflection point, getting the industry, at large, to choose to move past adtech. That’s a lot harder and the nature of that journey will be influenced by whose outrage is powerful enough to drive change, consumer outrage, advertiser outrage or a bit of both.
It’s true that the consumer is concerned about privacy today in ways that have been lacking in recent years. This is inspired by Snowden and the NSA, as opposed to creepy adtech. Yet here in the UK there still doesn’t seem to be a huge concern. This strange silence, this peculiar lack of British outrage, is not mirrored across the Atlantic though, and what is finally adopted in the States will soon become the standard here too. Similarly our European neighbours seem to have more concerns in this regard than we do, certainly the EU seems more committed to consumer protections in this area than the British government or population. So, even if the US doesn’t move I suspect the EU will.
The connection between commercial surveillance and state surveillance is robust and real, there can be no reform in the one without the other. So I fully imagine that regulatory responses to the Snowden scandal will impact on commercial data practice.
Finally there is the influence through market forces of consumers seeking privacy friendly solutions to account for, although the scale of such influence is not likely to prove substantial in the shorter term, in my opinion.
These factors, stemming from consumer outrage, are actually relatively weak as agents of the kind of change I’m advocating. However, they do suggest that the commercial surveillance machinery, adtech, will struggle to deliver the total surveillance advocated by Battelle. And if that is true, the ability to stem the tide of impression fraud is severely compromised.
The real force for change will likely be advertiser outrage and that will come from a different direction.
If total digital transparency doesn’t arrive to solve the fraud issue, then it will not take long for advertisers to start asking the hard questions that they should really be asking today. Eventually they will refuse to pay a 40% surcharge for their media. The pervasive reality of useless inventory is not a secret, it can only become knowledge to a wider audience not a smaller one. Soon enough the advertisers’ CFOs will cotton on.
At that point the advertisers will have to insist on fraud control, and the only option that will be able to deliver it, is turning off the adtech machine. You can imagine a discussion between the CMO and the CFO that goes a little bit like this.
CFO: Why are you buying inventory that you can’t verify?
CMO: Well if we stop using exchanges we won’t be able to hit our reach and cost targets
CFO: But you aren’t hitting them today. 40% of your traffic is robots!
CMO: Well, that’s true but the only way to buy volume that cheaply is via exchanges. We can’t ignore digital advertising, we have to be there, it’s a cost of doing business
CFO: OK, well can you show me data that proves that these buys are effective? That it’s a cost of business worth paying?
CMO: Ah, no, not really. You’ll have to trust me.
And here we come to one of the genuinely hidden aspects of all this, the effectiveness question. It’s very important and is a big part of the reason why this situation persists. You see, both the CFO and the CMO are talking some sense here, as strange as that sounds.
Measurement and Effectiveness – the Great Big Stinky Elephant in the Room
If you don’t work in advertising or marketing it might seem crazy but it has long been difficult to conclusively prove which sections of an ad campaign were successful. Analysis is conducted mostly at the level of the media channels that were used, chiefly because this is how the money is committed.
So, for example, the CMO wants to know if £10 million is best spent on print or TV or radio or digital. More precisely, she wants to know what the optimal mix of those channels should be; the way channels work together is crucial, particularly with respect to digital. It’s not easy to do, and in some cases not possible at all.
Direct marketing, the stuff that asks you to phone up and buy something (or visit a website) was always more measurable. Should the call centre get 1000 calls tonight? What is the target conversion rate, 15%? Did it happen? Both the calls and the sales can be counted so that question can be answered.
That worked well enough for a long while but it wasn’t perfect. Direct marketers would always like to send out their ads while there was a peak in brand awareness (often coinciding with the big TV campaign), even though the cost of that brand campaign was rarely incorporated into their ROI analysis.
Most ad spend wasn’t in direct marketing though, it was in brand marketing. Go back just 20 years and the media landscape was more concentrated and more innately understandable, and a big chunk of the advertising dollars was intended to push shoppers to retail outlets. As a result, proxy metrics like brand awareness were the best we had; technology simply couldn’t follow you from the ad to the purchase. These proxy metrics favoured big spends in the classic mass mediums, with TV considered king of the hill.
Although it is an oversimplification, there is some truth to the idea that whether or not you advertised on TV was mostly a function of the size of your overall budget. If you could afford to do TV, accepted wisdom said you should. TV is both cheap (a low relative cost per thousand impressions) and expensive (a substantial total capital outlay): shifting brand metrics costs millions of pounds. There were plenty of exceptions, of course, but the general principle stood up to intuitive examination, and when good econometric studies were done (not often enough) these big spends were usually identified as the most likely to shift the metrics. TV still dominates media spends today, by the way. This accepted wisdom has not yet been supplanted.
Nonetheless, when digital advertising came of age it was thought that we might be entering an era of eminently more measurable advertising. As it turned out, nothing could have been further from the truth.
Digital is an innately direct medium. Every interaction can be counted: if your computer sees my ad, the ad serving tools make me aware of it; if you click on my ad, that too is a solid data point that can be collected.
Even though digital was a direct medium it was never really ceded to the direct marketers; instead it became part of the brand marketer’s playbook.
The resulting problem wasn’t one of competence; the counting methods at the heart of direct response were only ever a challenge to the ego (spreadsheets, moi?) and easily overcome. The problem was one of competing evaluation methodologies.
To fit digital into econometrics (the method of choice for evaluating brand campaigns) it needed to be assessed against the same cost metrics as the other channels: cost per rating point (a coarse, profiled audience-buying metric) plus reach and frequency. This wasn’t possible, as the volatile digital landscape debarred effective measurement of reach and frequency (percentage-based metrics), while absolute cost metrics (price per point) were replaced in practice with performance metrics (cost per click). The absolute cost metrics existed, of course, but they weren’t driving buying decisions.
There is nothing wrong with either body of cost management, but you simply cannot use them interchangeably, and facilitate a meaningful analysis. This is evidenced by the ongoing inability of the industry to effectively quantify the effect of TV aircover on digital acquisition campaigns. It stands to reason that there will be a relationship but quickly measuring and quantifying that effect after the adverts have run and the business results are in….we haven’t got there yet.
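To see why the two bodies of cost management don’t translate, consider the conversion itself. A performance price (cost per click) only implies an absolute impression cost once you fix a click-through rate, and CTR is exactly the volatile quantity the digital landscape won’t hold still. A sketch with hypothetical numbers (function name mine):

```python
def implied_cpm(cpc: float, ctr: float) -> float:
    """Cost per thousand impressions implied by a cost-per-click
    price at a given click-through rate."""
    return cpc * ctr * 1000

# The same £0.50 CPC implies wildly different impression costs as CTR moves:
for ctr in (0.001, 0.005, 0.02):
    print(f"CTR {ctr:.1%} -> implied CPM £{implied_cpm(0.50, ctr):.2f}")
```

A twenty-fold swing in CTR means a twenty-fold swing in the implied absolute cost, which is why a channel bought on CPC cannot simply be slotted into an econometric model built on price per point.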
The real takeout here? If you manage a reasonably sized advertising media budget, you still can’t accurately and quickly understand which part of your spend works well and which part doesn’t.
That was a long explanation of the vagaries of measurement, and I do apologise, but it is this inability to measure that explains why the CFO and the CMO can both be partially right.
It’s a madness to willingly spend 40% of your digital budget serving ads to robots. The CFO is therefore right to ask why.
But it’s also true that an advertiser with a decent budget these days is likely to be best served by a multi-channel strategy, and that digital display should almost certainly be part of it. And while the industry at large is accessing the remnant market (via exchanges), it would be a very brave CMO who decided to take a stand all on their lonesome.
In truth the CMO probably should stop buying on the network exchanges, but her costs would rise significantly, because most advertisers very sensibly use remnant markets to control costs (in most channels, not just digital). However right withdrawal might be academically, there are few finance departments that would be comfortable with such an overnight increase in relative cost.
So change will likely only come when the CFO gets wind of the adtech fraud. Money talks and the CFO is the ultimate money man. Until then the CMO is unlikely to highlight the issue, and subsequently will be unable to explain the increase in media costs and the reduction in reach.
The pressures that will lead to a solution are therefore essentially political: the politics between CFOs and CMOs, and between digital tech evangelists and more widely obligated marketers (client side and agency side). The regulatory politics playing out in response to the NSA debacle and consumer privacy concerns will also play their part.
When your ability to understand what is working is impaired you must return to first principles. So, we kill the adtech, and we once again plan ad campaigns according to the hard won knowledge of 100 years of advertising experience.
Back to disintermediation?
You might have noted that I still haven’t explained why there is no disintermediation in digital advertising. What I have done, instead, is show why the sector has been flooded with surplus mediation and proposed a sequence of events that might lead us to a position where digital mediates between advertisers and customers in the same way that offline advertising does.
However, the possibility that the internet can do to advertising what it did to book selling is more than a pipe dream. There is a lot of work being done to deliver what is called vendor relationship management, whereby the innate value of data insight is harnessed to bring efficiency to the commercial marketplace. But that data will be controlled by the consumer, not mediators.
More to follow in another post.
In 2003 I developed a concept I called AISS, which stands for All Ideas Start Somewhere. It was just a series of thoughts, nothing was ever built.
By today’s standards it isn’t anything to make you look twice, it was simply an idea to track what I saw would be the vast publicly available reaches of content. And then to understand how that content moved through the various technical layers that presented it to us, the public.
It was clever, for its time, but I still couldn’t get anyone to spend money to put it together. It is, of course, a lot easier 10 years later to claim with hindsight, that it was a good idea.
Since then, of course, various parts of the concept have come to pass, completely independently of me, and none in the exact format that I was suggesting. My logic was not unique, just early and unconnected to working capital.
That said, I still think an opportunity has been missed. The thing that is still absent is best thought of as a missed turning: a right/left option that, when taken, led into a murky forest instead of an angelic dale or meadow.
In 2003 I was suggesting that the unit of tracking should be the content itself, and that we needed to monitor the movement of the content through our digital ecosystems.
Instead we track people.
The internet is a vast looking glass that hardly anyone uses to its fullest potential. Instead of understanding what people want and giving it to them, we instead understand where people are and give them what we want them to have.
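To make the contrast concrete, here is a minimal sketch of what tracking content rather than people might look like. This is purely illustrative: the class, channel names, and events are invented for this example, not drawn from AISS itself. The unit of measurement is a stable content ID, and what gets recorded is where that content surfaces, with no reference to any individual viewer.

```python
from collections import defaultdict

class ContentTracker:
    """Hypothetical sketch: the unit of tracking is the content itself,
    not the person viewing it. No user identifiers are stored at all."""

    def __init__(self):
        # content_id -> channel -> number of appearances on that channel
        self.appearances = defaultdict(lambda: defaultdict(int))

    def record(self, content_id, channel):
        """Note that a piece of content surfaced on a given channel."""
        self.appearances[content_id][channel] += 1

    def spread(self, content_id):
        """How many distinct channels has this content reached?"""
        return len(self.appearances[content_id])

tracker = ContentTracker()
tracker.record("essay-42", "blog")
tracker.record("essay-42", "aggregator")
tracker.record("essay-42", "aggregator")
print(tracker.spread("essay-42"))  # content has reached 2 distinct channels
```

The point of the sketch is the shape of the data: everything hangs off the content ID, so you learn how an idea moves through the ecosystem without building a profile of any person along the way.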
As everything changes, nothing changes.
Marketing has spent most of its life pushing crafted messages at vast demographics hoping for response rates that are typically a fraction of a percentage point. Brand marketing has hitched its wagon to any number of perceived positive memes hoping that some of that positivity will attach to the brand itself. It’s a logical idea, but I’m at a loss as to why no-one seems to use the internet’s salient visibility to track the ideas or memes that are naturally liked by the target demographics.
Instead we have built a vast people tracking infrastructure that is so sophisticated it was co-opted by the NSA and GCHQ. This isn’t about the Snowden revelations, or at least it’s not about the morality of the Snowden revelations, it is instead a short note to point out that we have built such a great and vast tracking panopticon that even the spies wanted in.
And in return? Well, we still don’t get response rates that beat what we could get 20 years ago while we are failing to build trust with our customers over the use of their data. And the key words in that last sentence? …their data… it might sit on company databases today, but it is still their (the consumer, the prospect, the customer) data. It should be treated as such.
In economics, disintermediation is the removal of intermediaries in a supply chain, or “cutting out the middleman”. Instead of going through traditional distribution channels, which had some type of intermediate (such as a distributor, wholesaler, broker, or agent), companies may now deal with every customer directly, for example via the Internet.
Meanwhile from LUMA Partners, we have LUMAscapes, which are pretty impressive visualisations of a number of different parts of the current digital marketing landscape (display, search, video, mobile, social, commerce, digital capital and gaming). Here’s one for display advertising.
This is going to be a short post.
I thought I’d just make the observation, with very little in the way of commentary (none, in fact), that despite the internet’s reputation for disintermediating everything it touches, there doesn’t seem to be much disintermediation going on in digital marketing. That’s all.
Update: I have now tried to answer the question properly, here