20 links

I've not been posting much (November last year was the last time I got anything out), but I have been reading a lot. So I figured I'd put up a bunch of links, all of which I think are worth reading. In no particular order…

 

Eigenjesus and Eigenmoses

Instead I asked myself: “what other ‘vicious circles’ in science and philosophy could one unravel using the same linear-algebra trick that CLEVER and PageRank exploit?”  After all, CLEVER and PageRank were both founded on what looked like a hopelessly circular intuition: “a web page is important if other important web pages link to it.”  Yet they both managed to use math to defeat the circularity.

 

 

Confessions of a Sociopath

Perhaps the most noticeable aspect of my confidence is the way I sustain eye contact. Some people have called it the “predator stare.” Sociopaths are unfazed by uninterrupted eye contact. Our failure to look away politely is also perceived as being aggressive or seductive. It can throw people off balance, but often in an exciting way that imitates the unsettling feeling of infatuation. Do you ever find yourself using charm and confidence to get people to do things for you that they otherwise wouldn’t? Some might call it manipulation, but I like to think I’m using what God gave me.

 

Airport runway markings

Another thing to keep an eye out for? Markings that have been blasted away. The FAA keeps strict rules about how paint or marks are removed—they’re absolutely never to be painted over, since the decay of the new mark might wear away to reveal conflicting information. Instead, airports have to sandblast or power wash the old paint away and re-do it.

 

Common sense regarding the Apple watch…

And yet, the panel seemed perfectly comfortable speculating for over an hour on what features in the Apple Watch would be delivering a great user experience and how users would find the Watch valuable. This really surprised me because the panel of experts was speculating about features that are available to test run today…on Android Wear.

 

Oh, regulators…such fun

Silva related how the top bankers in the nation were asked to contribute money to save Lehman. He described his disappointment when Goldman executives initially balked. Silva acknowledged that it might have been a hard sell to shareholders, but added that “if Goldman had stepped up with a big number, that would have encouraged the others.”

“It was extraordinarily disappointing to me that they weren’t thinking as Americans,” Silva says in the recording. “Those two things are very powerful experiences that, I will admit, influence my thinking.”

 

The Economist talks sense about online advertising

Essentially their model — a lot of them seem to be reliant on "eventually we hope AOL or Yahoo will buy us, and then it's their problem if we don't make any money." So they're doing some really good journalism, but I think there's a problem there.

 

Charlie Munger …you’d best read this if you like hearing how the really successful guys do things 

When I talk about this false precision, this great hope for reliable, precise formulas, I am reminded of Arthur Laffer, who’s in my political party, and who is one of the all-time horses’ asses when it comes to doing economics. His trouble is his craving for false precision, which is not an adult way of dealing with his subject matter.

 

It took some time but people are finally looking at apps in a more instructive way

By comparison, news-related applications do not require a lot of phone resources. They collect XML feeds, some low resolution images and render those in pre-defined, non-dynamic templates. They use a tiny fraction of a modern smartphone's processing power.

 

This explains why we have security theater in the post 911 world

Yet the attack involved not merely the towers, but two other actions, in particular a successful attack on the Pentagon. Why is it that only a few people take proper notice of that? If the 9/11 operation had been a conventional military campaign, the Pentagon attack would have received most of the attention. In this attack, al‑Qaida managed to destroy part of the enemy’s central headquarters, killing and wounding senior commanders and analysts. Why is it that public memory gives far more importance to the destruction of two civilian buildings, and the killing of brokers and accountants?

 

The sharing economy – a critical assessment

What triggers a mutation? Schumpeter insists that this evolution is disciplined above all by new consumers’ needs combined with the new institutional forms that must be invented to reliably meet those needs.

 

That point where you go…ah its all bulls**t

Venture capitalists may remind founders not to get carried away because they may need to raise money again soon, perhaps in a less-favorable market. Fundraising at a lower valuation is known as a “down round.” It’s a major Silicon Valley no-no in terms of perception, and it can have negative effects, depending on the other stipulations of the agreement.

 

You should probably just read everything this guy writes, just for fun

In Yemen you don’t really talk while eating. This makes the heavily-armed diners staring up at me from their carpets seem even more menacing. The wait staff, on the other hand (at least I assume that’s what they are) run round shouting at the top of their voices. Almost everyone has already purchased a bag of qat, which they’ll start chewing after lunch. The qat is leafy and kind of bulky, and comes in a colored plastic bag. Some men tuck it under their shirt, so it hangs over their belt like a paunch. Others carry it in their hand. The effect is to make everyone look excessively health conscious and obsessed with salad.

 

Who knew agile was a labour movement..

The fifth principle asks that managers trust developers and create a pleasant work environment that motivates good work. The eleventh principle advocates for self-organizing teams—giving developers autonomy and freedom to choose the projects and features that most interest them and avoiding a dictatorial style of management.

 

Always bet on text

It can be indexed and searched efficiently, even by hand. It can be translated. It can be produced and consumed at variable speeds. It is asynchronous. It can be compared, diffed, clustered, corrected, summarized and filtered algorithmically. It permits multiparty editing. It permits branching conversations, lurking, annotation, quoting, reviewing, summarizing, structured responses, exegesis, even fan fic. The breadth, scale and depth of ways people use text is unmatched by anything.

 

So, music is in a good place…

I’m sure it happens every day – that a kid in one of these far-flung places can find a new favourite band, send that band a message, and that singer of that band will read it and personally reply to it from his cell phone half a world away. How much better is that? I’ll tell you, it’s infinitely better than having a relationship to a band limited to reading it on the back of the record jacket. If such a thing were possible when I was a teenager I’m certain I would have become a right nuisance to the Ramones.

 

A view on where London is heading, might explain why the conservatives are so complacent about a Brexit

In order to gain the position of a global financial super-hub, and potentially that of world financial capital, the City of London has been making an effort to attract Chinese and Islamic finance, both in rapid expansion.

 

Context collapse, women and the internet

We no longer know what neighborhood we’ve wandered into, we don’t know the shape of the building, or the condition of the room. We don’t know what other people are wearing — our first and most profound way of signaling cultural affiliation and social-economic status.

 

How it all began

“But as time went on, every time there was a problem in the camp, he was at the centre of it,” Abu Ahmed recalled. “He wanted to be the head of the prison – and when I look back now, he was using a policy of conquer and divide to get what he wanted, which was status. And it worked.” By December 2004, Baghdadi was deemed by his jailers to pose no further risk and his release was authorised.

 

There’s something about medium that causes me to pause but I still like what is being said here…

That’s why, internally, our top-line metric is “TTR,” which stands for total time reading. It’s an imperfect measure of time people spend on story pages. We think this is a better estimate of whether people are actually getting value out of Medium

 

We need more people to burst these bubbles

I, personally, want to put a gold chain on my phone, pop it into a waistcoat pocket, and refer to it as my “digital fob watch” whenever I check the time on it. Just to make the point in as snotty and high-handed a way as possible: This is the decadent end of the current innovation cycle, the part where people stop having new ideas and start adding filigree and extra orifices to the stuff we’ve got and call it the future.

 


How will brand advertising work? (Disintermediation and the curious case of digital advertising continued)

This is the third installment in my series exploring why the internet has so far failed to disintermediate digital advertising, yet at the end of this essay we still won't have arrived at a solution that describes a fully disintermediated model for digital advertising. That's because I'm still trying to work it out.

This blog is chiefly written, selfishly, to force me to examine and understand things. This particular topic has been no different, and I have found my perspectives changing quite a lot as I dig further into the issues. Writing this particular essay has taken some time; I published the last one in March, 8 months ago, and a lot has changed in the ad tech landscape in that time. I'm sure ad tech is here to stay, but I'm also sure that the problems caused by the current ad tech deployment are still problems.

This exploration is primarily about advertising, and therefore approaches these problems through the advertising filter alone, even though the issues relating to data ubiquity and misuse affect more than just advertising.

From the point of view of an advertiser the biggest problem with ad tech (programmatic, as it's called by advertisers) is that it, and the internet at large, is not currently set up to deliver brand advertising. At all.

The goal of this essay, therefore, is to examine what needs to happen to mould the internet publishing space into one that can support brand advertising goals as well as direct response or performance goals, something that I believe is very important.

Somehow, somewhere along the way we are going to have to fix the revenue side of modern digital publishing without recourse to clickbait and lists, which have their place for sure I guess, but should not be the only revenue lifeline for publishers. Similarly the trend to ever more intrusive and creepy data work is unlikely to end well.

In my opinion the rescue of revenue models for premium publishing output is important for lots of reasons but it is particularly important for the future of brand advertising.

Metrics are more than just scorecards – a quick recent history of brand and DR media process

I started working for a global media agency in early 2000, as a direct response (DR) specialist on the account of a global technology/computing brand, working across their European markets.

At my interview I was asked if I was familiar with direct response metrics, which at the time meant cost per response (CPR) and cost per acquisition (CPA) with a little ROI (return on investment) thrown in. In late 1999, when that interview took place, we didn’t have cost per click on the dashboard, because the use of digital media as an advertising medium was still very young and because where it was getting attention it was being treated as a branding medium.

I remember being quite taken aback at that line of questioning, not offended, just surprised. I had thought it was an entry-level concept, to have an understanding of the very basic metrics CPR and CPA. What, after all, is there to understand? It is worth pointing out that the question wasn't asking about attribution techniques, nor was it hinting at deeper subtleties about measurement in terms of statistical validity, longer term lifetime value or brand halo effects.
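
For anyone who hasn't met them, here is a minimal sketch of that arithmetic. The numbers are invented and purely illustrative.

```python
# Illustrative only: the basic direct response metrics discussed above.
spend = 50_000.0        # media spend on the campaign
responses = 2_500       # enquiries, calls, coupons returned, leads generated
acquisitions = 400      # responses that became paying customers
revenue = 120_000.0     # revenue attributed to those acquisitions

cpr = spend / responses          # cost per response
cpa = spend / acquisitions       # cost per acquisition
roi = (revenue - spend) / spend  # return on investment

print(f"CPR: {cpr:.2f}  CPA: {cpa:.2f}  ROI: {roi:.0%}")
```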

The questioning was as basic as could be simply because my interviewers were not in any way familiar with these metrics. They were, for the record, both very sharp people, but they had no experience of direct response marketing, and certainly no experience of the management of media within direct response operational models.

Somehow or other I must have covered my bemusement well enough because I was eventually offered the job.

This not entirely exciting anecdote points to something much bigger than the scope of understanding that 3 people held in late 1999 about DR metrics. What it shows is that the wider media agency world's intellectual alignment in 1999/2000 was almost exclusively focused on the metrics, methodologies and theories of how to build brands.

There were some agencies that focused on direct response as a discipline but they were specialists, and in comparison to the brand companies in the media agency space, they were very small. When I was working client side in financial services we used one of these specialist agencies. As a result I was as unaware of the branding metrics, reach and frequency, as my interviewers were of CPR and CPA. During that strange interview worlds were colliding. If the roles had been reversed my interviewers would have been as bemused as I was.

Today it is all very different.

There are no media agencies that do not offer a DR product (in the extreme cases only because they will at the very least offer digital media buying) and the DR product is no longer the idiot brother of brand. Back in 1999 it was the home of junk mail and "sausage machine" decision making driven by spreadsheets (as difficult as it is to imagine, in 1999/2000 spreadsheet skills were rare in media planning circles); nobody in my agency was looking to take my job or even to join my team. DR was considered very much the low-rent discipline next to brand's shining pinnacle.

The advertising cognoscenti of the day would have been in revolt if they had understood just what the advent of digital was going to do to the ascendancy of direct marketing (the fact that those very same people now lead the DR/ad tech frontier says a lot about how trends propagate within advertising agencies, but that’s a topic for another day).

True to their heritage the old school brand agencies, which for good reason are the biggest agencies out there, have re-branded the discipline of direct response marketing. The DR department is now called the performance department and the job titles are all branded with the new sobriquet – performance planners, performance strategists, performance directors.

There is something slightly distasteful, yet ultimately revealing, about being re-branded by the advertising community in order to be recognised and ultimately assimilated by them. Let no-one tell you that the agency community doesn’t do irony, even if they don’t recognise it as such themselves.

Clearly the factor that changed everything was the rise of digital. Or, more specifically, the uptake of digital behaviour among our populations, which in turn led to a shift in where those populations spent their media-consuming hours.

At its very most simplistic core, media planning involves understanding this simple idea. We establish where people spend their time, and then we follow them into those spaces. The finest of campaigns go beyond such basic thoughts to generate greater value and resonant emotional impact, but they never lose sight of the core need to find the right people and put the message in front of them.

So, when media consumption increased in digital channels, digital advertising was sure to follow. At first the community in the brand agencies maintained the belief that this was a brand medium, not to be sullied by the techniques of direct marketing. I had many a good natured discussion (argument) with my colleagues in the digital team about where the channel was ultimately going.

Some of this perverse position was driven by a general resistance to change but it was also fuelled by the realities of the agency business model and our relationships with our clients. If 95% of the money you spend on behalf of a client comes from the clients with responsibility for branding, then it makes sense that you tell them this new channel, this new frontier is great for their brand. Which is what we did. Which is also why, finally, when the client communities caught up with reality, so did the agency models.

If only for this reason, I think the generally assumed idea that frontiers in advertising are driven by the agency community over the client community is complete hogwash. Agencies sell what clients will buy, or they go out of business; there is little real room for leadership from the agency community (at least not while margins are tight).

Slowly, through the 2000’s almost all of the top media agencies built out and incorporated cross channel DR teams and gradually allowed the digital channel to be planned against the tsunami of new performance data being generated every day. The development of Google’s search product played no small part, effectively removing any space for an ad planner (of the day) to see brand theory being deployed through the small data driven text ads on the side of Google’s results page.

Where was the brand man?

Nowhere. Nowhere at all. Largely because the other big factor in this story is the failure of branding metrics to be operationally useful in the new medium. Whereas DR looks to identify and count the actions that a consumer must pass through on their way to a purchase (served the ad, clicked the ad, arrived at the site, put item in basket, paid for item) and then uses these data points to plan further activity, brand advertising is planned against metrics that are much less precise. This is a feature, not a bug.

Reach and frequency are the planning staples of brand advertising: what percentage of your target audience saw your advert (reach), and on average how many times did they see it (frequency). You will notice that these metrics are conspicuously free of any reference to business value: there is no mention of cost, and no reference to sales or indeed any consumer commercial activity. These metrics are measured against total population too, the full circulation of a newspaper for example, and not the smaller percentage of the circulation that looked at every ad placed there. Who reads every single page of a newspaper? Some do, but clearly not all.
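
To make the contrast concrete, here is a toy comparison of how the two disciplines count the same campaign. All of the numbers are invented.

```python
# Invented numbers: the same campaign counted the DR way and the brand way.

# DR: count every step of the funnel and attach a cost to it.
spend = 50_000.0
served, clicked, visited, basket, paid = 2_000_000, 20_000, 15_000, 1_200, 400
cpa = spend / paid                        # cost per acquisition: 125.00
click_rate = clicked / served             # 1% of impressions clicked
purchase_rate = paid / clicked            # 2% of clicks bought something

# Brand: reach and frequency, with no cost or sales term anywhere.
target_audience = 5_000_000               # people the campaign is aimed at
people_reached = 1_600_000                # saw the ad at least once
impressions = 4_800_000                   # total exposures delivered
reach = people_reached / target_audience  # 32% of the audience saw it...
frequency = impressions / people_reached  # ...three times each, on average
```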

Understanding whether a brand campaign was successful or not was a matter for econometric modelling after the fact, which in turn then fed the heuristics that we used for planning (effective frequency of 3 anyone?). But mostly it just didn’t happen very often. It was not without some foundation that those of us in the DR world often joked that the key skill of a brand planner was a bottomless ability to post rationalise failure (although to be fair, and even though that wasn’t an entirely accurate observation, it was undoubtedly a useful skill for an agency planner to possess).

Nonetheless, and for all my DR snark, there is a good reason behind this fluffiness. Contrary to opinion in certain quarters the theories that drive advertising, both direct and brand, are derived from a great deal of experience and knowledge.

The reason for this vague set of metrics lies at the heart of brand planning media strategy. Broadly speaking, brand advertising is about making the target feel something, whereas DR advertising is about making the target do something, and something very specific at that (call this number, click this ad). The following couplets are not absolute statements; there will be many exceptions to almost all of them, but they still round out a good basic approach for understanding the different tasks the 2 disciplines can undertake:

  • Brand generates demand, DR harvests it.
  • Brand operates almost exclusively in the field of emotional response, DR provides the, seemingly, rational spur to action (price, product features, short term offers, superior financing, a free pen).
  • Brand often works in the periphery of your perception, DR demands your full attention.
  • Brand is built on conspicuous waste, DR is all about efficiency (it is no accident that almost all DR metrics have cost as a key component).

The idea of conspicuous waste is very important; it is the same idea that explains the peacock's feathers: "I am healthy, I can waste resources on this fanciful display". In earlier times this was incompletely characterised by the phrase, often seen in print ads, "as seen on TV", implying that a company that could afford to be on TV was a substantial and trustworthy entity. The dynamics of today's TV market may, in some cases, make a mockery of such a thought, but at the time it was valid and valuable.

The landscape may have changed, but the human psychology that these ideas feed on has not. Establishing worthiness through conspicuous waste is still important; it's just a little harder to do, because you have to get it right in more spaces than you did 30 or 40 years ago. Mass media, social media, design, interface quality and brand/product experience are now all major contributors to a brand's intrinsic health.

The theory that encompasses this idea of conspicuous waste, this peacock’s feather, is called signalling. Humans and other animals use signalling throughout their lives, and so do brands. Brands signal in many ways, as mentioned above, but advertising remains one of the most controllable tactical and strategic weapons we have, and as such retains a particularly valuable place in the business tools arsenal. It is one of the very few things that can be turned on or off tomorrow and that makes it very important.

Rory Sutherland, Vice Chairman of the Ogilvy group in the UK and former president of the Institute of Practitioners in Advertising, has been talking about signalling for years. He offered these thoughts during a talk he gave at the Business of Software conference in 2011.

Specifically, on the subject of peacocks' feathers (my emphasis added).

Now arguably the Ferrari is effectively the sort of you know, the human male equivalent. It also interestingly has to be slightly pointless or wasteful to have meaning, it has to be a handicap you know, if woman (sic) were merely attracted to man (sic) with expensive vehicles they’d all chase truck drivers but the truck doesn’t serve a useful signalling purpose because it’s actually useful

Sutherland translates this idea to the challenge of building brands. The context of this next quote was the idea of buying an engagement ring (signalling as a biological theory is almost entirely about mating).

What this is saying, what this upfront investment is saying is ‘I am probably, because this is expensive, and to have meaning signalling sometimes has to cost money, I am probably playing the long game’, OK? And a brand is doing the same. It has taken me 15 to 20 years to build this brand, it has cost me an enormous amount of money in terms of advertising and reputation building, therefore it is probably not in my interest to make a quick buck by selling you something that is rubbish.

Anti-signalling

On the other side of the same coin Don Marti does a great job of showing that highly targeted DR advertising can’t send effective signals. Don’s observation here is particularly poignant for businesses that only engage in DR advertising.

Let’s use Lakeland’s example of carpet. I can go carpet shopping at the store that’s been paying Little League teams to wear its name for 20 years, or I can listen to the door-to-door guy who shows up in my driveway and says he has a great roll of carpet that’s perfect for my house, and can cut me a deal.

A sufficiently well-targeted ad is just the online version of the guy in the driveway. And the customer is left just as skeptical.

He also points to the famous Man in the Chair advert (via Eaon Pritchard)


The man in the chair

If you take all these quotes and ideas together you come to a simple conclusion, one of many perhaps, but one that I want to highlight here. Targeting, the idea of targeting, suggests a spectrum of quality from high-precision accuracy to missing by a mile. In the world of weapons we understand the different uses of a shotgun and a sniper's rifle, and the world of advertising is the same. We have DR targeting, which in its purest expression is the sniper's rifle, and we have brand advertising, which is more like a shotgun: lots of the bullets are wasted (by design) but we still manage to shoot the bastard.

So brand uses quite a lax targeting regime while DR is built entirely around the concept of very accurate targeting. The maxim that drives DR is “right message, to the right person, at the right time”, which today operationally translates to the high level of targeting that has been hyper realised by big data in the online space.

Brand, on the other end of the spectrum, works with targeting that is considerably more aggregated than that. This is part of the handicap that Rory Sutherland refers to: the advertising equivalent of the difference between the Ferrari and the truck is the ability to advertise in high-value, high-cost environments without worrying about wastage and efficiency, without worrying whether everyone there is a potential customer, knowing that the value of the signal this delivers is worth it.

Money

Right now I can hear sceptical marketers and media buyers thinking “Get out of town. If that was true why do we bother to negotiate so aggressively for the media space we run brand campaigns in?”

On the surface that is a good enough question, but only on the surface. After Sutherland’s statements on handicapping and utility you might be forgiven for thinking that the price of the media buy is what matters. But it isn’t. The cost of the media buy is largely, if not entirely, invisible to the potential customer. Only in certain marquee scenarios is the cost issue widely acknowledged, like the Superbowl half time ads.

The Superbowl ads are a great example of signalling through excess. For sure there is a big audience, and live sport is one of the few remaining ways to access big audiences in one hit, but the real value of the Superbowl spots is admission into a select group of businesses, businesses so healthy they can afford these insane amounts of money simply for access to your eyeballs for 60 or 120 seconds. And of course it's not just the media buy that is expensive; the quality of the creative is also premium. This is the venue where the big idea is launched, the really memorable, high-production-value, expensive-to-make advert. Superbowl advertisers are widely acknowledged to pull out all the stops.

As long as a premium environment is accessed, the impression of cost, of conspicuous waste, is created, regardless of what was actually paid behind the scenes. This is why media agencies still seek to negotiate cost savings for these deals. We don't refund the cost saving to the advertiser; we buy more TV spots instead. It would be insane to do anything else.

What I am rather clumsily building towards is that in brand advertising the excessive value of the media buy, Sutherland's handicap, is signalled through the environment that the ad is placed in (and the quality of the creative execution) and the dominance that the ad has in that place. Brands are not built in the gutter.

One last thing before I get back to my main theme. There is a place for both brand and DR advertising; they are both, when used well, effective business models. There is another spectrum at play here, and it can be quite legitimate for any particular business to be placed at any position along that spectrum depending on their unique attributes and objectives. I am not arguing for brand and against DR; I am suggesting that the evolution of the digital channel as an advertising medium has massively favoured DR to the significant detriment of brand, and that this needs to be rectified. For the good of all.

Part of the adjustment required is a scaling back of some of the evolving trends, including some of what ad tech is doing; the rest of the adjustment is about building new support for the revenue models of high-value premium advertising environments, which also, partly, has something to do with what ad tech is doing.

So, with all that said I can finally link back to the main purpose of this essay.

The digital medium, as it is today, has a number of fundamental weaknesses when it is used for signalling via premium editorial environments. We need to start changing that or we will lose these premium editorial environments. Here are 3 weaknesses to start us off.

  • The transition from print to digital has not been kind to publishers. The business models are still being developed and the consumer behaviours are still being tested and stretched. Ad tech, in particular, has offered a helping hand to reduce operating costs for inventory management and remnant inventory sales while increasing the scope of hyper targeting. It is also driving prices into the ground. None of that helps brand revenues for premium environment publishers.
  • Page dominance is also under fire. No-one gets annoyed by a full page ad in a print vehicle. GQ, Vogue and all the upmarket style magazines run a bundle of full page ads right up front in the most important positions, before you get anywhere near the actual content you bought. In digital channels however, popups, full page takeovers, auto run video and other types of ad unit that gain dominance by intrusion irritate the user and make them angry. This is not good for brand advertisers, who, even though they do want to provoke a strong emotional reaction, rarely wish to be associated with anger, and certainly not anger directed at them. Meanwhile regular ad units vie for our attention competing against editorial and are often placed to garner maximum impression volumes instead of maximum dwell time. In a cluttered environment dwell time is one of the few positive compensating factors but doesn’t currently feature as a planning metric.
  • There is no ad break. Commercial TV has trained us that every 15 or 20 minutes our consumption will be interrupted and the screen will be 100% handed over to the advertisers. For sure, many of us stand up and make a cup of tea and all kinds of other ad avoidances, but enough people just sit there, or forget to fast forward or simply don’t mind. We are aware that the ads are part of the price we pay for the content and we accept that. The web, however, does not have a similar mechanic and the consumer offers the publishers no such largesse even though the same mechanic (ads = free/cheaper content) is widely understood.

This is not a good place to find yourself in if you want to use advertising to help build brands over the next 10 or 20 or 30 years. And let's be clear here: that is exactly the time frame you should be considering if you are trying to build a brand. You can certainly get further along the path today faster than you could 20 years ago, but that is no reason not to plan for the same longer term time frames; brands are by definition long-term economic plays.

Let’s focus on the most theoretical part of the argument. It’s a broad brush concept, but I think it is fundamentally the key to the whole show.

Because advertising is unashamedly an economic activity that, therefore, responds to incentives and is framed by supply and demand, we should start by looking at some of the economic frames and ask what we can do to change them.

The obvious issue, as highlighted in the last essay, is the uneasy relationship web publishers have with their quasi infinite stock of inventory. It is economically easy to introduce new advertising stock in digital channels in ways that cannot be replicated in other media. This combined with what ad tech can do for the selling and trading of this new inventory causes great problems for the premium publisher in particular.

Premium content by definition needs to be relatively scarce. The more there is, relative to demand, the faster the slide to commoditisation, the horrors of ever declining prices and the loss of signalling value. High-value editorial, which in turn becomes the location of high-value advertising environments, must be driven at least partially by scarcity. The Superbowl is the ultimate demonstration of this idea.

Premium publishers must, therefore, somehow re-assert the scarcity of their premium editorial product and as a result re-assert the high pricing that premium editorial content should garner. This is a lot easier to say than to do as it will need the combined and aligned efforts of the other 2 parts of the system, the agencies and advertisers, to succeed.

Ad tech, as currently structured, doesn’t help either.

While we have publishers using ad tech to sell cheap inventory for tiny CPMs, we must recognise that this inventory almost always has to be served with cheaper, less valuable editorial product. If ad tech is seen as the future of publishing revenue models, which is the case at the moment, then premium editorial, in particular, will be threatened even more. Because good editorial costs a lot more than weak editorial, ad tech, which drives down average CPMs, cannot help us with editorial quality.

Even more troubling is the practice of using ad tech to target premium audiences (because we know they have visited a premium site, or via other demographic or behavioural markers) within cheaper, trashier, low-value ad environments. This is the retargeting strategy, currently so beloved of the agency digital planner. In the face of the need to restore some kind of peak revenue levels this device is especially pernicious, because it doesn't simply absorb ad dollars in an adversarial fashion; it actively converts premium audiences into commoditised and ineffective cheap buys, delivering all the revenue to media businesses that did not even create the original premium content in the first place. It hurts the future of digital brand advertising harder than any other single ad tech 'innovation' currently out there.

So, for their part, agencies must stop selling cheap CPMs via ad tech as the prime market innovation. This in turn would stop incentivising the creation of vast acres of cheap, dreck-filled, click-fest media properties with so-called 'attractive' pricing.

For their part advertisers must rediscover the idea of brand signalling and realise that it can’t be achieved in 50 cents a thousand shitholes surrounded by 24 pictures of cats called Burt.

So to clarify, in order to reassert effective scarcity into the premium editorial market and resurrect premium CPMs, publishers need to restrict their premium inventory, agencies need to stop pushing ad tech as the answer to every digital advertising challenge and the brand advertisers themselves need to rediscover the theory of brand signalling to give a structure and purpose to these changes.

But how does this happen?  

For all 3 sectors the easiest tool, perhaps the only tool that synchronises across all 3, that can mechanically drive these changes, is to recast the metrics we use. Metrics define how we plan and what we buy. This seemingly starts with the publishers and agencies (sellers and buyers), but can only start if these changes are accepted/driven by the advertisers. Because, ultimately, the advertisers are the only source of revenue in this ecosystem, they hold the keys to change. As is ever the case money talks.

Who moves first?

It should be the advertisers (and might still be) but there is a difficulty in targeting the advertisers as the agents of change. Advertisers are intensely adversarial by definition, much more so than agencies and publishers (who at least have industry wide representative bodies and conflict management processes), and hence they are the most disaggregated and least aligned of the 3.

Agencies? Theoretically the ownership of best practice in advertising sits with the agencies, which should put them in the prime position to steward the industry through these challenging times. Rather sadly, this is a charge they have resolutely relinquished. Agencies are the most passionate cheerleaders of ad tech and programmatic advertising, either because they truly believe in it or (if I am feeling charitable) because they are making a series of weak moves to protect their market from technology players by adopting re-seller status. Both are short-term positions, and neither enables a market leadership stance of any kind, let alone one that can re-position the academic theories that drive the operational practice of brand advertising.

Publishers? They mostly need someone to take the foot off their necks. It's hard to imagine they will be able to lead these changes, which they could only do by adopting a tough trading position. That needs deeper pockets than they have.

The fate of all 3 stakeholders here is firmly intertwined so we need to look at the responsibilities of these 3 parties in more detail, one by one.

Publishers

These guys are having a hard time; they are at the thinnest end of the thinnest wedge. Broadly speaking, unless their model is built around surfacing commercial intention from their readers' behaviour (a la Google), it is difficult to see what the market has done for them in 10 years except squeeze, and squeeze ever harder.

The broad idea, restoring the scarcity of premium content and hence premium advertising environments, starts with the publishers as they alone control this stuff. Of course the decisions they make are heavily influenced by market dynamics and the needs/wants of their customers, but at the end of the day those decisions are still theirs and theirs alone.

Publishers aren’t in this situation because they are stupid or lazy. They face some hard and brave decisions that will only pay off in the medium to long term, and that will also see some of them (maybe a lot of them even) fail. That is the reality of less inventory, less publication. The painful truth that we simply can’t get away from is the fact that restoring scarcity means less published content, not more. That’s a trend that might be reversible once healthier revenue dynamics and models are restored, but for the moment at least less = more. It should be noted that historically, and within channels, the amount of money spent on building brands tends to dwarf that spent on direct response.

From the perspective of the publishers we need to change the supply dynamics of valuable human attention. Instead of selling clicks they need to sell a consumer’s attention. The metrics that we use to communicate the values of a media buy, therefore, need to reflect actual consumer attention, not clicks. This is not a new idea.

Time magazine recently published an essay, “What you think you know about the web is wrong”, which surfaced some of the bigger misconceptions that we, as a whole industry, have held in regard to digital ad measurement and performance data.

The killer point is in the first paragraph.

We confuse what people have clicked on for what they’ve read

The article was written by Tony Haile, the CEO of Chartbeat, a web analytics company, and in it he shares the findings of analysis covering user behaviour data for over 2,000 websites. He tackles 4 myths by showing what the data actually tells us, which is often quite contrary to the concepts we have been using to plan advertising campaigns. I'm going to focus here on just 2 of these myths.

Myth 1: We read what we’ve clicked on

…All the topics above got roughly the same amount of traffic, but the best performers captured approximately 5 times the attention of the worst performers. Editors might say that as long as those topics are generating clicks, they are doing their job, but that’s if the only value we see in content is the traffic, any traffic, that lands on that page.

The focus on clicking is our collective original sin. From this original formulation almost all the other problems were born. Once you start to think about this, there is a chance you will wonder why we need data to make the point at all; it stands on an intuitive basis too.

We have all, on many an occasion, arrived at a webpage and immediately left it again, or moved on somewhere else after a quick scan of the headline and maybe a paragraph. This is not uncommon behaviour. The old habit of spending an hour with the Sunday newspaper has never been replicated online. We spend an hour online reading, perhaps, but rarely on just one site.

We scan much more than we consume.

The web makes moving on incredibly easy. And if the main copy doesn't contain links to redirect us (more clicks, more impressions, more CPM, less attention) the chances are that the sidebar(s) will, as will the footer and any ads that are running. All in all it's an environment built to foster additional click-throughs at the expense of actual quality consumption measured in time.

Webpages are designed like this directly and precisely because the success of an advertiser campaign is measured in clicks. Publishers want advertisers to succeed, so that they in turn buy more ads.

What should a successful brand ad want?

For a brand campaign the value should be twofold. For sure, some people will click on an ad and as a result visit the brand homepage and experience a deeper, more complete brand message; it would be peculiar to turn click-through functions off just because it's a brand ad. This is good, but it generally asks more of people than they commonly wish to give. It asks someone to stop what they are doing, reading a great little article, and submerge themselves in our brand 'experience'.

We might kid ourselves that consumers are excited to consume our cleverly designed brand experiences, but in the main they just aren't. Common sense suggests that if they were queuing up for such things then the whole ad industry simply would not exist. We have to buy our ad space; people don't volunteer to watch ads. The entire media agency product is based on making people watch/read something they don't really want to watch or read at all. As Doc Searls says, "there is no demand for messaging".

Of course you can tempt people in with competitions or other giveaways, but the role of the advertising in such a scenario is DR; the branding value, if it comes at all, comes from the post-advertising experience, the homepage.

The key value for a brand advertising campaign, therefore, should be the quality of the signalling, not navigating users to a brand experience hosted elsewhere. Driving awareness of a brand, aligning that brand with various positive attributes and gaining mind space through quality environment association is where the long term brand building comes from. So whereas you will not be unhappy if your brand campaign does generate clicks, that should not be the goal of the campaign. The goal must be to drive brand awareness, association and purchase consideration through the placement itself.

Such a goal needs to be reinforced by the consumer experience delivered by the publishers.

Brand advertisers, therefore, should be looking to change the definition of campaign success and move the discussion away from clicks. While success is defined by clicks we cannot expect publishing environments to deliver optimal experiences for any other factor. DR tactics optimise to drive clicks; brand tactics should not need to.

Because of this focus on clicks we tend to believe myth 4 also.

Myth 4: Banner ads don’t work

…Here’s the skinny, 66% of attention on a normal media page is spent below the fold. That leaderboard at the top of the page? People scroll right past that and spend their time where the content not the cruft is. Yet most agency media planners will still demand that their ads run in the places where people aren’t and will ignore the places where they are.

Now we are really getting somewhere.

Below the fold

To drive association between a brand and a media environment the key factor is consumer time spent in the locations where that association is being made, maybe over one exposure, more likely over several. The long standing brand metric that helps understand this idea of association is effective frequency (frequency of course is not only about media association, it is also about messaging legibility when consumption is often peripheral).  Effective frequency is the minimum number of times someone should see your ad for them to absorb from it what you want them to absorb.

It is not too much of a jump to transfer the ethos behind effective frequency to actual time spent on a page. A brand ad has a much greater chance of being consumed by either your conscious or sub conscious brain if it spends more time in front of your eyeballs.

This was always understood in old school print media planning even if we couldn't measure it. The front sections of magazines were, and still are, the prime positions, not only because of the signalling value but also because ads have a much better chance of being seen in those positions. Similarly, facing editorial was a concern, because good editorial drove dwell time on the page, which in turn increased the ad's consumption value.

In this regard the only difference between print and digital is that in digital we can measure that attention metric, we can be more informed about where we can capture that valuable attention.

But we don't. Yet. Although we will eventually. And at that point we will have turned an incredibly important corner. It is an easy action point: let's plan our brand campaigns against a metric that is already available, but that allows us to give value to brand advertising instead of DR. Time spent.
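
As a thought experiment, here is how a planner might rank placements by attention rather than by impressions or clicks. The placements and numbers are invented, purely to illustrate the shape of the calculation.

```python
# Hypothetical placements with invented numbers: ranking by attention, not clicks.
placements = [
    # (name, cost, impressions, clicks, average engaged seconds per impression)
    ("news_homepage_leaderboard",   12_000.0, 4_000_000, 8_000,  1.5),
    ("motoring_vertical_in_article", 9_000.0, 1_200_000, 1_800, 11.0),
    ("cheap_network_remnant",        1_500.0, 3_000_000, 2_400,  0.4),
]

for name, cost, imps, clicks, secs in placements:
    cpm = cost / imps * 1000              # the usual buying currency
    cpc = cost / clicks                   # what DR optimises towards
    attentive_hours = imps * secs / 3600  # total attentive time delivered
    cost_per_hour = cost / attentive_hours
    print(f"{name}: CPM {cpm:.2f}, CPC {cpc:.2f}, cost per attentive hour {cost_per_hour:.2f}")
```

On clicks and CPM the cheap remnant buy looks unbeatable; priced per hour of attention, the premium vertical placement wins.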

Why would a media planner buy above the fold when it's obvious that most attention will be below the fold? It's because even though more attention is below the fold, 100% of the eyeballs arriving on a page see the content that is above the fold, but not all of them go below. So the argument is that placements below the fold are seen by fewer people, which is true. This makes sense for a pure numbers game like DR, measured and bought as cost-efficient impressions and clicks, but not for brand advertising, which should be measured against ad and brand recall and planned against reach and frequency/attention.

You do need to read the whole Time article by the way; it is very good. The problems highlighted and the solutions advocated are all good for brand marketing, but not so good for DR, which, after all, wants that click above anything else.

Bundling and verticals 

Time spent also offers support for publisher revenues in tight verticals. What was once seen as a horrible new reality for newspapers transferring an existing print franchise to the digital domain (and it was) can now be a strength for the savvy publisher: unbundling.

The commercial idea of a content bundle, and the Sunday papers are the most obvious example of this, is to build environments that are suitable for vertical advertising sectors. The reason that the Sunday papers are so large is because of the extra free time (dwell time) that we have on the weekend which enables the habit (now being lost) of reading the Sunday papers as a specific leisurely pursuit. Hence the motoring section was a great place for car manufacturers to advertise, the travel section a great place for tourism and holiday destinations and travel agents, the garden section for gardening products and the financial section for pensions, life assurance and savings/ investment products.

It was a very effective revenue model for the news industry. But it was utterly destroyed by the internet because the internet is an unbundling machine. If you like the Times news reporting, but prefer the travel section of the Guardian, then you can simply consume only those sections of the relevant titles. More likely, however, you are probably going to get your motoring fix from a digital property that has nothing to do with either the Times or the Guardian. The chances are that you will find a site dedicated to motoring which fulfils your needs.

This is bad news for the news industry.

Or at least it is if they don’t start to compete in those vertical spaces. With brands separate from their existing news brands.

What is the first thing that would happen? Traffic would drop. There is no doubt that the current structure of branded bundles is used to drive traffic through different sections of the site. The masthead at the top of a news site’s front page contains links to all the subsections for a good reason. This traffic loss is bad for a click revenue model.

What is the second thing that would happen? Quality. The content in a standalone vertical has to be good. If you are writing for people with either a strong topic affinity or a current topic need, then the writing needs to be good. If it isn't, it will fail. Which, as I'm arguing that we need to trim the overall volume of premium content to restore some level of scarcity, is not such a bad thing.

To build a regular audience for, say, a motoring vertical would need the capture of respected, popular writers, and a reputation for being a solid home of quality writing. The second factor is often the tool that attracts the better writers. It would also, over time, need to become a brand in its own right.

The third thing to happen? Engaged readers spend lots of time reading the content. Real valuable consumer attention. And even though smaller audiences might be likely, if we can shift the metrics from clicks to attention then we can still look to maintain, or possibly even increase ad revenues.

So in summary there are 2 broad areas for development that sit with publishers.

Firstly, collect and disseminate attention metrics. Today. There is no need to wait for the agencies and advertisers to ask for these data points; make them available now and drive the debate.

Secondly, build out editorial strategies that thrive in an attention model. I've talked only very briefly about one idea, the incentives that could accompany a move into a range of verticals, but I'm sure there must be other ways to design for a revenue model driven by attention metrics.

Agencies

Quietly, the agency world, the media agency world at least, is preparing for its exit, although they would never admit such a thing (maybe they would call it a pivot). The current media agency business model is under fire, holed below the water line, and has no chance of recovery. It is likely to be a long exit; there are revenues to be made for some time yet, but if anyone thinks this business has a 20+ year future in its current guise then they aren't thinking things through properly.

The writing was on the wall the moment Google explained that they wouldn’t be paying agency commission for paid search, but we hurt ourselves a little while before that when media became an independent service separated out from our creative agency cousins. The move to independence laid bare the business model, and sure enough over the years we have cannibalised the 15% commission model, buying clients with promises of commission rebates. What was once 15% is now often only 1 or 2% and sometimes even less than that. Money talks and we’ve been giving it away.

Until quite recently I had been more confused by the strategic plays being made by media agencies than by any other business in advertising. They are also the businesses closest to my heart and where I have spent the vast majority of my career. I have heard digital planners and evangelists in agencies talk with delighted and excited passion about marketing automation for the best part of 10 years. From the perspective of a business model that provides outsourced media market participation already (what else is a media agency?) I always thought it was a ludicrous place to go. Why would the agency world want to package up everything they do and put it in a black box, to literally outsource, via technology, what they are already paid to do on an outsourced basis?

The first stirrings of marketing automation and media management black-boxing were driven by a weak strategic vision for digital, which left the nascent frontier in the hands of clever people with little experience, or people with much experience and little knowledge (of digital).

10 years ago neither of these constituencies had seen what was coming. The operational end, the young clever people who actually ran the campaigns, could see how automation could help them, how it could streamline process and add insight. They weren't thinking about the next 10 years; they were thinking about next week. Which was what they had been hired to do.

The people who were supposed to think about the next 10 years (the experienced strategic managers, agency heads, CEOs and FDs) didn't have enough instinctive knowledge of the landscape, which to be fair was being built in real time around them, to see what was coming either.

So, we had a revolution built, innocently enough, from within. When the real leadership did wake up to the realities of a technology-driven advertising future they made the only decision they could. Acknowledging the destruction of the commission revenue model, the media agency sector has been forced to accept the old playground adage: if you can't beat them, join them.

The media agency world is falling over itself to become an ad technology sector.

As individuals they are lining up to be employed by Google.

As businesses they are either moving into the territory of technology re-sellers, or they are (depending on how big they are) buying into the technology market directly either through development or acquisition. That’s a cold rational assessment of their futures, but it is cohesive.

The agency position, as it stands, is extremely problematic because we can't develop a sane basis for digital brand advertising alongside a future based on the current view of ad technology, dominated by DR/performance and hence clicks, when in fact the brand model almost certainly needs to dominate the DR model.

It is chiefly problematic because the agency model is likely to remain in a precarious position even once they move over to a tech footing. This rather means that they aren't looking to develop a long-term rational advertising model, which they should be. They should be wowing advertisers with wise and intelligent stewardship of the academic needs of advertising, and proving the need to pay them for their thinking and not just their buying. Instead they are just trying to grab the nearest life raft. They have little choice.

The concern is that even if we, rather charitably, dismiss the role played by agencies in the rise of programmatic ad tech, they face an even trickier challenge laid out in the structural development of this ad buying paradigm.

Programmatic, like Google's search ads, is a 1-to-1 auction. Each individual buy is one consumer on a website being bought by one advertiser. For a business model built entirely on the promise of scale ("we represent 30% of the market, therefore we can buy media cheaper than anyone else") a 1-to-1 purchase, which by definition pays no respect to scale whatsoever, is an incontrovertible harbinger of doom. No scale, no business model.

The guys at the top understand this. They are no longer sleeping on shift. There are 2 things happening as a result.

Firstly they are spreading fear, uncertainty and doubt (FUD) liberally through the marketing/advertising ecosystem about how difficult it is to run programmatic (ad tech) campaigns, the only sensible take out (from their perspective) being that advertisers can’t possibly run these new operational models without the wise and knowledgeable help of agencies. They have seen their ultimate disintermediation on the horizon and are buying time.

Secondly, they are looking to reassert scale in the market by becoming the principal on media buying contracts via PMPs, private marketplaces that are accessed using the programmatic tech stack.

PMPs have been heralded by the ad tech evangelists as the solution for brand advertising, which they could be, if those involved understand the issues I have discussed in this essay and trade in the metrics I have suggested instead of clicks.

The logic is that these artificial closed gardens within the programmatic universe (each PMP belongs to a publisher or a publishing group and can be accessed by presenting the correct digital token) deliver a price protection mechanic via a pricing floor, and can also deliver a more effective approach to fraud management through much tighter inventory controls and visibility. Much of the discussion has focused on how publishers can more comfortably release their prime inventory to programmatic buying processes through PMPs without destroying revenue.
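For the technically minded, here is a rough sketch of how that digital token tends to surface in practice. The field names follow the public OpenRTB 2.x conventions (pmp, deals, bidfloor, wseat), but the values, the deal ID and the small helper function are all invented for illustration rather than taken from any real integration.

```python
# Illustrative only: an OpenRTB-style bid request carrying a PMP deal.
# Field names follow the public OpenRTB 2.x spec; all values are made up.
bid_request = {
    "id": "req-123",
    "imp": [{
        "id": "1",
        "banner": {"w": 300, "h": 250},
        "pmp": {
            "private_auction": 1,           # only buyers holding a deal may bid
            "deals": [{
                "id": "publisher-deal-42",  # the "digital token" the buyer must present
                "bidfloor": 8.50,           # price floor (CPM) protecting publisher yield
                "bidfloorcur": "USD",
                "wseat": ["agency-seat-7"], # buyer seats allowed to transact this deal
            }],
        },
    }],
}

def can_bid(imp: dict, my_seat: str, my_deal_ids: set) -> bool:
    """True if this buyer holds a matching deal ID and an eligible seat."""
    for deal in imp.get("pmp", {}).get("deals", []):
        seats = deal.get("wseat")  # absent/empty means the deal is open to any seat
        if deal["id"] in my_deal_ids and (not seats or my_seat in seats):
            return True
    return False

print(can_bid(bid_request["imp"][0], "agency-seat-7", {"publisher-deal-42"}))  # True
```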

So, on the surface I draw some hope from the emergence of PMPs, although it should be clear that such optimism depends on the correct choice of metrics to trade in.

There is a problem, however, that must be overcome. PMPs offer agencies a last chance to chase off the disintermediation horror story they face, but in doing so they may introduce something worse again.

Agencies as principals – no thanks

As mentioned, media agencies have begun to buy up PMP inventory as the principal to the contract, instead of buying the same inventory on behalf of their advertiser clients, who would historically be the principal. Once they have purchased these impressions they are adding a mark-up and then reselling them to advertisers, as well as charging for their time (first via the agency team, then via the agency group trading desk fees) – a classic double dip. Actually a triple dip if you add in the mark-up on the resold inventory.
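To make the triple dip concrete, a back-of-the-envelope calculation. The percentages are invented for the example; the only point is that the advertiser ends up paying three separate margins on the same impressions.

```python
# Illustrative only: the percentages are invented, not industry data.
media_cost = 100_000        # what the agency pays the publisher for the PMP inventory
resale_markup = 0.20        # dip 1: mark-up on the resold inventory
agency_fee = 0.05           # dip 2: agency team fee
trading_desk_fee = 0.10     # dip 3: agency group trading desk fee

billed_media = media_cost * (1 + resale_markup)
total_to_advertiser = billed_media * (1 + agency_fee + trading_desk_fee)

print(f"Media cost:           {media_cost:>10,.0f}")
print(f"Billed after mark-up: {billed_media:>10,.0f}")
print(f"Total to advertiser:  {total_to_advertiser:>10,.0f}")
print(f"Effective margin:     {total_to_advertiser / media_cost - 1:.0%}")  # 38% here
```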

Not so many years ago such a strategy involved a very significant risk: if the inventory could not be resold, the loss could be too heavy to bear. Now that same unsold inventory can simply be dumped back out onto the networks, via the programmatic stack, and at worst deliver a small loss, at best a break-even.

The big shiny goal here is the already mentioned reassertion of scale. So, whereas once an agency would offer you advantageous pricing through their buying scale being brought to bear in negotiation with the media owners, they may now seek to offer you advantageous access to inventory, instead of or potentially alongside price, by being the biggest purchaser of inventory via PMPs. Scale is important under this new paradigm as a wider base of clients offers much greater opportunity to resell, which in turn offers both a greater inventory holding and a more effective risk mitigation position.

It's a fundamentally different proposition, half opportunity and half threat. One that pitches them into both a client/service position with their customers and an adversarial/negotiating position with those same customers.

In other words a desperate play.

We are getting close to the business end of the disintermediation challenge now, and with it a possible way out of some of the problems that this whole journey has created.

Of the 3 players in the ecosystem (advertisers, agencies and publishers) the intermediaries are clearly the agencies, and as harsh as it may be these will be the businesses that eventually face the fate of modern digital/internet-inspired disintermediation. The technology is already here that spells out the operational and intellectual future of advertising media planning and buying, and there is no role in it for modern media agencies.

So, it all comes down to the advertisers themselves

If you haven't twigged where this is going, we are now very close to the end. And I apologise: if you have stayed with me all this way, this will feel slightly anticlimactic.

The role of the advertisers in this story is to take the programmatic ad technology stack in-house and thereby remove the agency from the picture. By placing the operational structure in the hands of the advertisers we remove one of the economic players from the conundrum that seeks to perpetuate this destructive cycle. By removing that player we can more easily re-balance the whole ecosystem.

It is tempting to seek a future that does not involve the programmatic buying of media space. But as much as the liberal-minded, privacy-respecting individual in me wishes it so, I simply cannot see the media industry moving back to a human-heavy operational model, much less so once we pass peak TV (the point at which a sufficient percentage of TV impressions are delivered over internet protocols to make the programmatic buying of TV, or more accurately audio-visual ads, an inevitability that encompasses all AV ads, over the air or IP).

If we accept that programmatic is with us to stay, in some form or another, then we need to seek a way to make it work properly. Of all the players in the advertising ecosystem only the advertisers themselves lose if we blindly destroy the ability to use advertising to build brands.

If we don’t manage to change the current model then agencies still get to buy media, whether it is DR or brand media, and publishers still get to run advertising whether it is DR or brand advertising. They can still operate effective revenue models on either basis.

The players at risk are the advertisers, the businesses that depend on trust and value and long term positive connection between their users and their products, the businesses that need a brand, particularly those in adversarial sectors where brand saliency is key. Advertisers don’t succeed because they advertise, they succeed because they sell a huge diverse range of products at a profit and they use a wide range of business tools to do that. Most advertisers (although not all of them) are best served by having brand advertising in the tool box even if they don’t always use it.

The only option here is for the advertisers to grab control of their own destiny, to just go ahead and build what they need without worrying about the agencies. It may not be clear to a sector that is built on the binary of agency and advertiser but right now media agencies’ needs are not aligned with the advertisers’ needs.

We aren't going to get a sensible resolution to this mess by examining it from the side-lines and academically "resolving" the conflicts. There are too many powerful, entrenched interests for that to work. Somewhere along the line advertisers themselves are going to have to take control of the tech and the operational model, and by doing so create a business opportunity for the publishers that builds an incentive for them to properly organise their publications to better serve brand advertising.

Here’s how I hope it plays out.

There are a number of advertisers that have already brought their ad tech in-house, and many of them are doing well. A lot of that value is being driven by clever first-party data work, not by taking control of their brand advertising destiny, but in this first instance I just hope to see the business case for in-house programmatic being verified. To be clear, there is always going to be a business role for DR, data-led advertising via the programmatic stack; I am chiefly trying to find a way to engender a significant rebalance between brand and DR, as the internet is currently not fit for purpose as a brand advertising medium.

If it does come to pass that the business case for in-house can be verified, then I hope to see a tipping point: the precursor to a wholesale operational change, with digital ad buying moving client side and the first phase of disintermediation finding a foothold.

Following on from this fairly fundamental change in the ecosystem, we will need to bring pressure to bear on the publisher community to prepare their sites to drive optimisation of the metrics that demonstrate real, valuable human attention. Not shares, or likes, or tweets, or other metrics that enable contrary interpretation, but simple, straightforward, unequivocal data such as dwell time and demographics. This can be achieved by building (re-building) the business case for brand advertising in digital media and, as a natural externality of that business case, by withholding budget from recalcitrant publishers.
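To give a flavour of what trading on those metrics might look like, here is a deliberately simple sketch. The field names, thresholds and example sites are all hypothetical; the point is only that the buying decision keys off attention data rather than clicks.

```python
# Hypothetical sketch: bid only where measured dwell time and audience fit
# suggest genuine human attention. All names, thresholds and data are invented.
MIN_DWELL_SECONDS = 15
TARGET_SEGMENT = "adults_25_44"

def worth_bidding(placement: dict) -> bool:
    """Trade on attention and demographics rather than clicks."""
    return (
        placement.get("avg_dwell_seconds", 0) >= MIN_DWELL_SECONDS
        and TARGET_SEGMENT in placement.get("audience_segments", [])
    )

placements = [
    {"site": "quality-news.example", "avg_dwell_seconds": 42, "audience_segments": ["adults_25_44"]},
    {"site": "clickbait.example", "avg_dwell_seconds": 3, "audience_segments": ["adults_25_44"]},
]
print([p["site"] for p in placements if worth_bidding(p)])  # ['quality-news.example']
```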

Once these data points are commonly made available by publishers, then we can use the programmatic stack to execute campaigns against these brand metrics, which by definition do not need to be intrusive or personal or disrespectful of social norms, and which do not need to be derived from vast detailed profiling tools. The relative price points of these campaigns will rise, but so will the scarcity of genuine premium quality environments and therefore so will the signalling value that they represent. Sorted.

Well, maybe not sorted. Not quite as easy as that. But some food for thought hopefully.


links again, 4 of ’em

Earlier this year Netflix and Comcast had a little contretemps about their peering agreement. Unless you spend time trying to keep up with the various layers, and the players therein, that make up the technical infrastructure of the internet, that statement potentially means absolutely nothing to you whatsoever, but actually it's all quite important.

Peering is the name given to the agreements that cover the terms for transferring internet traffic between networks, and they are a fundamental cornerstone of the modern internet. Because there is no single network that covers the whole world, it is important that traffic can be exchanged, as required by the needs of the user, with the least possible friction. Historically these agreements were made between engineers, and each network simply agreed to open up to the other as required, no money involved. Comcast decided that they wanted to buck that history and demanded payment from Netflix. Netflix suggested that prior to this negotiation Comcast were deliberately allowing congestion in certain parts of their network to negatively affect Netflix's customers (something Comcast denied) and that they had no choice but to pay the ransom.

Level 3 is one of the big global internet backbone companies that carry enormous amounts of web traffic. They are one of the companies that pay for, own and maintain the cables that run under the oceans. This post on their blog lays out some of the dynamics that go into peering agreements, and even though it doesn't deal directly with the situation between Netflix and Comcast it should probably give you enough information to understand who is playing the shithead and who isn't.

Level 3

 

I never did try playing Go, the ancient Chinese strategy game, and after reading this article I'm starting to think that that is no bad thing. Like chess it's a 2-player war game, but unlike chess it's a game (possibly the only game left) that retains an unbeaten crown in human vs computer match-ups. Machines beat humans at checkers in 1994 and chess was added to the list in 1997. Now, 17 years later, Go still holds out, and holds out with comfort. Every year the University of Electro-Communications in Tokyo hosts the UEC Cup, where computer programs compete against each other for the opportunity to play against a Go sage, one of the world's top Go players. The challenge is not one of brute-force computing power; it's more about strategic understanding and the fact that, by all accounts, we don't really understand what goes into being a great Go player, human or otherwise.

Wired

 

Ever wondered why the airwaves are licensed by the government? If you think about it a little, the chances are you will come to what seems like a very simple and straightforward conclusion: the airwaves are licensed so that broadcasting signals don't interfere with each other. To ensure that when you tune in to 97–99 FM here in the UK you receive Radio 1, not some other outfit broadcasting on the same frequency. Which is all well and good, except that electromagnetic waves simply don't interfere with each other in that way. The concept of interference seems to imply that there is only so much space on any particular frequency that can carry signals, that there is only so much spectrum available. Colours sit on the same spectrum as radio waves, separated only by their frequencies; to suggest that spectrum is limited is to suggest that there is only so much red to go around, which is clearly a farcical concept.

All of that, which is sound and uncontroversial 6th form physics, does raise some interesting questions about our radios and TVs and mobile phones, all of which broadcast across licensed electromagnetic frequencies. It turns out that the problem of interference is a problem of the broadcasting and receiving equipment not the natural scarcity of the airwaves. We have a system that limits access to frequencies because we are still using a technology base optimised to an old technical paradigm.

This piece published in Salon gives you the full detail and quotes extensively from the work of David Reed, an important and prominent scientist from MIT, famous for co-writing the paper that nailed the modern architecture of the internet. Understanding what is actually going on here turns out to be entertaining and enlightening.

Salon – The myth of interference

 

I don’t know whether this last link is being serious or not, and that alone might be the best observation I have to make about the state of modern economics.

Alex Tabarrok is a right-leaning economist who authors the blog Marginal Revolution with Tyler Cowen; both are professors at George Mason University in Virginia. This short post, one of many from the right responding to the fuss being made over Piketty's Capital, offers "2 surefire solutions to inequality". One is to increase fertility among the rich, diluting the inheritance and reducing capital concentration. The other surefire way? To reduce fertility among the rich! The author of the post puts a lot more detail into this position than I do here. I'm leaning towards the opinion that he is simply having a laugh, but then, as he is an economist, I'm really not so sure.

Marginal Revolution

 

 


Links – High frequency trading, poison and Jimmy Carter

I’ve had my eye on high frequency trading (HFT) for a little while. I was initially fascinated by the overwhelming need for connectivity speeds driving up the cost of real estate near the exchanges. If you could locate your trading operation next to the exchange the tiny fractions of a second saved by geographical proximity were worth huge amounts in naked profit.

The whole thing struck me as another example of short term profiteering taking precedence over the more sensible longer term allocation of risk and investment capital (which is after all what stock markets are supposed to do).

The long-term trend in shareholding periods has been one of decline since the 1960s, when shares were held on average for 8 years. The following 2 links disagree about what the average holding period was in 2012, but whichever data point you prefer, 5 days according to Business Insider or 22 seconds according to London's Daily Telegraph, it is clear that we are using the mechanism that is the stock market in a radically different way than we have ever done before.

Part of the overarching societal philosophy of share ownership was that the long-term incentives held by investors, via shareholding, would act as a brake on the incentives for larceny held by the professional management class. But those long-term incentives can only act as a brake if the shares are held in a long-term pattern, and today that is not the case. To be fair this is not a triumph of the managerial class over the investor class; this is driven by the profiteering of Wall Street and the City of London too. Simplistically, more trading generates more trading income, but there are other factors acting here too.

Firstly, we have a global capital investment paradigm that is focused on the needs of finance capital, not production capital. This so-called casino approach to capitalism, and its relationship to production capital, is best understood through the analysis of Carlota Perez in her stunning, and surprisingly accessible, Technological Revolutions and Financial Capital. This is a very quick summary of her ideas.

Secondly, and something that has only surfaced into popular discourse recently, it appears that there is a fraudulent mechanic in widespread use as a result of HFT. Michael Lewis has penned the exposé in his book Flash Boys.

In essence the advantage bestowed on certain traders as a result of their proximity to certain exchanges generates more than quicker sight of price movements. The original story of HFT was that this small advantage generated thousands of small arbitrage trades, but the window of opportunity to make these trades was so small that it could only be accomplished by computers acting autonomously via algorithms.

Among other things Lewis has shown that the various routings around the multiple exchanges illegally skirt regulations designed to present the same price on all exchanges at the same time. This means that a trading outfit operating on multiple exchanges can make trades based on the market intelligence contained in an order it has just received sight of. If a client instructs the trader to buy all 10,000 shares at the listed price, those shares will need to be bought from multiple exchanges, but the trader's ability to communicate with the furthest exchanges faster than the official price channels lets them instruct their machines at those locations to buy up what is available first. The end result is that the original client receives only, say, 4,000 of the shares they wanted. The remaining 6,000 shares are now owned by the trader, but only because of the inherent value generated by knowledge of the instruction for the original 10,000-share transaction.
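The mechanic is, at heart, a race, and a toy model makes it easier to see. All the latencies, venues and share counts below are invented; the sketch only illustrates how a few milliseconds of routing advantage let a fast intermediary buy ahead of the very order it has just observed.

```python
# Toy model of the latency race described in Flash Boys. All numbers invented.
# A client order reaches Exchange A first; a fast trader co-located there sees it
# and races the rest of the order to Exchanges B and C over quicker private links.
CLIENT_LINK_MS = {"A": 1.0, "B": 4.0, "C": 6.0}  # client's latency to each venue
TRADER_LINK_MS = {"B": 1.5, "C": 2.0}            # trader's faster links out of A
AVAILABLE = {"A": 4_000, "B": 3_000, "C": 3_000} # shares showing at the listed price
ORDER_SIZE = 10_000

filled = AVAILABLE["A"]  # the client fills whatever is resting on Exchange A
for venue in ("B", "C"):
    trader_arrival = CLIENT_LINK_MS["A"] + TRADER_LINK_MS[venue]  # react at A, race ahead
    client_arrival = CLIENT_LINK_MS[venue]                        # order travels directly
    if trader_arrival >= client_arrival:
        filled += AVAILABLE[venue]  # client arrives first and fills normally
    # otherwise the fast trader buys the displayed shares first and re-offers them higher

print(f"Client wanted {ORDER_SIZE:,} shares, received {filled:,}")  # 4,000 in this setup
```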

Interestingly a key component here is the difficulty of verification.

This lack of human insight into what is occurring across these technical networks obscures what is happening so thoroughly that fraud can flourish. In this regard it is very similar to the fraud rife in the ad tech networks for digital display advertising.

This link from the Washington Post sets the scene very well, explaining the nature of the wider problem.

This piece, published by the NYT, is actually an adaptation from Lewis's book and details how the situation was eventually understood and the actions being taken to rectify the problem. It's long but really good.

 

 

In 2006 Alexander Litvinenko was poisoned in London with the lethally radioactive element polonium. If you recall the incident at the time you will remember that it took a long time before anyone was able to understand what was happening to him, what the poisoning agent was and where he was poisoned. The original location was believed to be the Itsu sushi restaurant in Piccadilly, although it eventually turned out to be the Pine Bar in the Millennium Hotel in Grosvenor Square.

The story is much more involved than any 5-minute news segment could possibly have hoped to portray, which I guess is no surprise. Litvinenko himself was no innocent, with a history in the feared FSB and the KGB before that. He took a stand in favour of Boris Berezovsky over Vladimir Putin, something no ordinary man would do, and that may ultimately have contributed to the situation he found himself in.

This article is very long, but if you enjoy a spy story, fictional or otherwise, then you will certainly enjoy this. Something I hadn't understood was how much concern was raised by the presence of polonium in London. The decontamination programme employed a staggering 3,000 people and operated across 50 different sites; 2 simple numbers, but they invoke images of a Men in Black style clean-up operation happening under our noses with no obvious sign whatsoever that it was happening.

The article also describes the strange world of the Russian oligarchs and, as a result, tells part of the recent history of the post-Cold War Russian experience.

It’s worth finding the time to read this. For fun.

How radioactive poison became the assassin’s weapon of choice

 

 

This next piece is also a pleasure to read, if only because it is so refreshing to hear an American president (ex) talking so candidly about the role and position of America in the world today, as well as issues within the scope of American domestic politics. Jimmy Carter never was like all the rest; whether or not you agreed with him, whether you are a natural Republican or a natural Democrat, it is difficult to fault his sincerity and his integrity. There aren't many world leaders that it's easy to say that about.

On the topic of religion and women's rights:

Well, they actually find these verses in the Bible. You know, I can look through the New Testament, which I teach every Sunday, and I can find verses that are written by Paul that tell women that they shouldn’t speak in church, they shouldn’t adorn themselves and so forth. But I also find verses from the same author, Paul, that say all people are created equal in the eyes of God. That men and women are the same before God; that masters and slaves are the same and that Jews and Gentiles are the same. There’s no difference between people in the eyes of God. And I also know that Paul wrote the 16th chapter of Romans to that church and he pointed out about 25 people who had been heroes in the very early church — and about half of them are women. So, you know, you could find verses, but as far as Jesus Christ is concerned, he was unanimously and always the champion of women’s rights. He never deviated from that standard. And in fact he was the most prominent champion of human rights that lived in his time and I think there’s been no one more committed to that ideal than he is.

I'm not a religious man, but I do wish there were more prominent leaders who hold a strong faith and are willing to call out the contradictions within their religious texts, and simply choose the side that is kind, caring and decent instead of aggressive and divisive.

Jimmy Carter

 


3 ways to look at the world, past, present and future

I’ve read this essay twice now, some months apart. On both occasions it has irritated and enthralled me at the same time. It is an odd experience to be enthralled by something (the ideas in the essay) that simultaneously causes feelings of extreme discomfort. There are some great ideas here, but I can’t help feel that the solutions suggested are overly influenced by the very young world view of the American model of governance and administration.

Venkat lays out 2 eye-opening concepts for understanding the way a state seeks to govern a population.

Firstly, a reduction of the state's requirements (in order to exist) to counting the population, conscripting the population and taxing the population. Obvious once you think about it, maybe, but one of those ideas that is huge once you do.

Secondly, that technologies are always the medium by which governments gain the consent of a population to be governed. To be fair, consent is not necessarily the right word, as the first such technology that Venkat mentions is the sword, but the idea (semantics apart) is still a powerful filter for looking at the relationship between a state and its population (even if at times the picture is ugly).

The broad argument is that a population that is inherently gaining mobility (virtual and geographic) causes problems for the current model of governance, which rests significantly on knowing that most people don't move in and out of administrative boundaries on a regular basis. As this mobility increases, goes the argument, and if we accept that a government will still have to govern (counting, conscripting, taxing), then with increased mobility must come increased surveillance. Hence the consent of the surveilled.

It’s a long essay, and you will have to stop and think along the way to follow every idea presented (as usual with Venkat the piece is very dense), but it is worth it.

Upfront I suggested that I had a concern with the piece, and I do, but it does not diminish what is a very rich piece of writing that can spark much involved thinking of your own. I do feel that much of what is written is informed by an immature model of American governance, based on the history of that country. The 50 states of the USA have always struck me as a slightly odd model, built by a young country still obsessed in strange measure with the idea of a wild west. Even as a child, I remember feeling a peculiar dissonance watching the movie Porky's, where to evade the police the high schoolers merely needed to cross county lines.

If, however, you consider these ideas as factors affecting the global stage, we can start to think about how political globalisation will inevitably be influenced. Maybe that’s a 30 year horizon, maybe it’s a 300 year horizon, either way there is lots to get your teeth into here.

Consent of the Surveilled 

 

Crypto-currencies are really nothing new; they've been around as concepts for a long, long while, and as real, commercially exchangeable entities since 2009, when the Bitcoin code was released. Nonetheless this year we are starting to see a significant maturation in how Bitcoin is being reported. More specifically we are seeing the emergence of stories about the blockchain, the idea at the core of the Bitcoin protocol. In particular, significant activity in the VC sector is driving much of this interest.

Bitcoin is often misconstrued as an anonymous currency, which simply isn’t true. In fact the whole secret sauce is that in the absence of a central banking authority your transactions are actually made very public indeed. It is possible to obfuscate your link to any particular bitcoin transaction but this is not an inherent part of the protocol at all.

The excitement is around the idea of distributed consensus systems. The heart of a distributed consensus system, in this reading, is the idea known as the blockchain. Essentially a public ledger of transactions, the blockchain cleverly solves many of the problems that are commonly dealt with by centralised authority.

Distributed consensus as an idea potentially impacts many more sectors than just currency. Any societal entity that depends on a centralised authority could be in the crosshairs; one example is Namecoin, an alternative to DNS that can allocate .bit domain names on a decentralised basis.
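To show what a public ledger secured without a central authority means mechanically, here is a deliberately minimal sketch of a hash-chained ledger. It leaves out the hard parts that make Bitcoin actually work (proof-of-work, peer-to-peer propagation, consensus on the longest chain) and only illustrates how tampering with one block invalidates everything after it.

```python
import hashlib
import json

def block_hash(prev_hash: str, transactions: list) -> str:
    """Hash a block's contents, binding it to the previous block's hash."""
    payload = json.dumps({"prev_hash": prev_hash, "transactions": transactions},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"prev_hash": prev,
                  "transactions": transactions,
                  "hash": block_hash(prev, transactions)})

def is_valid(chain: list) -> bool:
    """Recompute every hash; any edit to an earlier block breaks the chain."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if (block["prev_hash"] != expected_prev
                or block["hash"] != block_hash(block["prev_hash"], block["transactions"])):
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(chain))                       # True
chain[0]["transactions"][0]["amount"] = 500  # rewrite history...
print(is_valid(chain))                       # False
```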

This article does a superb job of explaining the blockchain itself and how it solves the problems of centralisation. If you have any interest in technology, and emergent trends therein, then this is something you need to read.

How the Bitcoin Protocol Actually Works

 

In contrast to Venkat's pragmatic reading of the oblique structures that inform modern governance models, this essay from Yanis Varoufakis of the University of Athens (currently resident at the University of Texas at Austin) explores the question of what the internet can do to 'repair' democracy.

The hunch underpinning this paper is that, behind voter apathy and the low participation in politics, lays a powerful social force, buried deeply in the institutions of our liberal democracies and working inexorably toward undermining democratic politics. If this hunch is right, it will take a great deal more to re-vitalise democracy than a brilliant Internet-based network linking legislators, executive and voters.

His approach is informed by an examination of the original democracy of Athens, paying attention to both its structures and its hypocrisies (Athens was a state built on slavery), and how that differs from the underpinning historic influences that structured modern western liberal democracies.

His aim is not to discourage those that seek to use modern technologies to reinvigorate democracy, but to point their innovations in the right direction. Thus he explores the perceived issues with direct democracy, the idea that we can all vote on all the issues, and the subsequent conclusion that voter apathy is not a bug of a governance model built on the needs of a mercantilist ruling class, but a feature.

In contrast to the Athenian model, whereby the poor but free labourer was empowered equally alongside the richest man, our system has evolved to devalue citizenship while massively extending its reach (no slaves in liberal democracies, at least not mandated by law), and has simultaneously transferred real power from the political sphere to the economic. In such a system, solutions that simply make the political connection between our governing institutions and the people more efficient will likely make little difference.

In this reading the changes required are economic, not political, with subsequent change in the political sphere only plausible once such changes within the economy are in place.

This may not be the most optimistic reading of today's political landscape, but it is a well-constructed and well-researched argument worthy of your time. A must read.

Can the Internet Democratise Capitalism


6 speculations on the next financial crash

There has been a small spate of articles in the last 3 months or so suggesting that we might be in for another horrible global financial shock this year. I’m in no position to judge these opinions with any certainty, but do note that they are coming from both the right and left flanks of modern political discourse. Recently I have increasingly taken the position that economists, when considered as a singular group, are in general no better at such predictions than almost any other group of people, economics being, chiefly, an ideology masquerading as a science in its common usage (to be clear economic thought could/should be scientifically applied, but in most expositions isn’t). So, when I see the right and the left making similar claims I take notice.

Causing some jitters


First up we have the Wall Street Journal's MarketWatch relaying the eerie similarities between the movement of the Dow Jones Industrial Average (DJIA) in 1928/29 and the movements of the DJIA today. To my untrained eye it certainly does look very similar; of more concern, however, is the evolution of how Wall Street traders are commenting. No one is panicking but, as is noted, the level of concern is rising the longer the similarities continue. The chart was first circulated in November 2013 and wasn't taken too seriously by seemingly anyone. Now, the rhetoric is more cautious.

One of the market gurus responsible for widely publicizing this chart is hedge-fund manager Doug Kass, of Seabreeze Partners and CNBC fame. In an email earlier this week, Kass wrote of the parallels with 1928-29: “While investment history doesn’t necessarily repeat itself, it does rhyme.”

http://www.marketwatch.com/story/scary-1929-market-chart-gains-traction-2014-02-11

 

From the Guardian we have Ha-Joon Chang, currently Reader in the Political Economy of Development at the University of Cambridge and author of “23 Things They Don’t Tell You About Capitalism”.

His point is quite succinct. In the UK and the US we are currently seeing stock markets at record highs, despite the underlying economies performing at levels that have not recovered to 2007 levels. Chang's key point of reference is growth in per capita income, a data point that speaks to the underlying real economy (I find it strange that economists of all stripes talk of the real economy as an aside; surely it should be the main focus). More scarily, he makes the observation that at this point no one is offering any kind of narrative to explain these huge performance numbers. This differs starkly from the dot-com bubble, explained away by tech innovation, and the 2008 crash, explained by financial innovation leading to better risk management. The commenters were wrong in 2000 and in 2008, as the bubbles burst causing great losses, but at least someone believed in the market movements. This time it seems that no one does.

http://www.theguardian.com/commentisfree/2014/feb/24/recovery-bubble-crash-uk-us-investors

 

David Cay Johnston, writing for Al Jazeera, takes a stab at some of the crazy changes in valuation metrics that have become prevalent over the last 15 years or so. In short this is an intelligent and in-depth (although very accessible, and not too long) look at the tendency to massively value tech stocks that seemingly turn over no profits. Superficially this seems similar to what happened in 2000; however, he also shows that there is some deep complicity between the journalistic sector and the speculative elements of today's trading universe, allied to a degradation of traditional measures of value, such as price/earnings ratios being superseded by price to revenue. For example, Facebook's traditional P/E is currently around 113, against a century-long S&P 500 average of 15, but its price-to-revenue ratio is 26 ($5bn of revenue against a market cap of $132bn), which is seemingly more palatable to speculators.

PE ten year average profits
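The two Facebook ratios quoted above can be reproduced with simple arithmetic (figures as quoted in the paragraph; the implied earnings number is derived from them rather than independently sourced).

```python
# Reproducing the valuation ratios quoted above. Figures are those in the text;
# the implied earnings number is derived, not independently sourced.
market_cap = 132e9  # Facebook market cap, USD
revenue = 5e9       # trailing revenue, USD

price_to_revenue = market_cap / revenue
print(f"Price/revenue: {price_to_revenue:.0f}")  # ~26, as quoted

pe_quoted = 113
implied_earnings = market_cap / pe_quoted
print(f"Implied earnings at P/E 113: ${implied_earnings / 1e9:.1f}bn")  # ~$1.2bn
# Against a long-run S&P 500 average P/E of ~15, a P/E of 113 prices in
# many years of profit growth that has not yet happened.
```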

The article makes the point that investors who push money into stocks like Twitter, which has yet to book a red cent in profit, are speculators.

Would that we could bring back Benjamin Graham, whose 1949 book, “The Intelligent Investor,” explains how to value stocks. Warren Buffett calls it “by far the best book on investing ever written.”

Graham looked at the profits companies earned, not the promises of what they might someday make. That is, he was an investor.

Markets can benefit from speculators, who take risks that prudent people and institutions should avoid, but speculators should represent the edges, not the core of the market.

http://america.aljazeera.com/opinions/2013/12/stock-market-techbubble.html

 

Back to the WSJ's MarketWatch. This post is a little strange, almost an homage to the deep fallibility of the human animal in totality, not just in regard to the prediction of financial crashes. The premise is very straightforward: there are always warnings, there have always been warnings, before the 1929 crash, before 2000, before 2008, all the same. And they were all ignored. All of them. And it will be the same this time. It's a heartfelt characterisation of a human process that will play out according to pre-ordained personal prejudice and bias (naturally a bull, naturally a bear), regardless of what the rational analysis tells us.

Yes, you will read new warnings, like “ Soros doubles a bearish bet on the S&P 500, to the tune of $1.3 billion.” You may double down too. Or do nothing. You may listen to Hulbert, Gross, Gundlach, Ellis, Shilling, Roubini and Schiff. And still do nothing. Or something. You will listen, take it all in, and do what you always do. Your way, based not so much on all the warnings, the facts, evidence, predictions. Rather you’ll make your own decisions based on some inner consensus of voices that always guides you from deep inside your brain.

http://www.marketwatch.com/story/crash-of-2014-like-1929-youll-never-hear-it-coming-2014-02-19?dist=tbeforebell

 

Talking of George Soros and his $1.3 billion put, here are 2 opinions, first from the WSJ again and secondly from the strange folk at Zero Hedge (a site that is popular with aggressive opinion; nearly all columns are written by "Tyler Durden", a nom de plume designed, with a somewhat wicked sense of humour, to enable industry insiders to pontificate anonymously without fear of jeopardising their employment). Both articles are strangely inconclusive, actually; it may be a hedge, or it may be a sign that Soros is concerned about China and is anticipating a big fall. Still, an interesting marker to keep an eye on; Soros, after all, has a history of getting big bets right.

The Soros Put


http://blogs.marketwatch.com/thetell/2014/02/17/soros-doubles-a-bearish-bet-on-the-sp-500-to-the-tune-of-1-3-billion/

http://www.zerohedge.com/news/2014-02-17/soros-put-hits-record-billionaires-downside-hedge-rises-154-q4-13-billion

 

Finally, a somewhat left-field and unscientific historical observation from David Brin.

His suggestion is that 1914 and 1814 were the real starting points of their respective centuries, with the Congress of Vienna in 1814 leading to an extended era of peace on the European continent, shattered rudely by the horror of WW1 in 1914. As a thought experiment, and Brin is clear that this whole exercise is almost an amusing aside, we can speculate about what might flow from 2014. It could go both ways. Let's hope common sense prevails.

http://www.bloombergview.com/articles/2013-12-31/what-if-the-21st-century-begins-in-2014-


4 quick bits

I was reading a Reddit thread this week complaining that people, generally, are totally unaware of the huge advances being made in technology right now, today. The thread was in the futurology subreddit, so it's understandable that some frustration was being vented. The debate was not terribly sophisticated, if I'm being honest, and the solutions discussed, sensible as they were, simply came down to showing people the articles instead of breathlessly telling them what was happening. One of the existing technologies discussed was driverless cars, something we are all probably aware of through Google's activities in championing the concept. One of the posters on the thread pointed out that Rio Tinto, the global mining corporation, is already using driverless trucks in its mining operations in Australia. He claimed that 30% of their trucks were driverless but didn't link to a source. I couldn't verify that exact number, but I did find the following article, which does indeed back up the claim that driverless technology is not simply a west coast tech hipster near-future but actually part of today's industrial reality.

Here is the link

For some time now I have been of the opinion that the generational change in technology adoption would be followed by a generational change in the understanding of technology. So while it's obvious that today's teenagers are growing up in a world that is digital (no digital/analogue divide for these guys, just one reality that they don't question) and as such adopt technology without thought, this article argues that the increasing invisibility of the computing layer is actually creating a generation with no greater knowledge of the underlying functionality than today's 40–60 year olds. This is worrying. Everything in today's teenagers' futures will be governed by code; right now, even, your average new car contains over 100 million lines of code. There is an important distinction to be made here between using computers and understanding computers, between the graphical user interface and the command line interface. I know that most of you will fall on the side of those that don't understand computing in the way that the author does. That includes me, and I am more up to speed than most people my age; I couldn't install and use a Linux distro, for example (I even had to use Google to make sure I used that phrase correctly), although it is on my list of things to learn. The concerns are huge because computers will affect everything in the future, not least the quality and effectiveness of our laws. Take, for example, Cameron's ridiculous porn firewall. Nothing wrong with his stated intention to protect children, but everything wrong with his solution, simply because it will not protect children while simultaneously creating a host of negative externalities that have nothing to do with child access to the internet. Moreover, by creating a false sense of safety, a whole new raft of parents will be given an excuse to ignore their parental responsibilities. I was hoping that 20 years hence we would have a computing-literate population and these problems would be a temporary issue. Apparently not. Doh.

Here is the link

This is awesome. Money, and the scale of it, presented in a fantastic and huge graphic by Randall Munroe at xkcd. Be warned: this could swallow an hour of your life before you've even realised. It must have taken an enormous amount of time to research, let alone draw. Full of mind-blowing observations.

Here is the link

As our current coalition administration ponders a potential loss of world influence as a result of the parliamentary no vote on military action in Syria (and kudos to Cameron for not charging ahead anyway, not something Blair would likely have even considered), I'd suggest they have a look at this interesting infographic produced by the MIT Technology Review. It's a cursory analysis of 8 global technology innovation hubs, and even though the data points aren't terribly wide-ranging they don't paint a pretty picture for Britain. The key scare stat for me is that of all the hubs listed (Silicon Valley, Boston, Paris, Israel, Skolkovo, Beijing, Bangalore and London) we have the smallest available budget (whether private or governmental money) by a long margin. There are only 2 hubs that don't have funding potential in the billions, London and Bangalore, and at 300 million to London's 161 million Bangalore has almost twice as much going on as London. Simply not good enough.

Here is the link