3 ways to look at the world, past, present and future

I've read this essay twice now, some months apart. On both occasions it has irritated and enthralled me at the same time. It is an odd experience to be enthralled by something (the ideas in the essay) that simultaneously causes feelings of extreme discomfort. There are some great ideas here, but I can't help feeling that the solutions suggested are overly influenced by the very young world view of the American model of governance and administration.

Venkat lays out two eye-opening concepts for understanding the way a state seeks to govern a population.

First, a reduction of the state's requirements (in order to exist) to three functions: counting the population, conscripting it and taxing it. Obvious once you think about it, maybe, but one of those ideas that is huge once you do.

Second, that technologies are always the medium by which governments gain the consent of a population to be governed. To be fair, consent is not necessarily the right word, as the first such technology that Venkat mentions is the sword, but the idea (semantics apart) is still a powerful filter for looking at the relationship between a state and its population (even if at times the picture is ugly).

The broad argument is that a population that is inherently gaining mobility (virtual and geographic) causes problems for the current model of governance, which relies heavily on knowing that most people in a population don't move in and out of administrative boundaries on a regular basis. As this mobility increases, goes the argument, and if we accept that a government will still have to govern (counting, conscripting, taxing), then with increased mobility must come increased surveillance. Hence the consent of the surveilled.

It’s a long essay, and you will have to stop and think along the way to follow every idea presented (as usual with Venkat the piece is very dense), but it is worth it.

Upfront I suggested that I had a concern with the piece, and I do, but it does not diminish what is a very rich piece of writing that can spark much involved thinking of your own. I do feel that much of what is written is informed by an immature model of American governance, based on the history of that country. The 50 states of the USA have always struck me as a slightly odd model built by a young country still strangely obsessed with the idea of a wild west. Even as a child, I remember feeling a peculiar dissonance watching the movie Porky's, where to evade the police the high schoolers merely needed to cross county lines.

If, however, you consider these ideas as factors affecting the global stage, we can start to think about how political globalisation will inevitably be influenced. Maybe that's a 30 year horizon, maybe it's a 300 year horizon; either way there is lots to get your teeth into here.

Consent of the Surveilled 

 

Crypto-currencies are really nothing new; they've been around as concepts for a long, long while, and as real, commercially exchangeable entities since 2009, when the Bitcoin code was released. Nonetheless, this year we are starting to see a significant maturation in how Bitcoin is being reported. More specifically, we are seeing the emergence of stories about the blockchain, the idea at the core of the Bitcoin protocol. In particular, significant activity in the VC sector is driving much of this interest.

Bitcoin is often misconstrued as an anonymous currency, which simply isn't true. In fact the whole secret sauce is that, in the absence of a central banking authority, your transactions are made very public indeed. It is possible to obfuscate your link to any particular Bitcoin transaction, but this is not an inherent part of the protocol at all.

The excitement is around the idea of distributed consensus systems. The heart of a distributed consensus system, in this reading, is the idea known as the blockchain. Essentially a public ledger of transactions, the blockchain cleverly solves many of the problems that are commonly dealt with by centralised authority.
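
To make the ledger idea concrete, here is a minimal, illustrative sketch of the hash-chaining at the heart of a blockchain. It is a toy under my own assumptions, not the real Bitcoin data structures (it omits mining, signatures and the peer-to-peer consensus layered on top), but it shows why a public ledger of this shape is tamper-evident:

```python
import hashlib
import json
import time

def block_hash(block):
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(transactions, prev_hash):
    """Create a block that commits to the previous block's hash."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }

# Each block embeds the hash of its predecessor, so altering any
# historical transaction changes every hash that follows it.
chain = [new_block(["genesis"], prev_hash="0" * 64)]
chain.append(new_block(["alice -> bob: 5"], block_hash(chain[-1])))
chain.append(new_block(["bob -> carol: 2"], block_hash(chain[-1])))

def verify(chain):
    """Check that every block still points at its true predecessor."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

print(verify(chain))   # True
chain[1]["transactions"][0] = "alice -> mallory: 5"  # tamper with history
print(verify(chain))   # False: the edit breaks the chain of hashes
```

Anyone holding a copy of the ledger can run the same check, which is what lets the network dispense with a central authority as the arbiter of history.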

Distributed consensus as an idea potentially impacts many more sectors than just currency. Any societal entity that depends on a centralised authority could be in the crosshairs. One example is Namecoin, an alternative to the DNS, which can allocate .bit domain names on a decentralised basis.

This article does a superb job of explaining the blockchain itself and how it solves the problems of centralisation. If you have any interest in technology, and emergent trends therein, then this is something you need to read.

How the Bitcoin Protocol Actually Works

 

In contrast to Venkat's pragmatic reading of the oblique structures that inform modern governance models, this essay from Yanis Varoufakis of the University of Athens (currently resident at the University of Texas at Austin) explores the question of what the internet can do to 'repair' democracy.

The hunch underpinning this paper is that, behind voter apathy and the low participation in politics, lies a powerful social force, buried deeply in the institutions of our liberal democracies and working inexorably toward undermining democratic politics. If this hunch is right, it will take a great deal more to re-vitalise democracy than a brilliant Internet-based network linking legislators, executive and voters.

His approach is informed by an examination of the original democracy of Athens, paying attention to both its structures and its hypocrisies (Athens was a state built on slavery), and how that differs from the underpinning historic influences that structured modern western liberal democracies.

His aim is not to discourage those that seek to use modern technologies to reinvigorate democracy, but to point their innovations in the right direction. Thus he explores the perceived issues with direct democracy, the idea that we can all vote on all the issues, and the subsequent conclusion that voter apathy is not a bug of a governance model built on the needs of a mercantilist ruling class, but a feature.

In contrast to the Athenian model, in which the poor but free labourer was empowered equally alongside the richest man, our system has evolved to devalue citizenship while massively extending its reach (no slaves in liberal democracies, at least none mandated by law), and simultaneously to transfer real power from the political sphere to the economic. In such a system, solutions that simply make the political connection between our governing institutions and the people more efficient will likely make little difference.

In this reading the changes required are economic, not political, with subsequent change in the political sphere only plausible once such changes within the economy are in place.

This may not be the most optimistic reading of today's political landscape, but it is a well constructed and researched argument worthy of your time. A must read.

Can the Internet Democratise Capitalism


Gravitational waves – Andrei Linde and Inflation

You might recall a gentle excitement reported in the news last month about a huge breakthrough in physics. The discovery being reported was the confirmation of primordial gravitational waves, which, to be fair, may not mean much to you or anyone else not involved in cutting edge cosmology and physics. Even if you have little interest in the detail of the discovery and its implications, I implore you to at least watch the first video below. It is the moment that Andrei Linde is surprised with the news that such waves have been confirmed. Linde was the physicist who initially proposed the idea of inflation (the huge expansion that the universe went through in the tiniest fraction of a second after the big bang), which is now confirmed by the discovery of these waves. Even if you dislike physics, watching a man and his wife receive confirmation of their life's work is a genuine joy.

 

If that has whetted your appetite to understand what the fuss is all about then this video from Veritasium does a good job of explaining the discovery and why it is so fundamental to our understanding of the big bang and subsequently the universe.

 

 


Small phones, mobile computing, innovative disruption and speculation


Apps apps apps

I don’t like getting my content from apps. I use the mobile web instead if I have to, but the majority of my web access is on a laptop with a nice big screen, which means my prejudice doesn’t hurt me too much.

I know that that sets me apart from the majority of smartphone users, but as I said before, my primary web access point isn’t my phone so the limitations of the mobile web that have enabled app culture don’t really hurt me.

I also believe that it is a culture with a short shelf life.

It is a system with an unusual degree of friction in comparison to an easily imagined alternative. The whole native apps versus web apps argument that has been running for years now is largely decided, at any one point in time, by the state of play in mobile technology and the user requirements of said technology. I'm pretty certain the technology will improve faster than our user requirements will grow more complicated.

That easily imagined alternative is pretty straightforward too. A single platform, open and accessible to all, built on open protocols, that can support good enough access to hardware sensors, is served by effective search functionality and doesn’t force content to live within walled gardens.

The web, in other words. Not fit for purpose today (on mobile), but it will be soon enough.

Monetisation possibly remains a challenge, but I'm also pretty sure we will eventually learn to pay for the web services that are worth paying for. Meaningful payment paradigms will arrive; the profitless start-up model/subsidised media (which is what almost all these apps are) can't last. We will get over ourselves and move beyond the expectation of free.

Should everything be on the web? No, not at all. For me there is a fuzzy line between functional apps and content apps. The line is fuzzy because some content apps contain some really functional features, and vice versa. Similarly, some apps provide function specifically when a network connection is unavailable (Pocket, for example). I draw the line between content and function only because I strongly believe that the effective searchability of content, via search engines or via a surfing schema, and the portability of that content via hyperlinks, is the heart of what the web does so well and generates so much of its value. If all our content gets hidden away in silos and walled gardens then it simply won't be read as much as if it were on the open web. And that would be a big, perverse shame.

Why can't we just publish to the stream? Because it is a weak consumption architecture, it is widely unsearchable (although I suspect this can be changed, once we move beyond the infinite scroll), and because you don't own it (don't build your house on someone else's land). That's another whole essay and not the focus of this one.

I’ve felt this way for a long time so I am happy to say that finally some important people are weighing in with concerns in the same area. They aren’t worried about the same specific things as me, but I can live with that if it puts a small dent in the acceleration of app culture.

Chris Dixon recently published this piece calling out the decline of the mobile web. I think he is actually overstating the issue, but only because the data doesn't distinguish effectively between apps that I believe should be web based and those I am ambivalent about. He cites two key data sources.

[Chart: ComScore, mobile users vs desktop users, 2014]

The ComScore data for total web users vs total mobile users is solid enough, showing that mobile overtook desktop in the early part of this year. Interestingly, the graphic also shows that desktop user volumes are still rising, even if their rate of growth lags behind mobile. Not many people are taking this on board in the rush to fall in love with the so-called post-PC era. It's important because, to really understand the second data point, provided by Flurry, you have to look at the very different reasons people are using their mobile computers.

Flurry tells us that the amount of time spent in apps, as a % of time on the phone/internet itself, rose from 80% in 2013 to 86% (so far) in 2014. That’s a lot of time spent in apps at the cost of time spent on the mobile web.

But the more detailed report shows that the headline number is less impressive than it first seems. All that time in apps, it turns out, isn't actually at the cost of time spent on the web; not all of it, anyway.

[Chart: Flurry, time spent on mobile by app category]

Of the 86%, 32% is gaming, 9.5% is messaging, 8% is utilities and 4% is productivity. That totals 53.5% of total mobile time, and 62% of all time spent in apps specifically. The fact that gaming now occurs within networked apps on mobile devices, rather than as standalone software on desktops, only tells us that the app universe has grown to include new functionality not historically associated with the desktop web. That functionality was never taken from the mobile web, so in this respect the mobile web has not declined. As I said, I agree with the thrust of the post; I just think the situation is less acute.
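
For transparency, here is the arithmetic behind those two figures, using Flurry's published category shares:

```python
# Flurry's category shares, as percentages of total mobile time
non_web_like = {"gaming": 32.0, "messaging": 9.5, "utilities": 8.0, "productivity": 4.0}

share_of_mobile_time = sum(non_web_like.values())        # 53.5% of all mobile time
share_of_app_time = share_of_mobile_time / 86.0 * 100    # ~62% of the time spent in apps
print(share_of_mobile_time, round(share_of_app_time))    # 53.5 62
```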

Chris Dixon's main concern is for the health of his business (he's a partner at Andreessen Horowitz, a leading tech VC), and he sees the rise of two great gatekeepers, Google and Apple, the owners of the two app stores that matter, as a problematic barrier to innovation, or rather a barrier to the success of innovation (there is clearly no shortage of innovation itself).

He's not alone; Mark Cuban has made a similar set of observations.

Fred Wilson too, although he is heading in an altogether different and more exciting direction, which I will follow up in a different post (hint: blockchain, distributed consensus and personal data sovereignty). Still, this quote from Fred's post sums up the situation rather well from the VC point of view.

My partner Brad hypothesized that it had something to do with the rise of native mobile apps as the dominant go to market strategy for large networks in the past four to five years. So Brian pulled out his iPhone and I pulled out my Android and we took a trip through the top 200 apps on our respective app stores. And there were mighty few venture backed businesses that were started in the past three years on those lists. It has gotten harder, not easier, to innovate on the Internet with the smartphone emerging as the platform of choice vs the desktop browser.

The app stores are indeed problems in this regard; I don't disagree, and will leave it to Chris and Mark's posts to explain why. I am less concerned about this aspect partly because it's not my game; I don't make my money funding app building or building apps. But mostly I'm unconcerned because I really believe that (native) app culture has to be an adolescent phase (adolescent in as much as the internet itself is a youngster, not that the people buying and using apps are youngsters).

If I am right and this is a passing phase can we make some meaningful guesses as to what might change along the way?

I've been looking at mobile through the eyes of Clay Christensen's innovator's dilemma recently, which I find to be a useful filter for examining the human factors in technology disruptions.

Clearly, if mobile is disrupting anything it is disrupting desktop computing, and true to form the mobile computing experience is deeply inferior to that available on the desktop or laptop. There are, actually, lots of technical inferiorities in mobile computing, but the one I want to focus on is the simplest of all: size. The mobile screen is small.

Small small small small. I don’t think it gets enough attention, this smallness.

If it's not clear where I'm coming from, small is an inferior experience in many ways, although it is, of course, a prerequisite for the kind of mobility we associate with our phones.

Ever wondered why we still call what are actually small computers 'phones'? Why we continually, semantically, place these extremely clever devices in the world of phones and not computers? After all, the phone itself is really not very important; it's a tiny fraction of the overall functionality. Smartphones? That's the bridge phrasing, still tightly tied to the world of phones, not computers. We know they are computers, of course; it's just been easier to drive their adoption through the marketing semantics of telephony rather than computing. It also becomes easier to dismiss their limitations.

Mobile computers actually predate smartphones by decades; they were, and still are, called laptops.

Anyway, I digress. For now let's run with the problems caused by small computers. There are two big issues, which manifest in a few different ways: screen size and the user interface.

Let's start with screen size (which, of course, is actually one of the main constraints that hampers the user interface). Screen size is really important for watching TV, or YouTube, or Netflix, etc.

We all have a friend, or two, who gets very proud of the size of their TV. The development trend is always towards bigger and bigger TV screens. The experience of watching sport, or a movie, on a big TV is clearly a lot better than on a small one (and cinema is bigger again). I don't need to go on, it's a straightforward observation.

No-one thinks that the mobile viewing experience is fantastic. We think it's fantastic that we can watch a movie on our phones (I once watched the first half of Wales v Argentina on a train because I was not able to be at home; I was stoked), but that is very different from thinking it's a great viewing experience. The tech is impressive, but then so were those little tiny 6 inch by 6 inch black and white portable TVs that started to appear in the '80s.

Eventually all TV (all audio visual) content will be delivered over IP protocols. There are already lots of preliminary moves in this area (smart TVs, Roku, Chromecast, Apple TV) and not just from the incumbent TV and internet players. The console gaming industry is also extremely interested in owning TV eyeballs as they fight to remain relevant in the living room. Once we watch all our AV content via internet protocols the hardware in the front room is reduced to a computer and a screen, not a TV and not a gaming console. If the consoles don’t win the fight for the living room they lose the fight with the PC gamers.

Don't believe me? Watch this highlights reel from the Xbox One launch… TV TV TV TV TV (and a little bit of Halo and COD).

The device that the family watches moving images on will eventually be nothing but a screen. A big dumb screen.

The smart functionality, the computing functionality that navigates the protocols to find and deliver the requisite content, well, that will be provided by any number of computing devices. The screen will be completely agnostic, an unthinking receptor.

The computer could be your mobile device, but it probably won't be. As much as I have derided the phone part of our smartphones, they are increasingly our main form of telephony. As attractive as the thought is, we are unlikely to want to take our phones effectively offline every time we watch the telly. Even if at some stage mobile works out effective multi-tasking, we will still likely keep our personal devices available for personal use.

What about advertising? That's another area where bigger is just better. I've written about this before: all advertising channels trend towards larger, dominant experiences when they can. We charge more for the full page press ad, the homepage takeover, the enormous poster over the little high street navigational signs telling you that B&Q is around the corner. Advertising craves attention and uses a number of devices to grab or force it. Size is one of the most important.

Mobile is the smallest channel we have ever tried to push advertising into. It won’t end well.

The Flurry report tries to paint a strong picture for mobile advertising, but it doesn't do a very good job; the data just doesn't help them out. They start with the (sound) idea that ad spend follows consumption, i.e. where we spend time as consumers of media is where advertisers buy media, in roughly similar proportions. But the only part of the mobile data they present that follows this rule is Facebook, the only real success story in the field of mobile display advertising. The clear winner, as with all things digital, is Google, quite disproportionately taking the market with its intention-based ad product delivered via search.

[Chart: media consumption vs ad spend, by medium]

There are clouds gathering in the Facebook mobile advertising story too. Brands have discovered that it is harder to get content into their followers' newsfeeds. The Facebook solution? Buy access to your followers.

That’s a bait and switch. A lot of these brands are significant contributors to the whole Facebook story itself. Every single TV ad, TV program, media columnist or local store that spent money to devote some part of their communication real estate to the line “follow us on Facebook” helped to build the Facebook brand, helped to make it the sticky honeypot that it is today.

And now if you want to leverage those communities, you have to pay again. Ha.

Oh, but there's value in having a loyal, active base of followers, I hear the acolytes say. Facebook is just the wrapper. OK, then in that case ask them all for an email address and be guaranteed that 100% of your communications will reach them. Think that will work? Computer says no.

Buying Facebook friends? That’s starting to whiff too.

That leaves Google/search. The only widely implemented current revenue model that has a future on the mobile web. Do you know what would make the Google model even more effective? A bigger screen, and a content universe on the open web.

To be fair the idea that we must pursue pay models is starting to take hold in the communities that matter. This is good news.

Read this piece. It's as honest a view as you will find within the advertising community, and it still doesn't get that the mobile medium is fundamentally flawed simply because it is small. There is no get out of jail free card for mobile, unless it gets bigger.

Many of the businesses we met have a mobile core. A lot of them make apps providing a subscription service, some manage and analyze big data, and others are designed to improve our health care system. The hitch? None of them rely on advertising to generate revenue. A few years ago, before the rise of mobile, every new startup had a line item on its business plan that assumed some advertising revenue.

What did you think about the Facebook WhatsApp purchase? It can't have been about buying subs; it's inconceivable that most of the 450m WhatsApp users weren't already on Facebook. Similarly, even though there is an emerging markets theme to the WhatsApp universe, Facebook hasn't been struggling to pick up adoption in new markets. My money is that Zuckerberg bought a very rare cadre of digital consumer: consumers willing to pay for a service that can be found in myriad other places for free. WhatsApp charges $1 a year; a number of commenters have noted this and talked of Zuckerberg purchasing a revenue line. I think they have missed the wood for the trees. That $1 a year is good money when you have 450m users, for sure. It's even better when you have nearly a billion, and many of them are getting fed up with spammy newsfeeds. What will you do if you get an email that says "Facebook, no ads ever again, $5"? He doesn't just get money, he also gets to concentrate on re-building the user experience around consumers, not advertisers.

The WhatsApp purchase is the first time I have tipped my metaphorical hat in Zuckerberg's direction. The rest is morally dubious crud that fell into his lap; this is a decently impressive strategic move. I still won't be signing up anytime soon though.

Ok. Small. Bad for advertising. What else?

Small devices also have necessarily limited user interfaces. The qwerty keyboard, for all of its random derivation, works really well. There is no other method that I use that is as flexible and as speedy for transferring words and numbers into a computer.

The touchscreen is impressive though; it is the device that, for my money, transformed the market via the iPhone. My first thought when I saw one was that I would be able to actually access web pages and make that experience useful through the pinch and zoom functionality. I had a BlackBerry at the time, with the little scroll wheel on the side. Ugh. But even that wonder passed, now that we optimise the mobile web to fit on small screens as efficiently as possible. No need to pinch and zoom, just scroll up and down, rich consumption experience be damned.

Touchscreens are great, they just aren’t as flexible as a keyboard. As part of an interface suite alongside a keyboard they are brilliant. As an interim interface to make the mobile experience ‘good enough’, again brilliant. As my primary computing interface … no thanks.

When the iPad arrived, the creative director at my agency, excited like a small child as he held his tablet for the first time, decided to test its ability as a primary computing interface. He got the IT department to set up a host of VPNs so that he could access his networked folders via the internet (no storage on the iPad) and then went on a tour of Eastern Europe without his MacBook.

He never did it again. He was honest about his experience, it simply didn’t work.

What about non text creativity? Well, if you can, have a look inside the creative department at an advertising agency. It will be wall to wall Apple. But they will also be the biggest Macs you have ever seen. Huge glorious screens, everywhere.

You can do creative things on a phone, or a tablet but it is not the optimised form factor. It seems silly to even have to type that out.

If small is bad, and if mobile plays out according to Christensen's theories (and remember this is just speculation), then eventually mobile has to get bigger. Considering that we have already reduced the acceptable size of tablets in favour of enhanced mobility, this seems like quite a claim.

For my money the mobile experience goes broadly in one of two ways.

In both scenarios it becomes the primary home of deeply personal functionality and storage: your phone, your email (which is almost entirely web accessible these days), your messenger tool, maps, documents, music, reminders, life logging tools (how many steps did you step today?) as well as trivial (in comparison) entertainments such as the kind of games we play on the commute home.

The fork derives from what happens with desktop computing: the computing of creation and the computing of deeper, better consumption experiences.

We either end up in a world with such cheap access to computing power that everyone has access to both mobile computing and desktop computing, via myriad access points, and the division of labour between the two is decided purely by which platform best suits a task. Each household might have 15 different computers (and I'm not even thinking of wearables at this stage) for 15 different tasks. This is a bit like the world we have today, except that desktop access is much more available and cheap, and not always on a desk.

Alternatively, the mobile platform does mature to become our primary computing device, a part of almost everything we do. This is much more exciting. This is the world of dumb screens that I mentioned earlier. If mobile is to fulfil its promise and become the dominant internet access point then it must transcend its smallness. There are ways this could happen.

We already have prosthetics. Every clip on keyboard for a tablet is a prosthetic.

We already have plugins. Every docking station for a smartphone or a tablet is a plugin.

The quality of these will get better but they still destroy the idea of portability. If you have to haul around all the elements of a laptop in order to have effective access to computing you may as well put your laptop in your bag.

I can see (hold on for the sci-fi wishlist) projection solutions for both keyboards and screens, where the keyboard is projected onto a horizontal surface and the screen onto a vertical surface. Or what about augmented and virtual reality? Google Glass is a player, although the input interface is a dead duck for full scale computing, but how about an optimised VR headset that isn't the size of a shoebox? Zuckerberg might have been thinking about the metaverse when he bought Oculus, but there will potentially be other interim uses before humanity decides to migrate to the virtual à la Ready Player One; VR could be a part of that.

The more plausible call is a plugin / cheap access model hybrid. Some computing will always be use-case specific. I think it's likely that a household's primary AV entertainment screen, for example, the big one in the living room, will have its own box, maybe controlled by a mobile interface but configured such that your mobile platform isn't integral to media consumption. The wider world, though, I think will see dumb screens and intelligent mobile devices working together everywhere. Sit down, slot in your mobile: nice big screen, all your files, all your preferences, a proper keyboard (the dumb screen is largely static, of course, so it comes with a keyboard) and a touchscreen too, all in all a nice computing experience.

There are worlds of development to get there, of course. Storage, security and battery life, among other areas, will all need a pick-up, but if we have learnt anything since 1974 it's that computers do get better.

I really woke up to Christensen's theory with the example of the transistor radio, which was deeply inferior to the big old valve table tops that dominated a family's front room, often in pride of place. The transistor radio was so poor that the kids who used them needed to tightly align the receivers with the broadcast towers. If you were facing south when you needed to face north, you were only going to get static. But that experience was good enough. Not for mum and dad, who still listened to the big radio at home, but for the teenagers this was the only way they could hear the songs they wanted to hear without adults hovering. All of those limitations were transcended eventually and in the end everyone had their own radio. One in the kitchen, one in every teenager's room, the car, the garage, portable and not portable. Radios eventually got to be everywhere, but none of them were built around valves like those big table top sets.

Mobile computing is a teenager’s first wholly owned computing experience (I’m not talking about owning the hardware). It happens when they want it to, wherever they are. It’s also been many an older person’s first wholly owned computing experience, and it’s certainly been the first truly mobile computing experience for anyone. I might have made the trivial observation that laptops were the first mobile computers, and they were, but you couldn’t pull one out of your pocket to get directions to the supermarket.

Christensen's theory makes many observations. Among them are that the technology always improves and ends up replacing the incumbent that it originally disrupted. The users of the old technology eventually move to the new technology; the old never wins.

We aren't going to lose desktop computing though. Or at least we aren't going to lose the need states that are best served by desktop computing; if you want to write a 2,000 word essay, your phone is always going to be a painful tool to choose. But you might be able to plug it into an easily available dumb workstation and write away in comfort instead.

At least let’s hope so.


Speaking Truth to Power

Quinn Norton is a name that hovered on the edge of my consciousness for a chunk of time. She dated Aaron Swartz for three years, and in the furore after his death her name became part of that story as it emerged that prosecutors had put pressure on her to offer testimony against him, something she did not do.

At this year's Chaos Communication Congress she spoke, with Eleanor Saitta, on the topic "No Neutral Ground in a Burning World", which gave an historical perspective on governmental surveillance and how technology is impacting the state's need for legibility.

 

 

Then finally I found this piece, "A Day of Speaking Truth to Power, Visiting the ODNI". The ODNI is the Office of the Director of National Intelligence: spook central, a scary place for anyone, let alone a journalist with a reputation for speaking ill of the intelligence community (IC). It's a great read for so many reasons. It paints a picture of the IC that is both confirming and revelatory, and it also surfaces a number of interesting ideas that could have been brought up in a completely different context but found their way into this conversation instead.

When I read longer form essays I often take one quote, sometimes two, and send them to my email for recycling into a post (credited, of course), or to remind myself to dig deeper into a topic, or just to add a highlight to a particular piece, to make it stand out against everything else I am reading.

With this essay I sent myself nine different quotes; I've listed five of them below. Please read all of it though, it's fascinating.

 

It was clear to me part of this mess we’re in arose from the IC feeling that if everyone was giving away their privacy, then what they, the Good Guys, did with it could not be that big of a deal.

 

I said the end of antibiotics and the rise of diseases like resistant tuberculosis would mean that tech would end at our skin, and we’d be limited in our access to public assembly. It was probably the darkest thing I said all day, and one of my table mates said (not unkindly) “I don’t like your future.”

 

I realized that one of the problems is that being at the top of the heap, which the security services definitely are, makes the future an abomination and terror. There is no possible great change where the top doesn’t lose, even if it’s just control, rather than position. The future can only be worse than the present, and a future without fear is a future to fear.

 

They were mostly pleasant and thoughtful people, and they listened as best they could. One told me I made her feel like Cheney, “And I’m not used to feeling like Cheney.”

 

When we learn to live online we will want to be full humans again. When, as one of the outside experts put it, the Eternal September is over, we will learn once again to mind our own fucking business, and we will expect others to, as well.

 

That last quote is something we are going to visit again and again over the next 20, 30 or 50 years (it will likely be a hard slog). The truth is that being online is such a young concept that we haven't a clue yet what it means in human terms, social terms, societal terms. In the same way that we expect privacy at a coffee table in a public coffee shop, even though it is possible for someone to listen to our conversation, we will find a social code developing that covers the digital in ways we will find reminiscent of the analogue.


Could you kill a robot?

Could you? Not if you had to because it was running amok, but just because… for no real reason. How about maiming one? Is there a moral issue involved? Or is it a daft question, considering that a robot has no consciousness or sentience?

Is destroying a robot the same as killing it? Which word generates the stronger emotional reaction?


These might seem like irrelevant questions, but actually they are important considerations as we prepare ourselves for the coming tide of robotics. Much is written about what will happen to the job market, and that is important too of course, but there are also big issues of a more personal nature. What will it mean to live in a society where artificial or fake sentience is commonplace? What will happen to our definitions of humanity if artificial life can evoke the same responses from humans as real biological life does?

Wikipedia defines sentience like this…

Sentience is the ability to feel, perceive, or to experience subjectivity

The overarching question is whether or not the appearance of sentience will affect our societal codes, both legislated and social, and if so, how?

To kick off this brief (very brief) inquiry have a quick look at this clip from the Graham Norton show. Graham is showing Gary Oldman, Toni Collette and Nick Frost a £12,000 mini robot, who quite frankly is very sweet, for want of a less cloying term.

That robot certainly has an uncanny ability to generate real empathy from the human audience.

At the point that it lifts its hand, like a child, to walk with Norton, there is a sincere and substantial 'aww' from the audience; it is palpably emotive. When it falls and says 'help me', few people hear it (they are laughing at something else), but if you re-watch the clip and listen for the 'help me', it is strangely affecting.

There are other surprises too: when the technology is being stretched, the humanity disappears. At the point that it performs Gangnam Style, I feel the empathy goes totally AWOL. It's impressive, I guess, but in a different way; it is no longer pulling on the heartstrings, it's talking to a different part of me.

Language is important

It would seem that the markers of human discourse, or at least a certain type of human discourse, are more subtle and less grand. In the biological realm it has long been understood that there is a particular class of 'linguistic tool' in normal human speech called discourse markers, which, when used, facilitate a more courteous exchange. These are the small utterances that appear as often self-deprecating filler when humans talk to each other. For example, people often say 'probably', 'I think', 'maybe', even though they harbour no such doubts about what they are saying. I did it, without realising, in the previous paragraph when I wrote, "It's impressive, I guess, but in a different way".
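
If you wanted to play with this yourself, the manipulation is easy to mimic. Below is a hypothetical toy sketch, with word lists of my own invention rather than anything from the research, that decorates a flat robot instruction with a hedge and a discourse marker:

```python
import random

# Hedges and discourse markers of the kind described above.
# These lists are my own illustrative picks, not from any study.
HEDGES = ["I think", "maybe", "probably"]
MARKERS = ["you know", "just"]

def soften(utterance: str) -> str:
    """Prefix a hedge and a discourse marker onto a bare instruction."""
    hedge = random.choice(HEDGES).capitalize()
    marker = random.choice(MARKERS)
    return f"{hedge}, {marker}, {utterance[0].lower()}{utterance[1:]}"

print(soften("Add the eggs before the flour."))
# e.g. "I think, you know, add the eggs before the flour."
```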


This article argues that mastering such 'simplicities' of language (the language is humanly simple; the mastery of it in a technical architecture anything but) will be a major feature of effective human–robot communication.

For my money this idea seems to imply that the replication of human idioms within robotic architectures can generate sincere human responses. If robots act, in certain ways, like humans, then it seems that our autonomous thinking systems (which is most of our thinking apparatus) will treat them like humans.

They made videos of an “amateur baker” making cupcakes with a helper giving advice—in some videos the helper was a person, and in some videos the helper was a robot. Sometimes the helpers used hedges or discourse markers, and sometimes they didn’t.

You can probably already guess that the humans and robots that used hedges and discourse markers were described in more positive terms by the subjects who watched the videos. It’s somewhat more surprising to hear that the robots that hedged their speech came out better [than] the humans who did.

The author credits the novelty of robots as teachers for their superior performance in this experiment. That makes some sense, and is testable too, as robotic interaction becomes commonplace. But for the moment I am more intrigued by understanding the action of our supposedly autonomous brains, largely because if autonomous sub-conscious reactions are driving these empathetic feelings, then once novelty wanes we can expect an even more seamless set of autonomic interactions with our robot compatriots. This line of thought suggests that the division between human–human interactions and human–robot interactions will become more, not less, opaque.

There are other confounding ideas to consider.

Recently a Washington Post journalist was called by a telemarketer selling health insurance, seemingly a pleasant, bright young woman. Something, however, felt a little off; he thought she was a robot. The call was recorded.

He is, in my opinion, slightly aggressive as he tests his inkling by repeatedly asking if she is a robot, to which she repeatedly answers "I am a real person". You can see why he became suspicious; she does sound as though she could be a robot. But here's the thing. So does he. Some of his challenges have a strangely artificial hue as well. My conjecture is that, as a result of believing he is talking to a robot, his use of discourse markers is reduced, which would also explain why his conversation comes across as unduly aggressive.

As it turns out she wasn't a robot. Well, not quite. She was using a program employed by call operators with weak English or strong accents. So there is in fact a human triggering the interactions, which explains the pauses and the repeated phrasings; clearly the operators have a 'script' of some kind, managed by the software, which limits the conversational creativity that can be brought to bear. It also demonstrates quite clearly that our perceptions in the realm of human–robot interactions are not 100% effective.

At one level we now have three categories to consider (human–human, human–robot, human–hybrid); on a more challenging level we need to consider the pitfalls and troubles that these complications could cause if our perceptions fail us and we get our sub-conscious analysis wrong.

Being rude, to a real person, which is what happened here, is just one such problem.

However, to take it to the extreme, if you can’t distinguish between artificial and human communication in a world where faceless communication is increasingly becoming the norm, then what is the nature of humanity in this faceless future? Is there a meaningful division at all?


So, back to killing a robot… it's really not as cut and dried as it might have seemed. Robots have the ability to play with your emotions, and you might be utterly unprepared for that.

Kate Darling is a researcher looking at robot ethics (hooray). Alongside safety, liability and privacy, Kate also spends time looking at the social side of human–robot interaction.

She recently ran a popular session at Harvard, in which she tackled the killing question head on.

“….she encouraged participants to bond with these robots, then to destroy one. Participants were horrified, and it took threats to destroy all robots to get the group to destroy one of the six. She observes that we respond to social cues from lifelike machines, even if we know they are not real”

There are some interesting times up ahead for us all. I’m glad that people like Kate are sufficiently forward thinking to start the dialogue now. It’s almost guaranteed that we won’t be ready, but at least we won’t be totally unaware.


Money, maps and mystery – 3 links

I’m pretty uncomfortable with the recent developments in the field of genetically engineered crops. I haven’t done due diligence and investigated properly, so for the moment I will refrain from judgement even while I acknowledge my discomfort.

That said, I do have a problem with the fact that, in the name of intellectual property protection, these companies have severed the ability of GE crops to produce their own seeds, bringing even deeper commercial dependency into the agricultural world. That just strikes me as a deeply stupid thing to do, regardless of, or perhaps because of, the profit motive. This article is broadly in favour of GE developments in agriculture while also wishing to find a way for the intellectual property landscape, in this case patents, to become more serviceable to the world at large and not just the large global entities that have the resources and knowledge to exploit the system in their favour.

The chief protagonist, Richard Jefferson, uses the frankly wonderful story of Jan Huyghen van Linschoten to make an interesting point about how monopolies on knowledge stymie innovation and development. Linschoten, in the 16th century, copied the maps that had been the basis of Portuguese and Spanish domination of world trade. They were a closely guarded secret that held real value. When he returned to his native Holland he didn't sell the maps for profit; he published them, spreading the knowledge far and wide.

Jefferson's point is that the patent system needs mapping in order to democratise it for, among others, developing world farmers. I can see the value in that, but still feel there is an element of shuffling deckchairs on the Titanic in such an approach. Surely the anecdote casts doubt on the whole concept of patents, not simply their accessibility?

http://grist.org/food/a-16th-century-dutchman-can-tell-us-everything-we-need-to-know-about-gmo-patents/

 

The second link today is an anonymous screed from an American investment manager. His firm's clients are firmly in the top 1%, with net worth in excess of $5m and annual incomes above $300,000. The whole piece is very much worth reading, if only to discover that roughly $5m is required to secure a 30 year retirement income in the category that most of us would categorise as rich. That has certainly influenced the investment strategy I will be adopting once I win the lottery (it has, incidentally, focused me almost exclusively on the EuroMillions lottery, which has potential wins above $10m; after all, what is the point in taking such a ludicrous punt if I can't even be financially secure for life as a result?).

Truth be told there is little in here to find humour in. Instead it is a quantified look inside the finances of the super wealthy in America and what can be bought in return.

Unlike those in the lower half of the top 1%, those in the top half and, particularly, top 0.1%, can often borrow for almost nothing, keep profits and production overseas, hold personal assets in tax havens, ride out down markets and economies, and influence legislation in the U.S. They have access to the very best in accounting firms, tax and other attorneys, numerous consultants, private wealth managers, a network of other wealthy and powerful friends, lucrative business opportunities, and many other benefits. Most of those in the bottom half of the top 1% lack power and global flexibility and are essentially well-compensated workhorses for the top 0.5%, just like the bottom 99%.

http://www.businessinsider.com/an-investment-managers-view-2013-11

 

Finally, something of a more friendly hue: Cicada 3301. For two years or so a nameless organisation has been posing a series of brain-numbingly difficult puzzles and problems, originally seeded in locations on the open web. It reads, quite frankly, like something from a William Gibson novel, which is probably no accident.

A certain level of progress (the clues seemed to lead to more clues) delivered a telephone number and the message "call us", which in turn, after another act of deciphering, led to a website and a countdown. The countdown revealed real world locations, and a whole new angle opened up. There is much more to the story, best recounted by the article linked below rather than me regurgitating it for you.

http://www.telegraph.co.uk/technology/internet/10468112/The-internet-mystery-that-has-the-world-baffled.html

 

 


Trust and vulnerability in 2013

It shouldn’t have taken me 40 years, but it did

I had an epiphany a few years ago regarding the nature of trust. What I had suddenly understood was this.

The more trust that I placed in someone or something, the more vulnerable I became. If I have no trust, my trust cannot be abused.


It sounds bleak but it’s true. Trust is, after all, essentially a lack of verification. If you say it’s true, I won’t check, I trust you. If you are wrong, I have become vulnerable to a loss of some kind. That doesn’t happen in every circumstance, of course, as a lot of the time trust is well placed. But in a significant number of occurrences, it is not.

Trust can only be infallible if humans never lie and are never mistaken. As this is simply not the case we are left with the thought that, in the aggregate, trust must create vulnerability and that trust and vulnerability are axiomatic and inverse truths. It simply is not possible to have one without the other.

Recently I have also been thinking about reciprocity in connection with our modern communication infrastructure, and it occurred to me that there is a relationship between the two things: the reciprocal nature of the internet and the reciprocal nature of trust.

Reciprocity can be complicated sometimes, not least because it seems to have two slightly antagonistic definitions.

Firstly there is the concept of reciprocating a benevolence (or a cruelty). If you invite me to dinner, I will feel that I should reciprocate and return the invite and ask you to dinner at mine at a later date. I am going to call this mutual reciprocity.

And then there is the concept, presumably derived from the mathematical definition, of something reciprocated being the opposite or an inversion. This idea of inversion is also reflected in the nautical use of the term: a reciprocal course is 180 degrees from the current one. I am going to call this, somewhat unimaginatively, inverted reciprocation.

It turns out that the internet is rife with examples of both kinds of reciprocity.

Zero day exploits and Facebook

Take the habit of revealing security exploits at public technology and hacking conferences. Unless you are familiar with this practice it will seem an aggressively dangerous and irresponsible course of action, designed primarily to place exploitable information into the public domain, available to bad actors. However, the truth, rather perversely, is the complete opposite. For reasons of cost control, bureaucratic paralysis and reputation management, it has become the case that a discreet, private notification of such security risks to the vendor in question doesn't result in fixes being released. It instead results in a continuation of the discretion. If no-one knows, goes the argument, then there is no problem. Once the exploit is public knowledge a fix follows, but in almost every case where the exploit is kept secret the vulnerability is maintained. The only thing that seems to force vendors into accepting the cost and trouble of fixing these vulnerabilities is the threat of lost business, effected by public shaming.


Forcing security updates by publicising exploits is an inverted reciprocation. However, it has led rather directly to one of mutuality. What's good for the goose is good for the gander, and we now exist in a world where our governments have become key players in a black market trading in zero day exploits (the name given to vulnerabilities that have not been made public). So where once we looked to our trusted actors (governments and vendors) to ensure that our computers were safe from exploitation, we should now realise that they are as interested in subverting our computing security as the black hat hackers and thieves. It's not necessarily the case that governments want to use these exploits against us, the citizenry [this was written before the Snowden NSA revelations, and we now know that governments have been using considerably more explicit and constructed tools to subvert our privacy]. Indeed the most publicised example of governments using zero day exploits is the set of exploits apparently used to place the Stuxnet virus within the Iranian nuclear industry infrastructure. The problem, rather, is that in the absence of a fix being released, other forces, malign forces, may also get hold of these exploits and use them against common computer users.

In some cases the individual selling the exploit is paid a monthly fee for as long as it remains undiscovered and, pertinently, not fixed (whereupon it would become useless). If we accept that our governments are acting in good faith, then this last issue could be seen as a device to keep these exploits from bad actors, if only because once used by said bad actors the vulnerabilities will become known. But even if we grant such largesse, and I think that would be naive, the fact remains that we have started to incentivise the finding of dangerous exploits for direct monetary gain, rather than for the purpose of fixing them, which leads us back to an inverted reciprocation whereby the desire to make something safe only makes it more dangerous.

There are other areas of internet life where reciprocal action is less urgently frightening but no less relevant. Take for example the transformation of Facebook and the flight of teenagers from it. As soon as Facebook hit a certain size and level of ubiquity, two things happened. Parents discovered that this social thing wasn't just for teenagers and held value for them also, and they realised that they could now (to some extent at least) monitor their kids in the social space as well as in the real world. Here we have a mutuality, parents joining Facebook, that again leads to an inversion as the teenagers start to leave. Or rather, they don't leave, but they do move their private interactions to other platforms. In this case it was originally a flight to Instagram (and now you know why Facebook bought Instagram). We are essentially now looking at a big game of social whack-a-mole as fashions come and go.

Reciprocity impacts our intuitive decision making

The larger problem, the reason this is a troubling idea, is that with all this counter-intuitive activity going on, the issue of who or what to trust becomes ever more complex and difficult. As computing developments lead to more and more invisible computing implementations, this becomes a crucial and problematic vector for society to manage. These complications are creating significant dissonance, troublingly, at the point that we need to be able to make important intuitive decisions. These decisions are increasingly core facets of life, not niche hobbyist interests. Without clarity around the actual effective vectors and actions, our ability to make quick, sensible decisions degrades.

The confusion caused by reciprocal factors isn't new, of course, as it is an essentially human affair, not a computational one. However, the hyper-reality of the cyber world tends to bring this confusion forward more often and in ever more public forms, particularly where matters of trust are concerned.

The two examples I have discussed are explicitly connected to the concept of trust: either the trust between children and parents (Facebook), or the trust between the government, the population and a specific type of human, hackers (zero day exploits).

Moreover, confusion through reciprocal complexity is not the only complication. Another important element of the digital trust equation is impacted by changing communication dynamics, as without communication of some kind all trust decisions become simple statements of faith. There are two examples that I want to highlight where communication modes are changing the trust landscape: the incentivisation of low-effort communication and the increased geographical reach of all communications.

Does low effort = low value?

This article by Evan Selinger in Wired critiques the ads that Facebook produced to push its Android interface, called Home. In short, they show three situations where Facebook users are able to ignore the immediacy of their surroundings and instead concentrate on being 'social', facilitated by the seamless access of Home.

We have the 20-something girl at a family gathering, a young man on a plane ignoring the stewardess, and lastly a staffer at Facebook ignoring his boss, Zuckerberg himself, as he addresses the team.

Selinger goes into some depth and pushes his observations to the reductio ad absurdum extreme, which is that at some stage in the future the only worthwhile interactions will be those facilitated through Home (or whatever the dominant social tool is), fuelled by the Facebook algorithm that connects us to other people who appreciate our social personas.

I don't buy the full doomsday scenario that Selinger describes, and I suspect he doesn't either. But he does effectively identify another key reciprocal trust vector that is being hyper-charged by the internet.

Stated simply, it's obvious: the greater the effort required to create and send a communication, the more value that communication has, certainly to the sender, and, if it is timely and effectively targeted, to the recipient also. The kind of content that Facebook and other social tools prioritise is short-form and low-effort. Sometimes the prioritisation is through the voting algorithms, sometimes it is effected through the constraints of the tool (think of Twitter's 140 characters), and sometimes it is both.

Take the situation that recently overtook r/atheism on reddit. The coup that was staged to take over moderator control of the sub was fueled in part by a belief that the quality of the content being ranked by the community’s voting habits had degraded and gone ‘off mission’. Aside from the laughable thought that an atheist community should be ‘on mission’ in the first place, there was actually something behind both sides of the argument.

The supplanted community position was that opinions on quality were the sole preserve of voting behaviour, and that whatever content was voted up was the content that the community wished to rank highly.

Those behind the coup argued, with some validity (not that such validity justified their behaviour, in my opinion), that the ranking algorithm on reddit favoured content that could be consumed quickly, because the votes cast in the first few minutes after something was posted were disproportionately valuable and key to a strong overall ranking. Therefore a quick meme that takes 30 seconds to consume and enjoy had an unearned advantage over a long, deeply researched essay that takes 10 minutes to read.

So, according to the reddit ranking algorithm, quick and dirty content was rewarded and long, thoughtful content was punished, and r/atheism was a largely unmoderated community exposing its content to the full vagaries of that algorithm. The coup changed all that, instituting moderation rules that artificially amended behaviour to compensate for the bias in the ‘natural’ ranking. The r/atheism situation is interesting for many reasons and I could go on, but suffice it to say for now that it is a great live example of how the ease of consuming a communication (a meme vs a 10,000 word essay) can be a key vector in content ranking.
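To make the mechanics concrete, here is a minimal sketch of a time-decayed ranking function in the style of the hot algorithm reddit has published in its open source code. The constants and function name here are illustrative assumptions rather than a claim about production values, but the shape is the point: votes count logarithmically while recency counts linearly, so the votes a post attracts in its first minutes dominate its fate.

```python
from datetime import datetime, timezone
from math import log10

# Fixed reference point for the age term; reddit's own code uses a
# constant epoch in much the same way. The exact date is an assumption.
EPOCH = datetime(2005, 12, 8, tzinfo=timezone.utc)

def hot(upvotes: int, downvotes: int, posted_at: datetime) -> float:
    """Time-decayed ranking: newer posts with early votes rise fastest."""
    score = upvotes - downvotes
    # Votes contribute logarithmically: the first 10 votes are worth as
    # much as the next 90, so quick-to-consume content banks value fast.
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else (-1 if score < 0 else 0)
    # Recency contributes linearly: every 45000 seconds (~12.5 hours) of
    # age costs a post a full order of magnitude of score.
    seconds = (posted_at - EPOCH).total_seconds()
    return sign * order + seconds / 45000
```

Because the recency term is fixed at posting time, a post’s only window to convert votes into front page visibility is while it is young; content that can be consumed and voted on in 30 seconds wins that race almost by default, which is exactly the bias the coup’s moderation rules set out to correct.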

This essay isn’t about devaluing social platforms, which of course exist to remove friction from the process of communication. They are what they are, and they have an impact on a whole range of things. Nonetheless it is worth asking yourself when you last wrote and sent a letter (pen and ink, envelope and a stamp), and when you last received a written letter that wasn’t a birthday card. Whether this trend is for the best or not, it is undeniable that the culture of letters is in decline.

This observation identifies 2 different vectors.

  • We communicate more frequently, but in ever weaker wrappers, to weak and strong connections both.
  • We communicate in strong wrappers less and less, to anyone.

The reason these communication schemas matter is not any perceived qualitative decline or improvement (although there is certainly a spectrum of quality, and facets of the schema itself partly determine position on that spectrum), but that communication is fundamental to the building of human support networks, and support networks, consciously constructed or otherwise, are one of the ways we compensate for vulnerability.

Support networks and co-dependency

The structure of support networks used to be driven chiefly by the geographical distribution of network participants. This dynamic has been shifted many times by many different technologies that increased our geographical communication reach and subsequently made real changes to the way we live our lives on a day-to-day basis.

It’s not a concept unique to digital.

Imagine living on an island that is connected to the mainland by a ferry that only travels twice a day. There is a community on the island: shops, doctors, a midwife, a school, a church. The community functions, and the whole community has access to the essential functions of a modern life regardless of whether or not they all get along with each other. There is a deep co-dependency. Say you run the local shop, but have a long-standing enmity with the midwife, and you are pregnant. The chances are good that you allow her to buy her essentials in your shop and that, if you go into early labour at 3am, she will get out of bed to birth your child. That’s what communities do. That’s what co-dependent support networks do.

Once a bridge is built to your island and your geographical reach is massively enhanced (24 hour access to the mainland), these co-dependencies become very different creatures. You no longer have to shop in one shop, and you no longer have to use just one midwife. Either or both of these 2 personas can now, feasibly, decide to withdraw their services from the other without suffering a critical loss.

CandlePowerForums

In a very similar way, today’s digital communities, which are very effective at linking like-minded individuals across vast geographies, also reduce the co-dependency of local geographical communities. We see this quite pertinently with the development of online medical communities and a corresponding reduction in trust in locally available medical practitioners.

I was once consulted by a medical group that was highly frustrated that their breast enhancement customers were willing to take advice “from strangers on the internet” via forums. I spent some time looking at what was happening in those forums, and it was quite surprising. There was a very clear demonstration of trust evident throughout the majority of the exchanges; these people were not interacting as if they were strangers. That was because they weren’t. It turned out that, on average, the decision making period for the surgery was about 18 months, and that a huge amount of that time was spent actively within these forums. The information exchange was recorded and verifiable; moreover, the community was in no way trivial, as they were discussing the reality of a serious surgery. It was actually quite easy to see who was worth talking to, and who was not.

The doctors still hated it though.

Now, without getting into the positives and negatives of specific situations (and for sure some of these developments are very positive, others not so much), this is just the reality of a broadly available global communication tool. When 10 people can give advice (from afar) where previously only 1 could (locally), there has, by definition, been a corrosion of the local dependency. That dependency was a real part of many human trust relationships, and so we should acknowledge that the trust dynamics themselves will have changed too. In some spheres we will see improvements; in others, we will not.

The medical sphere is a very good arena for exploring these ideas because we are forced to trust medical decisions even though our knowledge is generally low, while the stakes are often critically high.

Take the relationship that you might have with your GP (family doctor) today. There are, of course, a lot of different reasons why we have smaller relationships with our GPs these days, financial, political and economic factors probably being the biggest influences. However, the ability of the internet to make available a corpus of knowledge that we can interact with plays into the situation as well.

The internet provides a measure of support, a place where you can seek reassurance, but it also reduces your dependency on your GP, which in turn means that a lot of people will no longer form a relationship with their GP, which means, in turn, that the GP’s knowledge of your medical history is less than it might have been. In the vast majority of situations this will have no discernible impact at all. The problem with medicine, though, is that the cost of one mistake can be horrific. In medicine we are obliged to build systems that manage the edge cases as a priority over the mundane.

The germs of the future

The situation regarding GPs that I just described actually identifies the only place we can really go from here.

  • We aren’t going to turn off the internet.
  • Decisions that should be comfortable and intuitive will continue to become counter-intuitive and problematic, while the rate of technological change suggests that any plateau in development that might let intuitive reasoning catch up is an unreasonably long way off.
  • Our support networks will become more geographically dispersed.

In short, as we see the impact of technology, and in this particular examination the complex new trust dynamics it fosters, gaining a stronger and stronger foothold on the human exchanges of modern society, and to a greater or lesser degree replacing or reducing the value of local, geographically enforced co-dependencies, we are forced to acknowledge that the only potential safety net will in fact be the extended support networks themselves, operating at potentially vast geographic scales. The problem is the solution.

So, considering that the complexities of our installed invisible computing layer already make this world counter-intuitive and fraught for many people, we are left to ask what we can do to improve the reality of trust dynamics in this digital world.

One angle must concentrate on bringing clarity to situations through communications, while another should concentrate on deepening the efficacy of geographically dispersed support networks.

The work that is required is happening; it’s just that we are still in the very early stages of this grand experiment. All those social tools with scoring systems of one kind or another, likes, upvotes, downvotes, credits et al, are trust systems seeking to make trust quantifiable and transparent. We haven’t identified all the unintended consequences yet, though, and we probably need to ask whether we will ever be able to identify them all. I suspect such consequences may well be defined somewhat by each individual tool, and differently in each human domain.
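For a concrete sense of what quantifiable, transparent trust can look like, here is a hedged sketch of the lower bound of the Wilson score interval, the statistic reddit adopted for its ‘best’ comment sort. The function name and the 95% confidence default are my own assumptions; the idea is that a pessimistic estimate of an item’s true approval rate stops two lucky early votes from outranking a long, consistent record.

```python
from math import sqrt

def trust_score(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    """Lower bound of the Wilson score interval (z=1.96 ~ 95% confidence)."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n  # observed approval rate
    # A pessimistic estimate of the true approval rate: wide uncertainty
    # for small vote counts, converging on p as the evidence accumulates.
    return (p + z * z / (2 * n)
            - z * sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)
```

On this arithmetic a comment at 2 up, 0 down scores about 0.34, while one at 90 up, 10 down scores about 0.83: the long record wins, which is the behaviour a trust system needs if its scores are to be worth trusting.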

Similarly, the quality of informal support networks is also evolving, although perhaps in a less structured fashion. Finding ways to trust the information dispersed through these forums, via the mechanics I described above, is one such development, but we are also seeing increasing connectivity being used to match actual local problems with actual local solutions. This is built on an unmanageable supply of simple kindness, the desire of a stranger to reach out and help, but it may be all we have.

This is a problem that will almost certainly evolve to some kind of functional equilibrium one way or another, but we should perhaps be prepared for the possibility that such an equilibrium will feel incredibly alien to the world we live in today.