I’ve read this essay twice now, some months apart. On both occasions it has irritated and enthralled me at the same time. It is an odd experience to be enthralled by something (the ideas in the essay) that simultaneously causes feelings of extreme discomfort. There are some great ideas here, but I can’t help feeling that the solutions suggested are overly influenced by the very young world view of the American model of governance and administration.
Venkat lays out two eye-opening concepts for understanding the way a state seeks to govern a population.
Firstly, a reduction of the state’s requirements (in order to exist) to counting the population, conscripting the population and taxing the population. Obvious once you think about it, maybe, but one of those ideas that is huge once you do.
Secondly, that technologies are always the medium by which governments gain the consent of a population to be governed. To be fair, consent is not necessarily the right word, as the first such technology that Venkat mentions is the sword, but the idea (semantics aside) is still a powerful filter for looking at the relationship between a state and its population (even if at times the picture is ugly).
The broad argument is that a population that is inherently gaining mobility (virtual and geographic) causes problems for the current model of governance, which rests significantly on knowing that most people in a population don’t move in and out of administrative boundaries on a regular basis. As this mobility increases, goes the argument, and if we accept that a government will still have to govern (counting, conscripting, taxing), then with increased mobility must come increased surveillance. Hence the consent of the surveilled.
It’s a long essay, and you will have to stop and think along the way to follow every idea presented (as usual with Venkat the piece is very dense), but it is worth it.
Upfront I suggested that I had a concern with the piece, and I do, but it does not diminish what is a very rich piece of writing that can spark much involved thinking of your own. I do feel that much of what is written is informed by an immature model of American governance, based on the history of that country. The 50 states of the USA have always struck me as a slightly odd model built by a young country still obsessed in strange measures with the idea of a wild west. Even as a child, I remember feeling a peculiar dissonance watching the movie Porky’s, where to evade the police the high schoolers merely needed to cross county lines.
If, however, you consider these ideas as factors affecting the global stage, we can start to think about how political globalisation will inevitably be influenced. Maybe that’s a 30 year horizon, maybe it’s a 300 year horizon, either way there is lots to get your teeth into here.
Crypto-currencies are really nothing new; they’ve been around as concepts for a long, long while, and as real, commercially exchangeable entities since 2009, when the Bitcoin code was released. Nonetheless, this year we are starting to see a significant maturation in how Bitcoin is being reported. More specifically, we are seeing the emergence of stories about the blockchain, the idea at the core of the Bitcoin protocol. In particular, significant activity in the VC sector is driving much of this interest.
Bitcoin is often misconstrued as an anonymous currency, which simply isn’t true. In fact the whole secret sauce is that, in the absence of a central banking authority, your transactions are actually made very public indeed. It is possible to obfuscate your link to any particular Bitcoin transaction, but this is not an inherent part of the protocol at all.
The excitement is around the idea of distributed consensus systems. The heart of a distributed consensus system, in this reading, is the idea known as the blockchain. Essentially a public ledger of transactions, the blockchain cleverly solves many of the problems that are commonly dealt with by centralised authority.
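The linking trick at the heart of that public ledger is easy to sketch. Below is a toy illustration of my own, not the Bitcoin implementation (no proof of work, no peer-to-peer network), just the hash chaining that makes tampering with history detectable:

```python
import hashlib
import json

def block_hash(block):
    # Deterministically hash a block's full contents
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    # Each block records some transactions plus the hash of its predecessor
    return {"transactions": transactions, "prev_hash": prev_hash}

# A tiny three-block ledger
genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block2 = make_block(["bob pays carol 2"], prev_hash=block_hash(genesis))
block3 = make_block(["carol pays dave 1"], prev_hash=block_hash(block2))
chain = [genesis, block2, block3]

def chain_is_valid(chain):
    # Every block must reference the hash of the block before it
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid(chain))  # True
# Rewriting an old transaction invalidates every later link
genesis["transactions"][0] = "alice pays bob 500"
print(chain_is_valid(chain))  # False
```

Because each block commits to the hash of the one before it, rewriting history means rewriting every subsequent block, which is exactly the kind of fraud a centralised authority used to police, and which in Bitcoin the proof-of-work scheme makes prohibitively expensive.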
Distributed consensus as an idea potentially impacts many more sectors than just currency. Any societal entity that depends on a centralised authority could be in the cross hairs. One example is Namecoin, an alternative to DNS, which can allocate .bit domain names on a decentralised basis.
This article does a superb job of explaining the blockchain itself and how it solves the problems of centralisation. If you have any interest in technology, and emergent trends therein, then this is something you need to read.
In contrast to Venkat’s pragmatic reading of the oblique structures that inform modern governance models, this essay from Yanis Varoufakis of the University of Athens (currently resident at University of Texas at Austin), explores the question of what the internet can do to ‘repair’ democracy.
The hunch underpinning this paper is that, behind voter apathy and the low participation in politics, lies a powerful social force, buried deep in the institutions of our liberal democracies and working inexorably toward undermining democratic politics. If this hunch is right, it will take a great deal more to revitalise democracy than a brilliant Internet-based network linking legislators, executive and voters.
His approach is informed by an examination of the original democracy of Athens, paying attention to both its structures and its hypocrisies (Athens was a state built on slavery), and how that differs from the underpinning historic influences that structured modern western liberal democracies.
His aim is not to discourage those that seek to use modern technologies to reinvigorate democracy, but to point their innovations in the right direction. Thus he explores the perceived issues with direct democracy, the idea that we can all vote on all the issues, and the subsequent conclusion that voter apathy is not a bug of a governance model built on the needs of a mercantilist ruling class, but a feature.
Contrast this with the Athenian model, whereby the poor but free labourer was empowered equally alongside the richest man. Our system has evolved to devalue citizenship while massively extending its reach (no slaves in liberal democracies, at least not mandated by law), and simultaneously to transfer real power from the political sphere to the economic. In such a system, solutions that simply make the political connection between our governing institutions and the people more efficient will likely make little difference.
In this reading the changes required are economic, not political, with subsequent change in the political sphere only plausible once such changes within the economy are in place.
This may not be the most optimistic reading of today’s political landscape, but it is a well constructed and researched argument worthy of your time. A must read.
You might recall a gentle excitement reported in the news last month about a huge breakthrough in physics. The discovery being reported was the confirmation of primordial gravitational waves, which, to be fair, may not mean much to you or anyone else not involved in cutting edge cosmology and physics. Even if you have little interest in the detail of the discovery and its implications, I implore you to at least watch the first video below. It is the moment that Andrei Linde is surprised with the news that such waves have been confirmed. Linde was one of the physicists who first developed the theory of inflation (the huge expansion that the universe went through in the tiniest fraction of a second after the big bang), which is now confirmed by the discovery of these waves. Even if you dislike physics, watching a man and his wife receive confirmation of their life’s work is a genuine joy.
If that has whetted your appetite to understand what the fuss is all about, then this video from Veritasium does a good job of explaining the discovery and why it is so fundamental to our understanding of the big bang and subsequently the universe.
I don’t like getting my content from apps. I use the mobile web instead if I have to, but the majority of my web access is on a laptop with a nice big screen, which means my prejudice doesn’t hurt me too much.
I know that that sets me apart from the majority of smartphone users, but as I said before, my primary web access point isn’t my phone so the limitations of the mobile web that have enabled app culture don’t really hurt me.
I also believe that it is a culture with a short shelf life.
It is a system with an unusual degree of friction in comparison to an easily imagined alternative. The whole native apps versus web apps argument that has been running for years now is largely decided, at any one point in time, by the state of play with regard to mobile technology and the user requirements of said technology. I’m pretty certain the technology will improve faster than our user requirements will complicate.
That easily imagined alternative is pretty straightforward too. A single platform, open and accessible to all, built on open protocols, that can support good enough access to hardware sensors, is served by effective search functionality and doesn’t force content to live within walled gardens.
The web, in other words. Not fit for purpose today (on mobile), but it will be soon enough.
Monetisation possibly remains a challenge, but I’m also pretty sure we will eventually learn to pay for the web services that are worth paying for. Meaningful payment paradigms will arrive; the profitless startup/subsidised media model (which is what almost all these apps are) can’t last. We will get over ourselves and move beyond the expectation of free.
Should everything be on the web? No, not at all. For me there is a fuzzy line between functional apps and content apps. The line is fuzzy because some content apps contain some really functional features, and vice versa. Similarly, some apps provide function specifically when a network connection is unavailable (Pocket, for example). The line is drawn between content and function only because I strongly believe that the effective searchability of content, via search engines or via a surfing schema, and the portability of that content via hyperlinks, is the heart of what the web does so well and generates so much of its value. If all our content gets hidden away in silos and walled gardens then it simply won’t be read as much as if it were on the open web. And that would be a big, perverse shame.
Why can’t we just publish to the stream? Because it is a weak consumption architecture, because it is widely unsearchable (though I suspect that could change if we move beyond the infinite scroll real quick) and because you don’t own it (don’t build your house on someone else’s land). That’s another whole essay and not the focus of this one.
I’ve felt this way for a long time so I am happy to say that finally some important people are weighing in with concerns in the same area. They aren’t worried about the same specific things as me, but I can live with that if it puts a small dent in the acceleration of app culture.
Chris Dixon recently published this piece calling out the decline of the mobile web. I think he is overstating the issue, actually, but only because I think the data doesn’t distinguish effectively between apps that I believe should be web based and those I am ambivalent about. He cites two key data sources.
The ComScore data for total web users vs total mobile users is solid enough, showing that mobile overtook desktop in the early part of this year. Interestingly, the graphic also shows that desktop user volumes are still rising, even if their growth rate lags behind mobile’s. Not many people are taking this on board in the rush to fall in love with the so-called post-PC era. It’s important because to really understand the second data point, provided by Flurry, you have to look at the very different reasons people are using their mobile computers.
Flurry tells us that the amount of time spent in apps, as a % of time on the phone/internet itself, rose from 80% in 2013 to 86% (so far) in 2014. That’s a lot of time spent in apps at the cost of time spent on the mobile web.
But the more detailed report shows that the headline number is less impressive than it at first seems. All that time in apps, it turns out, isn’t actually at the cost of time spent on the web. Not all of it, anyway.
Of the 86%, 32% is gaming, 9.5% is messaging, 8% is utilities and 4% is productivity. That totals 53.5% of total mobile time, and 62% of all time in apps specifically. The fact that gaming now occurs within networked apps on mobile devices, rather than as standalone software on desktops, only tells us that the app universe has grown to include new functionality not historically associated with the desktop web. It does not tell us that this functionality has been removed from the mobile web; in this respect the mobile web has not declined. As I said, I agree with the thrust of the post; I just think the situation is less acute.
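For anyone wanting to check that arithmetic, here is a quick sketch (the category percentages are Flurry’s; the derived totals are my own sums):

```python
# Flurry's category shares, each as a % of all time spent on the device
gaming, messaging, utilities, productivity = 32.0, 9.5, 8.0, 4.0
time_in_apps = 86.0  # % of mobile time spent in apps, 2014 so far

# Categories with no historical association with the desktop web
non_web_like = gaming + messaging + utilities + productivity
print(non_web_like)                              # 53.5 (% of total mobile time)
print(round(non_web_like / time_in_apps * 100))  # 62 (% of all time in apps)
```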
Chris Dixon’s main concern is for the health of his business, as a partner at Andreessen Horowitz (a leading tech VC), and he sees the rise of two great gatekeepers in Google and Apple, the owners of the two app stores that matter, as a problematic barrier to innovation, or rather a barrier to the success of innovation (there is clearly no shortage of innovation itself).
He’s not alone: Mark Cuban has made a similar set of observations.
Fred Wilson too, although he is heading in an altogether different and more exciting direction again, which I will follow up in a different post (hint: blockchain, distributed consensus and personal data sovereignty). Still, this quote from Fred’s post sums up the situation rather well from the VC point of view.
My partner Brad hypothesized that it had something to do with the rise of native mobile apps as the dominant go to market strategy for large networks in the past four to five years. So Brian pulled out his iPhone and I pulled out my Android and we took a trip through the top 200 apps on our respective app stores. And there were mighty few venture backed businesses that were started in the past three years on those lists. It has gotten harder, not easier, to innovate on the Internet with the smartphone emerging as the platform of choice vs the desktop browser.
The app stores are indeed problems in this regard; I don’t disagree, and will leave it to Chris and Mark’s posts to explain why. I am less concerned about this aspect partly because it’s not my game: I don’t make my money funding app building or building apps. But mostly I’m unconcerned because I really believe that (native) app culture has to be an adolescent phase (adolescent in as much as the internet itself is a youngster, not that the people buying and using apps are youngsters).
If I am right and this is a passing phase, can we make some meaningful guesses as to what might change along the way?
I’ve been looking at mobile through the eyes of Clay Christensen’s innovator’s dilemma recently, which I find to be a useful filter for examining the human factors in technology disruptions.
Clearly, if mobile is disrupting anything it is disrupting desktop computing, and true to form the mobile computing experience is deeply inferior to that available on the desktop or laptop. There are, actually, lots of technical inferiorities in mobile computing, but the one I want to focus on is the simplest of all: size. The mobile screen is small.
Small small small small. I don’t think it gets enough attention, this smallness.
If it’s not clear where I’m coming from: small is an inferior experience in many ways, although it is, of course, a prerequisite for the kind of mobility we associate with our phones.
Ever wondered why we still call what are actually small computers phones? Why we continually, semantically, place these extremely clever devices in the world of phones and not computers? After all, the phone itself is really not very important; it’s a tiny fraction of the overall functionality. Smartphones? That’s the bridge phrasing, still tightly tied to the world of phones, not computers. We know they are computers, of course; it’s just been easier to drive their adoption through the marketing semantic of telephony rather than computing. It also becomes easier to dismiss their limitations.
Mobile computers actually pre-date smartphones by decades; they were, and still are, called laptops.
Anyway, I digress. For now let’s run with the problems caused by small computers. There are two big issues, which manifest in a few different ways: screen size and the user interface.
Let’s start with screen size (which, of course, is actually one of the main constraints that hampers the user interface). Screen size is really important for watching TV, or YouTube, or Netflix, etc.
We all have a friend, or two, who gets very proud of the size of their TV. The development trend is always towards bigger and bigger TV screens. The experience of watching sport, or a movie, on a big TV is clearly a lot better than on a small one (and cinema is bigger again). I don’t need to go on; it’s a straightforward observation.
No-one thinks that the mobile viewing experience is fantastic. We think it’s fantastic that we can watch a movie on our phones (I once watched the first half of Wales v Argentina on a train as I was not able to be at home; I was stoked), but that is very different from thinking it’s a great viewing experience. The tech is impressive, but then so were those little 6-inch-by-6-inch black and white portable TVs that started to appear in the ’80s.
Eventually all TV (all audio visual) content will be delivered over IP protocols. There are already lots of preliminary moves in this area (smart TVs, Roku, Chromecast, Apple TV) and not just from the incumbent TV and internet players. The console gaming industry is also extremely interested in owning TV eyeballs as they fight to remain relevant in the living room. Once we watch all our AV content via internet protocols the hardware in the front room is reduced to a computer and a screen, not a TV and not a gaming console. If the consoles don’t win the fight for the living room they lose the fight with the PC gamers.
Don’t believe me? Watch this highlights reel from the Xbox One launch… TV TV TV TV TV (and a little bit of Halo and COD).
The device that the family watches moving images on will eventually be nothing but a screen. A big dumb screen.
The Smart functionality, the computing functionality, that navigates the protocols to find and deliver the requisite content, well that will be provided by any number of computing devices. The screen will be completely agnostic, an unthinking receptor.
The computer could be your mobile device, but it probably won’t be. As much as I have derided the phone part of our smartphones, they are increasingly our main form of telephony. As attractive as the thought is, we are unlikely to want to take our phones effectively offline every time we watch the telly. Even if at some stage mobile works out effective multi-tasking, we will still likely keep our personal devices available for personal use.
What about advertising? That’s another area where bigger is just better. I’ve written about this before, all advertising channels trend to larger dominant experiences when they can. We charge more for the full page press ad, the homepage takeover, the enormous poster over the little high street navigational signs telling you that B&Q is around the corner. Advertising craves attention and uses a number of devices to grab/force it. Size is one of the most important.
Mobile is the smallest channel we have ever tried to push advertising into. It won’t end well.
The Flurry report tries to paint a strong picture for mobile advertising, but it’s not doing a very good job; the data just doesn’t help them out. They start with the (sound) idea that ad spend follows consumption, i.e. where we spend time as consumers of media is where advertisers buy media, in roughly similar proportions. But the only part of the mobile data they present that follows this rule is Facebook, the only real success story in the field of mobile display advertising. The clear winner, as with all things digital, is Google, quite disproportionately taking away the market with their intention-based ad product delivered via search.
There are clouds gathering in the Facebook mobile advertising story too. Brands have discovered that it is harder to get content into their followers’ newsfeeds. The Facebook solution? Buy access to your followers.
That’s a bait and switch. A lot of these brands are significant contributors to the whole Facebook story itself. Every single TV ad, TV program, media columnist or local store that spent money to devote some part of their communication real estate to the line “follow us on Facebook” helped to build the Facebook brand, helped to make it the sticky honeypot that it is today.
And now if you want to leverage those communities, you have to pay again. Ha.
Oh, but there’s value in having a loyal, active base of followers, I hear the acolytes say. Facebook is just the wrapper. OK, then in that case ask them all for an email address and be guaranteed that 100% of your communications will reach them. Think that will work? Computer says no.
Buying Facebook friends? That’s starting to whiff too.
That leaves Google/search, the only widely implemented current revenue model that has a future on the mobile web. Do you know what would make the Google model even more effective? A bigger screen, and a content universe on the open web.
To be fair the idea that we must pursue pay models is starting to take hold in the communities that matter. This is good news.
Read this piece. It’s as honest a view as you will find within the advertising community, and it still doesn’t get that the mobile medium is fundamentally flawed simply because it is small. There is no get-out-of-jail-free card for mobile, unless it gets bigger.
Many of the businesses we met have a mobile core. A lot of them make apps providing a subscription service, some manage and analyze big data, and others are designed to improve our health care system. The hitch? None of them rely on advertising to generate revenue. A few years ago, before the rise of mobile, every new startup had a line item on its business plan that assumed some advertising revenue.
What did you think about the Facebook WhatsApp purchase? It can’t have been about buying subs; it’s inconceivable that most of the 450m WhatsApp users weren’t already on Facebook. Similarly, even though there is an emerging markets theme to the WhatsApp universe, Facebook hasn’t been struggling to pick up adoption in new markets. My money is that Zuckerberg bought a very rare cadre of digital consumer: consumers willing to pay for a service that can be found in myriad other places for free. WhatsApp charges $1 a year; a number of commenters have noted this and talked of Zuckerberg purchasing a revenue line. I think they have missed the wood for the trees. That $1 a year is good money when you have 450m users, for sure. It’s even better when you have nearly a billion and many of them are getting fed up with spammy newsfeeds. What will you do if you get an email that says “Facebook, no ads ever again, $5”? He doesn’t just get money, he also gets to concentrate on re-building the user experience around consumers, not advertisers.
The WhatsApp purchase is the first time I have tipped my metaphorical hat in Zuckerberg’s direction. The rest is morally dubious crud that fell into his lap, this is a decently impressive strategic move. I still won’t be signing up anytime soon though.
Ok. Small. Bad for advertising. What else?
Small devices also have necessarily limited user interfaces. The qwerty keyboard, for all of its random derivation, works really well. There is no other method I use that is as flexible and as speedy for transferring words and numbers into a computer.
The touchscreen is impressive, though: the technology that, for my money, transformed the market via the iPhone. My first thought when I saw one was that I would actually be able to access web pages and make that experience useful through the pinch and zoom functionality. I had a BlackBerry at the time, with the little scroll wheel on the side. Ugh. But even that wonder passed, now that we optimise the mobile web to fit on the small screens as efficiently as possible. No need to pinch and zoom, just scroll up and down, rich consumption experience be damned.
Touchscreens are great, they just aren’t as flexible as a keyboard. As part of an interface suite alongside a keyboard they are brilliant. As an interim interface to make the mobile experience ‘good enough’, again brilliant. As my primary computing interface … no thanks.
When the iPad arrived, the creative director at my agency, excited like a small child as he held his tablet for the first time, decided to test its ability as a primary computing interface. He got the IT department to set up a host of VPNs so that he could access his networked folders via the internet (no storage on the iPad) and then went on a tour of Eastern Europe without his MacBook.
He never did it again. He was honest about his experience, it simply didn’t work.
What about non-text creativity? Well, if you can, have a look inside the creative department of an advertising agency. It will be wall to wall Apple. But they will also be the biggest Macs you have ever seen. Huge, glorious screens, everywhere.
You can do creative things on a phone, or a tablet but it is not the optimised form factor. It seems silly to even have to type that out.
If small is bad, and if mobile plays out according to Christensen’s theories (and remember, this is just speculation), then eventually mobile has to get bigger. Considering that we have already reduced the acceptable size of tablets in favour of enhanced mobility, this seems like some claim.
For my money the mobile experience goes broadly in one of two ways.
In both scenarios it becomes the primary home of deeply personal functionality and storage: your phone, your email (which is almost entirely web accessible these days), your messenger tool, maps, documents, music, reminders, life logging tools (how many steps did you step today?) as well as trivial (in comparison) entertainments such as the kind of games we play on the commute home.
The fork is derived from what happens with desktop computing. The computing of creation and the computing of deeper better consumption experiences.
We either end up in a world with such cheap access to computing power that everyone has access to both mobile computing and desktop computing, via myriad access points, and the division of labour between those two is derived purely on which platform best suits a task. Each household might have 15 different computers (and I’m not even thinking of wearables at this stage) for 15 different tasks. This is a bit like the world we have today except that the desktop access is much more available and cheap, and not always on a desk.
Alternatively, the mobile platform does mature to become our primary computing device, a part of almost everything we do. This is much more exciting. This is the world of dumb screens that I mentioned earlier. If mobile is to fulfil its promise and become the dominant internet access point then it must transcend its smallness. There are ways this could happen.
We already have prosthetics. Every clip on keyboard for a tablet is a prosthetic.
We already have plugins. Every docking station for a smartphone or a tablet is a plugin.
The quality of these will get better but they still destroy the idea of portability. If you have to haul around all the elements of a laptop in order to have effective access to computing you may as well put your laptop in your bag.
I can see (hold on for the sci-fi wishlist) projection solutions for both keyboards and screens, where the keyboard is projected onto a horizontal surface and the screen onto a vertical surface. Or what about augmented and virtual reality? Google Glass is a player, although the input interface is a dead duck for full scale computing, but how about an optimised VR headset that isn’t the size of a shoebox? Zuckerberg might have been thinking about the metaverse when he bought Oculus, but there will potentially be other interim uses before humanity decides to migrate to the virtual à la Ready Player One, and VR could be a part of that.
The more plausible call is a plugin / cheap access model hybrid. Some computing will always be use-case specific. I think it’s likely that a household’s primary AV entertainment screen, for example, the big one in the living room, will have its own box, maybe controlled by a mobile interface but configured such that your mobile platform isn’t integral to media consumption. The wider world, though, I think will see dumb screens and intelligent mobile devices working together everywhere. Sit down, slot in your mobile: nice big screen, all your files, all your preferences, a proper keyboard (the dumb screen is largely static, of course, so it comes with a keyboard) and a touchscreen too. All in all, a nice computing experience.
There are worlds of development to get there, of course. Storage, security and battery life, among other areas, will all need a pick up, but if we have learnt anything since 1974 it’s that computers do get better.
I really woke up to Christensen’s theory with the example of the transistor radio, which was deeply inferior to the big old valve table tops that dominated a family’s front room, often in pride of place. The transistor radio was so poor that the kids who used them needed to tightly align the receivers with the broadcast towers. If you were facing south when you needed to face north, you were only going to get static. But that experience was good enough. Not for mum and dad, who still listened to the big radio at home, but for the teenagers this was the only way they could hear the songs they wanted to hear, without adults hovering. All of those limitations were transcended eventually, and in the end everyone had their own radio. One in the kitchen, one in every teenager’s room, the car, the garage, portable and not portable. Radios eventually got to be everywhere, but none of them were built around valves like those big table top sets.
Mobile computing is a teenager’s first wholly owned computing experience (I’m not talking about owning the hardware). It happens when they want it to, wherever they are. It’s also been many an older person’s first wholly owned computing experience, and it’s certainly been the first truly mobile computing experience for anyone. I might have made the trivial observation that laptops were the first mobile computers, and they were, but you couldn’t pull one out of your pocket to get directions to the supermarket.
Christensen’s theory makes many observations. Among them are that the technology always improves and ends up replacing the incumbent that it originally disrupted. The users of the old technology eventually move to the new technology, the old never wins.
We aren’t going to lose desktop computing, though. Or at least we aren’t going to lose the need states that are best served by desktop computing. If you want to write a 2000-word essay, your phone is always going to be a painful tool to choose. But you might be able to plug it into an easily available dumb workstation and write away in comfort instead.
At least let’s hope so.
Quinn Norton is a name that hovered on the edge of my consciousness for a chunk of time. She dated Aaron Swartz for three years, and in the resultant furore after his death her name was part of that story as it became clear that prosecutors had put pressure on her to offer testimony against him, something she did not do.
At this year’s Chaos Communication Congress she spoke, with Eleanor Saitta, on the topic “No Neutral Ground in a Burning World”, which gave an historical perspective on governmental surveillance and how technology is impacting the state’s need for legibility.
Then finally I found this piece, “A Day of Speaking Truth to Power, Visiting the ODNI”. The ODNI is the Office of the Director of National Intelligence: spook central, a scary place for anyone, let alone a journalist with a reputation for speaking ill of the intelligence community (IC). It’s a great read for so many reasons. It paints a picture of the IC that is both confirming and revelatory, and it also surfaces a number of interesting ideas that could have been brought up in a completely different context, but found their way into this conversation instead.
When I read longer form essays I often take one quote, sometimes two, and send them to my email for recycling into a post (credited of course) or to remind myself to dig deeper into a topic, or just to add a highlight to a particular piece, to make it standout against everything else I am reading.
With this essay I sent myself 9 different quotes; I’ve listed 5 of them below. Please read all of it though, it’s fascinating.
It was clear to me part of this mess we’re in arose from the IC feeling that if everyone was giving away their privacy, then what they, the Good Guys, did with it could not be that big of a deal.
I said the end of antibiotics and the rise of diseases like resistant tuberculosis would mean that tech would end at our skin, and we’d be limited in our access to public assembly. It was probably the darkest thing I said all day, and one of my table mates said (not unkindly) “I don’t like your future.”
I realized that one of the problems is that being at the top of the heap, which the security services definitely are, makes the future an abomination and terror. There is no possible great change where the top doesn’t lose, even if it’s just control, rather than position. The future can only be worse than the present, and a future without fear is a future to fear.
They were mostly pleasant and thoughtful people, and they listened as best they could. One told me I made her feel like Cheney, “And I’m not used to feeling like Cheney.”
When we learn to live online we will want to be full humans again. When, as one of the outside experts put it, the Eternal September is over, we will learn once again to mind our own fucking business, and we will expect others to, as well.
That last quote is something we are going to visit again and again over the next 20, 30 or 50 years (it will likely be a hard slog). The truth is that being online is such a young concept that we haven’t a clue yet what it means in human terms, social terms, societal terms. In the same way that we expect privacy at a coffee table in a public coffee shop, even though it is possible for someone to listen to our conversation, we will find a social code developing that covers the digital in ways we will find reminiscent of the analogue.
Could you? Not if you had to because it was running amok, but just because…for no real reason. How about maiming one? Is there a moral issue involved? Or is it a daft question, considering that a robot has no consciousness or sentience?
Is destroying a robot the same as killing it? Which word generates the stronger emotional reaction?
These might seem like irrelevant questions but actually they are important considerations as we prepare ourselves for the coming tide of robotics. Much is written about what will happen to the job market, and that is important too of course, but there are also big issues of a more personal nature. What will it mean to live in a society where artificial or fake sentience is commonplace? What will happen to our definitions of humanity if artificial life can evoke the same responses from humans as real biological life does?
Sentience is the ability to feel, perceive, or to experience subjectivity
The overarching question is whether or not the appearance of sentience will affect our societal codes, both legislated and social, and if so, how?
To kick off this brief (very brief) inquiry have a quick look at this clip from the Graham Norton show. Graham is showing Gary Oldman, Toni Collette and Nick Frost a £12,000 mini robot, which, quite frankly, is very sweet, for want of a less cloying term.
That robot certainly has an uncanny ability to generate real empathy from the human audience.
At the point that it lifts its hand, like a child, to walk with Norton there is a sincere and substantial ‘aww’ from the audience; it is palpably emotive. When it falls and says ‘help me’, few people hear it (they are laughing at something else), but if you re-watch the clip and listen for the ‘help me’, it is strangely affecting.
There are other surprises too: when the technology is being stretched, the humanity disappears. At the point that it performs Gangnam Style, I feel the empathy goes totally AWOL. It’s impressive, I guess, but in a different way; it is no longer pulling on the heartstrings, it’s talking to a different part of me.
Language is important
It would seem that the markers of human discourse, or at least a certain type of human discourse, are more subtle and less grand. In the biological realm it’s long been understood that there is a particular class of ‘linguistic tool’ in normal human speech called discourse markers, which, when used, facilitate a more courteous exchange. These are the small utterances that appear as filler, often self-deprecating, when humans talk to each other. For example people often say ‘probably’, ‘I think’, ‘maybe’ even though they harbour no such doubts about what they are saying. I did it, without realising, in the previous paragraph when I wrote, “it’s impressive, I guess, but in a different way”.
This article argues that mastering such ‘simplicities’ of language (the language is humanly simple, the mastery of it in a technical architecture anything but) will be a major feature of effective human – robot communication.
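To make the idea concrete, here is a minimal, purely hypothetical sketch of what ‘hedged’ robot speech might look like in code. The article doesn’t describe an implementation; the word lists and function name here are my own invention, just to illustrate softening an utterance with a discourse marker and an occasional hedge.

```python
import random

# Hypothetical examples of hedges and discourse markers, per the article's
# description of self-deprecating filler in human speech.
HEDGES = ["probably", "I think", "maybe"]
MARKERS = ["well,", "you know,", "so,"]

def soften(utterance: str, hedge_rate: float = 0.5) -> str:
    """Prepend a discourse marker and, some of the time, a hedge."""
    parts = [random.choice(MARKERS)]
    if random.random() < hedge_rate:
        parts.append(random.choice(HEDGES))
    parts.append(utterance)
    return " ".join(parts)

# A bare instruction versus its softened form:
print(soften("add the flour now"))  # e.g. "well, maybe add the flour now"
```

The hard part, of course, is not this string-juggling but knowing when a hedge is socially appropriate; that is the ‘anything but simple’ mastery the article refers to.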
For my money this idea seems to imply that the replication of human idioms within robotic architectures can generate sincere human responses. If robots act, in certain ways, like humans then it seems that our autonomous thinking systems (which constitute most of our thinking apparatus) will treat them like humans.
They made videos of an “amateur baker” making cupcakes with a helper giving advice—in some videos the helper was a person, and in some videos the helper was a robot. Sometimes the helpers used hedges or discourse markers, and sometimes they didn’t.
You can probably already guess that the humans and robots that used hedges and discourse markers were described in more positive terms by the subjects who watched the videos. It’s somewhat more surprising to hear that the robots that hedged their speech came out better [than] the humans who did.
The author credits the novelty of robots as teachers for their superior performance in this experiment. That makes some sense, and is testable too, as robotic interaction becomes commonplace. But for the moment I am more intrigued by understanding the action of our supposedly autonomous brains, largely because if autonomous sub-conscious reactions are driving these empathetic feelings then once novelty wanes we can expect an even more seamless set of autonomic interactions with our robot compatriots. This line of thought suggests that the division between human – human interactions and human – robot interactions will become more, not less, opaque.
There are other confounding ideas to consider.
Recently a Washington Post journalist was called by a telemarketer selling health insurance, seemingly a pleasant, bright young woman. Something, however, felt a little off, he thought she was a robot. The call was recorded.
He is, in my opinion, slightly aggressive as he tests his inkling by repeatedly asking if she is a robot, to which she repeatedly answers “I am a real person”. You can see why he became suspicious; she does sound as though she could be a robot. But here’s the thing. So does he. Some of his challenges have a strangely artificial hue as well. My conjecture is that, as a result of believing he is talking to a robot, his use of discourse markers is reduced; this would also explain why his conversation comes across as unduly aggressive.
As it turns out she wasn’t a robot. Well, not quite. She was a program used by call operators with weak English, or strong accents. So there is in fact a human triggering the interactions, which explains the pauses and the repeated phrasings; clearly the operators have a ‘script’ of some kind, managed by the software, which limits the conversational creativity that can be brought to bear. It also demonstrates quite clearly that our perceptions in the realm of human – robot interactions are not 100% effective.
At one level we now have 3 choices to consider (human – human, human – robot, human – hybrid); on a more challenging level we need to consider the pitfalls and troubles that these complications could cause if our perceptions fail us and we get our sub-conscious analysis wrong.
Being rude, to a real person, which is what happened here, is just one such problem.
However, to take it to the extreme, if you can’t distinguish between artificial and human communication in a world where faceless communication is increasingly becoming the norm, then what is the nature of humanity in this faceless future? Is there a meaningful division at all?
So, back to killing a robot…it’s really not so cut and dried as it might have seemed. Robots have the ability to play with your emotions, and you might be utterly unprepared for that.
Kate Darling is a researcher who is looking at robot ethics (hooray). Alongside safety, liability and privacy Kate also spends time looking at the social side of human – robot interaction.
She recently ran a popular session at Harvard where she tackled the killing question head on.
“….she encouraged participants to bond with these robots, then to destroy one. Participants were horrified, and it took threats to destroy all robots to get the group to destroy one of the six. She observes that we respond to social cues from lifelike machines, even if we know they are not real”
There are some interesting times up ahead for us all. I’m glad that people like Kate are sufficiently forward thinking to start the dialogue now. It’s almost guaranteed that we won’t be ready, but at least we won’t be totally unaware.
I’m pretty uncomfortable with the recent developments in the field of genetically engineered crops. I haven’t done due diligence and investigated properly, so for the moment I will refrain from judgement even while I acknowledge my discomfort.
That said, I do have a problem with the fact that, in the name of intellectual property protection, these companies have severed the ability of GE crops to produce their own seeds, bringing even deeper commercial dependency into the agricultural world. That just strikes me as a deeply stupid thing to do, regardless of, or perhaps because of, the profit motive. This article is broadly in favour of GE developments in agriculture while also wishing to find a way for the intellectual property landscape, in this case patents, to become more serviceable to the world at large and not just the large global entities that have the resources and knowledge to exploit the system in their favour.
The chief protagonist, Richard Jefferson, uses the frankly wonderful story of Jan Huyghen van Linschoten to make an interesting point about how monopolies on knowledge stymie innovation and development. Linschoten, in the 16th century, copied the maps that had been the basis of the Portuguese and Spanish domination of world trade. They were a closely guarded secret that held real value. When he returned to his native Holland he didn’t sell the maps for profit; he published them, freely spreading the knowledge far and wide.
Jefferson’s point is that the patent system needs mapping in order to democratise it for, among others, developing world farmers. I can see the value in that but still feel that there are elements of shuffling deckchairs on the Titanic in such an approach. Surely the anecdote casts doubt on the whole concept of patents, not simply their accessibility?
The second link today is an anonymous screed from an American investment manager. His firm’s clients are firmly in the top 1%, with net worth in excess of $5m and annual incomes above $300,000. The whole piece is very much worth reading, if only to discover that roughly $5m is required to secure a 30 year retirement income in the category that most of us would categorise as rich. That has certainly influenced the investment strategy that I will be adopting once I win the lottery (it has, incidentally, focused me almost exclusively on the EuroMillions lottery, which has potential wins above $10m; after all, what is the point in taking such a ludicrous punt if I can’t even be financially secure for life as a result?).
Truth be told there is little in here to find humour in. Instead it is a quantified look inside the finances of the super wealthy in America and what can be bought in return.
Unlike those in the lower half of the top 1%, those in the top half and, particularly, top 0.1%, can often borrow for almost nothing, keep profits and production overseas, hold personal assets in tax havens, ride out down markets and economies, and influence legislation in the U.S. They have access to the very best in accounting firms, tax and other attorneys, numerous consultants, private wealth managers, a network of other wealthy and powerful friends, lucrative business opportunities, and many other benefits. Most of those in the bottom half of the top 1% lack power and global flexibility and are essentially well-compensated workhorses for the top 0.5%, just like the bottom 99%.
Finally, something of a more friendly hue: Cicada 3301. For 2 years or so a nameless organisation has been posing a series of brain-numbingly difficult puzzles and problems, originally seeded in locations on the open web. It, quite frankly, reads like something from a William Gibson novel, which is probably no accident.
A certain level of progress (the clues seemed to lead to more clues) delivered a telephone number and the message “call us”, which in turn, after another act of deciphering, led to a website and a countdown. The countdown revealed real world locations and a whole new angle opened up. There is much more to the story, best recounted by the article linked below rather than regurgitated by me here.