4 from 2014 (links)

I periodically revisit the links at the bottom of my reading list (managed by Pocket) to see whether I can dump them from the list unread, leave them there for reading at some unspecified future point or, ideally, read them and push them to my archive.

Having done this recently, these 4 links from 2014 stood out. All of them talk about the way the world is today, and they are particularly interesting to me because they could all just as easily have been published this year (with some small changes, no doubt).

The Economist challenges the principle of driving shareholder value and notes that since it became a corporate mantra the value of a CEO’s remuneration has increased 8-fold, which most certainly was not the point. Please remember that the quote that follows is from 2014, when no one thought Brexit or Trump likely to come to pass…

And that trend, which has spilled out to the rest of the developed world, is leading to the growing anger of voters and the resistance to globalisation which may eventually cause even more damage to business and investors.

This link is a guide to Shenzhen, almost certainly out of date in some regard (I’m no Shenzhen expert), but the reality of the Chinese tech markets is fascinating and, although it took me nearly 3 years to read this, I’m still glad I did.

A good overview from the Guardian about the progress made by ‘robot’ writers. The primary example here is a quick news story about an earthquake, in this case some gentle tremors. Approved and published by a human, but written by a computer. This was in 2014; I can’t help but imagine the practice is more widespread today.

Hammond was in the limelight recently, having claimed that by 2025 90% of the news read by the general public would be generated by computers. “That doesn’t mean that robots will be replacing 90% of all journalists, simply that the volume of published material will massively increase,” he explains. “Take the example of small amateur baseball games. They don’t interest the media, but several dozen people follow each one. Quill collates data on thousands of these games and can produce thousands of articles almost instantaneously, one for each match, in a style similar to sportswriters, who are easy to imitate.”

The last link is about the nature of the global clothing industry, although the article concentrates on the Korean impact on LA’s jobber market.

How did this neighborhood become what it is? The answer lies in a 50-year process of migration and generational progress—one that has recently reached a kind of critical mass.

A well-written exploration that reminds us that, even in these times of incredible and fast change, the slower-moving trends driven by demographics and migration still hold some sway.


3 short links

It seems to be an unavoidable truth that just before economic bubbles pop, trading confidence and bullish sentiment are likely to be at a significant high. While some of this is undoubtedly definitional (a bubble can’t build without aggressive bullish sentiment), it always leaves me asking why people don’t see the inevitable coming. The short answer is that actually they do, or at least some people do. Provenance is important here. There are commentators who do nothing apart from forecast doom-laden futures, and they spend a large amount of their time being wrong (even if they claim it is only the timing they are missing, it still feels to me like the stopped watch that tells the time correctly twice a day). When the warning signs are being broadcast by the bankers themselves, however, I feel we should take note. It won’t change any behaviours of course; bullish pre-burst behaviour is as much about herd tendencies as anything else, so these charts are for those of us without our fingers on the so-called pulse.

From Schroders – The record amount being borrowed by investors is a worrying sign


Apparently 30 miles up from the surface of Venus is a sweet spot for human habitation, with Mediterranean temperatures and ideal barometric pressures. This article outlines the discussion about the real-world feasibility of colonising Venus with floating cities. It’s a somewhat Hemingway-esque life that’s described, where men really are men. Not for me, but I’d love to see the movie.

Venus is not for the timid, or people too afraid to shove a fat bird out the airlock and let the harsh laws of thermodynamics do the work. Venus is for men. Men who like to eat meat – cooked in fire and acid and seasoned with the Devil’s own mix of volatiles boiled up from the pits of hell.

The surprisingly strong case for colonising Venus

 

Finally, back to more mundane earthly matters, we have this interesting, recently discovered feature of Google Maps. In parts of the world where there are visceral border disputes, Google shows different boundaries depending on where the map request is made from. Does this fuzziness make things more comfortable for those involved in the dispute? I must conclude that it is the only sensible option Google can take: delivering the map each territory expects to see. In this regard it is no different to the maps that would have prevailed 200 years ago. The really interesting thing, though, comes from imagining what would happen if Google decided not to bother and simply presented one version, in the process taking on a role as an arbiter of national boundaries. They don’t do that today, but it is the kind of thing that could one day fall under the remit of “organising all the world’s information”.

How do you feel about that?
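For what it’s worth, this behaviour surfaces to developers too. Below is a minimal sketch, assuming the `region` parameter of the Maps Static API behaves as Google’s documentation describes; the API key and coordinates are placeholders, not working values.

```python
# Sketch only: build two static-map URLs for the same disputed area, rendered
# "as seen from" two different countries via the `region` parameter.
# The key and coordinates are placeholders.
from urllib.parse import urlencode

BASE = "https://maps.googleapis.com/maps/api/staticmap"

def border_view(region_code: str, api_key: str = "YOUR_KEY") -> str:
    """Return a static-map URL whose disputed borders are drawn according to
    the supplied two-letter region code (e.g. 'IN' vs 'CN')."""
    params = {
        "center": "34.5,76.5",   # roughly the Kashmir region
        "zoom": 5,
        "size": "640x480",
        "region": region_code,   # changes which boundary lines are drawn
        "key": api_key,
    }
    return f"{BASE}?{urlencode(params)}"

print(border_view("IN"))
print(border_view("CN"))
```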

 


links again, 4 of ’em

Earlier this year Netflix and Comcast had a little contretemps about their peering agreement. Unless you spend time trying to keep up with the various layers, and the players therein, that make up the technical infrastructure of the internet, that statement may mean absolutely nothing to you, but actually it’s all quite important.

Peering is the name given to the agreements that cover the terms for transferring internet traffic between networks, and those agreements are a fundamental cornerstone of the modern internet. Because there is no single network that covers the whole world, it is important that traffic can be exchanged, as required by the needs of the user, with the least possible friction. Historically these agreements were made between engineers, and each network simply agreed to open up to the others as required, with no money involved. Comcast decided that they wanted to buck that history and demanded payment from Netflix. Netflix suggested that prior to this negotiation Comcast were deliberately allowing congestion in certain parts of their network to negatively affect Netflix’s customers (something Comcast denied) and that they had no choice but to pay the ransom.
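As an aside, a lot of settlement-free peering policies hinge on something as mundane as a traffic ratio. Here is a toy sketch of that kind of clause; the 2:1 threshold is purely illustrative and not taken from any actual agreement.

```python
# Toy illustration of a traffic-ratio clause often found in settlement-free
# peering policies. The 2:1 threshold is illustrative, not from a real policy.
def within_settlement_free_ratio(sent_tb: float, received_tb: float,
                                 max_ratio: float = 2.0) -> bool:
    """Return True if the traffic exchanged in each direction is balanced
    enough to stay settlement-free under a simple ratio rule."""
    heavier, lighter = max(sent_tb, received_tb), min(sent_tb, received_tb)
    return heavier / lighter <= max_ratio

# A content-heavy network pushing lots of video one way breaks the ratio,
# which is the usual opening for a paid-peering demand.
print(within_settlement_free_ratio(sent_tb=100, received_tb=90))   # True
print(within_settlement_free_ratio(sent_tb=500, received_tb=50))   # False
```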

Level 3 is one of the big global internet backbone companies that carry enormous amounts of web traffic. They are one of the companies that pay for, own and maintain the cables that run under the oceans. This post on their blog lays out some of the dynamics that go into peering agreements, and even though it doesn’t deal directly with the situation between Netflix and Comcast it should probably give you enough information to understand who is playing the shithead and who isn’t.

Level 3

 

I never did try playing Go, the ancient Chinese strategy game, and after reading this article I’m starting to think that that is no bad thing. Like chess it’s a 2-player war game, but unlike chess it’s a game (possibly the only game left) that retains an unbeaten crown in human vs computer match-ups. Machines beat humans at checkers in 1994 and chess was added to the list in 1997. Now, 17 years later, Go still holds out, and holds out in comfort. Every year the University of Electro-Communications in Tokyo hosts the UEC Cup, where computer programs compete against each other for the opportunity to play a Go sage, one of the world’s top Go players. The challenge is not one of brute-force computing power; it’s more about strategic understanding, and the fact that by all accounts we don’t really understand what goes into being a great Go player, human or otherwise.
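The scale of the brute-force problem is easy to sketch with the commonly cited approximations: roughly 35 legal moves per chess position over a game of about 80 plies, versus roughly 250 moves per Go position over about 150 plies.

```python
# Back-of-the-envelope comparison of game-tree sizes, using commonly cited
# approximations for branching factor and game length (in plies).
import math

def log10_tree_size(branching: int, plies: int) -> float:
    """Rough log10 of the number of move sequences, i.e. branching ** plies."""
    return plies * math.log10(branching)

chess = log10_tree_size(branching=35, plies=80)     # roughly 10^124
go = log10_tree_size(branching=250, plies=150)      # roughly 10^360
print(f"chess ~10^{chess:.0f}, go ~10^{go:.0f}")
```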

Wired

 

Ever wondered why the airwaves are licensed by the government? If you think about it a little, the chances are you will come to what seems like a very simple and straightforward conclusion, which is that the airwaves are licensed so that broadcast signals don’t interfere with each other. To ensure that when you tune in to 97-99 FM here in the UK, you will receive Radio 1, not some other outfit broadcasting on the same frequency. Which is all well and good, except that electromagnetic waves simply don’t interfere with each other in any destructive sense; they pass through one another unchanged. The concept of interference seems to imply that there is only so much space on any particular frequency that can carry signals, that there is only so much spectrum available. Colours are on the same spectrum as radio waves, separated only by their different frequencies; to suggest that spectrum is limited is to suggest that there is only so much red to go around, which is clearly a farcical concept.

All of that, which is sound and uncontroversial sixth-form physics, does raise some interesting questions about our radios and TVs and mobile phones, all of which broadcast across licensed electromagnetic frequencies. It turns out that the problem of interference is a problem of the broadcasting and receiving equipment, not of any natural scarcity of the airwaves. We have a system that limits access to frequencies because we are still using a technology base optimised to an old technical paradigm.
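To see the point in miniature, here is a small numerical sketch (the frequencies are arbitrary): two transmissions simply add together in the air, and a sufficiently capable receiver can still pull them apart, because superposition destroys nothing.

```python
# Two "stations" share the same air (their signals are simply summed), yet a
# receiver can separate them again with a Fourier transform. Frequencies and
# the peak threshold are arbitrary choices for the demonstration.
import numpy as np

fs = 1000                                   # samples per second
t = np.arange(0, 1, 1 / fs)                 # one second of "listening"
station_a = np.sin(2 * np.pi * 50 * t)      # broadcast at 50 Hz
station_b = np.sin(2 * np.pi * 120 * t)     # broadcast at 120 Hz
airwaves = station_a + station_b            # both occupy the same space

spectrum = np.abs(np.fft.rfft(airwaves))
freqs = np.fft.rfftfreq(len(airwaves), 1 / fs)
print(freqs[spectrum > 100])                # [ 50. 120.] -- both recoverable
```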

This piece published in Salon gives you the full detail and quotes extensively from the work of David Reed, an important and prominent scientist from MIT, famous for co-writing the paper that nailed down the end-to-end architecture of the modern internet. Understanding what is actually going on here turns out to be entertaining and enlightening.

Salon – The myth of interference

 

I don’t know whether this last link is being serious or not, and that alone might be the best observation I have to make about the state of modern economics.

Alex Tabarrok is a right-leaning economist who writes the blog Marginal Revolution with Tyler Cowen; both are professors at George Mason University in Virginia. This short post, one of many from the right responding to the fuss being made about Piketty’s Capital, offers “2 surefire solutions to inequality”. One is to increase fertility among the rich, diluting the inheritance and reducing capital concentration. The other surefire way? To reduce fertility among the rich! The author of the post puts a lot more detail into this position than I do here. I’m leaning towards the opinion that he is simply having a laugh, but then, as he is an economist, I’m really not so sure.
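The first ‘solution’ is really just compounding fighting division, which a toy calculation makes plain. All the numbers below are invented for illustration.

```python
# Toy illustration of the dilution argument: a fortune compounds at rate r but
# is split among n heirs each generation (taken here as 30 years). Invented numbers.
def per_heir_share(fortune: float, r: float, heirs_per_generation: int,
                   generations: int) -> float:
    """Wealth reaching each descendant after the given number of generations,
    assuming steady compounding and equal splits."""
    for _ in range(generations):
        fortune = fortune * (1 + r) ** 30 / heirs_per_generation
    return fortune

# With 2 heirs per generation each share still grows; with 5 heirs the fortune
# is diluted faster than it compounds.
print(round(per_heir_share(10_000_000, 0.04, heirs_per_generation=2, generations=3)))
print(round(per_heir_share(10_000_000, 0.04, heirs_per_generation=5, generations=3)))
```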

Marginal Revolution



Small phones, mobile computing, innovative disruption and speculation


Apps apps apps

I don’t like getting my content from apps. I use the mobile web instead if I have to, but the majority of my web access is on a laptop with a nice big screen, which means my prejudice doesn’t hurt me too much.

I know that that sets me apart from the majority of smartphone users, but as I said before, my primary web access point isn’t my phone so the limitations of the mobile web that have enabled app culture don’t really hurt me.

I also believe that it is a culture with a short shelf life.

It is a system with an unusual degree of friction in comparison to an easily imagined alternative. The whole native apps versus web apps argument that has been running for years now is largely decided, at any one point in time, by the state of play with regard to mobile technology and the user requirements of said technology. I’m pretty certain the technology will improve faster than our user requirements will grow more complicated.

That easily imagined alternative is pretty straightforward too. A single platform, open and accessible to all, built on open protocols, that can support good enough access to hardware sensors, is served by effective search functionality and doesn’t force content to live within walled gardens.

The web, in other words. Not fit for purpose today (on mobile), but it will be soon enough.

Monetisation possibly remains a challenge, but I’m also pretty sure we will eventually learn to pay for the web services that are worth paying for. Meaningful payment paradigms will arrive; the profitless-startup, subsidised-media model (which is what almost all these apps are) can’t last. We will get over ourselves and move beyond the expectation of free.

Should everything be on the web? No, not at all. For me there is a fuzzy line between functional apps and content apps. The line is fuzzy because some content apps contain some really functional features, and vice versa. Similarly, some apps provide function specifically when a network connection is unavailable (Pocket, for example). The line is drawn between content and function only because I strongly believe that the effective searchability of content, via search engines or via a surfing schema, and the portability of that content via hyperlinks are at the heart of what the web does so well and generate so much of its value. If all our content gets hidden away in silos and walled gardens then it simply won’t be read as much as it would be on the open web. And that would be a big, perverse shame.

Why can’t we just publish to the stream? Because it is a weak consumption architecture, because it is largely unsearchable (although I suspect that can change, once we move beyond the infinite scroll) and because you don’t own it (don’t build your house on someone else’s land). That’s another whole essay and not the focus of this one.

I’ve felt this way for a long time so I am happy to say that finally some important people are weighing in with concerns in the same area. They aren’t worried about the same specific things as me, but I can live with that if it puts a small dent in the acceleration of app culture.

Chris Dixon recently published this piece calling out the decline of the mobile web. I think he is overstating the issue, actually, but only because I think the data doesn’t distinguish effectively between apps that I believe should be web-based and those I am ambivalent about. He cites 2 key data sources.

[Chart: ComScore data on mobile vs desktop internet users, 2014]

The ComScore data for total desktop users vs total mobile users is solid enough, showing that mobile overtook desktop in the early part of this year. Interestingly, the graphic also shows that desktop user volumes are still rising, even if their growth rate lags behind mobile’s. Not many people are taking this on board in the rush to fall in love with the so-called post-PC era. It’s important because, to really understand the second data point, provided by Flurry, you have to look at the very different reasons people are using their mobile computers.

Flurry tells us that the amount of time spent in apps, as a % of time on the phone/internet itself, rose from 80% in 2013 to 86% (so far) in 2014. That’s a lot of time spent in apps at the cost of time spent on the mobile web.

But the more detailed report shows that the headline number is less impressive than it at first seems. All that time in apps, it turns out, isn’t actually at the cost of time spent on the web; not all of it, anyway.

[Chart: Flurry data on share of mobile time spent, by app category, 2014]

Of the 86%, 32% is gaming, 9.5% is messaging, 8% is utilities and 4% is productivity. That totals 53.5% of total mobile time, and 62% of all time spent in apps specifically. The fact that gaming now occurs within networked apps on mobile devices, rather than as standalone software on desktops, only tells us that the app universe has grown to include new functionality not historically associated with the desktop web. It tells us that this new functionality has not been removed from the mobile web, that in this case the mobile web has not declined. As I said, I agree with the thrust of the post; I just think the situation is less acute.
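For the record, the arithmetic behind those two figures, using the category shares quoted above:

```python
# The adjustment made above, using the Flurry category shares quoted in the
# text (all figures are percentages of total mobile time).
time_in_apps = 86.0
non_web_like = {"gaming": 32.0, "messaging": 9.5,
                "utilities": 8.0, "productivity": 4.0}

total_non_web_like = sum(non_web_like.values())
share_of_app_time = 100 * total_non_web_like / time_in_apps

print(total_non_web_like)            # 53.5 (% of all mobile time)
print(round(share_of_app_time, 1))   # 62.2 (% of time spent in apps)
```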

Chris Dixon’s main concern is for the health of his business (he’s a partner at Andreessen Horowitz, a leading tech VC). He sees the rise of 2 great gatekeepers, Google and Apple, the owners of the 2 app stores that matter, as a problematic barrier to innovation, or rather a barrier to the success of innovation (there is clearly no shortage of innovation itself).

He’s not alone; Mark Cuban has made a similar set of observations.

Fred Wilson too, although he is heading in an altogether different and more exciting direction again, which I will follow up in a different post (hint: blockchain, distributed consensus and personal data sovereignty). Still, this quote from Fred’s post sums up the situation rather well from the VC point of view.

My partner Brad hypothesized that it had something with the rise of native mobile apps as the dominant go to market strategy for large networks in the past four to five years. So Brian pulled out his iPhone and I pulled out my Android and we took at trip through the top 200 apps on our respective app stores. And there were mighty few venture backed businesses that were started in the past three years on those lists. It has gotten harder, not easier, to innovate on the Internet with the smartphone emerging as the platform of choice vs the desktop browser.

The app stores are indeed a problem in this regard; I don’t disagree, and will leave it to Chris’s and Mark’s posts to explain why. I am less concerned about this aspect partly because it’s not my game: I don’t make my money funding app building or building apps. But mostly I’m unconcerned because I really believe that (native) app culture has to be an adolescent phase (adolescent in as much as the internet itself is a youngster, not that the people buying and using apps are youngsters).

If I am right and this is a passing phase can we make some meaningful guesses as to what might change along the way?

I’ve been looking at mobile through the eyes of Clay Christensen’s innovator’s dilemma recently, which I find to be a useful filter for examining the human factors in technology disruptions.

Clearly, if mobile is disrupting anything it is disrupting desktop computing, and true to form the mobile computing experience is deeply inferior to that available on the desktop or laptop. There are, actually, lots of technical inferiorities in mobile computing, but the one that I want to focus on is the simplest one of all: size. The mobile screen is small.

Small small small small. I don’t think it gets enough attention, this smallness.

If it’s not clear where I’m coming from, small is an inferior experience in many ways, although it is, of course, a prerequisite for the kind of mobility we associate with our phones.

Ever wondered why we still call what are actually small computers, phones? Why we continually, semantically place these extremely clever devices in the world of phones and not computers? After all the phone itself is really not very important, it’s a tiny fraction of the overall functionality. Smartphones? That’s the bridge phrasing, still tightly tied to the world of phones not computers. We know they are computers of course, it’s just been easier to drive their adoption through the marketing semantic of telephony rather than computing. It also becomes easier to dismiss their limitations.

Mobile computers actually predate smartphones by decades; they were, and still are, called laptops.

Anyway, I digress. For now let’s run with the problems caused by small computers. There are 2 big issues that manifest in a few different ways: screen size and the user interface.

Let’s start with screen size (which of course is actually one of the main constraints that hampers the user interface). Screen size is really important for watching TV, or YouTube, or Netflix, etc.

We all have a friend, or 2, who gets very proud of the size of their TV. The development trend is always towards bigger and bigger TV screens. The experience of watching sport, or a movie, on a big TV is clearly a lot better than on a small one (and cinema is bigger again). I don’t need to go on; it’s a straightforward observation.

No one thinks that the mobile viewing experience is fantastic. We think it’s fantastic that we can watch a movie on our phones (I once watched the first half of Wales v Argentina on a train because I couldn’t be at home; I was stoked), but that is very different from thinking it’s a great viewing experience. The tech is impressive, but then so were those tiny 6-inch-by-6-inch black-and-white portable TVs that started to appear in the ’80s.

Eventually all TV (all audio visual) content will be delivered over IP protocols. There are already lots of preliminary moves in this area (smart TVs, Roku, Chromecast, Apple TV) and not just from the incumbent TV and internet players. The console gaming industry is also extremely interested in owning TV eyeballs as they fight to remain relevant in the living room. Once we watch all our AV content via internet protocols the hardware in the front room is reduced to a computer and a screen, not a TV and not a gaming console. If the consoles don’t win the fight for the living room they lose the fight with the PC gamers.

Don’t believe me? Watch this highlights reel from the Xbox One launch… TV TV TV TV TV (and a little bit of Halo and COD).

The device that the family watches moving images on will eventually be nothing but a screen. A big dumb screen.

The smart functionality, the computing functionality that navigates the protocols to find and deliver the requisite content, will be provided by any number of computing devices. The screen will be completely agnostic, an unthinking receptor.

The computer could be your mobile device, but it probably won’t be. As much as I have derided the phone part of our smartphones, they are increasingly our main form of telephony. As attractive as the thought is, we are unlikely to want to take our phones effectively offline every time we watch the telly. Even if at some stage mobile works out effective multi-tasking, we will still likely keep our personal devices available for personal use.

What about advertising? That’s another area where bigger is just better. I’ve written about this before: all advertising channels trend towards larger, dominant experiences when they can. We charge more for the full-page press ad, the homepage takeover and the enormous poster than for the little high-street navigational signs telling you that B&Q is around the corner. Advertising craves attention and uses a number of devices to grab or force it. Size is one of the most important.

Mobile is the smallest channel we have ever tried to push advertising into. It won’t end well.

The Flurry report tries to paint a strong picture for mobile advertising, but it’s not doing a very good job; the data just doesn’t help them out. They start with the (sound) idea that ad spend follows consumption, i.e. where we spend our time as consumers of media is where advertisers buy media, in roughly similar proportions. But the only part of the mobile data they present that follows this rule is Facebook, the only real success story in the field of mobile display advertising. The clear winner, as with all things digital, is Google, taking away the market quite disproportionately with their intention-based ad product delivered via search.

[Chart: media consumption vs ad spend, by channel]

There are clouds gathering in the Facebook mobile advertising story too. Brands have discovered that it is harder to get content into their followers’ newsfeeds. The Facebook solution? Buy access to your followers.

That’s a bait and switch. A lot of these brands are significant contributors to the whole Facebook story itself. Every single TV ad, TV program, media columnist or local store that spent money to devote some part of their communication real estate to the line “follow us on Facebook” helped to build the Facebook brand, helped to make it the sticky honeypot that it is today.

And now if you want to leverage those communities, you have to pay again. Ha.

“Oh, but there’s value in having a loyal, active base of followers,” I hear the acolytes say. “Facebook is just the wrapper.” OK, then in that case ask them all for an email address and be guaranteed that 100% of your communications will reach them. Think that will work? Computer says no.

Buying Facebook friends? That’s starting to whiff too.

That leaves Google/search. The only widely implemented current revenue model that has a future on the mobile web. Do you know what would make the Google model even more effective? A bigger screen, and a content universe on the open web.

To be fair the idea that we must pursue pay models is starting to take hold in the communities that matter. This is good news.

Read this piece. It’s as honest a view as you will find within the advertising community, and it still doesn’t get that the mobile medium is fundamentally flawed simply because it is small. There is no get-out-of-jail-free card for mobile, unless it gets bigger.

Many of the businesses we met have a mobile core. A lot of them make apps providing a subscription service, some manage and analyze big data, and others are designed to improve our health care system. The hitch? None of them rely on advertising to generate revenue. A few years ago, before the rise of mobile, every new startup had a line item on its business plan that assumed some advertising revenue.

What did you think about the Facebook WhatsApp purchase? It can’t have been buying subs; it’s inconceivable that most of the 450m WhatsApp users weren’t already on Facebook. Similarly, even though there is an emerging-markets theme to the WhatsApp universe, Facebook hasn’t been struggling to pick up adoption in new markets. My money is that Zuckerberg bought a very rare cadre of digital consumer: consumers willing to pay for a service that can be found in myriad other places for free. WhatsApp charges $1 a year; a number of commenters have noted this and talked of Zuckerberg purchasing a revenue line. I think they have missed the wood for the trees. That $1 a year is good money when you have 450m users, for sure. It’s even better when you have nearly a billion and many of them are getting fed up with spammy newsfeeds. What will you do if you get an email that says “Facebook, no ads ever again, $5”? He doesn’t just get money, he also gets to concentrate on re-building the user experience around consumers, not advertisers.
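The back-of-the-envelope sums implied there, with user counts and prices as quoted or imagined in the paragraph above (illustrations, not reported figures):

```python
# Illustrative sums only: the quoted $1/year across ~450m users versus the
# imagined "$5, no ads ever" offer across ~1bn users.
whatsapp_today = 450_000_000 * 1
facebook_hypothetical = 1_000_000_000 * 5

print(f"${whatsapp_today / 1e6:.0f}m per year")          # ~$450m
print(f"${facebook_hypothetical / 1e9:.0f}bn per year")  # ~$5bn
```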

The WhatsApp purchase is the first time I have tipped my metaphorical hat in Zuckerberg’s direction. The rest is morally dubious crud that fell into his lap; this is a decently impressive strategic move. I still won’t be signing up anytime soon though.

Ok. Small. Bad for advertising. What else?

Small devices also have necessarily limited user interfaces. The QWERTY keyboard, for all of its random derivation, works really well. There is no other method that I use that is as flexible and as speedy for transferring words and numbers into a computer.

The touchscreen is impressive though; it is the technology that, for my money, transformed the market via the iPhone. My first thought when I saw one was that I would be able to actually access web pages and make that experience useful through the pinch-and-zoom functionality. I had a BlackBerry at the time, with the little scroll wheel on the side. Ugh. But even that wonder passed, now that we optimise the mobile web to fit on the small screens as efficiently as possible. No need to pinch and zoom, just scroll up and down, rich consumption experience be damned.

Touchscreens are great; they just aren’t as flexible as a keyboard. As part of an interface suite alongside a keyboard they are brilliant. As an interim interface to make the mobile experience ‘good enough’, again brilliant. As my primary computing interface … no thanks.

When the iPad arrived, the creative director at my agency, excited like a small child as he held his tablet for the first time, decided to test its ability as a primary computing interface. He got the IT department to set up a host of VPNs so that he could access his networked folders via the internet (no storage on the iPad) and then went on a tour of Eastern Europe without his MacBook.

He never did it again. He was honest about his experience, it simply didn’t work.

What about non-text creativity? Well, if you can, have a look inside the creative department at an advertising agency. It will be wall-to-wall Apple. But they will also be the biggest Macs you have ever seen. Huge, glorious screens, everywhere.

You can do creative things on a phone, or a tablet but it is not the optimised form factor. It seems silly to even have to type that out.

If small is bad, and if mobile plays out according to Christensen’s theories (and remember this is just speculation), then eventually mobile has to get bigger. Considering that we have already reduced the acceptable size of tablets in favour of enhanced mobility this seems like some claim.

For my money the mobile experience goes broadly in one of two ways.

In both scenarios it becomes the primary home of deeply personal functionality and storage: your phone, your email (which is almost entirely web-accessible these days), your messenger tool, maps, documents, music, reminders, life-logging tools (how many steps did you take today?), as well as trivial (in comparison) entertainments such as the kind of games we play on the commute home.

The fork derives from what happens to desktop computing: the computing of creation and the computing of deeper, better consumption experiences.

We either end up in a world with such cheap access to computing power that everyone has access to both mobile computing and desktop computing, via myriad access points, and the division of labour between the two is decided purely by which platform best suits a task. Each household might have 15 different computers (and I’m not even thinking of wearables at this stage) for 15 different tasks. This is a bit like the world we have today, except that desktop access is much cheaper and more widely available, and not always on a desk.

Alternatively, the mobile platform does mature to become our primary computing device, a part of almost everything we do. This is much more exciting. This is the world of dumb screens that I mentioned earlier. If mobile is to fulfil its promise and become the dominant internet access point then it must transcend its smallness. There are ways this could happen.

We already have prosthetics. Every clip on keyboard for a tablet is a prosthetic.

We already have plugins. Every docking station for a smartphone or a tablet is a plugin.

The quality of these will get better but they still destroy the idea of portability. If you have to haul around all the elements of a laptop in order to have effective access to computing you may as well put your laptop in your bag.

I can see (hold on for the sci-fi wishlist) projection solutions for both keyboards and screens, where the keyboard is projected onto a horizontal surface and the screen onto a vertical one. Or what about augmented and virtual reality? Google Glass is a player, although its input interface is a dead duck for full-scale computing, but how about an optimised VR headset that isn’t the size of a shoebox? Zuckerberg might have been thinking about the metaverse when he bought Oculus, but there will potentially be other interim uses before humanity decides to migrate to the virtual à la Ready Player One; VR could be a part of that.

The more plausible call is a plugin/cheap-access hybrid. Some computing will always be use-case specific. I think it’s likely that a household’s primary AV entertainment screen, for example, the big one in the living room, will have its own box, maybe controlled by a mobile interface but configured such that your mobile platform isn’t integral to media consumption. The wider world, though, I think will see dumb screens and intelligent mobile devices working together everywhere. Sit down, slot in your mobile: nice big screen, all your files, all your preferences, a proper keyboard (the dumb screen is largely static, of course, so it comes with a keyboard) and a touchscreen too, of course; all in all, a nice computing experience.

There are worlds of development to get there, of course. Storage, security and battery life, among other areas, will all need to pick up, but if we have learnt anything since 1974 it’s that computers do get better.

I really woke up to Christensen’s theory with the example of the transistor radio, which was deeply inferior to the big old valve table-top sets that dominated a family’s front room, often in pride of place. The transistor radio was so poor that the kids who used them needed to tightly align the receivers with the broadcast towers. If you were facing south when you needed to face north, you were only going to get static. But that experience was good enough. Not for mum and dad, who still listened to the big radio at home, but for teenagers this was the only way they could hear the songs they wanted to hear without adults hovering. All of those limitations were transcended eventually, and in the end everyone had their own radio. One in the kitchen, one in every teenager’s room, the car, the garage, portable and not portable. Radios eventually got to be everywhere, but none of them were built around valves like those big table-top sets.

Mobile computing is a teenager’s first wholly owned computing experience (I’m not talking about owning the hardware). It happens when they want it to, wherever they are. It’s also been many an older person’s first wholly owned computing experience, and it’s certainly been the first truly mobile computing experience for anyone. I might have made the trivial observation that laptops were the first mobile computers, and they were, but you couldn’t pull one out of your pocket to get directions to the supermarket.

Christensen’s theory makes many observations. Among them is that the disruptive technology always improves and ends up replacing the incumbent it originally disrupted. The users of the old technology eventually move to the new technology; the old never wins.

We aren’t going to lose desktop computing though. Or at least we aren’t going to lose the need states that are best served by desktop computing. If you want to write a 2,000-word essay, your phone is always going to be a painful tool to choose. But you might be able to plug it into an easily available dumb workstation and write away in comfort instead.

At least let’s hope so.


Speaking Truth to Power

Quinn Norton is a name that hovered on the edge of my consciousness for a chunk of time. She dated Aaron Swartz for 3 years, and in the furore after his death her name became part of that story as it emerged that prosecutors had put pressure on her to offer testimony against him, something she did not do.

At this year’s Chaos Communication Congress she spoke, with Eleanor Saitta, on the topic “No Neutral Ground in a Burning World”, which gave a historical perspective on governmental surveillance and how technology is changing the state’s need for legibility.


Then, finally, I found this piece, “A Day of Speaking Truth to Power, Visiting the ODNI”. The ODNI is the Office of the Director of National Intelligence: spook central, and a scary place for anyone, let alone a journalist with a reputation for speaking ill of the intelligence community (IC). It’s a great read for so many reasons. It paints a picture of the IC that is both confirming and revelatory, and it also surfaces a number of interesting ideas that could have been brought up in a completely different context but found their way into this conversation instead.

When I read longer-form essays I often take one quote, sometimes two, and send them to my email for recycling into a post (credited, of course), or to remind myself to dig deeper into a topic, or just to add a highlight to a particular piece, to make it stand out against everything else I am reading.

With this essay I sent myself 9 different quotes; I’ve listed 5 of them below. Please read all of it, though; it’s fascinating.

 

It was clear to me part of this mess we’re in arose from the IC feeling that if everyone was giving away their privacy, then what they, the Good Guys, did with it could not be that big of a deal.

 

I said the end of antibiotics and the rise of diseases like resistant tuberculosis would mean that tech would end at our skin, and we’d be limited in our access to public assembly. It was probably the darkest thing I said all day, and one of my table mates said (not unkindly) “I don’t like your future.”

 

I realized that one of the problems is that being at the top of the heap, which the security services definitely are, makes the future an abomination and terror. There is no possible great change where the top doesn’t lose, even if it’s just control, rather than position. The future can only be worse than the present, and a future without fear is a future to fear.

 

They were mostly pleasant and thoughtful people, and they listened as best they could. One told me I made her feel like Cheney, “And I’m not used to feeling like Cheney.”

 

When we learn to live online we will want to be full humans again. When, as one of the outside experts put it, the Eternal September is over, we will learn once again to mind our own fucking business, and we will expect others to, as well.

 

That last quote is something we are going to visit again and again over the next 20, 30 or 50 years (it will likely be a hard slog). The truth is that being online is such a young concept that we haven’t a clue yet what it means in human terms, social terms, societal terms. In the same way that we expect privacy at a coffee table in a public coffee shop, even though it is possible for someone to listen to our conversation, we will find a social code developing that covers the digital in ways we will find reminiscent of the analogue.


Push to Window – 3 links

James Fallon is a neuroscientist. One day, while working on a project on Alzheimer’s using his own family’s brain scans as raw data, he discovered that his own brain was, in clinical terms, the brain of a psychopath. That is to say, he quite accidentally discovered that he was, to all intents and purposes, a psychopath. So he wrote a book about it. This link is an interview with him. Fascinating.

Again, I was joking around, but it was a real danger. The next day, we walked into the Kitum Caves and you could see where rocks had been knocked over by the elephants.  There was also the smell of all of this animal dung—and that’s where the guy got the Marburg [a fatal virus]; scientists didn’t know whether it was the dung or the bats.

A bit later, my brother read an article in The New Yorker about Marburg, which inspired the movie Outbreak. He asked me if I knew about it. I said, “Yeah. Wasn’t it exciting? Nobody gets to do this trip.” And he called me names and said, “Not exciting enough. We could’ve gotten Marburg; we could have gotten killed every two seconds.” All of my brothers have a lot of machismo and brio; you’ve got to be a tough guy in our family. But deep inside, I don’t think that my brother fundamentally trusts me after that. And why should he, right? To me, it was nothing.

http://www.theatlantic.com/health/print/2014/01/life-as-a-nonviolent-psychopath/282271/

 

This is the text of Maciej Ceglowski’s talk at this year’s Webstock. Under the guise of exploring the moments when the future makes itself known to us, he treats us to the story of Lev Sergeyevich Termen. And it really is a treat. Termen was a Soviet scientist, first under Lenin, then under Stalin. He also invented the theremin, a technology that was a distant precursor to the touchscreens we all carry around today.

There’s a very important rant, about two-thirds of the way through, about the centralisation of power that results from the current internet architecture. It’s long but also very entertaining.

Why make such a big deal of electrification?

Well, Lenin had just led a Great Proletarian Revolution in a country without a proletariat, which is like making an omelette without any eggs. You can do it, but it raises questions. It’s awkward.

Lenin needed a proletariat in a hurry, and the fastest way to do that was to electrify and industrialize the country.

But there was another, unstated reason for the campaign. Over the centuries, Russian peasants had become experts at passively resisting central authority. They relied on the villages of their enormous country being backward, dispersed, and very hard to get to.

Lenin knew that if he could get the peasants on the grid, it would consolidate his power. The process of electrifying the countryside would create cities, factories, and concentrate people around large construction projects. And once the peasantry was dependent on electric power, there would be no going back.

https://static.pinboard.in/webstock_2014.htm

 

This last link is also a small trip back into recent history, telling the broad story of former congressman Otis Pike, who died earlier this year. Pike was the unfortunate soul who in 1975 led the House committee investigating the American security apparatus, both the NSA and the CIA, in the aftermath of the Watergate scandal. He was an earnest man and he tried to do the job properly, not that it did him any favours. If you have found yourself wondering why no American politicians are putting their necks on the line over the problems with the NSA revealed by Snowden, this piece may help explain why. Parts of the rhetoric read as if they were written about 2013, not the 1970s.

Meanwhile, an even more radical subcommittee on privacy in the House, headed by Bella Abzug, targeted the NSA’s domestic spying program, subpoenaing government officials and the heads of the major telecoms and cable telex firms—AT&T, ITT, Western Union and RCA. The more the House dug into the NSA’s foundations, the more they discovered about the murky extralegal arrangements and deals made between private telecom firms and the National Security State apparatus. In the late 1940s, as the NSA was being formed out of the Army Security Agency and other military signal intelligence branches, Truman’s top defense officials cajoled the major US cable telex firms to agree to let the nascent NSA tap into all international communications. Some of the firms were more reluctant than others; all asked for written legal assurances and legislative action, but were given less than they were promised

There was a price to pay, and Pike paid it.

American public opinion proved to be fickle and shallow, and the reactionaries in the intelligence community took advantage of this fickleness to destroy Pike and others like him. When in January 1976 the Pike Committee approved its draft report slamming the intelligence community as a dangerous boondoggle, calling for radical budget reductions, the abolition of the Defense Intelligence Agency, and other radical structural reforms, the special counsel to CIA director George H. W. Bush called Pike’s office and warned that if the report was approved, “we’ll destroy him for this.”…..And so they did

http://pando.com/2014/02/04/the-first-congressman-to-battle-the-nsa-is-dead-no-one-noticed-no-one-cares/


Could you kill a robot?

Could you? Not if you had to because it was running amok, but just because… for no real reason. How about maiming one? Is there a moral issue involved? Or is it a daft question, considering that a robot has no consciousness or sentience?

Is destroying a robot the same as killing it? Which word generates the stronger emotional reaction?

[Image: Boston Dynamics robot]

These might seem like irrelevant questions, but actually they are important considerations as we prepare ourselves for the coming tide of robotics. Much is written about what will happen to the job market, and that is important too of course, but there are also big issues of a more personal nature. What will it mean to live in a society where artificial or fake sentience is commonplace? What will happen to our definitions of humanity if artificial life can evoke the same responses from humans as real biological life does?

Wikipedia defines sentience like this…

Sentience is the ability to feel, perceive, or to experience subjectivity

The overarching question is whether or not the appearance of sentience will affect our societal codes, both legislated and social, and if so, how?

To kick off this brief (very brief) inquiry, have a quick look at this clip from the Graham Norton Show. Graham is showing Gary Oldman, Toni Collette and Nick Frost a £12,000 mini robot which, quite frankly, is very sweet, for want of a less cloying term.

That robot certainly has an uncanny ability to generate real empathy from the human audience.

At the point that it lifts its hand, like a child, to walk with Norton, there is a sincere and substantial ‘aww’ from the audience; it is palpably emotive. When it falls and says ‘help me’, few people hear it (they are laughing at something else), but if you re-watch the clip and listen for the ‘help me’, it is strangely affecting.

There are other surprises too: when the technology is being stretched, the humanity disappears. At the point that it performs Gangnam Style, I feel the empathy goes totally AWOL. It’s impressive, I guess, but in a different way; it is no longer pulling on the heartstrings, it’s talking to a different part of me.

Language is important

It would seem that the markers of human discourse, or at least a certain type of human discourse, are more subtle and less grand. In the biological realm it’s long been understood that there is a particular class of ‘linguistic tool’ in normal human speech, called discourse markers, which facilitate a more courteous exchange. These are the small utterances that appear as often self-deprecating filler when humans talk to each other. For example, people often say ‘probably’, ‘I think’, ‘maybe’, even though they harbour no such doubts about what they are saying. I did it, without realising, in the previous paragraph when I wrote, “It’s impressive, I guess, but in a different way”.

[Image: android and human]

This article argues that mastering such ‘simplicities’ of language (the language is humanly simple, the mastery of it in a technical architecture anything but) will be a major feature of effective human-robot communication.

For my money this idea seems to imply that the replication of human idioms within robotic architectures can generate sincere human responses. If the robots act, in certain ways, like humans, then it seems that our autonomous thinking systems (which are most of our thinking apparatus) will treat them like humans.

They made videos of an “amateur baker” making cupcakes with a helper giving advice—in some videos the helper was a person, and in some videos the helper was a robot. Sometimes the helpers used hedges or discourse markers, and sometimes they didn’t.

You can probably already guess that the humans and robots that used hedges and discourse markers were described in more positive terms by the subjects who watched the videos. It’s somewhat more surprising to hear that the robots that hedged their speech came out better [than] the humans who did.

The author credits the novelty of robots as teachers for their superior performance in this experiment. That makes some sense, and is testable too, as robotic interaction becomes commonplace. But for the moment I am more intrigued by understanding the action of our supposedly autonomous brains, largely because if autonomous, sub-conscious reactions are driving these empathetic feelings then, once the novelty wanes, we can expect an even more seamless set of autonomic interactions with our robot compatriots. This line of thought suggests that the division between human-human and human-robot interactions will become more, not less, opaque.
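If you want a feel for what ‘adding hedges and discourse markers’ might look like in practice, here is an entirely hypothetical toy sketch; the phrase lists and the rewriting rule are invented for illustration, not taken from the study.

```python
# Entirely hypothetical sketch: soften a robot's blunt instruction with hedges
# and discourse markers. The phrase lists and the rewriting rule are invented.
import random

HEDGES = ["maybe", "I think", "perhaps", "if you like"]
MARKERS = ["okay,", "so,", "right,"]

def soften(instruction: str) -> str:
    """Turn a blunt imperative into a more courteous-sounding suggestion."""
    hedge = random.choice(HEDGES)
    marker = random.choice(MARKERS)
    return f"{marker} {hedge} you could {instruction[0].lower()}{instruction[1:]}"

print(soften("Add the flour now."))
# e.g. "so, maybe you could add the flour now."
```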

There are other confounding ideas to consider.

Recently a Washington Post journalist was called by a telemarketer selling health insurance, seemingly a pleasant, bright young woman. Something, however, felt a little off; he thought she was a robot. The call was recorded.

He is, in my opinion, slightly aggressive as he tests his inkling by repeatedly asking if she is a robot, to which she repeatedly answers “I am a real person”. You can see why he became suspicious; she does sound as though she could be a robot. But here’s the thing. So does he. Some of his challenges have a strangely artificial hue as well. My conjecture is that, as a result of believing he is talking to a robot, his use of discourse markers is reduced, which would also explain why his conversation comes across as unduly aggressive.

As it turns out she wasn’t a robot. Well, not quite. She was a program used by call operators with weak English or strong accents. So there is in fact a human triggering the interactions, which explains the pauses and the repeated phrasings; clearly the operators have a ‘script’ of some kind, managed by the software, which limits the conversational creativity that can be brought to bear. It also demonstrates quite clearly that our perceptions in the realm of human-robot interaction are not 100% effective.

At one level we now have 3 categories to consider (human-human, human-robot, human-hybrid); on a more challenging level we need to consider the pitfalls and troubles that these complications could cause if our perceptions fail us and we get our sub-conscious analysis wrong.

Being rude to a real person, which is what happened here, is just one such problem.

However, to take it to the extreme, if you can’t distinguish between artificial and human communication in a world where faceless communication is increasingly becoming the norm, then what is the nature of humanity in this faceless future? Is there a meaningful division at all?

[Image: robot bartender]

So, back to killing a robot… it’s really not as cut and dried as it might have seemed. Robots have the ability to play with your emotions, and you might be utterly unprepared for that.

Kate Darling is a researcher who is looking at robot ethics (hooray). Alongside safety, liability and privacy, Kate also spends time looking at the social side of human-robot interaction.

She recently ran a popular session at Harvard. She tackles the killing question head on.

“….she encouraged participants to bond with these robots, then to destroy one. Participants were horrified, and it took threats to destroy all robots to get the group to destroy one of the six. She observes that we respond to social cues from lifelike machines, even if we know they are not real”

There are some interesting times up ahead for us all. I’m glad that people like Kate are sufficiently forward thinking to start the dialogue now. It’s almost guaranteed that we won’t be ready, but at least we won’t be totally unaware.