When I first saw the movie Minority Report, I thought they had made a very plausible stab at addressing the issue of how futuristic computing interfaces might be designed and operated. I was mesmerised by those wonderful scenes with Tom Cruise waving his hands around, seemingly randomly, but actually interacting with a number of ghostly holographic system controls supported by additional voice commands. Nice.
Then I heard that filming those scenes was difficult because the actors could only do about 45 minutes before they became tired. I don't know if that is really true or not, but it made some sense to me, largely because I know how quickly I get tired when forced to play on the Wii at Christmas. Whether or not the story about Minority Report is true, I can personally testify that playing on the Wii for 30 minutes leaves me a little puffed, whereas I could play the same game on a PC all day if I needed to. The simple difference between the two interfaces is that one tries to minimise the energy I expend to interact, and the other deliberately increases it. Of course, both outcomes are quite intentional.
I don't think the Wii has to worry about its place in the console world; the whole point, after all, is to introduce athletic exertion into computer gaming. That's a slam dunk in my opinion. On the other side, though, I am pretty certain that full-body gestural interfaces will not become the reference format for computer operation. Getting physically tired by our increasing dependence on computing is just not going to float.
Thinking about the nature of interfaces raises some interesting thoughts about how we could more usefully categorise some of the nascent trends we see forming today. One of those trends, mobile, is, I believe, done a great disservice by the way we have named it.
You see, I don't think we come close to succinctly summing up what's important about how the mobile trend is changing actual behaviour. Mobility, in lesser forms anyway, just isn't new. At the blunt end we have long had books and newspapers, both of which are generally designed with mobility in mind. In turn, this means we have long had mobile advertising, whether in newspapers, magazines or perhaps even on the radio. Mobile phones have been here for a while now, but even before that we invested, as nations and cities, in networks of public telephony.
Of course, all of that is a pale image of what we can now do with our mobile devices. And that's why the label "mobile" isn't good enough. It doesn't even start to identify the key axis of what's going on.
For me there are two key observations.
- It's actually about ubiquitous computing: both personal computing through PCs and smartphones, and infrastructural computing, freely enabled access, the growing internet of things, and all the intersections therein.
- There is a centralisation going on here, enabled by all this mobility. Increasingly, our computing devices are becoming data viewers: tools that can be used to consume, interact with, create, distribute and edit the same pieces of content. You might own, or have access to, maybe four computers, but increasingly you will work on the same data accessed from a central point of some kind, on whichever computer you are using, for whatever reason, at whatever point in time.
Neither of those observations is encapsulated in the phrase du jour, "mobile".
So, contrary bugger that I am, what word would I use?
Screens. That's what. That's the word I'd use. Not terribly snappy, I know, not a classic buzzword, but it does force you to think about computers and not phones, and it delivers you directly to the proliferation of computers as viewing devices. It focuses your thinking on the interface. This is important. What most people mostly experience, during the general computing experience, is the interface. We have generalised "computing" to a point where users don't need to understand computing; they just need to understand how to operate their screens. The screen is the human bit. Particularly now that we have to touch it too.
To my mind, we have three broad areas for interface development: hands and keyboards (evolutionary), gestural, and vocal. I imagine we will see bits of all three, and may even see an amalgamation of all three into a broader palette of interface options and interactions. But as far as I can tell, within my lifetime there will always be a common role for screens. Until we interface computing directly with our neurology and our vision systems, we will be computing with, and reading from, glass and glass derivatives for many years to come yet.
Have a look at this stunning video, "A Day Made of Glass 2", from Corning Incorporated, a specialist glass manufacturer. It's a beautifully made vision of the future (the original "A Day Made of Glass" is also well worth a look) based around a surprisingly wide array of functional glass.
It's as interesting a piece of imaginative futurology as I've seen in some time, one that engages seriously with the actual issues at hand. This is how the glass industry sees the future. Marvellous.