Trust and vulnerability in 2013

Posted: November 15, 2013
It shouldn’t have taken me 40 years, but it did
I had an epiphany a few years ago regarding the nature of trust. What I suddenly understood was this.
The more trust that I placed in someone or something, the more vulnerable I became. If I have no trust, my trust cannot be abused.
It sounds bleak but it's true. Trust is, after all, essentially a lack of verification. If you say it's true, I won't check; I trust you. If you are wrong, I have become vulnerable to a loss of some kind. That doesn't happen in every circumstance, of course, as a lot of the time trust is well placed. But in a significant number of cases, it is not.
Trust can only be infallible if humans never lie and are never mistaken. As this is simply not the case, we are left with the thought that, in the aggregate, trust must create vulnerability, and that trust and vulnerability are axiomatic and inverse truths. It simply is not possible to have one without the other.
Recently I have also been thinking about reciprocity in connection with our modern communication infrastructure, and it occurred to me that there was a relationship between the 2 things: the reciprocal nature of the internet and the reciprocal nature of trust.
Reciprocity can be complicated sometimes, not least because it seems to have 2 slightly antagonistic definitions.
Firstly there is the concept of reciprocating a benevolence (or a cruelty). If you invite me to dinner, I will feel that I should reciprocate and return the invite and ask you to dinner at mine at a later date. I am going to call this mutual reciprocity.
And then there is the concept, presumably derived from the mathematical definition, of something reciprocated being the opposite or an inversion. This idea of inversion is also reflected in the nautical use of the term: a reciprocal course is 180 degrees from the current one. I am going to call this, somewhat unimaginatively, inverted reciprocation.
It turns out that the internet is rife with examples of both kinds of reciprocity.
Zero day exploits and Facebook
Take the habit of revealing security exploits at public technology and hacking conferences. Unless you are familiar with this practice it will seem an aggressively dangerous and irresponsible course of action, designed primarily to place exploitable information into the public domain, available to bad actors. The truth, rather perversely, is in fact the complete opposite. For reasons of cost control, bureaucratic paralysis and reputation management, a discreet, private notification of such security risks to the vendor in question doesn't result in fixes being released. It instead results in a continuation of the discretion. If no-one knows, goes the argument, then there is no problem. Once the exploit is public knowledge a fix follows, but in almost every case where the exploit is kept secret the vulnerability is maintained. The only thing that seems to force vendors into accepting the cost and trouble of fixing these vulnerabilities is the threat of lost business, effected by public shaming.
Forcing security updates by publicising exploits is an inverted reciprocation. However, it has led rather directly to one of mutuality. What's good for the goose is good for the gander, and we now exist in a world where our governments have become the key players in a black market trading in zero day exploits (the name given to vulnerabilities that have not been made public). So where once we looked to our trusted actors (governments and vendors) to ensure that our computers were safe from exploitation, we should now realise that they are as interested in subverting our computing security as the black hat hackers and thieves. It's not necessarily the case that governments want to use these exploits against us, the citizenry [this was written before the Snowden NSA revelations and we now know that governments have been using considerably more explicit and constructed tools to subvert our privacy]. Indeed the most publicised example of governments using zero day exploits is the pair that were apparently used to place the Stuxnet virus within the Iranian nuclear industry infrastructure. The problem, rather, is that in the absence of a fix being released, other forces, malign forces, may also get hold of these exploits and use them against common computer users.
In some cases the individual selling the exploit is paid a monthly fee for as long as it remains undiscovered, and pertinently, not fixed (whereupon it would become useless). If we accept that our governments are acting in good faith, then this last issue could be seen as a device to keep these exploits from bad actors, if only because once used by said bad actors the vulnerabilities will become known. But even if we grant such largesse, and I think that would be naive, the fact remains that we have started to incentivise the finding of dangerous exploits for direct monetary gain, rather than for the purpose of fixing them, which leads us back to an inverted reciprocation whereby the desire to make something safe only makes it more dangerous.
There are other areas of internet life where reciprocal action is less urgently frightening but no less relevant. Take for example the transformation of facebook and the flight, from it, of teenagers. As soon as facebook hit a certain size and level of ubiquity, 2 things happened. Parents discovered that this social thing wasn't just for teenagers and held value for them also, and they realised that they could now (to some extent at least) monitor their kids in the social space as well as in the real world space. Here we have a mutuality, parents joining facebook, that again leads to an inversion as the teenagers start to leave facebook. Or rather, they don't leave, but they do move their private interactions to other platforms. In this case it was originally a flight to instagram (and now you know why facebook bought instagram). We are essentially now looking at a big game of social whack-a-mole as fashions come and go.
Reciprocity impacts our intuitive decision making
The larger problem, the reason this is a troubling idea, is that with all this counter intuitive activity going on, the issue of who or what to trust becomes ever more complex and difficult. As computing developments lead to more and more invisible computing implementations, this becomes a crucial and problematic vector for society to manage. These complications are creating significant dissonance, troublingly, at the point that we need to be able to make important intuitive decisions. These decisions are increasingly core facets of life and not niche hobbyist interests. Without clarity around the actual effective vectors and actions, our ability to make quick, sensible decisions is degraded.
The confusion caused by reciprocal factors isn’t new, of course, as it is an essentially human affair, not computational. However the hyper reality of the cyber world tends to bring this confusion forward more commonly and in ever more public forms, particularly where matters of trust are concerned.
The two examples I have discussed are explicitly connected to the concept of trust. Either the trust between children and parents (facebook), or the trust between the government, the population and a specific type of human, hackers (zero day exploits).
Moreover, confusion through reciprocal complexity is not the only complication. Another important element of the digital trust equation is impacted by changing communication dynamics, as without communication of some kind all trust decisions become simple statements of faith. There are two examples that I want to highlight where communication modes are changing the trust landscape – the incentivisation of low effort communication and the increased geographical reach of all communications.
Does low effort = low value?
This article by Evan Selinger in Wired critiques the ads that facebook produced to push their android interface, called Home. In short, they show 3 situations where facebook users are able to ignore the immediacy of their surroundings and instead concentrate on being 'social', facilitated by the seamless access of Home.
We have the 20-something girl at a family gathering, a young man on a plane ignoring the stewardess, and lastly a staffer at facebook ignoring his boss, Zuckerberg himself, as he addresses the team.
Selinger goes into some depth and pushes his observations to the reductio ad absurdum extreme which is that at some stage in the future the only worthwhile interactions will be those facilitated through Home (or whatever the dominant social tool is), fueled by the facebook algorithm that connects us to other people who appreciate our social personas.
I don’t buy the full doomsday scenario that Selinger describes, and I suspect he doesn’t either. But he does effectively identify another key reciprocal trust vector that is being hyper charged by the internet.
Stated simply, it's obvious. The greater the effort that is required to create and send a communication, the more value that communication has, certainly to the sender, and if it is timely and effectively targeted, to the recipient also. The kind of content that facebook and other social tools prioritise is short form and low effort. Sometimes the prioritisation is through the voting algorithms, sometimes it is effected through the constraints of the tool, think of Twitter's 140 characters, and sometimes it is both.
Take the situation that recently overtook r/atheism on reddit. The coup that was staged to take over moderator control of the sub was fueled in part by a belief that the quality of the content being ranked by the community's voting habits was degraded and 'off mission'. Aside from the laughable thought that an atheist community should be 'on mission' in the first place, there was actually something behind both sides of the argument.
The supplanted community position was that opinions on quality were the sole preserve of voting behaviour, and that whatever content was voted up was the content that the community wished to rank highly.
Those behind the coup argued, with some validity (not that such validity justified their behaviour in my opinion), that the ranking algorithm on reddit favoured content that could be consumed quickly, because the voting behaviours that occurred in the first few minutes after something was posted were disproportionately valuable and key to a strong overall ranking. Therefore a quick meme that took 30 seconds to consume and enjoy had an unearned advantage over a long, deeply researched essay that took 10 minutes to read.
So, according to the reddit voting behaviour ranking algorithm, quick and dirty content was rewarded but long and thoughtful content was punished and r/atheism was a largely unmoderated community exposing the content to the full vagaries of that algorithm. The coup changed all that, instituting moderation rules that artificially amended behaviour to compensate for the bias in the ‘natural’ ranking. The r/atheism situation is interesting for many reasons and I could go on, but suffice to say for now that it is a great live example that demonstrates how the ease of communication (meme vs a 10,000 word essay) can be a key vector in content ranking.
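The early-vote bias the coup leaders described falls straight out of a time-decayed ranking formula. The sketch below is modelled on the 'hot' ranking that reddit open-sourced (the epoch constant and the 45000-second divisor are taken from that code, but treat the details as illustrative rather than authoritative):

```python
import math

# Reference timestamp from reddit's open-sourced ranking code.
EPOCH = 1134028003

def hot(ups, downs, posted_at):
    """Time-decayed 'hot' score: log-scaled net votes plus a linear recency bonus."""
    score = ups - downs
    order = math.log10(max(abs(score), 1))
    sign = 1 if score > 0 else (-1 if score < 0 else 0)
    seconds = posted_at - EPOCH
    # Every 45000 seconds (12.5 hours) of recency is worth one full point,
    # the same as a 10x multiplier on net votes.
    return round(sign * order + seconds / 45000, 7)

now = 1384500000  # roughly November 2013
meme = hot(100, 0, now)             # quick meme, posted just now
essay = hot(1000, 0, now - 90000)   # essay with 10x the votes, 25 hours older
# The newer meme outranks the older, far more upvoted essay.
```

Because votes are log-scaled, ten times the votes buys only one extra point, while every 12.5 hours of age costs a full point – so the votes cast in the first minutes after posting dominate the final ranking, which is exactly the advantage the 30-second meme enjoys over the 10-minute essay.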
This essay isn't about devaluing the concept of social platforms, which of course exist to remove friction from the process of communication. They are what they are and they have an impact on a whole range of things. Nonetheless it is worth asking yourself when you last wrote and sent a letter (pen and ink, envelope and a stamp), and when you last received a written letter that wasn't a birthday card. Whether this trend is for the best or not, it is undeniable that the culture of letters is in decline.
This observation identifies 2 different vectors.
- We communicate more frequently, but in ever weaker wrappers, to weak and strong connections both.
- We communicate in strong wrappers less and less, to anyone.
The reason these communication schemas matter is not for any qualitative perceived decline or improvement (although there is certainly a spectrum of such quality, and part of what drives position on that spectrum are facets of the schema itself), but because communication is fundamental to the building of human support networks, and support networks, consciously constructed or otherwise, are one of the ways that we compensate for vulnerability.
Support networks and co-dependency
The structure of support networks used to be driven chiefly by the geographical distribution of network participants. This is a dynamic that has been shifted many times, by many different technologies that have increased our geographical communication reach and subsequently made real changes to the way we live our lives on a day to day basis.
It’s not a concept unique to digital.
Imagine living on an island that is connected to the mainland by a ferry that only travels twice a day. There is a community on the island, shops, doctors, a midwife, a school, a church. The community functions and the whole community has access to the essential functions of a modern life regardless of whether or not they all get along with each other. There is a deep co-dependency. Say you run the local shop, but have a long standing enmity with the midwife and you are pregnant. The chances are good that you allow her to buy her essentials in your shop and that if you go into an early labour at 3am she will get out of bed to birth your child. That’s what communities do. That’s what co-dependent support networks do.
Once a bridge is built to your island, and your ability to expand your geographical reach is massively enhanced (24 hour access to the mainland), these co-dependencies become very different creatures. You no longer have to shop in one shop and you no longer have to use just one midwife. One, both or neither of these 2 parties can now, feasibly, decide to withdraw their services from the other without suffering a critical loss.
In a very similar way, today's digital communities, which are very effective at linking like-minded individuals across vast geographies, also reduce the co-dependency of local geographical communities. We see this quite pertinently with the development of medical online communities and a corresponding reduction in trust in locally available medical practitioners.
I was once consulted by a medical group that was highly frustrated that their breast enhancement customers were willing to take advice, "from strangers on the internet" via forums. I spent some time looking at what was happening in those forums and it was quite surprising. There was a very clear demonstration of trust evident throughout the majority of the exchanges; these people were not interacting as if they were strangers. That was because they weren't. It turned out that on average the decision making period for the surgery was about 18 months, and that a huge amount of that time was spent actively within these forums. The information exchange was recorded and verifiable. Moreover, the community was in no way trivial; they were discussing the reality of a serious surgery. It was actually quite easy to see who was worth talking to, and who was not.
The doctors still hated it though.
Now without getting into the positives and negatives of specific situations (and for sure some of these developments are very positive, others not so much), this is just the reality of a broadly available global communication tool. When 10 people can give advice (from afar), where previously only 1 could (locally), there has by definition been a corrosion of the local dependency. That dependency was a real part of many human trust relationships, and so we should acknowledge that the trust dynamics themselves will have changed too. In some spheres we will see improvements; in others, we will not.
The medical sphere is a very good arena for exploring these ideas because we are forced to trust medical decisions even though our knowledge is generally low, while the stakes are often critically high.
Take the relationship that you might have with your GP (family doctor) today. There are, of course, a lot of different reasons why we have smaller relationships with our GPs these days, financial, political and economic factors probably being the biggest influences. However, the ability of the internet to make available a corpus of knowledge that we can interact with plays into the situation as well.
The internet provides a measure of support, a place where you can seek reassurance, but it also reduces your dependency on your GP which in turn means that a lot of people will no longer form a relationship with their GP. Which means, in turn, that the GP’s knowledge of your medical history is less than it might have been. In the vast majority of situations this will have no discernible impact at all. The problem with medicine, though, is that the cost of one mistake can be horrific. In medicine we are obliged to build systems that manage the edge cases as a priority over the mundane.
The germs of the future
The situation regarding GPs that I just described actually identifies the only place we can really go from here.
- We aren’t going to turn off the internet.
- Decisions that should be comfortable and intuitive will continue to become counter intuitive and problematic, while the rate of technological change suggests that any plateau in development, that might let intuitive reasoning catch up, is an unreasonably long way off.
- Our support networks will become more geographically dispersed.
In short, the complex new trust dynamics fostered by technology are gaining stronger and stronger footholds on the human exchanges of modern society, and to a greater or lesser degree replacing or reducing the value of local, geographically enforced co-dependencies. We are therefore forced to acknowledge that the only potential safety net will in fact be the extended support networks themselves, operating at potentially vast geographic scales. The problem is the solution.
So, considering that the complexities of our installed invisible computing layer already make this world counter intuitive and fraught for many people, we are left to ask ourselves what we can do to improve the reality of trust dynamics in this digital world.
One angle must concentrate on trying to bring clarity to situations through communications, while another should concentrate on deepening the efficacy of geographically dispersed support networks.
The work that is required is happening; it's just that we are still in the very early stages of this grand experiment. All those social tools that have scoring systems of one kind or another – likes, upvotes, downvotes, credits et al – are trust systems seeking to identify quantifiable, transparent trust. We haven't identified all the unintended consequences yet though, and we probably need to ask if we will ever be able to identify them all. I suspect such consequences may well be somewhat defined by each individual tool and differently in each human domain.
Similarly, the quality of informal support networks is also evolving, although perhaps in a less structured fashion. Finding ways to trust the information that is dispersed through these forums, via the mechanics I described above, is one such development, but we are also seeing increasing connectivity being used to match actual local problems with actual local solutions. This is built on an unmanageable supply of simple kindness, the desire of a stranger to reach out and help, but it may be all we have.
This is a problem that will almost certainly evolve to some kind of functional equilibrium one way or another, but on the other hand we should perhaps be prepared that such an equilibrium might feel incredibly alien to the world we live in today.