Academic discussions about trust abound*. Particularly in the political and philosophical spheres, the question of how people trust in institutions, and when and where they don’t, is an important topic of discussion, especially in the current political climate. Trust is also a concept which is very important within security, however, and not always well-defined or understood. It’s central to my understanding of what security means, and how I discuss it, so I’m going to spend this post trying to explain what I mean by “trust”.
Here’s my definition of trust, and three corollaries.
- “Trust is the assurance that one entity holds that another will perform particular actions according to a specific expectation.”
- My first corollary**: “Trust is always contextual.”
- My second corollary: “One of the contexts for trust is always time.”
- My third corollary: “Trust relationships are not symmetrical.”
Why do we need this set of definitions? Surely we all know what trust is?
The problem is that whilst humans are very good at establishing trust with other humans (and sometimes betraying it), we tend to do so in a very intuitive – and therefore imprecise – way. “I trust my brother” is all very well as a statement, and may well be true, but such a statement is always made contextually, and that context is usually implicit. Let me provide an example.
I trust my brother and my sister with my life. This is literally true for me, and you’ll notice that I’ve already contextualised the statement: “with my life”. Let’s be a little more precise. My brother is a doctor, and my sister a trained scuba diving professional. I would trust my brother to provide me with emergency medical aid, and I would trust my sister to service my diving gear****. But I wouldn’t trust my brother to service my diving gear, nor my sister to provide me with emergency medical aid. In fact, I need to be even more explicit, because there are times when I would trust my sister in the context of emergency medical aid: I’m sure she’d be more than capable of performing CPR, for example. On the other hand, my brother is a paediatrician, not a surgeon, so I’d not be very confident about allowing him to perform an appendectomy on me.
Let’s look at what we’ve addressed. First, we dealt with my definition:
- the entities are me and my siblings;
- the actions ranged from performing an emergency appendectomy to servicing my scuba gear;
- the expectation was actually fairly complex, even in this simple example: it turns out that trusting someone “with my life” can mean a variety of things, from performing specific actions to remedy an emergency medical condition to performing actions which, if neglected or incorrectly carried out, could cause death in the future.
We also addressed the first corollary:
- the contexts included my having a cardiac arrest, requiring an appendectomy, and planning to go scuba diving.
Let’s add time – the second corollary:
- my sister has not recently renewed her diving instructor training, so I might feel that I have less trust in her to service my diving gear than I might have done five years ago.
The third corollary is so obvious in human trust relationships that we often ignore it, but it’s very clear in our examples:
- I’m neither a doctor nor a trained scuba diving instructor, so my brother and my sister trust me neither to provide emergency medical care nor to service their scuba gear.******
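The definition and its three corollaries can be made concrete in code. The sketch below is my own illustration, not anything from a specification: it models trust as a directional relationship between two entities, qualified by an action, an expectation and a context (first corollary), with an explicit validity window so that trust erodes over time (second corollary). Because each relationship names a distinct truster and trustee, the asymmetry of the third corollary falls out naturally. All names and values are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class TrustRelationship:
    """One *directional* trust assurance: the truster trusts the trustee
    to perform a particular action, to a specific expectation, in a
    given context, for a limited time."""
    truster: str
    trustee: str
    action: str
    expectation: str
    context: str           # first corollary: trust is always contextual
    established: datetime
    valid_for: timedelta   # second corollary: time is always a context

    def is_current(self, now: datetime) -> bool:
        # Trust that has not been refreshed eventually lapses.
        return now < self.established + self.valid_for

# Hypothetical example from the post: trusting my sister to perform CPR.
cpr = TrustRelationship(
    truster="me",
    trustee="sister",
    action="perform CPR",
    expectation="competent emergency response",
    context="cardiac arrest",
    established=datetime(2020, 1, 1),
    valid_for=timedelta(days=5 * 365),
)
```

Note that the reverse relationship (my sister trusting me) would be a separate `TrustRelationship` instance with its own action, expectation and context: nothing about one direction implies the other, which is the third corollary in data-structure form.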
What does this mean to us in the world of IT security? It means that we need to be a lot more precise about trust, because humans come to this arena with a great many assumptions. When we talk about a “trusted platform”, what does that mean? It must surely mean that the platform is trusted by an entity (the workload?) to perform particular actions (provide processing time and memory?) whilst meeting particular expectations (not inspecting program memory? maintaining the integrity of data?). The context of what we mean for a “trusted platform” is likely to be very different between a mobile phone, a military installation and an IoT gateway. And that trust may erode over time (are patches applied? is there a higher likelihood that an attacker may have compromised the platform a day, a month or a year after the workload was provisioned to it?).
We should also never simply say, following the third corollary, that “these entities trust each other”. A web server and a browser may have established trust relationships, for example, but these are not symmetrical. The browser has probably established, with sufficient assurance for the person operating it to give up credit card details, that the web server represents the provider of particular products and services. The web server has probably established that the browser currently has permission to access the account of the user operating it.
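The browser/server example above can be sketched as data: a mapping keyed by the *ordered* pair (truster, trustee), so the two directions are distinct entries. This is my own hypothetical illustration; the actions and expectations are paraphrased from the example, not drawn from any real protocol.

```python
# Directed trust: keyed by (truster, trustee). Because the key is an
# ordered pair, browser-trusts-server and server-trusts-browser are
# separate entries with different actions and expectations -- the
# relationship is not symmetrical.
trust = {
    ("browser", "web_server"): {
        "action": "supply particular products and services",
        "expectation": "the server represents the advertised provider",
    },
    ("web_server", "browser"): {
        "action": "access the user's account",
        "expectation": "the browser currently holds valid permission",
    },
}

def trusts(truster: str, trustee: str) -> bool:
    """True only if trust has been established in this direction."""
    return (truster, trustee) in trust
```

Swapping the order of the pair looks up a completely different relationship (or none at all), which is exactly why “these entities trust each other” is too loose a statement.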
Of course, we don’t need to be so explicit every time we make such a statement. We can explain these relationships in the definitions sections of documents, but we must be careful to clarify what the entities, the expectations, the actions, the contexts and possible changes in context are. Without this, we risk making dangerous assumptions about how these entities operate and what breakdowns in trust mean and could entail.
*Which makes me think of rabbits.
**I’m hoping that we can all agree on these – otherwise we may need to agree on a corollary bypass.***
****I’m a scuba diver, too. At least in theory.*****
*****Bringing up children is expensive and time-consuming, it turns out.
******I am, however, a trained CFR, so I hope they’d trust me to perform CPR on them.