game design by risk analysis

I don’t know if this is a real thing or just a stupid idea, but I was watching some folks talk about giant robot stories (in the context of giant robot games) while also working on a customer risk assessment, and I suddenly wondered if we could use one in the context of the other.

Currently I work making giant robots safe and secure, so I already know that robots work in the context of risk analysis. But what about risk analysis as a tool for game design? There are lots of methodologies for assessing the risk to a system (and determining how to mitigate it), but one I really like, because of its collaborative and practical nature, is the French government’s EBIOS 2010 method. We won’t dig into it in detail nor discuss my professional variations on it, but rather look at it from a very high altitude and see if it makes a game. More correctly, see if it identifies the parts of a simulation that are fun to model in a game. Maybe we get some new giant robot direction!

So the first step is to identify the assets of the system. Now, this is often naïvely interpreted as the physical objects of value in the system, but that’s not how it works. The assets of the system are the elements that are critical to its correct and safe operation. They might be things, but they might also be functions.

assets

So what kind of assets do giant robots have?

  • integrity of their armour — if the armour is busted, that’s bad
  • safety of the pilot
  • ability to destroy an opponent
  • ability to navigate difficult terrain
  • security from extreme environmental threats (radiation, engineered disease, poison)
  • ability to function in a wide range of temperatures
  • ability to function in extremes of shock and vibration
  • ability to detect threats (enemies in this context)

I’m sure there are more, but this is a pretty good list to start with. So the next step is to determine just how bad your day gets if these assets are compromised. Since this is subjective we don’t want really fine granularity — let’s just say it’s 0 if nothing bad happens, 1 if it’s a pain in the ass, 2 if the system becomes useless, and 3 if the pilot dies.

So integrity of the armour. Let’s call that a 1 because we have pilot safety and basic functions somewhere else. We don’t really care much if the armour is damaged if nothing else happens.

Pilot safety, that’s a 3 obviously. Note that in a real assessment this is where we would argue about the dollar value of a life — is it really more important to keep the pilot alive than anything else? And we might change the severity definitions based on this discussion. And so on down the list. Let’s summarize:

  • 1 — integrity of their armour
  • 3 — safety of the pilot
  • 2 — ability to destroy an opponent
  • 1 — ability to navigate difficult terrain
  • 2 — security from extreme environmental threats (radiation, engineered disease, poison)
  • 2 — ability to function in a wide range of temperatures
  • 2 — ability to function in extremes of shock and vibration
  • 2 — ability to detect threats (enemies in this context)

Next we need to talk about what threatens these assets. What are the threats?

threats

So normally we’d brainstorm these and get lots of ideas and then winnow them down to essential and unique threats. But let’s short-circuit that — since you can’t respond very quickly to this, I’ll just list a few.

  • enemy weapons damage our weapons
  • enemy weapons damage our mobility subsystems
  • enemy weapons damage our pilot cockpit
  • environmental temperature is very high or very low
  • weapons use creates too much heat
  • weapons malfunction
  • mobility system generates too much heat
  • subsystem breaks down from lack of maintenance
  • enemy weapons damage sensors

I think we can already see a game system coming together, though I’m not blind to the fact that I am thinking about game systems as I generate this list. It’s a bit of a cheat, so I’m not sure it proves much. Maybe if I started with a topic I don’t know well?

Anyway, the next step is to decide how likely each threat is. Let’s say 0 is amazingly unlikely, 1 is unlikely, 2 is common, and 3 will happen pretty much every time you get into a fight. Let’s quickly go through that:

  • 2 — enemy weapons damage our weapons
  • 2 — enemy weapons damage our mobility subsystems
  • 1 — enemy weapons damage our pilot cockpit (because it’s small compared to everything else!)
  • 1 — environmental temperature is very high or very low
  • 3 — weapons use creates too much heat
  • 1 — weapons malfunction
  • 2 — mobility system generates too much heat
  • 2 — subsystem breaks down from lack of maintenance
  • 2 — enemy weapons damage sensors

risk matrix

Now we just multiply these to find out how much we care about each scenario. If a threat doesn’t impact any asset we don’t care. So for example, let’s look at “enemy weapons damage our weapons”. That seems to affect only our ability to destroy an opponent, which has an asset value of 2. So the risk for this threat is 2 x 2 = 4. We’d normally make a risk appetite grid to say just how bad each combination is. Something like:

              Severity ->
Likelihood    0            1            2                3
0             who cares    who cares    who cares        maybe bad
1             who cares    maybe bad    worrying         bad
2             who cares    worrying     bad              very upsetting
3             maybe bad    bad          very upsetting   unacceptable

So a 2 x 2 is BAD.

Let’s look at something with multiple asset impact. Enemy weapons damage our pilot cockpit. Now clearly this affects our pilot safety, our mobility, frankly almost all of our assets. So we pick the most severe one: pilot safety. So that’s a 1 x 3 — BAD.
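If you’d rather see the whole lookup as code, here’s a minimal sketch in Python. The asset names are my own shorthand for the list earlier, and the threat-to-asset mapping is just my reading of each scenario, nothing official:

```python
# A minimal sketch, not a real tool. Asset names are my shorthand for
# the earlier list; the threat-to-asset mapping is my reading of each
# scenario.
APPETITE = [  # rows: likelihood 0-3, columns: severity 0-3
    ["who cares", "who cares", "who cares",      "maybe bad"],
    ["who cares", "maybe bad", "worrying",       "bad"],
    ["who cares", "worrying",  "bad",            "very upsetting"],
    ["maybe bad", "bad",       "very upsetting", "unacceptable"],
]

ASSET_SEVERITY = {
    "armour integrity": 1,
    "pilot safety": 3,
    "destroy opponent": 2,
    "navigate terrain": 1,
    "environmental security": 2,
    "temperature range": 2,
    "shock and vibration": 2,
    "detect threats": 2,
}

def risk(likelihood: int, affected: list[str]) -> str:
    """Cross-index likelihood with the worst severity among affected assets."""
    severity = max(ASSET_SEVERITY[a] for a in affected)
    return APPETITE[likelihood][severity]

# "Enemy weapons damage our weapons": likelihood 2, one asset affected.
print(risk(2, ["destroy opponent"]))                  # bad
# "Enemy weapons damage our pilot cockpit": likelihood 1, several assets.
print(risk(1, ["pilot safety", "navigate terrain"]))  # bad
```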

As we go through this we start thinking about mitigations. For each scenario that’s, let’s say, worrying or worse, are there mitigations we can put in place that reduce either the severity or the likelihood of the event? So, for example, we could add armour to the cockpit and maybe reduce severity by one step. That’d be nice. But we also need to consider the ramifications (costs) of the mitigations.

Because I want to talk about it in the next step, let’s also look at “weapons use creates too much heat” (likelihood 3). We now have to invent the impact of heat on the robot, and suddenly we’re also designing a game — we’re imagining features of this robot and its world context. So let’s say we think that a hot robot is an unhappy robot. That most subsystems degrade. Certainly the weapon, but also mobility and maybe pilot safety ultimately. So that happens with a likelihood of 3, and pilot safety is the biggest deal of all the impacts. 3 x 3 is unacceptable.

mitigations

So a mitigation is a recommended change to the system that reduces the risk level of a given threat scenario. And this is where we start getting a game, I think, because when assessing a mitigation we have to consider its cost — and that’s where we start to get at least the beginnings of robot construction rules.

We have an unacceptable scenario up there — weapons overheating can kill the pilot. That would be bad. It can also do lots of other things, so even if we solve the pilot problem we still could wind up with a 3 x 2 that’s very upsetting. So we’d really like to bring down the likelihood of a weapon overheating. We could:

  • prefer weapons that do not generate much heat (like rockets, say)
  • add heat dissipation equipment to weapons (sinks, heat pipes)
  • add heat dissipation equipment to the whole system
  • … and so on

Now from a game design perspective what’s interesting here is not how we make a giant war robot safer, but the detail that we are adding to the system. Now we know we want to track heat, maybe by component. We know that some weapons generate more or less heat. We have a new subsystem (heat sinks) that could also be damaged and create cascading trouble.
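Mechanically, a mitigation in this scheme just steps one of the two scores down a notch and we re-read the grid. A minimal sketch, with effect sizes I invented for illustration:

```python
# Same appetite grid as the previous sketch, repeated so this runs on
# its own. A mitigation steps likelihood and/or severity down a notch
# (floored at zero); the effect sizes below are my invention.
APPETITE = [
    ["who cares", "who cares", "who cares",      "maybe bad"],
    ["who cares", "maybe bad", "worrying",       "bad"],
    ["who cares", "worrying",  "bad",            "very upsetting"],
    ["maybe bad", "bad",       "very upsetting", "unacceptable"],
]

def mitigate(likelihood: int, severity: int, d_like: int = 0, d_sev: int = 0) -> str:
    """Apply a mitigation's step reductions and return the new risk level."""
    return APPETITE[max(0, likelihood - d_like)][max(0, severity - d_sev)]

# Weapons overheating: likelihood 3, worst affected asset is pilot safety (3).
print(mitigate(3, 3))                     # unacceptable: no mitigation
print(mitigate(3, 3, d_like=1))           # very upsetting: add heat sinks
print(mitigate(3, 3, d_like=1, d_sev=1))  # bad: sinks plus cockpit shielding
```

Note that heat sinks alone only buy one step; you stack mitigations until the grid (and the cost of the mitigations) tells you to stop spending.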

discussion

What this seems to do is give us a big pool of credible detail — elements of a fictional universe that have some justification for existing. Ultimately a good (or more often bad) risk analysis is what drives pretty much everything in the real world: nothing is perfect, and so we need to decide how much imperfection we can tolerate. A lot, if not all, complexity comes out of this thought process, and trade-offs like that are also a Good Trick in game design: they create diversity in approaches to playing the game well.

picks and locks

I am always looking for a new skill to learn. It’s usually something technical, something work related, but the levels of anxiety in today’s world demand something more meditative. I’ve watched a lot of YouTube, finding strange solace in machine restoration videos. But I’m not building a machine shop any time soon.

Then I stumbled on LockPickingLawyer. He picks locks. Easy locks, hard locks, ancient locks, techno locks. And he blows through them with amazing ease. And then, most of the time, he guts them and shows off their innards. Now, mechanical bits like this have always interested me — how does the interplay between tiny components make a lock lock? Or more interestingly, unlock? So I decided to try my hand at picking locks.

There’s a great Canadian company called Sparrows that has a bunch of material for locksmiths and amateur pickers alike. And it’s not very pricey, really. That’s a pretty good criterion for a new pastime that may or may not last. So I got some stuff.

I got a couple of cutaway practice locks. Part of what’s difficult (and fun) about picking a lock is that you can’t see what’s going on. You can only hear and feel it. When you’re starting out that’s a hell of a hurdle to get over but a cutaway lock lets you see the pin positions and correlate that with what you’re feeling. I got two — one with normal pins and one with serrated pins. Serrated pins are a kind of “security pin” — the serration will generate what’s called a “false set”. That is, it will feel like the pin is clicked into position to unlock the lock when actually it’s just been trapped by one of the serrations. It feels subtly different than a real set but you need to experience it. A couple hundred times.

So those are fun. Heavy, small, brassy. Industrial feeling. It’s ticking my boxes. Then I got a pick set, just an assortment of basic picks and levers. Now I have enough to try picking.

Well I opened the practice locks pretty fast. Being able to see in the window is a pretty big advantage, but the early victory is a great morale booster. So I grabbed a real padlock I had handy, a little 4-pin Master brand padlock. No window to look in, you just gotta feel and listen. But only 4 pins, so it’s not a long reach or a weird angle. Should be easy, right?

Turns out it kind of is. Ten minutes for the first pick, and I literally shouted out loud for joy. Giant rush from that. Was it a fluke? Five minutes on a second pick. Under two minutes now. The lock went from a giant-looking obstacle to far too easy in an evening. I should note that these are the locks I used on my airgun cases until just now.

Yeah, an evening. You don’t need to see what you’re doing, so this is something you can fidget with while watching TV, listening to an audio book, whatever. It’s almost meditative as a puzzle, but the buzz you get at the solution is huge. Part of it is the puzzle and part of it is the physical feedback: the pop, the sudden release of the lock tension, the shift as the shackle opens. These are all rewards.

Take those where you can get them, folks.

mystical security

This is something that stuck in my head while at work today.

WARNING: NOT NECESSARILY ABOUT GAMES

The general case

Talents that are new to humanity go through four phases. Well, on different axes they go through all kinds of phases, but there’s one progression I’m interested in today.

Mysticism. At first there are few people with the talent and it is largely unexamined. Even the practitioners don’t really know how they do what they do. They have talent and inspiration and they seem to be effective. There are individual heroes and we tolerate a lot of bullshit because there’s not much out there but heroes at this stage. The word “genius” gets thrown around a lot.

Organized Mysticism. Once our mystics recognize that they have something special they organize. They find other mystics and grant them access to the organization. They deny access to those who don’t have it. This may or may not be literally organized, but there’s at least a social aggregation.

Investigation. At some point people realize that there can’t be anything magical or purely intuitive about this. There must be a way that people with the talent do what they do. Something we can quantify and proceduralize. This requires an honest and rigorous analysis of the talent and the talented.

Engineering. Once the talent is quantified we can teach it to others. No longer do we rely on the intuitive talent of individuals nor (in some cases worse) the accreditation of an individual by a mystic cabal. It can be taught and it can be tested and it can be reproduced. Anyone who wants this talent can have it.

One problem that arises is that during the Organized Mysticism phase there will be a lot of resistance to investigation. There is significant pressure to remain mystical!

First, it’s a lot less work, because people can only check your results and not your process. And your results don’t have to be all that good to be good enough — just a little better than a random guess. In reality you don’t even need to be that good, if your successes are spectacular enough or the failures of those who don’t use your mystic organization are publicized properly.

Second, it’s lucrative. You control access to the talent, so you can price it however you like. And then you also control membership in the Mystic Cabal, and if your outcomes aren’t all that controlled, maybe you just want to sell some memberships and make a packet that way. This may or may not happen, but the pressure is there and the controls are absent.

And investigation is expensive and has no immediate payoff. It’s an academic exercise, one done for the love of the knowledge. It’s a future-value endeavour and one that may or may not pay off. I mean, we might discover that the talent doesn’t actually exist, and then you are stuck at the Organized Mysticism stage and you are discredited. The value in self-examination is low.

And honestly if you have an amazing intuitive talent do you really want to be surrounded next year by people — just anyone really — doing what you do? That’s bound to bring down salaries.

So getting out of the Organized Mysticism phase is hard. It’s an ethical move. It should be the next step for any mystics who honestly believe that their talent is both valuable (to humanity — being valuable to yourself is actually a negative motivator here) and real. Resistance to investigation is suspicious.

The specific case

In the standard way of doing security risk assessments there is this idea of a risk calculation matrix, in which you cross-index the impact of an event with the likelihood of an event to determine just how bad a threat is and therefore how much you should spend to mitigate it. At its root this is a good idea — it comes from safety analysis, after all, which is a time-honoured science.

However, what we do here in co-opting this mechanism for security is not science, and it’s very much to our advantage as “experts” (especially certified experts) for it not to become a science. As long as it’s an art we don’t have to do much real work and at the same time our job seems like it’s a lot more clever than it is.

In a safety case, since we are dealing with an event tree that triggers on equipment failure (that is, on mean time to fail numbers — published numbers) rather than malicious activity, that “frequency” or even less credibly “probability” column is an actual number you get from a manufacturer. My fault tree shows that if component A and component B fail simultaneously then I cannot guarantee the system is safe. A and B both have published mean time between failure numbers (which are both measured and very conservative). The probability column here is just arithmetic.
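To make that concrete, here’s roughly what the arithmetic looks like, with invented MTBF figures standing in for the published ones and the usual assumption of independent, exponentially distributed failures:

```python
import math

# Illustrative only: pretend these MTBF figures came off the datasheets.
MTBF_A = 50_000.0   # hours, component A (made up)
MTBF_B = 120_000.0  # hours, component B (made up)
MISSION = 1_000.0   # hours of operation we care about

def p_fail(mtbf: float, hours: float) -> float:
    """Probability of at least one failure within `hours`."""
    return 1.0 - math.exp(-hours / mtbf)

# The fault tree's AND gate: the unsafe state needs both to fail.
p_both = p_fail(MTBF_A, MISSION) * p_fail(MTBF_B, MISSION)
print(f"P(A and B both fail within {MISSION:.0f} h) = {p_both:.2e}")
```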

In a security case that probability column is a Wild Assed Guess. We cloak it in two things: our credentialled “expertise”, and a refusal to assert real numbers (which would be unsupportable, since there are no real numbers) in favour of vague order-of-magnitude categories. A first glance at the problem might suggest that this is just inevitable — the probability of malicious activity is not quantifiable. To me, though, this should not imply that we simply trust the instinct of a credentialled expert to suddenly make it quantifiable, because the problem isn’t that it’s hard to know and that you need a lot of training and experience to estimate it. The problem is that it’s genuinely unknowable. That means when someone tells you they can quantify it, even vaguely, at an order-of-magnitude level, they are lying to you.

Unfortunately this lie is part of the training. You even get tested on it.

This makes us a (currently powerful) cabal of mystics. And the problem with a cabal of mystics being in charge is that first, they aren’t helping because they are not doing any science and second, as soon as someone starts doing some science they will entirely evaporate, exposed as charlatans. So naturally for those invested in the mysticism there will be some resistance to improving the situation.

The essence of science, setting aside for a moment the logical process (and that’s a big ask but it’s out of scope here) is measurement.

One axis of that risk calculation matrix is measured: the impact. Now it might be measured vaguely, but you can go down the list of items that qualify an event for an impact category and agree that the event belongs there. Someone could get seriously injured. Tick. Someone could get killed? Nope. Okay, it goes in the SIGNIFICANT column. It’s lightweight as measurement goes, but it’s good enough and it’s mechanizable (and mechanizability is one of the things that separates engineers from mystics). You don’t need a vaguely defined expertise to be able to judge this. Anyone can do it if they understand the context and the concepts.

So the question I keep banging my head against is the other axis: frequency or probability. Since this is unmeasurable and also has vast error bars (presumably to somehow account for the unmeasurability, but honestly if it’s impossible to measure then the error bars should be infinite — an order of magnitude is just painting a broken fence), my opinion is that it should be discarded. Sure it’s familiar from safety analysis, but there they have an axis they can measure. This one is not measurable. It’s therefore the wrong axis.

A plausible (and at least estimable if not measurable) axis is cost to effect. How much does it cost to execute the attack? This has a number of advantages:

  • You can estimate it and you can back up your estimate with some logic. There’s a time component, a risk of incarceration, expertise, and some other factors. You can break it down and make an estimate that’s not entirely ad hoc and is better than an order of magnitude.
  • It reveals multiple mitigations when examined in detail.
  • It reveals information about the opposition. Actors with billions to spend might not be on your radar for policy reasons. Threats that can be realized for the cost of a cup of coffee cannot be ignored — you can hardly be said to be doing due diligence if attacking the system is that cheap.
  • It is easily re-estimated over time because you retain the logic by which you established the costs. When you re-do the assessment in a year’s time and a component that cost a million dollars now costs a hundred, the change in the threat is reflected automatically in the matrix. No new magic wand needs to be waved. It’s starting to feel sciencey.

A useful cost-to-attack estimate (and I have nothing against estimates, I just expect them to be defensible and quantified) would need some standardized elements. For example, I would want us to largely agree on what the cost is of a threat of imprisonment. If I wet my finger and wave it in the air, I’m happy with a hundred grand per year (a fair salary) of likely incarceration, times about 10% for the chance of getting caught. If we’re not happy with the estimate we can do some research and find out what the chances of getting caught really are and what the sentencing is like. We might find out that I’m being way too expensive here.
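Written out, with hypothetical numbers for the parts I haven’t pinned down (sentence length, hours of work, equipment), the estimate looks something like this:

```python
# The wet-finger estimate made explicit. Every number here is a
# placeholder you would argue about (or research), not a finding.
SALARY_PER_YEAR = 100_000   # value of a year of the attacker's liberty
SENTENCE_YEARS = 3          # hypothetical sentence if convicted
P_CAUGHT = 0.10             # rough chance of getting caught

incarceration_cost = SALARY_PER_YEAR * SENTENCE_YEARS * P_CAUGHT  # 30,000

# The other components the breakdown mentions, also invented here:
time_cost = 200 * 75        # 200 hours of expert time at $75/hour
equipment_cost = 5_000      # tooling, access, infrastructure

cost_to_attack = incarceration_cost + time_cost + equipment_cost
print(f"estimated cost to attack: ${cost_to_attack:,.0f}")  # $50,000
```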

This is a good sign though. When I am compelled to say “we ought to do some research” I am happily thinking that we are getting closer to a science. What credible research could you do on probability of attack? Where would you even begin? And what would its window of value be? Or its geographic dependencies? Or its dependencies on the type of business the customer does?

Because you want to break the cost to attack down into the various costs imposed on the attacker — their time, their risk, their equipment costs — you have grounds to undermine the attack with individual mitigations. What if a fast attack took many hours? What if you could substantially increase the chance of catching them? What if you could increase the chance of incarcerating them? Suddenly those legal burdens start looking like they could be doing you a favour: you make this attack less likely by increasing your ability to gather evidence and to work with law enforcement. Publish it. Make an actual case and win it. Your risk goes down. These are mitigations that are underexplored by the current model but that could do some genuine good for the entire landscape if taken seriously. Sadly they don’t imply flashy new technologies at fifty grand a crack. But I am not interested in selling you anything. I want your security to improve.

In most of our assessments the threat vector — the person attacking — is categorized fairly uselessly into “hacker” and “terrorist” and “criminal” and so on. But their motivation doesn’t actually tell you all that much. How much they are willing to spend, however, does tell you about them. It tells you plenty. If you have a policy that you are only interested in threats from below a government level — that is, you aren’t taking action to protect yourself from a hostile nation state (and this is perfectly reasonable, since it’s probably insurable: check your policies) — then what you really want to do is decide how much money an attacker spends before they qualify as a nation state. As organized crime? As industrial espionage? And so on. If you can put dollars to these categories then you can not only make intelligent decisions about mitigations, but those decisions and the arguments behind them might even have some weight with your insurance adjuster. That’d be nice.
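Putting dollars to the categories makes the classification mechanical. A sketch, with thresholds I just made up:

```python
# Hypothetical spend thresholds. The point is that you pick numbers you
# can defend, not that these particular ones are right.
ATTACKER_CLASSES = [
    (1_000,      "opportunist"),
    (100_000,    "organized crime / industrial espionage"),
    (10_000_000, "well-funded group"),
]

def classify(cost_to_attack: float) -> str:
    """Name the cheapest class of attacker that could fund this attack."""
    for ceiling, label in ATTACKER_CLASSES:
        if cost_to_attack < ceiling:
            return label
    return "nation state"

# A $50k attack is within reach of organized crime, so a policy of
# ignoring nation-state threats doesn't let you ignore this one.
print(classify(50_000))  # organized crime / industrial espionage
```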

Finally, these threats all change over time. Legislation changes, law enforcement focus changes, technology changes. But all of these changes are reflected in some component of the cost to attack. Consequently the value can be re-assessed regularly. A vague value with no measurements is harder to justify re-considering — the whole thing starts to unravel if you ever wonder whether or not it’s right, because it has no fabric to begin with. It’s just smoke and mirrors. It’s better not to look behind the curtain in that case.

But it’s much better to build on a foundation of measurement. It’s always better to have a calculation that you can expose to reasoned debate than to shrug and trust an “expert”. None of this is so complicated that it can’t be understood without training. Making it seem so is a threat to doing the job properly. Let’s throw back the curtain and make this a science again. Let’s measure things.