scope of interest

Let’s say you’re playing a game in which the ref is framing a scene. Not a huge stretch here since this is basically all of traditional RPG gaming and a lot of the rest of it. I think what follows will apply to other patterns of play as well, but let’s stick to what we know here. So you (the ref) are framing a scene.

What do you want? You want the players to engage with something, make choices, and consequently cause the wheels of the system to turn and have that machine generate whatever it generates. That’s the reason we buy games, right? We are buying a machine and it’s up to us to get it started and keep it moving. The beginning of a scene is how the engine gets started.

How do you do that? Usually you want to get to an event. Now you might start with casual discussion between characters and NPCs, but this will usually stall in banalities unless something external HAPPENS. An event. As ref, probably your most useful input to the game is crafting events. Ad libbing against the results of those events is maybe the next. But it’s up to you to push the starter on this engine. The rest of the players shoulder a substantial burden as well: to engage with it. And, in the best of all possible games, to start stirring up their own shit, their own events, to feed the engine. But as ref, even if you don’t see it as your responsibility to start shit (as in, say, a pure sandbox where you are mostly reacting), it is still a tool in your kit.

In my games I expect the ref to kick things off.

In thinking about this, about events that define scenes, I find three “scopes of engagement” for the players and their characters. Each is very different, has different results, and different values at different times. I think that recognizing these three scopes and understanding them lets us use them deliberately rather than instinctively or accidentally and that has to be a good thing.

The uninvested event

This is an event in which the players have no initial investment. It happens to a place or person or thing that we haven’t discussed yet and so the players cannot have invented an investment in it. That’s not to say it won’t be affecting, in fact we hope it will! But since nothing about the event has any relevance to the player (not the character! We may find that the character is incredibly invested, but that’s super important: we are going to find this out) it does not require (and does not benefit from) any kind of decision tree.

The event happens and the players react. The event is a done deal, a fait accompli. It is an instigator.

Since we’re all big fucking nerds, let’s use Star Wars for an example.

Han Solo jumps into Alderaan system and it’s nothing but rubble. That’s the event. The Empire has destroyed an entire planet. Before this event Han’s player knew nothing about Alderaan — we hadn’t discussed it, it’s not on their character sheet. Their introduction to Alderaan is its destruction. Consequently the player cannot be invested in it yet. Consequently we don’t need a big decision tree leading up to it. We present it.

What happens next in the scene is the reaction to the event. Facts have been established about the Empire’s ruthlessness, their evil. Players will want to investigate, maybe find survivors, maybe punish the wicked. At this scope of engagement, the uninvested event, we generate investment. All of the scene is about reaction. This is a self-guided missile, a fire-and-forget tool for the ref. Kick it off and ad lib against the player reactions.

The invested event

Here we have an event that will affect something the players are invested in though not, critically, their character. We have already somehow established investment through backstory, prior play, mechanical elements, or some other method. We know about the thing that will be threatened by the event and we already care about it.

As referee you have carefully chosen this event to threaten something players are invested in. You have deliberately selected this scope for the scene.

When the players are invested we want them to be able to change the apparent course of events and consequently there must be decision points built into the scene: when you threaten something players are invested in, they must be able to act to affect the outcome. That’s the whole reason you chose this scope. So as ref, don’t get too invested in a particular outcome. You kicked the hornet’s nest and your plans get what they deserve: player agency.

Star Wars again suits me for illustration.

Princess Leia is threatened by assorted villains on the Death Star: cough up the rebel info or we destroy your homeworld! Well, shit, Leia’s extensive backstory notes are full of info about Alderaan! Her first girlfriend is there, her prized record collection, her family, her friends. It’s all in the backstory. Of course you read it, that’s why you’re threatening to blow it up!

Leia’s player is invested. They are motivated to stop this. As ref, this is the hinge of your scene! Betray everything you believe in and we’ll keep your planet safe; otherwise it’s plasma. A moral dilemma (and this is the scope in which they thrive): betray your most earnestly held beliefs, or save your family, your friends, and people you don’t even know? A decision point. Not a chain of them, this isn’t suddenly positional combat on a grid, but at least one.

Leia decides to give the information but lie. The baddies destroy Alderaan anyway. I guess she should have put more points in SOCIAL but maybe when she levels up the player can think about that. In the meantime, angst, betrayal, and further investment in something that matters (the course of the narrative) at the expense of something that matters less (backstory). I use expense deliberately: backstory is a currency. We use it to buy things. If we don’t spend it, it’s not useful. Spend backstory.

The affected character

At this scope characters are directly threatened. We don’t care about investment here because the bad thing is happening to them, right now, and they have to act. This is the easiest way to engage the system, but none of these scopes are “best”! They do totally different things. This one is the easiest and most mechanical, but it does not always provide the most (or even much) change within the story.

This is because it is defined by multiple, perhaps many, decision points that are focused solely on the event and not the story arc. We are zooming in, blow by blow, making choices that are critical in the moment (I draw my knife!) but irrelevant at a larger scale. Ultimately there is still only one hinge here — what is the end state when the smoke clears — and a lot of decisions. It’s a lot of system engagement for comparatively little story change.

But! But we’re here to engage the system. Not better. Not worse. Different. We play the game at a minor expense to story (per unit time).

Star Wars fails us here, at least in the Alderaan scene, so let’s look at a character that never got mentioned: Planetary Defense Captain Olberad Pinch! While everyone else is wringing their hands or waiting for fireworks, Olberad Pinch has a problem with multiple decision points! Now we all know they failed utterly, but look at the expenditure in table time to get there. And it was very important and interesting for Pinch’s player.

Detection. A moon-sized warship enters the Alderaan system! What do Planetary Defenses do? That’s in Pinch’s capable tentacles. They investigate, gather information, determine the next course of action. Maybe send ships — maybe Pinch is on one and their story ends in a lopsided dogfight! Maybe they escape!

Action. The Death Star is determined to have planet-destroying weapons and is powering up! Did you get spies aboard? Was Pinch one of them? What about the planetary railguns? The local fighter swarm? Sure, all of these things obviously failed, but there are one or more detailed, system-engaging scenes here. In game time, this space, which is largely unseen in the movie, could be multiple sessions, maybe the bulk of a month’s play. This is the nature of the Affected scope! It’s about your character, not just something you like! You care this much!

Climax! The Death Star is powering up! If you’re not in a position to stop it maybe you can escape? Evade TIE fighters in your shuttle just in time? With who? Which eight people did you select? And where are you going now? Again detail, lots of table time, all to save your ass.

And so

Those are the three scopes of engagement I can think of for a scene. Each requires a different level of planning or ad libbing from the ref. Each has different expectations of the players and uses their character sheets differently. Each has a place, makes different things happen. If you over-use one habitually, think about the others. Think about ways you can fabricate investment with uninvested scenes. Think about ways you can engage the system by explicitly threatening characters. Think about ways you can make a scene-staging event interesting by picking on investments the player has declared right there on the character sheet. (Incidentally, this is why the lonely loner backstory will always be the most useless: if the character cares about nothing then a third of these tools are obviated. If you take anything away from this as a player, it should be that the more your character clearly cares about things, the more interesting things can happen to them.)

mystical security

This is something that stuck in my head while at work today.


The general case

Talents that are new to humanity go through four phases. Well, on different axes they go through all kinds of phases, but there’s one progression I’m interested in today.

Mysticism. At first there are few people with the talent and it is largely unexamined. Even the practitioners don’t really know how they do what they do. They have talent and inspiration and they seem to be effective. There are individual heroes and we tolerate a lot of bullshit because there’s not much out there but heroes at this stage. The word “genius” gets thrown around a lot.

Organized Mysticism. Once our mystics recognize that they have something special they organize. They find other mystics and grant them access to the organization. They deny access to those that don’t have it. This may or may not be literally organized, but there’s at least a social aggregation.

Investigation. At some point people realize that there can’t be anything magical or purely intuitive about this. There must be a way that people with the talent do what they do. Something we can quantify and proceduralize. This requires an honest and rigorous analysis of the talent and the talented.

Engineering. Once the talent is quantified we can teach it to others. No longer do we rely on the intuitive talent of individuals nor (in some cases worse) the accreditation of an individual by a mystic cabal. It can be taught and it can be tested and it can be reproduced. Anyone who wants this talent can have it.

One problem that arises is that during the Organized Mysticism phase there will be a lot of resistance to investigation. There is significant pressure to remain mystical!

First it’s a lot less work because people can only check your results and not your process. And your results don’t have to be all that good to be good enough — just a little better than a random guess. In reality you don’t even need to be that good if your successes are spectacular enough or the failures of those who don’t use your mystic organization are publicized properly.

Second it’s lucrative. You control access to the talent, so you can price it however you like. And then you also control membership to the Mystic Cabal and if your outcomes aren’t all that controlled, maybe you just want to sell some memberships and make a packet that way. This may or may not happen but the pressure is there and the controls are absent.

And investigation is expensive and has no immediate payoff. It’s an academic exercise, one done for the love of the knowledge. It’s a future-value endeavour, one that may or may not pay off. I mean, we might discover that the talent doesn’t actually exist, and then you are stuck at the Organized Mysticism stage and you are discredited. The value in self-examination is low.

And honestly if you have an amazing intuitive talent do you really want to be surrounded next year by people — just anyone really — doing what you do? That’s bound to bring down salaries.

So getting out of the Organized Mysticism phase is hard. It’s an ethical move. It should be the next step for any mystics who honestly believe that their talent is both valuable (to humanity — being valuable to yourself is actually a negative motivator here) and real. Resistance to investigation is suspicious.

The specific case

In the standard way of doing security risk assessments there is this idea of a risk calculation matrix, in which you cross-index the impact of an event with the likelihood of an event to determine just how bad a threat is and therefore how much you should spend to mitigate it. At its root this is a good idea — it comes from safety analysis, after all, which is a time-honoured science.
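To make the cross-indexing concrete, here’s a sketch of such a matrix in Python. The category names and cell values are invented for illustration, not taken from any standard:

```python
# Rows are impact categories, columns are likelihood categories.
# Each cell says how seriously to treat the combination.
IMPACT = ["NEGLIGIBLE", "MINOR", "SIGNIFICANT", "SEVERE"]
LIKELIHOOD = ["RARE", "UNLIKELY", "LIKELY", "FREQUENT"]

RISK = [
    # RARE      UNLIKELY   LIKELY     FREQUENT
    ["low",     "low",     "low",     "medium"],    # NEGLIGIBLE
    ["low",     "low",     "medium",  "medium"],    # MINOR
    ["low",     "medium",  "high",    "high"],      # SIGNIFICANT
    ["medium",  "high",    "high",    "critical"],  # SEVERE
]

def risk_level(impact: str, likelihood: str) -> str:
    """Cross-index impact and likelihood to get a risk level."""
    return RISK[IMPACT.index(impact)][LIKELIHOOD.index(likelihood)]

print(risk_level("SIGNIFICANT", "LIKELY"))  # high
```

The table itself is trivial; everything interesting (and, as argued below, everything suspect) is in how you justify the column you pick.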

However, what we do here in co-opting this mechanism for security is not science, and it’s very much to our advantage as “experts” (especially certified experts) for it not to become a science. As long as it’s an art we don’t have to do much real work and at the same time our job seems like it’s a lot more clever than it is.

In a safety case, since we are dealing with an event tree that triggers on equipment failure (that is, on mean time to fail numbers — published numbers) rather than malicious activity, that “frequency” or even less credibly “probability” column is an actual number you get from a manufacturer. My fault tree shows that if component A and component B fail simultaneously then I cannot guarantee the system is safe. A and B both have published mean time between failure numbers (which are both measured and very conservative). The probability column here is just arithmetic.
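To show how mechanical the safety-side version is, here’s a sketch in Python. The component names, MTBF figures, and inspection window are invented, and it assumes the usual simplification of independent, exponentially distributed failures:

```python
import math

def p_fail(mtbf_hours: float, window_hours: float) -> float:
    """Probability a component fails within the window, assuming an
    exponential failure model with the published MTBF."""
    return 1.0 - math.exp(-window_hours / mtbf_hours)

# Invented published MTBF numbers for components A and B.
MTBF_A = 100_000.0  # hours
MTBF_B = 250_000.0  # hours
WINDOW = 8_760.0    # one year of continuous operation

# The fault tree says the system is unsafe only if A AND B both fail
# in the same window; independence lets us multiply.
p_a = p_fail(MTBF_A, WINDOW)
p_b = p_fail(MTBF_B, WINDOW)
p_unsafe = p_a * p_b

print(f"P(A fails) = {p_a:.4f}, P(B fails) = {p_b:.4f}")
print(f"P(system unsafe) = {p_unsafe:.6f}")
```

Every input here is measured by someone and published; the rest really is just arithmetic, which is the point.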

In a security case that probability column is a Wild Assed Guess. We cloak it in two things: our credentialled “expertise”, and a refusal to assert real numbers (unsupportable, since there are no real numbers) in favour of vague order-of-magnitude categories. A first glance at the problem might suggest that this is just inevitable: the probability of malicious activity is not quantifiable. To me, though, this should not imply that we simply trust the instinct of a credentialled expert to make it quantifiable, because the problem isn’t that it’s hard to know and you need a lot of training and experience to estimate it. The problem is that it’s genuinely unknowable. That means when someone tells you they can quantify it, even vaguely, at an order-of-magnitude level, they are lying to you.

Unfortunately this lie is part of the training. You even get tested on it.

This makes us a (currently powerful) cabal of mystics. And the problem with a cabal of mystics being in charge is that first, they aren’t helping because they are not doing any science and second, as soon as someone starts doing some science they will entirely evaporate, exposed as charlatans. So naturally for those invested in the mysticism there will be some resistance to improving the situation.

The essence of science, setting aside for a moment the logical process (and that’s a big ask but it’s out of scope here) is measurement.

One axis of that risk calculation matrix is measured: the impact. Now it might be measured vaguely, but you can go down the list of items that qualify an event for an impact category and agree that the event belongs there. Someone could get seriously injured. Tick. Someone could get killed? Nope. Okay, it goes in the SIGNIFICANT column. It’s lightweight as measurement goes, but it’s good enough and it’s mechanizable (and that’s a tell that separates engineers from mystics). You don’t need a vaguely defined expertise to be able to judge this. Anyone can do it if they understand the context and the concepts.
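That kind of checklist judgment mechanizes trivially. A sketch, with the criteria and category names invented for illustration:

```python
def impact_category(serious_injury: bool, fatality: bool,
                    outage_days: int) -> str:
    """Walk a checklist from worst to least and return the first
    impact category the event qualifies for. Criteria illustrative."""
    if fatality or outage_days > 30:
        return "SEVERE"
    if serious_injury or outage_days > 7:
        return "SIGNIFICANT"
    if outage_days > 0:
        return "MINOR"
    return "NEGLIGIBLE"

# The example from the text: serious injury possible, no fatality.
print(impact_category(serious_injury=True, fatality=False, outage_days=0))
# SIGNIFICANT
```

No wand-waving required: anyone who understands the context can run the checklist and get the same answer.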

So the question I keep banging my head against is the other axis: frequency or probability. And since this is both unmeasurable and also has vast error bars (presumably to somehow account for the unmeasurability, but honestly if it’s impossible to measure then the error bars should be infinite — an order of magnitude is just painting a broken fence) my opinion is that it should be discarded. Sure it’s familiar because of safety analysis, but they have an axis they can measure. This one is not measurable. It’s therefore the wrong axis.

A plausible (and at least estimable if not measurable) axis is cost to effect. How much does it cost to execute the attack? This has a number of advantages:

  • You can estimate it and you can back up your estimate with some logic. There’s a time component, a risk of incarceration, expertise, and some other factors. You can break it down and make an estimate that’s not entirely ad hoc and is better than an order of magnitude.
  • It reveals multiple mitigations when examined in detail.
  • It reveals information about the opposition. Actors with billions to spend might not be on your radar for policy reasons. Threats that can be realized for the cost of a cup of coffee cannot be ignored — you can hardly be said to be doing due diligence if attacking the system is that cheap.
  • It is easily re-estimated over time because you retain the logic by which you established the costs. When you re-do the assessment in a year’s time and a component that cost a million dollars now costs a hundred, the change in the threat is reflected automatically in the matrix. No new magic wand needs to be waved. It’s starting to feel sciencey.

A useful cost-to-attack estimate (and I have nothing against estimates, I just expect them to be defensible and quantified) would need some standardized elements. For example, I would want us to largely agree on what the cost is of a threat of imprisonment. If I wet my finger and wave it in the air, I’m happy with a hundred grand per year (a fair salary) of likely incarceration, times about 10% for the chance of getting caught. If we’re not happy with the estimate we can do some research and find out what the chances of getting caught really are and what the sentencing is like. We might find out that I’m being way too expensive here.
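That finger-in-the-air figure is just expected-value arithmetic. A sketch using the guesses above (the three-year sentence is an added assumption of mine, purely illustrative):

```python
def incarceration_cost(salary_per_year: float,
                       expected_sentence_years: float,
                       p_caught: float) -> float:
    """Expected cost to the attacker of the legal risk: the value of
    the years they would lose, discounted by the chance of capture."""
    return salary_per_year * expected_sentence_years * p_caught

# Guesses from the text: $100k/year of incarceration, 10% chance of
# getting caught; a three-year sentence is assumed for illustration.
cost = incarceration_cost(100_000, expected_sentence_years=3, p_caught=0.10)
print(f"${cost:,.0f}")
```

The virtue of writing it down like this is that every term is individually arguable: research sentencing, research capture rates, and the estimate updates mechanically.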

This is a good sign though. When I am compelled to say “we ought to do some research” I am happily thinking that we are getting closer to a science. What credible research could you do on probability of attack? Where would you even begin? And what would its window of value be? Or its geographic dependencies? Or its dependencies on the type of business the customer does?

Because you want to break the cost to attack down into the various costs imposed on the attacker — their time, their risk, their equipment costs — you have grounds to undermine the attack with individual mitigations. What if a fast attack took many hours? What if you could substantially increase the chance of catching them? What if you could increase the chance of incarcerating them? Suddenly those legal burdens start looking like they could be doing you a favour: you make this attack less likely by increasing your ability to gather evidence and to work with law enforcement. Publish it. Make an actual case and win it. Your risk goes down. These are mitigations that are underexplored by the current model but that could do some genuine good for the entire landscape if taken seriously. Sadly they don’t imply flashy new technologies at fifty grand a crack. But I am not interested in selling you anything. I want your security to improve.

In most of our assessments the threat vector, the person attacking, is categorized fairly uselessly into “hacker” and “terrorist” and “criminal” and so on. But their motivation doesn’t actually help you all that much. This isn’t useful information. How much they are willing to spend, however, does tell you about them. It tells you plenty. If you have a policy that you are only interested in threats from below a government level, that is, you aren’t taking action to protect yourself from a hostile nation state (and this is perfectly reasonable since it’s probably insurable: check your policies), then what you really want to do is decide how much money an attacker must spend before they qualify as a nation state. As organized crime? As industrial espionage? And so on. If you can put dollars to these categories then you can not only make intelligent decisions about mitigations, but those decisions and the arguments behind them might even have some weight with your insurance adjuster. That’d be nice.
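Putting dollars to those categories could be as simple as a threshold table. A sketch; the dollar figures are placeholders, not recommendations:

```python
# Placeholder thresholds: the minimum plausible spend that marks each
# attacker class. Tune these to your own policy and insurance language.
ATTACKER_CLASSES = [
    (100_000_000, "nation state"),
    (1_000_000,   "organized crime"),
    (50_000,      "industrial espionage"),
    (0,           "opportunist"),
]

def classify_attacker(cost_to_attack: float) -> str:
    """Return the cheapest attacker class that could plausibly afford
    this attack: the threat floor implied by the cost to execute."""
    for threshold, label in ATTACKER_CLASSES:
        if cost_to_attack >= threshold:
            return label
    return "opportunist"

print(classify_attacker(5))          # cup-of-coffee cheap: opportunist
print(classify_attacker(2_000_000))  # organized crime territory
```

The point is that the thresholds are explicit and arguable, which is exactly what the motivation labels are not.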

Finally these threats all change over time. Legislation changes, law enforcement focus changes, technology changes. But all of these changes are reflected in some component of the cost to attack. Consequently the value is possible to re-assess regularly. A vague value with no measurements is harder to justify re-considering — the whole thing starts to unravel if you ever wonder whether or not it’s right. Because it has no fabric to begin with. It’s just smoke and mirrors. It’s better not to look behind the curtain in that case.

But it’s much better to build on a foundation of measurement. It’s always better to have a calculation that you can expose to reasoned debate than to shrug and trust an “expert”. None of this is so complicated that no one can understand it without training. Making it seem so is a threat to doing the job properly. Let’s throw back the curtain and make this a science again. Let’s measure things.

catastrophe in the first person

So yesterday I blurted out this twitter-splort as a sort of sub-tweet related to someone asking about what could happen to engage characters when an asteroid station’s reactor malfunctions. I gave them direct and I hope useful advice but then I did this.

Something that doesn’t get explored enough for my tastes in RPGs: confusion. In real life confusion + baseline fear creates some of the most terrifying and difficult to navigate circumstances.

When something big and terrible happens in an RPG often we start with full knowledge of it. This is a missed opportunity. Often the outward signs of a disaster for someone not immediately killed are ambiguous and subtly terrifying.

There are lots of emergency people and they don’t know what to do. People are running in multiple directions (no obvious origin of danger). Things that always work are working sporadically or not at all. There are sounds that aren’t alarming but you’ve never heard them before.

There are dead and injured and it’s not obvious what killed or injured them. There are people demanding you help who don’t know how you can help. Visibility is suddenly restricted or obliterated. Alarming smells are suddenly commonplace (gas, smoke, rubber, metal)

But most importantly these haphazard inputs are all you have. They don’t assemble into a certainty as to what’s going on. They might not even help. If you are in this situation you are either:

* leaving

* investigating so you can understand

* helping those immediately in danger

A fair question is: how do you evoke this in a game? My first thought is that this isn’t mechanical in the strict sense — it doesn’t need points or clocks or dice. I mean, you can employ those things, but there are more general techniques you can bring to bear.

Maybe it’s obvious, but if a real person is terrified because things are uncertain and confusing and dangerous then evoking the mood for players guiding a character through the disaster might benefit from the same thing: lack of information. This is of course in direct conflict with the idea that players should have full information and play their characters as though they don’t. Sometimes that’s the right thing and lets mechanisms already present engage, but it doesn’t establish mood. So what I’ll suggest is that whether or not you eventually draw back the curtain to allow the mechanism to play out, at least start with limited information.

So consider this asteroid reactor failure:

Ref: You’re buying noodles at a swing-bar when suddenly there’s a lurch. The air goes opaque with dust or something and your noodles fly out of your hands, whirling across the open space of the Trade Void. You hear screaming and you can’t see shit.

This is where I start: you don’t need to evoke confusion or simulate it. Start with the actual confusion. Players will probably start looking for information. Before they get too much out, follow up. This makes things urgent.

Ref: People are rushing past you, just grey shapes in this fog, bumping into you. They are heading in different directions and are incoherent. Except for the one begging for help from across the ‘Void. You find your clothes are smeared with blood from someone who passed you.

Players are now in a position where they have little information, no easy way to get more information, and yet a motivation to either leave, help, or investigate.

I think it’s a critical technique to know and use as ref: stepping back from the simulation engine and using the information itself to establish mood and urgency. It’s a storytelling technique, not a game mechanism. When you rush or interrupt people, they get anxious. When they don’t have enough information, they get the Fear. When they know the danger is real but don’t know which direction is dangerous, they get careful.

The problem with this is that it’s not safe. When you try to get real emotions at the table you are treading on dangerous ground. If you’re going to attempt to directly evoke fear and anxiety in people, they had better all be on board for that. And even if they feel like they are, it’s helpful to have an out like an X-Card or a Script Change. Make sure everyone knows what they are in for and has a way to opt out. If I use fast random information and talk over people in order to establish confusion and anxiety, I’m doing a real thing to real people, and I bear a great deal of responsibility when I do that. Someone not prepared for it would have every right to get angry about it. So tread lightly and talk first.

The upside is that the mood is easier to get into, reactions come more easily in context, and the scenes you build are memorable for their emotion and tension.

Most of our catastrophe images have context because we are looking back on the event through the lens of investigation and analysis. But what could you conclude from this if it’s all you knew? A vast cloud of thick grey is descending on you and the noise is tremendous and people are screaming. Context is a luxury.

One level above this is how to analyze situations in order to understand how to place someone in them convincingly. If you’ve never been in mortal danger, you might have no idea what features of that terror are easily conveyed. But there are things that are generally true as I indicated in those tweets:

Low information: initially you know nothing except the effects you see.

Low visibility: bad things often create visual confusion. Fog, smoke, tear gas, crowds — your ability to see what is going on is constrained, so don’t describe everything.

High emotions: people are screaming, crying, begging. Not all of them are in danger or physical distress but almost all of them are overwhelmed by the confusion. You can’t immediately tell which are which.

Blood: even just second-order injuries (people getting banged about by the other confused people) generate a lot of blood after a few minutes. And you can’t tell who’s badly injured from who just has a broken nose. Or who’s covered in someone else’s blood.

Low air: whether the air is filled with Bad Things or you’re overcrowded or you’re just hyperventilating it always feels like there is not enough air.

On the upside you will also usually find pockets of local organization: there’s usually someone trying to help and even if they have no idea what’s going on this will tend to form a nucleus of organization: people in this situation are attracted down the confusion gradient. They’ll walk right into a crossfire of bullets if it’s easier to see and breathe there.

There’s also usually a coordinated response very rapidly and that forced organization defuses confusion rapidly. The longer it takes to get there the more certain people are that it’s never coming, which amplifies confusion rapidly.

Presenting these things falls into the category of technique for me. You can mechanize some of them, I suppose, but I think you only want to do that if you want your game to be about catastrophe. If you just want your particular game night to deal with a catastrophe, you want to hone some skills for presenting the catastrophic.

advancement

Oh advancement systems how we love you in the RPG world. By “we” here I mean you, and maybe not you specifically. Personally, I dislike them a great deal.

The problem with character advancement is that opposition either scales with character advancement or it doesn’t.

When opposition scales with you, the best advancement systems have the following features:

  • The range of options available to the player increases
  • The range of options available to the ref increases
  • New chapters of the monster manual are brought to the front — you are revealing new pictures of new opposition

For myself, the first two of those are not appealing. I don’t want my game to get more complicated as I play it. I’m not saying you’re bad, stupid, or evil if you like that, but let’s acknowledge that it’s a very important design choice and that people are going to react to it differently.

A possible hero to me without being a secret fireball-throwing wizard.

Especially as ref, new complexity can feed my anxiety and lead to me violating the rules. There’s no way I’m managing a spell list for a high level dragon and deciding what they do from round to round as though I was playing my wizard character. At least in part because this dragon is probably going to die soon and there’s another complicated monster in the next room.

As a player I can cope, especially if there’s a type of character that doesn’t change much in complexity. If my fighter has increasing bonuses to scale with the baddies but not a lot of tactical choices that increase over levels, I’ll probably play the fighter and not the sorcerer.

Revealing new parts of the monster manual is valuable: changing up the nature of opposition is cool. And the way these increasingly powerful monsters imply increasingly existential threats to the low-level societies I am protecting is pretty cool. But this is a very specific kind of story arc and not one I want to play every time I sit down. And, frankly, not one I have the patience to work through from zero to hero. It’s just not for me. I’d rather start where the fun is, wherever that is for me today.

Everything else is basically the same except the numbers are bigger, and this can get to feel pointless, especially if the monster manual is weak. If the gnolls just keep getting bigger and better at magic then I don’t feel like we’re going anywhere interesting. I’m just doing more damage against larger hit point pools.

If the system doesn’t scale opposition (like an asymmetrical system where the opposition model doesn’t change or where the opposition isn’t really modelled at all) then something very different happens: you just get more successful. Now I actually find that pretty interesting as long as it happens slowly and as long as failure is rich — the whole tone of the game should change over time. But it has a cap and not a very well defined one: at some point there are no challenges any more and that’s an unsatisfying way to end a story. It might make an amusing allegory once. Just once.

Again these are matters of personal taste. I know there are people (because I was one) who get a rush from advancing. Accruing enough points to ring the bell and get a new power is intrinsically satisfying regardless of its relationship to the story (and sadly there often isn’t one — maybe I’d be keener if something happened in the fiction to explain and explore my sudden leap in ability). But this makes it a mode of play, not a necessary feature of play. I like playing cards for money but it doesn’t mean that money needs to be on the table for every card game.

This is why advancement figures weakly, if at all, in my games: it doesn’t sing to me. It’s important that there are games that have it, because it sings to a lot of people. But it’s important to have games that don’t as well, because that thrill of improvement ties a reward to the accrual of experiences that help you advance, and that can distract you from the fact that sometimes these experiences are abhorrent and rewarding them should be questioned. When the thrill comes from this reward, this advancement, questioning the underpinnings of the idea of rewarding murder and robbery (for example) is uncomfortable and unproductive. I think we need to play without mechanical reward for a while to get a grip on what kinds of things we love in a story that aren’t murder and robbery. Maybe that leads us to games that reward different things in different ways. And sometimes we find that it’s fine that that changes from session to session, and maybe advancement as a reward isn’t always necessary.

And sometimes, for sure, we want to ring that bell as we stand on the corpse of a wizard-dragon that took hours of smart choices to slay. But not always. And, for me, not even mostly.


I wanted to talk a little about heroism but I forgot to. I don’t think a hero should be defined by their capabilities. I mean they can be, but it feels insufficient — even Superman is a hero for reasons far beyond being crazy strong and largely invulnerable. Powers enable hero or villain alike. A hero to me is about how someone responds to adversity — about the choices they make when the choices are hard. So “heroic” gaming to me is gaming within a context where it’s not obvious what the right thing to do is and, most importantly, where you’re celebrated when you make a great choice. People treat you like a hero when you’re heroic. Scale of conflict is not strictly relevant, though it’s a cheap way to get action that could resolve heroically.