Making Sense of Hanlon's Razor
A few days after I published 7 Kinds of Knowledge Worth Remembering, my sister sent me a text. She’d been reading through the examples I listed under mental models—things like expected value, marginal gains, and Hanlon’s Razor—and got stuck on that last one. “Why is it better to assume people are stupid?” she asked. Why is that preferable? It’s a good, genuine question, and exactly the kind of push that makes me think harder. This post is a response to that question—an exploration of why Hanlon’s Razor makes sense cognitively, structurally, and statistically.
"Never attribute to malice that which is adequately explained by stupidity."
That’s Hanlon’s Razor in its most quoted form. It sounds glib, almost dismissive. But it works, because it's about optimizing your attention.
An uncertain world demands constant interpretation, and in such a world, Hanlon’s Razor is less a moral guideline than a cognitive strategy. And one with strong backing from psychology, economics, and systems theory, among others.
Let’s unpack why.
Malice is costly. Incompetence is cheap.
Malicious intent is metabolically expensive. It requires planning, sustained anger, and the ability to model someone else's suffering as a goal—a combination that is unlikely to occur outside the realm of fictional villainy.
Planning requires executive function—the ability to inhibit impulses, delay gratification, and mentally simulate future events. That capacity is demanding to sustain, so most harmful acts are impulsive rather than planned. Sustained planning narrows the field to a smaller subset of people with high control and dark motivation.
Anger is energetically costly. It raises stress hormones (like cortisol and adrenaline), impairs long-term decision-making, and drains physical resources. Evolutionarily, sustained anger is risky: it makes you less cooperative, more exposed, and less adaptive to changing conditions. So, even though we all have access to deep anger, most people cool off relatively quickly. Real malice requires keeping the emotional fire going long enough to act on it in a deliberate way.
The third, and perhaps the rarest ingredient: to act maliciously, one must be able to model someone else’s perspective and care about hurting it. This involves theory of mind (understanding that others have thoughts and feelings) twisted toward harm. This is cognitively advanced and morally inverted. Not everyone has the capacity, and fewer still the disposition.
So why is genuine malice so rare?
Because it combines:
Long-term cognitive control
High emotional intensity over time
Sophisticated social modeling
A motivation structure that prioritizes harm over more efficient strategies
In short: It’s an expensive stack of capabilities. And there are far easier ways to make mistakes, protect yourself, or gain advantage—most of which don’t require sustained malice.
Hanlon’s Razor isn’t suggesting malice doesn’t exist. It’s saying the statistical likelihood of all the necessary factors aligning is low. So the smart bet is to rule out the cheaper explanations first.
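To put rough numbers on that intuition, here is a small back-of-the-envelope sketch in Python. The percentages are invented purely for illustration (they are not measurements), but they show how quickly a conjunction of independently uncommon ingredients becomes rare.

```python
# Illustrative only: made-up prevalence estimates for each "ingredient" of
# deliberate malice. Swap in your own numbers; the shape of the result is
# what matters, not the exact figures.
ingredients = {
    "sustained cognitive control": 0.30,
    "anger maintained over time": 0.20,
    "modeling another's suffering as a goal": 0.10,
    "preferring harm over easier strategies": 0.10,
}

p_malice = 1.0
for name, p in ingredients.items():
    p_malice *= p  # naively treat the ingredients as independent

print(f"Chance all four line up: {p_malice:.4%}")  # ~0.06% with these guesses
```

With these guesses, all four ingredients line up in well under one case in a thousand. Change the numbers however you like; as long as each ingredient is uncommon and they all have to co-occur, the product stays small.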
And there are plenty of cheaper explanations. If we look closely, most actual harm comes from:
Poor memory
Bad incentives
Misaligned systems
Social pressure
Low context¹
Incomplete models
Laziness
Acknowledging those factors does not excuse bad behavior. It simply shapes the probability distribution we use to understand it. If you assume malice first, you're positing a high-cost explanation for something a low-effort mistake could easily produce. That's a bad bet and bad reasoning.
Error Is the Rule
But even if malice were cheap enough to be more common, stupidity has so much market share that it’s basically a monopoly position. Physicist David Deutsch² makes a profound and relevant point: fallibility is universal. No matter how advanced our understanding becomes, error remains inevitable because knowledge is always incomplete, and new problems are always possible.
In that light, Hanlon’s Razor isn’t merely charitable—it’s just realistic.
We are all stupid, in the sense that we are finite thinkers operating in a near-infinite problem space. The ways to be wrong will always outnumber the ways to be right.
Try not to think of this as pessimism. It is simply good epistemic hygiene.
Just as handwashing prevents infection, epistemic hygiene helps prevent cognitive contamination. It’s the disciplined practice of managing uncertainty by resisting over-reach, updating beliefs with evidence, and distinguishing between intuition and inference. You don’t eliminate error this way, but you do contain its spread.
Hanlon’s Razor works as a kind of mental sanitizer. It stops you from assuming too much, too fast. When you default to simpler, more probable explanations—like oversight, distraction, or bad incentives—you’re not being naive. You’re keeping your belief system clean by refusing to smuggle in unnecessary assumptions about motive or malice without evidence.
Attribution Error Is the Default
Now, you may be thinking: that’s all well and good for physics, but when that guy cuts me off in traffic, it’s not because of some “near-infinite problem space”; he’s just a jerk. And I suspect we have all felt something like that at some time in our lives. I also suspect we have all been that jerk (but when we did it, we had a good reason).
Psychology gives this a name: the fundamental attribution error. We humans contend with a long list of cognitive biases, many of which might actually be helpful (at least some of the time). But when it comes to the fundamental attribution error, we make life hard for ourselves: we overestimate the role of personality (e.g., “they’re selfish”) and underestimate the situation (e.g., “they were exhausted, distracted, under pressure”).
If we make Hanlon’s Razor a habit, it forces us to pause and ask: is this a character flaw, or a context failure? Have I ever behaved that way? Did I have a good reason, or was I just being a jerk?
Most systems are loosely coupled
Depending on your cognitive abilities (and the city you live in), traffic can be one complex system among many. We live in a roiling stew of complex systems, and in complex systems—hospitals, schools, bureaucracies, even friendships—no one has full visibility. Most actors operate on local information. Errors compound silently. As a consequence, you’re rarely seeing malicious coordination (even at the DMV); you’re seeing a lack of coherence.
This is why Donella Meadows³ warned that changing the people in a system rarely changes outcomes. The structure of the system drives behavior, no matter who’s at the wheel. Hanlon’s Razor reminds you to look at the system before blaming the node.
Malice vs. Indifference
By this point, it might seem like intent doesn’t matter much. After all, outcomes are what we live with.
But intent still matters—because it determines how we respond.
Sometimes harm feels like malice: you were excluded, humiliated, or hurt. But feeling harmed doesn’t mean someone meant to harm you. And that distinction—between malice and indifference—changes everything.
If it was malice, the appropriate response might be protection, boundary-setting, or accountability. But if it was indifference—or error, or inattention—then the response should be different: awareness, redesign, feedback.
Hanlon’s Razor doesn’t excuse the outcome—it just keeps you from misidentifying the source. When systems fail, or people miss the mark, vengeance wastes energy. What’s needed is structure. What’s needed is repair.
🧠 Seen this play out?
I’d love to hear how you’ve navigated situations where intent was unclear, but effects were real. → Join the conversation and share your experience.
Don’t become the thing you’re fighting
Even if you’re right about being hurt, it’s easy to be wrong about why.
And once you decide someone meant to hurt you—without strong evidence—it reshapes your next move. You might retaliate harder than needed. You might cut off communication or escalate unnecessarily. A small misstep becomes a lasting narrative. A repairable error becomes a personal offense.
That’s the real risk: misreading intent makes you more likely to respond with intent. And in trying to punish imagined malice, you become part of the very pattern you hoped to break.
Hanlon’s Razor interrupts that loop. It doesn’t ask you to ignore harm. It asks you to examine your inference. Is this something to escalate—or something to clarify, redesign, or let go? It gives you a buffer between the event and your reaction: a chance to respond, rather than reenact.
Always be updating
So what should we do when harm actually happens?
Not all harm is malice. Not all intent is visible. But something happened—and it still needs to be understood. This is where one of my favorite mental models comes in: Bayesian reasoning.
In 7 Kinds of Knowledge Worth Remembering, I listed it as a tool for thinking more clearly under uncertainty. That’s exactly what Hanlon’s Razor helps us do—by starting from base rates and updating only when the evidence justifies it.
When you experience harm or confusion, your mind starts generating hypotheses:
They did it on purpose.
They didn’t care.
They were distracted.
The system made it likely.
Bayes gives you a method: assign rough probabilities based on what usually happens, then update those probabilities as new evidence arrives. Don’t start from scratch. Don’t leap to certainty. Just adjust—gradually, proportionally.
Hanlon’s Razor doesn’t say malice is impossible. It just reminds you that malice is statistically rare, while confusion and constraint are everywhere. So until the evidence meaningfully shifts, bet on what’s more probable. That’s Bayesian. That’s efficient. And it keeps your attention focused on what can be observed, clarified, and redesigned—rather than on imagined motives.
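If it helps to see that habit mechanically, here is a minimal sketch of one Bayesian update in Python. Every number below is invented for illustration; the only point is the mechanics: start from base rates, multiply each hypothesis by how well it explains the new evidence, then renormalize.

```python
# A minimal sketch of the Bayesian habit described above. All numbers are
# invented; the mechanics (prior x likelihood, then normalize) are the point.

priors = {
    "deliberate malice": 0.05,              # base rate: rare
    "carelessness / distraction": 0.60,
    "bad incentives / system design": 0.35,
}

# How likely is the observed behavior under each hypothesis?
# (e.g., "they ignored my second email too" -- a made-up piece of evidence)
likelihoods = {
    "deliberate malice": 0.70,
    "carelessness / distraction": 0.50,
    "bad incentives / system design": 0.40,
}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for hypothesis, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{hypothesis}: {p:.0%}")
```

With these made-up numbers, a frustrating new data point (say, a second ignored email) nudges “deliberate malice” from 5% up to about 7%. It gets updated, not dismissed, but it remains the long shot compared with carelessness or a badly designed system.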
What if you’re wrong?
Assuming malice carries a cost. So does assuming incompetence. But they aren’t symmetrical.
If you assume incompetence and it turns out to be malice, you’ll see the pattern soon enough. Malice leaves tracks. You can escalate appropriately once the evidence is clear.
But if you assume malice and it was a mistake, you’ve already done damage:
You’ve burned trust.
You’ve reduced future cooperation.
You’ve closed a feedback loop that might have led to repair.
You’ve made it harder for someone to try again in good faith.
That’s why Hanlon’s Razor works as a form of reversible error protection. It nudges you toward interpretations that are easier to unwind if wrong. It delays escalation just long enough for better evidence to arrive. And it keeps relationships, systems, and conversations from calcifying too early.
In uncertain environments, the best default is the one that preserves optionality.
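To see that asymmetry as arithmetic, here is one last toy comparison, again with invented numbers. The claim it illustrates is narrow: when one misreading is cheap to reverse and the other is not, the reversible default has the lower expected cost across a wide range of assumptions.

```python
# Toy expected-cost comparison of the two possible misreadings.
# All figures are invented; the ordering, not the exact values, is the point.

p_malice = 0.05              # assumed base rate of genuine malice
cost_missed_malice = 30      # assumed cost of reading malice as a mistake (recoverable: malice leaves tracks)
cost_false_accusation = 100  # assumed cost of reading a mistake as malice (trust burned, hard to unwind)

expected_if_default_mistake = p_malice * cost_missed_malice          # wrong only when it really was malice
expected_if_default_malice = (1 - p_malice) * cost_false_accusation  # wrong whenever it was just a mistake

print(f"Default to 'mistake': expected cost ~ {expected_if_default_mistake:.1f}")
print(f"Default to 'malice':  expected cost ~ {expected_if_default_malice:.1f}")
```

With these guesses, defaulting to “mistake” is roughly sixty times cheaper in expectation. You would need malice to be both common and nearly impossible to detect later before the other default started to win.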
1. Low-context systems rely heavily on explicit instructions, rules, and documentation because participants don’t share much background knowledge or implicit understanding. Missteps in these systems often come from missing information, not bad intent. In contrast, high-context systems assume shared norms, unspoken expectations, and subtle cues, so behavior is guided more by what “goes without saying.” When people from low-context backgrounds interact in high-context systems (or vice versa), confusion is common and often misread as incompetence or disrespect.
For example, a student new to a school may not realize that asking a question after the bell rings is seen as disruptive—not because they’re defiant, but because no one ever told them that bell = silence. In a high-context classroom, that rule is assumed. In a low-context one, it has to be made explicit. In such a case, Hanlon’s Razor reminds us to resist the urge to assume this behavior is defiance, and instead to study the context for clues to what else might be going on.
2. David Deutsch, a physicist and philosopher, argues in The Beginning of Infinity (2011) that fallibility is a permanent feature of human knowledge. Since we live in an open-ended universe of problems, there will always be errors to detect and correct, and no final theory or perfect understanding will eliminate the possibility of being wrong.
3. Donella Meadows, a systems theorist, identified leverage points in complex systems—places where small shifts can produce big change. She emphasized that changing the people in a system often has little effect if the underlying structure, rules, and goals remain the same. The system’s design drives its behavior more than individual intentions. See: Meadows, Thinking in Systems: A Primer (2008).