What the Great Hanoi Rat Massacre of 1902 and Modern Risk Practices Have in Common


In 1902, French colonial administrators in Hanoi discovered rats swarming the city's newly built sewer system. Alarmed by the public health risk, they launched a bounty program: locals would be paid for every rat tail they turned in.

At first, the numbers looked great. Thousands of tails poured in. But city officials started noticing something strange: rats were still everywhere. Then came the reports. People were breeding rats. Others clipped the tails and released the rats back into the wild, free to breed and be harvested again. Some even smuggled rats in from outside the city just to cash in.

[Image: The Hanoi rat ecosystem, 1902]

The bounty created the illusion of progress while quietly making the problem worse. It was a textbook case of perverse incentives, a system where the rewards were perfectly aligned to reinforce failure.


Perverse Incentives in Cyber Risk

Maybe five years ago, I would’ve told you that cyber risk quantification was on the brink of going mainstream. Leaders were waking up to the flaws in traditional methods. Standards bodies were paying attention. Things felt like they were moving.

But now? I’m not so sure.

The deeper I go into this field, the more I see how powerful the gravitational pull is to keep things exactly the way they are. It turns out that cybersecurity is riddled with perverse incentives, not just in isolated cases, but as a feature of the system itself. Nearly 25 years ago, Ross Anderson made this point powerfully in his classic paper “Why Information Security Is Hard”: cybersecurity isn’t just a technology problem, it’s a microeconomics problem. The challenge isn’t just building secure systems; it’s that the incentives of users, vendors, insurers, consultants, and regulators are often misaligned. When the people making security decisions aren’t the ones who bear the consequences, we all suffer.

Cyber risk management today is our own version of the Hanoi rat bounty. On paper, it looks like we’re making progress: reports filed, audits passed, standards met. But beneath the surface, it's a system designed to reward motion over progress, activity over outcomes. It perpetuates itself, rather than improving.

Let me explain.


The Risk Ecosystem Is Built on Circular Incentives

[Image: The risk services ecosystem]

Companies start with good intentions. They look to frameworks like NIST CSF, ISO/IEC 27001, or COBIT to shape their security programs. These standards often include language about how risk should be managed, but stop short of prescribing any particular model. That flexibility is by design: it makes the standards widely applicable. But it also leaves just enough latitude for organizations to build the easiest, cheapest, least rigorous version of a risk management program and still check the box.

So boards and executives give the directive: “Get a SOC 2,” or “Get us ISO certified.” That becomes the mission, and the mission, among many other things, includes standing up an end-to-end cyber risk management program.

Enter the consulting firms—often the Big Four. One comes in to help build the program. Another comes in to audit it. Technically, they’re separate firms. But functionally, they’re reading from the same playbook. Their job isn’t to push for rigor; it’s to get you the report. The frameworks they implement are optimized for speed, defensibility, and auditability, not for insight, accuracy, or actual risk reduction.

And so we get the usual deliverables: heat maps, red/yellow/green scoring, high/medium/low labels. Programs built for repeatability, not understanding.

So, the heatmap has become the de facto language of risk. Because the standards don’t demand more, and the auditors don’t ask for more, nobody builds more.


Where the Loop Starts to Break

Here’s where things start to wobble: the same ecosystem that builds your program is also judging whether you’ve done it “right.” Even if it’s not the same firm doing both, the templates, language, and expectations are virtually identical.

It’s like asking a student to take a test, but also letting them write the questions, choose the answers, and have their buddy grade it. What kind of test do you think they’ll make? The easiest one that still counts as a win.

That’s what’s happening here.


The programs are often built to meet the bare minimum—the lowest common denominator of what’s needed to pass the audit everyone knows is coming. Not because the people involved are bad, but because the system is set up to reward efficiency, defensibility, and status-quo deliverables, not insight or improvement.

The goal becomes: “Check the box.” Not: “Understand the risk. Reduce the uncertainty.”

So we get frameworks with tidy charts and generic scoring systems that fit nicely on a slide deck. But they’re not designed to help us make better decisions. They’re designed to look like we’re managing risk.

And because these programs satisfy auditors, regulators, and boards, nobody asks the hard question: “Is this actually helping us reduce real-world risk?”


The Rat Tail Economy

To be clear: NIST, ISO, and similar frameworks aren’t bad. They’re foundational. But when the same firms design the implementation and define the audit, and when the frameworks are optimized for speed rather than depth, you get a system that’s highly efficient at sustaining itself and deeply ineffective at solving the problem it was created to address.

It’s a rat tail economy. We’re counting the symbols of progress while the actual risks continue breeding in the shadows.


If You're Doing Qualitative Risk, You're Not Failing

If you’re working in a company that relies on qualitative assessments—heat maps, color scores—and you feel like you should be doing more: take a breath.

You’re not alone. And you’re not failing.

The pressure to maintain the status quo is enormous. You are surrounded by it. Most organizations are perfectly content with a system that satisfies auditors and makes execs feel covered.

But that doesn’t mean it’s working.

The result is a system that:

  • Measures what’s easy, not what matters

  • Prioritizes passing audits over reducing real risk

  • Funnels resources into checklists, not outcomes

  • Incentivizes doing just enough to comply, but never more


So How Do We Stop Being Rat Catchers?

The truth? We probably won’t fix the whole system overnight. But we can start acting differently inside it.

We can build toward better—even if we have to work within its constraints for now.

A few years ago, a friend vented to me about how his board only cared about audit results. “They don’t even read my risk reports,” he said.

I asked what the reports looked like.

“Mostly red-yellow-green charts,” he admitted.

And that’s when it clicked: the first step isn’t getting the board to care—it’s giving them something worth caring about.

So start there:

  • Baby steps. Meet your compliance obligations, but begin quantifying real risks in parallel. Use dollar-value estimates, likelihoods, and impact scenarios. Start small: pick just one or two meaningful areas.

  • Translate risk into decisions. Show how quantified information can justify spending, prioritize controls, or reduce uncertainty in a way that matters to the business.

  • Tell better stories. Don’t just show charts; frame your findings around real-world impact, trade-offs, and possible futures.

  • Push gently. When working with auditors or consultants, ask: “What would it look like if we wanted to measure risk more rigorously?” Plant the seed.
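To make the “baby steps” concrete, here is a minimal sketch of what quantifying a single risk scenario could look like: a Monte Carlo estimate of expected annual loss from an event probability and a 90% confidence range for the loss. The scenario numbers (a 10% annual chance of an incident costing $100k–$2M) are hypothetical placeholders, and the lognormal model is one common choice, not the only one.

```python
import math
import random

def simulate_annual_loss(p_event, ci_low, ci_high, trials=100_000, seed=1):
    """Monte Carlo sketch: expected annual loss for one risk scenario.

    p_event         -- estimated probability the event occurs in a year
    ci_low, ci_high -- 90% confidence interval for the loss, in dollars,
                       modeled here with a lognormal distribution
    """
    rng = random.Random(seed)
    # Derive lognormal parameters so roughly 90% of sampled losses
    # fall between ci_low and ci_high (1.645 = z-score for the 95th pct).
    mu = (math.log(ci_low) + math.log(ci_high)) / 2
    sigma = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.645)

    total = 0.0
    for _ in range(trials):
        if rng.random() < p_event:       # did the event happen this year?
            total += rng.lognormvariate(mu, sigma)
    return total / trials                # expected annual loss

# Hypothetical scenario: 10% annual chance of a breach costing $100k-$2M (90% CI)
eal = simulate_annual_loss(0.10, 100_000, 2_000_000)
print(f"Expected annual loss: ${eal:,.0f}")
```

Even a toy model like this produces something a heat map never can: a dollar figure you can compare directly against the cost of a proposed control.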

This isn’t about being perfect. It’s about shifting direction, one decision at a time.

We can’t tear down the bounty system in a day—but we don’t have to breed more rats, either. We can step outside the loop, see it for what it is, and try something different.

That’s how change begins.
