“That Feels Too High”: A Risk Analyst's Survival Guide

Source: AI generated with ChatGPT

Years back, I walked into a meeting with our Chief Risk Officer, ready to present a cloud outage risk assessment I was genuinely proud of. I'd spent three weeks on it: interviewing subject matter experts, validating assumptions with comprehensive data, running scenarios, testing edge cases, and putting together a deck showing potential losses and the probability of exceeding various thresholds. Real data, not the "trust me bro" variety.

I finished presenting and waited for questions about our risk exposure, the control gaps we'd identified, the return on investment of the planned mitigations, or the methodology.

Instead, there was a pause. Then: "Hmm... that feels too high."

Just like that, three weeks of careful analysis became a discussion about feelings. All that work (the interviews, the data validation, the scenario testing) suddenly felt secondary to one person's gut reaction.

After years of getting this exact feedback, here's what I've learned: when someone says your risk numbers don't "feel right," they're not necessarily wrong. They're just speaking a different language. To be effective in this business, you need to become fluent in both quantitative analysis and the language of executives.


Three Reasons Your Numbers Get Questioned

That wasn't the first time it had happened, and it won't be the last. After countless meetings of "that feels high" or "that feels low" or "that feels off," I've identified three reasons why risk numbers might not pass the "feels right" test.

Reason #1: You Got Something Wrong

This is the humbling one and the most valuable feedback you can get.

Sometimes the stakeholder is right. Not because they ran their own Monte Carlo simulation, but because they have deep, battle-scarred knowledge of how things actually work in the organization's operating environment, from systems they've seen fail to a threat landscape they've watched change.

I once modeled a scenario that looked solid on paper. All the controls were documented as working correctly, the technology performed well in testing, and the processes seemed robust. The CTO took one look at my numbers and said, "This feels low."

It turned out the CTO had operational knowledge that wasn't captured in any of my sources: things they'd seen fail before, quirks in how systems actually behaved under stress, dependencies that weren't obvious from the documentation. Experience that you can't easily quantify but that fundamentally changes the risk picture.

The Bayesian Approach:

When someone challenges your numbers, treat their feedback as new information that should update your beliefs, not as a personal attack on your work.

Ask questions like:

  • "What specific part feels wrong to you?"

  • "What have you seen happen in practice that I might have missed?"

  • "Are there controls that look good on paper but fall apart under pressure?"

  • "What would you expect this number to be, and why?"

I've found real issues this way: incorrect assumptions about control effectiveness, data gaps, coverage problems, bad or outdated metrics, or scope errors that somehow slipped past multiple reviews.

This isn't about being wrong. It's about being willing to update your model when you get new information. That's good science and good risk management.
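
To make "updating your model" concrete, here's a minimal sketch of one way to fold stakeholder feedback into an estimate, assuming a simple Beta-Binomial model. The prior, the CTO's observed counts, and every number below are hypothetical, purely for illustration:

```python
# A minimal sketch: treating a stakeholder's challenge as evidence and
# updating a belief about control failure probability with a Beta prior.
# All numbers here are hypothetical, for illustration only.

def beta_mean(alpha: float, beta: float) -> float:
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Prior: documentation and test results suggest the control fails rarely.
# Beta(2, 18) has a mean of ~10% failure probability per demand.
prior_alpha, prior_beta = 2.0, 18.0

# New information from the CTO: "I've seen this control buckle under load
# twice in the last five real incidents."
observed_failures, observed_successes = 2, 3

# Bayesian update for a Beta-Binomial model: add the observed counts to the
# prior pseudo-counts.
post_alpha = prior_alpha + observed_failures
post_beta = prior_beta + observed_successes

print(f"Prior failure probability:     {beta_mean(prior_alpha, prior_beta):.0%}")
print(f"Posterior failure probability: {beta_mean(post_alpha, post_beta):.0%}")
# Prior ~10%, posterior ~16%: the CTO's experience moves the estimate
# without throwing away the original analysis.
```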

Reason #2: Cognitive Bias Is Getting in the Way

Sometimes the pushback isn't about missing information. It's that human brains evolved to avoid being eaten by tigers, not to intuitively understand probability distributions.

Daniel Kahneman's work in "Thinking Fast and Slow" is incredibly relevant here. Confirmation bias makes people favor information that supports what they already believe. Anchoring bias makes them stick to the first number they heard. Availability bias makes recent headlines seem more probable than your carefully calculated statistics.

I once presented a model showing that our risk of a major data breach was relatively low, about 5% annually. The CISO immediately pushed back: "That feels way too low. We hear about breaches happening all the time!"

This is classic availability bias. The constant stream of breach headlines made high-impact incidents feel more probable than they actually were.
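
A quick back-of-the-envelope sketch shows why both things can be true at once. The 5% figure is the one from the model above; the size of the peer population is an invented assumption:

```python
# A back-of-the-envelope sketch of why "we hear about breaches all the time"
# is consistent with a low per-organization probability. The 5% figure comes
# from the example above; the population size is a hypothetical assumption.

annual_breach_probability = 0.05   # per organization, per year
comparable_organizations = 4000    # hypothetical population of peer companies

expected_breaches_per_year = annual_breach_probability * comparable_organizations
expected_breaches_per_week = expected_breaches_per_year / 52

print(f"Expected major breaches per year across the population: {expected_breaches_per_year:.0f}")
print(f"Expected headlines per week: {expected_breaches_per_week:.1f}")
# Around 200 incidents a year, roughly 4 headlines a week: a steady drumbeat
# of news, even though any single organization's annual risk is still 5%.
```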

How to Navigate Cognitive Bias:

You can't logic someone out of a position they didn't logic themselves into. However, you can work with their cognitive patterns:

  • Acknowledge their experience first: "I understand why this feels that way given what we see in the news..."

  • Find common ground before highlighting differences

  • Use analogies and stories that work with natural thinking patterns

  • Focus on the decision at hand, not winning the debate

Remember: you're not there to reprogram anyone's brain. You're there to help them make better decisions despite cognitive quirks we all have.

Reason #3: Risk Communication Is Failing

Sometimes the disconnect is simpler than bias or missing knowledge. This is probably the most fixable problem, and the one we mess up most often.

You put up a slide with a beautiful loss exceedance curve. You're proud of it. It tells a story! It shows uncertainty!

Your audience sees what looks like a confusing chart they can't interpret.

Many information security folks have spent their entire careers working with heat maps and red-yellow-green traffic lights. They've never had to interpret a confidence interval or understand what a "95th percentile" means.

I've watched experienced risk managers (smart people who could run circles around me in operational knowledge) completely mix up mean and mode. I've seen executives stare at a five-number summary like it was written in a foreign language. This isn't because they're incompetent. It's because we've done a poor job of translating quantitative concepts into clear language.
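
For readers who want to see where numbers like a "95th percentile" come from, here's a minimal Monte Carlo sketch. The frequency and severity parameters are invented for illustration and don't come from any real assessment:

```python
# A minimal sketch of where "95th percentile" and loss exceedance numbers
# come from. Frequency and severity parameters are invented for illustration.
import random

random.seed(7)
TRIALS = 100_000

annual_losses = []
for _ in range(TRIALS):
    # Hypothetical frequency: a qualifying incident in roughly 30% of years.
    if random.random() < 0.30:
        # Hypothetical severity: heavy-tailed loss when an incident happens.
        annual_losses.append(random.lognormvariate(13.0, 1.2))
    else:
        annual_losses.append(0.0)

annual_losses.sort()
median_loss = annual_losses[int(0.50 * TRIALS)]   # typically $0: most years, nothing happens
p95_loss = annual_losses[int(0.95 * TRIALS)]      # "one year in twenty looks this bad or worse"
prob_exceed_2m = sum(1 for x in annual_losses if x > 2_000_000) / TRIALS

print(f"Median annual loss:             ${median_loss:,.0f}")
print(f"95th percentile annual loss:    ${p95_loss:,.0f}")
print(f"Chance of losing more than $2M: {prob_exceed_2m:.1%}")
# Plain-language translation: "In a typical year we lose nothing. One year in
# twenty, losses reach the 95th percentile or worse. There's roughly a 3%
# chance we lose more than $2M in any given year."
```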

Ironically, I've had more trouble communicating quantitative concepts to my peers in risk and security than to business people. Finance people, econ majors, and MBAs intuitively understand the language of risk.

The Art of Risk Translation

Think of yourself as a translator. You need to know your audience well enough to present the same insights in different ways:

  • For visual learners: Charts and graphs, but with clear explanations

  • For story people: Turn your data into narratives they can follow

  • For bottom-line focused: Lead with the key insight and recommendation

  • For detail-oriented: Provide the methodology for those who want it

I once had a CISO who couldn't make sense of probability distributions but understood perfectly when I explained the analysis as "budgeting for bad things that might happen." Same concept, different language.
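
That "budgeting" framing is just the same distribution collapsed into one planning number. A tiny sketch, using roughly the same hypothetical figures as the Monte Carlo example above:

```python
# A tiny sketch of the "budgeting for bad things" framing: the same analysis,
# collapsed into one planning number. Figures are hypothetical, roughly
# matching the earlier Monte Carlo sketch.

incident_probability_per_year = 0.30    # hypothetical frequency
average_loss_per_incident = 900_000     # hypothetical severity, in dollars

expected_annual_loss = incident_probability_per_year * average_loss_per_incident
print(f"Rough annual 'bad things' budget: ${expected_annual_loss:,.0f}")
# Same underlying numbers as the percentile view, just expressed as a single
# figure a CISO can plan around.
```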


It's Not About Comfort, It's About Trust

Quantitative risk isn't supposed to feel comfortable. It's supposed to be useful.

When someone says your numbers don't feel right, that's not a failure. It's information. It's telling you something important about their knowledge, their cognitive patterns, or your communication approach.

Your job isn't to make everyone feel good about your Monte Carlo simulations. Your job is to build enough trust and understanding that people can make better decisions, even when those decisions involve uncomfortable levels of uncertainty.

Sometimes that means fixing your model because they caught something you missed. Sometimes it means working around cognitive biases. Sometimes it means explaining the same concept several different ways until you find the one that clicks.

The key is always listening first. Engage with the discomfort instead of dismissing it.


A Simple Diagnostic Framework

The next time someone questions your risk numbers, work through these three questions in order:

1. Could they know something I don't? (Missing Information)

  • Ask: "What specific part feels wrong to you?"

  • Listen for operational details, historical patterns, or system quirks

  • Probe: "What have you seen happen in practice that I might have missed?"

2. Are they anchored to a different reference point? (Cognitive Bias)

  • Look for recent headlines, personal experiences, or industry incidents affecting their perception

  • Check if they're comparing to outdated baselines or different scenarios

  • Notice emotional language or "gut feeling" responses

3. Do they understand what I'm showing them? (Communication Gap)

  • Observe body language during technical explanations

  • Ask: "How would you explain this risk to your team?"

  • Test comprehension: "What's the key takeaway for you?"

Start with #1. The most valuable feedback usually comes from operational knowledge you missed. Only move to #2 and #3 if you're confident your analysis is solid.


The Bottom Line

The best risk analysis in the world is worthless if it sits on a shelf because nobody trusts it or understands it. Building that trust requires more than just getting the math right. It requires getting the human part right, too.

Don't get defensive the next time someone says your numbers don't feel right. Get curious. Somewhere in that conversation is the key to making your quantitative work actually useful in the real world of organizational decision-making.

The next time someone questions your numbers, remember this: their discomfort might be the most valuable feedback you get. Listen to it. That's ultimately what matters most.



