Metrics Tony MartinVegue Metrics Tony MartinVegue

An Evening with Doug Hubbard: The Failure of Risk Management: Why it's *Still* Broken and How to Fix It

What do dog love, company reputation, and the Challenger explosion have in common? In this recap of Doug Hubbard’s live Measurement Challenge, we explore how even the most “immeasurable” things in risk and life can, in fact, be measured—if we frame them right.

unnamed.jpg

There seems to be two different types of risk managers in the world: those who are perfectly satisfied with the status quo, and those who think current techniques are vague and do more harm than good. Doug Hubbard is firmly in the latter camp. His highly influential and groundbreaking 2009 book titled The Failure of Risk Management: Why it’s Broken and How to Fix It takes readers on a journey through the history of risk, why some methods fail to enable better decision making and – most importantly – how to improve. Since 2009, however, much has happened in the world of forecasting and risk management: the Fukushima Daiichi Nuclear Disaster in 2011, the Deepwater Horizon Offshore Oil Spill in 2019, multiple large data breaches (Equifax, Anthem, Target), and many more. It makes one wonder; in the last 10 years, have we “fixed” risk?

Luckily, we get an answer. A second edition of the book will be released in July 2019, titled The Failure of Risk Management: Why it's *Still* Broken and How to Fix It. On September 10th, 2018, Hubbard treated San Francisco to a preview of the new edition, which includes updated content and his unique analysis on the events of the last decade. Fans of quantitative risk techniques and measurement (yes, we’re out there) also got to play a game that Hubbard calls “The Measurement Challenge,” in which participants attempt to stump him with questions they think are immeasurable.

IMG-1188.JPG

It was a packed event, with over 200 people from diverse fields and technical backgrounds in attendance in downtown San Francisco. Richard Seiersen, Hubbard’s How to Measure Anything in Cybersecurity Risk co-author, kicked off the evening with a few tales of risk measurement challenges he’s overcome during his many years in the cybersecurity field. 

Is it Still Broken?

The first edition of the book used historical examples of failed risk management, including the 2008 credit crisis, the Challenger disaster and natural disasters to demonstrate that the most popular form of risk analysis today (scoring using ordinal scales) is flawed and does not effectively help manage risk. In the 10 years since Hubbard’s first edition was released, quantitative methods, while still not widely adopted, have made inroads in consulting firms and companies around the world. Factor Analysis of Information Risk (FAIR) is an operational risk analysis methodology that shares many of the same approaches and philosophies that Hubbard advocates for and has made signification traction in risk departments in the last decade. One has to ask – is it still broken?

It is. Hubbard pointed to several events since the first edition:

  • Fukushima Daiichi Nuclear Disaster (2011)

  • Deepwater Horizon Offshore Oil Spill (2010)

  • Flint Michigan Water System (2012 to present)

  • Samsung Galaxy Note 7 (2016)

  • Amtrak Derailments/collisions (2018)

  • Multiple large data breaches (Equifax, Anthem, Target)

Risk managers are fighting the good fight in trying to drive better management decisions with risk analysis, but by and large, we are not managing our single greatest risk: how we measure risk.

Hubbard further drove the point home and explained that the most popular method of risk analysis, the risk matrix, is fatally flawed. Research by Cox, Bickel and many others discovered that the risk matrix adds errors, rather than reduces errors, in decision making.

Fig 1: Typical risk matrix

Fig 1: Typical risk matrix

See Fig. 1: “Should we spend $X to reduce risk Y or $A to reduce risk B?” It is not clear how to answer this question, using the risk matrix methodology.  

How do we fix it? Hubbard elaborated on the solution at length, but the short answer is: math with probabilities. There are tangible examples in the first edition of the book, and will be expanded upon in the second edition.

The Measurement Challenge!

IMG-1194.JPG

A pervasive problem in business is the belief that some things, especially those that are intangible, cannot be measured. Doug Hubbard has proven, however, that anything can be measured. The technique lies in understanding exactly what measurement is and framing the object under measurement in a way that facilitates measurement. Based on this idea, Hubbard created a game that he likes to do on his website, books and occasionally when he speaks at an event, called The Measurement Challenge. The Measurement Challenge is a simple concept: Hubbard will take questions, concepts, items, or ideas that people perceive to be immeasurable, and he will demonstrate how to measure them. The Measurement Challenge is based on another one of Hubbard’s books, How to Measure Anything: Finding the Value of Intangibles in Business in which simple statistical techniques are described to demonstrate how to measure (literally!) anything.

When all the participants checked into the event that evening, The Measurement Challenge was briefly explained to them, and they were given paper to write down one item they thought was hard or impossible to measure. Some entries actually have been measured before, such as measuring the number of jelly beans in a jar, the number of alien civilizations in the universe and decomposition problems, similar to the number of piano tuners in Chicago. The most interesting ones were things that are intangible, and which is of course, is Hubbard’s specialty.  

Measuring intangibles requires a clear definition of what it is you're trying to measure.

It’s useful to keep in mind the clarification chain, described in Hubbard’s book How to Measure Anything: Finding the Value of Intangibles in Business. The clarification chain is summed up as three axioms:

  1. If it matters at all, it is detectable/observable.

  2. If it is detectable, it can be detected as an amount (or range of possible amounts.)

  3. If it can be detected as a range of possible amounts, it can be measured.

 All entries were collected, duplicates were combined and tallied up for final voting. The finalist questions were put up on an online voting system for all participants to vote on from their smartphones. There were a diverse number of excellent questions, but two were picked to give plenty of time to delve into the concepts of measurement and how to decompose the problems.

Some interesting questions that weren’t picked:

  • Measure the capacity for hate

  • The effectiveness of various company training programs

  • The value of being a better calibrated estimator

  • How much does my daughter love me?

The winning questions were:

  • How much does my dog love me?, and

  • What is the probable reputation damage to my company resulting from a cyber incident?

Challenge #1: How much does my dog love me?

How much does my dog love me? This is a challenging question, and it combined many other questions that people had asked of a similar theme. There were many questions on love, hate and other emotions, such as: How do I know my daughter loves me? How much does my spouse love me? How can I measure the love between a married couple? How much does my boss hate me? If you can figure out how to measure love, you would also know how to measure hate. Taking that general theme, “How much does my dog love me?” is a good first measurement challenge.

Hubbard read the question, looked up somewhat quizzically and told the person who had asked the question to raise their hand. He asked a counter question: “What do you mean by love?” Most people in the audience, including the person who’d asked the question, were unsure how to answer. Pausing to let the point be made, Hubbard then started to explain how to solve this problem.

He explained that the concept of “love” has many different definitions based on who you are, your cultural differences, age, gender, and many other factors. The definition of love also varies by the object of which you are forming the question around. For example, the definition of love from an animal is very different from the definition of love from a child, which is also very different from the love from a spouse. After explaining, Hubbard asked again: “What do you mean by love from your dog? What does this mean?”

People started throwing out ideas of what it means for a dog to love an individual, naturally using the clarification chain as a mental framework. Observable, detectable behaviors were shouted out, such as:

  • When I come home from work and my dog is happy to see me. She jumps up on me. This is how I know she loves me.

  • I observe love from my dog when he cuddles in bed after a long day at work.

  • Some dogs are service animals and are trained to save lives or assist throughout your day. That could also be a form of love.

Hubbard asked a follow-up question, “Why do you care if your dog loves you?” This is where the idea of measuring “love” started to come into focus for many of the people in the audience. If one is able to clearly define what love is, able to articulate why one personally cares, and frame the measurement as what can be observed, meaningful measurements can be made.

The last question Hubbard asked was, “What do you already know about this measurement problem?” If one’s idea of love from a dog is welcome greetings, one can measure how many times the dog jumps up, or some other activity that is directly observable. In the service animal example, what would we observe that would tell us that the dog is doing its job? Is it a number of activities per day that that the dog is able to complete successfully? Would it be passing certain training milestones so that you would know that the dog can save your life when it's needed? If your definition of love falls within those parameters, it should be fairly easy to build measurements around what you can observe.

Challenge #2: What is the probable reputation damage to my company resulting from a cyber incident?

The next question was by far one of the most popular questions that was asked. This is a very interesting problem, because some people would consider this to be an open and shut case. Reputation damage has been measured many times by many people and the techniques are fairly common knowledge. However, many risk managers proudly exclaim that reputation damage simply cannot be measured for various reasons: the tools don't exist, it’s too intangible, or that it's not possible to inventory all the various areas a business has reputation, as an asset to lose.

Just like the first question, he asked the person that posed this problem to raise their hand and he asked a series of counter questions, designed to probe exactly what they mean by “reputation,” what could you observe that would tell you that have good reputation, and  as a counter question, what could you observe that would tell you that you have a bad reputation? 

Framing it in the form of observables started an avalanche of responses from audience. One person chimed in saying that if a company had a good reputation, it would lead to customers’ trust and sales might increase. Another person added that an indicator of a bad reputation could be a sharp decrease in sales. The audience got the point quickly. Many other ideas were brought up:

  • A drop in stock price, which would be a measurement of shareholder trust/satisfaction.

  • A bad reputation may lead to high interest rates when borrowing money.

  • Inability to retain and recruit talent.

  • Increase in public relations costs.

Many more examples, and even more sector specific examples, were given by the audience.  By the end of this exercise, the audience was convinced that reputation could indeed be measured, as well as many other intangibles.  

Further Reading

Hubbard previewed his new book at the event and everyone in the audience had a great time trying to stump him with measurement challenges, even if it proved to be futile. These are all skills that can be learned. Check out the links below for further reading.

 Douglas Hubbard

The Failure of Risk Management, by Douglas Hubbard

How to Measure Anything, by Douglas Hubbard

 More Information of Risk Matrices

Read More
Cognitive Bias Tony MartinVegue Cognitive Bias Tony MartinVegue

The Semi-Attached Figure: How to spot manipulative security advertising claims

Out of all the ways to lie with statistics and manipulate peoples’ perceptions, the semi-attached figure may be the most prevalent. It’s hard to spot unless you are really looking for it because it’s sneaky, subtle and takes a fair bit of concentrative analysis to identify. A semi-attached figure occurs when proof is given for a claim, but when the reader looks at it closely, the proof and the claim are not related. It’s called “semi-attached” because the proof seemsto support a claim, but upon inspection, it doesn't. Marketing and advertising professionals are absolute masters of the semi-attached figure. 

1.png

If you can't prove what you want to prove, demonstrate something else and pretend that they are the same thing. In the daze that follows the collision of statistics with the human mind, hardly anybody will notice the difference.
-Darrell Huff, “How to Lie with Statistics”

Out of all the ways to lie with statistics and manipulate peoples’ perceptions, the semi-attached figure may be the most prevalent. It’s hard to spot unless you are really looking for it because it’s sneaky, subtle and takes a fair bit of concentrative analysis to identify. A semi-attached figure occurs when proof is given for a claim, but when the reader looks at it closely, the proof and the claim are not related. It’s called “semi-attached” because the proof seemsto support a claim, but upon inspection, it doesn't. Marketing and advertising professionals are absolute masters of the semi-attached figure. 

The semi-attached figure is a hard concept to understand without tangible examples, so let’s start out with a few easy marketing claims outside of the security field.

Example 1: Now, with Retsyn!

This example was touched on by David Lavenda in a post at FastCompany. It’s such a familiar advertising campaign that went on for decades, that most of us can recite parts of it from memory. It’s also one of the best examples of the semi-attached figure.

2.png

In Certs commercials, the narrator says “Want fresh, clean breath? Get the only mint with Retsyn,” or a similar slogan. Most viewers will hear this and unconsciously accept the “…with Retsyn” phrase as proof that Certs gives someone fresh, clean breath. It soundsgood – it actually sounds great! It sounds like it will make stinky breath less stinky. Here’s where the claim and the proof are semi-attached: you the audience, have bad breath and need evidence as to why you should buy Certs. Here’s the proof – Certs has Retsyn.

What exactly is Retsyn? According to an article by Slate, it’s “…natural flavoring, partially hydrogenated cottonseed oil, and a nutritional supplement called copper gluconate, none of which will kill bacteria.” The proof and the claim have nothing to do with each other, but it’s very effective as a marketing technique.

Example 2: These cigarettes are fine for your health

Post-World War II to the early 1970’s was the golden age of tobacco marketing in the United States before advertising restrictions were put in place. Cigarette advertising downplayed the unhealthy effects of smoking – and in many cases, made the case that it was actually healthy, and cured various maladies even though a strong statistical link between smoking and lung cancer was established in the 1940’s.

People born in the 1980’s and after have probably never seen a cigarette ad or have a vague recollection of one, perhaps forgetting or not knowing how insidiously manipulative tobacco marketing used to be. Due to the overwhelming evidence that started to build in the 1950’s that cigarettes cause serious illnesses and death, advertising had to counteract this message with pushing the "cool factor," downplaying health issues and touting benefits. To convince people to buy cigarettes, contrary to extensive evidence that they should not, marketing had to find new ways to be effective and directly play to human emotion. The semi-attached figure plays a vital role in achieving that.

This 1949 ad from Viceroy Cigarettes is a perfect application of the semi-attached figure. This came out at a time in which public health advocates started discussing the link between smoking and cancer, and this ad is an attempt to counter the message. 

4.png

The claim here is implied: cigarettes are not harmful to your health. There are two pieces of proof provided: First, Viceroys filter the smoke. (The truth is irrelevant: research indicates filtersmay increase lung cancer risk). The second proof is, your dentist recommends Viceroys, with a cartoon drawing of a dentist. The problem here is obvious. The dentist isn’t real – but the reader is led to think that either this man is their dentist, or whoever really is their dentist would surely also recommend Viceroys.

 Example #3: Exactly what is Unbreakable?

The Unbreakable Linux bus at the RSA conference

The Unbreakable Linux bus at the RSA conference

Starting around 2005, on the 101 freeway corridor between Palo Alto and San Francisco, Oracle advertising started to appear. It featured an armored Linux penguin mascot and the tagline “Unbreakable Linux.” The same ads showed up for years at RSA security conferences, emblazoned on the sides of busses that took trips between the Moscone convention center and area hotels. This claim refers to a product called Oracle Linux, which is based on Red Hat. Oracle has also used the word “unbreakable” to refer to other software products.

6.png

This is a classic semi-attached figure – Oracle makes a statement, “unbreakable,” and leads the reader to associate the statement with a piece of software and pretends it’sthe same thing. The claim and proof are taking advantage of the perception that Linux enjoys greater stability when compared to competitors. Of course, the software isn’t “unbreakable” (no software is), and Oracle Linux has been subject to many of the same vulnerabilities all flavors of Linux has had over the years. 

Unbreakable. This Linux distro that cannot be… what? Hacked? Experience downtime? Patched without rebooting? Does this refer to high availability? It’s very reminiscent of “with Retsyn.” It sounds great, but when it’s really analyzed, the reader is left thinking, what does that even mean?

Oracle still uses the term “Unbreakable,” but backtracked and admitted that it’s a marketing tagline, describing Oracle’s commitment to product security and does not refer to any specific product, feature or attribute. 

Oracle is no stranger to hyperbole. This is the same company who’s marketing slogan used to be “Can’t break it. Can’t break in.”

Example #4: We won the Cyber!

100% true story; only the names have been changed to protect the guilty.

Timeshare, used car and cyber security vendor sales people all have a special place in my heart. I was in a security vendor sales pitch many years back, and the salesman projected this graph of the number of cybersecurity incidents reported by federal agencies from 2006 to 2015 on the screen. The vendor was selling next generation firewall technology.

7.png

The room fell silent to the stark reality on the chart before us as the vendor started their pitch:

“Look at this graph – from 2006 to today, cyberattacks have increased over 10-fold! We’re at war. This is proof that we’re at cyberwar and you must protect yourself. The current equipment you have cannot protect your company from these types of unrelenting, sophisticated, advanced AND persistent attacks...”

The salesman went on and on and on. I love stuff like this. I love it when vendors build their pitch around a house of cards: one tap and it all falls apart. Where’s the semi-attached figure here?

The vendor was trying to lead us to a path to believe that the sky is falling. Maybe it is, maybe it isn’t – I have plenty of reason to believe that there is some type of cyber-related doom on the horizon, but this graph has nothing to do with it. In order to find the semi-attached figure, let’s ask a few probing questions. 

  • It would appear that cyberattacks have increased from 2006 to 2015. Why? Are there more computers in 2015 than in 2006?

  • What is the ratio of attack targets and surface versus attacks?

  • Is detection of attacks better in 2015 than it was in 2006, meaning we have the ability to detect and measure a larger range of attacks?

  • What is being measured here?

  • What does the Federal government consider an attack?

  • What do attacks against the Federal government have to do with my company (a bank, at the time)

8.png

The claim is: we’re going to get hacked unless we upgrade firewalls. The proof is this graph – from a different sector, provided without context, using an unknown method of measurement.

The graph above is from 2015. See 2016’s graph below – and I have great news! WE WON THE CYBER!

9.png

No, sorry, we didn’t. The federal government changed the definition and reporting requirements of a cyber-attack in 2016. They no longer consider a simple port scan an attack. In other words, what was being measured and the unit of measurement was changed from 2015 to 2016. Not only was the vendor pitch a semi-attached figure, the salesman was also guilty of the post hoc fallacy, also known as correlation does not imply causation.

How to spot the semi-attached figure

While using the semi-attached figured is manipulative, it’s unlikely to end any time soon. It’s far too effective. Keep in mind that the most effective marketing plays on human nature’s greatest fears and aspirations. Here are a few tips to spot and resist the lure of the semi-attached figure.

Anyone can take a number, graph, data visualization, or statistic and shoehorn it into proof for a claim. Just because something has a number or seems “sciencey” it doesn’t mean it can be automatically trusted. 

Spot the claim, such has “this product makes you hacker-proof” or “Unbreakable!” What’s the supporting proof?  Ask yourself: does the proof support the claim, or is it semi-attached?

Last, be especially wary of authority figures: doctors, dentists, cybersecurity experts, a CEO or past or present government officials. It could be a legitimate opinion or endorsement, but also remember that nearly everyone will say nearly anything if they get paid enough.

Here’s a challenge for readers: after you read this post, think about the semi-attached figure next time you are at the Blackhat or RSA vendor expo halls. How many do you see?


This post is part of a series titled How to Lie with Statistics, Information Security Edition– visit the link to read more.

Read More
Cognitive Bias Tony MartinVegue Cognitive Bias Tony MartinVegue

The Mad Men of Cyber Security Advertising

The framing effect is used in many places, intentionally and unintentionally, but is most present in advertising. It's a very effective way to frame perceptions.

a1f0a-1higqunkmln6ogkkpi0as-q.png

“We can be blind to the obvious, and we are also blind to our blindness.” 

― Daniel Kahneman, Thinking, Fast and Slow

There’s a famous line spoken by advertising executive Don Draper in AMC’s Mad Man: “If you don’t like what’s being said, change the conversation.”

This is a loaded quote and sums up how the ad men of Madison Avenue use words, emotion, a little bit of statistics and a whole lot of creativity to change minds and sell products. Consider this example: American Tobacco, parent company of Lucky Strike Cigarettes, was one of the first companies to use physicians in cigarette ads — a trend that disturbingly lasted until 1953.

91154-1hakps-besctbxscga_xwqg.png

There are several psychological sleight of hands tricks, all designed to elicit positive feelings from the reader and de-emphasize any negative ones. The smiling doctor, the phrases “less irritating,” and “It’s toasted” combined with a seemingly large number of endorsing doctors are examples of what is called the framing effect. The framing effect is a cognitive bias that describes a situation in which a person’s perception about something is influenced by the way it’s presented. Who wouldn’t want something that’s both endorsed by doctors and toasted? Psychologists Daniel Kahneman and Amos Tversky explored the framing effect in great detail as they developed Prospect Theory, for which Kahneman would later win the Nobel Prize in Economics.

The framing effect is used in many places, intentionally and unintentionally, but is most present in advertising, as in the positive frame in Lucky Strike ad. It’s a very effective way to frame statistics or concepts to change the audience perception about a marketing claim. Once you’re aware of it, you’ll start to see it everywhere. Take the following Kaspersky infographic as another example of a negative frame:

Source: Kaspersky Security Bulletin 2016

Source: Kaspersky Security Bulletin 2016

Kaspersky Lab is a Russia-based cybersecurity and anti-virus vendor. The above infographic is in reference to a study the firm conducted that shows that 1 in 5 small and medium sized business that paid a ransom in response to a ransomware infection did not get their files decrypted by the perpetrators.

There are several examples the framing effect in this infographic:

  • Use of the word “never” immediately frames the perception negatively.

  • Usage of the statistic “One in five” also creates a negative frame.

  • Red highlighting of the phrase “never got their data back” shifts the conversation to urgency, needing immediate attention and high risk.

  • The one, single victim clipart that didn’t get their files back is as big as the combined clipart of the four victims that did get their files back. The data visualization used is disproportionate to the actual data.

Just like the cigarette ad from the 1930’s, the use of graphics and data pull the reader’s perception in a very specific direction. In Kapersky’s case, the direction is very negative. The reader is left with the feeling that paying a ransom in response to ransomware infections is a very bad idea because, you may never get your files back.(…and gee whiz, I sure wish I had some anti-virus software to protect me against this terrible thing…)

The same exact base statistic can be flipped and turned into a positive frame.

ef7a4-1_omwycj0akrjycov8nwgoa.png

Everything about the new infographic is the same, but flipped: “4 out of 5” of something positive as opposed to “1 out of 5” of something negative; flipped clipart, positive colors and words are framed positively.

Let’s take it one step further:

ac260-1nwgsdg4x-dey6jxup1agza.png

80% seems quite good, and note that 80% is equivalent to 4 of out 5. The extra exclamation points are added to further reinforce the positive frame and add excitement around the statistic.

It should be no surprise to anyone that Kaspersky — a company that sells ransomware mitigation software — uses the framing effect, but they’re not the only one. It’s everywhere, and in fact, there are classes, blogs and instructional videos on how to take advantage of the framing effect specifically as a marketing tool. Overcoming these psychological tricks is not easy because they take advantage of a deep-rooted human trait: loss aversion. But awareness of these marketing tactics is a step forward in improving decision-making skills.


This post is part of a series titled How to Lie with Statistics, Information Security Edition– visit the link to read more.

Read More
Decision Analysis Tony MartinVegue Decision Analysis Tony MartinVegue

GDPR, Ragnarok Online and Decision Analysis

What does an old MMORPG have to do with modern data privacy laws? In this post, I use Ragnarok Online’s sudden EU exit to show how GDPR compliance can trigger real-world decisions—and why sometimes, the rational move is to walk away.

cf6f7-1sxtzto-565kgqgmx5sw_oa.jpeg

Did you ever think you would read about GDPR, risk analysis and a 16-year old MMORPG in the same place? You can stop dreaming, because here it is!

First: GDPR, tl;dr

General Data Protection Regulation(GDPR) is a massive overhaul of privacy on the Internet that applies to all European Union (EU) persons. Any company outside of the EU needs to comply with GDPR if they store personal data of any EU person. On May 25, 2018 GDPR becomes enforceable, and many companies — including US-based companies with data on EU persons — have been making changes to become compliant. (This explains why you have been receiving so many privacy notice updates lately.)

The cost of GDPR compliance is not cheap or easy, and the price of non-compliance can involve hefty fines and litigation. Every company that stores personal data has most likely spent the last two years performing analysis on whether GDPR applies to them, and if so, what the cost of compliance is.

What Happened with Ragnarok Online?

This leads to a story that took the gaming world by surprise: On April 25, 2018, the online gaming company Gravity Interactive announced they are shutting down all games and services in the EU, effective May 25th– the day GDPR takes effect. All EU-based IP-addresses will be blocked. Understandably, there’s an uproar, especially from EU-based players of Ragnarok Online, one of Gravity Interactive’s most popular games. Gravity Interactive has operated EU-based servers for 14 years and to many, the sudden decision to pull out of the market entirely seems unfair and unexpected. It’s understandable that people would be upset. The company has been the subject of much derision over the decision. But clearly there’s more to the story disappointed gamers.

This is an interesting case study because it illustrates several points in the decision-making process:

  • How a quantitative risk analysis can be used to help with strategic business decisions;

  • Every sound risk analysis starts with a clearly defined question; and

  • Avoidance can be an appropriate way to eliminate risk exposure.

Let’s analyse this problem with, first, forming a question that articulates the decision being made, then identifying possible choices, and last, estimating costs for each choice.

The Question

Every company faces strategic decisions. Sound, informed, decision making requires information about benefits and risk exposure. Risk analysis always needs to answer a question, in other words, a decision that someone needs to make. In our case, the decision for Gravity Interactive is whether to invest the time, money and resources to achieve GDPR compliance. GDPR introduces data privacy, security, compliance and legal requirements that are new for most US-based companies, therefore the cost of compliance can be significant. Most companies, US-based or otherwise, spent the last two years performing analyses of GDPR compliance: the cost of complying with the regulations from many perspectives, including technological. Companies can comply with GDPR, ignore GDPR or pull out of the EU market and analysis will help find the best course of action to take.

1af72-1kpzoxwiuw1rrpkwx2anshw.png

Decision: should we invest in GDPR compliance?

The Decisions

A company faces three options when deciding whether to invest in GDPR compliance. First, they need to price out the cost of compliance. This can be an upfront cost, as well as ongoing. Compliance involves funding and starting projects to align people, processes and technologies with applicable regulations. The analysis in this area would include a survey of all changes the company needs to make, estimating the cost, and performing a cost-benefit analysis.

The next option is to ignore compliance. This is where risk analyses are most useful to help a company. Ignoring compliance is always an option — and as risky as it may sound, many companies choose to ignore laws and regulations; some unintentionally, some wilfully. This happens more often than most of us should be comfortable with. We typically find out about this when companies are caught, regulators levy penalties and the story is splashed all over the news. At the same time, many companies successfully fly under the regulatory radar for years without being caught. A risk analysis on compliance risk would involve the length of time it would take for regulatory action to take place (if it takes place), what the regulators would force the company to do and, penance projects to achieve compliance.

Lastly, they can choose to withdraw from the market altogether. In the risk management world, we call this risk avoidance.This is the elimination of risk by choosing not to pursue a potentially risk generating activity. In this case, a company can avoid non-compliance risk by exiting the EU market.

The box below contains sample output of these different analyses. I obviously don’t know any of the costs or risk associated with Gravity Interactive’s decision, so I created a sample Company A with example values.

87e1a-1lsn6bnd7me3rcptjvtbnkg.png

Company A: Projected Costs of GDPR Compliance Options

It’s clear that the company should not ignore compliance. This activity creates significant risk exposure. It’s likely they would have to pay fines, face litigation and be forced to make changes to comply with GDPR anyway.

Based on the two remaining options — comply with GDPR or exit the market, we can perform a cost/benefit analysis of current EU market share, projected EU growth and balance it against the cost of GDPR compliance. Based on my analysis of Company A, it should exit the EU market.

If I were responsible for risk management at either Company A or Gravity Interactive, I would want to perform additional risk analyses on the current state of data privacy and security. If compliance with GDPR is too costly, does the company currently comply with US privacy and security regulations?

In the case of Gravity Interactive, the company clearly decided that forgoing a portion of its customer base, losing the loyalty of its EU fans and risking the ire of gamers worldwide was worth the potential costs of compliance or non-compliance with GDPR. Or in short, to avoid being stuck between Ragnarok and a hard place.

Read More
Quantitative Risk Tony MartinVegue Quantitative Risk Tony MartinVegue

Black Swans in Risk: Myth, Reality and Bad Metaphors

Think you understand Black Swan events? This post dismantles the myth, exposes how risk pros misuse the term, and introduces a better way to frame extreme risk.

cd5ad-1htkchb-3jnmslay0oy6pmw.jpeg

The term “Black Swan event” has been part of the risk management lexicon since its coinage in 2007 by Nassim Taleb in his eponymous book titled The Black Swan: The Impact of the Highly Improbable. Taleb uses the metaphor of the black swan to describe extreme outlier events that come as a surprise to the observer, and in hindsight, the observer rationalizes that they should have predicted it.

The metaphor is based on the old European assumption that all swans are white, until black swans were discovered in 1697 in Australia.

Russell Thomas recently spoke at SIRACon 2018 on this very subject in his presentation, “Think You Know Black Swans — Think Again.” In the talk, and associated blog post, Thomas deconstructs the metaphor and Taleb’s argument and expounds on the use and misuse of the term in modern risk management. One of the most illuminating areas of Thomas’ work is his observation that the term “Black Swan” is used in dual ways: both to dismiss probabilistic reasoning and to extend it to describe certain events in risk management that require extra explanation. In other words, Taleb’s definition of Black Swan is a condemnation of probabilistic reasoning, i.e., forecasting future events with some degree of certainty. The more pervasive definition is used to describe certain types of events within risk management, such as loss events commonly found in risk registers and heat maps in boardrooms across the globe. If it seems contradictory and confusing, it is.

From a purely practitioner point of view, it’s worth examining why the term Black Swan is used so often in risk management. It’s not because we’re trying to engage in a philosophical discussion about the unpredictability of tail risks, but rather that risk managers feel the need to separately call out extreme impact events, regardless of probability, because they pose an existential threat to a firm. With this goal in mind, risk managers can now focus on a) understanding why the term is so pervasive, and b) find a way to communicate the same intent without logical fallacies.

Black Swan Definition and Misuse

The most common definition of a Black Swan is: an event in which the probability of occurrence is low, but the impact is high. A contemporary example is a 1,000 year flood or 9/11. In these, and similar events, the impact is so extreme, risk managers have felt the need to classify these events separately; call them out with an asterisk (*) to tell decision makers not to be lulled into a false sense of security because the annualized risk is low. This is where the office-talk term “Black Swan” was born. It is an attempt to assign a special classification to these types of tail risks.

This isn’t an entirely accurate portrayal of Black Swan events, however, according to both Taleb and Thomas.

According to Taleb, a Black Swan event has these three attributes:

First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme ‘impact’. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.

After examining the complex scenarios in which these types of conditions exist, it’s clear that the concept Taleb is trying to describe is well beyond something that would be found in a risk register, and is, in fact a critique of modern risk management techniques. It is an oxymoron to include this term in a risk program or even use it to describe risks.

Despite these points, the term has entered the everyday lexicon and, along with Kleenex and Cyber, it’s here to stay. It’s become a generally accepted word to describe low probability, high impact events. Is there something better?

Factor Analysis of Information Risk (FAIR), the risk analysis model developed by Jack Jones, doesn’t deal directly on a philosophical level with Black Swan events, but it does provide risk managers with a few extra tools to describe circumstances around low probability, high impact events. These are called risk conditions.

“Risk Conditions”: The FAIR Way to Treat a Black Swan

Risk is what matters. When scenarios are presented to management, it doesn’t add much to the story if one risk has a higher probability or a lower probability than other types of risks, or if one impact is higher than the other. FAIR provides the taxonomy to assess, analyze and report risks based on a number of factors (e.g. threat capability, control strength, frequency of a loss event). Most risk managers have just minutes with senior executives and will avoid an in-depth discussion of individual factors and will, instead, focus on risk. Why do some risk managers focus on Black Swan events then?

Risk managers use the term because they need to communicate something extra — they need an extra tool to draw attention to those few extreme tail risks that could outright end a company. There may be something that can be done to reduce the impact (e.g. diversification of company resources in preparation for an earthquake) or perhaps nothing can be done (e.g. market or economic conditions that cause company or sector failure). Nevertheless, risk managers would be remiss to not point this out.

Risk conditions go beyond simply calling out low probability, high impact events. They specifically deal with low probability, high impact events that do not have any or have weak mitigating controls. Categorizing it this way makes sense when communicating risk. Extreme tail risks with no mitigating controls may get lost in annualized risk aggregation.

FAIR describes two risk conditions: unstable risk and fragile risk.

Unstable risk conditions describe a situation in which the probability of a loss event is low and there are no mitigating controls in place. It’s up to each organization to define what “low probability” means, but most firms describe events that happen every 100 years or less as low probability. An example of an unstable risk condition would be a DBA having unfettered, unmonitored access to personally identifiable information or a stack of confidential documents sitting in an unlocked room. The annualized loss exposure would probably be relatively low, but controls aren’t in place to lower the loss event frequency.

A fragile risk condition is very similar to an unstable risk condition; however, the distinction is that there is one control in place to reduce the threat event frequency, but no backup control(s). An example of this would be a critical SQL database is being backed up nightly, but there’re no other controls to protect against an availability event (e.g. disk mirroring, database mirroring).

Conclusion

Don’t fight the Black Swan battle — leave that to philosophers and risk thinkers — but try to understand why someone is calling something a Black Swan. Provide the tools, such as those provided by the FAIR taxonomy, to help business leaders and your colleagues conceptualize actual risk. Risk conditions describe these types of events and the unique risks they pose with greater clarity and without outdated, often misused metaphors.

Originally published at www.fairinstitute.org.

Read More
Cognitive Bias Tony MartinVegue Cognitive Bias Tony MartinVegue

Will the Real “Year of the Data Breach” Please Stand Up?

For over a decade, nearly every year has been dubbed the "Year of the Data Breach"—and it’s getting ridiculous. This post breaks down the history, the hype, and why it’s time to retire the phrase once and for all.

ac90a-1wkiym1va4s966dxkpgt6nw.jpeg

My New Year’s Day ritual has been the same for nearly 10 years now: a late breakfast, a cup of strong coffee and a scan of security blogs and news for two things that always make me chuckle: cyber predictions for the new year, and a retrospective that declares the past year the “Year of the Data Breach.” Kelly Shortridge perfectly parodied the former and I actually thought we might go a year without the latter, until I found this headline on Bloomberg news in which 2017 is named the Year of the Data Breach:

aef59-1eph4bzenvmorq81qdyc7xa.png

Source: Bloomberg;

https://www.bna.com/2017-year-data-b73014473359/

If you are wondering to yourself, where have I seen this before…? It’s nearly every year. 10 of the last 12 years, to be exact.

Here’s a retrospective on the last 12 years and a glimpse into 2018’s prospects.

2005

2005 was the first year ever to be declared “The Year of the Data Breach” by many media outlets, such as InfoWorld. The phrase “data breach” entered into everyday usage in 2005 as well, due to data breach notification laws being enacted, increased data breach litigation as well as Ameritrade, CitiGroup and CardSystems all disclosing incidents.

2006

2006 was a big year for data breaches — it featured the AOL search data leak scandal and the US Department of Veterans Affairs breach. It caused one blogger and one security vendor to dub 2006 the year of the data breach.

2007

Attrition.org, the Identify Theft Resource Center and the Canadian Federal Privacy Commissioner in a letter to Parliament all declared 2007 “the year of the data breach.” I remember 2007 for two things: Britney Spears’ sad meltdown and the TJ Maxx data breach.

2008

Nothing! 2008 is not the year of the data breach! Good job, 2008.

2009

If 2005, 2006 and 2007 were all the year of the data breach, 2009 is the year of the MEGA DATA BREACH, according to Forbes and a security vendor. It was a big one, primarily due to the Heartland Payment Systems data breach which was a compromise of 130 million records.

2010

After the MEGA year of 2009, we all decided to take a break.

2011

After 2008 and 2010 were not the year of the data breach, it was as if security journalists, vendors and cyber experts all stood up and shouted, in unison, “NEVER AGAIN! There shall never be a year that is not the Year of the Data Breach!”

And a good year it was. Trend Micro called it and Brian Krebs, among many others referenced it. The most notable incident was the Sony Playstation Network suffering a prolonged service outage and data breach.

2012

A small security vendor, in a year end retrospective, named 2012 the “Year of the Data Breach,” with breaches at Yahoo, Zappos and several high-profile incidents in the government sector dominating the news. It was also the “Year of the Data Breach in New Zealand,” according to the country’s privacy commissioner.

2013

2009 wants its adjective back. Symantec, in the 2013 Internet Security Threat Report, dubbed 2013 the “Year of the Mega Data Breach,” citing attacks on small and medium-sized businesses and the government sector. Others called it the “Year of the Retailer Breach” due to incidents at Target and Adobe.

2014

Assuming we could only have one “Year of the Data Breach,” 2014 would have to be the strongest contender. There were a massive amount of incidents in 2014: UPS, Michael’s, Home Depot, Jimmy John’s, Staples and JP Morgan Chase. The aforementioned are all eclipsed by, The Hack of the Century (according to Fortune): the Sony Pictures Entertainment hack.

Most media outlets dubbed 2014 the “Year of the Data Breach,” as well as Advisen, Trend Micro and Tripwire’s State of Security.

2015

I declare 2015 the “Year of Superlatives.” Here is how the year was reported:

Many Americans were affected by data breaches in 2015, with the most notable incidents occurring at Ashley Madison, OPM, Anthem and the IRS.

2016

After 2014 and 2015, one would think it would be time to retire the phrase “Year of the…” and think of something else. Nope.

A small law firm specializing in data privacy, SecureWorld, and a radio host named 2016 the “Year of the Data Breach.”

In a completely perplexing statement, Ars Technica recognized 2014 and 2015 as the “Year of the Data Breach” and also issued a challenge:

[I]f pundits don’t label [2016] ‘the year of the data breach’ — like a one-phase Chinese zodiac for the 21st century — they’re not doing their jobs at all.

2017

Bloomberg declared 2017 the “Year of the Data Breach,” citing incidents at Equifax, Yahoo and Uber. Experian also jumped on the bandwagon.

2018: a cyber prediction

Combining my two favorite things: cyber predictions and “year of the data breach” declarations, the non-profit Information Security Forum (ISF) stated that 2018 will be the “year of the data breach.”

Conclusion

Much has been written about consumer data breach fatigue. I have no doubt that breach fatigue is real and headlines like this, year over year, contribute to it. When headlines about cybersecurity cross the line into hyperbole, it’s time to re-think how we present the industry’s most pressing problems to the rest of the world. As it stands now, declaring a year the “year of the data breach” has become virtually meaningless. We know that data breaches are going to occur every year. Perhaps, starting this year, we can pull out the one notable data breach as the “Data Breach of the Year,” instead of naming the whole year the “Year of the Data Breach.”

Read More
Cognitive Bias Tony MartinVegue Cognitive Bias Tony MartinVegue

The Birth of a Ransomware Urban Myth

"Nearly 40% of ransomware victims pay up!" Sounds shocking, right? Turns out… that headline was based on eight people. This post unpacks how bad stats become infosec urban legends—and what that means for decision-making.

0ff46-1nryumr9546tx7tg2qmj49a.png

Would you be surprised to find that “nearly 40% of ransomware victims pay attackers,” according to a recent article published by DarkReading? I sure was. The number of victims that pay ransomware and the amount paid has been an elusive figure for years now. To date, law enforcement has not collected and published ransomware crime statistics like they have for other forms of criminal activity.

Junk research published by security vendors has always irked me because they use and misuse statistics to spread fear and sell products. Security threats are overblown and solutions are oversimplified, leading to a bevy of problems ranging from the creation of information security urban myths to poor corporate decision making based on faulty assumptions.

Sadly, the DarkReading article and underlying research is no exception. It’s a prime example of what’s wrong with vendor-sponsored research and how the rest of us pick up quotes, circulate and re-tweet without giving it a minute of critical thought. It’s easy to spot — just grab a statistic and follow it down the rabbit hole. Let’s dissect the ransomware payment rate and find out what’s really going on.

DarkReading published this article on April 14th, 2017 with the headline:

If you follow the article to the end, a link to the research is cited, along with the name of the security vendor that performed the research (Trustlook). They have a nice blog post and a cute, entertaining infographic — great reading material to send to the CISO tomorrow morning. The next step is to check the validly of the research and see exactly what Trustlook is claiming.

  • Trustlook is a security vendor and sells a suite of products that protects end-users from malware, including ransomware, and other forms of attack.

  • The research is based on a survey. Surveys are polls; you ask a group of people a question and record the answers.

  • Trustlook surveyed 210 of their Mobile Security product customers. Mobile Security is an Android-based anti-virus app.

  • Trustlook did not disclose a margin of error, which would indicate the survey is not statistically significant. This means the results only apply to the survey takers themselves and cannot be extrapolated to apply to a larger group or the general population.

This would be enough to make anyone that took a semester of college Stats roll their eyes and move on. However, the assertions in the infographic really take the cake. When percentages are used in statistics, the reader tends to forget or lose sight of the underlying numbers. Breaking down the percentages further:

  • We know 210 customers were surveyed (Trustlook disclosed this).

  • Of the 210, 45% have never heard of ransomware. Put another way, 94 out of 210 customers answered a survey about ransomware, but have never heard of ransomware. Trustlook conducted research and published a survey on ransomware in which nearly half of the respondents don’t know what ransomware is.

b7bf2-1hx5wo6imrtkuncaerb399a.gif
  • 116 respondents had the wherewithal to understand the subject matter for a survey they are filling out.

  • Of the 116, 20 people had, at some point, been infected with ransomware.

  • Of the 20 that have been infected, 8 of them paid the ransom.

Let me say that again in case you missed it.

Trustlook found 8 of their customers that said they paid a ransom and turned it into this:

…and DarkReading expanded the claim to include all ransomware victims:

Two days later, it’s everywhere:

Source: Google.com search

Source: Google.com search

A new ransomware urban myth is born.

Read More
Cognitive Bias Tony MartinVegue Cognitive Bias Tony MartinVegue

Selection Bias and Information Security Surveys

Everyone in infosec has seen a sketchy stat—“60% of orgs were hit by ransomware!” But who actually took that survey? This post breaks down how selection bias warps vendor reports and how bad data becomes cybersecurity “truth.”

1b70f-1jxemuq4bxxtuacwkunq_rg.png

The auditor stared at me blankly. The gaze turned into a gape and lasted long enough to make me shift uncomfortably in my chair, click my pen and look away before I looked back at him.

The blank look flashed to anger.

“Of course, malicious insiders are the biggest threat to this company. They’re the biggest threat to ALL companies.”

He waved the latest copy of a vendor report, which would lead anyone to believe malicious insiders are the single biggest threat to American business since Emma Goldman.

The report he waved in the room was not research at all. It was vendor marketing, thinly disguised as a “survey of Information Security leaders.” It was solely based on an unscientific survey of a small group of people. It reeked of error and bias.

Selection bias is what makes these surveys virtually worthless. I previously wrote about the problems of surveys in information security vendor reports and I want to dig in deeper on a topic from the last post: properly selecting a representative sample from the general population being surveyed. This matters so much. This is perhaps the most important step when conducting a statistically sound survey.

Why this matters

Risk analysts are one of many professions that rely on both internal and external incident data to assess risk. If a risk analyst is performing an assessment of current or former employees stealing customer data, there are two primary places one would look for incident data to determine frequency: internal incident reports and external data on frequency of occurrence.

One of the first places a risk analyst would look would be one of the many published reports on insider threat. The analyst would then find one or several statistics about the frequency of current or former employees stealing data, and use the figure to help provide a likelihood of a loss event.

If the survey is statistically sound, the results can be extrapolated to the general population. In other words, if the survey states that 12% of insiders use USB devices to steal data, within a margin of error, you can use that same range to help inform your assessment.

If the survey is not statistically sound, the results only apply to respondents of the survey. This is called selection bias.

What is selection bias?

There are many forms of bias that are found in statistics, and by extension, in surveys, but the most common is selection bias. It’s the easiest to get wrong and throws the results off the quickest.

Selection bias occurs when the survey result is systematically different from the population that is being studied. Here are a few ways this happens.

  • Undercoverage: Underrepresentation of certain groups in the sample. For example, if you are surveying information security professionals, you will want pen testers, risk analysts, department heads, CISO’s — essentially a cross-section. If you have a hard time getting CISO’s to answer the survey, the survey will be biased toward undercoverage of CISO’s.

  • Voluntary Response: This occurs when your survey takers are self-selected. The most common example of this is online surveys or polls. Slashdot polls are fun — but completely non-scientific because of voluntary response. Optimally, one would like to have participants randomly selected to ensure a good cross-section of groups of people.

  • Participation bias: This occurs when a certain group of participants are more or less likely to participate than others. This can happen when a certain group appreciates the value of surveys more than others (risk analysts versus pen testers) or if survey takers are incentivized, such as with reward points or cash. Compensating survey takers is a very contentious practice and will usually result in people taking the survey that are not in the intended sample population.

Real-world example

There are many to choose from, but I found the “2015 Vormetric Insider Threat Report” from a random Google search. The report is aesthetically polished and, on the surface, very informative. It has the intended effect of making any reader nervous about data theft from employees and contractors.

The report is based on a survey of 818 IT professionals that completed an online survey. The report authors are very careful; they frame the report as opinion of the respondents. Furthermore, there is a short disclosure at the end of the report that states the survey methodology, identifies the company that performed the survey (Harris Poll) and states that the “…online survey is not based on a probability sample and therefore no estimate of theoretical sampling error can be calculated.”

Let me translate that: This report is for marketing and entertainment purposes only.

Why not?

Here’s another problem: Harris Poll compensates its survey takers. Survey takers earn points (called “HIpoints”) for every survey they fill out. These points can be redeemed for gift cards and other items. We already know from the disclosure that the survey isn’t statistically sound, but one must ask — can the survey be trusted to include only IT professionals, if the respondents are self-selected and are rewarded for saying whatever it takes to qualify for the survey?

The most obvious problems here are voluntary response and participation bias; both mean you should not base any serious decision on the survey results.

I don’t mean to pick on Vormetric exclusively. There are hundreds of similar surveys out there.

Here’s another one. The Cyberthreat Defense Group conducted an online survey that asked many enterprise security questions. One of the results was that 60% of respondents’ organizations were hit by ransomware in 2016. I fast-forwarded to the section that described the methodology. They vaguely disclosed the fact that survey takers were PAID and that the results represent the opinions of the respondents. It’s back-page news, but at least it’s there. This is the problem:

Once the number is quoted, it’s no longer the opinion of a small, self-selected, compensated group of people who may or may not be in security. Now it’s fact.

Then it gets tweeted, re-tweeted, liked, whatever. Now it’s InfoSec Folklore.

Screen Shot 2018-07-31 at 12.58.53 PM.png

 

See the problem?

Read More
Statistics Tony MartinVegue Statistics Tony MartinVegue

The Problem with Security Vendor Reports

Most vendor security reports are just glossy marketing in disguise, riddled with bad stats and survey bias. This post breaks down how to spot junk research before it ends up in your board slides — and how to demand better.

5d1a1-1t-ryezvoyzgnhzbj7iazng.jpeg

The information security vendor space is flooded with research: annual reports, white papers, marketing publications — the list goes on and on. This research is subsequently handed to marketing folks (and engineers who are really marketers), who fan out to security conferences across the world, standing in booths quoting statistics, filling pay-to-play speaking slots, and convincing executives to buy their security products.

There’s a truth, however, that the security vendors know but most security practitioners and decision makers aren’t quite wise to yet. Much of the research vendors present in reports and marketing brochures isn’t rooted in any defensible, scientific method. It’s an intentional appeal to fear, designed to create enough self-doubt to make you buy their solution.

This is how it’s being done:

  • Most vendor reports are based on surveys, also known as polls

  • Most of the surveys presented by security vendors ignore the science behind surveys, which is based on statistics and mathematics

  • Instead of using statistically sound survey methods, many reports use dubious approaches designed to lead the reader down a predetermined path

This isn’t exactly new. Advertisers have consumer manipulation down to an art form and have been doing it for decades. Security vendors, however, should be held to a higher standard because the whole field is based on trust and credibility. Many vendor reports are presented as security research, not advertisements.

What’s a survey?

daefb-16ssjh4y5a9nqmd9whqwm6w.jpeg

A survey is a poll. Pollsters ask a small group of people a question, such as “In the last year, how many of your security incidents have been caused by insiders?” The results are then extrapolated to the general population. For example, IBM conducted a survey that found that 59% of CISOs experienced cyber incidents in which the attackers could defeat their defenses. The company that conducted the survey didn’t poll all CISOs — they polled a sample of CISOs and generalized to the entire population of CISOs.

This type of sampling and extrapolation is completely acceptable, provided the survey adheres to established methodologies in survey science. Doing so makes the survey statistically sound; not doing so puts the validity of the results in question.

All surveys have some sort of error and bias. However, a good survey will attempt to control for this by doing the following:

  • Use established survey science methods to reduce the errors and bias

  • Disclose the errors and bias to the readers

  • Disclose the methodology used to conduct the survey

  • Publish the raw data for peer review

Why you should care about statistically sound surveys

Surveys are everywhere in security. They are found in cute infographics, annual reports, journal articles and academic papers. Security professionals read these reports, learn from them, and quote them in steering committee meetings or to senior executives who ask questions. Managers often ask security analysts to quantify risk with data — the easiest way is to find a related survey. We rely on that data to enable our firms to make risk-aware business decisions.

When you tell your Board of Directors that 43% of all data breaches are caused by internal actors, you’d better be right. The data you are using must be statistically sound and rooted in fact. If you are quoting vendor FUD or some marketing brochure that’s disconnected from reality, your credibility is at stake. We are trusted advisors and everything we say must be defensible.

What makes a good survey

Everyone has seen a survey. Election and public opinion polls seem simple on the surface, but they are very hard to do correctly. The science behind surveys is rooted in math and statistics; when a survey follows that science, it is statistically sound.

There are four main components of a statistically sound survey:

Population

This is a critical first step. What is the group being studied? How big is the group? An example would be “CISOs” or “information security decision makers.”

Sample size

The size of the group you are surveying. It’s usually not possible to study an entire population, so a sample is chosen. A good pollster will do all they can to ensure the sample is as representative of the general population as possible. More importantly, the sample needs to be selected at random.

Confidence interval

Also expressed as the margin of error (e.g., ±3%). The larger the sample size, the lower the margin of error; a short worked example appears after this list.

Unbiased Questions

The questions themselves should be crafted by a neutral professional trained in survey science. Otherwise, it is very easy to craft biased questions that lead the respondent to answer in a certain way.
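Here is the worked example for the confidence interval component. It uses the standard formula for a proportion from a simple random sample; the 60% answer rate and the sample sizes are illustrative only, and the math says nothing about a sample that was never random in the first place:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Example: 60% of respondents report a ransomware incident.
for n in (100, 400, 818, 1600):
    print(f"n = {n:>4}: 60% +/- {margin_of_error(0.60, n):.1%}")

# Larger random samples shrink the margin of error, but no sample size
# fixes selection bias in a sample that was never random.
```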

What makes a bad survey?

A survey loses credibility as it uses fewer and fewer of the above components. There are many ways a survey can be bad, but here are the biggest red flags:

  • No disclosure of polling methodology

  • No disclosure of the company that conducted the poll

  • The polling methodology is disclosed, but no effort was made to make it random or representative of the population (online polls have this problem)

  • Survey takers are compensated (people will say anything for money)

  • Margin of error not stated

Be Skeptical

Be skeptical of vendor claims. Check for yourself and read the fine print. When you stroll the vendor halls at RSA or Blackhat and a vendor makes some outrageous claim about an imminent threat, dig deeper. Ask hard questions. We can slowly turn the ship away from FUD and closer to fact and evidence-based research.

And if you’re a vendor — think about using reputable research firms to perform your surveys.

Read More
Quantitative Risk Tony MartinVegue Quantitative Risk Tony MartinVegue

Prioritizing Patches: A Risk-Based Approach

It’s been a tough few weeks for those of us that are responsible for patching vulnerabilities in the companies we work at. Not only do we have the usual operating system and application patches, we also have patches for VENOM and Logjam to contend with. The two aforementioned vulnerabilities are pretty serious and deserve extra attention. But, where to start and what to do first? Whether you have hundreds or thousands or hundreds of thousands of systems to patch, you have to start somewhere. Do you test and deploy patches for high severity vulnerabilities first, or do you continue to deploy routine patches, prioritizing systems critical to the functioning of your business?

5bc9f-19ernxepah3r1ezffz33iza.jpeg


It depends. You have to take a risk-based approach to patching, fully considering several factors, including where the system sits on the network, the type of data it holds, what its function is, and whether there is a credible threat to exploit the vulnerability the patch addresses.

There’s an old adage in risk management (and business in general): “When everything’s a priority, nothing’s a priority.” How true it is. For example, if you scan your entire network for the Heartbleed vulnerability, the tool will return a list of every system the vulnerability has been found on. Depending on the size of your network, this could seem like an insurmountable task — everything is high risk.

A good habit to get into for all security professionals is to take a risk-based approach when you need to make a decision about resources. (“Resources” in this context can be money, personnel, time, re-tasking an application, etc.) Ask yourself the following questions:

  • What is the asset I’m protecting? What is the value?

  • Are there compliance, regulatory or legal requirements around this system? For example, does it store PHI (Protected Health Information), is it in scope for Sarbanes-Oxley, or does it fall under PCI?

  • What are the vulnerabilities on this system?

  • What is the threat? Remember, you can have a vulnerability without a threat — think of a house that does not have a tornado shelter. The house is in California.

  • What is the impact to the company if a threat exploited the vulnerability and acted against the asset? Impact can take many forms, including lost productivity, lost sales, a data breach, system downtime, fines, judgments and reputational harm. (A rough scoring sketch follows this list.)
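Here is that rough sketch of turning the questions into a first-pass ranking. The systems, weights, and 1-to-5 scores are entirely hypothetical, and a mature program would replace ordinal scores like these with calibrated, quantitative estimates; the point is only that exposure, threat, and asset value belong in the ordering, not just the scanner's severity rating:

```python
from dataclasses import dataclass

@dataclass
class PatchCandidate:
    name: str
    asset_value: int      # 1 (low value) .. 5 (critical function or regulated data)
    exposure: int         # 1 (isolated internal) .. 5 (internet-facing)
    threat_activity: int  # 1 (no known exploitation) .. 5 (actively exploited)
    severity: int         # 1 .. 5, e.g. taken from the scanner's rating

    def priority_score(self) -> int:
        # Hypothetical weighting: exposure and threat activity count as much
        # as the raw severity the scanner reports.
        return self.asset_value * self.exposure * self.threat_activity * self.severity

candidates = [
    PatchCandidate("ecommerce-web-01", asset_value=5, exposure=5, threat_activity=4, severity=4),
    PatchCandidate("intranet-wiki-01", asset_value=2, exposure=1, threat_activity=2, severity=4),
]

for c in sorted(candidates, key=lambda c: c.priority_score(), reverse=True):
    print(f"{c.name}: priority score {c.priority_score()}")
```

In this toy example the internet-facing ecommerce server outranks the internal wiki, even though the scanner reports the same severity for both.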

A Tale of Two Systems

Take a look at the diagram below. It illustrates two systems with the same web vulnerability, but different use cases and impact. A simple vulnerability scan would flag both systems as having high-severity vulnerabilities, but a risk-based approach to vulnerability mitigation reveals much different priorities.

6dfb2-1lktxm3fcnx5liy8ggwmvyg.png

This is not to say that Server #2 could not be exploited. It very much could be, by an insider, a vendor, or an outside attacker, and the issue needs to be remediated. However, it is much more probable that Server #1 will be compromised in a shorter time frame. Server #2 would also be on the list to get patched, but considering that attackers external to the organization have to try a little harder to exploit this type of vulnerability and the server is not critical to the functioning of the business, the mitigation priority is Server #1.

Your Secret Weapon

Most medium-to-large companies have a separate department dedicated to Business Continuity. Sometimes they are in IT as part of Disaster Recovery, and sometimes they are in a completely separate department, focusing on enterprise resiliency. Either way, one of the core functions of these departments is to perform a business impact analysis on critical business functions. For example, the core business functions of the Accounting department are analyzed. Continuity requirements are identified along with impact to the company. Many factors are considered, including financial, revenue stream, employee and legal/regulatory impact.

This is an excellent place to start if you need data on categorizing and prioritizing your systems. In some cases, the business impact analysis is mapped back to actual server names or application platforms, but even if it’s not, you can start using this data to improve your vulnerability management program.

It’s difficult to decide where to deploy scarce resources. The steps outlined above truly are the tip of the iceberg but are nonetheless a great first step in helping to prioritize when and where to start implementing mitigating controls. The most successful information security departments are those that are able to think in risk-based terms naturally when evaluating control implementation. With practice, it becomes second nature.

About the Author: Tony Martin-Vegue works for a large global retailer leading the firm’s cyber-crime program. His enterprise risk and security analyses are informed by his 20 years of technical expertise in areas such as network operations, cryptography and system administration. Tony holds a Bachelor of Science in Business Economics from the University of San Francisco and holds many certifications including CISSP, CISM and CEH.

Originally published at www.tripwire.com on May 31, 2015.

Read More
Information Security Tony MartinVegue Information Security Tony MartinVegue

What’s the difference between a vulnerability scan, penetration test and a risk analysis?

Think vulnerability scan, pen test, and risk analysis are the same thing? They're not — and mixing them up could waste your money and leave you exposed. This post breaks down the real differences so you can make smarter, more secure decisions.

You’ve just deployed an ecommerce site for your small business or developed the next hot iPhone MMORPG. Now what?

Don’t get hacked!

An often overlooked, but very important process in the development of any Internet-facing service is testing it for vulnerabilities, knowing if those vulnerabilities are actually exploitable in your particular environment and, lastly, knowing what the risks of those vulnerabilities are to your firm or product launch. These three different processes are known as a vulnerability assessment, penetration test and a risk analysis. Knowing the difference is critical when hiring an outside firm to test the security of your infrastructure or a particular component of your network.

Let’s examine the differences in depth and see how they complement each other.

Vulnerability assessment

Vulnerability assessments are most often confused with penetration tests, and the terms are often used interchangeably, but they are worlds apart.

Vulnerability assessments are performed by using an off-the-shelf software package, such as Nessus or OpenVAS, to scan an IP address or range of IP addresses for known vulnerabilities. For example, the software has signatures for the Heartbleed bug or missing Apache web server patches and will alert you if they are found. The software then produces a report that lists the vulnerabilities it found and (depending on the software and options selected) gives an indication of each vulnerability's severity and basic remediation steps.
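Conceptually, the check is just matching what the scanner sees against a catalog of known-bad versions. Here is a toy sketch; the product names and findings other than the Heartbleed entry are made up, and real scanners such as Nessus and OpenVAS maintain tens of thousands of far more sophisticated checks:

```python
# Toy signature check: match a service banner against a list of known-vulnerable
# versions. The entries below are illustrative only.
KNOWN_VULNERABLE = {
    ("OpenSSL", "1.0.1f"): ["Heartbleed (CVE-2014-0160)"],
    ("ExampleHTTPd", "2.3.1"): ["hypothetical remote code execution"],
}

def check_banner(product: str, version: str) -> list:
    """Return the known issues recorded for this product/version, if any."""
    return KNOWN_VULNERABLE.get((product, version), [])

print(check_banner("OpenSSL", "1.0.1f"))   # flagged: Heartbleed
print(check_banner("OpenSSL", "3.0.13"))   # [] -- not flagged
```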

It’s important to keep in mind that these scanners use a list of known vulnerabilities, meaning they are already known to the security community, hackers and the software vendors. There are vulnerabilities that are unknown to the public at large and these scanners will not find them.

Penetration test

Many “professional penetration testers” will actually just run a vulnerability scan, package up the report in a nice, pretty bow and call it a day. Nope — this is only a first step in a penetration test. A good penetration tester takes the output of a network scan or a vulnerability assessment and takes it to 11 — they probe an open port and see what can be exploited.

For example, let’s say a website is vulnerable to Heartbleed. Many websites still are. It’s one thing to run a scan and say “you are vulnerable to Heartbleed” and a completely different thing to exploit the bug, discover the depth of the problem, and find out exactly what type of information could be revealed. This is the main difference — the website or service is actually being penetrated, just as a hacker would do it.

Similar to a vulnerability scan, the results are usually ranked by severity and exploitability with remediation steps provided.

Penetration tests can be performed using automated tools, such as Metasploit, but veteran testers will write their own exploits from scratch.

Risk analysis

A risk analysis is often confused with the previous two terms, but it is also a very different animal. A risk analysis doesn’t require any scanning tools or applications — it’s a discipline that analyzes a specific vulnerability (such as a line item from a penetration test) and attempts to ascertain the risk — including financial, reputational, business continuity, regulatory and others — to the company if the vulnerability were to be exploited.

Many factors are considered when performing a risk analysis: asset, vulnerability, threat and impact to the company. An example of this would be an analyst trying to find the risk to the company of a server that is vulnerable to Heartbleed.

The analyst would first look at the vulnerable server, where it is on the network infrastructure and the type of data it stores. A server sitting on an internal network without outside connectivity, storing no data but vulnerable to Heartbleed has a much different risk posture than a customer-facing web server that stores credit card data and is also vulnerable to Heartbleed. A vulnerability scan does not make these distinctions. Next, the analyst examines threats that are likely to exploit the vulnerability, such as organized crime or insiders, and builds a profile of capabilities, motivations and objectives. Last, the impact to the company is ascertained — specifically, what bad thing would happen to the firm if an organized crime ring exploited Heartbleed and acquired cardholder data?

A risk analysis, when completed, will have a final risk rating with mitigating controls that can further reduce the risk. Business managers can then take the risk statement and mitigating controls and decide whether or not to implement them.

The three different concepts explained here are not exclusive of each other, but rather complement each other. In many information security programs, vulnerability assessments are the first step — they are used to perform wide sweeps of a network to find missing patches or misconfigured software. From there, one can either perform a penetration test to see how exploitable the vulnerability is or a risk analysis to ascertain the cost/benefit of fixing the vulnerability. Of course, you don’t need either to perform a risk analysis. Risk can be determined anywhere a threat and an asset are present. It can be a data center in a hurricane zone or confidential papers sitting in a wastebasket.

It’s important to know the difference — each is significant in its own way, and they have vastly different purposes and outcomes. Make sure any company you hire to perform these services also knows the difference.

Originally published at www.csoonline.com on May 13, 2015.

Read More
Information Security Tony MartinVegue Information Security Tony MartinVegue

Not all data breaches are created equal — do you know the difference?

Not all data breaches are created equal — the impact depends on what gets stolen. From credit cards to corporate secrets, this post breaks down the real differences between breach types and why some are much worse than others.

It was one of those typical, cold February winter days in Indianapolis earlier this year. Kids woke up hoping for a snow day and old men groaned as they scraped ice off their windshields and shoveled the driveway. Those were the lucky ones, because around that same time, executives at Anthem were pulling another all-nighter, trying to wrap their heads around their latest data breach of 37.5 million records and figuring out what to do next. And, what do they do next? This was bad — very bad — and one wonders if one or more of the frenzied executives thought to him or herself, or even aloud, “At least we’re not Sony.”

Why is that? 37.5 million records sure is a lot. A large-scale data breach can be devastating to a company. Expenses associated with incident response, forensics, loss of productivity, credit reporting, and customer defection add up swiftly on top of intangible costs, such as reputation harm and loss of shareholder confidence. However, not every data breach is the same and much of this has to do with the type of data that is stolen.

Let’s take a look at the three most common data types that cyber criminals often target. Remember that almost any conceivable type of data can be stolen, but if it doesn’t have value, it will often be discarded. Cyber criminals are modern day bank robbers. They go where the money is.

Common data classifications and examples

Customer financial data

This category is the most profuse and widespread in terms of the number of records breached, and mostly includes credit card numbers, expiration dates, cardholder names, and other similar data. Cyber criminals generally pillage this information from retailers in bulk by utilizing malware specifically written to copy the credit card number at the point-of-sale system when a customer swipes his or her card. This is the type of attack that was used against Target, Home Depot, Neiman-Marcus and many others, and incidents such as these have dominated the news for the last several years. Banks have also been attacked for information on customers.

When cyber criminals then attempt to sell this pilfered information on the black market, they are in a race against time — they need to close the deal as quickly as possible so the buyer is able to use it before the card is deactivated by the issuing bank. A common method of laundering funds is to use the stolen cards to purchase gift cards or pre-paid credit cards, which can then be redeemed for cash, sold, or spent on goods and services. Cardholder data is typically peddled in bulk and can go for as little as $1 per number.

Companies typically incur costs associated with response, outside firms’ forensic analysis, and credit reporting for customers, but so far, a large-scale customer defection or massive loss of confidence by shareholders has not been observed. However, Target did fire its CEO after the breach, so internal shake-ups are always a stark possibility.

Personally identifiable information

Personally Identifiable Information, also known as PII, is a more serious form of data breach, as those affected are impacted far beyond the scope of a replaceable credit card. PII is information that identifies an individual, such as name, address, date of birth, driver’s license number, or Social Security number, and is exactly what cyber criminals need to commit identity theft. Lines of credit can be opened, tax refunds redirected, Social Security claims filed — essentially, the possibilities of criminal activities are endless, much like the headache of the one whose information has been breached.

Unlike credit cards, which can be deactivated and the customer reimbursed, one’s identity cannot be changed or begun anew. When a fraudster gets a hold of PII, the unlucky soul whose identity was stolen will often struggle for years with the repercussions, from arguing with credit reporting agencies to convincing bill collectors that they did not open lines of credit accounts.

Because of the long-lasting value of PII, it sells for a much higher price on the black market — up to $15 per record. This is most often seen when companies storing a large volume of customer records experience a data breach, such as a healthcare insurer. This is much worse for susceptible consumers than a run-of-the-mill cardholder data breach, because of the threat of identity theft, which is more difficult to mitigate than credit card theft.

Company impact is also very high, but is still on par with a cardholder data breach in that a company experiences costs in response, credit monitoring, etc.; however, large-scale customer defection still has not been observed as a side effect. It’s important to note that government fines may be associated with this type of data breach, owing to the sensitive nature of the information.

Internal company information

This type of breach has often taken a backseat to the above-mentioned types, as it does not involve a customer’s personal details, but rather internal company information, such as emails, financial records, and intellectual property. The media focused on the Target and Home Depot hacks, for which the loss was considerable in terms of customer impact, but internal company leaks are perhaps the most damaging of all, as far as corporate impact.

The Sony Pictures Entertainment data breach eclipsed in magnitude anything that has occurred in the retail sector. SPE’s movie-going customers were not significantly impacted (unless you count having to wait a while longer to see ”The Interview” — reviews of the movie suggest the hackers did the public a favor); the damage was mostly internal. PII of employees was released, which could lead to identity theft, but the bulk of the damage occurred due to leaked emails and intellectual property. The emails themselves were embarrassing and clearly were never meant to see the light of day, but unreleased movies, scripts and budgets were also leaked and generously shared on the Internet.

Many firms emphasize data types that are regulated (e.g. cardholder data, health records, company financials) when measuring the impact of a data breach, but loss of intellectual property cannot be overlooked. Examine what could be considered “secret sauce” for different types of companies. An investment firm may have a stock portfolio for its clients that outperforms its competitors. A car company may have a unique design to improve fuel efficiency. A pharmaceutical company’s clinical trial results can break a company if disclosed prematurely.

Although it’s not thought of as a “firm” and not usually considered when discussing fissures in security, when the National Security Agency’s most secret files were leaked by flagrant whistleblower Edward Snowden, the U.S. government experienced a very significant data breach. Some would argue it is history’s worst of its kind, when considering the ongoing impact on the NSA’s secretive operations.

Now what?

Whenever I am asked to analyze a data breach or respond to a data breach, I am almost always asked, “How bad is it?” The short answer: it depends.

It depends on the type of data that was breached and how much of it. Many states do not require notification of a data breach of customer records unless it meets a certain threshold (usually 500). A company can suffer a massive system intrusion that affects the bottom line, but if the data is not regulated (e.g. HIPAA, GLBA) or doesn’t trigger a mandatory notification as required by law, the public probably won’t know about it.

Take a look at your firm’s data classification policy, incident response and risk assessments. A risk-based approach to the aforementioned is a given, but be sure you are including all data types and the wide range of threats and consequences.

Originally published at www.csoonline.com on March 17, 2015.

Read More
Information Security Tony MartinVegue Information Security Tony MartinVegue

The Sony Pictures Entertainment hack: lessons for business leaders

The Sony Pictures hack wasn’t just a breach — it was a wake-up call for every business leader. This post breaks down what Sony got wrong, how to actually quantify risk, and why your business continuity plan should be printed and ready for a cyber apocalypse.

a3afe-1avga3akjuffmzvgaadetaq.jpeg

The November 2014 hack against Sony Pictures Entertainment reads like something straight out of a low-budget movie: employees walk into work one morning to see red skulls appear on their computer monitors, with threats of destruction unless certain demands are met. Move the clock forward several months and while Sony is still picking up the pieces, the security community is trying to figure out if this is just another data breach or a watershed moment in the cat-and-mouse game that defines this line of work.

Plenty of retrospection has occurred, both inside Sony and out, and (rightly so) the conversation has centered on what could have been done differently to prevent, detect and respond to this unprecedented hack. What some people think of as a problem that is limited to cyber-security is actually a problem that spans all aspects of a business.

What lessons can business leaders, outside of the field of cyber-security, learn from the hack?

Enterprise Resiliency

On Monday, November 24th the hacking group, Guardians of Peace or GOP, made the attack known to both Sony and to the public at the same time. Sony management made the decision to shut down computer systems: file servers, email, Internet access, access for remote employees — all computing equipment. Under the circumstances, shutting down a global company was a bold, but necessary, thing to do. The depth and scope of the breach wasn’t completely known at the time and those in charge felt it was important to stop the bleeding and prevent further damage from occurring.

Sony systems were down for over six days. In that time, employees used other methods to communicate with each other, such as text messaging and personal email; in other words, they reverted to manual workarounds. Manual workarounds are the cornerstone of a good business continuity plan, which helps firms be more resilient during an emergency. During a crisis or a serious incident, a company has to assume that access to any computing resources could be unavailable for an extended period of time. There is no way of knowing if Sony had business continuity plans that included an extended outage of IT equipment or whether they invoked them, but one thing is clear — most companies do not factor in this type of disaster. Most business continuity planning revolves around localized disasters, such as terrorist attacks, hurricanes and severe weather. The outage that Sony experienced was global, total and extended.

If you manage a department, make sure you have a printed business continuity plan that includes call trees, manual workarounds and information on how to get a hold of each other if company resources are unreachable. Many companies’ plans assume a worst-case scenario consisting of a building or facility being inaccessible, such as a power outage or mandatory evacuation due to a natural disaster, but we are in a new era in which the worst case could be the complete shut-down of all computing equipment. Plan for it.

Defense in Depth

Defense in depth is a concept from the military that has been adopted by many in the cyber-security space. The basic idea is to have rings or layers of defense, rather than putting all your resources in one method. Think of a medieval castle under assault. The defenders are not going to place all of their men in front of the door of the throne room to protect the King. They dig a moat to make it harder to reach the door, raise bridges, close gates, place archers in parapets, pour hot oil on attackers using ladders, strategically deploy swordsmen inside the castle for when it is breached and a special King’s Guard as a last resort.

This method is very effective because if one method of defense fails, there are still others for the attackers to overcome. This also delays the attackers, buying valuable time for the defender to respond.

This technique is used in the cyber-security space in a similar way, as one would deploy resources to defend a castle. Many companies already implement some form of defense in depth, but the Sony hack is a good reminder to audit defenses and ensure you have the right resources in the right places. From outside the network coming in, firewalls and intrusion detection systems (IDS) are deployed. Computers are protected with antivirus and encryption. The most valuable data (in Sony’s case, presumably unreleased movies and internal emails) should be protected with a separate set of firewalls, intrusion detection, etc. — a King’s Guard. Strong company policies and security awareness training are also used as defense measures.

1f86b-1604nuvmb4tcqiu4-lwm9dq.jpeg

Caerphilly Castle, Caerphilly South Wales

Admittedly, this is a lot — and it is only half of the story. Protecting a company relies just as much on resources outside of the security department as it does on resources inside the security department. Do you spend a million dollars a year on security measures but don’t have any method of controlling access to and from your physical building? Can someone waltz in the front door wearing an orange vest and a hardhat and walk off with the CFO’s laptop? Do you encrypt every laptop but don’t perform criminal background checks on employees, consultants and contractors? Maybe you spend a fortune on penetration testing your web sites but don’t do any security checks on vendors that have access to the company network. Target learned this lesson the hard way.

In order to create defense in depth, it is crucial to have the commitment of other departments such as human resources, facilities management, vendor management, communications and others, as they all contribute to the security posture of a company. You can’t rely on your security department to protect the whole company. It truly is a team effort that requires cooperation across all levels in a company. Just like defending a castle. Everyone has a job to do.

Managing Risk

Sony has been criticized for what have been perceived to be lax security measures. Some of the criticism is Monday morning quarterbacking and some of it is valid. In an article for CIO Magazine in 2007, the Executive Director of Information Security at Sony Pictures Entertainment, Jason Spaltro, was profiled in a cringe-worthy piece called “Your Guide to Good-Enough Security.” In it, Spaltro brags about convincing a SOX auditor not to write up weak passwords as a control failure and explains that it doesn’t make business sense to spend $10 million on fixing a security problem that would only cause $1 million in loss.

He’s right, partly. It doesn’t make sense to spend 10 times more on a problem than the asset is worth. This isn’t a control failure or a problem with perception — this is a risk management problem. The first question a business leader needs to ask is, “Where did you come up with the $1 million in loss figure and is it accurate?” The viewpoint taken by Sony doesn’t fully take into account the different types of losses that a company can experience during a data breach. The Sony breach clearly demonstrates a failure of security controls in several different areas, but the real failure is the firm’s inability to measure and manage risk.

A good risk analysis program identifies an asset, whether it is employee health information, movie scripts, or even reputation and investor confidence. From there, any threat that can act against an asset is identified, with corresponding vulnerabilities. For example, company intellectual property stored on file servers is a very important asset, with cybercriminals being a well-resourced and motivated threat. Several vulnerabilities can exist at the same time, ranging from weak passwords to access the file server to users that are susceptible to phishing emails that install malware on their systems.

Quantifying Risk

A rigorous risk analysis will take the aforementioned data and run it through a quantitative risk model. The risk analyst will gather data on different types of loss events, such as productivity loss, loss of competitive advantage, asset replacement cost, fines, judgments — the list goes on. The final risk assessment will return an annualized exposure. In other words, a data breach could occur once every ten years at a cost of $100 million per incident; therefore, the company has an annualized exposure of $10 million. This makes it very easy for business managers to run a cost-benefit analysis on security expenditures. If the analysis is done correctly and sound methods are used, security sells itself.
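A minimal sketch of that arithmetic, using the illustrative once-in-ten-years, roughly-$100M figures from the paragraph above (the loss range and simulation settings are assumptions for illustration only):

```python
import random

random.seed(1)

# Illustrative inputs, echoing the example above: a breach roughly once every
# ten years, costing somewhere between $50M and $150M per incident (the $100M
# point estimate in the text becomes an assumed range here).
ANNUAL_PROBABILITY = 0.10
LOSS_LOW, LOSS_HIGH = 50e6, 150e6
TRIALS = 100_000

total_loss = 0.0
for _ in range(TRIALS):
    if random.random() < ANNUAL_PROBABILITY:   # does a breach happen this simulated year?
        total_loss += random.uniform(LOSS_LOW, LOSS_HIGH)

annualized_exposure = total_loss / TRIALS
print(f"Annualized exposure: ${annualized_exposure / 1e6:.1f}M")   # close to $10M
```

The same structure extends naturally to multiple loss types and frequency distributions; the point is that the output is a dollar figure a manager can weigh against the cost of a control.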

80d59-1khockrjmuwdb7gho_jthrw.png

Components of a risk analysis

In other words, Spaltro is right. You would never spend more on a control than the cost of an incident. However, not all risk is communicated to management in a way that allows for informed business decisions. As a business leader, look at how risk is being communicated to you. Is risk being communicated in a way that makes business decisions easy (dollars) or are softer methods being used, such as, “This vulnerability is a high risk!” (High compared to what? What exactly does High mean?)

In many other aspects of business and in other fields, risk is communicated in terms of annualized exposure and run through a cost-benefit analysis. Information Security, however, is lagging behind and the Sony hack is proof that the field must mature and adopt more advanced methods of quantifying and communicating risk. Decision makers must insist on it.

Conclusion

There are some battles in ancient history that strategists never tire of studying. Lessons and tactics are taught in schools to this day and employed on the battlefield. The Sony hack will go down in history as one such battle that we will continue to learn from for years to come. Sony had strategic choices to make in the moment, and those are continuing to play out in the media and across the cyber-security landscape. What we can glean from this today is that the firms that are winning are the firms that look at cyber-security on a macro, holistic level. The individual components of a successful program are interconnected throughout all aspects of the business, and it is through this understanding that business leaders can stay one step ahead.

Read More