The Birth of a Ransomware Urban Myth
"Nearly 40% of ransomware victims pay up!" Sounds shocking, right? Turns out… that headline was based on eight people. This post unpacks how bad stats become infosec urban legends—and what that means for decision-making.
Would you be surprised to find that “nearly 40% of ransomware victims pay attackers,” according to a recent article published by DarkReading? I sure was. The number of victims that pay a ransom and the amounts paid have been elusive figures for years now. To date, law enforcement has not collected and published ransomware crime statistics the way it has for other forms of criminal activity.
Junk research published by security vendors has always irked me because vendors use and misuse statistics to spread fear and sell products. Security threats are overblown and solutions are oversimplified, leading to a bevy of problems, ranging from the creation of information security urban myths to poor corporate decision-making based on faulty assumptions.
Sadly, the DarkReading article and the underlying research are no exception. It’s a prime example of what’s wrong with vendor-sponsored research and how the rest of us pick up quotes, circulate and re-tweet them without giving them a minute of critical thought. It’s easy to spot — just grab a statistic and follow it down the rabbit hole. Let’s dissect the ransomware payment rate and find out what’s really going on.
DarkReading published this article on April 14th, 2017 with the headline:
If you follow the article to the end, a link to the research is cited, along with the name of the security vendor that performed the research (Trustlook). They have a nice blog post and a cute, entertaining infographic — great reading material to send to the CISO tomorrow morning. The next step is to check the validity of the research and see exactly what Trustlook is claiming.
Trustlook is a security vendor and sells a suite of products that protects end-users from malware, including ransomware, and other forms of attack.
The research is based on a survey. Surveys are polls; you ask a group of people a question and record the answers.
Trustlook surveyed 210 of their Mobile Security product customers. Mobile Security is an Android-based anti-virus app.
Trustlook did not disclose a margin of error or say anything about its sampling methodology, which is a strong sign the survey is not statistically representative. In practice, that means the results only describe the survey takers themselves and cannot be extrapolated to a larger group or to the general population.
This alone would be enough to make anyone who took a semester of college Stats roll their eyes and move on. However, the assertions in the infographic really take the cake. When percentages are used in statistics, the reader tends to forget or lose sight of the underlying numbers. Breaking down the percentages further (a quick back-of-the-envelope check in code follows the list):
We know 210 customers were surveyed (Trustlook disclosed this).
Of the 210, 45% have never heard of ransomware. Put another way, 94 out of 210 customers answered a survey about ransomware, but have never heard of ransomware. Trustlook conducted research and published a survey on ransomware in which nearly half of the respondents don’t know what ransomware is.
The remaining 116 respondents had the wherewithal to understand the subject matter of the survey they were filling out.
Of the 116, 20 people had, at some point, been infected with ransomware.
Of the 20 that have been infected, 8 of them paid the ransom.
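To put the whole chain of numbers in one place, here is a quick back-of-the-envelope check in Python. The only inputs are the figures Trustlook disclosed, and the margin-of-error line generously pretends the 20 infected respondents were a random sample, which they were not.

```python
# Back-of-the-envelope check of the survey arithmetic above.
# Inputs are only the figures Trustlook disclosed: 210 respondents,
# 45% who had never heard of ransomware, 20 infected, 8 who paid.
import math

surveyed = 210
never_heard = int(surveyed * 0.45)   # 45% of 210 is 94.5; the write-up reports 94
aware = surveyed - never_heard       # 116 respondents who know what ransomware is
infected = 20
paid = 8

print(f"Aware of ransomware: {aware} of {surveyed}")
print(f"Paid a ransom: {paid} of {infected} infected "
      f"({paid / infected:.0%} of a tiny subgroup)")

# Even if this were a random sample (it isn't -- it's a convenience sample
# of one vendor's customers), an estimate based on 20 people carries an
# enormous 95% margin of error:
p = paid / infected
margin = 1.96 * math.sqrt(p * (1 - p) / infected)
print(f"95% margin of error: +/- {margin:.0%}")   # roughly +/- 21 percentage points
```

In other words, even under the most charitable statistical assumptions, “40% of victims pay” is really “somewhere between roughly 20% and 60% of twenty app users said they paid.”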
Let me say that again in case you missed it.
Trustlook found 8 of their customers that said they paid a ransom and turned it into this:
…and DarkReading expanded the claim to include all ransomware victims:
Two days later, it’s everywhere:
Source: Google.com search
A new ransomware urban myth is born.
Prioritizing Patches: A Risk-Based Approach
It’s been a tough few weeks for those of us that are responsible for patching vulnerabilities in the companies we work at. Not only do we have the usual operating system and application patches, we also have patches for VENOM and Logjam to contend with. The two aforementioned vulnerabilities are pretty serious and deserve extra attention. But, where to start and what to do first? Whether you have hundreds or thousands or hundreds of thousands of systems to patch, you have to start somewhere. Do you test and deploy patches for high severity vulnerabilities first, or do you continue to deploy routine patches, prioritizing systems critical to the functioning of your business?
It depends. You have to take a risk-based approach to patching, fully considering several factors, including where the system sits on the network, the type of data it holds, what its function is and whether or not the vulnerability the patch addresses faces a credible threat.
There’s an old adage in risk management (and business in general): “When everything’s a priority, nothing is a priority.” How true it is. For example, if you scan your entire network for the Heartbleed vulnerability, the tool will return a list of every system on which the vulnerability was found. Depending on the size of your network, remediation could seem like an insurmountable task — everything is high risk.
A good habit for all security professionals to get into is taking a risk-based approach whenever you need to make a decision about resources. (“Resources” in this context can be money, personnel, time, re-tasking an application, etc.) Ask yourself the following questions (a rough scoring sketch follows the list):
What is the asset I’m protecting? What is the value?
Are there any compliance, regulatory or legal requirements around this system? For example, does it store PHI (Protected Health Information), is it in scope for Sarbanes-Oxley or does it fall under PCI?
What are the vulnerabilities on this system?
What is the threat? Remember, you can have a vulnerability without a threat — think of a house in California that does not have a tornado shelter. The vulnerability is there, but the threat (a tornado) almost never is.
What is the impact to the company if a threat exploited the vulnerability and acted against the asset? Impact can take many forms, including lost productivity, lost sales, a data breach, system downtime, fines, judgments and reputational harm.
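One way to make those questions actionable is to fold them into a rough scoring model. The sketch below is a minimal illustration, not a formal methodology: the factor names, weights and ratings are hypothetical, and any real program should calibrate them to its own environment.

```python
# A rough, illustrative patch-prioritization score built from the questions
# above. Factor names, weights and scales are hypothetical examples only.
from dataclasses import dataclass

@dataclass
class System:
    name: str
    asset_value: int        # 1 (low) - 5 (crown jewels)
    regulated: bool         # PHI, SOX, PCI, etc.
    vuln_severity: int      # 1 (info) - 5 (critical), e.g. from a scanner
    exposure: int           # 1 (isolated) - 5 (internet-facing)
    active_threat: bool     # exploit in the wild / relevant threat actor

def patch_priority(s: System) -> float:
    """Higher score = patch sooner. Weights are illustrative."""
    score = s.asset_value * 1.0 + s.vuln_severity * 1.5 + s.exposure * 2.0
    if s.regulated:
        score += 3
    if s.active_threat:
        score *= 1.5
    return score

systems = [
    System("ecommerce-web-01", asset_value=5, regulated=True,
           vuln_severity=4, exposure=5, active_threat=True),
    System("intranet-wiki-01", asset_value=2, regulated=False,
           vuln_severity=4, exposure=1, active_threat=False),
]

for s in sorted(systems, key=patch_priority, reverse=True):
    print(f"{s.name}: priority score {patch_priority(s):.1f}")
```

Sorting by the score surfaces the internet-facing, regulated system first, even though both systems carry the same technical vulnerability, which is exactly the distinction the next section illustrates.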
A Tale of Two Systems
Take a look at the diagram below. It illustrates two systems with the same web vulnerability, but different use cases and impact. A simple vulnerability scan would flag both systems as having high-severity vulnerabilities, but a risk-based approach to vulnerability mitigation reveals much different priorities.
This is not to say that Server #2 could not be exploited. It very much could be, by an insider, a vendor or an outside attacker, and the issue needs to be remediated. However, it is much more probable that Server #1 will be compromised, and in a shorter time frame. Server #2 would also be on the list to get patched, but considering that attackers external to the organization have to try a little harder to exploit this type of vulnerability and the server is not critical to the functioning of the business, the mitigation priority is Server #1.
Your Secret Weapon
Most medium-to-large companies have a separate department dedicated to Business Continuity. Sometimes they are in IT as part of Disaster Recovery, and sometimes they are in a completely separate department, focusing on enterprise resiliency. Either way, one of the core functions of these departments is to perform a business impact analysis on critical business functions. For example, the core business functions of the Accounting department are analyzed. Continuity requirements are identified along with impact to the company. Many factors are considered, including financial, revenue stream, employee and legal/regulatory impact.
This is an excellent place to start if you need data on categorizing and prioritizing your systems. In some cases, the business impact analysis is mapped back to actual server names or application platforms, but even if it’s not, you can start using this data to improve your vulnerability management program.
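Even a simple criticality tier per application from the Business Continuity team can be folded into scan triage. The snippet below is a hypothetical sketch: the tier numbers, host names and findings are invented for illustration, and real BIA data will be richer than a single number.

```python
# Hypothetical sketch: combine Business Impact Analysis (BIA) criticality
# tiers with vulnerability scan findings to order remediation work.
# Tier numbers, hosts and findings below are invented for illustration.

bia_tiers = {            # from the BIA: lower number = more critical
    "payments-api": 1,
    "hr-portal": 2,
    "intranet-wiki": 3,
}

scan_findings = [        # e.g. parsed from a scanner export
    {"host": "intranet-wiki", "cve": "CVE-2014-0160", "cvss": 7.5},
    {"host": "payments-api", "cve": "CVE-2014-0160", "cvss": 7.5},
    {"host": "hr-portal",    "cve": "CVE-2015-4000", "cvss": 3.7},
]

# Sort by BIA tier first, then by technical severity within each tier.
ordered = sorted(
    scan_findings,
    key=lambda f: (bia_tiers.get(f["host"], 99), -f["cvss"]),
)

for f in ordered:
    print(f'{f["host"]}: {f["cve"]} (CVSS {f["cvss"]})')
```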
It’s difficult to decide where to deploy scarce resources. The steps outlined above truly are the tip of the iceberg, but they are nonetheless a great first step in helping to prioritize when and where to start implementing mitigating controls. The most successful Information Security departments are those that are able to think in risk-based terms naturally when evaluating control implementation. With practice, it becomes second nature.
About the Author: Tony Martin-Vegue works for a large global retailer leading the firm’s cyber-crime program. His enterprise risk and security analyses are informed by his 20 years of technical expertise in areas such as network operations, cryptography and system administration. Tony holds a Bachelor of Science in Business Economics from the University of San Francisco and holds many certifications including CISSP, CISM and CEH.
Originally published at www.tripwire.com on May 31, 2015.
What’s the difference between a vulnerability scan, penetration test and a risk analysis?
Think vulnerability scan, pen test, and risk analysis are the same thing? They're not — and mixing them up could waste your money and leave you exposed. This post breaks down the real differences so you can make smarter, more secure decisions.

You’ve just deployed an ecommerce site for your small business or developed the next hot iPhone MMORPG. Now what?
Don’t get hacked!
An often overlooked, but very important process in the development of any Internet-facing service is testing it for vulnerabilities, knowing if those vulnerabilities are actually exploitable in your particular environment and, lastly, knowing what the risks of those vulnerabilities are to your firm or product launch. These three different processes are known as a vulnerability assessment, penetration test and a risk analysis. Knowing the difference is critical when hiring an outside firm to test the security of your infrastructure or a particular component of your network.
Let’s examine the differences in depth and see how they complement each other.

Vulnerability assessment
Vulnerability assessments are most often confused with penetration tests; the terms are frequently used interchangeably, but they are worlds apart.
Vulnerability assessments are performed by using an off-the-shelf software package, such as Nessus or OpenVAS, to scan an IP address or range of IP addresses for known vulnerabilities. For example, the software has signatures for the Heartbleed bug or missing Apache web server patches and will alert if they are found. The software then produces a report that lists the vulnerabilities found and (depending on the software and options selected) gives an indication of the severity of each vulnerability along with basic remediation steps.
It’s important to keep in mind that these scanners use a list of known vulnerabilities, meaning they are already known to the security community, hackers and the software vendors. There are vulnerabilities that are unknown to the public at large and these scanners will not find them.
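Whatever scanner you use, the findings usually land as an export (CSV, XML and so on) that you will want to sort and filter before acting on it. Here is a minimal, hypothetical example of triaging a CSV export by severity; the file name and column names are assumptions, not any specific vendor’s format.

```python
# Minimal sketch: read a vulnerability scanner's CSV export and list the
# highest-severity findings first. The file name and column names
# ("host", "plugin_name", "cvss") are assumptions, not a specific
# vendor's format -- adjust them to match your scanner's export.
import csv

def top_findings(path: str, min_cvss: float = 7.0):
    with open(path, newline="") as fh:
        rows = [r for r in csv.DictReader(fh) if float(r["cvss"]) >= min_cvss]
    return sorted(rows, key=lambda r: float(r["cvss"]), reverse=True)

if __name__ == "__main__":
    for row in top_findings("scan_export.csv"):
        print(f'{row["host"]}: {row["plugin_name"]} (CVSS {row["cvss"]})')
```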
Penetration test
Many “professional penetration testers” will actually just run a vulnerability scan, package up the report in a nice, pretty bow and call it a day. Nope — this is only a first step in a penetration test. A good penetration tester takes the output of a network scan or a vulnerability assessment and takes it to 11 — they probe an open port and see what can be exploited.
For example, let’s say a website is vulnerable to Heartbleed. Many websites still are. It’s one thing to run a scan and say “you are vulnerable to Heartbleed” and a completely different thing to exploit the bug and discover the depth of the problem and find out exactly what type of information could be revealed if it was exploited. This is the main difference — the website or service is actually being penetrated, just like a hacker would do.
Similar to a vulnerability scan, the results are usually ranked by severity and exploitability with remediation steps provided.
Penetration tests can be performed using automated tools, such as Metasploit, but veteran testers will write their own exploits from scratch.
Risk analysis
A risk analysis is often confused with the previous two terms, but it is also a very different animal. A risk analysis doesn’t require any scanning tools or applications — it’s a discipline that analyzes a specific vulnerability (such as a line item from a penetration test) and attempts to ascertain the risk — including financial, reputational, business continuity, regulatory and others — to the company if the vulnerability were to be exploited.
Many factors are considered when performing a risk analysis: asset, vulnerability, threat and impact to the company. An example of this would be an analyst trying to find the risk to the company of a server that is vulnerable to Heartbleed.
The analyst would first look at the vulnerable server, where it is on the network infrastructure and the type of data it stores. A server sitting on an internal network without outside connectivity, storing no data but vulnerable to Heartbleed has a much different risk posture than a customer-facing web server that stores credit card data and is also vulnerable to Heartbleed. A vulnerability scan does not make these distinctions. Next, the analyst examines threats that are likely to exploit the vulnerability, such as organized crime or insiders, and builds a profile of capabilities, motivations and objectives. Last, the impact to the company is ascertained — specifically, what bad thing would happen to the firm if an organized crime ring exploited Heartbleed and acquired cardholder data?
A risk analysis, when completed, will have a final risk rating with mitigating controls that can further reduce the risk. Business managers can then take the risk statement and mitigating controls and decide whether or not to implement them.
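The risk analysis described above is qualitative. One simple, common way to attach rough numbers to “impact to the company” is annualized loss expectancy (ALE), the product of single loss expectancy and annual rate of occurrence. The sketch below uses invented dollar figures and frequencies purely to show how the same technical vulnerability yields very different risk on two differently situated servers; it is one illustrative method, not the only way to do a risk analysis.

```python
# Illustrative annualized loss expectancy (ALE) for the same vulnerability
# on two differently situated servers. All dollar amounts and frequencies
# are invented for illustration.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """ALE = SLE x ARO: expected loss per year from a given scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

scenarios = {
    # Customer-facing web server holding cardholder data, exposed to Heartbleed.
    "external cardholder server": ale(single_loss_expectancy=2_000_000,
                                      annual_rate_of_occurrence=0.5),
    # Internal server, no sensitive data, same technical vulnerability.
    "internal utility server": ale(single_loss_expectancy=25_000,
                                   annual_rate_of_occurrence=0.05),
}

for name, loss in scenarios.items():
    print(f"{name}: expected annual loss ~${loss:,.0f}")
```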
The three different concepts explained here are not exclusive of each other, but rather complement each other. In many information security programs, vulnerability assessments are the first step — they are used to perform wide sweeps of a network to find missing patches or misconfigured software. From there, one can either perform a penetration test to see how exploitable the vulnerability is or a risk analysis to ascertain the cost/benefit of fixing the vulnerability. Of course, you don’t need either to perform a risk analysis. Risk can be determined anywhere a threat and an asset are present. It can be a data center in a hurricane zone or confidential papers sitting in a wastebasket.
It’s important to know the difference — each is significant in its own way and has vastly different purposes and outcomes. Make sure any company you hire to perform these services also knows the difference.
Originally published at www.csoonline.com on May 13, 2015.
Not all data breaches are created equal — do you know the difference?
Not all data breaches are created equal — the impact depends on what gets stolen. From credit cards to corporate secrets, this post breaks down the real differences between breach types and why some are much worse than others.

It was one of those typical, cold February winter days in Indianapolis earlier this year. Kids woke up hoping for a snow day and old men groaned as they scraped ice off their windshields and shoveled the driveway. Those were the lucky ones, because around that same time, executives at Anthem were pulling another all-nighter, trying to wrap their heads around their latest data breach of 37.5 million records and figuring out what to do next. And, what do they do next? This was bad — very bad — and one wonders if one or more of the frenzied executives thought to him or herself, or even aloud, “At least we’re not Sony.”
Why is that? 37.5 million records sure is a lot. A large-scale data breach can be devastating to a company. Expenses associated with incident response, forensics, loss of productivity, credit reporting, and customer defection add up swiftly on top of intangible costs, such as reputation harm and loss of shareholder confidence. However, not every data breach is the same and much of this has to do with the type of data that is stolen.
Let’s take a look at the three most common data types that cyber criminals often target. Remember that almost any conceivable type of data can be stolen, but if it doesn’t have value, it will often be discarded. Cyber criminals are modern day bank robbers. They go where the money is.

Common data classifications and examples
Customer financial data
This category is the most profuse and widespread in terms of the number of records breached, and mostly includes credit card numbers, expiration dates, cardholder names, and other similar data. Cyber criminals generally pillage this information from retailers in bulk by utilizing malware specifically written to copy the credit card number at the point-of-sale system when a customer swipes his or her card. This is the type of attack that was used against Target, Home Depot, Neiman-Marcus and many others, and incidents such as these have dominated the news for the last several years. Banks have also been attacked for information on customers.
When cyber criminals then attempt to sell this pilfered information on the black market, they are in a race against time — they need to close the deal as quickly as possible so the buyer is able to use it before the card is deactivated by the issuing bank. A common method of laundering funds is to use the stolen cards to purchase gift cards or pre-paid credit cards, which can then be redeemed for cash, sold, or spent on goods and services. Cardholder data is typically peddled in bulk and can go for as little as $1 per number.
Companies typically incur costs associated with response, outside firms’ forensic analysis, and credit reporting for customers, but so far, a large-scale customer defection or massive loss of confidence by shareholders has not been observed. However, Target did fire its CEO after the breach, so internal shake-ups are always a stark possibility.
Personally identifiable information
Personally Identifiable Information, also known as PII, is a more serious form of data breach, as those affected are impacted far beyond the scope of a replaceable credit card. PII is information that identifies an individual, such as name, address, date of birth, driver’s license number, or Social Security number, and is exactly what cyber criminals need to commit identity theft. Lines of credit can be opened, tax refunds redirected, Social Security claims filed — essentially, the possibilities of criminal activities are endless, much like the headache of the one whose information has been breached.
Unlike credit cards, which can be deactivated and the customer reimbursed, one’s identity cannot be changed or begun anew. When a fraudster gets a hold of PII, the unlucky soul whose identity was stolen will often struggle for years with the repercussions, from arguing with credit reporting agencies to convincing bill collectors that they did not open those lines of credit.
Because of the long-lasting value of PII, it sells for a much higher price on the black market — up to $15 per record. This is most often seen when companies storing a large volume of customer records experience a data breach, such as a healthcare insurer. This is much worse for susceptible consumers than a run-of-the-mill cardholder data breach, because of the threat of identity theft, which is more difficult to mitigate than credit card theft.
Company impact is also very high, but is still on par with a cardholder data breach in that a company experiences costs in response, credit monitoring, etc.; however, large-scale customer defection still has not been observed as a side effect. It’s important to note that government fines may be associated with this type of data breach, owing to the sensitive nature of the information.
Internal company information
This type of breach has often taken a backseat to the above-mentioned types, as it does not involve a customer’s personal details, but rather internal company information, such as emails, financial records, and intellectual property. The media focused on the Target and Home Depot hacks, for which the loss was considerable in terms of customer impact, but internal company leaks are perhaps the most damaging of all, as far as corporate impact.
The Sony Pictures Entertainment data breach eclipsed in magnitude anything that has occurred in the retail sector. SPE’s movie-going customers were not significantly impacted (unless you count having to wait a while longer to see “The Interview” — reviews of the movie suggest the hackers did the public a favor); the damage was mostly internal. PII of employees was released, which could lead to identity theft, but the bulk of the damage occurred due to leaked emails and intellectual property. The emails themselves were embarrassing and clearly were never meant to see the light of day, but unreleased movies, scripts and budgets were also leaked and generously shared on the Internet.
Many firms emphasize data types that are regulated (e.g. cardholder data, health records, company financials) when measuring the impact of a data breach, but loss of intellectual property cannot be overlooked. Examine what could be considered “secret sauce” for different types of companies. An investment firm may have a stock portfolio for its clients that outperforms its competitors. A car company may have a unique design to improve fuel efficiency. A pharmaceutical company’s clinical trial results can break a company if disclosed prematurely.
Although the U.S. government is not thought of as a “firm” and is not usually considered when discussing fissures in security, when the National Security Agency’s most secret files were leaked by flagrant whistleblower Edward Snowden, the government experienced a very significant data breach. Some would argue it is history’s worst of its kind, considering the ongoing impact on the NSA’s secretive operations.
Now what?
Whenever I am asked to analyze a data breach or respond to a data breach, I am almost always asked, “How bad is it?” The short answer: it depends.
It depends on the type of data that was breached and how much of it. Many states do not require notification of a data breach of customer records unless it meets a certain threshold (usually 500). A company can suffer a massive system intrusion that affects the bottom line, but if the data is not regulated (e.g. HIPAA, GLBA) or doesn’t trigger a mandatory notification as required by law, the public probably won’t know about it.
Take a look at your firm’s data classification policy, incident response and risk assessments. A risk-based approach to the aforementioned is a given, but be sure you are including all data types and the wide range of threats and consequences.
Originally published at www.csoonline.com on March 17, 2015.