Cognitive Bias · Tony Martin-Vegue

The Birth of a Ransomware Urban Myth

"Nearly 40% of ransomware victims pay up!" Sounds shocking, right? Turns out… that headline was based on eight people. This post unpacks how bad stats become infosec urban legends—and what that means for decision-making.


Would you be surprised to learn that “nearly 40% of ransomware victims pay attackers,” according to a recent article published by DarkReading? I sure was. The number of victims who pay a ransom, and how much they pay, has been an elusive figure for years now. To date, law enforcement has not collected and published ransomware crime statistics the way it has for other forms of criminal activity.

Junk research published by security vendors has always irked me because they use and misuse statistics to spread fear and sell products. Security threats are overblown and solutions are oversimplified, leading to a bevy of problems ranging from the creation of information security urban myths to poor corporate decision making based on faulty assumptions.

Sadly, the DarkReading article and underlying research is no exception. It’s a prime example of what’s wrong with vendor-sponsored research and how the rest of us pick up quotes, circulate and re-tweet without giving it a minute of critical thought. It’s easy to spot — just grab a statistic and follow it down the rabbit hole. Let’s dissect the ransomware payment rate and find out what’s really going on.

DarkReading published the article on April 14, 2017, under a headline claiming that nearly 40% of ransomware victims pay their attackers.

If you follow the article to the end, a link to the research is cited, along with the name of the security vendor that performed it (Trustlook). They have a nice blog post and a cute, entertaining infographic — great reading material to send to the CISO tomorrow morning. The next step is to check the validity of the research and see exactly what Trustlook is claiming.

  • Trustlook is a security vendor and sells a suite of products that protects end-users from malware, including ransomware, and other forms of attack.

  • The research is based on a survey. Surveys are polls; you ask a group of people a question and record the answers.

  • Trustlook surveyed 210 of their Mobile Security product customers. Mobile Security is an Android-based anti-virus app.

  • Trustlook did not disclose a margin of error or a sampling methodology, so there is no basis for treating the survey as representative. The results describe only the survey takers themselves and cannot be extrapolated to a larger group or the general population.

This would be enough to make anyone who took a semester of college statistics roll their eyes and move on. However, the assertions in the infographic really take the cake. When percentages are used, readers tend to lose sight of the underlying numbers. Breaking down the percentages further:

  • We know 210 customers were surveyed (Trustlook disclosed this).

  • Of the 210, 45% have never heard of ransomware. Put another way, 94 out of 210 customers answered a survey about ransomware, but have never heard of ransomware. Trustlook conducted research and published a survey on ransomware in which nearly half of the respondents don’t know what ransomware is.

  • That leaves 116 respondents who had the wherewithal to understand the subject matter of the survey they were filling out.

  • Of the 116, 20 people had, at some point, been infected with ransomware.

  • Of the 20 that have been infected, 8 of them paid the ransom.
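To make the shrinking sample concrete, here is the arithmetic behind the headline as a short Python sketch. The inputs are the figures Trustlook disclosed; everything else is just division.

```python
total_surveyed = 210                        # Trustlook Mobile Security customers polled
never_heard = round(total_surveyed * 0.45)  # 45% had never heard of ransomware -> 94 people
aware = total_surveyed - never_heard        # 116 people left
infected = 20                               # respondents who reported a ransomware infection
paid = 8                                    # respondents who said they paid the ransom

print(f"Headline rate: {paid / infected:.0%} of 'victims' paid")  # 40%, i.e. 8 people
print(f"Share of the whole survey: {paid / total_surveyed:.1%}")  # about 3.8% of 210 respondents
```

Even before you get to self-selection and sampling bias, any honest margin of error on a subsample of 20 people is so wide that a 40% figure tells you almost nothing about ransomware victims in general.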

Let me say that again in case you missed it.

Trustlook found 8 of their customers who said they paid a ransom and turned that into a headline-ready statistic about ransomware payment rates…

…and DarkReading expanded the claim to apply to all ransomware victims, reporting that nearly 40% of them pay up.

Two days later, it was everywhere: a quick Google search showed the statistic already being repeated across the web.


A new ransomware urban myth is born.

Read More
Quantitative Risk · Tony Martin-Vegue

Prioritizing Patches: A Risk-Based Approach



It’s been a tough few weeks for those of us that are responsible for patching vulnerabilities in the companies we work at. Not only do we have the usual operating system and application patches, we also have patches for VENOM and Logjam to contend with. The two aforementioned vulnerabilities are pretty serious and deserve extra attention. But, where to start and what to do first? Whether you have hundreds or thousands or hundreds of thousands of systems to patch, you have to start somewhere. Do you test and deploy patches for high severity vulnerabilities first, or do you continue to deploy routine patches, prioritizing systems critical to the functioning of your business?

It depends. You have to take a risk-based approach to patching, fully considering several factors: where the system sits on the network, the type of data it holds, what its function is, and whether there is a credible threat that could exploit the vulnerability the patch addresses.

There’s an old adage in risk management (and business in general): “When everything’s a priority, nothing is a priority.” How true it is. For example, if you scan your entire network for the Heartbleed vulnerability, the tool will return a list of every system the vulnerability has been found on. Depending on the size of your network, this could seem like an insurmountable task — everything is high risk.

A good habit to get into for all security professionals is to take a risk-based approach when you need to make a decision about resources. (“Resources” in this context can be money, personnel, time, re-tasking an application, etc.) Ask yourself the following questions:

  • What is the asset I’m protecting? What is the value?

  • Are there any compliance, regulatory or legal requirements around this system? For example, does it store PHI (Protected Health Information), is it in scope for Sarbanes-Oxley, or does it fall under PCI?

  • What are the vulnerabilities on this system?

  • What is the threat? Remember, you can have a vulnerability without a threat — think of a house that does not have a tornado shelter. The house is in California.

  • What is the impact to the company if a threat exploited the vulnerability and acted against the asset? Impact can take many forms, including lost productivity, lost sales, a data breach, system downtime, fines, judgments and reputational harm.

A Tale of Two Systems

Take a look at the diagram below. It illustrates two servers with the same web vulnerability, but different use cases and impact. A simple vulnerability scan would flag both systems as having high-severity vulnerabilities, but a risk-based approach to vulnerability mitigation reveals much different priorities.

[Diagram: two servers with the same web vulnerability but different use cases, exposure and business impact]

This is not to say that Server #2 could not be exploited. It very much could be, by an insider, a vendor or an outside attacker, and the issue needs to be remediated. However, it is much more probable that Server #1 will be compromised, and in a shorter time frame. Server #2 would also be on the list to get patched, but considering that attackers external to the organization have to try a little harder to exploit this type of vulnerability, and that the server is not critical to the functioning of the business, the mitigation priority is Server #1.
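To make the comparison concrete, here is a rough, illustrative scoring sketch in Python. The factor names and values are invented for the example; this is a back-of-the-envelope prioritization aid, not a formal methodology.

```python
# Hypothetical factor scores (1 = low, 5 = high), invented for illustration.
servers = {
    "Server #1": {"network_exposure": 5,      # internet-facing
                  "data_sensitivity": 5,      # stores sensitive data
                  "business_criticality": 5,
                  "threat_activity": 5},      # vulnerability actively exploited
    "Server #2": {"network_exposure": 2,      # internal network only
                  "data_sensitivity": 2,
                  "business_criticality": 2,
                  "threat_activity": 3},
}

def priority_score(factors: dict) -> int:
    """Toy scoring: same vulnerability, different context, different priority."""
    return sum(factors.values())

for name, factors in sorted(servers.items(), key=lambda kv: -priority_score(kv[1])):
    print(f"{name}: score {priority_score(factors)}")
# Server #1 sorts to the top even though both were flagged for the same vulnerability.
```

The exact scoring scheme matters far less than the habit of asking the questions above before deciding what to patch first.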

Your Secret Weapon

Most medium-to-large companies have a separate department dedicated to Business Continuity. Sometimes they are in IT as part of Disaster Recovery, and sometimes they are in a completely separate department, focusing on enterprise resiliency. Either way, one of the core functions of these departments is to perform a business impact analysis on critical business functions. For example, the core business functions of the Accounting department are analyzed. Continuity requirements are identified along with impact to the company. Many factors are considered, including financial, revenue stream, employee and legal/regulatory impact.

This is an excellent place to start if you need data on categorizing and prioritizing your systems. In some cases, the business impact analysis is mapped back to actual server names or application platforms, but even if it’s not, you can start using this data to improve your vulnerability management program.
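If the business continuity team can share even a simple export of its business impact analysis, you can join it against vulnerability scan output and patch the most critical systems first. A minimal sketch, assuming hypothetical CSV files and column names:

```python
import csv

# Hypothetical inputs: a BIA export (system -> criticality tier) and a scanner export.
with open("bia_export.csv") as f:
    criticality = {row["system"]: row["criticality_tier"] for row in csv.DictReader(f)}

with open("vuln_scan.csv") as f:
    findings = list(csv.DictReader(f))  # assumed columns: system, vulnerability, severity

# Work the queue by business criticality first, then by severity within each tier.
tier_rank = {"tier1": 0, "tier2": 1, "tier3": 2}
severity_rank = {"critical": 0, "high": 1, "medium": 2, "low": 3}

findings.sort(key=lambda r: (tier_rank.get(criticality.get(r["system"]), 3),
                             severity_rank.get(r["severity"], 4)))

for row in findings[:10]:
    print(row["system"], row["vulnerability"], row["severity"])
```

Even a rough mapping like this beats working a scanner report from top to bottom in alphabetical order.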

It’s difficult to decide where to deploy scarce resources. The steps outlined above are truly the tip of the iceberg, but they are nonetheless a great first step in helping to prioritize when and where to start implementing mitigating controls. The most successful information security departments are those that are able to think in risk-based terms naturally when evaluating control implementation. With practice, it becomes second nature.

About the Author: Tony Martin-Vegue works for a large global retailer leading the firm’s cyber-crime program. His enterprise risk and security analyses are informed by his 20 years of technical expertise in areas such as network operations, cryptography and system administration. Tony holds a Bachelor of Science in Business Economics from the University of San Francisco and has earned many certifications, including CISSP, CISM and CEH.

Originally published at www.tripwire.com on May 31, 2015.

Read More
Information Security · Tony Martin-Vegue

What’s the difference between a vulnerability scan, penetration test and a risk analysis?

Think vulnerability scan, pen test, and risk analysis are the same thing? They're not — and mixing them up could waste your money and leave you exposed. This post breaks down the real differences so you can make smarter, more secure decisions.

You’ve just deployed an ecommerce site for your small business or developed the next hot iPhone MMORPG. Now what?

Don’t get hacked!

An often overlooked, but very important process in the development of any Internet-facing service is testing it for vulnerabilities, knowing if those vulnerabilities are actually exploitable in your particular environment and, lastly, knowing what the risks of those vulnerabilities are to your firm or product launch. These three different processes are known as a vulnerability assessment, penetration test and a risk analysis. Knowing the difference is critical when hiring an outside firm to test the security of your infrastructure or a particular component of your network.

Let’s examine the differences in depth and see how they complement each other.

Vulnerability assessment

Vulnerability assessments are most often confused with penetration tests, and the two terms are often used interchangeably, but they are worlds apart.

Vulnerability assessments are performed by using an off-the-shelf software package, such as Nessus or OpenVAS, to scan an IP address or range of IP addresses for known vulnerabilities. For example, the software has signatures for the Heartbleed bug or missing Apache web server patches and will alert if they are found. The software then produces a report that lists the vulnerabilities it found and (depending on the software and options selected) gives an indication of the severity of each vulnerability and basic remediation steps.

It’s important to keep in mind that these scanners use a list of known vulnerabilities, meaning they are already known to the security community, hackers and the software vendors. There are vulnerabilities that are unknown to the public at large and these scanners will not find them.
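The signature idea is simple enough to illustrate in a few lines. This toy Python sketch matches a service banner against a tiny, hand-written catalog of known-vulnerable versions; real scanners such as Nessus or OpenVAS ship thousands of far more sophisticated checks, but the principle is the same.

```python
# Toy signature matching: does the service describe itself as a version with a
# known, published vulnerability? (Heartbleed affected OpenSSL 1.0.1 through 1.0.1f;
# everything else about this catalog is invented for illustration.)
KNOWN_VULNERABLE = {
    "OpenSSL/1.0.1f": "CVE-2014-0160 (Heartbleed)",
    # ...a real scanner's plugin feed contains many thousands of entries
}

def check_banner(banner: str) -> list[str]:
    """Return the known issues whose signature appears in the service banner."""
    return [issue for version, issue in KNOWN_VULNERABLE.items() if version in banner]

print(check_banner("Apache/2.4.7 (Ubuntu) OpenSSL/1.0.1f"))
# -> ['CVE-2014-0160 (Heartbleed)']
```

This also shows the limitation called out above: if a vulnerability isn’t in the catalog yet, the scanner has nothing to match against.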

Penetration test

Many “professional penetration testers” will actually just run a vulnerability scan, package up the report in a nice, pretty bow and call it a day. Nope — this is only a first step in a penetration test. A good penetration tester takes the output of a network scan or a vulnerability assessment and takes it to 11 — they probe an open port and see what can be exploited.

For example, let’s say a website is vulnerable to Heartbleed. Many websites still are. It’s one thing to run a scan and say “you are vulnerable to Heartbleed” and a completely different thing to exploit the bug and discover the depth of the problem and find out exactly what type of information could be revealed if it was exploited. This is the main difference — the website or service is actually being penetrated, just like a hacker would do.

Similar to a vulnerability scan, the results are usually ranked by severity and exploitability with remediation steps provided.

Penetration tests can be performed using automated tools, such as Metasploit, but veteran testers will write their own exploits from scratch.

Risk analysis

A risk analysis is often confused with the previous two terms, but it is also a very different animal. A risk analysis doesn’t require any scanning tools or applications — it’s a discipline that analyzes a specific vulnerability (such as a line item from a penetration test) and attempts to ascertain the risk — including financial, reputational, business continuity, regulatory and others — to the company if the vulnerability were to be exploited.

Many factors are considered when performing a risk analysis: asset, vulnerability, threat and impact to the company. An example of this would be an analyst trying to find the risk to the company of a server that is vulnerable to Heartbleed.

The analyst would first look at the vulnerable server, where it is on the network infrastructure and the type of data it stores. A server sitting on an internal network without outside connectivity, storing no data but vulnerable to Heartbleed has a much different risk posture than a customer-facing web server that stores credit card data and is also vulnerable to Heartbleed. A vulnerability scan does not make these distinctions. Next, the analyst examines threats that are likely to exploit the vulnerability, such as organized crime or insiders, and builds a profile of capabilities, motivations and objectives. Last, the impact to the company is ascertained — specifically, what bad thing would happen to the firm if an organized crime ring exploited Heartbleed and acquired cardholder data?

A risk analysis, when completed, will have a final risk rating with mitigating controls that can further reduce the risk. Business managers can then take the risk statement and mitigating controls and decide whether or not to implement them.
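To put a little arithmetic behind “ascertain the risk,” here is a hedged, back-of-the-envelope sketch for the two Heartbleed servers described above. Every number is invented for illustration; a real analysis would estimate ranges from data and expert judgment.

```python
# Back-of-the-envelope risk: expected annual loss =
#   (how often we expect a loss event per year) x (cost when it happens).
# All figures below are invented for illustration.
def expected_annual_loss(events_per_year: float, loss_per_event: float) -> float:
    return events_per_year * loss_per_event

internal_server = expected_annual_loss(0.05, 50_000)       # isolated, stores no sensitive data
cardholder_server = expected_annual_loss(0.5, 4_000_000)   # customer-facing, stores card data

print(f"internal server:   ${internal_server:,.0f} per year")    # $2,500
print(f"cardholder server: ${cardholder_server:,.0f} per year")  # $2,000,000
```

The same vulnerability produces wildly different exposure once the asset, the threat and the impact are taken into account, which is exactly the distinction a scan report cannot make.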

The three different concepts explained here are not exclusive of each other, but rather complement each other. In many information security programs, vulnerability assessments are the first step — they are used to perform wide sweeps of a network to find missing patches or misconfigured software. From there, one can either perform a penetration test to see how exploitable the vulnerability is, or a risk analysis to ascertain the cost/benefit of fixing it. Of course, you don’t need either to perform a risk analysis. Risk can be assessed anywhere a threat and an asset are present, whether that’s a data center in a hurricane zone or confidential papers sitting in a wastebasket.

It’s important to know the difference — each is significant in its own way and has vastly different purposes and outcomes. Make sure any company you hire to perform these services also knows the difference.

Originally published at www.csoonline.com on May 13, 2015.

Read More
Information Security · Tony Martin-Vegue

Not all data breaches are created equal — do you know the difference?

Not all data breaches are created equal — the impact depends on what gets stolen. From credit cards to corporate secrets, this post breaks down the real differences between breach types and why some are much worse than others.

It was one of those typical, cold February winter days in Indianapolis earlier this year. Kids woke up hoping for a snow day and old men groaned as they scraped ice off their windshields and shoveled their driveways. Those were the lucky ones, because around that same time, executives at Anthem were pulling another all-nighter, trying to wrap their heads around their latest data breach of 37.5 million records and figuring out what to do next. And what do they do next? This was bad — very bad — and one wonders if one or more of the frenzied executives thought to himself or herself, or even said aloud, “At least we’re not Sony.”

Why is that? 37.5 million records sure is a lot. A large-scale data breach can be devastating to a company. Expenses associated with incident response, forensics, loss of productivity, credit reporting, and customer defection add up swiftly on top of intangible costs, such as reputation harm and loss of shareholder confidence. However, not every data breach is the same and much of this has to do with the type of data that is stolen.

Let’s take a look at the three most common data types that cyber criminals often target. Remember that almost any conceivable type of data can be stolen, but if it doesn’t have value, it will often be discarded. Cyber criminals are modern day bank robbers. They go where the money is.

Common data classifications and examples

Customer financial data

This category is the most profuse and widespread in terms of the number of records breached, and mostly includes credit card numbers, expiration dates, cardholder names, and other similar data. Cyber criminals generally pillage this information from retailers in bulk by utilizing malware specifically written to copy the credit card number at the point-of-sale system when a customer swipes his or her card. This is the type of attack that was used against Target, Home Depot, Neiman-Marcus and many others, and incidents such as these have dominated the news for the last several years. Banks have also been attacked for information on customers.

When cyber criminals then attempt to sell this pilfered information on the black market, they are in a race against time — they need to close the deal as quickly as possible so the buyer is able to use it before the card is deactivated by the issuing bank. A common method of laundering funds is to use the stolen cards to purchase gift cards or pre-paid credit cards, which can then be redeemed for cash, sold, or spent on goods and services. Cardholder data is typically peddled in bulk and can go for as little as $1 per number.

Companies typically incur costs associated with response, outside firms’ forensic analysis, and credit reporting for customers, but so far, a large-scale customer defection or massive loss of confidence by shareholders has not been observed. However, Target did fire its CEO after the breach, so internal shake-ups are always a stark possibility.

Personally identifiable information

Personally Identifiable Information, also known as PII, is a more serious form of data breach, as those affected are impacted far beyond the scope of a replaceable credit card. PII is information that identifies an individual, such as name, address, date of birth, driver’s license number, or Social Security number, and is exactly what cyber criminals need to commit identity theft. Lines of credit can be opened, tax refunds redirected, Social Security claims filed — essentially, the possibilities of criminal activities are endless, much like the headache of the one whose information has been breached.

Unlike credit cards, which can be deactivated and the customer reimbursed, one’s identity cannot be changed or begun anew. When a fraudster gets hold of PII, the unlucky soul whose identity was stolen will often struggle for years with the repercussions, from arguing with credit reporting agencies to convincing bill collectors that they did not open those lines of credit.

Because of the long-lasting value of PII, it sells for a much higher price on the black market — up to $15 per record. This is most often seen when companies storing a large volume of customer records experience a data breach, such as a healthcare insurer. This is much worse for susceptible consumers than a run-of-the-mill cardholder data breach, because of the threat of identity theft, which is more difficult to mitigate than credit card theft.

Company impact is also very high and broadly on par with a cardholder data breach: the company incurs costs for response, credit monitoring and so on; however, large-scale customer defection still has not been observed as a side effect. It’s important to note that government fines may be associated with this type of data breach, owing to the sensitive nature of the information.

Internal company information

This type of breach has often taken a backseat to the above-mentioned types, as it does not involve a customer’s personal details, but rather internal company information, such as emails, financial records, and intellectual property. The media focused on the Target and Home Depot hacks, for which the loss was considerable in terms of customer impact, but internal company leaks are perhaps the most damaging of all, as far as corporate impact.

The Sony Pictures Entertainment data breach eclipsed in magnitude anything that has occurred in the retail sector. SPE’s movie-going customers were not significantly impacted (unless you count having to wait a while longer to see “The Interview” — reviews of the movie suggest the hackers did the public a favor); the damage was mostly internal. PII of employees was released, which could lead to identity theft, but the bulk of the damage occurred due to leaked emails and intellectual property. The emails themselves were embarrassing and clearly were never meant to see the light of day, but unreleased movies, scripts and budgets were also leaked and generously shared on the Internet.

Many firms emphasize data types that are regulated (e.g. cardholder data, health records, company financials) when measuring the impact of a data breach, but loss of intellectual property cannot be overlooked. Examine what could be considered “secret sauce” for different types of companies. An investment firm may have a stock portfolio for its clients that outperforms its competitors. A car company may have a unique design to improve fuel efficiency. A pharmaceutical company’s clinical trial results can break a company if disclosed prematurely.

Although it’s not thought of as a “firm” and not usually considered when discussing fissures in security, when the National Security Agency’s most secret files were leaked by flagrant whistleblower Edward Snowden, the U.S. government experienced a very significant data breach. Some would argue it is history’s worst of its kind, when considering the ongoing impact on the NSA’s secretive operations.

Now what?

Whenever I am asked to analyze a data breach or respond to a data breach, I am almost always asked, “How bad is it?” The short answer: it depends.

It depends on the type of data that was breached and how much of it. Many states do not require notification of a data breach of customer records unless it meets a certain threshold (usually 500). A company can suffer a massive system intrusion that affects the bottom line, but if the data is not regulated (e.g. HIPAA, GLBA) or doesn’t trigger a mandatory notification as required by law, the public probably won’t know about it.

Take a look at your firm’s data classification policy, incident response and risk assessments. A risk-based approach to the aforementioned is a given, but be sure you are including all data types and the wide range of threats and consequences.

Originally published at www.csoonline.com on March 17, 2015.

Read More
Information Security · Tony Martin-Vegue

The Sony Pictures Entertainment hack: lessons for business leaders

The Sony Pictures hack wasn’t just a breach — it was a wake-up call for every business leader. This post breaks down what Sony got wrong, how to actually quantify risk, and why your business continuity plan should be printed and ready for a cyber apocalypse.


The November 2014 hack against Sony Pictures Entertainment reads like something straight out of a low-budget movie: employees walk into work one morning to see red skulls appear on their computer monitors, with threats of destruction unless certain demands are met. Move the clock forward several months and while Sony is still picking up the pieces, the security community is trying to figure out if this is just another data breach or a watershed moment in the cat-and-mouse game that defines this line of work.

Plenty of retrospection has occurred, both inside Sony and out, and (rightly so) the conversation has centered on what could have been done differently to prevent, detect and respond to this unprecedented hack. What some people think of as a problem that is limited to cyber-security is actually a problem that spans all aspects of a business.

What lessons can business leaders, outside of the field of cyber-security, learn from the hack?

Enterprise Resiliency

On Monday, November 24th the hacking group, Guardians of Peace or GOP, made the attack known to both Sony and to the public at the same time. Sony management made the decision to shut down computer systems: file servers, email, Internet access, access for remote employees — all computing equipment. Under the circumstances, shutting down a global company was a bold, but necessary, thing to do. The depth and scope of the breach wasn’t completely known at the time and those in charge felt it was important to stop the bleeding and prevent further damage from occurring.

Sony systems were down for over six days. In that time, employees used other methods to communicate with each other, such as text messaging and personal email; in other words, they reverted to manual workarounds. Manual workarounds are the cornerstone of a good business continuity plan, which helps firms be more resilient during an emergency. During a crisis or a serious incident, a company has to assume that access to any computing resources could be unavailable for an extended period of time. There is no way of knowing whether Sony had business continuity plans that covered an extended outage of IT equipment or whether they invoked them, but one thing is clear — most companies do not factor in this type of disaster. Most business continuity planning revolves around localized disasters, such as terrorist attacks, hurricanes and severe weather. The outage that Sony experienced was global, total and extended.

If you manage a department, make sure you have a printed business continuity plan that includes call trees, manual workarounds and information on how to get a hold of each other if company resources are unreachable. Many companies’ plans assume a worst-case scenario consisting of a building or facility being inaccessible, such as a power outage or mandatory evacuation due to a natural disaster, but we are in a new era in which the worst case could be the complete shut-down of all computing equipment. Plan for it.

Defense in Depth

Defense in depth is a concept from the military that has been adopted by many in the cyber-security space. The basic idea is to have rings or layers of defense, rather than putting all your resources in one method. Think of a medieval castle under assault. The defenders are not going to place all of their men in front of the door of the throne room to protect the King. They dig a moat to make it harder to reach the door, raise bridges, close gates, place archers in parapets, pour hot oil on attackers using ladders, strategically deploy swordsmen inside the castle for when it is breached and a special King’s Guard as a last resort.

This method is very effective because if one method of defense fails, there are still others for the attackers to overcome. This also delays the attackers, buying valuable time for the defender to respond.

This technique is used in the cyber-security space in a similar way, as one would deploy resources to defend a castle. Many companies already implement some form of defense in depth, but the Sony hack is a good reminder to audit defenses and ensure you have the right resources in the right places. From outside the network coming in, firewalls and intrusion detection systems (IDS) are deployed. Computers are protected with antivirus and encryption. The most valuable data (in Sony’s case, presumably unreleased movies and internal emails) should be protected with a separate set of firewalls, intrusion detection, etc. — a King’s Guard. Strong company policies and security awareness training are also used as defense measures.
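One way to sanity-check your own layers is simply to write them down per asset class and look for gaps. A toy sketch; the layer names and controls are illustrative, not a standard taxonomy.

```python
# Illustrative defense-in-depth checklist. An empty list is a gap worth
# explaining, not automatically a failure.
layers = {
    "perimeter":      ["firewalls", "intrusion detection (IDS)"],
    "endpoints":      ["antivirus", "disk encryption"],
    "crown jewels":   [],  # e.g. unreleased films, internal email archives
    "people":         ["security awareness training", "background checks"],
    "third parties":  ["vendor security reviews"],
}

for layer, controls in layers.items():
    status = ", ".join(controls) if controls else "NO CONTROLS LISTED"
    print(f"{layer:>14}: {status}")
```

In Sony’s case, the “crown jewels” row is the one that matters most; a separate ring of controls around the most valuable data is the King’s Guard of the castle analogy.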

[Photo: Caerphilly Castle, Caerphilly, South Wales]

Admittedly, this is a lot — and it is only half of the story. Protecting a company relies just as much on resources outside of the security department as it does on resources inside it. Do you spend a million dollars a year on security measures but have no method of controlling access to and from your physical building? Can someone waltz in the front door wearing an orange vest and a hardhat and walk off with the CFO’s laptop? Do you encrypt every laptop but don’t perform criminal background checks on employees, consultants and contractors? Maybe you spend a fortune on penetration testing your websites but don’t do any security checks on vendors that have access to the company network. Target learned this lesson the hard way.

In order to create defense in depth, it is crucial to have the commitment of other departments such as human resources, facilities management, vendor management, communications and others, as they all contribute to the security posture of a company. You can’t rely on your security department to protect the whole company. It truly is a team effort that requires cooperation across all levels in a company. Just like defending a castle. Everyone has a job to do.

Managing Risk

Sony has been criticized for what have been perceived to be lax security measures. Some of the criticism is Monday-morning quarterbacking and some of it is valid. In a 2007 article for CIO Magazine, Jason Spaltro, then an executive director of information security at Sony Pictures Entertainment, was profiled in a cringe-worthy piece called “Your Guide to Good-Enough Security.” In it, Spaltro brags about convincing a SOX auditor not to write up weak passwords as a control failure and explains that it doesn’t make business sense to spend $10 million on fixing a security problem that would only cause $1 million in loss.

He’s right, partly. It doesn’t make sense to spend 10 times more on a problem than the asset is worth. This isn’t a control failure or a problem with perception — this is a risk management problem. The first question a business leader needs to ask is, “Where did you come up with the $1 million in loss figure and is it accurate?” The viewpoint taken by Sony doesn’t fully take into account the different types of losses that a company can experience during a data breach. The Sony breach clearly demonstrates a failure of security controls in several different areas, but the real failure is the firm’s inability to measure and manage risk.

A good risk analysis program identifies an asset, whether that is employee health information, movie scripts, or even reputation and investor confidence. From there, any threat that can act against the asset is identified, along with corresponding vulnerabilities. For example, company intellectual property stored on file servers is a very important asset, with cybercriminals being a well-resourced and motivated threat. Several vulnerabilities can exist at the same time, ranging from weak passwords on the file server to users who are susceptible to phishing emails that install malware on their systems.

Quantifying Risk

A rigorous risk analysis takes the aforementioned data and runs it through a quantitative risk model. The risk analyst gathers data on different types of loss events: productivity loss, loss of competitive advantage, asset replacement cost, fines, judgments; the list goes on. The final risk assessment returns an annualized exposure. For example, a data breach could occur once every ten years at a cost of $100 million per incident; therefore, the company has an annualized exposure of $10 million. This makes it very easy for business managers to run a cost-benefit analysis on security expenditures. If the analysis is done correctly and sound methods are used, security sells itself.
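Here is that example worked out, extended to the cost-benefit question raised in the Spaltro anecdote earlier. The breach frequency and cost come straight from the example above; the control figures are invented purely for illustration.

```python
# Annualized exposure = expected loss events per year x loss per event.
def annualized_exposure(events_per_year: float, loss_per_event: float) -> float:
    return events_per_year * loss_per_event

before = annualized_exposure(events_per_year=1 / 10, loss_per_event=100_000_000)
print(f"Annualized exposure: ${before:,.0f}")  # $10,000,000 per year

# Hypothetical control: costs $2M per year and is estimated to cut breach
# frequency in half and reduce the loss when a breach does occur.
control_cost = 2_000_000
after = annualized_exposure(events_per_year=1 / 20, loss_per_event=80_000_000)
print(f"Risk reduction: ${before - after:,.0f} vs control cost ${control_cost:,.0f}")
# A $6M reduction for $2M spent: with these (invented) numbers, the control pays for itself.
```

Framed this way, the debate stops being about whether something is “high risk” and becomes an ordinary spending decision.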

80d59-1khockrjmuwdb7gho_jthrw.png

Components of a risk analysis

In other words, Spaltro is right. You would never spend more on a control than the cost of an incident. However, not all risk is communicated to management in a way that allows for informed business decisions. As a business leader, look at how risk is being communicated to you. Is risk being communicated in a way that makes business decisions easy (dollars) or are softer methods being used, such as, “This vulnerability is a high risk!” (High compared to what? What exactly does High mean?)

In many other aspects of business and in other fields, risk is communicated in terms of annualized exposure and run through a cost-benefit analysis. Information Security, however, is lagging behind and the Sony hack is proof that the field must mature and adopt more advanced methods of quantifying and communicating risk. Decision makers must insist on it.

Conclusion

There are some battles in ancient history that strategists never tire of studying. Lessons and tactics from them are taught in schools to this day and employed on the battlefield. The Sony hack will go down in history as one such battle that we will continue to learn from for years to come. Sony had strategic choices to make in the moment, and those are continuing to play out in the media and across the cyber-security landscape. What we can glean from this today is that the firms that are winning are the firms that look at cyber-security on a macro, holistic level. The individual components of a successful program are interconnected throughout all aspects of the business, and it is through this understanding that business leaders can stay one step ahead.

Read More