The Way Business Is Moving
Issue Date: October 2008

Security ROI

Bruce Schneier, chief security technology officer, BT

Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

It has become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.
It is a good idea in theory, but it is mostly bunk in practice.
Before I get into the details, there is one point I have to make. 'ROI' as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It is an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just does not make sense in this context.
The bottom line
But as anyone who has lived through a company’s vicious end-of-year budget-slashing exercise knows, when you are trying to make your numbers, cutting costs is the same as increasing revenues. So while security cannot produce ROI, loss prevention most certainly affects a company’s bottom line.
And a company should implement only security countermeasures that affect its bottom line positively. It should not spend more on a security problem than the problem is worth. Conversely, it should not ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.
Annualised loss expectancy
The classic methodology is called annualised loss expectancy (ALE), and it is straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10% chance of getting robbed and the cost of being robbed is $10 000, then you should spend $1000 a year on security. Spend more than that, and you are wasting money. Spend less than that, and you are also wasting money.
Of course, that $1000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40% – to 6% a year – then you should spend no more than $400 on it. If another security measure reduces it by 80%, it is worth $800. And if two security measures both reduce the chance of being robbed by 50% and one costs $300 and the other $700, the first one is worth it and the second is not.
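The store arithmetic above can be sketched in a few lines of Python. The figures are the article's own; the function names are illustrative, not part of any standard:

```python
def ale(incident_cost, annual_probability):
    """Annualised loss expectancy: cost of an incident times its yearly odds."""
    return incident_cost * annual_probability

def countermeasure_value(incident_cost, annual_probability, risk_reduction):
    """Maximum worthwhile annual spend on a measure that cuts the risk
    by the given fraction."""
    return ale(incident_cost, annual_probability) * risk_reduction

# A 10% chance of a $10 000 robbery justifies at most $1000 a year in total.
print(ale(10_000, 0.10))                         # 1000.0

# A measure cutting the robbery risk by 40% is worth at most $400 ...
print(countermeasure_value(10_000, 0.10, 0.40))  # 400.0
# ... and one cutting it by 80% is worth at most $800.
print(countermeasure_value(10_000, 0.10, 0.80))  # 800.0
```

The same comparison shows why the two 50%-effective measures differ: each is worth $500 a year, so the $300 one pays for itself and the $700 one does not.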
The actuarial tail
The key to making this work is good data; the term of art is 'actuarial tail'. If you are doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store’s neighbourhood and maybe have some idea of how much cameras improve the odds of convincing criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale; maybe you are having trouble hiring salespeople to work the night shift.
With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night – assuming that the closed store will not get robbed as well. And then you can decide whether to install one.
Cybersecurity is considerably harder, because there just is not enough good data. There are not good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures – or specific configurations of countermeasures – mitigate those risks. We do not even have data on incident costs.
One problem is that the threat moves too quickly. The characteristics of the things we are trying to prevent change so quickly that we cannot accumulate data fast enough. By the time we get some data, there is a new threat model for which we do not have enough data. So we cannot create ALE models.
The million-dollar question
But there is another problem, and it is that the maths quickly falls apart when it comes to rare and expensive events. Imagine you calculate the cost – reputational costs, loss of customers and so on – of having your company’s name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10 000 of that happening in any one year. ALE says you should spend no more than $2000 mitigating that risk.
So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You cannot argue, since we are just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.
It gets worse when you deal with even more rare and expensive events. Imagine you are in charge of terrorism mitigation at a chlorine plant. What is the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand, 1 in a million, 1 in 10 million? Depending on how you answer those two questions – and any answer is really just a guess – you can justify spending anywhere from $10 to $100 000 annually to mitigate that risk.
Jigging the numbers
Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by – and I am making this up – 30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means that the extra waiting time at airports has cost us a collective 43 000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has 'killed' 620 people per year – 930 if you calculate the numbers based on 16 hours of awake time per day. So the question is: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
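The back-of-the-envelope arithmetic above checks out, and is easy to reproduce (the 30-minute figure is, as noted, made up; the variable names are illustrative):

```python
boardings = 760_000_000   # US passenger boardings in 2007
extra_minutes = 30        # assumed extra wait per passenger

total_hours = boardings * extra_minutes / 60
years_waited = total_hours / (24 * 365)      # calendar years of waiting
print(round(years_waited, -3))               # roughly 43 000 years

lives_calendar = years_waited / 70           # 70-year life expectancy
lives_awake = total_hours / (16 * 365 * 70)  # counting only awake hours
print(round(lives_calendar), round(lives_awake))  # 620 930
```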
This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They have jiggered the numbers so that they do.
This does not mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only. So when you get an ROI model from your vendor, take its framework and plug in your own numbers. Do not even show the vendor your improvements; it will not consider any changes that make its product or service less cost-effective to be an 'improvement'. And use those results as a general guide, along with risk management and compliance analyses, when you are deciding what security products and services to buy.