
Full disclosure

1 October 2008
Bruce Schneier, chief security technology officer of BT

In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of ‘full disclosure’, asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.
The ‘Oyster card’ used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston ‘T’ was the centre of the US case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start – despite facing an open-and-shut case of First Amendment prior restraint.
The US court has since seen the error of its ways – but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.
The ethics of full disclosure
The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors – who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.
Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes’ existence, or calling them just theoretical. It was not until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice so they can fix their systems before the vulnerability is published. But even with this ‘responsible disclosure’ protocol, it is the threat of disclosure that motivates vendors to act. Full disclosure is the mechanism by which computer security improves.
Medieval guilds
Outside of computer security, secrecy is much more the norm. Some security communities, like locksmiths, behave much like medieval guilds, divulging the secrets of their profession only to those within it. These communities hate open research, and have responded with surprising vitriol to researchers who have found serious vulnerabilities in bicycle locks, combination safes, master-key systems, and many other security devices.
Researchers have received a similar reaction from other communities more used to secrecy than openness. Researchers – sometimes young students – who discovered and published flaws in copyright-protection schemes, voting-machine security and now wireless access cards have all suffered recriminations and sometimes lawsuits for not keeping the vulnerabilities secret. When Christopher Soghoian created a website allowing people to print fake airline boarding passes, he got several unpleasant visits from the FBI.
Fundamentally fragile
This preference for secrecy comes from confusing a vulnerability with information about that vulnerability. Using secrecy as a security measure is fundamentally fragile. It assumes that the bad guys do not do their own security research. It assumes that no one else will find the same vulnerability. It assumes that information will not leak out even if the research results are suppressed.
These assumptions are all incorrect.
The problem is not the researchers; it is the products themselves. Companies will only design security as good as their customers know to ask for. Full disclosure helps customers evaluate the security of the products they buy, and educates them in how to ask for better security. The Dutch court got it exactly right when it wrote: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”
In a world of forced secrecy, vendors make inflated claims about their products, vulnerabilities do not get fixed, and customers are no wiser. Security research is stifled, and security technology does not improve. The only beneficiaries are the bad guys.
If you will forgive the analogy, the ethics of full disclosure parallel the ethics of not paying kidnapping ransoms. We all know why we do not pay kidnappers: it encourages more kidnappings. Yet in every kidnapping case, there is someone – a spouse, a parent, an employer – with a good reason why, in this one case, we should make an exception.
We want researchers to publish vulnerabilities because that is how security improves. But in every case there is someone – the Massachusetts Bay Transportation Authority, the locksmiths, an election-machine manufacturer – who argues that, in this one case, we should make an exception.
Opinion
We should not. The benefits of responsibly publishing attacks greatly outweigh the potential harm. Disclosure encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers. It is how we learn about security, and how we improve future security.

