The Fallacy of the "Secret Master Key"

I’m often at a loss as to how to explain to people the thorny social issues that arise from our complex contemporary technology. Much of the time, before you can even begin discussing the implications of a particular topic, you have to explain all sorts of complicated underlying concepts to your audience. By the time you arrive at what is actually THE POINT, your listeners, intelligent as they may be, have that glazed-over look in their eyes which tells you that you’re taxing their attention span.

That’s why, when a perfect, real-life analogy falls into my lap, I seize it. That’s what I’ll be doing here.

Today, the big news story in IT security is about something you might not associate with IT: luggage locks. Specifically, TSA-approved locks. Bear with me while I explain: because we live in a post-9/11 world filled with terrorists and criminals and other bogeymen hiding around every corner, not only do airline passengers often feel the need to lock their luggage, but TSA inspectors have the right to open and inspect certain pieces of luggage. As a result, TSA inspectors were cutting scores of locks off luggage to inspect the contents as bags passed along conveyor belts for loading. Naturally, that led to scores of angry travellers asking why the locks they paid for had to be destroyed.

For some in the TSA, the solution was to partner with lock manufacturers to create a “TSA-approved” lock that would open with a master key; a key which would be known only to the TSA and its inspectors. This way, passengers could lock their luggage, but it could be opened as needed by TSA inspectors. Brilliant, right?

No. It’s fantastically naive.

Recently, the Washington Post accidentally published photos of a TSA inspector’s keychain, which included the master key for “TSA-approved” locks. The Post quickly yanked the photo, but it was too late: it had already been seen. Yesterday, someone published a 3D-printable model of the master key, which they’d created by inspecting the photo. By the afternoon, another person had 3D-printed the key and tried it on a TSA-approved lock. It worked.

It’s probably safe to say that there are a few million of these locks in circulation. All of them are now useless. We shouldn’t blame the people who created the 3D-printed copies; they only did what any criminal could have done with a few hours of effort. Rather, we should condemn the authorities for taking such a lackadaisical approach to security. As Andy Greenberg of Wired points out:

“The real security blunder, as Berkeley computer security researcher Nicholas Weaver noted after the key photos were first published, was made by the TSA and the Washington Post, who released the photos on the Post’s website. Publishing photos of sensitive keys, after all, is a well-understood screwup in the world of physical security, where researchers have shown for years that a key can be decoded and reproduced even from a photo taken from as far away as 200 feet and at an angle.”

So, that concludes my setup. What does this have to do with IT security, you ask? Everything.

Right now, governments are gravitating towards the idea that they should have a “master key” to access any kind of technology they want. Phones, computer operating systems, cloud data, SSL and other encryption: people in government are saying these should all have some kind of “back door” accessible only through a kind of “master key” held exclusively by governments and their agencies. FBI director Jim Comey publicly called for this as recently as this past July. UK Prime Minister David Cameron wants to ban encryption unless it includes a secret backdoor for the government.

Does this approach sound familiar? It should. And it has the exact same problems as the TSA’s “master key”.

Our civilization has never before existed in a state where technology so completely permeates our lives. Yet the state of our IT security is an incredibly sorry one. The past four years have seen dozens of high-profile hacks: Sony, GoDaddy, Apple; the very companies which should be among the best positioned to defend themselves from hacking, all compromised. A group called LulzSec went on a hacking spree in 2011 just to demonstrate how easily they could compromise systems. Overseas criminals are also increasingly targeting ordinary people, extorting them under the threat of stealing their identities or destroying their data. What’s more, it’s been demonstrated that much of the critical infrastructure run by technology is woefully underprepared for a cyber-attack. Today’s malicious hackers are just as sophisticated as (if not more so than) the people designing and managing the systems where your most sensitive data resides.

It is a foregone conclusion, then, that if governments force companies to engineer “backdoors” into their software, it will only be a short time before some hacker replicates or steals the “master key”. At that point, hundreds of millions of devices, phones, computers, and cloud servers will suddenly be vulnerable to exploitation. Worse, that hacker will likely keep the discovery quiet and sell it to the highest bidders.
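To make the risk concrete, here’s a toy sketch of how a key-escrow (“master key”) scheme works. This is purely illustrative: the XOR “cipher” below is deliberately fake stand-in crypto, and all the names are my own invention. The point it demonstrates is structural: every message gets its own key, but each of those keys is also wrapped under a single escrowed master key, so one leak unlocks everything.

```python
import secrets

def xor_bytes(data, key):
    # Toy XOR "cipher" -- NOT real encryption, just enough to show the structure.
    return bytes(b ^ k for b, k in zip(data, key * (len(data) // len(key) + 1)))

MASTER_KEY = secrets.token_bytes(32)  # the escrowed "master key"

def encrypt_with_escrow(plaintext):
    session_key = secrets.token_bytes(32)              # unique per message
    ciphertext = xor_bytes(plaintext, session_key)
    wrapped_key = xor_bytes(session_key, MASTER_KEY)   # escrow copy of the key
    return ciphertext, wrapped_key

def decrypt_with_master(ciphertext, wrapped_key, master_key):
    # Anyone holding the master key can unwrap ANY session key...
    session_key = xor_bytes(wrapped_key, master_key)
    # ...and therefore read any message ever encrypted under this scheme.
    return xor_bytes(ciphertext, session_key)

# Two unrelated users encrypt independently...
c1, w1 = encrypt_with_escrow(b"alice's tax records")
c2, w2 = encrypt_with_escrow(b"bob's medical files")

# ...yet a single leaked master key opens both.
print(decrypt_with_master(c1, w1, MASTER_KEY))  # prints b"alice's tax records"
print(decrypt_with_master(c2, w2, MASTER_KEY))  # prints b"bob's medical files"
```

Note that the strength of each per-message key is irrelevant here: the whole system is only ever as secure as the one escrowed key, which is exactly the TSA-lock problem in software form.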

Contrary to what certain government officials seem to believe, there’s no way to “add a backdoor” while “keeping the overall system secure”. This is a contradiction in terms. If we want to keep our data safe, be it corporate, personal, or government, we have to design systems which lock down tight, not ones designed with deliberate vulnerabilities.

In a world where one key can open every lock, none of those locks can do what they were built to do in the first place.

lock and circuit board image - copyright: wk1003mike/Shutterstock