How the NSA’s approach to security is less about protecting us and more about keeping us “exploitable”.
I just spent an hour watching a presentation given by Jacob Appelbaum at the Chaos Communication Congress in Hamburg this past December. Appelbaum presents even more evidence and explanation of the far-reaching data collection being done by the NSA. The presentation accompanies a Der Spiegel article released the same day that delves into specific details about the depth and nature of the NSA’s passive and active data mining operations. The documents leaked by Snowden last year outlined a lot of this and obviously paved the way for the flow of this kind of information, but the Der Spiegel article was unique in its technical specificity. It’s one thing to claim that big government is spying on citizens; it’s another thing to get a practical explanation of how they’re doing it. That makes it all more tangible.
One of the more shocking revelations in Appelbaum’s talk was how the NSA can use continuous wave radar to pull data from passive tracking devices placed on or near the signals they’re trying to intercept. In layman’s terms, these passive devices are similar in nature to the stickers libraries put on books to prevent theft. They don’t push out an active signal and don’t require a power source; they’re merely reflective. When you walk through the sensors at the entrance of the library, you pass through an electromagnetic field; if the sensors detect the specific signal being reflected back by the sticker, the alarm goes off. Except in the case of the NSA, these devices can now be affixed to a video cable, a keyboard, or an audio line and transmit data back when illuminated by the radar. The NSA can sit outside your house with a big microwave emitter, begin sending this directed signal into your house, and measure the data that gets reflected back. You just have to hope you’re not targeted for any length of time, or else you might get cancer.
He also discussed the concept of “persistent attacks”. These are the kind of computer attacks that infect at such a low level that standard means of eliminating trojans fail. For example, they can infect the BIOS of a computer so that the machine remains compromised even after the operating system has been wiped and reinstalled, or infect the firmware of a hard drive so the drive remains compromised even after it’s been formatted. These attacks aren’t particularly new; US-CERT published warnings about similar threats in 2005. Seeing them advertised as tools of the NSA, though, troubles me.
Several other techniques surprised me, interdiction being perhaps the most frightening and invasive. Interdiction is a military term for intercepting or destroying enemy forces or supplies before they can reach their destination. For the NSA, interdiction is the practice of hijacking a shipment of electronics in the mail so that bugs can be placed before it reaches the intended recipient. Appelbaum shares a story from a contact in the NSA about intercepting a computer and replacing its case with one that had a passive sensor embedded in the injection molding, making it invisible to the eye and undetectable to the computer itself.
Another technique that I hadn’t heard of before is “bridging the air gap.” An air gap is a networking technique in which you keep secure networks like your company intranet completely detached from an unsecured network like the Internet. Generally speaking, if there’s no physical connection, there’s no way data can be transferred (a la Battlestar Galactica); this poses a problem for data miners (and Cylons, presumably). For example, you might be incredibly security-conscious and take every precaution not to leave yourself open to attack. However, your neighbors or friends might not be so careful, and it might be their cell phone the NSA hacks to gain access to your otherwise impenetrable network.
When you combine those techniques and technologies, it paints a frightening picture; but not in the way you might think.
All technologies will inevitably have insecurities and vulnerabilities. (Whether they’re left insecure intentionally or unintentionally is a great topic for debate.) That is the cyclical nature of technology: invent, refine, invent, refine. And it’s obvious that the smartest person in the room is the one you want on your side. However, the NSA is using tax dollars to hoard vulnerabilities and exploits, treating them as a kind of currency that gives them an edge over “the bad guys” and trading them with other intelligence agencies. This makes us all vulnerable, even if not to our government. If our government knows about these flaws, it won’t take long for smart people outside the government to become aware of them. In fact, in his presentation Appelbaum mentions several security experts who demonstrated the possibilities of some of these exploits months before anyone knew they were part of the NSA’s toolbox. These exploits will eventually be discovered by someone, and then we’ll just have to hope that someone has better discretion than our government.
White-hat hacking is hacking for non-malicious reasons: to test the security of a system or to find and fix security lapses. On the other hand, Robert Moore defines a black-hat hacker as one who “violates computer security for little reason beyond maliciousness or for personal gain”. So what does that make our government? NSA apologists often claim this hacking is to protect us, for our own good; but we must take that on faith, with very little in the way of demonstrable evidence to back up the claim. They’re stealing data and hoarding vulnerabilities to make it easier for them to act when a “good cause” comes along. Then you read about how the FBI threatened to make Martin Luther King Jr.’s affairs public if he didn’t commit suicide, or about how the IRS targeted specific groups based on their names or political leanings, and you grow skeptical of the government’s idea of a “good cause”. Gray hat, at best.
Complicating all this is the fact that this is a highly technical conversation happening in a highly political context. Important details are often misunderstood and, worse yet, obscured by talking heads playing reductio ad absurdum or politicians who can barely figure out how to check their voicemail.
A friend told me she didn’t think most people cared about this. I think there’s some truth to that, but I honestly believe that’s because it’s very hard to perceive injustice and form an opinion when you don’t fully understand the underlying technologies. If someone from the government broke into your house without warning and without warrant, you’d be incensed; the injustice would be crystal clear. That’s very much akin to what’s happening, but it doesn’t feel as intrusive because it’s all so opaque to so many people. It’s kind of like asking what rules should govern teleportation use; you might have some general ideas on what should and shouldn’t be allowed, but until you understand what’s possible, it’s hard to know what standards should be enforced. If teleportation were a thing, and government agents were beaming into your house, stealing your property, and beaming back out, you’d want laws passed rather quickly.
I suspect it will take time for people to understand these technologies. People have a hard time forming opinions in the abstract. There will be some big case, some big event that people can latch on to, something that will drive public opinion and will move this from legal and technological theorizing into a specific injustice perpetrated on a specific individual or group of individuals. And I suspect most of these important conversations will come in time as people who truly understand technology shift into positions of power.
Until that day comes, these vulnerabilities and exploits should be disclosed to the people making this hardware and software so they can be fixed (assuming they were unintentional) and we can all be safer for it. We citizens must demand it be so and must stay informed to ensure it is so. After all, even though our government exists to protect and serve its citizens, it’s a government of us, by us, and for us.