Part 2: What does this mean for my security?

In Part 1 we brought you up to speed on what the FBI’s request in the Apple vs. FBI fight is all about. Where we left off is that the iPhone uses very strong encryption, and that the easiest way around it, which is what the FBI wants, is for Apple to create a new update for the iPhone, signed with Apple’s own signing keys, that would allow the decryption key to be recovered.

We’ve heard over and over from various commenters that what the FBI is asking for is bad for security, while the FBI accuses Apple of being concerned solely with its profits and the public of sacrificing their security for unnecessary privacy. In this part we’ll take a deeper look at what the outcome of this fight could mean for the security of a corporation’s network at a technical level, before moving on to Part 3, where we will look at the business and legal implications for company executives.

Rogue Code

We can accept that the FBI and Apple may have different priorities, but what many in the political establishment seem to be pining for is to have their cake and eat it too; they want Apple to help the FBI without weakening the security of their products. There are some technical reasons why this is not necessarily possible. Let’s take a dive into the FBI’s request and see what we find.

A few years ago there was political pressure placed on Apple to make it more difficult to use stolen iPhones. Apple responded, and the strong security that the FBI is now asking Apple to bypass is, in part, the result of that political pressure. A good way to make phones less lucrative to thieves is to make them useless, and therefore worthless, if stolen, so the iPhone has been programmed to lock itself down if someone tries to enter the wrong PIN or passcode too many times. Much to the FBI’s chagrin, an iPhone has no way to know whether the person who “stole” it is some kid who wants to sell it for cash or the FBI; all it knows is that somebody keeps typing in the wrong PIN, so it goes into lockdown.

As I mentioned in the previous post, the only realistic way to get around this, short of calling on the NSA to exploit classified vulnerabilities, is to create a new version of iOS that disables these security controls and install it on the target phone. This update would have to be digitally signed with Apple’s own secret signing key, as that is the only one (that we know of) that an iPhone will recognize. The fundamental problem is that once that piece of code exists and has Apple’s digital signature, it could be installed on any iPhone. This is the reason Tim Cook refers to such code as “cancer”: we don’t want it to spread, but it could.

The FBI’s current request would have Apple develop, sign, and install this update, then unlock the target phone all within Apple’s own labs where the code would ostensibly be safe from release into the wild. Theoretically Apple would then be free to delete the “cancerous” code and we could all forget that this little episode ever happened… until the next time. And that’s the problem.

Although the director of the FBI wrote that this is about one phone and not a precedent, it’s a reasonable assumption that there will be a next time. It could be the next time there is a terrorist attack, or the next time the FBI suspects someone is a terrorist, or it could be the very next day, when requests are submitted for the 175 iPhones that the State of New York currently has sitting in evidence and for the 12 other federal requests to unlock iPhones under the same law at issue here. The result is that this is likely to be a Pandora’s box; once that code exists it’s not going to get deleted, because law enforcement is going to want to use it all the time.

So once this code exists it has to be kept somewhere. Apple may have some tight security but, once again, “there’s no such thing as secure,” and the magic update that unlocks any iPhone will be a valuable prize for hackers and intelligence agencies, “ours” and “theirs” alike. We already know that our intelligence agencies are very interested in these capabilities. People will try very hard to get their hands on it, likely resorting to physically infiltrating Apple itself (if they haven’t already). Both Manning and Snowden were able to leak some of the most secret documents the US government possessed, so I don’t doubt for a second that someone would get their hands on that iPhone unlock code, and we would all potentially be at the mercy of whoever got it.

Today’s fight may be about an iPhone but there are many other devices and programs that implement encryption. Any ruling that affects Apple would obviously be applicable to Android and other smartphones that protect their data in the same way as well as the software inside tablets and laptops that similarly encrypt their hard drives to protect data against physical theft of the device. Beyond these devices there are also many thousands of programs that utilize encryption to protect Internet communications which the FBI has also said it wants access to. Today we may be worried about Apple’s keys getting leaked as a result of the FBI’s request but before long it is likely that we will be worried about Apple, Google, Microsoft, Facebook, and any other company that implements encryption to protect its users losing their keys to similar requests.

This may all seem a bit subjective, and I’m just some security guy writing on the Internet, but I’m not the only one with this concern. Among the people who support Apple’s position in this fight are many one might not expect, including former director of the CIA James Woolsey, former director of the CIA and NSA General Michael Hayden, former CIA agent Rep. Will Hurd of Texas, and former NSA research scientist Dave Aitel. When the people who ran and worked for two major US intelligence agencies, agencies whose responsibilities include hacking into foreign systems and protecting our own systems from the same types of attacks, are opposed to the FBI’s request, it sends a strong message that maybe the FBI is asking for something that will cause more damage than it will prevent.

So far most of the support for the FBI has come from smaller law enforcement agencies and associated organizations, and from various “tough on terrorists” politicians. For what it’s worth, the current heads of the CIA and NSA have remained quiet on this topic, although the FBI’s filings seemed to indicate that the NSA wouldn’t help them.

To be fair, having Apple unlock the phone rather than permanently weaken the phone’s security with a backdoor password or any of the other options I describe in the remainder of this post is probably the least-bad option.

Handing over the keys

There is another, similar option to having Apple develop the unlock code, one that the FBI has used as a veiled threat towards Apple: hand over the source code and the signing keys to the FBI so that they can make the necessary changes to the software and sign it themselves. This is far worse, as it puts copies of the source code and the keys in someone else’s hands, which presents even bigger problems.

Just as there is a strong risk that the “magic unlock code” could get stolen from Apple, there would also be a risk of it getting stolen from the FBI. I hate to have to reference Manning and Snowden again, but there it is: I doubt the FBI has better internal security than the NSA, and it’s not like the FBI hasn’t had spies in its own midst before.

Another problem with this approach is that the FBI doesn’t necessarily know Apple’s source code all that well and it’s not terribly far-fetched that they could make mistakes when trying to modify it. While the accidental leaking of a horribly broken “unlock update” that permanently breaks the phones it is applied to might be amusing, it would also put us in the position of exposing everyone to the increased risk of Apple’s code and keys getting leaked without any results to actually show for it.

Even if the “unlock update” (functional or otherwise) doesn’t get leaked, the loss of Apple’s signing key alone to hostile attackers would be a major issue. It would take more time, skill, and probably a lot of broken iPhones to successfully modify iOS without the source code, but it is entirely possible as long as one has that signing key to trick the phone into accepting whatever software is being sent. Hackers have decades of experience reverse engineering software to bypass security measures, and the only thing preventing them from doing so in this case is Apple’s secret key.

The loss of the source code itself would be a major coup, potentially much worse than the FBI’s unlock update leaking out. With the source code and signing key an attacker could make any changes they wanted to iOS, creating an even more powerful backdoor than what the FBI is currently asking for. The obvious move would be to create a remotely accessible backdoor password and distribute the resulting update to as many phones as possible.

Without the signing key an attacker could still browse the source code to identify overlooked vulnerabilities that could be exploited on even unmodified iPhones. Exploits for any such vulnerabilities would almost certainly end up in malware packages.

The Ever-Elusive Perfect Backdoor

We can daydream about other potential solutions for the FBI as long as we like, with the most obvious one being to program a backdoor passcode into the iPhone that only the FBI knows. For now we will ignore the problem that every other law enforcement agency on earth, some of which we may not be so comfortable with, will want their own backdoor passcode too and focus solely on the FBI.

To be fair, as far as this case is concerned the FBI is not currently asking for a true remotely exploitable permanent backdoor installed in every iPhone that has been sold (at least not yet); according to their own statements, they are only looking to get access to this one phone, one time. Even if we take that at face value (which is very difficult to do in light of other past statements from the FBI), the prospect of a true backdoor, one that always exists and can be exploited remotely and covertly, will almost surely come up again someday, and the NSA is already known to have created some backdoors, so it’s worth considering the technical implications of this possibility.

The problem is that any permanent backdoor code will be implemented in software which will then be installed on millions of phones around the planet. Some of those users will be hackers: some of them hobbyists, others criminals, and some almost certainly other countries’ intelligence agencies. If those hackers have even the slightest inkling that a universal backdoor passcode exists they will start looking for it, because it’s too valuable not to, and there is nothing to stop them from tinkering with their own devices. It will be found and, if found by the wrong people, it will be abused.

The possibility of a hacker finding a backdoor isn’t mere speculation either; this happens fairly regularly, and more often than not the discoverer didn’t even realize a backdoor existed until they found it. Two well-documented cases where backdoors or decryption keys have been found are worth a closer look.

The Juniper firewall example is very relevant to this topic, as it demonstrates exactly how quickly a backdoor can be found in a piece of software once it is known to exist. The presence of a backdoor in Juniper VPN appliances was publicly announced on Dec 17, 2015, along with the release of an emergency patch to remove it. Independent researchers at Fox-IT were able to find the actual password within 6 hours by analyzing the code changed by the patch; it was then found again by Rapid7 and published publicly on Dec 21. Although the NSA is widely suspected to be behind at least one of the Juniper backdoors, and the culprits behind the rest are as yet unknown, we have evidence that even the venerable NSA’s backdoor capabilities are not infallible.
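To make the hunt concrete, here is a minimal Python sketch of one of the first things an analyst does with a firmware image: pull out every run of printable characters and look for anything that resembles a hardcoded credential. The file name and length threshold are illustrative assumptions; real analyses, like the diffing of the Juniper patch, go much deeper, but crude passes like this routinely surface suspicious constants.

```python
import re
import sys

# Extract runs of printable ASCII from a binary blob -- a crude first pass
# when hunting for hardcoded passwords or backdoor strings in firmware.
MIN_LEN = 8  # ignore short runs; interesting strings tend to be longer

def printable_strings(path, min_len=MIN_LEN):
    with open(path, "rb") as f:
        blob = f.read()
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

if __name__ == "__main__":
    # Usage: python strings_sketch.py firmware.bin  (hypothetical file name)
    for s in printable_strings(sys.argv[1]):
        print(s)  # a human or a filter still has to judge what looks suspicious
```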

Those of us who have been in security long enough will remember the Crypto Wars of the 1990’s, including the Clipper Chip. This was a chip designed by the NSA to encrypt voice communication while including a backdoor for law enforcement to decrypt those calls. Despite the NSA classifying the underlying algorithm to prevent the public from reviewing it, the first of a series of vulnerabilities was independently found and published within a year of the chip’s announcement. Plans for the chip were abandoned, but not before many other leading cryptographers, some of whom are vocally opposed to the FBI’s current request of Apple, analyzed the concept of a third-party backdoor and determined that there are significant risks to such a model. A follow-up paper was released last year by many of the same authors pointing out that the risks today are even more serious than when the original paper was published.

Interestingly, the vulnerabilities that were found in the Clipper Chip allowed for the backdoor to be disabled rather than abused by unauthorized parties. Perhaps then this isn’t so much a warning that others will find such a backdoor, but that such a backdoor will be ineffective when it is needed most despite the increased risk the rest of us will all face by having it present to begin with. Still, I’m not sure what gives politicians the idea that Silicon Valley could easily create some sort of perfect back door when the NSA has already tried and failed.

Ghosts from the Past

Another option to circumvent encryption without installing a backdoor is to force the use of weaker encryption. The idea is that the encryption can be made weak enough so that an organization with sufficient resources, like perhaps the US government, would be able to crack the keys in a reasonable amount of time while common criminals who lack these resources would not. Again, this is not what the FBI is currently asking for, but is a possible scenario that has been suggested and could come up again.

The obvious flaw with this approach is that the encryption our devices use is not only protecting us from common criminals but also from sophisticated organized crime operations that may have the resources to crack weakened encryption, and from other nations’ intelligence agencies that definitely have the resources. This option should be seen as a non-starter, even worse than adding a backdoor password to otherwise strong encryption. The use of weakened encryption is more than a suggestion, though; we know this is a bad idea because it has already been done, and we are still suffering the consequences even today.

Back during the Crypto Wars of the 1990’s, during the early days of the Internet as we now know it, encryption technology was considered a munition under US law, no different than tanks and jet fighters, and required the same sort of government permission to be exported. Companies that produced web browsers and other software that utilized encryption had to create 2 different versions of their programs, one for use inside the US that was free to use strong encryption and another “export” version that used weaker encryption.

The flaw in this approach is that, at its core, encryption is just math, and the underlying mathematical concepts are already widely known around the world. Saying that someone is not allowed to export strong encryption is the equivalent of saying that someone is not allowed to send a very large number out of the country. During the first round of the Crypto Wars (many are saying that the Apple vs. FBI fight is a new round in the same war) some cryptogeeks took this to its logical conclusion by creating t-shirts and getting tattoos with the RSA encryption algorithm (one of the algorithms that was “too strong” for export) to point out the absurdity of the situation. That may have been all in good fun, but the author of the original PGP encryption software spent 3 years under federal investigation simply because someone outside the country downloaded his software, which was also too strong for export.

Any call to weaken encryption overlooks the obvious problem that bad guys can do math too. Even if we weaken the iPhone’s security it wouldn’t take long for someone else in another country to write a piece of strong encryption software that could be used by those who intend to do us harm instead. In fact many of these alternative encryption products already exist. We would only be hurting our own security by utilizing weaker encryption.

And hurt our own security we have. Although the export controls were lifted long ago and we are free to send strong encryption software to whomever we choose, many encryption packages that we still use today contain backwards-compatibility functions that allow them to communicate with systems that still use the weaker “export strength” encryption. Every once in a while a vulnerability in these weaker encryption functions is found that puts our modern systems at risk. A major vulnerability of this kind was found just this month, putting 11 million websites at risk of having their communications decrypted by an attacker.

Even if we could make “perfect” weak encryption, whatever that is, it would rapidly become useless as computers, and thus cracking speeds, got faster. Anyone who had a need for encryption would have to constantly update their systems to use ever stronger, but not too strong, encryption in order to stay one step ahead of the bad guys while still being weak enough for their own law enforcement to crack.
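To put rough numbers on it, here is a back-of-the-envelope sketch of a worst-case brute-force search. The key sizes compare the old 40-bit “export grade” limit with a modern 128-bit key, and the guess rates are illustrative assumptions rather than benchmarks.

```python
# Back-of-the-envelope estimate of a worst-case exhaustive key search.
# The guess rates below are assumptions for illustration, not benchmarks.

def seconds_to_crack(key_bits, guesses_per_second):
    """Time to try every key in a key space at a given guess rate."""
    return 2 ** key_bits / guesses_per_second

YEAR = 60 * 60 * 24 * 365

for rate in (1e9, 1e12):  # assumed: one fast machine vs. a large cluster
    print(f"40-bit  key @ {rate:.0e} guesses/s: {seconds_to_crack(40, rate):,.0f} seconds")
    print(f"128-bit key @ {rate:.0e} guesses/s: {seconds_to_crack(128, rate) / YEAR:.1e} years")
```

A key that takes minutes to crack on one machine today takes seconds on tomorrow’s hardware, while a full-strength key stays out of reach of both; the only way to keep a deliberately weak key “just weak enough” is to keep moving the target.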

We would be asking overworked IT admins to apply updates that would potentially break compatibility with their customers who aren’t updating as often and to deal with the inevitable complaints. It has been 20 years since the end of the first round of the Crypto Wars and we still haven’t been able to get rid of all the weak encryption from that era, we would have no hope of keeping on top of it on a regular basis.

A Matter of Trust

So what will these scenarios mean for the average IT administrator? Let’s imagine that the properly signed “magic unlock update” gets stolen or, even worse, Apple’s signing keys get stolen. How would it affect our own networks and security?

We often don’t think about it, but the security of the Internet is built on a system of trust. It’s not just Apple that uses cryptographic signing keys to authenticate its software updates; the same thing happens every time our operating system and many other software packages auto-update. A trust check also happens automatically every time we connect to a website, VPN, or email server that protects our communications with encryption. When this works correctly it is completely unnoticeable except for the little padlock in the corner of our browser, and it’s easy to forget that it’s even happening, if you knew it was happening at all.

In the case of software updates the process is fairly simple: a pair of cryptographic keys, really just large numbers, is created through a mathematical process. One key is kept secret and used to digitally sign software updates (the private key) while the other key is programmed into the software that is released to the public (the public key). When a new software update is released, the public key can be used to mathematically verify that the update was digitally signed with the private key and hasn’t been altered since, which proves that it is legitimate and hasn’t been tampered with. Even if someone were to extract the public key from the software it can’t be used to forge a signature; only the private key can do that.
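For a concrete feel for how little machinery this involves, here is a minimal sketch using Python’s cryptography library and an Ed25519 key pair. Apple’s actual signing scheme and formats are different, but the shape, sign with a secret key and verify with a public key baked into the device, is the same.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Vendor side: the key pair is generated once. The private key stays locked
# away; the public key ships inside the software installed on every device.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

update = b"...bytes of a new software update..."
signature = private_key.sign(update)  # done by the vendor at release time

# Device side: the baked-in public key checks the downloaded update.
try:
    public_key.verify(signature, update)
    print("Signature valid -- install the update")
except InvalidSignature:
    print("Signature invalid -- reject the update")

# Changing even one byte of the update invalidates the signature.
try:
    public_key.verify(signature, update + b"!")
except InvalidSignature:
    print("Tampered update rejected")
```

The whole scheme rests on the private key staying private; anyone who obtains it can produce signatures that devices will accept as genuine.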

Dealing with SSL/TLS is a bit more complicated, simply because it would be a daunting task to try to generate a public/private key pair for every single web site, mail server, VPN, and whatever else uses encryption and then program every single one of those public keys into every web browser. Keeping this list updated would be impossible.

Instead, SSL/TLS (as used on the public Internet) utilizes what are called Certificate Authorities. These are independent companies that have each generated one or more public/private key pairs, just like the ones described above. The Certificate Authorities distribute the public keys to the developers of web browsers and other software that needs them while they keep the private keys secret. Whenever someone wants to set up a new service that uses SSL/TLS encryption they will generate their own public/private key pair. The private key is kept on their server while the public key is put into a specially formatted Certificate File along with some other information about the company, its domain name, expiration dates, etc. Before the service can begin operation the Certificate File will be sent to a Certificate Authority who will verify the identity of the person behind the request and, once satisfied that no impersonation is going on, will digitally sign the Certificate File with their own private key. The service is now ready for business.
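As a minimal sketch of that issuance flow, again using Python’s cryptography library: the service operator generates a key pair and a signing request, and the Certificate Authority, having (in real life) verified the requester’s identity, issues a certificate signed with its own private key. The names and one-year validity period are illustrative, and real certificates carry extensions this sketch omits.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Service operator: generate a key pair and a certificate signing request.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .sign(server_key, hashes.SHA256())
)

# Certificate Authority: after verifying the requester's identity, sign a
# certificate binding that identity to the requester's public key.
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example CA")])
certificate = (
    x509.CertificateBuilder()
    .subject_name(csr.subject)
    .issuer_name(ca_name)
    .public_key(csr.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())  # the CA's private key is what makes this trustworthy
)
```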

When your web browser connects to a web site protected by SSL/TLS encryption it will first request the Certificate File and verify that it is signed by one of the Certificate Authorities whose public keys have been pre-programmed into the browser. If everything checks out then it can establish an encrypted connection to the web server using the public key contained in the Certificate File. Basically the browser trusts that the server is legitimate because it trusts that the Certificate Authorities verified that web site’s operator before digitally signing the web site’s Certificate File and the encryption keys contained within it. The same process takes place for SSL VPNs, mail servers, and any other client/server service that uses this very common type of encryption.
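And here is the client’s side of the bargain, sketched the same way: before trusting the public key inside a certificate, confirm that the certificate really carries a trusted CA’s signature. The file names are hypothetical, the example assumes an RSA-signed certificate, and real browsers additionally check expiration dates, hostnames, and revocation status.

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

# Hypothetical files: the CA certificate a browser ships with, and the
# certificate presented by the server during the TLS handshake.
with open("ca.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("server.pem", "rb") as f:
    server_cert = x509.load_pem_x509_certificate(f.read())

try:
    # Check the CA's signature over the server certificate's contents.
    ca_cert.public_key().verify(
        server_cert.signature,
        server_cert.tbs_certificate_bytes,
        padding.PKCS1v15(),  # assumes an RSA signature
        server_cert.signature_hash_algorithm,
    )
    print("Certificate was signed by this CA -- proceed with the handshake")
except InvalidSignature:
    print("Certificate was NOT signed by this CA -- do not trust it")
```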

This entire system completely breaks if we can no longer trust those digital signatures. If Apple, Microsoft, Google, or another major software vendor were to lose control of their secret signing key anyone would be able to forge signatures on malicious software updates. An attacker’s only problem would be how to distribute them to their targets as the auto-update mechanisms would still, hopefully, be protected from tampering by an SSL/TLS connection that uses a different set of keys from the software itself.

Things get much worse if a Certificate Authority were to lose control of their signing keys. Anyone would be able to use the stolen keys to forge signatures on their own Certificate Files, effectively giving them the ability to impersonate any web site, mail server, VPN, or other SSL/TLS protected service on the Internet. Particularly worrisome is that the App Stores and auto-update mechanisms of these major software vendors could be impersonated as well.

This is the doomsday scenario: both code signing keys and certificate signing keys leaked to unknown attackers. IT admins would no longer be able to trust their own auto-update systems and would have to be on constant lookout for rogue patches that could install backdoors in their systems. Patching is hard enough to keep up with already, and missing security patches are a major vulnerability that has led to many of the Internet security problems we face today. Forcing IT administrators to be on the lookout for rogue updates that are effectively indistinguishable from the real thing will only make matters worse. We would be stuck in a situation where not patching leaves us exposed to hackers, but we have no way of knowing if the patch itself is hostile.

Beyond that, we wouldn’t be able to trust any web server, mail server, or VPN that we connected to. It could be legitimate, or it could be the hacker sitting next to us at Starbucks tampering with the connection to grab our online banking password, or it could be a foreign intelligence agency halfway around the world trying to get access to our corporate network. The Internet’s ability to do anything that requires security assurances would effectively be broken. This isn’t all just speculation, though; these scenarios have already happened a few times, albeit on a relatively small scale.

Doomsday Lite

In 2011 DigiNotar, a Certificate Authority, was hacked. The hacker was able to forge signatures on over 500 fraudulent certificates covering all of Google’s domains, Yahoo!, Mozilla (the developers of Firefox), WordPress, and the Tor Project (an online anonymity system). From what we know of the breach, the attacker is believed to be Iranian and possibly associated with Iranian intelligence services. The certificates were used to spy on the Gmail accounts of approximately 300,000 Iranian Internet users. In the aftermath, DigiNotar’s keys were removed from web browsers, preventing the browsers from verifying valid and fraudulent certificates alike, effectively putting DigiNotar out of business and forcing their customers to quickly find a new Certificate Authority. DigiNotar went bankrupt as a result. The rest of the Internet is lucky that the attacker focused on Iranian Internet users; with forged Google certificates the impact could have been much wider.

While the DigiNotar hack was an attack on the SSL/TLS infrastructure, we have also seen attackers use signing keys to create malicious software updates, which is the exact scenario we’re worried about with Apple. In September 2012 a hacker got into one of the servers that builds software for Adobe and was able to use it to sign some malicious code of his own, which was then used to hack other targets. The Flame malware package also used forged Microsoft certificates to impersonate Windows Update and infect computers across a network via the auto-update mechanism. Fortunately, Flame too only targeted computers in the Middle East.

Stolen keys can be revoked and replaced, which is exactly what Adobe and Microsoft did: there is a blacklist of keys that browsers and other software are programmed to check before validating a key. But this only works if we know which keys have been compromised so we can add them to the list. In Adobe’s case the key was misused a few times but never actually stolen, so the damage was limited. Microsoft was also able to revoke the keys used by the Flame malware in an orderly fashion. That wasn’t the case with DigiNotar, so all of their keys ended up getting removed from browsers, and the result was incredibly disruptive to DigiNotar, their clients, and their clients’ customers.
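As a rough illustration of that mechanism, here is a minimal sketch of a fingerprint blacklist check in Python. Real clients rely on CRLs, OCSP, and vendor-pushed blocklists rather than a hardcoded set, and, as noted above, the whole approach only helps once a compromise is actually known.

```python
import hashlib

# Hypothetical blocklist of SHA-256 fingerprints of known-compromised
# certificates or signing keys (placeholder value for illustration).
REVOKED_FINGERPRINTS = {
    "0" * 64,
}

def is_revoked(cert_der: bytes) -> bool:
    """Return True if this certificate's fingerprint is on the blocklist."""
    fingerprint = hashlib.sha256(cert_der).hexdigest()
    return fingerprint in REVOKED_FINGERPRINTS

# The fundamental limitation: a stolen key that nobody knows is stolen
# will never appear in REVOKED_FINGERPRINTS, so the check passes.
```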

None of these keys were known to be compromised until attacks occurred in the wild, however. A smart attacker would be stealthy with his stolen keys, using them judiciously only to get access to high-value targets and deleting the evidence afterwards to maximize the amount of time before anyone realized that the keys had been compromised. If a national intelligence agency was able to steal the signing key from Apple or the FBI, this is the type of attack we would have to be worried about. We wouldn’t know the trust system had been violated until it was too late.

So far we’ve been lucky that the attackers who have stolen code and certificate signing keys haven’t had them for long and/or were focused on specific targets rather than the public at large. The importance of protecting cryptographic keys, and the entire system of trust built around them, to the continued usefulness of the Internet for anything of economic value cannot be overstated. Apple losing control of its signing keys, or being forced to use them to sign what amounts to malicious code, is a dangerous step towards this future, one that will further tax the resources of already overworked network administrators and security personnel.

Concluded in Part 3