Tuesday, July 22, 2008

The Cold Boot Attack

In February, a team led by researchers at Princeton announced how to recover laptop encryption keys directly from the DRAM (dynamic RAM) of the device. This attack, dubbed "Cold Boot", set off a flurry of posts and articles expressing dismay, seeking clarification and listing defences. The contents of DRAM take several minutes to be cleared after power is shut off, and an attacker with physical access to the machine has an opportunity to recover the contents of memory over those few minutes. Contrary to popular belief, dynamic RAM is only somewhat dynamic.

Reproduced below are a series of images from the Princeton team depicting the DRAM decay of a Mona Lisa image over a period of 5 minutes. The striped image on the far right shows the ground states for the DRAM, which are the 0 or 1 states that the memory elements will eventually return to after power is removed. By the time DRAM reaches the striped ground state all information on encryption keys (and any other parameters) in memory is lost.


A common reaction from experienced security professionals was to state that Cold Boot attacks are well-known to exist in principle, and that the Princeton work has filled in the practical details. “[The cold boot attack has] been in the toolbox of forensics examiners for some time,” said Murugiah Souppaya, a researcher at the National Institute of Standards and Technology. The attack is more of a wake-up call than a bolt of lightning.

Hard-disk encryption vendors were also quick to point out that Cold Boot attacks exploit hardware vulnerabilities and not encryption weaknesses. The true risk comes from software that relies on power loss to purge keys from memory rather than explicitly clearing memory itself. The lesson for users is to ensure that power is shut off to DRAM when they are not using their laptops (either by shutting down the laptop or by putting it into hibernate mode).

When the key bits are extracted from memory, some will already have false values due to ground state decay. Pictured left is a decayed Mona Lisa image which still contains sufficient information to recognize the original. The Princeton researchers have devised algorithms for detecting and recovering key bits corrupted by ground state decay, which, once the fanfare of the attack is stripped away, is the most significant contribution of the work. How do we recover the true key from a corrupted copy?

Coding theory is the study of exactly this problem, in the context of sending and receiving messages where message bits can be corrupted (flipped) during transmission. Additional information, known as redundancy, must be added to each message so that errors can be detected and corrected. Without some redundancy there is little hope of getting a handle on what errors have occurred.

Luckily the keys do in fact come with their own redundancy in DRAM. A user-supplied key to a block cipher is converted into an internal format that matches the operation of the cipher. For example, in the case of DES the user-supplied 56-bit key is converted into a 768-bit extended key, corresponding to the 48-bit sub-keys used in each of the 16 rounds of DES. The sub-keys themselves consist of bits that are just copies of bits from the user-supplied key. Each bit of the user-supplied key occurs at least 14 times in the extended key.

For efficiency reasons, the extended key is stored in DRAM rather than the user-supplied key. So for DES, and its triple-DES variants, each key bit will be stored at least 14 times in DRAM. Thus there will be (at least) 14 copies of each key bit to work with, which we can think of as 13 bits of redundancy for each key bit. PGP's CTO, Jon Callas, likened this key decoding process to completing the remaining squares of a Sudoku game. While the analogy is not perfect, it is suggestive of the type of work that must be performed to decode partially decayed keys from memory.
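As an illustrative sketch, not the Princeton team's actual algorithm (which also exploits the asymmetry of decay and the structure of the full key schedule), a simple majority vote over the redundant copies of each key bit already recovers most of a partially decayed key. The 20% decay rate below is an arbitrary assumption for the demonstration:

```python
import random

def recover_bit(copies):
    """Majority vote over redundant copies of a single key bit."""
    return 1 if sum(copies) > len(copies) // 2 else 0

# Simulate 14 copies of one key bit, each decaying toward ground
# state 0 with probability 0.2 (rate chosen purely for illustration).
random.seed(1)
true_bit = 1
copies = [true_bit if random.random() > 0.2 else 0 for _ in range(14)]

recovered = recover_bit(copies)
assert recovered == true_bit
```

With a decay rate of 20%, the vote fails only when 7 or more of the 14 copies flip, which is already very unlikely; the real decoding algorithms do considerably better by knowing which direction the bits decay.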

For me, the Cold Boot incident is reminiscent of Richard Feynman's involvement in the 1986 Challenger disaster inquiry. There had been long debate over whether the O-rings would lose their resilience under cold conditions, and Feynman famously settled the question by placing a clamped O-ring into a glass of ice water and showing that it did not spring back to shape when released. At the time Freeman Dyson noted that this was an example of Nature providing a simple answer when asked a simple question.

The Princeton team asked Nature the simple question of whether DRAM is cleared on power loss, and the simple answer is no.

You can find the research used to produce this post as a FreeMind mindmap rendered into Flash here.

Sunday, July 20, 2008

Are AES 256-bit keys too large?

This may seem an odd question given that, since the mid-70s, discussions about cryptographic keys have mainly been concerned with their potential shortness. While the risks of using short keys are apparent, there are risks lurking at the other end of the spectrum as well. We can have too much of a good thing, and also think we have a good thing when in fact we don't.

The adoption of the AES cipher has progressed quite rapidly, greatly assisted by being mandated as a US government standard. As a technology AES has nonchalantly skipped along the Gartner hype cycle and nestled into the Plateau of Productivity. AES supports key lengths of 128, 192 and 256 bits, and many commercial offerings, to encrypt laptops or USB sticks for example, supply AES at the maximum 256-bit key length. Will attackers be able to bypass the high level of security implied by AES-256? In at least two significant cases, the answer is likely to be yes. But we require some background first.

The economics of AES keys

Modern block ciphers encrypt by mixing data over a series of rounds to produce a complex nonlinear relationship between the plaintext (message), ciphertext and key. For AES-128, AES-192 and AES-256 the number of rounds is 10, 12 and 14 respectively. So AES-256 requires a further 4 rounds over AES-128 to mix the additional key bits, and is consequently only 40% slower. Thus a 128-bit key can be doubled (squaring the effort of an exhaustive attack) for a relatively modest cost in performance. Therefore, unless performance is critical, maximal length 256-bit keys are a natural choice for AES deployments.

So given that AES-256 is likely to be deployed widely, what type of security can we expect? Well, when a secret key cipher has a key length of n bits, this is interpreted to mean that, in the absence of any weaknesses, the security of the cipher is proportional to the cost of a brute force attack on a key space of size 2^n. That is, a needle-in-a-haystack search problem where there are 2^n pieces of hay. On average the correct key will be recovered after 2^(n-1) guesses, or after searching half the key space. So AES-256 should be secure against an attacker who has materially less than 2^(255) ≈ 10^(76) resources at their disposal. Unless you have a need to approximate the number of hydrogen atoms in the observable universe, you will never encounter a number as large as 10^(76). So there is little hope of attacking AES-256 by direct brute force, but is this the only way to get at the data protected by such large keys?
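A little exact integer arithmetic makes the scale of the search concrete. The machine counts and guessing rates below are arbitrary figures chosen only to convey scale:

```python
# How big is half the AES-256 key space? Exact integer arithmetic.
half_keyspace = 2**255

# 2^255 lies between 10^76 and 10^77 (roughly 5.8 * 10^76).
assert 10**76 < half_keyspace < 10**77

# Suppose a trillion machines each test a trillion keys per second
# for a trillion years (all figures arbitrary, for scale only).
keys_tested = 10**12 * 10**12 * 10**12 * 365 * 24 * 3600

# Even then, only a vanishing fraction of the key space is covered.
fraction_searched = keys_tested / half_keyspace
assert fraction_searched < 1e-30
```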

AES and Passwords

It is common for laptop and USB encryption solutions to use one of the following key management systems:

  • Derive the device-encryption key directly from a user-supplied password, using a method such as the Password-Based Key Derivation Function.
  • Randomly generate the device-encryption key, and then encrypt this key under a key derived from a user-supplied password.

The latter approach is preferable since it actually protects data with a full length key, and further does not require data re-encryption when the user changes their password. It is possible to create more intricate chains of key encrypting keys, similar to nested Russian dolls, but ultimately the security of the device-encryption key reduces to the strength of user passwords.
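A minimal sketch of the second scheme using only Python's standard library. The password and iteration count are illustrative assumptions, and the XOR wrap is a stand-in for the authenticated cipher or AES key-wrap construction a real product would use:

```python
import hashlib
import secrets

def derive_kek(password, salt, iterations=100_000):
    """Derive a 256-bit key-encrypting key from a password via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

# A random, full-strength 256-bit device-encryption key (DEK)...
salt = secrets.token_bytes(16)
device_key = secrets.token_bytes(32)
# ...wrapped under a key derived from the user's password.
kek = derive_kek(b"user passphrase here", salt)

# Illustration only: a real product would wrap the DEK with an
# authenticated cipher or the AES key-wrap construction, not bare XOR.
wrapped = bytes(d ^ k for d, k in zip(device_key, kek))

# Unwrapping with the same password recovers the DEK; a password change
# only re-wraps these 32 bytes, leaving the bulk data untouched.
assert bytes(w ^ k for w, k in zip(wrapped, kek)) == device_key
```

The design point is visible in the last comment: the bulk data is always protected by the random 256-bit key, and the password only ever guards those 32 wrapped bytes.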

How large a password space do we need to provide the same security as a 256-bit key space? Assuming that the password consists of characters drawn from the set of 94 printable ASCII keyboard characters, the password would have to be 40 characters in length. That is, a user would have to select a random 40 character password to achieve the same security against a brute force guessing attack on an AES-256 key. Such passwords would probably take about 20 seconds to be entered by a typical person.
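The arithmetic behind the 40-character figure is straightforward: each character drawn uniformly from 94 possibilities contributes log2(94), or about 6.55, bits of entropy:

```python
import math

CHARSET = 94                        # printable ASCII characters
bits_per_char = math.log2(CHARSET)  # ~6.55 bits of entropy per character

# Smallest length such that 94^length >= 2^256.
length = math.ceil(256 / bits_per_char)
assert length == 40

# Sanity check with exact integer arithmetic: 39 characters fall short.
assert CHARSET**40 >= 2**256 > CHARSET**39
```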

In practice we can expect user passwords to be less than 15 characters in length, and certainly not chosen uniformly. So we have an example of strong cryptography being bypassed via far weaker passwords. Some laptop encryption vendors now offer two-factor authentication to protect their keys, but such solutions are not widely deployed as yet.

AES in Security Protocols

Another place where we need to re-evaluate the security of AES-256 is in protocols like S/MIME, SSL and TLS, where data is protected by a bulk encryption method but the key of the bulk encryption method is protected by a public key. For example, TLS now supports AES-256 as a ciphersuite, in particular in combination with RSA for key establishment. Since the AES-256 key is derived deterministically from the keying material protected by RSA encryption, the strength of the RSA key should be proportional to the strength of the AES-256 key. But how large does an RSA key need to be to provide similar security to an AES-256 key?

In 2001 this problem was considered by the well-known and respected researcher Arjen Lenstra, and his results indicate that the RSA key length required to match AES-256 security would be more than 13,000 bits! Since RSA is commonly deployed in the range of 1024 to 4096 bits, we have a huge discrepancy as compared to AES-256 security. The size of RSA keys can in theory be increased indefinitely, but computational efficiency becomes the limiting factor. For RSA and discrete logarithm systems that use n-bit keys, the computational cost of using these systems (particularly for decryption) is proportional to n^3 operations. So if the key size is doubled from n bits to 2n bits, then performance decreases by a factor of approximately 8. Increasing a 1024-bit key 13-fold would reduce performance by a factor of over 2000.
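The n^3 scaling can be checked with a two-line helper. This is a deliberately simplified model that ignores constant factors and algorithmic refinements, but it captures why very large RSA keys are impractical:

```python
def relative_cost(scale):
    """Relative cost of RSA private-key operations when the modulus
    length is multiplied by `scale`, under the simple n^3 cost model."""
    return scale ** 3

assert relative_cost(2) == 8       # doubling the key size: ~8x slower
assert relative_cost(13) == 2197   # 1024 bits -> ~13,000 bits: >2000x slower
```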

For protocols such as TLS and S/MIME, the security of AES-256 will be bypassed by attacking the public key system protecting the AES keys. In this case, without consideration for other contextual security parameters, AES-256 provides a false sense of security.

Length is not always Strength

AES 256-bit keys will certainly thwart direct brute force attacks far beyond our lifetimes. Adi Shamir, one of the co-inventors of RSA, has stated that "cryptography is typically bypassed, not penetrated". In this post we have considered two cases where this can happen: long keys being protected by weaker passwords, and long keys being protected by much shorter (weaker) keys. The moral seems clear: don't use very long keys in a given environment unless the security of the other parameters involved is similar, since the weakest link will be the target. Unfortunately you just won't find any other practical security parameter up to par with an AES-256 key.

Sunday, July 13, 2008

A Blackish Swan for Debian Crypto

In May Luciano Bello discovered a flaw in the Debian version of Linux which, as described in CVE-2008-0166, yielded "a random number generator that generates predictable numbers, which makes it easier for remote attackers to conduct brute force guessing attacks against cryptographic keys". While this sentence is unlikely to win any awards for clarity, the gravity of the situation should nonetheless be apparent.

Since September 2006 the Debian Etch distribution has been producing weak (public) keys for well-known protocols such as SSL and SSH, as well as certificates for personal encryption and signing. A few days after the announcement SANS raised its INFOCon level to yellow, indicating an increased threat against the general internet infrastructure (the INFOCon level has since returned to green). While the offending Debian code is easily patched (only two lines need to be uncommented) it will take much longer to assess the impact of over 18 months of weak keys being handed out by Debian.

The Best Laid Plans

The root cause of the incident centres on memory allocation. Programs frequently request additional memory while they are running. The operating system (OS) satisfies such a request by finding an available block of memory, marking the block as allocated, and returning the starting address of the block to the requesting program for access.

The memory allocated to a program will not be empty - the block will always contain some pattern of zeros and ones, typically whatever was left behind by the last program that used those locations.

When a program is allocated memory it is good programming style for the programmer to assign known values to the locations so that subsequent computations involving these locations begin from a known state. For example, if one of the allocated locations was to be used for a variable hits to count the number of visitors to your web site, then hits should be initialised to zero. If the last value assigned to the memory location of hits was 192 or -517, then simply incrementing hits for each new web site visitor will not achieve the desired result.

Code scanning tools are able to detect instances of memory that are allocated to a program but are not initialised before being used. The scanning tools will return warnings and advise that initial values be assigned. In general this is good advice unless the programmer had an unconventional reason for using uninitialised memory.

But the programmers of the OpenSSL PRNG apparently had such a reason. They actually wanted to have memory allocated for the seed to the PRNG and to use the previous values contained in those locations as part of the seed. Later, a Debian programmer, seeing that code scanning tools were complaining about uninitialised memory in the OpenSSL PRNG, took the well-intentioned step of commenting out the offending code, removing it from the resulting executable program.

The offending code consisted of changes to just two lines.

Commenting out these two lines (unfortunately) did not stop the PRNG from working, but it had the effect of reducing the seed to just the process identifier (PID) of the program. The PID is a value used by the OS to distinguish one process from another for administration purposes. It is typically a 15-bit value, able to represent numbers in the range 0 to 32,767, or fewer than 33,000 distinct values in total. Cryptographic keys generated from such a seeded PRNG are selected from only a small number of possible values and can therefore be easily broken. Instead of being based on hundreds or thousands of bits of entropy, keys were based on at most 15 bits of entropy.
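A toy demonstration of why 15 bits of seed entropy is fatal. Python's random.Random stands in for the broken OpenSSL PRNG (the real generator and its key derivation are more involved), and the PID value is an arbitrary illustrative choice:

```python
import random

MAX_PID = 2**15  # PIDs fit in 15 bits: at most 32,768 distinct seeds

def generate_key(pid):
    """Stand-in for the flawed PRNG: the only entropy in the key is the PID."""
    rng = random.Random(pid)
    return bytes(rng.randrange(256) for _ in range(16))

def break_key(target):
    """Exhaustively try every possible seed -- feasible because there
    are fewer than 33,000 of them."""
    for pid in range(MAX_PID):
        if generate_key(pid) == target:
            return pid
    return None

victim_key = generate_key(31337)  # PID chosen arbitrarily for the demo
assert break_key(victim_key) == 31337
```

The whole "key space" is searched in a fraction of a second; compare that to the 2^255 guesses needed against a properly seeded 256-bit key.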

So in summary: the seed generation process for the OpenSSL PRNG was designed to rely on the previous values of dynamically allocated memory. This unusual programming practice was not well-communicated, and a Debian programmer subsequently removed the two lines of code that seeded the PRNG through memory allocation. The effect was a PRNG able to produce only about 33,000 different outputs, which leads to a small, predictable set of cryptographic keys.

The impact of changing two lines of code


The impact of Debian creating weak keys spans several dimensions: time scales, other code distributions, security protocols, security credentials and, finally, data. With respect to time, the PRNG programming flaw has been present in Debian since September 2006 (the Etch release) and was not publicly detected until May this year, meaning that weak keys have been generated on the Debian distribution for over 18 months. Further, other Linux distributions derived from Debian, including versions of Knoppix and Ubuntu, have inherited the flaw over this period.

Security protocols whose underlying cryptography was weakened include VPN, SSH, SSL and its derivatives such as FTPS, as well as all X.509 key material (general public/private key pairs). The full list of impacted security protocols and applications is quite extensive. Further, the weak SSL certificates of internet-facing servers, bound to a domain name such as www.mycompany.com, will need to be revoked and reissued by the company administering that domain. An article in the Register, called Debian's Epic SSL Blunder, states that the number of SSL certificates that may need replacing could be in the hundreds of thousands or even millions. So while the OpenSSL PRNG code can be easily patched, identifying and replacing all the weak keys generated by the flawed code is a big operational headache. It may be months or years before all the weak keys and their corresponding certificates are tracked down and upgraded.

Another serious threat is the potential exposure of data, including security credentials such as passwords, which were transported over public networks under the false assumption that they were protected by strong cryptography. It has already been suggested in a Heise Security article that secret service organisations have been exploiting the weaknesses in communication.

A Blackish Swan?

According to Wikipedia, a black swan "is a large-impact, hard-to-predict, and rare event beyond the realm of normal expectations". Are we dealing with a Debian Black Swan? It is too early to judge this incident as an outright Black Swan, or the catalyst for an imminent Black Swan, since the consequences have not fully played out. Certainly the incident has highlighted potential weaknesses in the open source programming model, if not in general, then at least in the interaction between open source developers and vendor developers. Ben Laurie of OpenSSL originally blasted Debian for changing code that they "did not understand", but in a subsequent post his tone was more conciliatory as evidence emerged that the change had been discussed on OpenSSL mailing lists. His final remark was "I welcome any suggestions to improve this situation".

Flaws related to encryption always make good copy, and on occasion, strike at the heart of our fundamental beliefs in security. When encryption falters the whole edifice of security seems shaken.

You can find the research used to produce this post as a FreeMind mindmap rendered into Flash here.