Wednesday, April 29, 2009

“One Way Hash” Arguments

Julian Sanchez has made an excellent post on climate change argument fallacies in which he coins the term “one way hash” arguments. The context of the post is a discussion of the difficulty of refuting false arguments concerning the state of climate change (the politically correct term for global warming), and in particular, the difficulty lay people have in evaluating the arguments of specialists.

Sometimes the arguments are such that the specialists can develop and summarize them to the point that an intelligent layman can evaluate them. But often—and I feel pretty sure here—that’s just not the case. Give me a topic I know fairly intimately, and I can often make a convincing case … I need only worry about what sounds plausible. If my opponent is trying to explain what’s true, he may be constrained to introduce concepts that take a while to explain and are hard to follow, trying the patience (and perhaps wounding the ego) of the audience.

from which he concludes that

… there’s a certain class of rhetoric I’m going to call the “one way hash” argument. Most modern cryptographic systems in wide use are based on a certain mathematical asymmetry: You can multiply a couple of large prime numbers much (much, much, much, much) more quickly than you can factor the product back into primes. A one-way hash is a kind of “fingerprint” for messages based on the same mathematical idea: It’s really easy to run the algorithm in one direction, but much harder and more time consuming to undo. Certain bad arguments work the same way—skim online debates between biologists and earnest ID aficionados armed with talking points if you want a few examples: The talking point on one side is just complex enough that it’s both intelligible—even somewhat intuitive—to the layman and sounds as though it might qualify as some kind of insight. … The rebuttal, by contrast, may require explaining a whole series of preliminary concepts before it’s really possible to explain why the talking point is wrong. So the setup is “snappy, intuitively appealing argument without obvious problems” vs. “rebuttal I probably don’t have time to read, let alone analyze closely.”

I found a link to Sanchez’s analysis in a post by Eric Rescorla, who was correcting Sanchez, somewhat pedantically, on the point that hash functions aren’t based on factoring. Sanchez defended himself by stating that he was constructing a metaphor, not giving a cryptography lesson.
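To see the asymmetry in action, here is a minimal Python sketch of the multiply-versus-factor gap. The primes are toy choices of my own, vastly smaller than anything used in real cryptography, and the point is only the timing contrast:

    import time

    p, q = 1_000_003, 1_000_033   # two primes (tiny by cryptographic standards)

    t0 = time.perf_counter()
    n = p * q                     # the easy direction: one multiplication
    t1 = time.perf_counter()

    f = 3                         # the hard direction: recover a factor of n
    while n % f:                  # by trial division over odd candidates
        f += 2
    t2 = time.perf_counter()

    print(f"multiply: {t1 - t0:.2e}s   factor: {t2 - t1:.2e}s   smallest factor: {f}")

The multiplication is a single machine operation while the factoring loop grinds through hundreds of thousands of trial divisions, and the gap grows astronomically with the size of the primes - which is all the metaphor needs.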

In any case, crypto nitpicks aside, I like the metaphor, as did others in the comments to Sanchez’s post. I think “one way hash” argument might be a bit wordy, and I would prefer simply a “hashed” argument (though that could suggest a botched argument), or even a “product” argument (referring to factoring).

When Sanchez says above that arguments can often reduce to “snappy, intuitively appealing argument without obvious problems” vs. “rebuttal I probably don’t have time to read, let alone analyze closely”, I am reminded of some recent remarks from Marcus Ranum in The Anatomy of Security Disasters, where he observes that

Since we’re not working in a field where the probabilities are simple, like they are on a roulette wheel, we’ve had to resort to making guesses, and trying to answer unanswerable questions … I don’t know a single senior security practitioner who has not, at some point or other, had to defend an estimated likelihood of a bad thing happening against an estimated business benefit.

Often business has the “snappy, intuitively appealing arguments without obvious problems” - plus Excel - while if the security practitioner objects, the “rebuttal may require explaining a whole series of preliminary concepts before it’s really possible to explain why the talking point (i.e. the business case) is wrong”. Snappy and plausible usually wins out over lengthy, detailed and correct. There is an asymmetry at work here - a “one way hash” argument - and security people have ended up with the hard inversion problem.


ENISA and Security Awareness

In June I will be speaking at an ENISA conference in London on security awareness. The conference theme is the “growing requirement for information security awareness across public and private organisations”. ENISA is quite active in the space of security awareness, and you can see their portfolio of work here. Better security awareness might have prevented the loss of an unencrypted USB stick by an MI6 agent, which, as reported recently, led to a £100 million anti-narcotics operation being abandoned due to compromised data.

One interesting awareness report from ENISA is a survey on current awareness practices and success criteria. The report is a short read at 24 pages, given the generous margins and large graphical embellishments. I have included an important chart below that shows a list of techniques and their effectiveness at raising awareness (as determined by the survey participants).

image

Classroom training (face-to-face interaction) was judged to be the most effective method, and by some margin. Promotional material had no redeeming features, and CBT courses were only slightly ahead of leaflets and just on par with regular mail outs. But please read the full report to get the whole picture. In any case, the chart is a good discussion point for your next security team meeting.


Friday, April 24, 2009

The Relegation of Security to NFR Status

In this post I wish to continue exploring some of the points raised by Marcus Ranum in his recent piece The Anatomy of Security Disasters, commenting on what I see as security being relegated to the status of a non-functional requirement (NFR). That is, a potentially perplexing yet necessary task that needs to be done by some group of specialists to complete a project.

The Ranum disaster cycle looks like this. Management comes up with a new idea with significant business benefits that requires support from IT to be deployed. For the sake of argument, let’s say the idea is to bring a customer application online through a web interface, where that application has traditionally only been accessed by internal staff as a back office function. Typically the project has one or several champions, convinced of its importance and certain success, who with their “can do” track records will see the project through.

IT (let alone IT Security) is often not involved in the early stages of developing the project business case and deployment timeframe. By the time IT security does get sight of the project they are nonchalantly requested to “make it secure”. In this manner IT security has become a non-functional requirement (NFR) - not worth explicitly stating - because we have security people and their job is to, well, make IT secure.

From the point of view of the project manager, securing the solution is not qualitatively different from ensuring that there is sufficient network capacity, that the servers are adequately sized for peak loads, that help desk support is created, that back-up and recovery is deployed, that the web interface is friendly, and so on. There are specialists who perform these tasks and they will be called upon by the project manager as and when an NFR requires attention.

Once a project (or even just its idea) gains a minimum amount of support it becomes very difficult to abort it on purely technical objections. Ranum speaks of zombie projects that, despite repeated technical strafing, still manage to stagger towards design and deployment. At this point the power of veto over the project for technical reasons is effectively rescinded, and the security people can only resign themselves to “making it secure”, or perhaps even “making it PCI secure”. Ditto for the other NFR specialists. So what is delivered is the best solution under the circumstances – Just-In-Time Security.

The observation here is that the security function is no longer called upon to critically underwrite the security risks of a project, with the option to reject it. Its assumed role is to support the delivery of an appropriate solution for the stated business purpose - just like everyone else. Ranum makes the point that bringing IT systems and their associated data online carries tremendous inherent security risks - risks which management appear to have lost sight of, and which security practitioners have effectively lost their voice to surface.

Ranum predicts that only an epic failure on the scale of the 1986 Space Shuttle Challenger disaster will close the reality gap between management and IT, and save security from being banished into NFR limbo. Ranum hopes that he is wrong, as we all do.


Thursday, April 23, 2009

Marcus Ranum and the Points of No Return

Marcus Ranum has written another poignant piece (labelled as a rant) on the state of IT security, called The Anatomy of Security Disasters. You might think from the title that Ranum was embarking on a microscope-and-tweezers dissection of a recent security incident, which is normally the reason security people get anatomical. However, in this case the disaster for Ranum is not a single discrete event but rather the cumulative effect of many business-driven IT decisions taken over the last three decades that have rendered a grand IT failure all but inevitable. For Ranum, we have passed the point(s) of no return in avoiding this disaster, and tragically, the disaster may be a necessary trauma to reset the current complacency towards IT (security) risks.

Ranum sees many similarities with the epic failure of the Space Shuttle Challenger, which broke up shortly after take-off on its tenth mission in 1986, killing all seven crew members. His simple and brutal explanation for the disaster is that “space travel is dangerous”, but the public, as well as many in senior management at NASA, had forgotten the inherent risks of the space program. An independent analysis of the Challenger incident by Nobel Laureate Richard Feynman (who was dying of cancer at the time) made a telling observation:

It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management. What are the causes and consequences of this lack of agreement? Since 1 part in 100,000 would imply that one could put a Shuttle up each day for 300 years expecting to lose only one, we could properly ask "What is the cause of management's fantastic faith in the machinery?"

Returning to IT, Ranum is essentially asking the same question - what is the cause of management’s fantastic faith in IT? Putting critical IT assets online is simply dangerous.

Business decision-makers are clearly not listening to security engineers, and a huge reality gap has developed between management expectations and IT reality. So much so, that when problems arise management righteously claim that they were lied to. Ranum quotes one of his colleagues as basically believing that any IT thing that is worth doing can be done securely. Security people should stop being “whiners” and just do their job of securely enabling IT for business.

Compound this disconnect between management and technical people over hundreds of thousands of projects at the corporate, national and international levels, spanning the last 30 years, and you have the disaster Ranum is describing (and lamenting). You also have the coming disaster that he fears (and loathes), since we have passed the point of no return. Those project decisions cannot be undone - only contained.

A predictable Black Swan is gestating.

I have developed a FreeMind map of Ranum's article here, which gives you some navigational freedom for reading.



Tuesday, April 14, 2009

On the Entropy of Fingerprints

A biometric is just a long password that is easy to remember and easy to enter (with the right hardware support). But just how long a password? Can we measure and compare the “something you are” against the “something you know” authentication criteria? I went looking on the web, and yes, there are some answers.

In An Analysis of Minutiae Matching Strength, three IBM researchers outline how to measure the entropy of fingerprints and their resistance to brute force attacks as compared to passwords. The authors state that sampled biometrics are much longer than passwords (several hundred bytes to over a megabyte) and typically have high information content. A password of equivalent length would be difficult to remember.

The authors use two models to arrive at these conclusions. In both models they assume that an extracted fingerprint sample can be represented as an image of 300 x 300 pixels, which can be divided into 400 non-overlapping sites of 20 x 20 pixels. Each site holds a minutia detailing a ridge and valley pattern of a fingerprint, and each minutia point has an angle of orientation represented by d = 4, 8 or 16 values. A sample fingerprint is considered a match against a template if a minimum number of N sites match where N is 10, 12, 14, 16 or 18.

image

So this is like saying that you have a password of length 400 where each character takes on at least d values, and you accept a candidate password as correct if it matches the true password in at least N positions. Letting N = 10 and d = 4 yields just over 2^85 possible fingerprint configurations. So attempting to randomly guess a correct fingerprint template in this model succeeds with only one chance in 2^{85}. This is very low indeed and corresponds to guessing a random length 13 password based on the 94 printable ASCII characters.
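As a rough sanity check, here is a back-of-the-envelope sketch of that count in Python. The formula is my own reading of the simple model - choose which N sites hold matching minutiae, then orient each one - rather than a formula quoted from the paper:

    from math import comb, log2

    sites, N, d = 400, 10, 4      # grid sites, matching minutiae, orientations

    # ways to choose which N sites hold matching minutiae, times d orientations each
    configs = comb(sites, N) * d**N

    print(f"{log2(configs):.1f} bits")                    # ~84.5 bits, near the 2^85 above
    print(f"{log2(configs) / log2(94):.1f} ASCII chars")  # ~12.9, i.e. a length 13 password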

What we have described is called the simple model by the authors, and it does not account for certain minutiae dependencies. A more complex model is proposed to compensate, which also shows that the entropy is still as high as 80 bits with additional matches. Even with the complex model there were quite a few caveats, and a revised model was reported in the excellent 2008 survey paper Biometrics: A Tool for Information Security.

In section V.A of the survey paper the amount of discriminating information in a fingerprint is discussed. The revised model is somewhat more conservative in its comparisons to passwords. The authors now state that randomly matching at least 20 of 36 minutiae is at least as difficult as guessing a length 6 case-sensitive alphanumeric password (about 10^{11} possibilities in total).

The revised model was motivated by the desire to quantify the uniqueness of fingerprints due to their importance in determining guilt in court cases. And just like DNA tests, the assumed power of fingerprints to uniquely discriminate between individuals is being downgraded.

So in summary, a biometric is just a long password that is easy to remember and easy to enter (with the right hardware support). But you need to check the parameters of the matching algorithm and its assumptions to determine how strong your fingerprint is compared to a password.


Wednesday, April 8, 2009

NIST, Passwords and Entropy

Passwords still remain the principal security mechanism to protect information assets. To make passwords harder to guess, security policies usually specify baseline requirements concerning password length (say at least 6 characters), composition rules (say all alphanumeric characters with at least one digit) and further content guidelines on avoiding dictionary words and personal references. The intent of such policies is to gain some assurance that users will not select passwords, either willingly or unintentionally, from a potentially small fraction of all possible passwords.

An interesting approach to specifying password policies is taken by NIST in the Electronic Authentication Guideline (SP800-63), a document which has been evolving since early 2006. In SP800-63 entropy-based arguments are used to create password policies that provide a bound on the maximum number of password guesses that an attacker can make while still having only a small probability of success. Entropy is a classical measure of the information content of an event with an uncertain outcome. In SP800-63 the approach is to specify policies which yield passwords with a minimum amount of entropy, and then to translate this uncertainty measure into a minimum amount of work (that is, guesses) that must be performed by an attacker to recover the password.

The SP800-63 Approach to Password Policies

The main threat addressed by SP800-63 is online password guessing, where the attacker is assumed to have online access to the target system or (web) application for the lifetime of the password (typically several months at least). This focus seems to hark back to the 1985 US DoD Password Management Guidelines, which considered resistance to online guessing as the fundamental metric of password security.

The SP800-63 approach to selecting passwords can be outlined in the following 5 steps.

  1. Set a threshold of success for an online attack to 2^{-k}
  2. Model the textual properties of user selected passwords as English text.
  3. Use known results on entropy estimates for English to select password policies that yield a minimum amount of entropy in user passwords.
  4. Derive a bound M on the number of online guesses for an attacker to recover a password generated according to the policies in Step 3 and the threshold in Step 1.
  5. Verify that the attacker will be limited to a number of online guesses m such that m < M.

In Step 1 a threshold is set for the success of online guessing attacks which is acceptable to the system owner. As suggested by SP800-63 this value should be at least k = 10 (about 1 in 1000) and preferably k = 14 (about 1 in 16,000). These probabilities are taken over the number of possible guesses that the attacker can make over the lifetime of the password. If these odds look too generous to the attacker then the system designer should use a stronger authentication mechanism.

In Step 2 it is assumed that user-selected passwords have the same characteristics as English text. This is a somewhat conservative assumption (even a reasonable one, some might say) and it permits previously known results on the entropy of English to be applied. The basic results here are due to the experiments of Claude Shannon, the inventor of information entropy, among many other significant discoveries. Based on these results, password policies are selected to ensure that user passwords will contain a minimum amount of entropy.

In Step 4 the entropy implied by the password policies is converted into work (the number M of online guesses) to recover a password, according to the threshold set in Step 1. Finally, in Step 5 the system is designed to ensure that the number of possible online guesses m over the lifetime of the password is less than M.

To clarify this approach, we recount an example from Appendix 1 of SP800-63. Consider a password policy that mandates passwords of length 8, selected from the 94 printable keyboard characters, required to include at least one lower case, upper case, digit and special character, and finally scanned to remove dictionary words. SP800-63 asserts that such passwords contain at least 30 bits of entropy, and concludes that if the number of online guesses mounted by an attacker is less than 2^{16}, then the likelihood that the password will be guessed is less than 2^{-14}.

In this case passwords will be considered secure against online guessing if the attacker can be limited to less than about 64,000 guesses over the lifetime of the password. The graph below and its associated table in Appendix 1 of SP800-63 show measures of entropy for passwords up to length 30.

image

Such arguments may end up being influential with security policy makers as exemplified by How Strong is Your Password? NIST has some formulas. For example, the US Railroad Retirement Board includes the following statement in its authentication policies (level 2 corresponds to a threshold of k = 14)

For level 2 protection against on-line guessing, NIST recommends guessing entropy of 30. Guessing entropy is an indication of the amount of work to determine, or guess, a password. Alternately, NIST indicates that any system that required passwords to be changed at least every two years and limited trials by locking an account for 24 hours after six failed attempts would satisfy the targeted guessing attack requirements for level 2.

Converting Entropy into Guessing Work

This seems like magic – pump up the entropy in passwords and you get resistance to online guessing attacks. To understand what is happening here, let’s make the following definitions. We will assume that there are N possible passwords defined by a policy

P1, P2, …, PN

and that a given user U will select these passwords according to the probability distribution

p1, p2, …, pN,  where pi is the probability that U selects password Pi.

Lastly let’s assume for simplicity that the passwords are ordered such that

p1 ≥ p2 ≥ … ≥ pN

which just means that password P1 is the most likely choice of the user with probability p1, password P2 is the next most likely choice with probability p2, and so on. The SP800-63 approach is to find the largest value of M such that

p1 + p2 + … + pM ≤ 2^{-k}

and to ensure that an attacker cannot make M online guesses over the lifetime of the password. Here we make the conservative assumption that an attacker is searching the passwords in the same order of likelihood as the user will select them.

The big trick in the SP800-63 analysis is to derive the value of M from the entropy of the passwords rather than finding the password distribution explicitly. Using the standard definition, the password entropy is given as

H = – ( p1·log2(p1) + p2·log2(p2) + … + pN·log2(pN) )

Now this does not seem to help us much. Say for example that the password entropy is determined to be 30 bits; what can we then say about the number of acceptable password guesses M? Well, what SP800-63 does is to assume that a password with 30 bits of entropy represents the same difficulty to guess as a random 30-bit number. What this assumption means is that

pi = 2^{-H} for each of 2^H equally likely passwords

and M can then be evaluated as the largest number of guesses whose total success probability stays under the threshold:

M · 2^{-H} ≤ 2^{-k}, that is, M = 2^{H-k}

So when the entropy is 30 bits and k = 14, then if the number of online guesses mounted by an attacker is less than M = 2^{16} = 2^{30 - 14}, the likelihood that the password will be guessed is less than 2^{-14}.
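The whole derivation reduces to a couple of lines of Python. This is just the arithmetic above wrapped in a function (the function name is mine, not NIST's):

    def max_online_guesses(h_bits: int, k: int) -> int:
        """Largest M with M * 2^-H <= 2^-k, assuming H bits of uniform entropy."""
        return 2 ** (h_bits - k)

    # the SP800-63 example: 30 bits of entropy, success threshold 2^-14
    print(max_online_guesses(30, 14))   # 65536, the ~64,000 guesses quoted above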

The Flaw of Averages

Unfortunately there is a flaw in this argument. Imagine that you have a random number generator for AES-128 keys, and that you determined that the entropy of the keys produced by the generator was only 80 bits rather than the expected 128 bits. What could you say about the probabilities of the keys being produced by such a generator?

Well it could be the case that there is a subset of 2^{80} keys that are produced uniformly (each with probability 2^{-80} ) and that the remaining keys (the vast majority) are produced with probability zero. Such a distribution would produce an entropy of 80 bits.

So what SP800-63 does in its analysis is to use the same logic but in reverse. If the set of user passwords has an entropy of 30 bits then SP800-63 concludes that there are effectively 2^{30} uniformly distributed passwords, each selected by users with probability of 2^{-30}. Trying to guess the correct password is then like trying to guess a random 30-bit number.

But having a uniform subset of 2^{30} passwords is only one possible distribution that could produce an entropy of 30 bits. It is entirely possible that some passwords have a much higher probability than 2^{-30} of being selected (and others a much lower probability) while still maintaining an overall entropy of 30 bits. And this would lead to a smaller value of M than implied by the analysis above.

The problem here is that the entropy is a single summary statistic (an average, in fact) computed over the password distribution, and as such, detail about the distribution itself is lost. The logic of the SP800-63 analysis is to hypothesize into existence a uniform distribution as the source of the entropy value, when in fact other non-uniform distributions may well have been responsible. And those non-uniform distributions may provide less security against online guessing than is implied by the hypothesized uniform distribution.
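To make this concrete, here is a small Python sketch using a contrived distribution of my own devising (not real password data): one wildly popular password plus a long uniform tail matches the entropy of a uniform 20-bit distribution, yet collapses under a single guess.

    import math

    def skewed_entropy(p1: float, k: int) -> float:
        """Entropy (bits) of a distribution with one password of probability p1
        and 2^k further passwords sharing the remaining 1 - p1 uniformly."""
        rest = 1.0 - p1
        return -p1 * math.log2(p1) - rest * math.log2(rest / 2**k)

    # Uniform over 2^20 passwords: 20 bits of entropy, and the best single
    # guess succeeds with probability 2^-20. The skewed distribution below
    # also has 20 bits of entropy...
    print(skewed_entropy(0.5, 38))   # ~20.0 bits

    # ...yet the attacker's first guess (the popular password) succeeds half
    # the time, vastly more than the 2^-k bound the entropy argument promised.

The two distributions are indistinguishable by entropy alone, which is exactly the incomparability that the research cited below formalizes.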

In fact it has been observed before by several authors that entropy is not a good indicator of guessing work, which in our case means the number of guesses needed to recover a password. To understand this statement further you will need to take the time to read some research. The best papers on the topic are by John O. Pliam, starting with On the Incomparability of Entropy and Marginal Guesswork in Brute Force Attacks. Eric Verheul has done some more recent research, presented at RSA 2007.

The Gordian Knot Remains

Setting effective password policies really depends on having good knowledge of how users choose passwords - in particular, the form of the probability distribution of their password selections.

In short no one seems to know how to unravel the Gordian Knot surrounding user password habits. SP800-63 attempted to get at these probabilities via entropy arguments, but in the entropy we have already lost too much information concerning the password distribution. Even so, the SP800-63 password policy recommendations are good – just take their online guessing bounds with a potentially large grain of salt. Appendix 1 of SP800-63 makes very interesting reading nonetheless.



The Data Centric Security Model (DCSM)

(Repost from July 2007)

While at IBM I worked on a concept which we called the Data Centric Security Model (DCSM), with the basic idea being that security people will have more fruitful interaction with IT managers if discussions centered on their data rather than on our technology. After I left IBM, several people continued to work on the DCSM resulting in a paper presented at the Business Driven IT Workshop in May 2007, which has now been posted on Scribd.

A Data Centric Security Model, Luke O'Connor - an IBM proposal for a data-centric approach to IT security.

Monday, April 6, 2009

Zero Knowledge Proofs

(repost from October 2008 for SBN)

In June I blogged about an anonymity incident concerning the ToR network. The draft version of the article was quite long and I eventually edited out a lengthier introduction. I would like to return to one interesting topic here. The popularity of commercial and open source internet anonymity services has waxed and waned over the last decade. A high profile company from the late 90's was Zero Knowledge Systems (recently renamed to Radialpoint), which possessed a collection of anonymity services, notably their flagship offering, the Freedom Network. For various reasons, the company was downsized to near extinction during the dot-com bust, though much of the technology, experience and expertise lives on in other products and former employees. ToR was a major beneficiary, and luminaries such as Adam Shostack and Stefan Brands are now employees of Microsoft (Ian Goldberg returned to academia). Shostack has an interesting presentation entitled Will People Ever Pay For Privacy? where he gives his views on the lack of commercial success of privacy and anonymity offerings, circa 2003.

The company name, Zero Knowledge Systems, is not just a puzzling juxtaposition of words but actually a deliberate reference to an underlying technology that formed the basis of their intellectual property. In this post we look more closely at the paradoxical-sounding concept of Zero Knowledge.

Proofs

Let's start by recalling a proof that is a few thousand years old, and which at the time it was discovered caused considerable consternation amongst integer-loving Greeks. We are referring to the (unpleasant) fact that the square root of two is irrational (it can't be expressed as the ratio of two whole numbers). Here's one proof. Assume root 2 is rational and let sqrt(2) = P/Q, where P and Q are in lowest terms (share no common factors). Squaring and rearranging yields P^2 = 2*Q^2, so P must be even, from which it follows that Q must be odd (otherwise P/Q is not in lowest terms). Since P is even, P^2 must be divisible by 4; but since Q is odd, Q^2 must also be odd. Hence 2*Q^2 is not divisible by 4 and we have our proof (by contradiction).

So we began with the statement S = "root 2 is irrational", and the proof is the evidence that the statement is correct. We read the proof and are convinced that it is correct. Not only that, we can remember the proof and use it to prove to someone else that root 2 is irrational. We might say that this proof is transferable. This seems a natural property, since we commonly think of proofs as being available to be read and verified by anyone who has the interest and patience to do so. Would we ever need to create proofs that are not transferable? Does this notion even make sense? For security, yes.

In security we deal with proofs pertaining to authentication (S = "I claim this is my identity") and authorization (S = "I claim these privileges and access rights"). The proof of these assertions is typically achieved by a user Alice providing the appropriate credentials to a target system for verification. Proof in this case can be equated to presentation, or to the transfer of credentials from Alice to the target system. If the target system is a phishing site then Alice gives away (transfers) her security information to a potentially malicious party. Could Alice prove statements concerning her authenticity and authorization privileges without transferring the full details of those credentials to the target system?

Zero Knowledge proofs are a technique created in theoretical computer science which shows how one party can convince another party that a given statement is true without revealing any other information. In particular, if Alice uses a Zero Knowledge Proof to convince Bob that a statement S is true, then Bob cannot use the proof supplied by Alice to prove that S is true to another party Carol. This is the zero knowledge aspect - Bob is convinced that S is true (he learns something) but he cannot convince anyone else (what he learns has zero value in the sense that Bob can't transfer it). This seems counterintuitive, and indeed it is.

Recall vs. Capability

Passwords are a common authentication mechanism used to prove your identity to a system. You prove that you know the password associated with a given user identifier by providing the password to the target system. The target system will have a copy of your password in a database and it simply compares its (hashed) copy of the password with the purported copy that you provide, and a match completes the authentication proof. So in this case demonstrating (proving) that you know a password is achieved by revealing it (or a suitable hashed derivative).

We might call this a full knowledge proof in the sense that what you know is fully transferred to the system for authentication. And if it happens that you are responding to a phishing email or your DNS service has been poisoned then your full authentication details are transferred to a rogue system.
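In code, this full knowledge check is just a hash comparison. A minimal sketch, unsalted and unstretched for brevity (real systems should do both):

    import hashlib

    def verify(stored_hash: str, supplied_password: str) -> bool:
        """Full knowledge proof: the client reveals the password itself and
        the server compares hashes (real systems also salt and stretch)."""
        return hashlib.sha256(supplied_password.encode()).hexdigest() == stored_hash

    stored = hashlib.sha256(b"correct horse").hexdigest()   # saved at registration
    print(verify(stored, "correct horse"))   # True - but the secret was transferred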

To improve the situation, consider the user answering 5 questions when they register for a system account, such that 3 are randomly chosen to be asked at each login. There are 10 ways that 3-from-5 questions can be selected, so if a rogue phishing system observes the answers then the probability that the same three answers can be used to log on again is 0.1. If the number of questions is increased to 10 then there are 120 ways to choose 3-from-10 questions, and the probability of transfer is under 0.009.

So you prove your identity by answering a few simple questions that, hopefully, someone who is not you would find difficult to answer all correctly. The proof of identity can be made non-transferable with high probability by using a K-from-N question system, where getting the same K questions from one login to the next is unlikely.
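The replay arithmetic is a one-liner. A sketch, assuming the K questions are drawn uniformly at random at each login:

    from math import comb

    def replay_probability(n: int, k: int) -> float:
        """Chance the next login asks exactly the same k-question subset an
        eavesdropper has already seen (uniform random selection assumed)."""
        return 1 / comb(n, k)

    print(replay_probability(5, 3))    # 0.1
    print(replay_probability(10, 3))   # ~0.0083, i.e. under 0.009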

The idea is sound but there are problems in practice. Our memory is not that good, so the number of questions can't be made too large. Also, think of Sarah Palin, who recently had the password to her email account reset by someone answering the 3 questions: what is your birthday, what is your postal code, and where did you meet your spouse? Unfortunately, this is public information for Palin. We can make the questions more personal, and hopefully easier to remember, but people may object on privacy grounds.

A better approach is to move away from knowledge-based questions & answers to capability-based tasks & solutions. That is, you prove who you are not by answering questions that you should know but by performing some tasks that only you should be able to do.

Cube Logic

Consider Alice proving her identity to Bob by demonstrating her capability to solve the Rubik's cube (yes, many people can do this, but the example is instructive). Alice could opt for a full transfer proof by writing a set of instructions for solving the cube which she could give to Bob for verification. But let's assume that Alice is too lazy to create such a set of instructions, and that Bob is equally too impatient to read them in any case. They both agree that a practical demonstration is better: Bob will take a cube, repeatedly twist and turn it until he is satisfied that it is in a random-looking configuration, and then ask Alice to return the cube to its monochrome state (one colour on each face).

He hands the scrambled cube to Alice, who turns her back to Bob and starts solving the cube (she turns away so that if Bob does not know how to solve the cube, he does not improve by watching her). If Alice can actually solve the cube then she can use her skills to return Bob's cube to the monochrome state. If she does not have the cube-solving capability then she can take a stab at the solution, but she will have to be lucky.

While Bob may be sceptical of Alice's cube-solving abilities, he is prepared to believe that Alice may have memorized a large number of solution sequences to specific configurations. If Bob arranged the cube into one of these configurations then Alice is able to solve the cube directly from memory. Let's be wildly optimistic here and assume that Alice has memorized the solutions for half of the possible 10^(19) configurations.

So Bob can proceed as follows. He randomizes the cube and asks Alice to solve it. If she can't then he knows that she has falsely claimed to have the cube-solving capability. If she does solve the cube, he repeats the test again by re-randomizing the cube and asking Alice for the solution. Each time Bob poses a cube to be solved by Alice there is a 50% chance that she will be caught out if she can't in fact solve the cube (Bob did not randomize the cube to one of her memorized configurations). After 20 iterations there is only 1 chance in a million that Alice could fake having the cube-solving capability.

So here we have replaced questions (do you know the answer to this?) with tasks (do you know the solution to this?). The advantage of tasks is that they are capability-based rather than knowledge-based, and in general more tasks can be posed than questions answered from recall. Also, we don't have to recall a skill, we just use it. As we mentioned, the cube has 10^(19) configurations, which means that someone with the cube-solving capability can solve 10^(19) tasks of the form "return the cube to its monochrome state from this configuration". There is really no comparison to the number of knowledge-based questions that we could answer accurately on a regular basis.

Finally, Bob is not learning anything about the cube-solving capability. If he does not have this capability, then after interacting with Alice he is none the wiser. He has observed that she unscrambled a series of scrambled cubes chosen by him. He is convinced that she can unscramble the cube, but he will not be able to convince Carol that either he or Alice can unscramble the cube. This is the Zero Knowledge aspect.

Zero Knowledge Proofs

The example above explains the essence of a Zero Knowledge proof (ZKP), which we can summarize as follows:

  • Alice knows a secret and wants to convince Bob of this without revealing the secret.
  • Alice and Bob go through a series of exchanges (rounds) where Bob sets Alice a task to solve using her claimed secret as a capability. If she has the capability then each task is easy to solve for Alice.
  • If Alice does not have this capability then after a few rounds she will fail to solve one of the tasks set by Bob, and he then knows that she does not have the capability implied by knowledge of the claimed secret.
  • If Alice successfully solves enough tasks set by Bob (say 20 to 50) then Bob is convinced that Alice knows the secret.
  • As a result of undertaking the exchange with Alice, Bob does not learn anything about the capability and cannot convince a third party Carol that he knows the secret associated with the capability.

But how does this work in practice? The capabilities used in practice for ZKPs are derived from computational secrets, such as the prime factors of a number or the discrete logarithm of a number modulo a prime. Alice can easily create a public and private key pair suitable for this purpose. The private key takes the role of Alice's secret, and her capability is being able to easily solve specific equations involving her secret. The role of the public key is to verify that her solutions are correct. So when Bob wants proof that Alice knows her claimed secret, he creates tasks in the form of equations that Alice can solve using her secret key. If Alice does not know the secret key then she can guess the solution, but she will be caught out after a few rounds.
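As an illustration, here is a minimal Python sketch of one such protocol, Schnorr identification, built on the discrete logarithm. The parameters are toy-sized for readability (real deployments use groups whose order is hundreds of bits long), but the challenge-response round is exactly the task-solving pattern described above:

    import secrets

    # Toy group: g = 2 generates the subgroup of prime order q = 11 in Z_23*
    p, q, g = 23, 11, 2

    x = secrets.randbelow(q) or 1    # Alice's secret discrete log (nonzero)
    y = pow(g, x, p)                 # her public key: y = g^x mod p

    def schnorr_round() -> bool:
        r = secrets.randbelow(q)     # Alice: fresh random commitment exponent
        t = pow(g, r, p)             # ...she sends t = g^r to Bob
        c = secrets.randbelow(q)     # Bob: random challenge (the "task")
        s = (r + c * x) % q          # Alice: response, computable only with x
        return pow(g, s, p) == (t * pow(y, c, p)) % p   # Bob checks g^s = t*y^c

    # After 20 successful rounds Bob is convinced that Alice knows x, yet the
    # transcripts (t, c, s) give him nothing he could replay to convince Carol.
    print(all(schnorr_round() for _ in range(20)))   # True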

In a future post we will talk about some practical applications of ZKPs, but if you just can't wait then take a look at the Direct Anonymous Attestation (DAA) protocol used in the Trusted Computing Platform (TPM) from the Trusted Computing Group.

Some security documents on Scribd

Just a note to say that I am collecting some security documents at Scribd - about 20 so far. This includes useful documents that I found on the web, plus several written by myself. My documents include information on the Birthday Paradox, Data Centric Security, Black Swans in IT Security, Entropy Bounds on Traffic Confirmation and a host of other topics that I have blogged about in the last year or so.

In general Scribd is a wonderful source of material for a wide range of topics, including security, risk management, probability, general analysis plus a wealth of information on less technical topics. If anything there are now too many documents, but the pockets of gold are worth the search.

Friday, April 3, 2009

Three Security Maps in FreeMind and Flash

Here are the Mind Maps that I constructed to help me sift through information on three security topics from last year - an improved attack on A5/1, the Cold Boot Attack, and the Debian crypto flaw. In each case there is considerably more detail, and there are more references, in the Mind Maps than in the posts that were derived from them.

In February 2008 an improved attack was announced on A5/1, the cipher used to encrypt GSM mobile phone traffic. While A5/1 is not considered strong, the new attack claimed faster recovery of keys using fewer assumptions and less data. This Mind Map provides an overview of the issues and what was claimed.

Also in February last year, the Cold Boot Attack was devised by Princeton researchers. This Mind Map gives an overview of what was claimed, what the reactions were, and a lot of opinion on how this attack came about. In short, many professionals knew the attack could work in principle, but it took an actual demonstration to convince them thoroughly.

In May 2008 Luciano Bello discovered a flaw in the random number generator of OpenSSL, which led to the discovery that the Debian Etch distribution had been producing weak (public) keys for well-known protocols such as SSL and SSH over the previous 18 months. This Mind Map provides an annotated set of links on the topic.



Thursday, April 2, 2009

Password Roundup #1

Passwords are always in the news. In Some Black Swans of IT Security I described the password predicament as 40 years of compounded reliance on an unscalable 1-factor authentication technology without a plausible migration strategy. The graphic to the left (thanks to Graham Cluley) is a rendering of the 200-odd passwords that have been used by the Conficker worm as a successful vector of propagation. The overwhelming simplicity of the list once again highlights our resistance to selecting strong passwords. In this post I will touch on some other notable password developments of late.

Recovering passwords of notable people

The US college student David Kernell, who allegedly hacked the email account of Sarah Palin, has pleaded not guilty to additional felony charges. If he is found guilty on all counts, he could face up to 20 years in prison and a fine of up to US $250,000. Kernell reset the password of Palin's email account by successfully supplying the answers to three relatively simple questions that could be accurately guessed from public information. An article from Help Net Security on the Palin incident, from September last year, stated that 21 million passwords are stolen each year, and that almost half of these thefts are perpetrated by someone relatively close to the victim (friends, neighbours, colleagues, or relatives). No relation between Kernell and Palin in this case.

In a similar incident, hackers broke into the web email account of UK Justice Secretary Jack Straw, and then sent out several hundred fraudulent emails under his name. It is suspected that Straw's password for his email account was easy to either guess or reset. The incident was somewhat embarrassing for Straw since he was responsible for setting up the National Hi-Tech Crime Unit to crack down on hackers when he was Home Secretary in 2001.

Earlier this year a hacker used a dictionary attack on an account of an employee at Twitter to gain access to the micro-blogging service and impersonate celebrities such as Britney Spears, Barack Obama and others. In this case the recovered password was happiness. Check out the Twitter Hack video on Youtube.

Cracking and revealing passwords

The Wikileaks team announced that they cracked the encryption on a key US document relating to the war in Afghanistan. The document, entitled NATO in Afghanistan: Master Narrative, details the official storyline (spin) to be given by NATO representatives to journalists. As it turned out, the document was protected by PDF encryption and the key was derived from the password “progress”. Hardly worth encrypting, you might say.

On Feb 7th Go Hacking posted on how to recover various Windows desktop passwords using software loaded onto a USB stick. By recover the author means reveal more than crack - exposing application passwords that are cached somewhere on Windows. You can use a USB stick so loaded to recover passwords from your friends or colleagues (while they are at lunch, say). The suggested software suite includes tools to recover passwords from instant messaging applications, email clients and browsers (specific tools for both IE and Firefox). You also get tips on how to write two short scripts to start the tools once the USB device is inserted into the victim Windows box.

On to real cracking. In January ElcomSoft announced the availability of their Wireless Security Auditor tool that can be used to assess the strength of wireless passwords. Of course, the tool does this by trying to actually crack the password. The novel aspect of the approach is the use of Graphics Processing Units (GPUs) for the cracking computations, which can be optimised to search much faster than standard CPUs. The table below indicates that GPU-accelerated cracking can be over 16 times faster than Intel-based CPU cracking.

image

Personal Password Habits

Several recent surveys indicate that people are relying on fewer passwords to protect resources, thus increasing their value to hackers. The Guardian reports on a survey which found that 61% of people use the same password whenever they can. This is not a good sign unless that password is very difficult to break, and that seems unlikely in Britain according to another survey, which found that 83% of the British population use either their date of birth, pet name, street name or maiden name as the password or security question for most of their private email and bank accounts. The article goes on to point out that it is exactly this type of information that is finding its way onto social networks, and it is therefore less of a secret than people might think. As evidence of growing password fatigue, the Guardian also reported that 10% of people have 50 or more online accounts.

So the message is that people are selecting fewer distinct passwords because they have so many to remember. Another online survey, conducted by Sophos, revealed that one third of respondents use the same password at all their Internet sites (a somewhat lower figure than the Guardian reported, but still pointing to the same problem).

image

So the value of user passwords at a given site increases if there is a reasonable chance that the passwords are valid credentials at other sites as well. Maybe this was part of the motive behind a recent attack on the well-known job site Monster, where its user database was compromised, leading to the release of user IDs and passwords, email addresses, names, phone numbers, and some basic demographic data.

The compromise of the phpBB forum software site was even more revealing, because the attacker not only stole passwords but published them as well. There is a nice analysis by Robert Graham which shows that the majority of the passwords were

  • A person's name
  • Keyboard patterns (qwerty)
  • Variants of the word password, like password1
  • References to pop culture or sports.
Interestingly, there were no length requirements on phpBB passwords, and over 60% of the exposed passwords were 6 characters or less. Lastly, check out Graham Cluley's video (scroll down) on how to choose a strong password if you need some tips on how to get past the 6-character barrier.
