Perfect secrecy, or not?

Cryptography is the science of designing systems that can withstand malicious attempts to abuse them. Every cryptographic scenario can be illustrated by the story of security’s inseparable couple, Alice and Bob: Alice and Bob want to send messages to each other, while deterring various unwanted interlocutors, eavesdroppers, and tamperers from participating.

In the simplest model, Alice sends Bob a secret message, and Eve the eavesdropper attempts to decode Alice’s message. Alice’s goal is to encrypt the message in such a way that Bob can decrypt but Eve cannot. The formal study of such communication protocols must begin with the construction of a cryptographic model, which we can then analyze to determine its functionality and security.

The formalization of the Alice–Eve–Bob scenario consists of the following:

  • A set $\mathcal{M}$ of messages and a set $\mathcal{C}$ of ciphertexts;
  • Alice has an encryption algorithm $E \colon \mathcal{M} \to \mathcal{C}$ that takes $m \in \mathcal{M}$ and outputs $E(m) \in \mathcal{C}$;
  • Bob has a decryption algorithm $D \colon \mathcal{C} \to \mathcal{M}$ that takes $c \in \mathcal{C}$ and outputs $D(c) \in \mathcal{M}$;
  • Eve has her own decryption algorithm $D_{\mathrm{Eve}} \colon \mathcal{C} \to \mathcal{M}$ that takes $c \in \mathcal{C}$ and outputs $D_{\mathrm{Eve}}(c) \in \mathcal{M}$.

In order for the above model to be functional, we expect to have $D(E(m)) = m$ for all $m \in \mathcal{M}$. Security would mean that $D_{\mathrm{Eve}}(E(m)) \neq m$ for all $m \in \mathcal{M}$, or, at least, for a large portion of $\mathcal{M}$.
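To make the model concrete, consider the following Python sketch. The shift cipher and every name here (`toy_encrypt`, `toy_decrypt`, `SHIFT`) are our own toy illustration, not part of the formal model. The scheme is functional, but the moment the algorithm leaks, Eve decrypts exactly as Bob does, which motivates the discussion that follows.

```python
SHIFT = 3  # the whole "secret" lives inside the algorithm itself

def toy_encrypt(message: bytes) -> bytes:
    """Alice's E: shift every byte by a fixed amount."""
    return bytes((b + SHIFT) % 256 for b in message)

def toy_decrypt(ciphertext: bytes) -> bytes:
    """Bob's D: undo the shift, so that D(E(m)) = m."""
    return bytes((b - SHIFT) % 256 for b in ciphertext)

m = b"attack at dawn"
c = toy_encrypt(m)
assert toy_decrypt(c) == m   # functionality: D(E(m)) = m
# Once the method leaks, Eve's decryption algorithm is just Bob's:
eve_decrypt = toy_decrypt
assert eve_decrypt(c) == m   # security fails completely
```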

This naïve definition of functionality and security quickly turns out to be hopeless. History has shown that one party can often find out which encryption method the other party is using through espionage and other tactics. For Alice to guarantee the security of her communication with Bob, it is imperative for her to use an encryption method that produces messages that cannot be decrypted even when the encryption algorithm is leaked to Eve.

To this end, we construct an encryption method with a secret key, without which encrypted messages cannot easily be decrypted. We update the above cryptographic model accordingly:

  • We now have a set $\mathcal{M}$ of messages, a set $\mathcal{C}$ of ciphertexts, and a set $\mathcal{K}$ of keys.
  • There is a key-generation algorithm $G$ that outputs an element of $\mathcal{K}$.
  • Alice is equipped with an encryption algorithm $E \colon \mathcal{K} \times \mathcal{M} \to \mathcal{C}$ that takes $(k, m) \in \mathcal{K} \times \mathcal{M}$ and outputs $E(k, m) \in \mathcal{C}$.
  • Bob is equipped with a decryption algorithm $D \colon \mathcal{K} \times \mathcal{C} \to \mathcal{M}$ that takes $(k, c) \in \mathcal{K} \times \mathcal{C}$ and outputs $D(k, c) \in \mathcal{M}$.
  • Eve is equipped with another decryption algorithm $D_{\mathrm{Eve}} \colon \mathcal{C} \to \mathcal{M}$ that takes $c \in \mathcal{C}$ and outputs $D_{\mathrm{Eve}}(c) \in \mathcal{M}$.

In this context, it is reasonable to say that the model is functional if

$$D(k, E(k, m)) = m$$

for all $k \in \mathcal{K}$ and $m \in \mathcal{M}$. In other words, Bob should be able to decrypt any message that Alice encrypted, so long as both of them use the same key.

Is this new model secure? We should hope that Eve cannot, in general, guess the key that Alice and Bob choose to use. Therefore, it is reasonable to assume that $G$ is a randomized algorithm.

Moreover, if Eve fails to obtain the correct key, then she should not be able to recover the original message. For this, a typical ciphertext must not carry any inherent meaning on its own, thereby deterring Eve from deciphering it without a key.

We formalize these observations as follows:

Definition 1 (Shannon, 1949). A cryptographic system $(\mathcal{M}, \mathcal{C}, \mathcal{K}, G, E, D)$ is perfectly secret if, for each probability distribution over $\mathcal{M}$, we have the identity

$$\Pr[M = m \mid C = c] = \Pr[M = m]$$

for all choices of $m \in \mathcal{M}$ and $c \in \mathcal{C}$ with $\Pr[C = c] > 0$. In other words, the probability of recovering a fixed message $m$ does not depend on the choice of the ciphertext $c$.

An implementation of Shannon’s perfect secrecy model is the one-time pad algorithm. To state the algorithm, we recall from Section 1.10 that $\mathbb{F}_2$ denotes the finite field of size 2.

Theorem 2 (Shannon one-time pad, 1949). Fix $n \in \mathbb{N}$ and let $\mathcal{M} = \mathcal{C} = \mathcal{K} = \mathbb{F}_2^n$, the $n$-fold cartesian product of $\mathbb{F}_2$. We define $G$ to be the algorithm that chooses an element of $\mathcal{K}$ uniformly at random. Let $\oplus$ denote the coordinatewise addition on

$$\mathbb{F}_2^n$$

and define

$$E(k, m) = k \oplus m \quad \text{and} \quad D(k, c) = k \oplus c$$

for all $k \in \mathcal{K}$, $m \in \mathcal{M}$, and $c \in \mathcal{C}$. The resulting system $(\mathcal{M}, \mathcal{C}, \mathcal{K}, G, E, D)$, called the one-time pad, is a perfectly secret cryptographic system.
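As a concrete illustration, here is a minimal Python sketch of the one-time pad over bytes rather than single bits (so the key length is counted in bytes); the function names `keygen` and `otp` are ours, not from the text.

```python
import secrets

def keygen(n: int) -> bytes:
    """G: draw a key uniformly at random (n bytes, i.e., 8n bits)."""
    return secrets.token_bytes(n)

def otp(key: bytes, text: bytes) -> bytes:
    """E and D coincide: coordinatewise addition over F_2 is XOR."""
    assert len(key) == len(text)
    return bytes(k ^ t for k, t in zip(key, text))

m = b"meet me at noon"
k = keygen(len(m))
c = otp(k, m)             # E(k, m) = k XOR m
assert otp(k, c) == m     # D(k, E(k, m)) = m: the scheme is functional
```

Note that encryption and decryption are the same operation, since every element of $\mathbb{F}_2^n$ is its own additive inverse.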

The one-time pad algorithm (OTP) is one of the earliest known examples of encryption methods with a secret key. Miller was the first to describe OTP formally (Miller 1882), while Shannon was the first to formally prove the security of OTP (Shannon 1949).

OTP is an example of a symmetric-key encryption scheme, as the same key is used both for the encryption process and the decryption process. The above theorem shows that OTP is essentially unbreakable, but OTP is not without problems. We first note that the key must be at least as large as the message it encrypts. In fact, this condition cannot be relaxed:

Theorem 3 (Shannon, 1949). In every perfectly secret cryptographic system, $|\mathcal{K}| \geq |\mathcal{M}|$.

Worse, a key can never be recycled. Indeed, if $c_1 = k \oplus m_1$ and $c_2 = k \oplus m_2$, then

$$c_1 \oplus c_2 = (k \oplus m_1) \oplus (k \oplus m_2) = m_1 \oplus m_2.$$

From $m_1 \oplus m_2$, the individual messages can be obtained with reasonable certainty. A historical example is the Venona project:

One-time pads used properly only once are unbreakable; however, the KGB’s cryptographic material manufacturing center in the Soviet Union apparently reused some of the pages from one-time pads. This provided Arlington Hall with an opening. Very few of the 1942 KGB messages could be solved because there was very little duplication of one-time pad pages in those messages. The situation was more favorable in 1943, even more so in 1944, and the success rate improved accordingly. (Benson 2011)
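The key-reuse leak can be sketched in a few lines of Python (the messages and helper below are our own illustration):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    """Coordinatewise addition over F_2, on bytes."""
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"attack at dawn!"
m2 = b"retreat at dusk"            # same length as m1
k = secrets.token_bytes(len(m1))   # a single key, reused twice
c1 = xor(k, m1)
c2 = xor(k, m2)

# The key cancels out: c1 XOR c2 = m1 XOR m2, leaking message structure.
assert xor(c1, c2) == xor(m1, m2)
# If Eve guesses m2 (a "crib"), she recovers m1 exactly.
assert xor(xor(c1, c2), m2) == m1
```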

In short, it is hopeless to aim for perfect secrecy. We shall have to settle for something less while maintaining a reasonable level of security, which is the underlying theme of cryptography.

We conclude the section with proofs of the above theorems. To this end, we shall make use of a technical lemma.

Lemma 4. The perfect secrecy condition holds if and only if

$$\Pr[E(K, m_1) = c] = \Pr[E(K, m_2) = c]$$

for all choices of $m_1, m_2 \in \mathcal{M}$ and $c \in \mathcal{C}$, where $K$ denotes a key drawn by $G$. In other words, the probabilities of obtaining the same ciphertext $c$ from two different messages $m_1$ and $m_2$ are the same.

We defer the proof of the lemma and establish the theorems.

For OTP,

$$D(k, E(k, m)) = k \oplus (k \oplus m) = (k \oplus k) \oplus m = m$$

for all choices of $k \in \mathcal{K}$ and $m \in \mathcal{M}$, whence OTP is functional.

To check that OTP is secure, we observe that, for an arbitrary choice of a probability distribution over $\mathcal{M}$,

$$\Pr[E(K, m) = c] = \Pr[K \oplus m = c] = \Pr[K = c \oplus m] = 2^{-n}$$

regardless of the choice of $m \in \mathcal{M}$ and $c \in \mathcal{C}$. Therefore,

$$\Pr[E(K, m_1) = c] = \Pr[E(K, m_2) = c]$$

for all choices of $m_1, m_2 \in \mathcal{M}$ and $c \in \mathcal{C}$. Perfect secrecy of OTP now follows from the lemma.
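The uniformity of the ciphertext distribution can be verified exhaustively for a small $n$; the following Python check (our own, not from the text) enumerates every key in $\mathbb{F}_2^3$.

```python
from collections import Counter
from itertools import product

n = 3
space = list(product((0, 1), repeat=n))  # all of {0,1}^3

def xor(a, b):
    """Coordinatewise addition over F_2."""
    return tuple(x ^ y for x, y in zip(a, b))

# For each message m, tabulate the ciphertext E(K, m) = K XOR m
# over every (equally likely) key K.
for m in space:
    counts = Counter(xor(k, m) for k in space)
    # Every ciphertext arises from exactly one key, hence
    # Pr[E(K, m) = c] = 2^(-n) for every c, independently of m.
    assert all(counts[c] == 1 for c in space)
```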

To show that any perfectly secret cryptographic system must have at least as many keys as messages, we suppose for contradiction that $|\mathcal{K}| < |\mathcal{M}|$. We fix $c \in \mathcal{C}$ with $\Pr[C = c] > 0$ and consider the set

$$\mathcal{M}(c) = \{ D(k, c) : k \in \mathcal{K} \}.$$

Since $|\mathcal{M}(c)| \leq |\mathcal{K}| < |\mathcal{M}|$, we can find $m' \in \mathcal{M} \setminus \mathcal{M}(c)$. Now,

$$\Pr[E(K, m') = c] = 0 < \Pr[E(K, m) = c]$$

for at least one $m \in \mathcal{M}(c)$, and so it follows from the lemma that the cryptographic system in question is not perfectly secret.

Let us return to the proof of the technical lemma, which we restate below for convenience.

Lemma 4. The perfect secrecy condition holds if and only if

$$\Pr[E(K, m_1) = c] = \Pr[E(K, m_2) = c]$$

for all choices of $m_1, m_2 \in \mathcal{M}$ and $c \in \mathcal{C}$. In other words, the probabilities of obtaining the same ciphertext $c$ from two different messages $m_1$ and $m_2$ are the same.

Fix a probability distribution over $\mathcal{M}$. For any choice of $m \in \mathcal{M}$ and $c \in \mathcal{C}$ with $\Pr[C = c] > 0$, we have that

$$\Pr[M = m \mid C = c] = \Pr[M = m]$$

if and only if

$$\Pr[C = c \mid M = m] = \Pr[C = c].$$

To this end, we observe that

$$\Pr[C = c \mid M = m] = \Pr[E(K, m) = c]$$

and that

$$\Pr[C = c] = \sum_{m' \in \mathcal{M}} \Pr[M = m'] \Pr[E(K, m') = c].$$

If the quantities $\Pr[E(K, m') = c]$ agree for all $m' \in \mathcal{M}$, then the sum collapses to $\Pr[E(K, m) = c] = \Pr[C = c]$, and perfect secrecy follows. Conversely, suppose the system is perfectly secret, fix $m_1, m_2 \in \mathcal{M}$ and $c \in \mathcal{C}$, and consider the uniform distribution on $\{m_1, m_2\}$, so that

$$\Pr[C = c] = \tfrac{1}{2}\Pr[E(K, m_1) = c] + \tfrac{1}{2}\Pr[E(K, m_2) = c].$$

If $\Pr[C = c] > 0$, then perfect secrecy yields $\Pr[E(K, m_1) = c] = \Pr[C = c] = \Pr[E(K, m_2) = c]$. If $\Pr[C = c] = 0$, then, since probabilities are always nonnegative, both summands must vanish, so that

$$\Pr[E(K, m_1) = c] = 0 = \Pr[E(K, m_2) = c]$$

regardless of the choice of $m_1, m_2 \in \mathcal{M}$ and $c \in \mathcal{C}$, and the proof of the lemma is now complete.

We have now seen that perfect secrecy is exceedingly difficult to achieve. Nevertheless, for practical purposes it suffices to prevent Eve the eavesdropper from efficiently decrypting the message. To formalize this notion, we must make sense of what it means for Eve to be computationally bounded.

Computational boundedness is, in essence, a restriction on the ability to carry out computations that take too long to run. What, then, is computation? We turn to Alan Turing, the father of theoretical computer science (Turing 1950):

The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer. The human computer is supposed to be following fixed rules; he has no authority to deviate from them in any detail. We may suppose that these rules are supplied in a book, which is altered whenever he is put on to a new job. He has also an unlimited supply of paper on which he does his calculations. He may also do his multiplications and additions on a “desk machine,” but this is not important.

If we use the above explanation as a definition we shall be in danger of circularity of argument. We avoid this by giving an outline of the means by which the desired effect is achieved. A digital computer can usually be regarded as consisting of three parts:

  • Store
  • Executive unit
  • Control

The store is a store of information, and corresponds to the human computer’s paper, whether this is the paper on which he does his calculations or that on which his book of rules is printed. In so far as the human computer does calculations in his head a part of the store will correspond to his memory.

The executive unit is the part which carries out the various individual operations involved in a calculation. What these individual operations are will vary from machine to machine. Usually fairly lengthy operations can be done such as “Multiply 3540675445 by 7076345687” but in some machines only very simple ones such as “Write down 0” are possible.

We have mentioned that the “book of rules” supplied to the computer is replaced in the machine by a part of the store. It is then called the “table of instructions.” It is the duty of the control to see that these instructions are obeyed correctly and in the right order. The control is so constructed that this necessarily happens.

In short, computation can be modeled with a theoretical machine consisting of three parts:

  • infinitely-long tapes with discrete cells, one of which contains the input values while the rest provide read-write workspace areas,
  • a state register that specifies the state of the machine at each step, and
  • heads that can, in accordance with the state of the machine and the recorded symbols on the tapes, read and write symbols on the tapes, as well as move the tapes left or right one cell at a time.

On such a model, computational boundedness is merely a restriction on how many times the tapes can be moved before a computational task is considered too difficult. It is, then, worthwhile to formalize the model into a precise mathematical construct:

Definition 5 (Turing, 1936). For a fixed $k \in \mathbb{N}$, a $k$-tape Turing machine $M$ is defined to be an ordered triple $(\Gamma, Q, \delta)$ consisting of the following:

  • A set $\Gamma$ of symbols, called the alphabet of $M$. Typically, we assume that $\Gamma$ contains, at least, 0, 1, blank, and start.
  • A set $Q$ of states for $M$. Typically, we assume that $Q$ contains, at least, start and halt.
  • A transition function

$$\delta \colon Q \times \Gamma^k \to Q \times \Gamma^{k-1} \times \{L, S, R\}^k$$

that, based on the current state of $M$ and the symbols at the current locations of the heads, produces output to be recorded on the workspace tapes and moving instructions for the heads.

We are, of course, interested in computational processes that terminate in a finite number of steps.

Definition 6. A $k$-tape Turing machine $M$ is said to be a deterministic algorithm, or simply an algorithm, if, for each input that starts off at the start state, there exists a positive integer $T$ such that $T$ applications of the transition function $\delta$ produce the halt state. In other words, a deterministic algorithm always halts after following the instructions provided by the transition function finitely many times, regardless of the initial configuration of the input tapes.

The image to keep in mind is as follows: we put tapes into the Turing machine $M$ as the input, $M$ modifies the tapes until it hits the halt state, and the final configuration of the tapes is returned as the output.
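This picture can be turned into a few lines of code. The following is a minimal Python sketch of a 1-tape machine (the formal model in Definition 5 also has multiple tapes and a start symbol, which we omit here); the transition table `flip` is our own toy example.

```python
BLANK = "_"

def run(delta, tape, state="start", max_steps=10_000):
    """Iterate the transition function delta until the halt state."""
    cells = dict(enumerate(tape))  # the store: an unbounded tape
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, BLANK)
        state, write, move = delta[(state, symbol)]  # the control
        cells[head] = write                          # the executive unit
        head += {"L": -1, "S": 0, "R": 1}[move]
    return "".join(cells[i] for i in sorted(cells) if cells[i] != BLANK)

# A toy transition table: flip every bit, halting at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", BLANK): ("halt", BLANK, "S"),
}
assert run(flip, "0110") == "1001"
```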

We recall, however, that we are interested in computational efficiency, not mere computability. In order to formalize the notion of computational boundedness, we must work out what it means for an algorithm to have a certain running time.

It is typical to consider bit strings such as

$$0, \quad 01, \quad 0111, \quad 010110$$

as representations of data. Therefore, it makes sense to be able to refer to the collection of all finite bit strings:

Definition 7. Let $\{0,1\}^n$ denote the $n$-fold cartesian product of the bit set $\{0,1\}$. We define

$$\{0,1\}^* = \bigcup_{n=0}^{\infty} \{0,1\}^n,$$

the set of all finite bit strings. Given a bit string $x \in \{0,1\}^*$, we define its length $|x|$ to be the unique $n \in \mathbb{N}$ such that

$$x \in \{0,1\}^n.$$

With bit strings as our representation of data, it makes sense to think of a computational task as a function on $\{0,1\}^*$, i.e., a process that outputs a unique bit string for each input bit string. This turns out to be a sufficient abstraction for defining the notion of computational efficiency.

Definition 8. Let $f \colon \{0,1\}^* \to \{0,1\}^*$ and $T \colon \mathbb{N} \to \mathbb{N}$. We say that a Turing machine $M$ computes $f$ in $T$-time if, for every $x \in \{0,1\}^*$, the Turing machine $M$ initialized to the start configuration on input $x$ halts with $f(x)$ as its output within $T(|x|)$ steps.

In other words, we can use a function $T$ to provide an upper bound on the running time of a Turing machine computing $f$. Computational tasks in real life often take longer with larger input data, and so it makes sense to have the $T$-time depend on the size of the input bit string.

In fact, it would make sense to have $T$ grow with the input size, at a rate sufficiently fast that the Turing machine is always given enough time to read the entire input. Moreover, we would want $T$ itself to be efficiently computable as well, for otherwise we cannot make use of the information on computational boundedness with ease. We collect these desirable properties into the following definition:

Definition 9. A function $T \colon \mathbb{N} \to \mathbb{N}$ is time constructible if $T(n) \geq n$ for all $n \in \mathbb{N}$ and if there exists a Turing machine that computes the function $x \mapsto \langle T(|x|) \rangle$, the binary representation of $T(|x|)$, in $T$-time.

Examples of time-constructible functions include $n$, $n \log n$, and $n^2$.

Let us now define what it means for a function to be efficiently computable.

Definition 10. We define $\mathrm{poly}$ to be the set of all time-constructible functions $T$ such that $T(n) = O(n^k)$ for some $k \in \mathbb{N}$. A function $f \colon \{0,1\}^* \to \{0,1\}^*$ is said to be computable in polynomial time, or efficiently computable, if there exists a Turing machine $M$ and a function $T \in \mathrm{poly}$ such that $M$ computes $f$ in $T$-time.

We often write $\mathrm{poly}(n)$ to denote a fixed, unspecified element of $\mathrm{poly}$.

Examples of efficiently computable functions include the usual sorting algorithms, primality testing, the fast Fourier transform, and so on.

It is often useful to allow our model of computation to make use of randomness. For example, quicksort with a randomized pivot often performs better in practice than quicksort with a median-of-medians pivot, even though the latter has better worst-case runtime than the former. Some widely used computational methods, such as Monte Carlo methods, are inherently probabilistic and do not have deterministic analogues.
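The randomized-pivot idea can be sketched as follows (a toy Python version of our own; production sorts partition in place rather than building new lists):

```python
import random

def quicksort(xs):
    """Quicksort with a uniformly random pivot: expected O(n log n)
    comparisons on every input, with no adversarial worst case."""
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)  # the only source of randomness
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

data = [5, 3, 8, 1, 9, 2, 7]
assert quicksort(data) == sorted(data)
```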

In light of this, we define a probabilistic analogue of the Turing machine.

Definition 11 (de Leeuw–Moore–Shannon–Shapiro, 1956). A probabilistic Turing machine $M$ is an ordered quadruple $(\Gamma, Q, \delta_1, \delta_2)$ consisting of a set $\Gamma$ of symbols, a set $Q$ of states, and two transition functions $\delta_1$ and $\delta_2$: see Definition 5. Given an input, the probabilistic Turing machine $M$ is executed by randomly applying $\delta_1$ and $\delta_2$ with equal probability at each step.

A probabilistic Turing machine $M$ is said to be a probabilistic algorithm if, for each input

$$x \in \{0,1\}^*,$$

there exists a positive integer $T$ such that

$$\delta_{i_T} \circ \delta_{i_{T-1}} \circ \cdots \circ \delta_{i_1}$$

produces the halt state. Here, each $i_j$ is randomly chosen to be either 1 or 2.
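The random composition of $\delta_1$ and $\delta_2$ can be mimicked in a few lines of Python. The machine below is a toy of our own devising, not one from the text: it keeps computing while $\delta_1$ is drawn and halts as soon as $\delta_2$ is drawn, so its running time is geometrically distributed.

```python
import random

def delta_1(state):
    return "start"   # keep computing

def delta_2(state):
    return "halt"    # finish

def run_probabilistic(max_steps=10_000):
    """Apply delta_1 or delta_2, chosen with equal probability each step."""
    state, steps = "start", 0
    while state != "halt" and steps < max_steps:
        delta = random.choice((delta_1, delta_2))  # the coin flip
        state = delta(state)
        steps += 1
    return state, steps

state, steps = run_probabilistic()
# Halts after a geometrically distributed number of steps (mean 2).
assert state == "halt" and steps >= 1
```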

The concept of computational efficiency for Turing machines can be carried over to the context of probabilistic Turing machines with minor modifications.

Definition 12. A probabilistic Turing machine $M$ computes $f \colon \{0,1\}^* \to \{0,1\}^*$ in $T$-time for some $T \colon \mathbb{N} \to \mathbb{N}$ if, for every choice of bit string $x \in \{0,1\}^*$, the probabilistic Turing machine $M$ initialized to the start state on input $x$ halts after at most $T(|x|)$ steps with $f(x)$ as the output, regardless of the random choices made within $M$.

Definition 13. A function $f \colon \{0,1\}^* \to \{0,1\}^*$ is said to be computable in probabilistic polynomial time if there exists a probabilistic Turing machine $M$ and a function $T \in \mathrm{poly}$ such that $M$ computes $f$ in $T$-time.

Regular Turing machines are sometimes called deterministic Turing machines to emphasize their difference from probabilistic Turing machines. Similarly, computability in polynomial time is often referred to as deterministic polynomial time.

With the language of computational complexity theory at hand, we can now formalize the notion of a process that is easy to carry out but difficult to revert. To this end, we introduce two preliminary definitions.

Definition 14. $U_n$ denotes a random variable distributed uniformly over $\{0,1\}^n$, i.e.,

$$\Pr[U_n = x] = 2^{-n}$$

whenever $x \in \{0,1\}^n$, and equals zero otherwise.
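A realization of $U_n$ is easy to sample in Python (a sketch using the standard `secrets` module; the helper name is ours):

```python
import secrets
from collections import Counter

def sample_U(n: int) -> str:
    """Draw one realization of U_n, uniform over {0,1}^n."""
    return format(secrets.randbits(n), f"0{n}b")

n = 4
draws = [sample_U(n) for _ in range(20_000)]
counts = Counter(draws)
# All 2^n strings should appear, each with frequency near 2^(-n) = 1/16.
assert len(counts) == 2 ** n
assert all(len(s) == n for s in counts)
```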

Definition 15. $0^n$ refers to the bit string of length $n$ consisting entirely of 0s. Similarly, $1^n$ refers to the bit string of length $n$ consisting entirely of 1s.

We are now ready to give the definition of a one-way function.

Definition 16 (Diffie–Hellman, 1976). A function $f \colon \{0,1\}^* \to \{0,1\}^*$ is said to be one-way if the following conditions hold:

  • $f$ is easy to compute, i.e., $f$ is computable in deterministic polynomial time.
  • $f$ is difficult to invert, i.e., for each probabilistic polynomial-time algorithm $A$ and every polynomial $p$,

$$\Pr\left[ A(f(U_n), 1^n) \in f^{-1}(f(U_n)) \right] < \frac{1}{p(n)}$$

for all sufficiently large $n$.

Why is the auxiliary input $1^n$ needed? Without it, a function could be considered one-way merely by shrinking its input: if the image is very small, an inverting algorithm simply would not have enough time, relative to the size of its input (the shrunk output of the original function), to achieve good computational complexity.

The existence of a one-way function has not been proven. In fact, an existence proof would settle the famous P versus NP problem. There are, however, plausible candidates for one-way functions, having withstood many attempts at producing efficient inverting algorithms.

The most famous example is the integer factorization problem, which is widely believed to be difficult. State-of-the-art factoring algorithms such as the general number field sieve run in subexponential time. In the language of one-way functions, the multiplication function

$$f(x, y) = xy,$$

suitably encoded as a function on bit strings, is conjectured to be a one-way function.
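The asymmetry is easy to feel in code: multiplying is one machine operation, while the naive inversion below takes time exponential in the bit length of its input. The primes and helper names are our own illustration.

```python
def multiply(p: int, q: int) -> int:
    """Easy direction: polynomial time in the bit lengths of p and q."""
    return p * q

def trial_division(n: int) -> int:
    """Naive inversion: smallest nontrivial factor of n.
    Runs in roughly sqrt(n) steps, exponential in the bit length of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

p, q = 1009, 2003            # small primes, purely for illustration
n = multiply(p, q)
assert trial_division(n) == p
# For a 2048-bit product, this loop would need on the order of 2^1024
# iterations; even the number field sieve is only subexponential.
```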

Under this assumption, one can construct the famous RSA cryptosystem, whose security rests on the difficulty of the integer factorization problem.

See Goldreich’s two-volume monograph for more information on the foundations of cryptography.