# CLRS Chapter 5. Probabilistic Analysis and Randomized Algorithms


## 5.1. The hiring problem

### Exercises

5.1-1. Show that the assumption that we are always able to determine which candidate is best, in line 4 of procedure `hire-assistant`, implies that we know a total order on the ranks of the candidates.

This is merely the definition of a total order: every pair of elements is comparable. Since line 4 lets us compare any candidate against the best candidate seen so far, every pair of ranks is comparable, and so the ranks form a total order.

5.1-2. Describe an implementation of the procedure `random(a, b)` that only makes calls to `random(0, 1)`. What is the expected running time of your procedure, as a function of $a$ and $b$?

We first note that implementing `random(a, b)` is equivalent to implementing `random(0, b-a)`, as the former can be obtained from the latter via one addition operation, and vice versa. `random(0, n)` can be implemented from `random(0, 1)` by generating $\lceil \lg(n+1) \rceil$ random bits, with which we can write down the binary representations of all integers between $0$ and $n$, inclusive. Unless $n+1$ is of the form $2^k$, the generated random bits will produce numbers other than $0, \ldots, n$ as well. In such cases, we simply generate another $\lceil \lg(n+1) \rceil$-bit random binary number and repeat the process.

```
from random import choice
from math import log, ceil

def random_integer(a, b):
    """We call this random_integer so as not to overwrite the random module."""
    if a == 0 and b == 1:    # base case: random_integer(0, 1) is a fair bit
        return choice([0, 1])
    elif a > b:              # invalid case
        return None
    elif a == b:             # degenerate case
        return a
    else:
        n = b - a + 1        # number of values in [a, b]
        random_bits = 0
        for i in range(ceil(log(n, 2))):
            random_bits += random_integer(0, 1) * (2 ** i)
        if random_bits in range(n):
            return a + random_bits
        else:                # out of range: reject and try again
            return random_integer(a, b)
```

```
# Plot the empirical distribution of random_integer(0, 9)
import seaborn as sns
import matplotlib.pyplot as plt

sns.distplot([random_integer(0, 9) for _ in range(500)], kde=False)
sns.distplot([random_integer(0, 9) for _ in range(1000)], kde=False)
sns.distplot([random_integer(0, 9) for _ in range(2000)], kde=False)
```

At each generation of a $\lceil \lg(n+1) \rceil$-bit random number, there is a probability $p = \bigl(2^{\lceil \lg(n+1) \rceil} - (n+1)\bigr)/2^{\lceil \lg(n+1) \rceil} < 1/2$ that the result lies outside $\{0, \ldots, n\}$ and we have to generate another random number. Since each instance is independent of the others, the probability of `random(0, n)` failing to terminate after $k$ iterations is $p^k$. It then follows that the probability of `random(0, n)` terminating after exactly $k$ iterations is $(1-p)p^{k-1}$.

Assuming it takes constant time to compute `random(0, 1)`, the expected running time of `random(0, n)` is

$$\sum_{k=1}^{\infty} k(1-p)p^{k-1} \cdot \Theta(\lg n) = \Theta(\lg n)\,(1-p)\sum_{k=1}^{\infty} kp^{k-1}.$$

Since

$$\sum_{k=0}^{\infty} x^k = \frac{1}{1-x}$$

for all $|x| < 1$, we differentiate both sides to obtain

$$\sum_{k=1}^{\infty} kx^{k-1} = \frac{1}{(1-x)^2}.$$

Using this formula, we finish the computation of the expected running time of `random(0, n)`:

$$\Theta(\lg n)\,(1-p) \cdot \frac{1}{(1-p)^2} = \frac{\Theta(\lg n)}{1-p} = \Theta(\lg n) = \Theta(\lg(b-a)),$$

where the last two equalities follow from $1/(1-p) < 2$ and $n = b - a$.

It is of course possible that *none* of the generated $\lceil \lg(n+1) \rceil$-bit numbers falls within the desired range. The probability of such an event is negligible, however, as $p^k$ converges to $0$ as $k \to \infty$.

In general, it is impossible to implement `random(0, n)` with guaranteed finite-time termination solely from `random(0, 1)`. Indeed, a repeated application of `random(0, 1)` can produce selection operations with probability $a/2^b$, where $a$ and $b$ are nonnegative integers. Implementing `random(0, n)` requires a selection operation with probability $1/(n+1)$, and most such quantities cannot be written as a finite sum of numbers of the form $a/2^b$.
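For instance, consider `random(0, 2)`, which must select each of three values with probability $1/3$:

$$\frac{1}{3} = (0.01010101\ldots)_2 = \sum_{k=1}^{\infty} \frac{1}{4^k}.$$

Because the binary expansion is infinite, $1/3$ is not a finite sum of terms of the form $a/2^b$, so no bounded number of fair bits can realize the selection exactly; rejection sampling sidesteps this at the cost of only expected (not worst-case) termination.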

5.1-3. Suppose that you want to output 0 with probability 1/2 and 1 with probability 1/2. At your disposal is a procedure `biased-random` that outputs either 0 or 1. It outputs 1 with some probability $p$ and 0 with some probability $1-p$, where $0 < p < 1$, but you do not know what $p$ is. Give an algorithm that uses `biased-random` as a subroutine, and returns an unbiased answer, returning 0 with probability 1/2 and 1 with probability 1/2. What is the expected running time of your algorithm as a function of $p$?

We generate two independent instances of `biased-random`, $x$ and $y$. Observe that

$$\Pr(x = 0,\, y = 1) = (1-p)p = p(1-p) = \Pr(x = 1,\, y = 0).$$

We can then generate 0 and 1 with equal probability by returning 0 whenever $x < y$ and returning 1 whenever $x > y$. If $x = y$, then we generate two more random instances of `biased-random` and compare them.

```
import numpy as np

def biased_random(p):
    # Returns 1 with probability p and 0 with probability 1 - p.
    return np.random.choice([0, 1], 1, p=[1 - p, p])[0]

def coin_toss_from_biased_random(p):
    x, y = biased_random(p), biased_random(p)
    if x < y:        # (x, y) = (0, 1), probability (1 - p) * p
        return 0
    elif x > y:      # (x, y) = (1, 0), probability p * (1 - p)
        return 1
    else:            # x == y: discard the round and try again
        return coin_toss_from_biased_random(p)
```

```
# Plot the empirical distribution of the generated coin tosses
import seaborn as sns
import matplotlib.pyplot as plt
sns.distplot([coin_toss_from_biased_random(0.2) for _ in range(1000)], kde=False)
sns.distplot([coin_toss_from_biased_random(0.5) for _ in range(1000)], kde=False)
sns.distplot([coin_toss_from_biased_random(0.8) for _ in range(1000)], kde=False)
```

Assuming it takes constant time to compute `biased-random`, the expected running time of `coin_toss_from_biased_random` is

$$\sum_{k=1}^{\infty} k(1-q)q^{k-1} \cdot \Theta(1) = \frac{\Theta(1)}{1-q} = \Theta\!\left(\frac{1}{p(1-p)}\right),$$

where $q = p^2 + (1-p)^2$ is the probability that a round is discarded, so that $1 - q = 2p(1-p)$. (See Exercise 5.1-2 for details on a similar computational problem.) Since $q^k \to 0$ as $k \to \infty$, the probability of the algorithm never terminating is negligible.
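As a quick sanity check on the unbiasedness claim, here is a small self-contained simulation; `biased_bit` is a hypothetical stand-in for `biased-random` built on `random.random`:

```
import random

def biased_bit(p):
    # Hypothetical stand-in for biased-random: 1 with probability p, else 0.
    return 1 if random.random() < p else 0

def fair_bit(p):
    # Draw pairs of biased bits until they differ; the two orders (0, 1)
    # and (1, 0) are equally likely, so the output is unbiased.
    while True:
        x, y = biased_bit(p), biased_bit(p)
        if x != y:
            return 1 if x > y else 0

random.seed(0)
samples = [fair_bit(0.8) for _ in range(20000)]
mean = sum(samples) / len(samples)
```

Even with a heavily biased source ($p = 0.8$), the sample mean lands close to $1/2$.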

## 5.2. Indicator random variables

### Exercises

5.2-1. In `hire-assistant`, assuming that the candidates are presented in a random order, what is the probability that you hire exactly one time? What is the probability that you hire exactly $n$ times?

For notational convenience, we label the candidates by $1, \ldots, n$, where $i$ is a better candidate than $j$ whenever $i > j$.

To hire exactly once, candidate $n$ must be the first interviewee. The probability of this happening is $(n-1)!/n! = 1/n$.

To hire exactly $n$ times, the $i$th interviewee, for each $1 \le i \le n$, must be candidate $i$. The probability of this happening is $1/n!$.
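Both probabilities can be verified exactly by enumerating all $n!$ interview orders for a small $n$; `hires` below is a minimal sketch of the hiring count in `hire-assistant`:

```
from itertools import permutations

def hires(order):
    # Number of hires made by hire-assistant on this interview order,
    # where a larger label means a better candidate.
    best, count = 0, 0
    for candidate in order:
        if candidate > best:
            best, count = candidate, count + 1
    return count

n = 4
counts = [hires(p) for p in permutations(range(1, n + 1))]
# Exactly one hire: the best candidate comes first, (n-1)! = 6 orders of 24.
# Exactly n hires: only the increasing order 1, 2, 3, 4.
```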

5.2-2. In `hire-assistant`, assuming that the candidates are presented in a random order, what is the probability that you hire exactly twice?

For notational convenience, we label the candidates by $1, \ldots, n$, where $i$ is a better candidate than $j$ whenever $i > j$. Moreover, we also label the candidates in the order they are presented: $a_1, \ldots, a_n$.

If $a_1 = n$, then no more hiring can occur, so we must have $a_1 \ne n$. We thus fix $1 \le k \le n-1$ and assume that $a_1 = k$. In order to have precisely two hirings, the second hire must be candidate $n$, so all candidates better than $k$ must appear after candidate $n$. Since the timing of appearances of $1, \ldots, k-1$ does not affect the number of hirings, we see that precisely two hirings occur if and only if $n$ is the first to be interviewed among the $(n-k)$-person subset

$$\{k+1, k+2, \ldots, n\}.$$

Since the candidates are presented in a random order, the probability of this happening is $1/(n-k)$.

We now denote by $E_n$ the event that precisely two hirings occur in the $n$-person setting and observe that $\Pr(E_n \mid a_1 = k)$, the probability of two hirings occurring assuming $a_1 = k$, is shown to be $1/(n-k)$. Since $a_1$ must be one of $1, \ldots, n$, the law of total probability implies that

$$\Pr(E_n) = \sum_{k=1}^{n} \Pr(E_n \mid a_1 = k) \Pr(a_1 = k) = \frac{1}{n} \sum_{k=1}^{n} \Pr(E_n \mid a_1 = k).$$

Since $a_1 = n$ cannot result in two hirings, we conclude that

$$\Pr(E_n) = \frac{1}{n} \sum_{k=1}^{n-1} \frac{1}{n-k} = \frac{1}{n} \sum_{j=1}^{n-1} \frac{1}{j}.$$

Page 1154 of CLRS shows that

$$\sum_{j=1}^{m} \frac{1}{j} = \ln m + O(1),$$

whence it follows that

$$\Pr(E_n) = \frac{\ln(n-1) + O(1)}{n}.$$
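For small $n$ the closed form can be checked exactly by brute force; the sketch below counts hires per interview order and compares against the harmonic sum using exact fractions:

```
from fractions import Fraction
from itertools import permutations

def hires(order):
    # Number of hires made by hire-assistant on this interview order.
    best, count = 0, 0
    for candidate in order:
        if candidate > best:
            best, count = candidate, count + 1
    return count

n = 4
perms = list(permutations(range(1, n + 1)))
prob_two = Fraction(sum(1 for p in perms if hires(p) == 2), len(perms))
harmonic = sum(Fraction(1, j) for j in range(1, n))  # H_{n-1} = 1 + 1/2 + 1/3
```

For $n = 4$ this gives $\Pr(E_4) = H_3/4 = 11/24$.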

5.2-3. Use indicator random variables to compute the expected value of the sum of $n$ dice.

For each $1 \le i \le 6$, we let $X_i$ be the indicator random variable associated with the event that a die shows $i$. Since all six numbers are equally likely to show, we have $E[X_i] = 1/6$ for all $i$. The expected value of the sum of one die is

$$\sum_{i=1}^{6} i \cdot E[X_i] = \frac{1}{6} \sum_{i=1}^{6} i = \frac{21}{6} = \frac{7}{2}.$$

By linearity of expectation, the expected value of the sum of $n$ dice is $7n/2$.
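The value $7n/2$ can be confirmed by averaging the sum over all $6^n$ equally likely outcomes for a small $n$:

```
from fractions import Fraction
from itertools import product

def expected_dice_sum(n):
    # Exact average of the sum over all 6^n equally likely outcomes.
    outcomes = list(product(range(1, 7), repeat=n))
    return Fraction(sum(sum(roll) for roll in outcomes), len(outcomes))
```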

5.2-4. Use indicator random variables to solve the following problem, which is known as the *hat-check problem*. Each of $n$ customers gives a hat to a hat-check person at a restaurant. The hat-check person gives the hats back to the customers in a random order. What is the expected number of customers who get back their own hat?

For each $1 \le i \le n$, we let $X_i$ be the indicator random variable associated with the event that customer $i$ receives their own hat back. Since only one of the $n$ hats is the desired hat, we have

$$E[X_i] = \frac{1}{n}$$

for all $i$. The expected number of customers who get their own hat back is

$$E\left[\sum_{i=1}^{n} X_i\right] = \sum_{i=1}^{n} E[X_i] = n \cdot \frac{1}{n} = 1.$$

In other words, only one person can be expected to get their hat back, regardless of the number of customers.
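That the expectation equals exactly 1 for every $n$ can be checked exhaustively over all return orders for small $n$:

```
from fractions import Fraction
from itertools import permutations

def average_own_hats(n):
    # Exact average number of customers i who get hat i back,
    # over all n! equally likely return orders.
    perms = list(permutations(range(n)))
    matches = sum(sum(1 for i, hat in enumerate(p) if i == hat) for p in perms)
    return Fraction(matches, len(perms))
```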

5.2-5.Let be an array of distinct numbers. If and , then the pair is called anof . (See Problem 2-4 for more on inversions.) Suppose that the elements of form a uniform random permutation of . Use indicator random variables to compute the expected number of inversions.inversion

For each , we let be the indicator random variable associated with the event that is an inversion of . Since either or , we have that for all . The expected number of inversions is
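A brute-force average over all permutations of a small $n$ agrees with $n(n-1)/4$:

```
from fractions import Fraction
from itertools import permutations

def average_inversions(n):
    # Exact average number of inversions over all n! permutations.
    perms = list(permutations(range(1, n + 1)))
    total = sum(sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
                for p in perms)
    return Fraction(total, len(perms))
```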

## 5.3. Randomized algorithms

### Permuting input arrays

Here are Python implementations of the uniform random permutation algorithms in the text.

```
def permute_by_sorting(A):
    n = len(A)
    P = [0 for _ in range(n)]
    for i in range(n):
        P[i] = random_integer(1, n ** 3)  # random_integer from Exercise 5.1-2
    return sort_by_key(A, P)

def sort_by_key(A, P):
    """Use the built-in Python sort feature to sort A by the key array P."""
    key = sorted(
        [(i, P[i]) for i in range(len(P))],
        key=lambda pair: pair[1])
    return [A[key[i][0]] for i in range(len(P))]
```

```
def randomize_in_place(A):
    n = len(A)
    for i in range(n):
        random_index = random_integer(i, n - 1)
        A[i], A[random_index] = A[random_index], A[i]  # swap
    return A
```
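The uniformity of the in-place shuffle can be checked exhaustively for $n = 3$ by supplying the "random" indices explicitly and enumerating every equally likely sequence of draws; `randomize_with_choices` below is a hypothetical helper mirroring the swap schedule:

```
from itertools import product

def randomize_with_choices(A, choices):
    # Same swap schedule as randomize_in_place, but with the random
    # indices supplied explicitly so every branch can be enumerated.
    A = list(A)
    for i, r in enumerate(choices):
        A[i], A[r] = A[r], A[i]
    return tuple(A)

# For n = 3, position 0 draws from {0, 1, 2}, position 1 from {1, 2},
# position 2 from {2}; the 3 * 2 * 1 = 6 draw sequences are equally likely.
outcomes = [randomize_with_choices([1, 2, 3], c)
            for c in product(range(3), range(1, 3), range(2, 3))]
```

All 6 draw sequences yield 6 distinct permutations, so each of the $3! = 6$ permutations occurs with probability exactly $1/6$.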

### Exercises

5.3-1. Professor Marceau objects to the loop invariant used in the proof of Lemma 5.5. He questions whether it is true prior to the first iteration. He reasons that we could just as easily declare that an empty subarray contains no 0-permutations. Therefore, the probability that an empty subarray contains a 0-permutation should be 0, thus invalidating the loop invariant prior to the first iteration. Rewrite the procedure `randomize-in-place` so that its associated loop invariant applies to a nonempty subarray prior to the first iteration, and modify the proof of Lemma 5.5 for your procedure.