# CLRS Chapter 2. Getting Started


## 2.1. Insertion Sort

```
def insertion_sort(A):
    for i in range(1, len(A)):
        x = A[i]
        j = i - 1
        while (j >= 0) and (A[j] > x):
            A[j+1] = A[j]
            j = j - 1
        A[j+1] = x
    return A
```

### Properties of Insertion Sort

Insertion sort is adaptive, stable, in-place, and online.

**Adaptive.** We say that a sorting algorithm is **adaptive** if the algorithm can make use of the order already present in the input data to speed up the sorting process. To see that insertion sort is sensitive to the order present in the input data, we define an **inversion** (see **Problem 2-4**) in a list $A$ to be a pair of indices $(i, j)$ such that $i < j$ but $A[i] > A[j]$. We can think of the total number of inversions as a measure of disorder in $A$. If $A$ is ordered, then there are no inversions in $A$. If the terms of $A$ are arranged in reverse order, then there are $n(n-1)/2$ inversions in $A$, where $n$ is the length of the list $A$.
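For instance (a small illustrative helper, not part of CLRS), the number of inversions can be counted directly by checking every pair of indices, at a cost of $O(n^2)$ time:

```python
def count_inversions(A):
    """Count the pairs (i, j) with i < j and A[i] > A[j], in O(n^2) time."""
    return sum(1
               for i in range(len(A))
               for j in range(i + 1, len(A))
               if A[i] > A[j])
```

A sorted list yields 0 inversions, and a reversed list of length $n$ yields $n(n-1)/2$; Problem 2-4d below counts inversions in $\Theta(n \lg n)$ time instead.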

Note that, during the $i$th iteration of the `for` loop, the `while` conditional `(j >= 0) and (A[j] > x)` is satisfied if and only if $(j, i)$ is an inversion. Therefore, the number of steps of computation `insertion_sort()` performs is

$$\Theta(n + I),$$

where $I$ denotes the total number of inversions in the list $A$.

See Estivill-Castro/Wood, “A survey of adaptive sorting algorithms” for details on adaptive algorithms.

**Stable.** We say that a sorting algorithm is **stable** if every pair of terms with the same value remains in the same order after the sorting process terminates. Insertion sort is stable, as each term is moved to the left without crossing any term with the same value.

Stability is a desirable property when a data set needs to be sorted in accordance with more than one ordering. For example, if we are to sort a list of names alphabetically by last name and, within each last name, by first name, we can run a stable sort on first names and then run another stable sort on last names. Running an unstable sort twice might mix up the order of first names within each last name.
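A sketch in Python (Python's `sorted` is guaranteed stable; the names are made up for illustration): to order by last name and break ties by first name, we sort on the secondary key first, then run a stable sort on the primary key.

```python
names = [("Grace", "Hopper"), ("Annie", "Lovelace"),
         ("Alan", "Turing"), ("Ada", "Lovelace")]

by_first = sorted(names, key=lambda name: name[0])  # secondary key first
# stable sort on the primary key preserves the first-name order within ties
by_last_then_first = sorted(by_first, key=lambda name: name[1])
```

Because the second sort is stable, the two Lovelaces keep the first-name order established by the first pass.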

**In-place.** We say that an algorithm is **in place** if the algorithm can be executed using minimal additional storage beyond saving the input data. The definition of *minimal* is context-dependent, but an algorithm with $O(1)$ auxiliary space complexity should always be considered in-place. Insertion sort is in-place, as the transformation takes place within the array `A`, with the aid of only a constant number of auxiliary variables (`x`, `i`, and `j`).

**Online.** We say that an algorithm is **online** if the algorithm does not require all information about the input before it executes. In other words, an online algorithm can process the input data *serially*, i.e., as it comes in.

Why is insertion sort an online algorithm? During the $k$th iteration of the `for` loop, we see that the loop reads and modifies only the sublist `A[:k+1]`. Therefore, to execute `insertion_sort()`, it suffices to know one extra term of `A` at each iteration of the `for` loop.
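To make the online property concrete, here is a sketch (a hypothetical variant, not CLRS code) that consumes its input one element at a time and keeps everything seen so far in sorted order:

```python
def online_insertion_sort(stream):
    """Insertion sort over a serially arriving stream of values."""
    A = []
    for x in stream:
        A.append(x)                 # one new term arrives
        j = len(A) - 2
        while j >= 0 and A[j] > x:  # insert it into the sorted prefix
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = x
    return A
```

After each element arrives, `A` holds all elements seen so far, in sorted order; no knowledge of future elements is needed.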

### Loop Invariants as a Form of Mathematical Induction

A loop-invariants proof of correctness is just a fancy way of writing an induction proof. The initialization step corresponds to the base case of mathematical induction. The maintenance step corresponds to the induction step of mathematical induction. The termination step doesn’t correspond to a concrete step of an induction proof but usually follows easily from the induction step and the conditional that drives the loop.

### Exercises

2.1-1. Using Figure 2.2 as a model, illustrate the operation of `insertion_sort` on the array `<31, 41, 59, 26, 41, 58>`.

```
>>> A = [31, 41, 59, 26, 41, 58]
>>> for i in range(1, len(A)):
...     x = A[i]
...     j = i - 1
...     while (j >= 0) and (A[j] > x):
...         A[j+1] = A[j]
...         j = j - 1
...         print(A)
...     A[j+1] = x
...     print(A)
[31, 41, 59, 26, 41, 58]
[31, 41, 59, 26, 41, 58]
[31, 41, 59, 59, 41, 58]
[31, 41, 41, 59, 41, 58]
[31, 31, 41, 59, 41, 58]
[26, 31, 41, 59, 41, 58]
[26, 31, 41, 59, 59, 58]
[26, 31, 41, 41, 59, 58]
[26, 31, 41, 41, 59, 59]
[26, 31, 41, 41, 58, 59]
```

2.1-2. Rewrite the `insertion_sort` procedure to sort into nonincreasing instead of nondecreasing order.

```
def insertion_sort_reverse(A):
    for i in range(1, len(A)):
        x = A[i]
        j = i - 1
        while (j >= 0) and (A[j] < x):
            A[j+1] = A[j]
            j = j - 1
        A[j+1] = x
    return A
```

```
>>> insertion_sort_reverse([1, 2, 3])
[3, 2, 1]
>>> insertion_sort_reverse([2,1,3,1,4,1,5,1,6,1,7,1,8,1])
[8, 7, 6, 5, 4, 3, 2, 1, 1, 1, 1, 1, 1, 1]
```

2.1-3. Consider the searching problem:

`INPUT`: A sequence of $n$ numbers $A = \langle a_1, a_2, \ldots, a_n \rangle$ and a value $v$.

`OUTPUT`: An index $i$ such that $v = A[i]$, or the special value `NIL` if $v$ does not appear in $A$.

Write pseudocode for linear search, which scans through the sequence, looking for $v$. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

```
def linear_search(A, v):
    for i in range(len(A)):
        if A[i] == v:
            return i
    return None
```

```
>>> linear_search([3, 2, 5, 1], 3)
0
>>> linear_search([3, 2, 5, 1], 2)
1
>>> linear_search([3, 2, 5, 1], 5)
2
>>> linear_search([3, 2, 5, 1], 1)
3
>>> linear_search([3, 2, 5, 1], 0)
None
```

The loop invariant of `linear_search` is as follows: at the start of the $i$th iteration of the `for` loop, the subarray `A[:i]` does not contain `v`.

**Initialization.** The loop invariant is trivially true before the first iteration of the loop.

**Maintenance.** If the $i$th iteration does not return, then `A[i] != v`, and so the subarray `A[:i+1]` does not contain `v`. The loop invariant therefore holds at the start of the next iteration.

**Termination.** If `A[i] == v`, then the loop terminates with `return i`. Otherwise, the loop terminates after the entire array is scanned, at which point `linear_search` returns `None`. This is precisely what we aimed for.

2.1-4. Consider the problem of adding two $n$-bit binary integers, stored in two $n$-element arrays $A$ and $B$. The sum of the two integers should be stored in binary form in an $(n+1)$-element array $C$. State the problem formally and write pseudocode for adding the two integers.

Formally, $A$ and $B$ are arrays of length $n$ whose elements are either 0 or 1, with the least significant bit stored first. We are to produce an array $C$ of length $n+1$ such that $C[i]$ equals $A[i] + B[i] + M_i$ modulo 2 for $0 \leq i \leq n-1$ and $C[n] = M_n$, where $M_i$ is the carry into position $i$: $M_0 = 0$, and $M_{i+1} = 1$ if and only if $A[i] + B[i] + M_i \geq 2$.

We can implement this as follows:

```
def binary_addition(A, B):
    n = len(A)  # assume that len(A) == len(B)
    C = [None for _ in range(n+1)]
    M = 0
    for i in range(n):
        C[i] = (A[i] + B[i] + M) % 2
        if A[i] + B[i] + M <= 1:
            M = 0
        else:
            M = 1
    C[n] = M
    return C
```

```
>>> binary_addition([0, 1, 1], [1, 0, 1])
[1, 1, 0, 1]
>>> binary_addition([0, 0, 0, 1, 1, 1], [1, 1, 1, 1, 1, 1])
[1, 1, 1, 0, 1, 1, 1]
```

## 2.2. Analyzing Algorithms

### Exercises

2.2-1. Express the function $n^3/1000 - 100n^2 - 100n + 3$ in terms of $\Theta$-notation.

We let $f(n) = n^3/1000 - 100n^2 - 100n + 3$ for notational convenience. We shall show that $f(n) = \Theta(n^3)$.

To see this, we observe first that

$$f(n) \leq \frac{n^3}{1000} + 3 \leq n^3$$

for all natural numbers $n \geq 2$. It follows that $f(n) = O(n^3)$.

It now suffices to establish that $f(n) = \Omega(n^3)$. As a preliminary result, we show that

$$f(n) \geq \frac{n^3}{2000}$$

for all sufficiently large $n$. To this end, we set $g(x) = f(x) - \frac{x^3}{2000}$ and observe that

$$g'(x) = \frac{3x^2}{2000} - 200x - 100$$

is increasing on $[a, \infty)$, where

$$a = \frac{2 \times 10^5}{3}.$$

(The exact value of $a$ is unimportant.)

Since the derivative of $g$ keeps increasing on $[a, \infty)$, it is eventually positive, and so $g$ increases without bound on $[a, \infty)$. It follows that $g(x) \to \infty$ as $x \to \infty$, which means we can find a real number $x_0$ such that $g(x) \geq 0$ whenever $x \geq x_0$. We now pick $N$ to be any natural number larger than $x_0$ and conclude that

$$g(n) \geq 0$$

whenever $n \geq N$, because $n \geq N > x_0$.

Estimate $g(n) \geq 0$ is equivalent to the statement that

$$f(n) \geq \frac{n^3}{2000}$$

whenever $n \geq N$. It follows that

$$\frac{n^3}{2000} \leq f(n) \leq n^3$$

for all $n \geq \max(N, 2)$, whence $f(n) = \Theta(n^3)$, as was to be shown.
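A quick numerical sanity check of the sandwich bound (not a proof; the sample values of $n$ are arbitrary):

```python
def f(n):
    """The function from Exercise 2.2-1."""
    return n**3 / 1000 - 100 * n**2 - 100 * n + 3

# For sufficiently large n, the bounds n^3/2000 <= f(n) <= n^3 should hold.
for n in [300_000, 1_000_000, 10_000_000]:
    assert n**3 / 2000 <= f(n) <= n**3
```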

2.2-2. Consider sorting $n$ numbers stored in array $A$ by first finding the smallest element of $A$ and exchanging it with the element in $A[1]$. Then find the second smallest element of $A$, and exchange it with $A[2]$. Continue in this manner for the first $n-1$ elements of $A$. Write pseudocode for this algorithm, which is known as **selection sort**. What loop invariant does this algorithm maintain? Why does it need to run for only the first $n-1$ elements, rather than for all $n$ elements? Give the best-case and worst-case running times of selection sort in $\Theta$-notation.

```
def selection_sort(A):
    for i in range(len(A) - 1):
        # minimum-finding subroutine
        min_index = i
        for j in range(i, len(A)):
            if A[j] < A[min_index]:
                min_index = j
        # swap
        A[i], A[min_index] = A[min_index], A[i]
    return A
```

```
>>> selection_sort([3, 2, 1])
[1, 2, 3]
>>> selection_sort([2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 1, 8, 1])
[1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5, 6, 7, 8]
```

The loop invariant of `selection_sort` is as follows: at the end of the $i$th iteration of the outer `for` loop, the subarray `A[:i]` is sorted and its elements are no larger than the minimum of the subarray `A[i:]`. It follows, in particular, that the `for` loop need not run for the last element of `A`, as it is trivially sorted.

Let us prove the loop invariant.

**Initialization.** The first iteration of the `for` loop finds the minimum of `A` and places it at `A[0]`. `A[:1]` is trivially sorted, and its only element is the minimum of `A`, hence no larger than the minimum of `A[1:]`.

**Maintenance.** The $i$th iteration of the `for` loop finds the minimum of `A[i-1:]` and places it at `A[i-1]`. Before the swap, the minimum of `A[i-1:]` is no smaller than the maximum of `A[:i-1]`, and so the resulting subarray `A[:i]` is sorted. We have

$$\min A[i:] \geq \min A[i-1:] = A[i-1],$$

which, in particular, implies that the elements of `A[:i]` are no larger than the minimum of the subarray `A[i:]`.

**Termination.** The `for` loop terminates after `len(A)-1` iterations, with `A` sorted.

We let $n = $ `len(A)` for notational convenience. The outer `for` loop runs $n - 1$ times regardless of what the input array `A` is. During the $i$th iteration of the outer `for` loop, the inner `for` loop runs $n - i + 1$ times regardless of what `A` is. It follows that the time complexity of `selection_sort` is bounded below by

$$\sum_{i=1}^{n-1} (n - i + 1) = \frac{n(n+1)}{2} - 1 = \Omega(n^2).$$

The best-case running time occurs when the `if` clause in the inner `for` loop never holds, i.e., when `A` is already sorted. The worst-case running time is bounded by the scenario in which the `if` clause holds every time. Since each iteration of the inner `for` loop performs at most a constant number of steps, the time complexity of `selection_sort` is bounded above by

$$2\left(\frac{n(n+1)}{2} - 1\right) = O(n^2).$$

It follows that the time complexity of `selection_sort` is $\Theta(n^2)$, in both the best and the worst case.

2.2-3. Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case?

For an input array `A` of length $n$ and an input value `v` to be searched, there are $n + 1$ possible scenarios: either `A[i] == v` for some $0 \leq i \leq n-1$, or `v` is not in `A`. If `A[i] == v`, then the search terminates after checking $i + 1$ elements. If `v` is not in `A`, then the search terminates after checking all $n$ elements. It follows that, when the $n+1$ scenarios are equally likely, the expected number of elements to be checked is

$$\frac{1}{n+1}\left(n + \sum_{i=0}^{n-1}(i+1)\right) = \frac{1}{n+1}\left(n + \frac{n(n+1)}{2}\right) = \frac{n}{n+1} + \frac{n}{2},$$

which is approximately $n/2$ for large enough values of $n$. If we assume that `v` must be in `A`, then the case of not finding `v` after searching all $n$ elements of `A` is eliminated. In this case, the expected number of elements to be checked is

$$\frac{1}{n}\sum_{i=0}^{n-1}(i+1) = \frac{n+1}{2},$$

which, once again, is approximately $n/2$ for large enough values of $n$.

The worst case of linear search entails, of course, searching all elements of the array.
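We can corroborate the $(n+1)/2$ figure empirically (a small simulation of our own, with an assumed array of distinct values and a fixed seed):

```python
import random

def checks(A, v):
    """Number of elements linear search examines before stopping."""
    for i, a in enumerate(A):
        if a == v:
            return i + 1
    return len(A)

random.seed(0)
n = 1000
A = list(range(n))
trials = 10_000
# v is equally likely to be any element of A, so the mean should be near (n + 1) / 2.
avg = sum(checks(A, random.choice(A)) for _ in range(trials)) / trials
```

The worst case, `checks(A, v)` for a `v` not in `A`, examines all `n` elements.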

2.2-4. How can we modify almost any algorithm to have a good best-case running time?

Suppose that, for each input size $n$, we are able to characterize explicitly the desired output for at least one input of size $n$. We build this case into the algorithm, requiring only the time needed to check whether the input data is the specified "good input".
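For example (an illustrative sketch of our own): selection sort runs in $\Theta(n^2)$ time even on sorted input, but designating already-sorted arrays as the "good inputs" and prechecking for them in $O(n)$ time gives it a linear best case:

```python
def selection_sort_with_precheck(A):
    """Selection sort with an O(n) precheck for a designated 'good input'."""
    if all(A[i] <= A[i + 1] for i in range(len(A) - 1)):
        return A  # good input detected: linear-time best case
    for i in range(len(A) - 1):
        min_index = i
        for j in range(i, len(A)):
            if A[j] < A[min_index]:
                min_index = j
        A[i], A[min_index] = A[min_index], A[i]
    return A
```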

## 2.3. Designing Algorithms

### Is Merge Sort Always Fast?


Since merge sort always runs in $\Theta(n \lg n)$ time, it is not an adaptive sorting algorithm. The running time of insertion sort can be as low as $\Theta(n)$, and so we expect insertion sort to run faster than merge sort on lists that are largely sorted.

Moreover, asymptotically slower sorting algorithms can run faster than merge sort on small arrays. This is because asymptotic analysis is useful only for **large** $n$.

What do we mean by this? Given two positive constants $c_1$ and $c_2$, we observe that

$$\lim_{n \to \infty} \frac{c_1 n^2}{c_2 n \lg n} = \frac{c_1}{c_2} \lim_{n \to \infty} \frac{n}{\lg n} = \infty,$$

and so there exists a number $N$ such that

$$c_1 n^2 > c_2 n \lg n$$

whenever $n \geq N$. This is what we mean when we say $c_1 n^2$ asymptotically dominates $c_2 n \lg n$.

On the other hand, the function $x \mapsto \frac{x}{\lg x}$ attains its minimum at $x = e$ and is an increasing function on $[e, \infty)$. Therefore, so long as

$$\frac{c_2}{c_1} > \frac{e}{\lg e},$$

we can find a positive integer $N'$ such that

$$\frac{n}{\lg n} < \frac{c_2}{c_1}$$

whenever $2 \leq n \leq N'$. Therefore, if $c_2/c_1$ is sufficiently large, there exists a positive integer $N'$ such that

$$c_1 n^2 < c_2 n \lg n$$

whenever $2 \leq n \leq N'$. In other words, it is possible for a $\Theta(n^2)$ algorithm to run faster than a $\Theta(n \lg n)$ algorithm for small $n$.
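To make this concrete, suppose (purely hypothetically) that insertion sort costs $c_1 n^2 = 8n^2$ steps and merge sort costs $c_2 n \lg n = 64 n \lg n$ steps. We can locate the crossover point numerically:

```python
from math import log2

def insertion_cost(n):
    return 8 * n * n          # assumed constant factor c1 = 8

def merge_cost(n):
    return 64 * n * log2(n)   # assumed constant factor c2 = 64

# Find the largest n for which the quadratic algorithm is still cheaper.
n = 2
while insertion_cost(n + 1) < merge_cost(n + 1):
    n += 1
```

Under these assumed constants, insertion sort wins for all $n \leq 43$ and merge sort wins thereafter; real crossover points must be measured, not derived.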

### Exercises

2.3-1. Using Figure 2.4 as a model, illustrate the operation of merge sort on the array `<3, 41, 52, 26, 38, 57, 9, 49>`.

[3] [41] [52] [26] [38] [57] [9] [49]

[3 41] [26 52] [38 57] [9 49]

[3 26 41 52] [9 38 49 57]

[3 9 26 38 41 49 52 57]

2.3-2. Rewrite the `merge` procedure so that it does not use sentinels, instead stopping once either array `L` or `R` has all its elements copied back to `A` and then copying the remainder of the other array back into `A`.

```
import math

def merge(A, p, q, r):
    L, R = A[p:q], A[q:r]
    n, m = q - p, r - q
    i, j, k = 0, 0, p
    while i < n and j < m:
        if L[i] < R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
        k += 1
    while i < n or j < m:
        if i < n:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
        k += 1

def merge_sort(A, p, r):
    if p + 1 < r:
        q = p + math.floor((r - p) / 2)
        merge_sort(A, p, q)
        merge_sort(A, q, r)
        merge(A, p, q, r)
```

```
>>> A = [3, 41, 52, 26, 38, 57, 9, 49]
>>> merge_sort(A, 0, len(A))
>>> A
[3, 9, 26, 38, 41, 49, 52, 57]
```

```
>>> from random import choice
>>> for _ in range(5):
...     # generate a list of 10 random integers between 0 and 99
...     B = [choice(range(100)) for _ in range(10)]
...     print('before sort: {}'.format(B))
...     merge_sort(B, 0, len(B))
...     print('after sort: {}'.format(B))
...     print('')
before sort: [14, 72, 7, 44, 5, 15, 56, 42, 48, 41]
after sort: [5, 7, 14, 15, 41, 42, 44, 48, 56, 72]
before sort: [75, 13, 23, 29, 90, 33, 0, 81, 25, 1]
after sort: [0, 1, 13, 23, 25, 29, 33, 75, 81, 90]
before sort: [89, 1, 14, 36, 58, 60, 57, 82, 65, 71]
after sort: [1, 14, 36, 57, 58, 60, 65, 71, 82, 89]
before sort: [40, 29, 96, 57, 33, 37, 81, 24, 23, 56]
after sort: [23, 24, 29, 33, 37, 40, 56, 57, 81, 96]
before sort: [98, 54, 65, 70, 77, 89, 17, 37, 69, 89]
after sort: [17, 37, 54, 65, 69, 70, 77, 89, 89, 98]
```

2.3-3. Use mathematical induction to show that when $n$ is an exact power of 2, the solution of the recurrence

$$T(n) = \begin{cases} 2 & \text{if } n = 2, \\ 2T(n/2) + n & \text{if } n = 2^k \text{ for } k > 1 \end{cases}$$

is $T(n) = n \lg n$.

If $n = 2$, then $T(2) = 2 = 2 \lg 2$, so the claim holds trivially. Suppose that the claim holds for $n = 2^k$ and observe that

$$T(2^{k+1}) = 2T(2^k) + 2^{k+1} = 2 \cdot 2^k \lg 2^k + 2^{k+1} = 2^{k+1}(\lg 2^k + 1) = 2^{k+1} \lg 2^{k+1},$$

as was to be shown. It follows from mathematical induction that the claim holds for all $n = 2^k$ with $k \geq 1$.
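The induction can be spot-checked by evaluating the recurrence directly (for exact powers of 2 only, per the problem statement):

```python
from math import log2

def T(n):
    """T(2) = 2; T(n) = 2*T(n/2) + n, defined for n an exact power of 2."""
    return 2 if n == 2 else 2 * T(n // 2) + n

# T(n) should equal n lg n for n = 2, 4, 8, ..., 1024.
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * log2(n)
```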

2.3-4. We can express insertion sort as a recursive procedure as follows. In order to sort `A[1..n]`, we recursively sort `A[1..n-1]` and then insert `A[n]` into the sorted array `A[1..n-1]`. Write a recurrence for the worst-case running time of this recursive version of insertion sort.

```
def insert(A, k):
    i = k
    while i > 0 and A[i] < A[i-1]:
        A[i], A[i-1] = A[i-1], A[i]
        i -= 1

def recursive_insertion_sort(A, k):
    if k > 0:
        recursive_insertion_sort(A, k-1)
        insert(A, k-1)
```

```
>>> A = [3, 41, 52, 26, 38, 57, 9, 49]
>>> recursive_insertion_sort(A, len(A))
>>> A
[3, 9, 26, 38, 41, 49, 52, 57]
```

```
>>> from random import choice
>>> for _ in range(5):
...     # generate a list of 10 random integers between 0 and 99
...     B = [choice(range(100)) for _ in range(10)]
...     print('before sort: {}'.format(B))
...     recursive_insertion_sort(B, len(B))
...     print('after sort: {}'.format(B))
...     print('')
before sort: [89, 32, 83, 65, 81, 5, 24, 72, 48, 8]
after sort: [5, 8, 24, 32, 48, 65, 72, 81, 83, 89]
before sort: [67, 16, 47, 32, 53, 65, 47, 49, 35, 89]
after sort: [16, 32, 35, 47, 47, 49, 53, 65, 67, 89]
before sort: [94, 71, 19, 60, 19, 58, 63, 13, 13, 99]
after sort: [13, 13, 19, 19, 58, 60, 63, 71, 94, 99]
before sort: [74, 8, 41, 26, 46, 7, 53, 83, 93, 57]
after sort: [7, 8, 26, 41, 46, 53, 57, 74, 83, 93]
before sort: [88, 7, 33, 95, 95, 63, 67, 36, 80, 56]
after sort: [7, 33, 36, 56, 63, 67, 80, 88, 95, 95]
```

Observe first that the time complexity of `insert(A, k)` is $O(k)$, as the `while` loop could very well iterate all the way from $i = k$ down to $i = 1$. The worst-case running time of `recursive_insertion_sort` is therefore given by the recurrence

$$T(n) = T(n-1) + \Theta(n),$$

whose solution is $T(n) = \Theta(n^2)$.

2.3-5. Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence $A$ is sorted, we can check the midpoint of the sequence against $v$ and eliminate half of the sequence from further consideration. The **binary search** algorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is $\Theta(\lg n)$.

```
from math import floor

def binary_search(A, p, q, v):
    if p > q:
        return -1
    if p == q:
        return p if A[p] == v else -1  # -1 if v is not found
    mid = p + floor((q - p) / 2)
    if A[mid] == v:
        return mid
    elif A[mid] > v:
        return binary_search(A, p, mid-1, v)
    else:  # A[mid] < v
        return binary_search(A, mid+1, q, v)
```

```
>>> A = [0, 2, 4, 6, 8, 10, 12, 14]
>>> for v in range(0, 16):
...     print(binary_search(A, 0, len(A)-1, v))
0
-1
1
-1
2
-1
3
-1
4
-1
5
-1
6
-1
7
-1
```

Let $T(n)$ be the worst-case running time of `binary_search` on a list of length $n$. Since the case $n = 1$ is checked directly by the second `if` clause, we have $T(1) = \Theta(1)$. Otherwise, we have $T(n) = T(n/2) + \Theta(1)$. Since unfolding the recurrence reaches the base case after $\lg n$ halvings, each contributing $\Theta(1)$ work, we conclude that

$$T(n) = \Theta(\lg n).$$
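The halving behavior can be observed directly (a small helper written for this note, not part of the exercise):

```python
def halvings(n):
    """Number of times a subarray of length n can be halved before reaching length 1."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

# Doubling the input length adds exactly one step: Theta(lg n) growth.
```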

2.3-6. Observe that the `while` loop of lines 5-7 of the `insertion_sort` procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray `A[1..j-1]`. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to $\Theta(n \lg n)$?

Binary search does not improve the time complexity of insertion sort. The insertion operation, even without the search operation, takes linear time, as terms must be shifted to make room for the term to be inserted.
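A sketch of "binary insertion sort" makes the point (using Python's `bisect` module, an implementation convenience not in the original): the search drops to $O(\lg n)$ comparisons per element, but the block of shifts remains $O(n)$ in the worst case, so the total stays $\Theta(n^2)$.

```python
from bisect import bisect_right

def binary_insertion_sort(A):
    for i in range(1, len(A)):
        x = A[i]
        pos = bisect_right(A, x, 0, i)  # O(lg i) comparisons to find the slot...
        A[pos + 1:i + 1] = A[pos:i]     # ...but O(i) work to shift elements over
        A[pos] = x
    return A
```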

2.3-7. Describe a $\Theta(n \lg n)$-time algorithm that, given a set $S$ of $n$ integers and another integer $x$, determines whether or not there exist two elements in $S$ whose sum is exactly $x$.

We first perform merge sort on $S$, which takes $\Theta(n \lg n)$ time. Now, for each element $s$ of $S$, we search for $x - s$ in $S$ using binary search. This takes $O(\lg n)$ time for each $s$. Since there are $n$ elements of $S$, the search portion takes $O(n \lg n)$ time. It follows that the entire algorithm takes $\Theta(n \lg n)$ time.

```
def two_sum(S, x):
    # uses merge_sort from Exercise 2.3-2 and binary_search from Exercise 2.3-5
    merge_sort(S, 0, len(S))
    for i in range(len(S)):
        j = binary_search(S, 0, len(S) - 1, x - S[i])
        if j > -1 and j != i:  # j != i: the two elements must be distinct
            return True
    return False
```

```
>>> two_sum([3, 41, 52, 26, 38, 57, 9, 49], 44)
True
>>> two_sum([3, 41, 52, 26, 38, 57, 9, 49], 55)
True
>>> two_sum([3, 41, 52, 26, 38, 57, 9, 49], 95)
True
>>> two_sum([3, 41, 52, 26, 38, 57, 9, 49], -32)
False
```

## Problems

### 2-1. Insertion sort on small arrays in merge sort

Although merge sort runs in $\Theta(n \lg n)$ worst-case time and insertion sort runs in $\Theta(n^2)$ worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to **coarsen** the leaves of the recursion by using insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which $n/k$ sublists of length $k$ are sorted using insertion sort and then merged using the standard merging mechanism, where $k$ is a value to be determined.

a. Show that insertion sort can sort the $n/k$ sublists, each of length $k$, in $\Theta(nk)$ worst-case time.

b. Show how to merge the sublists in $\Theta(n \lg(n/k))$ worst-case time.

c. Given that the modified algorithm runs in $\Theta(nk + n \lg(n/k))$ worst-case time, what is the largest value of $k$ as a function of $n$ for which the modified algorithm has the same running time as standard merge sort, in terms of $\Theta$-notation?

d. How should we choose $k$ in practice?

#### Problem 2-1a

We recall that insertion sort on a list of length $k$ takes $\Theta(k^2)$ worst-case time. Since there are $n/k$ sublists, the entire procedure takes

$$\Theta\left(\frac{n}{k} \cdot k^2\right) = \Theta(nk)$$

worst-case time.

#### Problem 2-1b

Recall that merging two sorted lists whose combined length is $l$ takes $\Theta(l)$ time. We merge the $n/k$ sublists of length $k$ two at a time, so that we end up with $n/2k$ sublists of length $2k$. Repeating the process until we end up with one list of length $n$ takes $\lg(n/k)$ rounds of merging, and each round touches all $n$ elements, costing $\Theta(n)$. The merging therefore takes $\Theta(n \lg(n/k))$ worst-case time.

#### Problem 2-1c

Since $\lg(n/k) = \lg n - \lg k$, we have that

$$\Theta(nk + n \lg(n/k)) = \Theta(nk + n \lg n - n \lg k).$$

It thus suffices to pick $k$ such that

$$nk = O(n \lg n).$$

Since $n \lg(n/k) \leq n \lg n$ always, we see that any $k$ with $\Theta$-estimate of $\Theta(\lg n)$ should give us the desired time complexity.

To verify this, we set $k = \lg n$ and observe that

$$\Theta(nk + n \lg(n/k)) = \Theta(n \lg n + n \lg n - n \lg \lg n) = \Theta(n \lg n),$$

as desired.

We remark that not all choices of $k = \Theta(\lg n)$ are valid, as $k$ must not be larger than $n$. In fact, we must have

$$k \leq n$$

for all sufficiently large $n$. This can be done by, say, choosing a positive integer $c$ and setting

$$k = \min(n, \lceil c \lg n \rceil).$$

#### Problem 2-1d

We experiment with a number of input arrays to find a positive integer $K$ such that input arrays of length $K$ result in similar running times for insertion sort and merge sort. Choosing

$$k = K,$$

we ensure that any input array of length less than $K$ is sorted using insertion sort, and that longer arrays are sorted using the hybrid sort with insertion sort running on subarrays of reasonable length.
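The hybrid itself can be sketched as follows (the helper names and the default cutoff `K=16` are our own choices; compare Exercise 2.3-2 for the merge):

```python
def insertion_sort_range(A, p, r):
    """Sort the slice A[p:r] in place by insertion sort."""
    for i in range(p + 1, r):
        x = A[i]
        j = i - 1
        while j >= p and A[j] > x:
            A[j + 1] = A[j]
            j -= 1
        A[j + 1] = x

def merge(A, p, q, r):
    """Merge the sorted slices A[p:q] and A[q:r]."""
    L, R = A[p:q], A[q:r]
    i = j = 0
    for k in range(p, r):
        if j >= len(R) or (i < len(L) and L[i] <= R[j]):
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def hybrid_merge_sort(A, p, r, K=16):
    """Coarsened merge sort: insertion sort on subproblems of length <= K."""
    if r - p <= K:
        insertion_sort_range(A, p, r)
    else:
        q = p + (r - p) // 2
        hybrid_merge_sort(A, p, q, K)
        hybrid_merge_sort(A, q, r, K)
        merge(A, p, q, r)
```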

### 2-2. Correctness of bubblesort

Bubblesort is a popular, but inefficient, sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order:

```
def bubblesort(A):
    for i in range(len(A) - 1):
        for j in range(len(A) - 1, i, -1):
            if A[j] < A[j-1]:
                A[j], A[j-1] = A[j-1], A[j]
```

a. Let $A'$ denote the output of `bubblesort(A)`. To prove that `bubblesort` is correct, we need to prove that it terminates and that

$$A'[1] \leq A'[2] \leq \cdots \leq A'[n], \qquad (2.3)$$

where $n = $ `len(A)`. In order to show that `bubblesort` actually sorts, what else do we need to prove?

The next two parts will prove inequality (2.3).

b. State precisely a loop invariant for the `for` loop in lines 2-3, and prove that this loop invariant holds. Your proof should use the structure of the loop invariant proof presented in this chapter.

c. Using the termination condition of the loop invariant proved in part (b), state a loop invariant for the `for` loop in lines 1-4 that will allow you to prove inequality (2.3). Your proof should use the structure of the loop invariant proof presented in this chapter.

d. What is the worst-case running time of bubblesort? How does it compare to the running time of insertion sort?

#### Problem 2-2a

We need to show that $A'$ is a **permutation** of $A$, i.e., the terms of $A'$ are the terms of $A$, potentially in a different order.

#### Problem 2-2b

We shall show the following loop invariant:

At the end of the iteration of the inner `for` loop with index $j$ (with $j$ running from `len(A)-1` down to `i+1`), the subarray `A[j-1:]` is permuted in such a way that $A[j-1] \leq A[k]$ for all $k \geq j-1$.

**Initialization.** The first iteration of the inner `for` loop, with index $j = $ `len(A)-1`, ensures that

$$A[j-1] \leq A[j],$$

making a swap if necessary.

**Maintenance.** Assume that we have

$$A[j] \leq A[k] \quad \text{for all } k \geq j$$

at the beginning of the iteration with index $j$. If $A[j-1] \leq A[j]$, then the loop invariant is trivially satisfied. If not, the `if` clause makes the swap, satisfying the loop invariant.

**Termination.** The inner `for` loop terminates after the iteration with index $j = i + 1$. At this point, we have

$$A[i] \leq A[k] \quad \text{for all } k \geq i,$$

and the new subarray `A[i:]` is a permutation of the old subarray.

#### Problem 2-2c

We shall show the following loop invariant:

At the end of the $i$th iteration of the outer `for` loop, the subarray `A[:i+1]` is permuted in such a way that

$$A[0] \leq A[1] \leq \cdots \leq A[i] \leq \min A[i+1:].$$

The initialization and maintenance steps are, in fact, established already as part of the termination condition of the loop invariant of the inner `for` loop. The outer `for` loop terminates with a permutation of $A$ that satisfies inequality (2.3).

#### Problem 2-2d

We let $n = $ `len(A)` for notational convenience. Taking into account the two `for` loops, we see that the time complexity of `bubblesort` is bounded below by

$$\sum_{i=0}^{n-2} (n - 1 - i) = \frac{n(n-1)}{2} = \Omega(n^2).$$

Since each iteration of the inner `for` loop adds at most a constant number of steps to the computational process, we see that the time complexity of `bubblesort` is, in fact, $\Theta(n^2)$.

`bubblesort` is worse than insertion sort, as insertion sort is adaptive, whereas `bubblesort` fails to be adaptive.

### 2-3. Correctness of Horner’s rule

The following code fragment implements Horner's rule for evaluating a polynomial

$$P(x) = \sum_{k=0}^{n} a_k x^k = a_0 + x(a_1 + x(a_2 + \cdots + x(a_{n-1} + x a_n) \cdots)),$$

given the coefficients $a_0,a_1,\ldots,a_n$ and a value for $x$:

```
def horner(coeffs, val):
    y = 0
    for i in range(len(coeffs)-1, -1, -1):
        y = coeffs[i] + val * y
    return y
```

a. In terms of $\Theta$-notation, what is the running time of this code fragment for Horner's rule?

b. Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. What is the running time of this algorithm? How does it compare to Horner's rule?

c. Consider the following loop invariant:

At the start of each iteration of the `for` loop of lines 2-3,

$$y = \sum_{k=0}^{n-(i+1)} \text{coeffs}[k+i+1] \cdot \text{val}^k.$$

Interpret a summation with no terms as equaling 0. Following the structure of the loop invariant proof presented in this chapter, use this loop invariant to show that, at termination, $y = \sum_{k=0}^{n} \text{coeffs}[k] \cdot \text{val}^k$.

d. Conclude by arguing that the given code fragment correctly evaluates a polynomial characterized by the coefficients $a_0, a_1, \ldots, a_n$.

#### Problem 2-3a

Each iteration of the `for` loop incurs constant time, and so the running time of `horner` is $\Theta(n)$.

#### Problem 2-3b

A naive implementation is as follows:

```
def polynomial_multiplication(coeffs, val):
    y = 0
    for i in range(len(coeffs)):
        term = coeffs[i]
        for j in range(i):
            term *= val
        y += term
    return y
```

```
>>> polynomial_multiplication([1, 2, 3, 4, 5, 6], 2)
321
```

Each iteration of the inner `for` loop incurs constant time, and so the running time of `polynomial_multiplication` is

$$\Theta\left(\sum_{i=0}^{n} i\right) = \Theta(n^2),$$

which is slower than `horner`.

```
>>> %%timeit
... horner([1, 2, 3, 4, 5, 6], 2)
651 ns ± 3.66 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```

```
>>> %%timeit
... polynomial_multiplication([1, 2, 3, 4, 5, 6], 2)
2.02 µs ± 7.73 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```
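As a correctness cross-check (with arbitrarily chosen coefficients), the two evaluation strategies should agree:

```python
def horner(coeffs, val):
    y = 0
    for i in range(len(coeffs) - 1, -1, -1):
        y = coeffs[i] + val * y
    return y

def naive_eval(coeffs, val):
    # computes each term coeffs[i] * val**i from scratch
    return sum(c * val ** i for i, c in enumerate(coeffs))

assert horner([1, 2, 3, 4, 5, 6], 2) == naive_eval([1, 2, 3, 4, 5, 6], 2) == 321
```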

#### Problem 2-3c

The initialization step is the th iteration, at the beginning of which we have , as the formula indicates.

As for the maintenance step, we assume that

at the beginning of the th iteration of the `for`

loop. After the `for`

loop, we have

maintaining the loop invariant at the beginning of the th iteration.

The loop terminates after the th iteration, yielding

as desired.

#### Problem 2-3d

See Problem 2-3c: at termination, $y = \sum_{k=0}^{n} a_k x^k$, which is precisely the value of the polynomial characterized by the coefficients $a_0, a_1, \ldots, a_n$.

### 2-4. Inversions

Let `A[1..n]` be an array of $n$ distinct numbers. If $i < j$ and $A[i] > A[j]$, then the pair $(i, j)$ is called an **inversion** of $A$.

a. List the five inversions of the array $\langle 2, 3, 8, 6, 1 \rangle$.

b. What array with elements from the set $\{1, 2, \ldots, n\}$ has the most inversions? How many does it have?

c. What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer.

d. Give an algorithm that determines the number of inversions in any permutation on $n$ elements in $\Theta(n \lg n)$ worst-case time. (*Hint:* Modify merge sort.)

#### Problem 2-4a

$(1, 5)$, $(2, 5)$, $(3, 4)$, $(3, 5)$, $(4, 5)$.

#### Problem 2-4b

If every pair $(i, j)$ such that $i < j$ is an inversion, then the array has the largest possible number of inversions, namely

$$\binom{n}{2} = \frac{n(n-1)}{2}.$$

The reverse-sorted array $\langle n, n-1, \ldots, 2, 1 \rangle$ satisfies this condition.

#### Problem 2-4c

See **Properties of Insertion Sort**.

#### Problem 2-4d

We observe that the number of inversions in an array $A$ is the sum of the following three quantities:

- $I_L$, the number of inversions in $L$, the left half of $A$;
- $I_R$, the number of inversions in $R$, the right half of $A$;
- $I_M$, the number of inversions $(i, j)$ with $A[i]$ in the left half and $A[j]$ in the right half, counted during the merge of $L$ and $R$.

This yields a simple recursive algorithm for computing the number of inversions in $A$. So long as we can compute $I_M$ in linear time with respect to the length of $A$ (hence the lengths of $L$ and $R$), the time complexity should be the same as that of merge sort, $\Theta(n \lg n)$. We can do this by keeping a counter during the merge process, incrementing it by the number of remaining elements of $L$ every time an element of $R$ is added to the merged list before we exhaust $L$. This only incurs constant time per each element of $L$ and $R$, and so it can be done in linear time.

```
from math import floor

def inversion_counter(A):
    return inversion_mergesort(A, 0, len(A), 0)

def inversion_mergesort(A, p, r, counter):
    if p + 1 < r:
        q = p + floor((r - p) / 2)
        counter = inversion_mergesort(A, p, q, counter)
        counter = inversion_mergesort(A, q, r, counter)
        counter = inversion_merge(A, p, q, r, counter)
    return counter

def inversion_merge(A, p, q, r, counter):
    L, R = A[p:q], A[q:r]
    n, m = q - p, r - q
    i, j, k = 0, 0, p
    while i < n and j < m:
        if L[i] < R[j]:
            A[k] = L[i]
            i += 1
        else:
            counter += n - i  # R[j] now precedes all n-i remaining elements of L
            A[k] = R[j]
            j += 1
        k += 1
    while i < n or j < m:
        if i < n:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
        k += 1
    return counter
```

```
>>> inversion_counter([5, 4, 3, 2, 1])
10
>>> (5 * (5-1))/2
10.0
>>> inversion_counter([10, 9, 8, 7, 6, 5, 4, 3, 2, 1])
45
>>> (10 * (10-1))/2
45.0
```