# CLRS Chapter 2. Getting Started


## 2.1. Insertion Sort

```python
def insertion_sort(A):
    for i in range(1, len(A)):
        x = A[i]
        j = i - 1
        while (j >= 0) and (A[j] > x):
            A[j+1] = A[j]
            j = j - 1
        A[j+1] = x
    return A
```


### Properties of Insertion Sort

Insertion sort is adaptive, stable, in-place, and online.

Adaptive. We say that a sorting algorithm is adaptive if the algorithm can make use of the order already present in the input data to speed up the sorting process. To see that insertion sort is sensitive to the order present in the input data, we define an inversion (see Problem 2-4) in a list $A$ to be a pair $(i,j)$ of indices such that $i < j$ but $A[i] > A[j]$. We can think of the total number of inversions as a measure of disorder in $A$. If $A$ is sorted, then there are no inversions in $A$. If the terms of $A$ are arranged in reverse order, then there are $\frac{n(n-1)}{2}$ inversions in $A$, where $n$ is the length of the list $A$.
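The inversion count can be computed directly from the definition; here is a quick brute-force sketch (the name `count_inversions_naive` is ours, not from the text — Problem 2-4d below gives the efficient version):

```python
def count_inversions_naive(A):
    """Count pairs (i, j) with i < j and A[i] > A[j] by direct enumeration."""
    return sum(1 for i in range(len(A))
                 for j in range(i + 1, len(A))
                 if A[i] > A[j])

assert count_inversions_naive([1, 2, 3, 4]) == 0  # sorted: no inversions
assert count_inversions_naive([4, 3, 2, 1]) == 6  # reversed: n(n-1)/2 = 6
```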

Note that, during the $i$th iteration of the for loop, the while conditional (j >= 0) and (A[j] > x) is satisfied if and only if $(j,i)$ is an inversion. Therefore, the number of steps of computation insertion_sort() performs is

$$\Theta(n + \operatorname{Inv}(A)),$$

where $\operatorname{Inv}(A)$ denotes the total number of inversions in the list $A$.

See Estivill-Castro/Wood, “A survey of adaptive sorting algorithms” for details on adaptive algorithms.

Stable. We say that a sorting algorithm is stable if every pair of terms with the same value remains in the same order after the sorting process terminates. Insertion sort is stable, as each term is moved to the left without crossing any term with the same value.

Stability is a desirable property when a data set needs to be sorted in accordance with more than one ordering. For example, to sort a list of names alphabetically by last name and then by first name, we can run a stable sort on first names and then run another stable sort on last names: the second pass cannot mix up the first-name order within each group of equal last names. Running an unstable sort twice might mix up the order.
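We can illustrate the two-pass trick with Python's built-in `sorted`, which is guaranteed stable: to sort primarily by last name and secondarily by first name, run the stable sort on the secondary key first (the sample data is ours):

```python
names = [("Ann", "Smith"), ("Bob", "Jones"), ("Amy", "Smith"), ("Cal", "Jones")]

# Pass 1: stable sort on the secondary key (first name).
by_first = sorted(names, key=lambda name: name[0])
# Pass 2: stable sort on the primary key (last name);
# ties in last name keep the first-name order from pass 1.
by_last_then_first = sorted(by_first, key=lambda name: name[1])
# by_last_then_first == [("Bob", "Jones"), ("Cal", "Jones"),
#                        ("Amy", "Smith"), ("Ann", "Smith")]
```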

In-place. We say that an algorithm is in-place if it can be executed using minimal additional storage beyond that needed to store the input data. The definition of minimal is context-dependent, but an algorithm with space complexity $O(1)$ should always be considered in-place. Insertion sort is in-place, as the transformation takes place within the array A, with the aid of only two auxiliary variables x and j.

Online. We say that an algorithm is online if the algorithm does not require all information about the input before it executes. In other words, an online algorithm can process the input data serially, i.e., as it comes in.

Why is insertion sort an online algorithm? During the $k$th iteration of the for loop, the loop reads and modifies only the sublist A[:k+1]. Therefore, to execute insertion_sort(), it suffices to know one extra term of A at each iteration of the for loop.
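A sketch of this online behavior, using the standard library's `bisect.insort` as the "insert one new term" step (this is our illustration, not the chapter's code):

```python
import bisect

stream = iter([5, 1, 4, 2, 3])  # terms arriving one at a time
prefix = []                     # everything seen so far, kept sorted
for value in stream:
    bisect.insort(prefix, value)
    # invariant: `prefix` is sorted after every arrival,
    # without any knowledge of the values yet to come
```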

### Loop Invariants as a Form of Mathematical Induction

A loop-invariants proof of correctness is just a fancy way of writing an induction proof. The initialization step corresponds to the base case of mathematical induction. The maintenance step corresponds to the induction step of mathematical induction. The termination step doesn’t correspond to a concrete step of an induction proof but usually follows easily from the induction step and the conditional that drives the loop.

### Exercises

2.1-1. Using Figure 2.2 as a model, illustrate the operation of insertion_sort on the array $A = \langle 31, 41, 59, 26, 41, 58 \rangle$.

```python
>>> A = [31, 41, 59, 26, 41, 58]
>>> for i in range(1, len(A)):
...     x = A[i]
...     j = i - 1
...     while (j >= 0) and (A[j] > x):
...         A[j+1] = A[j]
...         j = j - 1
...         print(A)
...     A[j+1] = x
...     print(A)
[31, 41, 59, 26, 41, 58]
[31, 41, 59, 26, 41, 58]
[31, 41, 59, 59, 41, 58]
[31, 41, 41, 59, 41, 58]
[31, 31, 41, 59, 41, 58]
[26, 31, 41, 59, 41, 58]
[26, 31, 41, 59, 59, 58]
[26, 31, 41, 41, 59, 58]
[26, 31, 41, 41, 59, 59]
[26, 31, 41, 41, 58, 59]
```


$\square$

2.1-2. Rewrite the insertion_sort procedure to sort into nonincreasing instead of nondecreasing order.

```python
def insertion_sort_reverse(A):
    for i in range(1, len(A)):
        x = A[i]
        j = i - 1
        while (j >= 0) and (A[j] < x):
            A[j+1] = A[j]
            j = j - 1
        A[j+1] = x
    return A
```

```python
>>> insertion_sort_reverse([1, 2, 3])
[3, 2, 1]
>>> insertion_sort_reverse([2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 1, 8, 1])
[8, 7, 6, 5, 4, 3, 2, 1, 1, 1, 1, 1, 1, 1]
```


$\square$

2.1-3. Consider the searching problem:

INPUT: A sequence of $n$ numbers $A = \langle a_1,a_2,\ldots,a_n \rangle$ and a value $v$.

OUTPUT: An index $i$ such that $v = A[i]$ or the special value NIL if $v$ does not appear in $A$.

Write pseudocode for linear search, which scans through the sequence, looking for $v$. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

```python
def linear_search(A, v):
    for i in range(len(A)):
        if A[i] == v:
            return i
    return None
```

```python
>>> linear_search([3, 2, 5, 1], 3)
0
>>> linear_search([3, 2, 5, 1], 2)
1
>>> linear_search([3, 2, 5, 1], 5)
2
>>> linear_search([3, 2, 5, 1], 1)
3
>>> print(linear_search([3, 2, 5, 1], 0))
None
```


The loop invariant of linear_search is as follows: at the start of the $i$th iteration of the for loop, the subarray A[:i] does not contain v.

Initialization. The loop invariant is trivially true before the first iteration of the loop.

Maintenance. The loop does not mutate A, so the veracity of the loop invariant does not change during one iteration of the loop.

Termination. If A[i] == v, then the loop terminates with return i. Otherwise, the loop terminates after the entire array is scanned, at which point linear_search returns None. This is precisely what we aimed for. $\square$

2.1-4. Consider the problem of adding two $n$-bit binary integers, stored in two $n$-element arrays $A$ and $B$. The sum of the two integers should be stored in binary form in an $(n+1)$-element array $C$. State the problem formally and write pseudocode for adding the two integers.

Formally, $A$ and $B$ are arrays of length $n$ whose elements are either 0 or 1, stored least significant bit first. We are to produce an array $C$ of length $n+1$ such that $C[i]$ equals $A[i] + B[i] + M[i]$ modulo 2 for $0 \leq i \leq n-1$ and $C[n] = M[n]$, where $M$ is the array of carry bits:

$$M[0] = 0 \quad\text{and}\quad M[i+1] = \left\lfloor \frac{A[i] + B[i] + M[i]}{2} \right\rfloor.$$
We can implement this as follows:

```python
def binary_addition(A, B):
    n = len(A)  # assume that len(A) == len(B)
    C = [None for _ in range(n+1)]
    M = 0
    for i in range(n):
        C[i] = (A[i] + B[i] + M) % 2
        if A[i] + B[i] + M <= 1:
            M = 0
        else:
            M = 1
    C[n] = M
    return C
```

```python
>>> binary_addition([0, 1, 1], [1, 0, 1])
[1, 1, 0, 1]
>>> binary_addition([0, 0, 0, 1, 1, 1], [1, 1, 1, 1, 1, 1])
[1, 1, 1, 0, 1, 1, 1]
```


$\square$

## 2.2. Analyzing Algorithms

### Exercises

2.2-1. Express the function $n^3/1000 - 100n^2 - 100n + 3$ in terms of $\Theta$-notation.

We let $f(n) = n^3/1000 - 100n^2 - 100n + 3$ for notational convenience. We shall show that $f(n) = \Theta(n^3)$.

To see this, we observe first that

$$f(n) \leq \frac{n^3}{1000} + 3 \leq n^3$$

for all natural numbers $n \geq 2$ (and $f(1) < 0 < 1$). It follows that $f(n) = O(n^3)$.

It now suffices to establish that $f(n) = \Omega(n^3)$. As a preliminary result, we show that

$$\frac{n^3}{2000} - 100n^2 - 100n + 3 \geq 0 \tag{$\ast$}$$

for all sufficiently large $n$. To this end, we set $g(x) = x^3/2000 - 100x^2 - 100x + 3$ and observe that its derivative

$$g'(x) = \frac{3x^2}{2000} - 200x - 100$$

is increasing on $[\alpha, \infty)$, where

$$\alpha = \frac{200000}{3}$$

is the unique zero of $g''(x) = \frac{3x}{1000} - 200$. (The exact value of $\alpha$ is unimportant.)

Since the derivative of $g$ keeps increasing on $[\alpha,\infty)$ and is unbounded there, it is eventually positive, and so $g$ increases without bound on $[\alpha,\infty)$. It follows that $g(x) \to \infty$ as $x \to \infty$, which means we can find a real number $M$ such that $g(x) \geq 0$ whenever $x \geq M$. We now pick $N$ to be any natural number larger than $M$ and conclude that

$$g(n) = \frac{n^3}{2000} - 100n^2 - 100n + 3 \geq 0$$

whenever $n \geq N$, because $n \geq M$.

Estimate $(\ast)$ is equivalent to the statement that

$$\frac{n^3}{1000} - 100n^2 - 100n + 3 \geq \frac{n^3}{2000}$$

whenever $n \geq N$. It follows that

$$f(n) \geq \frac{1}{2000} n^3$$

for all $n \geq N$, whence $f(n) = \Omega(n^3)$, as was to be shown. $\square$

2.2-2. Consider sorting $n$ numbers stored in array $A$ by first finding the smallest element of $A$ and exchanging it with the element in $A[1]$. Then find the second smallest element of $A$, and exchange it with $A[2]$. Continue in this manner for the first $n-1$ elements of $A$. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first $n-1$ elements, rather than for all $n$ elements? Give the best-case and worst-case running times of selection sort in $\Theta$-notation.

```python
def selection_sort(A):
    for i in range(len(A) - 1):
        # minimum-finding subroutine
        min_index = i
        for j in range(i, len(A)):
            if A[j] < A[min_index]:
                min_index = j
        # swap
        A[i], A[min_index] = A[min_index], A[i]
    return A
```

```python
>>> selection_sort([3, 2, 1])
[1, 2, 3]
>>> selection_sort([2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 1, 8, 1])
[1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 5, 6, 7, 8]
```


The loop invariant of selection_sort is as follows: at the end of the $i$th iteration of the outer for loop, the subarray A[:i] is sorted and its elements are no larger than the minimum of the subarray A[i:]. It follows, in particular, that the for loop need not run for the last element of $A$, as it is trivially sorted.

Let us prove the loop invariant.

Initialization. The first iteration of the for loop finds the minimum of A and places it at A[0]. A[:1] == [A[0]] is trivially sorted, and its only element is the minimum of A, hence no larger than the minimum of A[1:].

Maintenance. The $i$th iteration of the for loop finds the minimum of A[i-1:] and places it at A[i-1]. Before the swap, the minimum of A[i-1:] is no smaller than the maximum of A[:i-1], and so the resulting subarray A[:i] is sorted. We have

$$\max A[:i] = A[i-1] = \min A[i-1:] \leq \min A[i:],$$

which, in particular, implies that the elements of A[:i] are no larger than the minimum of the subarray A[i:].

Termination. The for loop terminates after len(A)-1 iterations, with A sorted.

We let $n = \operatorname{len}(A)$ for notational convenience. The outer for loop runs $n-1$ times regardless of the input array A. During the $i$th iteration of the outer for loop, the inner for loop runs $n-i$ times regardless of what A is. It follows that the time complexity of selection_sort is bounded below by

$$\sum_{i=1}^{n-1} (n-i) = \frac{n(n-1)}{2} = \Omega(n^2).$$

The best-case running time occurs when the if clause in the inner for loop never holds, i.e., when A is already sorted. The worst-case running time is bounded by the scenario in which the if clause holds every time. Therefore, the time complexity of selection_sort is bounded above by

$$C \sum_{i=1}^{n-1} (n-i) = C \cdot \frac{n(n-1)}{2} = O(n^2)$$

for some constant $C$.

2.2-3. Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case?

For an input array A of length $n$ and an input value v to be searched, there are $n+1$ possible scenarios: either A[i] == v for some $0 \leq i \leq n-1$ or v is not in A. If A[i] == v, then the search terminates after checking $i+1$ elements. If v is not in A, then the search terminates after checking all $n$ elements. Assuming the $n+1$ scenarios are equally likely, the expected number of elements to be checked is

$$\frac{1}{n+1}\left(\sum_{i=1}^{n} i + n\right) = \frac{1}{n+1}\left(\frac{n(n+1)}{2} + n\right) = \frac{n}{2} + \frac{n}{n+1},$$

which is approximately $\frac{n}{2}$ for large enough values of $n$. If we assume that v must be in A, then the case of not finding v after searching all $n$ elements of A is eliminated. In this case, the expected number of elements to be checked is

$$\frac{1}{n}\sum_{i=1}^{n} i = \frac{n+1}{2},$$

which, once again, is approximately $\frac{n}{2}$ for large enough values of $n$.

The worst case of linear search entails, of course, searching all $n$ elements of the array. $\square$
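A quick empirical check of the roughly-$\frac{n}{2}$ figure, under the assumption that v is equally likely to be each element of the array (a simulation sketch of our own):

```python
from random import randrange

n, trials = 100, 20000
total_checked = 0
for _ in range(trials):
    A = list(range(n))
    v = randrange(n)  # v is equally likely to be any element of A
    # linear search: count how many elements are checked before finding v
    for checked, x in enumerate(A, start=1):
        if x == v:
            total_checked += checked
            break

average = total_checked / trials  # should be close to (n + 1) / 2 = 50.5
```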

2.2-4. How can we modify almost any algorithm to have a good best-case running time?

Suppose that, for each $n$, we are able to characterize explicitly the desired output of at least one input of size $n$. We build this case into the algorithm, requiring only the time needed to check whether the input data is the specified “good input”. $\square$
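For sorting, for instance, sortedness is easy to check in linear time, so almost any sorting algorithm can be given an $O(n)$ best case (a sketch of our own; the built-in `sorted` stands in for the general algorithm):

```python
def is_sorted(A):
    return all(A[i] <= A[i + 1] for i in range(len(A) - 1))

def sort_with_linear_best_case(A):
    if is_sorted(A):   # the built-in "good input": already sorted
        return A       # O(n) best case
    return sorted(A)   # otherwise fall back to the general algorithm
```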

## 2.3. Designing Algorithms


Since merge sort always runs in $\Theta(n \log n)$ time, it is not an adaptive sorting algorithm. The running time of insertion sort can be as low as $O(n)$, and so we expect insertion sort to run faster than merge sort on lists that are largely sorted.

Moreover, $O(n^2)$ sorting algorithms can run faster than merge sort on small arrays. This is because asymptotic analysis is useful only for large $n$.

What do we mean by this? Given two positive constants $C_1$ and $C_2$, we observe that

$$\frac{C_2 n^2}{C_1 n \log n} = \frac{C_2}{C_1} \cdot \frac{n}{\log n} \to \infty \quad \text{as } n \to \infty,$$

and so there exists a number $N$ such that

$$\frac{C_2 n^2}{C_1 n \log n} > 1$$

whenever $n \geq N$. Therefore,

$$C_1 n \log n < C_2 n^2$$

for all $n \geq N$. This is what we mean when we say $n^2$ asymptotically dominates $n \log n$.

On the other hand, $\frac{n}{\log n}$ equals $2$ at $n = 2$. Therefore, so long as

$$\frac{C_1}{C_2} > 2,$$

we can find a positive integer $n \geq 2$ (namely $n = 2$) such that

$$\frac{n}{\log n} < \frac{C_1}{C_2}.$$

Therefore, if $C_1 > 2C_2$, there exists a positive integer $n$ such that

$$C_2 n^2 < C_1 n \log n.$$

In other words, it is possible for an $O(n^2)$ algorithm to run faster than an $O(n \log n)$ algorithm for small $n$.
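With hypothetical constants, say $C_1 = 8$ and $C_2 = 1$, we can watch the crossover numerically:

```python
from math import log2

C1, C2 = 8, 1  # hypothetical constants, chosen for illustration

def nlogn_cost(n):
    return C1 * n * log2(n)

def quadratic_cost(n):
    return C2 * n * n

# the quadratic algorithm is cheaper for small n...
assert quadratic_cost(16) < nlogn_cost(16)    # 256 < 512
# ...but loses decisively once n is large
assert nlogn_cost(1024) < quadratic_cost(1024)  # 81920 < 1048576
```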

### Exercises

2.3-1. Using Figure 2.4 as a model, illustrate the operation of merge sort on the array <3, 41, 52, 26, 38, 57, 9, 49>.

[3] [41] [52] [26] [38] [57] [9] [49]

[3 41] [26 52] [38 57] [9 49]

[3 26 41 52] [9 38 49 57]

[3 9 26 38 41 49 52 57] $\square$

2.3-2. Rewrite the merge procedure so that it does not use sentinels, instead stopping once either array L or R has all its elements copied back to A and then copying the remainder of the other array back into A.

```python
import math

def merge(A, p, q, r):
    L, R = A[p:q], A[q:r]
    n, m = q-p, r-q

    i, j, k = 0, 0, p

    while i < n and j < m:
        if L[i] < R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
        k += 1

    # copy the remainder of whichever list is not yet exhausted
    while i < n or j < m:
        if i < n:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
        k += 1

def merge_sort(A, p, r):
    if p+1 < r:
        q = p + math.floor((r-p)/2)
        merge_sort(A, p, q)
        merge_sort(A, q, r)
        merge(A, p, q, r)
```

```python
>>> A = [3, 41, 52, 26, 38, 57, 9, 49]
>>> merge_sort(A, 0, len(A))
>>> A
[3, 9, 26, 38, 41, 49, 52, 57]
```

```python
>>> from random import choice
>>> for _ in range(5):
...     # generate a list of 10 random integers between 0 and 99
...     B = [choice(range(100)) for _ in range(10)]
...     print('before sort: {}'.format(B))
...     merge_sort(B, 0, len(B))
...     print('after sort: {}'.format(B))
...     print('')
before sort: [14, 72, 7, 44, 5, 15, 56, 42, 48, 41]
after sort: [5, 7, 14, 15, 41, 42, 44, 48, 56, 72]

before sort: [75, 13, 23, 29, 90, 33, 0, 81, 25, 1]
after sort: [0, 1, 13, 23, 25, 29, 33, 75, 81, 90]

before sort: [89, 1, 14, 36, 58, 60, 57, 82, 65, 71]
after sort: [1, 14, 36, 57, 58, 60, 65, 71, 82, 89]

before sort: [40, 29, 96, 57, 33, 37, 81, 24, 23, 56]
after sort: [23, 24, 29, 33, 37, 40, 56, 57, 81, 96]

before sort: [98, 54, 65, 70, 77, 89, 17, 37, 69, 89]
after sort: [17, 37, 54, 65, 69, 70, 77, 89, 89, 98]
```


$\square$

2.3-3. Use mathematical induction to show that when $n$ is an exact power of $2$, the solution of the recurrence

$$T(n) = \begin{cases} 2 & \text{if } n = 2, \\ 2T(n/2) + n & \text{if } n = 2^k \text{ for } k > 1 \end{cases}$$

is $T(n) = n \log n$.

If $n = 2$, then $T(2) = 2 = 2 \log 2$, and the claim holds trivially. Suppose that the claim holds for $n = 2^k$ and observe that

$$T(2^{k+1}) = 2T(2^k) + 2^{k+1} = 2 \cdot 2^k \log 2^k + 2^{k+1} = 2^{k+1}(k+1) = 2^{k+1} \log 2^{k+1},$$

as was to be shown. It follows from mathematical induction that the claim holds for all $n = 2^k$ with $k \in \mathbb{N}$. $\square$
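We can also sanity-check the closed form against the recurrence directly (a verification snippet of our own):

```python
from math import log2

def T(n):
    # T(2) = 2, T(n) = 2*T(n/2) + n for n an exact power of 2
    return 2 if n == 2 else 2 * T(n // 2) + n

for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * log2(n)  # T(n) = n log n, exactly, for powers of 2
```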

2.3-4. We can express insertion sort as a recursive procedure as follows. In order to sort $A[1..n]$, we recursively sort $A[1..n-1]$ and then insert $A[n]$ into the sorted array $A[1..n-1]$. Write a recurrence for the worst-case running time of this recursive version of insertion sort.

```python
def insert(A, k):
    i = k
    while i > 0 and A[i] < A[i-1]:
        A[i], A[i-1] = A[i-1], A[i]
        i -= 1

def recursive_insertion_sort(A, k):
    if k > 0:
        recursive_insertion_sort(A, k-1)
        insert(A, k-1)
```

```python
>>> A = [3, 41, 52, 26, 38, 57, 9, 49]
>>> recursive_insertion_sort(A, len(A))
>>> A
[3, 9, 26, 38, 41, 49, 52, 57]
```

```python
>>> from random import choice
>>> for _ in range(5):
...     # generate a list of 10 random integers between 0 and 99
...     B = [choice(range(100)) for _ in range(10)]
...     print('before sort: {}'.format(B))
...     recursive_insertion_sort(B, len(B))
...     print('after sort: {}'.format(B))
...     print('')
before sort: [89, 32, 83, 65, 81, 5, 24, 72, 48, 8]
after sort: [5, 8, 24, 32, 48, 65, 72, 81, 83, 89]

before sort: [67, 16, 47, 32, 53, 65, 47, 49, 35, 89]
after sort: [16, 32, 35, 47, 47, 49, 53, 65, 67, 89]

before sort: [94, 71, 19, 60, 19, 58, 63, 13, 13, 99]
after sort: [13, 13, 19, 19, 58, 60, 63, 71, 94, 99]

before sort: [74, 8, 41, 26, 46, 7, 53, 83, 93, 57]
after sort: [7, 8, 26, 41, 46, 53, 57, 74, 83, 93]

before sort: [88, 7, 33, 95, 95, 63, 67, 36, 80, 56]
after sort: [7, 33, 36, 56, 63, 67, 80, 88, 95, 95]
```


Observe first that the time complexity of insert(A, k) is $O(k)$, as the while loop could very well iterate all the way from $i = k$ down to $i = 1$. The worst-case running time $T(n)$ of recursive_insertion_sort is therefore given by the recurrence

$$T(n) = T(n-1) + O(n),$$

whose solution is $T(n) = O(n^2)$. $\square$

2.3-5. Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence $A$ is sorted, we can check the midpoint of the sequence against $v$ and eliminate half of the sequence from further consideration. The binary search algorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is $\Theta(\log n)$.

```python
from math import floor

def binary_search(A, p, q, v):
    if p > q:
        return -1

    if p == q:
        return p if A[p] == v else -1  # -1 if v is not found

    mid = p + floor((q-p)/2)
    if A[mid] == v:
        return mid
    else:
        if A[mid] > v:
            return binary_search(A, p, mid-1, v)
        else:  # A[mid] < v
            return binary_search(A, mid+1, q, v)
```

```python
>>> A = [0, 2, 4, 6, 8, 10, 12, 14]
>>> for v in range(0, 16):
...     print(binary_search(A, 0, len(A)-1, v))
0
-1
1
-1
2
-1
3
-1
4
-1
5
-1
6
-1
7
-1
```


Let $T(n)$ be the worst-case running time of binary_search on a list of length $n$. Since the $n = 1$ case is checked directly by the second if clause, we have $T(1) = \Theta(1)$. Otherwise, we have $T(n) = T(\lfloor n/2 \rfloor) + \Theta(1)$. Since halving $n$ a total of $\lfloor \log n \rfloor$ times brings it down to $1$, unrolling the recurrence yields

$$T(n) = \Theta(1) \cdot (\lfloor \log n \rfloor + 1) = \Theta(\log n). \qquad \square$$

2.3-6. Observe that the while loop of lines 5-7 of insertion_sort procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray A[1..j-1]. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to $\Theta(n \log n)$?

Binary search does not improve the time complexity of insertion sort. The insertion operation, even without the search operation, takes linear time, as terms must be shifted to make room for the term to be inserted.
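A sketch of this "binary insertion sort," using the standard library's `bisect` (our illustration), makes the point concrete: the search drops to $O(\log i)$ comparisons, but making room for the inserted element still shifts up to $i$ elements, so the worst case remains $\Theta(n^2)$.

```python
import bisect

def binary_insertion_sort(A):
    for i in range(1, len(A)):
        x = A[i]
        # O(log i) comparisons to find where x belongs in the sorted prefix
        k = bisect.bisect_right(A, x, 0, i)
        # ...but making room for x still shifts i - k elements: O(i) moves
        A[k+1:i+1] = A[k:i]
        A[k] = x
    return A
```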

2.3-7. Describe a $\Theta(n \log n)$-time algorithm that, given a set $S$ of $n$ integers and another integer $x$, determines whether or not there exist two elements in $S$ whose sum is exactly $x$.

We first perform merge sort on $S$, which takes $\Theta(n \log n)$ time. Now, for each element $v$ of $S$, we search for $x - v$ in $S$ using binary search, taking care not to match $v$ with itself when $x - v = v$. This takes $\Theta(\log n)$ time for each $v$. Since there are $n$ elements of $S$, the search portion takes $\Theta(n \log n)$ time. It follows that the entire algorithm takes $\Theta(n \log n)$ time.

```python
def two_sum(S, x):
    merge_sort(S, 0, len(S))
    for i in range(len(S)):
        j = binary_search(S, 0, len(S)-1, x - S[i])
        if j > -1 and j != i:  # j != i: an element may not pair with itself
            return True
    return False
```

```python
>>> two_sum([3, 41, 52, 26, 38, 57, 9, 49], 44)
True
>>> two_sum([3, 41, 52, 26, 38, 57, 9, 49], 55)
True
>>> two_sum([3, 41, 52, 26, 38, 57, 9, 49], 95)
True
>>> two_sum([3, 41, 52, 26, 38, 57, 9, 49], -32)
False
```


## Problems

### 2-1. Insertion sort on small arrays in merge sort

Although merge sort runs in $\Theta(n \log n)$ worst-case time and insertion sort runs in $\Theta(n^2)$ worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to coarsen the leaves of the recursion by using insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which $n/k$ sublists of length $k$ are sorted using insertion sort and then merged using the standard merging mechanism, where $k$ is a value to be determined.

a. Show that insertion sort can sort the $n/k$ sublists, each of length $k$, in $\Theta(nk)$ worst-case time.

b. Show how to merge the sublists in $\Theta(n \log(n/k))$ worst-case time.

c. Given that the modified algorithm runs in $\Theta(nk + n \log(n/k))$ worst-case time, what is the largest value of $k$ as a function of $n$ for which the modified algorithm has the same running time as standard merge sort, in terms of $\Theta$-notation?

d. How should we choose $k$ in practice?

#### Problem 2-1a

We recall that insertion sort on a list of length $k$ takes $\Theta(k^2)$ worst-case time. Since there are $n/k$ sublists, the entire procedure takes

$$\frac{n}{k} \cdot \Theta(k^2) = \Theta(nk)$$

time. $\square$

#### Problem 2-1b

Recall that merging two lists of length $k$ takes $\Theta(k)$ time. We merge the $n/k$ sublists of length $k$ two at a time, so that we end up with $n/2k$ sublists of length $2k$. Each round of merging touches all $n$ elements and hence takes $\Theta(n)$ time, and repeating the process until we end up with one list of length $n$ takes $\log(n/k)$ rounds, for a total of

$$\Theta(n \log(n/k))$$

time. $\square$
#### Problem 2-1c

Since $\log(n/k) = \log n - \log k$, we have that

$$\Theta(nk + n \log(n/k)) = \Theta(nk + n \log n - n \log k).$$

It thus suffices to pick $k$ such that

$$nk - n \log k = O(n \log n), \quad\text{i.e.,}\quad k - \log k = O(\log n).$$

Since $k - \log k = \Theta(k)$, we see that any $k$ with a $\Theta$-estimate of $\Theta(\log n)$ should give us the desired time complexity.

To verify this, we set $k = C \log n$ and observe that

$$\Theta(nk + n \log(n/k)) = \Theta(Cn \log n + n \log n - n \log C - n \log \log n) = \Theta(n \log n),$$

as desired.

We remark that not all choices of $C$ are valid, as $k$ must not be larger than $n$. In fact, we must have

$$C \log n \leq n$$

for all sufficiently large $n$. This can be done by, say, choosing a positive integer $N \geq 3$ and setting

$$C = \frac{N}{\log N},$$

so that $C \log n \leq n$ whenever $n \geq N$, as $\frac{n}{\log n}$ is increasing on $[3, \infty)$.
#### Problem 2-1d

We experiment with a number of input arrays to find a positive integer $N$ such that input arrays of length $N$ result in similar runtimes for insertion sort and merge sort. Choosing

$$k = \min(n, N),$$

we ensure that any input array of length less than $N$ is sorted using insertion sort, and that longer arrays are sorted using the hybrid sort with insertion sort running on subarrays of reasonable length. $\square$
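A sketch of the hybrid sort, with a hypothetical cutoff `CUTOFF` standing in for the chapter's $k$ (the cutoff value and helper names are ours):

```python
CUTOFF = 32  # hypothetical value of k, to be tuned experimentally

def insertion_sort_slice(A, p, r):
    # insertion sort on the subarray A[p:r] (the coarsened leaf)
    for i in range(p + 1, r):
        x, j = A[i], i - 1
        while j >= p and A[j] > x:
            A[j+1] = A[j]
            j -= 1
        A[j+1] = x

def hybrid_merge_sort(A, p, r):
    if r - p <= CUTOFF:
        insertion_sort_slice(A, p, r)
        return
    q = p + (r - p) // 2
    hybrid_merge_sort(A, p, q)
    hybrid_merge_sort(A, q, r)
    # standard linear-time merge of A[p:q] and A[q:r]
    L, R = A[p:q], A[q:r]
    i = j = 0
    for k in range(p, r):
        if j >= len(R) or (i < len(L) and L[i] <= R[j]):
            A[k] = L[i]; i += 1
        else:
            A[k] = R[j]; j += 1
```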

### 2-2. Correctness of bubblesort

Bubblesort is a popular, but inefficient, sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order:

```python
def bubblesort(A):
    for i in range(len(A) - 1):
        for j in range(len(A) - 1, i, -1):
            if A[j] < A[j-1]:
                A[j], A[j-1] = A[j-1], A[j]
```


a. Let $A'$ denote the output of bubblesort(A). To prove that bubblesort is correct, we need to prove that it terminates and that

$$A'[1] \leq A'[2] \leq \cdots \leq A'[n], \tag{2.3}$$

where $n = A.\operatorname{length}$. In order to show that bubblesort actually sorts, what else do we need to prove?

The next two parts will prove inequality (2.3).

b. State precisely a loop invariant for the for loop in lines 2-3, and prove that this loop invariant holds. Your proof should use the structure of the loop invariant proof presented in this chapter.

c. Using the termination condition of the loop invariant proved in part (b), state a loop invariant for the for loop in lines 1-4 that will allow you to prove inequality (2.3). Your proof should use the structure of the loop invariant proof presented in this chapter.

d. What is the worst-case running time of bubblesort? How does it compare to the running time of insertion sort?

#### Problem 2-2a

We need to show that $A'$ is a permutation of $A$, i.e., the terms of $A'$ are the terms of $A$, potentially in a different order. $\square$

#### Problem 2-2b

We shall show the following loop invariant:

At the end of the $k$th iteration of the inner for loop (with $k$ running from len(A)-1 down to $i+1$), the subarray $A[k-1:]$ is permuted in such a way that

$$A[k-1] \leq A[l]$$

for all $l \geq k$.

Initialization. The (len(A)-1)th iteration of the inner for loop ensures that

$$A[\operatorname{len}(A)-2] \leq A[\operatorname{len}(A)-1],$$

making a swap if necessary.

Maintenance. Assume that we have

$$A[k] \leq A[l] \quad\text{for all } l \geq k+1$$

at the beginning of the $k$th iteration of the inner for loop. If $A[k-1] \leq A[k]$, then the loop invariant is trivially satisfied. If not, the if clause makes the swap, satisfying the loop invariant.

Termination. The inner for loop terminates after the $(i+1)$th iteration. At this point, we have

$$A[i] \leq A[l] \quad\text{for all } l \geq i+1,$$

and the new subarray is a permutation of the old subarray. $\square$

#### Problem 2-2c

We shall show the following loop invariant:

At the end of the $i$th iteration of the outer for loop, the array $A$ is permuted in such a way that

$$A[0] \leq A[1] \leq \cdots \leq A[i] \leq A[l]$$

for all $l \geq i$.

The initialization and maintenance steps are, in fact, established already as part of the termination condition of the loop invariant of the inner for loop. The for loop terminates with a permutation $A'$ of $A$ that satisfies (2.3). $\square$

#### Problem 2-2d

We let $n = \operatorname{len}(A)$ for notational convenience. Taking into account the two for loops, we see that the time complexity of bubblesort is bounded below by

$$\sum_{i=0}^{n-2} (n-1-i) = \frac{n(n-1)}{2} = \Omega(n^2).$$

Since each iteration of the inner for loop adds at most a constant number of steps to the computational process, the time complexity of bubblesort is, in fact, $\Theta(n^2)$.

bubblesort is worse than insertion sort, as insertion sort is adaptive, whereas bubblesort fails to be adaptive. $\square$
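A rough comparison count on a nearly sorted array illustrates the gap (our own instrumentation of the two algorithms):

```python
def insertion_sort_comparisons(A):
    A, count = A[:], 0
    for i in range(1, len(A)):
        x, j = A[i], i - 1
        while j >= 0:
            count += 1          # one comparison of A[j] against x
            if A[j] <= x:
                break
            A[j+1] = A[j]
            j -= 1
        A[j+1] = x
    return count

def bubblesort_comparisons(A):
    A, count = A[:], 0
    for i in range(len(A) - 1):
        for j in range(len(A) - 1, i, -1):
            count += 1          # bubblesort compares unconditionally
            if A[j] < A[j-1]:
                A[j], A[j-1] = A[j-1], A[j]
    return count

nearly_sorted = list(range(100))
nearly_sorted[0], nearly_sorted[1] = nearly_sorted[1], nearly_sorted[0]
# insertion sort exploits the existing order; bubblesort does not
assert insertion_sort_comparisons(nearly_sorted) < bubblesort_comparisons(nearly_sorted)
```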

### 2-3. Correctness of Horner’s rule

The following code fragment implements Horner's rule for evaluating a polynomial

$$P(x) = \sum_{k=0}^{n} a_k x^k = a_0 + x\left(a_1 + x\left(a_2 + \cdots + x(a_{n-1} + x a_n) \cdots\right)\right),$$

given the coefficients $a_0,a_1,\ldots,a_n$ and a value for $x$:

```python
def horner(coeffs, val):
    y = 0
    for i in range(len(coeffs)-1, -1, -1):
        y = coeffs[i] + val * y
    return y
```


a. In terms of $\Theta$-notation, what is the running time of this code fragment for Horner’s rule?

b. Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. What is the running time of this algorithm? How does it compare to Horner’s rule?

c. Consider the following loop invariant:

At the start of each iteration of the for loop of lines 2-3,

$$y = \sum_{k=0}^{n-(i+1)} a_{k+i+1} x^k.$$

Interpret a summation with no terms as equaling $0$. Following the structure of the loop invariant proof presented in this chapter, use this loop invariant to show that, at termination, $y = \sum_{k=0}^n a_k x^k$.

d. Conclude by arguing that the given code fragment correctly evaluates a polynomial characterized by the coefficients $a_0,a_1,\ldots,a_n$.

#### Problem 2-3a

Each iteration of the for loop incurs constant time, and so the running time of horner is $\Theta(n)$. $\square$

#### Problem 2-3b

A naive implementation is as follows:

```python
def polynomial_multiplication(coeffs, val):
    y = 0
    for i in range(len(coeffs)):
        term = coeffs[i]
        for j in range(i):
            term *= val
        y += term
    return y
```

```python
>>> polynomial_multiplication([1, 2, 3, 4, 5, 6], 2)
321
```


Each iteration of the inner for loop incurs constant time, and the inner loop runs $i$ times during the $i$th iteration of the outer loop, so the running time of polynomial_multiplication is

$$\sum_{i=0}^{n} \Theta(i) = \Theta(n^2),$$

which is slower than horner.

```python
>>> %%timeit
... horner([1, 2, 3, 4, 5, 6], 2)
651 ns ± 3.66 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

>>> %%timeit
... polynomial_multiplication([1, 2, 3, 4, 5, 6], 2)
2.02 µs ± 7.73 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```


$\square$

#### Problem 2-3c

The initialization step is the $n$th iteration, at the beginning of which we have $y = 0$, an empty sum, as the formula indicates.

As for the maintenance step, we assume that

$$y = \sum_{k=0}^{n-(i_0+1)} a_{k+i_0+1} x^k$$

at the beginning of the $i_0$th iteration of the for loop. After the loop body executes, we have

$$y = a_{i_0} + x \sum_{k=0}^{n-(i_0+1)} a_{k+i_0+1} x^k = \sum_{k=0}^{n-i_0} a_{k+i_0} x^k,$$

maintaining the loop invariant at the beginning of the $(i_0-1)$th iteration.

The loop terminates after the $0$th iteration, yielding

$$y = \sum_{k=0}^{n} a_k x^k,$$

as desired. $\square$

#### Problem 2-3d

See 2-3c $\square$

### 2-4. Inversions

Let $A[1..n]$ be an array of $n$ distinct numbers. If $i < j$ and $A[i] > A[j]$, then the pair $(i,j)$ is called an inversion of $A$.

a. List the five inversions of the array $\langle2,3,8,6,1\rangle$.

b. What array with elements from the set $\{1,2,\ldots,n\}$ has the most inversions? How many does it have?

c. What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer.

d. Give an algorithm that determines the number of inversions in any permutation on $n$ elements in $\Theta(n \log n)$ worst-case time. (Hint: Modify merge sort.)

#### Problem 2-4a

$(0, 4)$, $(1,4)$, $(2,3)$, $(2,4)$, $(3,4)$. $\square$

#### Problem 2-4b

If every pair $(i,j)$ such that $i < j$ is an inversion, then we have

$$\binom{n}{2} = \frac{n(n-1)}{2}$$

inversions, the maximum possible. The array $\langle n, n-1, \ldots, 2, 1 \rangle$ satisfies this condition. $\square$

#### Problem 2-4d

We observe that the number of inversions in an array $A$ is the sum of the following three quantities:

• $I(L)$, the number of inversions in $L$, the left half of $A$;
• $I(R)$, the number of inversions in $R$, the right half of $A$;
• $I(L, R)$, the number of inversions required to merge $L$ and $R$.

This yields a simple recursive algorithm for computing the number of inversions in $A$. So long as we can compute $I(L, R)$ in linear time with respect to the length of $L$ (hence the length of $R$), the time complexity is the same as that of merge sort, $\Theta(n \log n)$. We can do this by keeping a counter during the merge process, incrementing it by the number of remaining elements of $L$ every time an element of $R$ is added to the merged list before we exhaust $L$. This incurs only constant time per element of $L$ and $R$, and so it can be done in linear time.

```python
from math import floor

def inversion_counter(A):
    return inversion_mergesort(A, 0, len(A), 0)

def inversion_mergesort(A, p, r, counter):
    if p+1 < r:
        q = p + floor((r-p)/2)
        counter = inversion_mergesort(A, p, q, counter)
        counter = inversion_mergesort(A, q, r, counter)
        counter = inversion_merge(A, p, q, r, counter)
    return counter

def inversion_merge(A, p, q, r, counter):
    L, R = A[p:q], A[q:r]
    n, m = q-p, r-q

    i, j, k = 0, 0, p

    while i < n and j < m:
        if L[i] < R[j]:
            A[k] = L[i]
            i += 1
        else:
            counter += n-i  # R[j] now precedes all n-i remaining elements of L
            A[k] = R[j]
            j += 1
        k += 1

    while i < n or j < m:
        if i < n:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
        k += 1
    return counter
```

```python
>>> inversion_counter([5, 4, 3, 2, 1])
10
>>> (5 * (5-1))/2
10.0
>>> inversion_counter([10, 9, 8, 7, 6, 5, 4, 3, 2, 1])
45
>>> (10 * (10-1))/2
45.0
```


$\square$