30.1 Representing polynomials

30.1-1

Multiply the polynomials \(A(x) = 7x^3 - x^2 + x - 10\) and \(B(x) = 8x^3 - 6x + 3\) using equations \(\text{(30.1)}\) and \(\text{(30.2)}\).

\[ \begin{array}{rl} & 56x^6 - 8x^5 + (8 - 42)x^4 + (-80 + 6 + 21)x^3 + (-3 - 6)x^2 + (60 + 3)x - 30 \\\\ = & 56x^6 - 8x^5 - 34x^4 - 53x^3 - 9x^2 + 63x - 30. \end{array} \]
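For a quick check of the hand computation, the product can be recomputed with the plain \(\Theta(n^2)\) convolution of equation \(\text{(30.2)}\) (a minimal Python sketch, coefficients listed from the constant term up):

```python
# Plain Theta(n^2) coefficient convolution: c_j = sum_k a_k * b_{j-k}.
a = [-10, 1, -1, 7]    # A(x) = 7x^3 - x^2 + x - 10
b = [3, -6, 0, 8]      # B(x) = 8x^3 - 6x + 3
c = [0] * (len(a) + len(b) - 1)
for j in range(len(a)):
    for k in range(len(b)):
        c[j + k] += a[j] * b[k]
print(c)   # [-30, 63, -9, -53, -34, -8, 56], matching the result above
```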

30.1-2

Another way to evaluate a polynomial \(A(x)\) of degree-bound \(n\) at a given point \(x_0\) is to divide \(A(x)\) by the polynomial \((x - x_0)\), obtaining a quotient polynomial \(q(x)\) of degree-bound \(n - 1\) and a remainder \(r\), such that

\[A(x) = q(x)(x - x_0) + r.\]

Clearly, \(A(x_0) = r\). Show how to compute the remainder \(r\) and the coefficients of \(q(x)\) in time \(\Theta(n)\) from \(x_0\) and the coefficients of \(A\).

Let \(M\) be the \(n \times n\) matrix with \(1\)'s on the diagonal, \(-x_0\)'s on the superdiagonal, and \(0\)'s everywhere else, and let \(q\) be the vector \((r, q_0, q_1, \dots, q_{n - 2})\). If \(a = (a_0, a_1, \dots, a_{n - 1})\), then comparing coefficients in \(A(x) = q(x)(x - x_0) + r\) shows that we need to solve the matrix equation \(Mq = a\) to compute the remainder and the coefficients. Since \(M\) is tridiagonal (in fact upper bidiagonal), Problem 28-1 (e) tells us how to solve this equation in linear time.
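Unrolling the back substitution for this bidiagonal system gives ordinary synthetic division. A minimal sketch, assuming \(n \ge 2\) and coefficients given as \((a_0, a_1, \dots, a_{n - 1})\):

```python
# Back substitution for the bidiagonal system, i.e. synthetic division.
# Returns (q, r) with A(x) = q(x)(x - x0) + r, where q = (q_0, ..., q_{n-2}).
def divide_by_linear(coeffs, x0):
    n = len(coeffs)
    q = [0] * (n - 1)
    q[n - 2] = coeffs[n - 1]              # last row:  a_{n-1} = q_{n-2}
    for i in range(n - 2, 0, -1):
        q[i - 1] = coeffs[i] + x0 * q[i]  # row i:     a_i = q_{i-1} - x_0 q_i
    r = coeffs[0] + x0 * q[0]             # first row: a_0 = r - x_0 q_0
    return q, r

# Example: A(x) = 7x^3 - x^2 + x - 10 divided by (x - 2) gives
# q(x) = 7x^2 + 13x + 27 and r = A(2) = 44.
print(divide_by_linear([-10, 1, -1, 7], 2))   # ([27, 13, 7], 44)
```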

30.1-3

Derive a point-value representation for \(A^\text{rev}(x) = \sum_{j = 0}^{n - 1} a_{n - 1 - j}x^j\) from a point-value representation for \(A(x) = \sum_{j = 0}^{n - 1} a_jx^j\), assuming that none of the points is \(0\).

For each point-value pair \((p, A(p))\), we can compute the pair \((\frac{1}{p}, A^{rev}(\frac{1}{p}))\). To do this, we note that

\[ \begin{aligned} A^{rev}(\frac{1}{p}) & = \sum_{j = 0}^{n - 1} a_{n - 1 - j} (\frac{1}{p})^j \\\\ & = \sum_{j = 0}^{n - 1} a_j(\frac{1}{p})^{n - 1 - j} \\\\ & = p^{1 - n} \sum_{j = 0}^{n - 1} a_jp^j \\\\ & = p^{1 - n}A(p), \end{aligned} \]

Since we know what \(A(p)\) is, we can compute \(A^{rev}(\frac{1}{p})\). Of course, we are using the fact that \(p \ne 0\), since we divide by it. Also, the new points are all distinct, because \(\frac{1}{p} = \frac{1}{p'}\) implies \(p = p'\) by cross multiplication. So, since all the \(x\)-values were distinct in the point-value representation of \(A\), they are distinct in this point-value representation of \(A^{rev}\) as well.
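A minimal sketch of this transformation, assuming the point-value pairs of \(A\) are given along with the degree bound \(n\):

```python
# Turn a point-value representation of A into one of A_rev, using
# A_rev(1/p) = p^(1-n) * A(p); every point p must be nonzero.
def reverse_point_value(pairs, n):
    return [(1 / p, p ** (1 - n) * y) for p, y in pairs]

# Example: A(x) = 1 + 2x + 3x^2 (n = 3), so A_rev(x) = 3 + 2x + x^2.
A = lambda x: 1 + 2 * x + 3 * x ** 2
pairs = [(x, A(x)) for x in (1, 2, 4)]
print(reverse_point_value(pairs, 3))   # pairs (1/p, A_rev(1/p))
```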

30.1-4

Prove that \(n\) distinct point-value pairs are necessary to uniquely specify a polynomial of degree-bound \(n\), that is, if fewer than \(n\) distinct point-value pairs are given, they fail to specify a unique polynomial of degree-bound \(n\). (\(\textit{Hint:}\) Using Theorem 30.1, what can you say about a set of \(n - 1\) point-value pairs to which you add one more arbitrarily chosen point-value pair?)

Suppose that just \(n - 1\) point-value pairs uniquely determine a polynomial \(P\) which satisfies them. Append the point-value pair \((x_{n - 1}, y_{n - 1})\), where \(x_{n - 1}\) is distinct from the other points, and let \(P'\) be the unique polynomial which agrees with the \(n\) pairs, given by Theorem 30.1. Now append instead \((x_{n - 1}, y'_{n - 1})\) where \(y_{n - 1} \ne y'_{n - 1}\), and let \(P''\) be the polynomial obtained from these points via Theorem 30.1. Since polynomials coming from \(n\) pairs are unique, \(P' \ne P''\). However, \(P'\) and \(P''\) agree on the original \(n - 1\) point-value pairs, contradicting the fact that \(P\) was determined uniquely.
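A small numerical illustration of the argument (hypothetical data, not part of the proof): three shared point-value pairs plus two different choices of a fourth pair yield two distinct degree-bound-\(4\) polynomials that agree on the shared pairs, so the three pairs alone do not determine the polynomial.

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0])   # the n - 1 = 3 shared points
ys = np.array([1.0, 3.0, 2.0])   # their shared values
x3 = 5.0                         # the appended point x_{n-1}

for y3 in (0.0, 10.0):           # two different values y_{n-1} and y'_{n-1}
    coeffs = np.polyfit(np.append(xs, x3), np.append(ys, y3), deg=3)
    # Different coefficient vectors, identical values on the shared points.
    print(np.round(coeffs, 4), np.round(np.polyval(coeffs, xs), 4))
```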

30.1-5

Show how to use equation \(\text{(30.5)}\) to interpolate in time \(\Theta(n^2)\). (\(\textit{Hint:}\) First compute the coefficient representation of the polynomial \(\prod_j (x - x_j)\) and then divide by \((x - x_k)\) as necessary for the numerator of each term; see Exercise 30.1-2. You can compute each of the \(n\) denominators in time \(O(n)\).)

First, we show that we can compute the coefficient representation of \(\prod_j (x - x_j)\) in time \(\Theta(n^2)\). We build it up one factor at a time, showing that multiplying \(\prod_{j < k} (x - x_j)\) by \((x - x_k)\) only takes time \(O(n)\); since this needs to be done \(n\) times, the total runtime is \(O(n^2)\). Suppose that \(\sum_{i = 0}^{k - 1} c_ix^i\) is a coefficient representation of \(\prod_{j < k} (x - x_j)\). To multiply it by \((x - x_k)\), we just set the new coefficients to \(c_i' = c_{i - 1} - x_kc_i\) for \(i = 1, \dots, k\) (taking \(c_k = 0\)) and \(c_0' = -x_k c_0\). Each of these coefficients can be computed in constant time, and there are only linearly many of them, so the time to compute the next partial product is \(O(n)\).

Now that we have a coefficient representation of \(\prod_j (x - x_j)\), we need to compute, for each \(k\), the polynomial \(\prod_{j \ne k} (x - x_j)\), each of which can be obtained in time \(\Theta(n)\) by Exercise 30.1-2. Since the full product contains the factor we are dividing by, the remainder in each case is \(0\). Let's call these polynomials \(f_k\). Then we need only compute the sum \(\sum_k y_k \frac{f_k(x)}{f_k(x_k)}\). We compute each \(f_k(x_k)\) in time \(\Theta(n)\), so all told only \(\Theta(n^2)\) time is spent computing all the \(f_k(x_k)\) values. For each term in the sum, dividing the polynomial \(f_k(x)\) by the number \(f_k(x_k)\) and multiplying by \(y_k\) takes time \(\Theta(n)\), so in total this takes time \(\Theta(n^2)\). Lastly, we are adding up \(n\) polynomials, each of degree \(n - 1\), so the total time taken there is \(\Theta(n^2)\).
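Putting the pieces together, here is a sketch of the \(\Theta(n^2)\) interpolation (coefficients listed from the constant term up; the synthetic division inside the loop is the linear-time routine from Exercise 30.1-2):

```python
def interpolate(xs, ys):
    n = len(xs)

    # Coefficient form of prod_j (x - x_j), built one factor at a time.
    full = [1.0]
    for xk in xs:
        new = [0.0] * (len(full) + 1)
        for i, c in enumerate(full):
            new[i + 1] += c          # contribution of x * (partial product)
            new[i] -= xk * c         # contribution of -x_k * (partial product)
        full = new

    result = [0.0] * n
    for xk, yk in zip(xs, ys):
        # f_k(x) = full(x) / (x - x_k), by synthetic division (remainder is 0).
        fk = [0.0] * n
        carry = full[n]
        for i in range(n - 1, -1, -1):
            fk[i] = carry
            carry = full[i] + xk * carry
        # Denominator f_k(x_k) = prod_{j != k} (x_k - x_j), by Horner's rule.
        denom = 0.0
        for c in reversed(fk):
            denom = denom * xk + c
        for i in range(n):
            result[i] += yk * fk[i] / denom
    return result

# Example: the pairs below lie on 2x^2 + 3x + 1.
print(interpolate([0.0, 1.0, 2.0], [1.0, 6.0, 15.0]))   # ~ [1.0, 3.0, 2.0]
```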

30.1-6

Explain what is wrong with the "obvious" approach to polynomial division using a point-value representation, i.e., dividing the corresponding \(y\) values. Discuss separately the case in which the division comes out exactly and the case in which it doesn't.

If the division comes out exactly, that is, if \(Q\) divides \(P\) so that \(P / Q\) is itself a polynomial, then the obvious approach works provided \(Q\) is nonzero at every sample point: dividing each \(y\)-value of \(P\) by the corresponding \(y\)-value of \(Q\) gives a point-value representation of the quotient. If \(Q\) takes on the value zero at some of the points, we cannot even carry out the pointwise division, so we must choose point-value pairs \((x_i, y_i)\) for \(Q\) with every \(y_i \ne 0\). If the division does not come out exactly, then \(P / Q\) is not a polynomial at all, so it has no point-value representation; the pairs produced by pointwise division interpolate some polynomial, but that polynomial is not \(P / Q\).
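A quick sanity check of the exact case (a sketch using numpy's polynomial helpers, which store coefficients highest degree first):

```python
import numpy as np

# When Q divides P exactly, pointwise division of the y-values at points
# where Q is nonzero represents P / Q, and interpolation recovers it.
Q = np.array([1.0, -3.0])        # Q(x) = x - 3
R = np.array([2.0, 0.0, 1.0])    # R(x) = 2x^2 + 1
P = np.polymul(Q, R)             # P = Q * R, so P / Q = R exactly

xs = np.array([1.0, 2.0, 4.0])   # sample points avoiding Q's zero at x = 3
ys = np.polyval(P, xs) / np.polyval(Q, xs)
print(np.polyfit(xs, ys, deg=2)) # ~ [2. 0. 1.], the coefficients of R
```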

30.1-7

Consider two sets \(A\) and \(B\), each having \(n\) integers in the range from \(0\) to \(10n\). We wish to compute the Cartesian sum of \(A\) and \(B\), defined by

\[C = \\{x + y: x \in A \text{ and } y \in B\\}.\]

Note that the integers in \(C\) are in the range from \(0\) to \(20n\). We want to find the elements of \(C\) and the number of times each element of \(C\) is realized as a sum of elements in \(A\) and \(B\). Show how to solve the problem in \(O(n\lg n)\) time. (\(\textit{Hint:}\) Represent \(A\) and \(B\) as polynomials of degree at most \(10n\).)

For the set \(A\), we define the polynomial \(f_A\) to have a coefficient representation with \(a_i\) equal to \(0\) if \(i \notin A\) and equal to \(1\) if \(i \in A\). Similarly define \(f_B\). Then, we claim that, looking at \(f_C := f_A \cdot f_B\) in coefficient form, the \(i\)th coefficient \(c_i\) is exactly equal to the number of times that \(i\) is realized as a sum of elements from \(A\) and \(B\). Since we can perform the polynomial multiplication in time \(O(n \lg n)\) by the methods of this chapter, we can get the final answer in time \(O(n \lg n)\). To see that \(f_C\) has the property described, we look at the ways a term of \(x^i\) can appear. Each contribution to that coefficient comes from some \(k\) with \(a_k \ne 0\) and \(b_{i - k} \ne 0\), because the powers of \(x\) are additive when we multiply. Since each of these contributions is only ever \(1\), the final coefficient counts the total number of such contributions, that is, the number of \(k \in A\) such that \(i - k \in B\), which is exactly what we claimed \(f_C\) was counting.
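A sketch of this approach, with numpy's FFT standing in for the fast polynomial multiplication of this chapter (the sets and the range limit below are illustrative):

```python
import numpy as np

# counts[i] = number of ways i arises as x + y with x in A and y in B.
def cartesian_sum_counts(A, B, limit):
    fa = np.zeros(limit + 1)
    fb = np.zeros(limit + 1)
    fa[list(A)] = 1.0                    # coefficient a_i = 1 iff i in A
    fb[list(B)] = 1.0                    # coefficient b_i = 1 iff i in B
    size = 2 * (limit + 1)               # enough room for every product term
    counts = np.fft.irfft(np.fft.rfft(fa, size) * np.fft.rfft(fb, size), size)
    return np.rint(counts[:2 * limit + 1]).astype(int)

# Example: sums of {0, 1, 3} and {1, 2} are 1, 2, 2, 3, 4, 5.
counts = cartesian_sum_counts({0, 1, 3}, {1, 2}, limit=30)
print({i: c for i, c in enumerate(counts) if c})   # {1: 1, 2: 2, 3: 1, 4: 1, 5: 1}
```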