Polynomial interpolation
Theorem: Given a table of n (n >= 1) data points (x[1],y[1]), ..., (x[n],y[n]), with all the x[i]'s distinct, there is a unique polynomial p of degree no more than n-1 which passes through (that is, interpolates) the data.
Proof outline: Write p(t) = a[1] + a[2]*t + ... + a[n]*t^(n-1). We have n coefficients to determine. Evaluating p at each x[i] gives the equation p(x[i]) = y[i]; together these form a system of n linear equations in the n unknowns a[1], ..., a[n]. There is a unique solution if and only if the determinant of the coefficient matrix of the system (the Vandermonde matrix of the x[i]'s) is non-zero. One does some algebra and computes that this determinant is the product of the pairwise differences x[j] - x[i] with i < j. Since all the x[i]'s are assumed distinct, every factor is non-zero, hence so is the determinant, and we are done. QED
The algebra one does to compute the determinant in the above theorem is fairly formidable. We can use Maple to compute it for any particular value of n (though n can't be too big).
> with(linalg);
> i := 'i': mat :=vandermonde([seq(x.i,i=1..3)]);
> dmat :=det(mat);
> factor(dmat);
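For readers following along without Maple, here is a sketch of the same check in Python using the sympy library (the function name `vandermonde_check` is mine, not part of the worksheet): it builds the symbolic n x n Vandermonde matrix and verifies that its determinant equals the product of pairwise differences.

```python
import sympy as sp

def vandermonde_check(n):
    # symbols x1, ..., xn
    xs = sp.symbols(f"x1:{n + 1}")
    # n x n Vandermonde matrix: row i is [1, x_i, x_i^2, ..., x_i^(n-1)]
    M = sp.Matrix(n, n, lambda i, j: xs[i] ** j)
    det = sp.factor(M.det())
    # product of the pairwise differences x_j - x_i with i < j
    pairwise = sp.prod(xs[j] - xs[i]
                       for i in range(n) for j in range(i + 1, n))
    return sp.simplify(det - sp.expand(pairwise)) == 0

print(vandermonde_check(3))  # True: the determinant is the product of differences
```

As in Maple, the symbolic determinant gets expensive quickly as n grows, since the expanded determinant has n! terms.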
Exercise: At some value of n, your system will not be able to compute the determinant of the n x n Vandermonde matrix. Discover that value of n.
Fortunately, there are ways to compute the interpolating polynomial which don't involve computing large determinants.
Lagrange's form of the interpolating polynomial
Here is Lagrange's construction of the interpolating polynomial. For each i from 1 to n, let l[i] be the polynomial

l[i](t) = product((t - x[j])/(x[i] - x[j]), j <> i),

the product running over all j from 1 to n except j = i. By inspection we see that l[i](x[i]) = 1 and l[i](x[j]) = 0 for j <> i. Hence the polynomial

p(t) = sum(y[i]*l[i](t), i = 1..n)

is a polynomial of degree no more than n-1 which interpolates the data. We can construct p in Maple like so. First, define the unit polynomials l[i].
> lu := (i,x,n,`t`) -> (product(t-x[j],j=1..i-1)*product(t-x[j],j=i+1..n))/(product(x[i]-x[j],j=1..i-1)*product(x[i]-x[j],j=i+1..n));
Just to check, we take some specific x data and compute one of the polynomials for that x data.
>
x := [x1,x2,x3,x4]:
lu(2,x,4,t);
Next, define lagrange to construct Lagrange's polynomial, given data x, y and the name of the variable to use in the polynomial.
>
lagrange := proc(x,y,`t`)
local i;
unapply(sum(y[i]*lu(i,x,nops(x),t),i=1..nops(x)),t) end:
Test with some data
> x := [1,2,5,7]: y := [4,3,1,4]:
> p1 := lagrange(x,y,w);
> map(p1,x);
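As a cross-check outside Maple, here is a sketch of the same Lagrange construction in plain Python (the names `lagrange_poly` and `p` are mine); exact rational arithmetic via the fractions module avoids floating-point noise.

```python
from fractions import Fraction

def lagrange_poly(xs, ys):
    """Evaluator for the Lagrange interpolating polynomial through (xs[i], ys[i])."""
    def p(t):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            # build y_i * l_i(t), where l_i is 1 at x_i and 0 at the other nodes
            term = Fraction(yi)
            for j, xj in enumerate(xs):
                if j != i:
                    term *= Fraction(t - xj, xi - xj)
            total += term
        return total
    return p

p1 = lagrange_poly([1, 2, 5, 7], [4, 3, 1, 4])
print([p1(t) for t in [1, 2, 5, 7]])  # reproduces the y data: [4, 3, 1, 4] as Fractions
```

Note the cost: each evaluation does on the order of n^2 multiplications, which is one of the defects of Lagrange's form discussed next.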
Works very nicely. But Lagrange's form has two major defects. 1. It is very expensive to compute. 2. It is not extensible: that is, if we add a data point to the data, we have to recompute everything from scratch. Newton had a better idea.
Newton's form of the interpolating polynomial.
Instead of writing the interpolating polynomial as a linear combination of the n polynomials 1, t, t^2, ..., t^(n-1), Newton says write it as a linear combination of the polynomials

1, (t - x[1]), (t - x[1])*(t - x[2]), ..., (t - x[1])*(t - x[2])* ... *(t - x[n-1]).

Thus

p(t) = c[1] + c[2]*(t - x[1]) + ... + c[n]*(t - x[1])* ... *(t - x[n-1]).

Evaluating p at x[1], ..., x[n] gives us a triangular system of linear equations to solve for c[1] through c[n]: the equation at x[1] involves only c[1], the equation at x[2] only c[1] and c[2], and so on.
Such systems can be solved easily, even by hand, by computing a divided difference table. The solutions lie along the superdiagonal of the matrix computed by the Maple procedure divdiff, which we define below.
>
divdiff := proc(x,y)
local n, nt, j, i, k;
n := nops(x);
nt := matrix(n,n+1,[seq(0,k=1..n*(n+1))]);
for i from 1 to n do nt[i,1]:= x[i]: nt[i,2]:= y[i]: od:
for j from 3 to n+1 do
for i from j-1 to n do
nt[i,j]:= (nt[i,j-1]-nt[i-1,j-1])/(x[i]-x[i-(j-2)]);
od;
od;
evalm(nt) end:
> divdiff([seq('x'[i],i=1..3)],[seq('y'[j],j=1..3)]);
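The divided-difference scheme translates directly to other languages. Here is a Python sketch (the name `divided_differences` is mine) that builds the table column by column and reads the Newton coefficients off the superdiagonal, just as divdiff's matrix does.

```python
from fractions import Fraction

def divided_differences(xs, ys):
    """Build the divided-difference table; return the Newton coefficients,
    i.e. the superdiagonal entries of the table (as in divdiff's matrix)."""
    n = len(xs)
    table = [[Fraction(y)] for y in ys]   # column 0 holds the y values
    for order in range(1, n):             # column `order`: order-th differences
        for i in range(order, n):
            d = (table[i][order - 1] - table[i - 1][order - 1]) / (xs[i] - xs[i - order])
            table[i].append(d)
    return [table[k][k] for k in range(n)]

# same test data as the Lagrange example above
print(divided_differences([1, 2, 5, 7], [4, 3, 1, 4]))
```

Only the entries on and below the diagonal are ever computed, which mirrors the triangular shape of Newton's linear system.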
The procedure newtinterp computes the coefficients c[i] and returns the interpolating polynomial.
>
newtinterp := proc(x,y,`t`)
local n, nt, j, i,k,p;
n := nops(x);
nt := matrix(n,n+1,[seq(0,k=1..n*(n+1))]);
for i from 1 to n do nt[i,1]:= x[i]: nt[i,2]:= y[i]: od:
p := nt[1,2]:
for j from 3 to n+1 do
for i from j-1 to n do
nt[i,j]:= (nt[i,j-1]-nt[i-1,j-1])/(x[i]-x[i-(j-2)]);
od;
p := p+ nt[j-1,j]*convert([seq(t-x[k],k=1..j-2)],`*`) od;
p end:
We can test this out by generating the polynomial of smallest degree which is 0 at the integers from 0 to 8, except at 4 where it takes the value 1.
> x := [seq(i,i=0..8)]: y := [0,0,0,0,1,0,0,0,0]:
> sol:=newtinterp(x,y ,t);
> p := unapply(sol,t);
> map(p,[0,1,2,3,4,5,6,7,8]);
> plot(p,1..7);
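To see the whole Newton pipeline in one place, here is a Python sketch (the name `newton_interp` is mine): the coefficients are computed by in-place divided differences, and evaluation uses nested multiplication, a Horner-style scheme for the Newton form. It reproduces the 0-through-8 test above.

```python
from fractions import Fraction

def newton_interp(xs, ys):
    """Newton-form interpolant: coefficients by in-place divided differences,
    evaluation by nested multiplication."""
    n = len(xs)
    coef = [Fraction(y) for y in ys]
    for order in range(1, n):
        for i in range(n - 1, order - 1, -1):   # update in place, bottom up
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - order])
    def p(t):
        # p(t) = c[0] + (t-x[0])*(c[1] + (t-x[1])*(c[2] + ...))
        acc = coef[-1]
        for c, x in zip(coef[-2::-1], xs[-2::-1]):
            acc = acc * (t - x) + c
        return acc
    return p

xs = list(range(9))
ys = [0, 0, 0, 0, 1, 0, 0, 0, 0]
p = newton_interp(xs, ys)
assert [p(t) for t in xs] == ys   # the interpolant reproduces the data
```

Extensibility is visible here: adding an (n+1)-th data point only appends one new coefficient; the first n coefficients are unchanged, unlike in Lagrange's form.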
Problems:
1. Verify that the Lagrange interpolation polynomial which is 0 at the integers from 0 to 8, except at 4 where it takes the value 1, is the same as the polynomial p found above.
2. For the function f(x) = 1/(1 + x^2), calculate the interpolation polynomial p for the data x := [seq(i, i=-5..5)] and y := map(f, x). Make p into a function with unapply and plot f and p from -5 to 5. You should see the polynomial start to wiggle out near the endpoints of the interval. Experiment with this by increasing the number of data points to 18. What happens? What if we change the interval to [0,1]? What if we change the function to the sin function?
3. Export to C++: What do we want to export? If all we want to export is the final function routine for the polynomial p we found above, that can proceed as before. But suppose we want to export the procedures defined in divdiff and newtinterp to C++. How would we do that?