
LAGRANGE POLYNOMIAL

 Definition

Given a set of k + 1 data points

(x_0, y_0),\ldots,(x_k, y_k)

where no two x_j are the same, the interpolation polynomial in the Lagrange form is a linear combination

L(x) := \sum_{j=0}^{k} y_j \ell_j(x)

of Lagrange basis polynomials

\ell_j(x) := \prod_{i=0,\, i\neq j}^{k} \frac{x-x_i}{x_j-x_i} = \frac{(x-x_0)}{(x_j-x_0)} \cdots \frac{(x-x_{j-1})}{(x_j-x_{j-1})} \frac{(x-x_{j+1})}{(x_j-x_{j+1})} \cdots \frac{(x-x_{k})}{(x_j-x_{k})}.
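
For instance, with only two points (k = 1) the basis polynomials reduce to straight lines and L(x) is the familiar linear interpolant:

\ell_0(x) = \frac{x - x_1}{x_0 - x_1}, \qquad \ell_1(x) = \frac{x - x_0}{x_1 - x_0}, \qquad L(x) = y_0\,\frac{x - x_1}{x_0 - x_1} + y_1\,\frac{x - x_0}{x_1 - x_0}.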

Proof

We are looking for a polynomial function L(x) of degree at most k satisfying

L(x_j) = y_j \qquad j=0,\ldots,k

The Lagrange form given above is a solution to this interpolation problem, as can be seen from two observations:

  1. \ell_j(x) is a polynomial and has degree k.
  2. \ell_i(x_j) = \delta_{ij},\quad 0 \leq i,j \leq k.\,

Thus the function L(x) is a polynomial with degree at most k and

L(x_i) = \sum_{j=0}^{k} y_j \ell_j(x_i) = \sum_{j=0}^{k} y_j \delta_{ji} = y_i.

There can be only one such solution: the difference of two solutions would be a polynomial of degree at most k with k + 1 zeros, which is only possible if the difference is identically zero. Hence L(x) is the unique polynomial interpolating the given data.

 Main idea

Solving an interpolation problem leads to a problem in linear algebra: we have to solve a system of linear equations for the coefficients of the polynomial. Using the standard monomial basis for the interpolation polynomial, the coefficient matrix of this system is the Vandermonde matrix, which is complicated to work with. By choosing the Lagrange basis instead, the coefficient matrix has entries \ell_j(x_i) = \delta_{ij}, i.e. it is the identity matrix, so the system can be solved instantly; in this sense the Lagrange basis "inverts" the Vandermonde matrix.
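
Concretely, writing L(x) = a_0 + a_1 x + \cdots + a_k x^k in the monomial basis, the conditions L(x_i) = y_i form the linear system

\begin{pmatrix} 1 & x_0 & \cdots & x_0^k \\ 1 & x_1 & \cdots & x_1^k \\ \vdots & \vdots & & \vdots \\ 1 & x_k & \cdots & x_k^k \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_k \end{pmatrix} = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_k \end{pmatrix},

whose coefficient matrix is the Vandermonde matrix, whereas in the Lagrange basis the corresponding matrix is the identity.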

 Implementation in C++

Note: the "pos" and "val" arrays each hold "degree" entries, i.e. "degree" is the number of data points; the resulting interpolating polynomial therefore has degree at most degree - 1.

float lagrangeInterpolatingPolynomial (float pos[], float val[], int degree, float desiredPos)  { 
   float retVal = 0; 
 
   for (int i = 0; i < degree; ++i) { 
      float weight = 1; 
 
      for (int j = 0; j < degree; ++j) {
         // The i-th term has to be skipped
         if (j != i) {
            weight *= (desiredPos - pos[j]) / (pos[i] - pos[j]);
         }
      }
 
      retVal += weight * val[i]; 
   } 
 
   return retVal; 
}
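
A minimal usage sketch (assuming the function above is in the same source file; the three sample points are those of Example 2 below, where the interpolant is exactly x²):

#include <cstdio>

int main() {
   float pos[] = {1.0f, 2.0f, 3.0f};   // x-coordinates of the data points
   float val[] = {1.0f, 4.0f, 9.0f};   // y-coordinates, here f(x) = x^2

   // "degree" is the number of points (3), so the interpolant has degree at most 2
   float y = lagrangeInterpolatingPolynomial(pos, val, 3, 2.5f);
   printf("L(2.5) = %f\n", y);         // prints 6.250000, since L(x) = x^2
   return 0;
}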

 Usage

Example 1

[Figure: the tangent function and its interpolant]

Find an interpolation formula for f(x) = tan(x) given this set of known values:

x_0=-1.5, \qquad f(x_0)=-14.1014
x_1=-0.75, \qquad f(x_1)=-0.931596
x_2=0, \qquad f(x_2)=0
x_3=0.75, \qquad f(x_3)=0.931596
x_4=1.5, \qquad f(x_4)=14.1014.

The basis polynomials are:

\ell_0(x)={x - x_1 \over x_0 - x_1}\cdot{x - x_2 \over x_0 - x_2}\cdot{x - x_3 \over x_0 - x_3}\cdot{x - x_4 \over x_0 - x_4}
         ={1\over 243} x (2x-3)(4x-3)(4x+3)
\ell_1(x)={x - x_0 \over x_1 - x_0}\cdot{x - x_2 \over x_1 - x_2}\cdot{x - x_3 \over x_1 - x_3}\cdot{x - x_4 \over x_1 - x_4}
         =-{8\over 243} x (2x-3)(2x+3)(4x-3)
\ell_2(x)={x - x_0 \over x_2 - x_0}\cdot{x - x_1 \over x_2 - x_1}\cdot{x - x_3 \over x_2 - x_3}\cdot{x - x_4 \over x_2 - x_4}
         ={3\over 243} (2x+3)(4x+3)(4x-3)(2x-3)
\ell_3(x)={x - x_0 \over x_3 - x_0}\cdot{x - x_1 \over x_3 - x_1}\cdot{x - x_2 \over x_3 - x_2}\cdot{x - x_4 \over x_3 - x_4}
         =-{8\over 243} x (2x-3)(2x+3)(4x+3)
\ell_4(x)={x - x_0 \over x_4 - x_0}\cdot{x - x_1 \over x_4 - x_1}\cdot{x - x_2 \over x_4 - x_2}\cdot{x - x_3 \over x_4 - x_3}
         ={1\over 243} x (2x+3)(4x-3)(4x+3).

Thus the interpolating polynomial is

\begin{align}
L(x) &= {1\over 243}\Big(f(x_0)\, x (2x-3)(4x-3)(4x+3) \\
     &\quad {} - 8 f(x_1)\, x (2x-3)(2x+3)(4x-3) \\
     &\quad {} + 3 f(x_2)\, (2x+3)(4x+3)(4x-3)(2x-3) \\
     &\quad {} - 8 f(x_3)\, x (2x-3)(2x+3)(4x+3) \\
     &\quad {} + f(x_4)\, x (2x+3)(4x-3)(4x+3)\Big) \\
     &= 4.834848x^3 - 1.477474x.
\end{align}
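
As a check, evaluating at the node x_3 = 0.75 recovers the tabulated value,

L(0.75) = 4.834848\,(0.75)^3 - 1.477474\,(0.75) = 0.931596 = f(x_3),

while between the nodes the interpolant can differ noticeably from tan x (for instance L(0.3) \approx -0.313 whereas \tan(0.3) \approx 0.309); see the Notes below on oscillation between data points.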

Example 2

We wish to interpolate f(x) = x² over the range 1 ≤ x ≤ 3, given these three points:

x_0=1, \qquad f(x_0)=1
x_1=2, \qquad f(x_1)=4
x_2=3, \qquad f(x_2)=9.

The interpolating polynomial is:

\begin{align}
L(x) &= 1\cdot{x - 2 \over 1 - 2}\cdot{x - 3 \over 1 - 3}
      + 4\cdot{x - 1 \over 2 - 1}\cdot{x - 3 \over 2 - 3}
      + 9\cdot{x - 1 \over 3 - 1}\cdot{x - 2 \over 3 - 2} \\
     &= x^2.
\end{align}

The interpolant recovers f exactly: f(x) = x² is itself a polynomial of degree at most 2 passing through the three points, so by uniqueness L = f.

Example 3

We wish to interpolate f(x) = x³ over the range 1 ≤ x ≤ 3, given these three points:

x_0=1, \qquad f(x_0)=1
x_1=2, \qquad f(x_1)=8
x_2=3, \qquad f(x_2)=27.

The interpolating polynomial is:

\begin{align}
L(x) &= 1\cdot{x - 2 \over 1 - 2}\cdot{x - 3 \over 1 - 3}
      + 8\cdot{x - 1 \over 2 - 1}\cdot{x - 3 \over 2 - 3}
      + 27\cdot{x - 1 \over 3 - 1}\cdot{x - 2 \over 3 - 2} \\
     &= 6x^2 - 11x + 6.
\end{align}
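
Unlike Example 2, this interpolant does not reproduce f away from the nodes, since x³ has higher degree than three points can determine; for instance

L(1.5) = 6(1.5)^2 - 11(1.5) + 6 = 3, \qquad f(1.5) = (1.5)^3 = 3.375.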

 Notes

The Lagrange form of the interpolation polynomial shows the linear character of polynomial interpolation and the uniqueness of the interpolation polynomial. Therefore, it is preferred in proofs and theoretical arguments. Uniqueness can also be seen from the invertibility of the Vandermonde matrix, due to the non-vanishing of the Vandermonde determinant.

But, as can be seen from the construction, each time a node x_k changes, all Lagrange basis polynomials have to be recalculated. A better form of the interpolation polynomial for practical (or computational) purposes is the barycentric form of the Lagrange interpolation (see below) or Newton polynomials.

Lagrange and other interpolation at equally spaced points, as in the example above, yield a polynomial oscillating above and below the true function. This behaviour tends to grow with the number of points, leading to a divergence known as Runge's phenomenon; the problem may be eliminated by choosing interpolation points at Chebyshev nodes.

The Lagrange basis polynomials can be used in numerical integration to derive the Newton–Cotes formulas.
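
For example, integrating the two-point interpolant of f on [x_0, x_1] = [a, b] term by term gives the trapezoidal rule, the simplest Newton–Cotes formula:

\int_a^b f(x)\,dx \approx \int_a^b \big( f(a)\,\ell_0(x) + f(b)\,\ell_1(x) \big)\,dx = \frac{b-a}{2}\big( f(a) + f(b) \big).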

Barycentric interpolation

Using the quantity

\ell(x) = (x - x_0)(x - x_1) \cdots (x - x_k)

we can rewrite the Lagrange basis polynomials as

\ell_j(x) = \frac{\ell(x)}{x-x_j} \frac{1}{\prod_{i=0,i \neq j}^k(x_j-x_i)}

or, by defining the barycentric weights[1]

w_j = \frac{1}{\prod_{i=0,i \neq j}^k(x_j-x_i)}

we can simply write

\ell_j(x) = \ell(x)\frac{w_j}{x-x_j}

which is commonly referred to as the first form of the barycentric interpolation formula.

The advantage of this representation is that the interpolation polynomial may now be evaluated as

L(x) = \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j}y_j

which, if the weights w_j have been pre-computed, requires only \mathcal{O}(n) operations (evaluating \ell(x) and the weights w_j/(x-x_j)) as opposed to \mathcal{O}(n^2) for evaluating the Lagrange basis polynomials \ell_j(x) individually.

The barycentric interpolation formula can also easily be updated to incorporate a new node x_{k+1} by dividing each of the w_j, j = 0, \dots, k, by (x_j - x_{k+1}) and constructing the new w_{k+1} as above.
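
A small sketch of this update step (the std::vector interface and the name addNode are illustrative, not from the original text):

#include <cstddef>
#include <vector>

// Append a new node xNew and update the existing barycentric weights in place.
void addNode(std::vector<double>& x, std::vector<double>& w, double xNew) {
   double wNew = 1.0;
   for (std::size_t j = 0; j < x.size(); ++j) {
      w[j] /= (x[j] - xNew);     // divide each existing w_j by (x_j - x_{k+1})
      wNew /= (xNew - x[j]);     // accumulate w_{k+1} = 1 / prod_i (x_{k+1} - x_i)
   }
   x.push_back(xNew);
   w.push_back(wNew);
}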

We can further simplify the first form by first considering the barycentric interpolation of the constant function g(x)\equiv 1:

g(x) = \ell(x) \sum_{j=0}^k \frac{w_j}{x-x_j}.

Dividing L(x) by g(x) does not modify the interpolation, yet yields

L(x) = \frac{\sum_{j=0}^k \frac{w_j}{x-x_j}y_j}{\sum_{j=0}^k \frac{w_j}{x-x_j}}

which is referred to as the second form or true form of the barycentric interpolation formula. This second form has the advantage that \ell(x) need not be evaluated for each evaluation of L(x).
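
A C++ sketch of the second form with the weights precomputed once (function names are illustrative; evaluation exactly at a node simply returns the tabulated value to avoid division by zero):

#include <cstddef>
#include <vector>

// Precompute the barycentric weights w_j = 1 / prod_{i != j} (x_j - x_i).
std::vector<double> barycentricWeights(const std::vector<double>& x) {
   std::vector<double> w(x.size(), 1.0);
   for (std::size_t j = 0; j < x.size(); ++j)
      for (std::size_t i = 0; i < x.size(); ++i)
         if (i != j) w[j] /= (x[j] - x[i]);
   return w;
}

// Evaluate L(t) by the second (true) form of the barycentric formula.
double barycentricEval(const std::vector<double>& x, const std::vector<double>& y,
                       const std::vector<double>& w, double t) {
   double num = 0.0, den = 0.0;
   for (std::size_t j = 0; j < x.size(); ++j) {
      if (t == x[j]) return y[j];          // t coincides with a node
      double c = w[j] / (t - x[j]);
      num += c * y[j];
      den += c;
   }
   return num / den;
}

After the one-time weight computation, each evaluation costs a single pass over the nodes, matching the operation count discussed above.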

Courtesy: en.wikipedia.org/wiki/Lagrange_polynomial

LAGRANGE INTERPOLATING POLYNOMIAL


The Lagrange interpolating polynomial is the polynomial P(x) of degree \le n-1 that passes through the n points (x_1, y_1=f(x_1)), (x_2, y_2=f(x_2)), \ldots, (x_n, y_n=f(x_n)), and is given by

P(x) = \sum_{j=1}^{n} P_j(x),
(1)

where

P_j(x) = y_j \prod_{k=1,\, k\neq j}^{n} \frac{x-x_k}{x_j-x_k}.
(2)

Written explicitly,

P(x) = \frac{(x-x_2)(x-x_3)\cdots(x-x_n)}{(x_1-x_2)(x_1-x_3)\cdots(x_1-x_n)}\, y_1 + \frac{(x-x_1)(x-x_3)\cdots(x-x_n)}{(x_2-x_1)(x_2-x_3)\cdots(x_2-x_n)}\, y_2 + \cdots + \frac{(x-x_1)(x-x_2)\cdots(x-x_{n-1})}{(x_n-x_1)(x_n-x_2)\cdots(x_n-x_{n-1})}\, y_n.
(3)

The formula was first published by Waring (1779), rediscovered by Euler in 1783, and published by Lagrange in 1795 (Jeffreys and Jeffreys 1988).

Lagrange interpolating polynomials are implemented in Mathematica as InterpolatingPolynomial[data, var]. They are used, for example, in the construction of Newton-Cotes formulas.

When constructing interpolating polynomials, there is a tradeoff between having a better fit and having a smooth well-behaved fitting function. The more data points that are used in the interpolation, the higher the degree of the resulting polynomial, and therefore the greater oscillation it will exhibit between the data points. Therefore, a high-degree interpolation may be a poor predictor of the function between points, although the accuracy at the data points will be "perfect."

For n=3 points,

P(x) = \frac{(x-x_2)(x-x_3)}{(x_1-x_2)(x_1-x_3)}\, y_1 + \frac{(x-x_1)(x-x_3)}{(x_2-x_1)(x_2-x_3)}\, y_2 + \frac{(x-x_1)(x-x_2)}{(x_3-x_1)(x_3-x_2)}\, y_3
(4)
P'(x) = \frac{2x-x_2-x_3}{(x_1-x_2)(x_1-x_3)}\, y_1 + \frac{2x-x_1-x_3}{(x_2-x_1)(x_2-x_3)}\, y_2 + \frac{2x-x_1-x_2}{(x_3-x_1)(x_3-x_2)}\, y_3.
(5)

Note that the function P(x) passes through the points (x_i,y_i), as can be seen for the case n=3,

P(x_1) = \frac{(x_1-x_2)(x_1-x_3)}{(x_1-x_2)(x_1-x_3)}\, y_1 + \frac{(x_1-x_1)(x_1-x_3)}{(x_2-x_1)(x_2-x_3)}\, y_2 + \frac{(x_1-x_1)(x_1-x_2)}{(x_3-x_1)(x_3-x_2)}\, y_3 = y_1
(6)
P(x_2) = \frac{(x_2-x_2)(x_2-x_3)}{(x_1-x_2)(x_1-x_3)}\, y_1 + \frac{(x_2-x_1)(x_2-x_3)}{(x_2-x_1)(x_2-x_3)}\, y_2 + \frac{(x_2-x_1)(x_2-x_2)}{(x_3-x_1)(x_3-x_2)}\, y_3 = y_2
(7)
P(x_3) = \frac{(x_3-x_2)(x_3-x_3)}{(x_1-x_2)(x_1-x_3)}\, y_1 + \frac{(x_3-x_1)(x_3-x_3)}{(x_2-x_1)(x_2-x_3)}\, y_2 + \frac{(x_3-x_1)(x_3-x_2)}{(x_3-x_1)(x_3-x_2)}\, y_3 = y_3.
(8)

Generalizing to arbitrary n,

P(x_j) = \sum_{k=1}^{n} P_k(x_j) = \sum_{k=1}^{n} \delta_{jk}\, y_k = y_j.
(9)

The Lagrange interpolating polynomials can also be written using what Szegö (1975) called Lagrange's fundamental interpolating polynomials. Let

\pi(x) = \prod_{k=1}^{n}(x-x_k),
(10)
\pi'(x_j) = \left[\frac{d\pi}{dx}\right]_{x=x_j}
(11)
= \prod_{k=1,\, k\neq j}^{n}(x_j-x_k),
(12)

so that \pi(x) is an nth degree polynomial with zeros at x_1, \ldots, x_n. Then define the fundamental polynomials by

\pi_\nu(x) = \frac{\pi(x)}{\pi'(x_\nu)\,(x-x_\nu)},
(13)

which satisfy

\pi_\nu(x_\mu) = \delta_{\nu\mu},
(14)

where \delta_{\nu\mu} is the Kronecker delta. Now let y_1 = P(x_1), \ldots, y_n = P(x_n); then the expansion

P(x) = \sum_{k=1}^{n} \pi_k(x)\, y_k = \sum_{k=1}^{n} \frac{\pi(x)}{(x-x_k)\,\pi'(x_k)}\, y_k
(15)

gives the unique Lagrange interpolating polynomial assuming the values y_k at x_k. More generally, let d\alpha(x) be an arbitrary distribution on the interval [a,b], \{p_n(x)\} the associated orthogonal polynomials, and \ell_1(x), \ldots, \ell_n(x) the fundamental polynomials corresponding to the set of zeros of a polynomial P_n(x). Then

\int_a^b \ell_\nu(x)\,\ell_\mu(x)\, d\alpha(x) = \lambda_\mu\, \delta_{\nu\mu}
(16)

for \nu,\mu = 1, 2, \ldots, n, where the \lambda_\nu are Christoffel numbers.

Lagrange interpolating polynomials give no error estimate. A more conceptually straightforward method for calculating them is Neville's algorithm (see the sketch below).

Courtesy: mathworld.wolfram.com/LagrangeInterpolatingPolynomial.html
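
A minimal C++ sketch of Neville's algorithm for evaluating the interpolating polynomial at a single point t (illustrative only, not from the quoted source):

#include <cstddef>
#include <vector>

// Neville's algorithm: repeatedly combine interpolants of increasing degree.
// p[i] initially holds the degree-0 interpolant y_i; after the outer loop
// finishes, p[0] is the value at t of the polynomial through all n points.
double neville(const std::vector<double>& x, const std::vector<double>& y, double t) {
   std::vector<double> p = y;
   for (std::size_t m = 1; m < x.size(); ++m)
      for (std::size_t i = 0; i + m < x.size(); ++i)
         p[i] = ((t - x[i + m]) * p[i] + (x[i] - t) * p[i + 1]) / (x[i] - x[i + m]);
   return p[0];
}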

