Approximation of the definite integral of a function
In numerical analysis, an n-point Gaussian quadrature rule, named after Carl Friedrich Gauss,[1] is a quadrature rule constructed to yield an exact result for polynomials of degree 2n − 1 or less by a suitable choice of the nodes xi and weights wi for i = 1, ..., n.
The modern formulation using orthogonal polynomials was developed by Carl Gustav Jacobi in 1826.[2] The most common domain of integration for such a rule is taken as [−1, 1], so the rule is stated as
$$\int_{-1}^{1} f(x)\,dx \approx \sum_{i=1}^{n} w_i\, f(x_i),$$
which is exact for polynomials of degree 2n − 1 or less. This exact rule is known as the Gauss–Legendre quadrature rule. The quadrature rule will only be an accurate approximation to the integral above if f (x) is well-approximated by a polynomial of degree 2n − 1 or less on [−1, 1].
The Gauss–Legendre quadrature rule is not typically used for integrable functions with endpoint singularities. Instead, if the integrand can be written as
$$f(x) = (1 - x)^\alpha (1 + x)^\beta g(x), \qquad \alpha, \beta > -1,$$
where g(x) is well-approximated by a low-degree polynomial, then alternative nodes xi′ and weights wi′ will usually give more accurate quadrature rules. These are known as Gauss–Jacobi quadrature rules, i.e.,
$$\int_{-1}^{1} f(x)\,dx = \int_{-1}^{1} (1 - x)^\alpha (1 + x)^\beta g(x)\,dx \approx \sum_{i=1}^{n} w_i'\, g(x_i').$$
Common weights include $\frac{1}{\sqrt{1 - x^2}}$ (Chebyshev–Gauss) and $\sqrt{1 - x^2}$. One may also want to integrate over semi-infinite (Gauss–Laguerre quadrature) and infinite intervals (Gauss–Hermite quadrature).
It can be shown (see Press et al., or Stoer and Bulirsch) that the quadrature nodes xi are the roots of a polynomial belonging to a class of orthogonal polynomials (the class orthogonal with respect to a weighted inner-product). This is a key observation for computing Gauss quadrature nodes and weights.
For the simplest integration problem stated above, i.e., f(x) is well-approximated by polynomials on [−1, 1], the associated orthogonal polynomials are Legendre polynomials, denoted by Pn(x). With the n-th polynomial normalized to give Pn(1) = 1, the i-th Gauss node, xi, is the i-th root of Pn and the weights are given by the formula[3]
$$w_i = \frac{2}{\left(1 - x_i^2\right)\left[P_n'(x_i)\right]^2}.$$
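As a numerical cross-check, a minimal NumPy sketch (using the library's leggauss, legder and legval routines) compares this formula against tabulated Gauss–Legendre weights:

```python
# Sketch: verify w_i = 2 / ((1 - x_i^2) * P_n'(x_i)^2) against NumPy's tabulated rule.
import numpy as np
from numpy.polynomial import legendre as L

n = 5
x, w = L.leggauss(n)                 # Gauss-Legendre nodes and weights on [-1, 1]

c = np.zeros(n + 1); c[n] = 1.0      # coefficient vector selecting P_n
dPn = L.legval(x, L.legder(c))       # P_n'(x_i)
w_formula = 2.0 / ((1.0 - x**2) * dPn**2)

print(np.allclose(w, w_formula))     # expected: True
```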
Some low-order quadrature rules are tabulated below (over interval [−1, 1], see the section below for other intervals).
Number of points, n | Points, xi | Weights, wi
---|---|---
1 | 0 | 2
2 | ±0.57735... | 1
3 | 0 | 0.888889...
3 | ±0.774597... | 0.555556...
4 | ±0.339981... | 0.652145...
4 | ±0.861136... | 0.347855...
5 | 0 | 0.568889...
5 | ±0.538469... | 0.478629...
5 | ±0.90618... | 0.236927...
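A minimal sketch illustrating the defining property of these rules, assuming NumPy's leggauss for the nodes and weights: an n-point rule reproduces the integral of a random polynomial of degree 2n − 1 exactly.

```python
# Sketch: an n-point Gauss-Legendre rule is exact for polynomials of degree <= 2n - 1.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)
for n in (1, 2, 3, 4, 5):
    x, w = L.leggauss(n)
    coeffs = rng.normal(size=2 * n)            # random polynomial of degree 2n - 1
    poly = np.polynomial.Polynomial(coeffs)
    exact = poly.integ()(1.0) - poly.integ()(-1.0)   # exact integral over [-1, 1]
    quad = np.dot(w, poly(x))                        # quadrature sum
    print(n, np.isclose(quad, exact))                # expected: True for every n
```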
An integral over [a, b] must be changed into an integral over [−1, 1] before applying the Gaussian quadrature rule. This change of interval can be done in the following way:
$$\int_a^b f(x)\,dx = \int_{-1}^{1} f\!\left(\frac{b - a}{2}\,\xi + \frac{a + b}{2}\right) \frac{dx}{d\xi}\,d\xi$$
with
$$\frac{dx}{d\xi} = \frac{b - a}{2}.$$
Applying the n-point Gaussian quadrature rule then results in the following approximation:
$$\int_a^b f(x)\,dx \approx \frac{b - a}{2} \sum_{i=1}^{n} w_i\, f\!\left(\frac{b - a}{2}\,\xi_i + \frac{a + b}{2}\right).$$
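A minimal sketch of this change of variables (the helper name gauss_legendre_ab is illustrative, not a standard routine):

```python
# Sketch: apply an n-point Gauss-Legendre rule on an arbitrary interval [a, b].
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss_legendre_ab(f, a, b, n):
    """Approximate the integral of f over [a, b] with n Gauss-Legendre points."""
    xi, w = leggauss(n)                       # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)    # map nodes to [a, b]
    return 0.5 * (b - a) * np.dot(w, f(x))    # scale by dx/dxi = (b - a)/2

# Example: the integral of sin over [0, pi] is exactly 2.
print(gauss_legendre_ab(np.sin, 0.0, np.pi, 5))   # close to 2
```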
Use the two-point Gauss quadrature rule to approximate the distance in meters covered by a rocket from t = 8 s to t = 30 s as given by
$$x = \int_{8}^{30} \left(2000 \ln\!\left[\frac{140000}{140000 - 2100 t}\right] - 9.8 t\right) dt.$$
Change the limits so that one can use the weights and abscissae given in the table above. Also, find the absolute relative true error. The true value is given as 11061.34 m.
Solution
First, changing the limits of integration from [8, 30] to [−1, 1] gives
$$x = \int_{8}^{30} f(t)\,dt = \frac{30 - 8}{2} \int_{-1}^{1} f\!\left(\frac{30 - 8}{2}\,\xi + \frac{30 + 8}{2}\right) d\xi = 11 \int_{-1}^{1} f(11\xi + 19)\,d\xi.$$
Next, get the weighting factors and function argument values from the table above for the two-point rule:
$$w_1 = 1, \quad \xi_1 = -0.5773503, \qquad w_2 = 1, \quad \xi_2 = 0.5773503.$$
Now we can use the Gauss quadrature formula:
$$11 \int_{-1}^{1} f(11\xi + 19)\,d\xi \approx 11\left[w_1 f(11\xi_1 + 19) + w_2 f(11\xi_2 + 19)\right] = 11\left[f(12.64915) + f(25.35085)\right] \approx 11\left[296.83 + 708.48\right] \approx 11058.44 \text{ m},$$
since
$$f(12.64915) = 2000 \ln\!\left[\frac{140000}{140000 - 2100(12.64915)}\right] - 9.8(12.64915) \approx 296.83,$$
$$f(25.35085) = 2000 \ln\!\left[\frac{140000}{140000 - 2100(25.35085)}\right] - 9.8(25.35085) \approx 708.48.$$
Given that the true value is 11061.34 m, the absolute relative true error, $\left|\varepsilon_t\right|$, is
$$\left|\varepsilon_t\right| = \left|\frac{11061.34 - 11058.44}{11061.34}\right| \times 100\% \approx 0.0262\%.$$
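A minimal sketch reproducing the arithmetic of this example, assuming the integrand and limits stated above:

```python
# Sketch: two-point Gauss quadrature estimate for the rocket distance problem above.
import numpy as np
from numpy.polynomial.legendre import leggauss

def v(t):
    # Rocket velocity in m/s, as given in the problem statement above.
    return 2000.0 * np.log(140000.0 / (140000.0 - 2100.0 * t)) - 9.8 * t

a, b = 8.0, 30.0
xi, w = leggauss(2)                          # two-point rule: xi = +/- 1/sqrt(3), w = 1
t = 0.5 * (b - a) * xi + 0.5 * (a + b)       # mapped arguments, about 12.649 and 25.351
approx = 0.5 * (b - a) * np.dot(w, v(t))     # about 11058.44 m

true_value = 11061.34
rel_err = abs((true_value - approx) / true_value) * 100.0   # about 0.026 %
print(approx, rel_err)
```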
The integration problem can be expressed in a slightly more general way by introducing a positive weight function ω into the integrand, and allowing an interval other than [−1, 1]. That is, the problem is to calculate
$$\int_a^b \omega(x)\, f(x)\,dx$$
for some choices of a, b, and ω. For a = −1, b = 1, and ω(x) = 1, the problem is the same as that considered above. Other choices lead to other integration rules. Some of these are tabulated below. Equation numbers are given for Abramowitz and Stegun (A & S).
Interval | ω(x) | Orthogonal polynomials | A & S | For more information, see ...
---|---|---|---|---
[−1, 1] | 1 | Legendre polynomials | 25.4.29 | § Gauss–Legendre quadrature
(−1, 1) | (1 − x)^α (1 + x)^β,  α, β > −1 | Jacobi polynomials | 25.4.33 (β = 0) | Gauss–Jacobi quadrature
(−1, 1) | 1/√(1 − x²) | Chebyshev polynomials (first kind) | 25.4.38 | Chebyshev–Gauss quadrature
[−1, 1] | √(1 − x²) | Chebyshev polynomials (second kind) | 25.4.40 | Chebyshev–Gauss quadrature
[0, ∞) | e^(−x) | Laguerre polynomials | 25.4.45 | Gauss–Laguerre quadrature
[0, ∞) | x^α e^(−x),  α > −1 | Generalized Laguerre polynomials | | Gauss–Laguerre quadrature
(−∞, ∞) | e^(−x²) | Hermite polynomials | 25.4.46 | Gauss–Hermite quadrature
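As an illustration, NumPy provides node/weight generators for several of these families; a minimal sketch using hermgauss and laggauss:

```python
# Sketch: rules for other weight functions, e.g. Gauss-Hermite with weight exp(-x^2).
import numpy as np
from numpy.polynomial.hermite import hermgauss
from numpy.polynomial.laguerre import laggauss

x, w = hermgauss(10)
# integral of exp(-x^2) * 1 over the real line equals sqrt(pi)
print(np.dot(w, np.ones_like(x)), np.sqrt(np.pi))

x, w = laggauss(10)
# integral of exp(-x) * x over [0, inf) equals 1
print(np.dot(w, x))
```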
Let pn be a nontrivial polynomial of degree n such that
$$\int_a^b \omega(x)\, x^k\, p_n(x)\,dx = 0, \qquad \text{for all } k = 0, 1, \ldots, n - 1.$$
Note that this will be true for all the orthogonal polynomials above, because each pn is constructed to be orthogonal to the other polynomials pj for j < n, and x^k is in the span of that set.
If we pick the n nodes xi to be the zeros of pn, then there exist n weights wi which make the Gaussian quadrature computed integral exact for all polynomials h(x) of degree 2n − 1 or less. Furthermore, all these nodes xi will lie in the open interval (a, b).[4]
To prove the first part of this claim, let h(x) be any polynomial of degree 2n − 1 or less. Divide it by the orthogonal polynomial pn to get
$$h(x) = p_n(x)\, q(x) + r(x),$$
where q(x) is the quotient, of degree n − 1 or less (because the sum of its degree and that of the divisor pn must equal that of the dividend), and r(x) is the remainder, also of degree n − 1 or less (because the degree of the remainder is always less than that of the divisor). Since pn is by assumption orthogonal to all monomials of degree less than n, it must be orthogonal to the quotient q(x). Therefore
$$\int_a^b \omega(x)\, h(x)\,dx = \int_a^b \omega(x)\, \big(p_n(x)\, q(x) + r(x)\big)\,dx = \int_a^b \omega(x)\, r(x)\,dx.$$
Since the remainder r(x) is of degree n − 1 or less, we can interpolate it exactly using n interpolation points with Lagrange polynomials li(x), where
$$l_i(x) = \prod_{\substack{j = 1 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}.$$
We have
$$r(x) = \sum_{i=1}^{n} l_i(x)\, r(x_i).$$
Then its integral will equal
$$\int_a^b \omega(x)\, r(x)\,dx = \int_a^b \omega(x) \sum_{i=1}^{n} l_i(x)\, r(x_i)\,dx = \sum_{i=1}^{n} r(x_i) \int_a^b \omega(x)\, l_i(x)\,dx = \sum_{i=1}^{n} w_i\, r(x_i),$$
where wi, the weight associated with the node xi, is defined to equal the weighted integral of li(x) (see below for other formulas for the weights). But all the xi are roots of pn, so the division formula above tells us that
$$h(x_i) = p_n(x_i)\, q(x_i) + r(x_i) = r(x_i)$$
for all i. Thus we finally have
$$\int_a^b \omega(x)\, h(x)\,dx = \int_a^b \omega(x)\, r(x)\,dx = \sum_{i=1}^{n} w_i\, r(x_i) = \sum_{i=1}^{n} w_i\, h(x_i).$$
This proves that for any polynomial h(x) of degree 2n − 1 or less, its integral is given exactly by the Gaussian quadrature sum.
To prove the second part of the claim, consider the factored form of the polynomial pn. Any complex conjugate roots will yield a quadratic factor that is either strictly positive or strictly negative over the entire real line. Any factors for roots outside the interval from a to b will not change sign over that interval. Finally, for factors corresponding to roots xi inside the interval from a to b that are of odd multiplicity, multiply pn by one more factor to make a new polynomial
$$p_n(x) \prod_{i} (x - x_i),$$
where the product runs over those odd-multiplicity roots. This polynomial cannot change sign over the interval from a to b because all its roots there are now of even multiplicity. So the integral
$$\int_a^b \omega(x)\, p_n(x) \prod_{i} (x - x_i)\,dx \neq 0,$$
since the weight function ω(x) is always non-negative. But pn is orthogonal to all polynomials of degree n − 1 or less, so the degree of the product $\prod_i (x - x_i)$ must be at least n. Therefore pn has n distinct roots, all real, in the interval from a to b.
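A minimal numerical illustration of this claim for the Legendre case, using NumPy's legroots:

```python
# Sketch: the zeros of P_n are n distinct real numbers in the open interval (-1, 1).
import numpy as np
from numpy.polynomial import legendre as L

n = 7
c = np.zeros(n + 1); c[n] = 1.0          # coefficient vector selecting P_n
roots = np.real_if_close(L.legroots(c))  # roots of P_n (imaginary parts, if any, are noise)

print(np.isrealobj(roots))                        # expected: True (all roots real)
print(np.all(np.abs(roots) < 1.0))                # expected: True (all inside (-1, 1))
print(len(np.unique(np.round(roots, 12))) == n)   # expected: True (n distinct roots)
```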
The weights can be expressed as
$$w_i = \frac{a_n}{a_{n-1}} \cdot \frac{\int_a^b \omega(x)\, p_{n-1}(x)^2\,dx}{p_n'(x_i)\, p_{n-1}(x_i)} \qquad (1)$$
where $a_k$ is the coefficient of $x^k$ in $p_k(x)$. To prove this, note that using Lagrange interpolation one can express r(x) in terms of $r(x_i)$ as
$$r(x) = \sum_{i=1}^{n} r(x_i) \prod_{\substack{j = 1 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}$$
because r(x) has degree less than n and is thus fixed by the values it attains at n different points. Multiplying both sides by ω(x) and integrating from a to b yields
$$\int_a^b \omega(x)\, r(x)\,dx = \sum_{i=1}^{n} r(x_i) \int_a^b \omega(x) \prod_{\substack{j = 1 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}\,dx.$$
The weights wi are thus given by
$$w_i = \int_a^b \omega(x) \prod_{\substack{j = 1 \\ j \neq i}}^{n} \frac{x - x_j}{x_i - x_j}\,dx.$$
This integral expression for $w_i$ can be expressed in terms of the orthogonal polynomials $p_n(x)$ and $p_{n-1}(x)$ as follows.
We can write
$$\prod_{\substack{j = 1 \\ j \neq i}}^{n} (x - x_j) = \frac{\prod_{j=1}^{n} (x - x_j)}{x - x_i} = \frac{p_n(x)}{a_n\,(x - x_i)},$$
where $a_n$ is the coefficient of $x^n$ in $p_n(x)$. Taking the limit of x to $x_i$ yields, using L'Hôpital's rule,
$$\prod_{\substack{j = 1 \\ j \neq i}}^{n} (x_i - x_j) = \frac{p_n'(x_i)}{a_n}.$$
We can thus write the integral expression for the weights as
$$w_i = \frac{1}{p_n'(x_i)} \int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\,dx. \qquad (2)$$
In the integrand, writing
$$\frac{1}{x - x_i} = \frac{1 - \left(\frac{x}{x_i}\right)^k}{x - x_i} + \left(\frac{x}{x_i}\right)^k \frac{1}{x - x_i}$$
yields
$$\int_a^b \omega(x)\, \frac{x^k\, p_n(x)}{x - x_i}\,dx = x_i^k \int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\,dx,$$
provided $k \le n$, because $\frac{1 - (x/x_i)^k}{x - x_i}$ is a polynomial of degree k − 1 which is then orthogonal to $p_n(x)$. So, if q(x) is a polynomial of at most nth degree we have
$$\int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\,dx = \frac{1}{q(x_i)} \int_a^b \omega(x)\, \frac{q(x)\, p_n(x)}{x - x_i}\,dx.$$
We can evaluate the integral on the right hand side for $q(x) = p_{n-1}(x)$ as follows. Because $\frac{p_n(x)}{x - x_i}$ is a polynomial of degree n − 1, we have
$$\frac{p_n(x)}{x - x_i} = a_n x^{n-1} + s(x),$$
where s(x) is a polynomial of degree n − 2. Since s(x) is orthogonal to $p_{n-1}(x)$ we have
$$\int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\,dx = \frac{1}{p_{n-1}(x_i)} \int_a^b \omega(x)\, p_{n-1}(x)\, a_n x^{n-1}\,dx.$$
We can then write
$$x^{n-1} = \left(x^{n-1} - \frac{p_{n-1}(x)}{a_{n-1}}\right) + \frac{p_{n-1}(x)}{a_{n-1}}.$$
The term in the brackets is a polynomial of degree n − 2, which is therefore orthogonal to $p_{n-1}(x)$. The integral can thus be written as
$$\int_a^b \omega(x)\, \frac{p_n(x)}{x - x_i}\,dx = \frac{a_n}{a_{n-1}\, p_{n-1}(x_i)} \int_a^b \omega(x)\, p_{n-1}(x)^2\,dx.$$
According to equation (2), the weights are obtained by dividing this by $p_n'(x_i)$ and that yields the expression in equation (1).
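As a concrete instance of equation (1): with the standard Legendre normalization $P_n(1) = 1$, one has $a_n/a_{n-1} = (2n - 1)/n$ and $\int_{-1}^{1} P_{n-1}(x)^2\,dx = 2/(2n - 1)$, so the weights reduce to
$$w_i = \frac{2}{n\, P_{n-1}(x_i)\, P_n'(x_i)},$$
which can be checked against the Gauss–Legendre weights tabulated earlier.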
wi can also be expressed in terms of the orthogonal polynomials $p_n(x)$ and now $p_{n+1}(x)$: in the three-term recurrence relation $p_{n+1}(x_i) = (a + b x_i)\, p_n(x_i) + c\, p_{n-1}(x_i)$, the term with $p_n(x_i)$ vanishes because $x_i$ is a root of $p_n$, so $p_{n-1}(x_i)$ in Eq. (1) can be replaced by $\frac{1}{c}\, p_{n+1}(x_i)$.
Consider the following polynomial of degree 2n − 2:
$$f(x) = \prod_{\substack{j = 1 \\ j \neq i}}^{n} \frac{(x - x_j)^2}{(x_i - x_j)^2},$$
where, as above, the xj are the roots of the polynomial $p_n(x)$. Clearly $f(x_j) = \delta_{ij}$. Since the degree of f(x) is less than 2n − 1, the Gaussian quadrature formula involving the weights and nodes obtained from $p_n(x)$ applies. Since $f(x_j) = 0$ for j not equal to i, we have
$$\int_a^b \omega(x)\, f(x)\,dx = \sum_{j=1}^{n} w_j\, f(x_j) = w_i.$$
Since both f(x) and ω(x) are non-negative functions, it follows that $w_i > 0$.
There are many algorithms for computing the nodes xi and weights wi of Gaussian quadrature rules. The most popular are the Golub–Welsch algorithm requiring O(n²) operations, Newton's method for solving $p_n(x) = 0$ using the three-term recurrence for evaluation requiring O(n²) operations, and asymptotic formulas for large n requiring O(n) operations.
Orthogonal polynomials $p_r$ with $(p_r, p_s) = 0$ for $r \ne s$ for a scalar product $(\cdot\,,\cdot)$, degree $(p_r) = r$ and leading coefficient one (i.e. monic orthogonal polynomials) satisfy the recurrence relation
$$p_{r+1}(x) = (x - a_{r,r})\, p_r(x) - a_{r,r-1}\, p_{r-1}(x) - \cdots - a_{r,0}\, p_0(x)$$
and scalar product defined
$$(f(x), g(x)) = \int_a^b \omega(x)\, f(x)\, g(x)\,dx$$
for $r = 0, 1, \ldots, n - 1$ where n is the maximal degree which can be taken to be infinity, and where $a_{r,s} = \frac{(x p_r, p_s)}{(p_s, p_s)}$. First of all, the polynomials defined by the recurrence relation starting with $p_0(x) = 1$ have leading coefficient one and correct degree. Given the starting point by $p_0$, the orthogonality of $p_r$ can be shown by induction. For $r = s = 0$ one has
$$(p_1, p_0) = (x p_0, p_0) - a_{0,0}\,(p_0, p_0) = (x p_0, p_0) - (x p_0, p_0) = 0.$$
Now if $p_0, p_1, \ldots, p_r$ are orthogonal, then also $p_{r+1}$, because in
$$(p_{r+1}, p_s) = (x p_r, p_s) - a_{r,r}\,(p_r, p_s) - a_{r,r-1}\,(p_{r-1}, p_s) - \cdots - a_{r,0}\,(p_0, p_s)$$
all scalar products vanish except for the first one and the one where $p_s$ meets the same orthogonal polynomial. Therefore,
$$(p_{r+1}, p_s) = (x p_r, p_s) - a_{r,s}\,(p_s, p_s) = (x p_r, p_s) - (x p_r, p_s) = 0.$$
However, if the scalar product satisfies $(x f, g) = (f, x g)$ (which is the case for Gaussian quadrature), the recurrence relation reduces to a three-term recurrence relation: for $s < r - 1$, $x p_s$ is a polynomial of degree less than or equal to r − 1. On the other hand, $p_r$ is orthogonal to every polynomial of degree less than or equal to r − 1. Therefore, one has $(x p_r, p_s) = (p_r, x p_s) = 0$ and $a_{r,s} = 0$ for s < r − 1. The recurrence relation then simplifies to
$$p_{r+1}(x) = (x - a_{r,r})\, p_r(x) - a_{r,r-1}\, p_{r-1}(x)$$
or
$$p_{r+1}(x) = (x - a_r)\, p_r(x) - b_r\, p_{r-1}(x)$$
(with the convention $p_{-1}(x) \equiv 0$) where
$$a_r := \frac{(x p_r, p_r)}{(p_r, p_r)}, \qquad b_r := \frac{(x p_r, p_{r-1})}{(p_{r-1}, p_{r-1})} = \frac{(p_r, p_r)}{(p_{r-1}, p_{r-1})}$$
(the last because of $(x p_r, p_{r-1}) = (p_r, x p_{r-1}) = (p_r, p_r)$, since $x p_{r-1}$ differs from $p_r$ by a degree less than r).
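For example, for the Legendre weight ω(x) = 1 on [−1, 1], symmetry gives $a_r = 0$, and the monic polynomials satisfy
$$b_r = \frac{r^2}{4r^2 - 1}, \qquad p_{r+1}(x) = x\, p_r(x) - \frac{r^2}{4r^2 - 1}\, p_{r-1}(x);$$
these are the coefficients that enter the Jacobi matrix described below.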
The three-term recurrence relation can be written in matrix form $J \tilde{P} = x \tilde{P} - p_n(x)\, \mathbf{e}_n$, where $\tilde{P} = \begin{bmatrix} p_0(x) & p_1(x) & \cdots & p_{n-1}(x) \end{bmatrix}^\mathsf{T}$, $\mathbf{e}_n$ is the n-th standard basis vector, i.e., $\mathbf{e}_n = \begin{bmatrix} 0 & \cdots & 0 & 1 \end{bmatrix}^\mathsf{T}$, and J is the following tridiagonal matrix, called the Jacobi matrix:
$$J = \begin{bmatrix}
a_0 & 1 & 0 & \cdots & 0 \\
b_1 & a_1 & 1 & \ddots & \vdots \\
0 & b_2 & a_2 & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & 1 \\
0 & \cdots & 0 & b_{n-1} & a_{n-1}
\end{bmatrix}.$$
The zeros $x_j$ of the polynomial $p_n(x)$, which are used as nodes for the Gaussian quadrature, can be found by computing the eigenvalues of this matrix. This procedure is known as the Golub–Welsch algorithm.
For computing the weights and nodes, it is preferable to consider the symmetric tridiagonal matrix $\mathcal{J}$ with elements
$$\mathcal{J}_{i,i} = J_{i,i} = a_{i-1}, \quad i = 1, \ldots, n, \qquad
\mathcal{J}_{i-1,i} = \mathcal{J}_{i,i-1} = \sqrt{J_{i,i-1}\, J_{i-1,i}} = \sqrt{b_{i-1}}, \quad i = 2, \ldots, n.$$
That is,
$$\mathcal{J} = \begin{bmatrix}
a_0 & \sqrt{b_1} & 0 & \cdots & 0 \\
\sqrt{b_1} & a_1 & \sqrt{b_2} & \ddots & \vdots \\
0 & \sqrt{b_2} & a_2 & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & \sqrt{b_{n-1}} \\
0 & \cdots & 0 & \sqrt{b_{n-1}} & a_{n-1}
\end{bmatrix}.$$
J and $\mathcal{J}$ are similar matrices and therefore have the same eigenvalues (the nodes). The weights can be computed from the corresponding eigenvectors: if $\phi^{(j)}$ is a normalized eigenvector (i.e., an eigenvector with Euclidean norm equal to one) associated with the eigenvalue xj, the corresponding weight can be computed from the first component of this eigenvector, namely:
$$w_j = \mu_0 \left(\phi_1^{(j)}\right)^2,$$
where $\mu_0$ is the integral of the weight function,
$$\mu_0 = \int_a^b \omega(x)\,dx.$$
See, for instance, (Gil, Segura & Temme 2007) for further details.
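A minimal sketch of this procedure for the Legendre weight, where $a_k = 0$, $b_k = k^2/(4k^2 - 1)$, and $\mu_0 = 2$:

```python
# Sketch: Golub-Welsch for Gauss-Legendre, via the symmetric tridiagonal Jacobi matrix.
import numpy as np

def gauss_legendre_golub_welsch(n):
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)       # sqrt(b_k) for the Legendre weight
    T = np.diag(beta, 1) + np.diag(beta, -1)   # a_k = 0 on the diagonal by symmetry
    nodes, vecs = np.linalg.eigh(T)            # eigenvalues = quadrature nodes
    mu0 = 2.0                                  # integral of the weight: int_{-1}^{1} 1 dx
    weights = mu0 * vecs[0, :]**2              # first components of normalized eigenvectors
    return nodes, weights

nodes, weights = gauss_legendre_golub_welsch(5)
x_ref, w_ref = np.polynomial.legendre.leggauss(5)
print(np.allclose(nodes, x_ref), np.allclose(weights, w_ref))   # expected: True True
```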
The error of a Gaussian quadrature rule can be stated as follows.[5] For an integrand which has 2n continuous derivatives,
$$\int_a^b \omega(x)\, f(x)\,dx - \sum_{i=1}^{n} w_i\, f(x_i) = \frac{f^{(2n)}(\xi)}{(2n)!}\,(p_n, p_n)$$
for some ξ in (a, b), where pn is the monic (i.e. the leading coefficient is 1) orthogonal polynomial of degree n and where
$$(p_n, p_n) = \int_a^b \omega(x)\left[p_n(x)\right]^2\,dx.$$
In the important special case of ω(x) = 1, we have the error estimate[6]
$$\int_a^b f(x)\,dx - \sum_{i=1}^{n} w_i\, f(x_i) = \frac{(b - a)^{2n+1}\,(n!)^4}{(2n + 1)\left[(2n)!\right]^3}\, f^{(2n)}(\xi), \qquad a < \xi < b.$$
Stoer and Bulirsch remark that this error estimate is inconvenient in practice, since it may be difficult to estimate the order 2n derivative, and furthermore the actual error may be much less than a bound established by the derivative. Another approach is to use two Gaussian quadrature rules of different orders, and to estimate the error as the difference between the two results. For this purpose, Gauss–Kronrod quadrature rules can be useful.
If the interval [a, b] is subdivided, the Gauss evaluation points of the new subintervals never coincide with the previous evaluation points (except at the interval midpoint for rules with an odd number of points), and thus the integrand must be evaluated at every point. Gauss–Kronrod rules are extensions of Gauss quadrature rules generated by adding n + 1 points to an n-point rule in such a way that the resulting rule is of order 2n + 1. This allows for computing higher-order estimates while re-using the function values of a lower-order estimate. The difference between a Gauss quadrature rule and its Kronrod extension is often used as an estimate of the approximation error.
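A minimal sketch of this idea, comparing two plain Gauss rules of different order (unlike a true Gauss–Kronrod pair, the higher-order rule here does not reuse the lower-order nodes):

```python
# Sketch: crude error estimate from the difference of two Gauss rules of different order.
import numpy as np
from numpy.polynomial.legendre import leggauss

def gauss(f, a, b, n):
    xi, w = leggauss(n)
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)
    return 0.5 * (b - a) * np.dot(w, f(x))

f = np.exp
low, high = gauss(f, 0.0, 1.0, 5), gauss(f, 0.0, 1.0, 10)
estimated_error = abs(high - low)            # used as an error indicator for 'low'
actual_error = abs((np.e - 1.0) - low)       # exact integral of exp on [0, 1] is e - 1
print(estimated_error, actual_error)
```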
Also known as Lobatto quadrature,[7] named after Dutch mathematician Rehuel Lobatto. It is similar to Gaussian quadrature with the following differences:

1. The integration points include the end points of the integration interval.
2. It is accurate for polynomials up to degree 2n − 3, where n is the number of integration points.

Lobatto quadrature of function f(x) on interval [−1, 1]:
$$\int_{-1}^{1} f(x)\,dx \approx \frac{2}{n(n-1)}\left[f(1) + f(-1)\right] + \sum_{i=2}^{n-1} w_i\, f(x_i).$$
Abscissas: xi is the (i − 1)-st zero of $P'_{n-1}(x)$; here $P_m(x)$ denotes the standard Legendre polynomial of m-th degree and the dash denotes the derivative.
Weights:
$$w_i = \frac{2}{n(n-1)\left[P_{n-1}(x_i)\right]^2}, \qquad x_i \neq \pm 1.$$
Remainder:
$$R_n = \frac{-n\,(n-1)^3\, 2^{2n-1}\left[(n-2)!\right]^4}{(2n-1)\left[(2n-2)!\right]^3}\, f^{(2n-2)}(\xi), \qquad -1 < \xi < 1.$$
Some of the weights are:

Number of points, n | Points, xi | Weights, wi
---|---|---
3 | 0 | 4/3
3 | ±1 | 1/3
4 | ±√(1/5) | 5/6
4 | ±1 | 1/6
5 | 0 | 32/45
5 | ±√(3/7) | 49/90
5 | ±1 | 1/10
An adaptive variant of this algorithm with 2 interior nodes[9] is found in GNU Octave and MATLAB as quadl and integrate.[10][11]
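A minimal check of the five-point Lobatto rule tabulated above, which should integrate x⁶ on [−1, 1] exactly (degree of exactness 2n − 3 = 7):

```python
# Sketch: the 5-point Lobatto rule checked on x^6, whose integral over [-1, 1] is 2/7.
import numpy as np

nodes = np.array([-1.0, -np.sqrt(3.0 / 7.0), 0.0, np.sqrt(3.0 / 7.0), 1.0])
weights = np.array([1.0 / 10.0, 49.0 / 90.0, 32.0 / 45.0, 49.0 / 90.0, 1.0 / 10.0])

approx = np.dot(weights, nodes**6)     # Lobatto estimate of the integral of x^6
exact = 2.0 / 7.0                      # true value
print(np.isclose(approx, exact))       # expected: True
```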