In multivariable calculus, an iterated limit is a limit of a sequence or a limit of a function in the form
$\lim_{m\to\infty}\lim_{n\to\infty}a_{n,m}=\lim_{m\to\infty}\left(\lim_{n\to\infty}a_{n,m}\right),$
$\lim_{y\to b}\lim_{x\to a}f(x,y)=\lim_{y\to b}\left(\lim_{x\to a}f(x,y)\right),$
or other similar forms.
An iterated limit is only defined for an expression whose value depends on at least two variables. To evaluate such a limit, one takes the limiting process as one of the two variables approaches some number, getting an expression whose value depends only on the other variable, and then one takes the limit as the other variable approaches some number.
This section introduces definitions of iterated limits in two variables. These may generalize easily to multiple variables.
For each $n,m\in\mathbb{N}$, let $a_{n,m}\in\mathbb{R}$ be a real double sequence. Then there are two forms of iterated limits, namely
$\lim_{m\to\infty}\lim_{n\to\infty}a_{n,m}\quad\text{and}\quad\lim_{n\to\infty}\lim_{m\to\infty}a_{n,m}.$
For example, let
$a_{n,m}=\frac{n}{n+m}.$
Then
$\lim_{m\to\infty}\lim_{n\to\infty}a_{n,m}=\lim_{m\to\infty}1=1,\quad\text{while}\quad\lim_{n\to\infty}\lim_{m\to\infty}a_{n,m}=\lim_{n\to\infty}0=0.$
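A quick numerical illustration of this example can be given in Python; the sketch below simply fixes one index at a large value and varies the other (the cutoff values are arbitrary):

```python
# Numerical sketch for a_{n,m} = n/(n + m): the two orders of taking the limits give 1 and 0.
def a(n, m):
    return n / (n + m)

# Approximations of lim_{n->inf} a_{n,m} for several m: all close to 1, so the outer limit in m is 1.
print([a(10**9, m) for m in (10, 10**3, 10**6)])

# Approximations of lim_{m->inf} a_{n,m} for several n: all close to 0, so the outer limit in n is 0.
print([a(n, 10**9) for n in (10, 10**3, 10**6)])
```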
Let $f:X\times Y\to\mathbb{R}$. Then there are also two forms of iterated limits, namely
$\lim_{y\to b}\lim_{x\to a}f(x,y)\quad\text{and}\quad\lim_{x\to a}\lim_{y\to b}f(x,y).$
For example, let $f:\mathbb{R}^2\setminus\{(0,0)\}\to\mathbb{R}$ such that
$f(x,y)=\frac{x^2}{x^2+y^2}.$
Then
$\lim_{y\to 0}\lim_{x\to 0}\frac{x^2}{x^2+y^2}=\lim_{y\to 0}0=0,\quad\text{while}\quad\lim_{x\to 0}\lim_{y\to 0}\frac{x^2}{x^2+y^2}=\lim_{x\to 0}1=1.$
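The same order dependence can be checked symbolically; the following sketch assumes the SymPy library is available:

```python
# Symbolic sketch with SymPy: the two iterated limits of f(x, y) = x^2/(x^2 + y^2) at (0, 0) differ.
from sympy import symbols, limit

x, y = symbols('x y', positive=True)
f = x**2 / (x**2 + y**2)

inner_in_x = limit(f, x, 0)      # equals 0 whenever y != 0
inner_in_y = limit(f, y, 0)      # equals 1 whenever x != 0
print(limit(inner_in_x, y, 0))   # 0: limit in x first, then in y
print(limit(inner_in_y, x, 0))   # 1: limit in y first, then in x
```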
The limit(s) for x and/or y can also be taken at infinity, i.e.,
$\lim_{y\to\infty}\lim_{x\to\infty}f(x,y)\quad\text{and}\quad\lim_{x\to\infty}\lim_{y\to\infty}f(x,y).$
For each $n\in\mathbb{N}$, let $f_n:X\to\mathbb{R}$ be a sequence of functions. Then there are two forms of iterated limits, namely
$\lim_{n\to\infty}\lim_{x\to a}f_n(x)\quad\text{and}\quad\lim_{x\to a}\lim_{n\to\infty}f_n(x).$
For example, let $f_n:(0,1)\to\mathbb{R}$ such that
$f_n(x)=x^n.$
Then
$\lim_{x\to 1}\lim_{n\to\infty}x^n=\lim_{x\to 1}0=0,\quad\text{while}\quad\lim_{n\to\infty}\lim_{x\to 1}x^n=\lim_{n\to\infty}1=1.$
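Numerically, the same effect shows up when one of the two limiting processes is truncated at a large but finite stage, as in the following Python sketch (the cutoffs are arbitrary):

```python
# Numerical sketch for f_n(x) = x^n on (0, 1): the order of the limits n -> inf and x -> 1 matters.
def f(n, x):
    return x ** n

# Taking n large first: for each fixed x < 1 the values are essentially 0, so the limit in x is 0.
print([f(10**6, x) for x in (0.9, 0.99, 0.999)])

# Taking x close to 1 first: for each fixed n the values are close to 1, so the limit in n is 1.
print([f(n, 0.9999999999) for n in (10, 100, 1000)])
```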
The limit in x can also be taken at infinity, i.e.,
$\lim_{n\to\infty}\lim_{x\to\infty}f_n(x)\quad\text{and}\quad\lim_{x\to\infty}\lim_{n\to\infty}f_n(x).$
For example, let $f_n:(0,\infty)\to\mathbb{R}$ such that
$f_n(x)=\frac{x}{x+n}.$
Then
$\lim_{n\to\infty}\lim_{x\to\infty}f_n(x)=\lim_{n\to\infty}1=1,\quad\text{while}\quad\lim_{x\to\infty}\lim_{n\to\infty}f_n(x)=\lim_{x\to\infty}0=0.$
Note that the limit in n is taken discretely, while the limit in x is taken continuously.
This section introduces various definitions of limits in two variables. These may generalize easily to multiple variables.
For a double sequence $a_{n,m}$, there is another definition of limit, which is commonly referred to as the double limit, denoted by
$L=\lim_{n,m\to\infty}a_{n,m},$
which means that for all $\varepsilon>0$, there exists $N\in\mathbb{N}$ such that $n,m>N$ implies $|a_{n,m}-L|<\varepsilon$.[3]
The following theorem states the relationship between the double limit and the iterated limits.
Theorem 1. If $\lim_{n,m\to\infty}a_{n,m}=L$ exists, $\lim_{n\to\infty}a_{n,m}$ exists for each $m$, and $\lim_{m\to\infty}a_{n,m}$ exists for each $n$, then both iterated limits exist and
$\lim_{m\to\infty}\lim_{n\to\infty}a_{n,m}=\lim_{n\to\infty}\lim_{m\to\infty}a_{n,m}=L.$
Proof. By the existence of $\lim_{n,m\to\infty}a_{n,m}=L$, for any $\varepsilon>0$ there exists $N_1(\varepsilon)\in\mathbb{N}$ such that $n,m>N_1(\varepsilon)$ implies $|a_{n,m}-L|<\tfrac{\varepsilon}{2}$.
For each $m$, let $b_m=\lim_{n\to\infty}a_{n,m}$, which exists by assumption; then there exists $N_2(\varepsilon,m)\in\mathbb{N}$ such that $n>N_2(\varepsilon,m)$ implies $|a_{n,m}-b_m|<\tfrac{\varepsilon}{2}$.
Both of the above statements hold for $n>\max\bigl(N_1(\varepsilon),N_2(\varepsilon,m)\bigr)$ and $m>N_1(\varepsilon)$. Combining the two inequalities, for any $\varepsilon>0$ there exists $N_1(\varepsilon)\in\mathbb{N}$ such that for all $m>N_1(\varepsilon)$,
$|b_m-L|\le|b_m-a_{n,m}|+|a_{n,m}-L|<\varepsilon,$
which proves that $\lim_{m\to\infty}\lim_{n\to\infty}a_{n,m}=\lim_{m\to\infty}b_m=L$. Similarly, writing $c_n=\lim_{m\to\infty}a_{n,m}$, we prove $\lim_{n\to\infty}\lim_{m\to\infty}a_{n,m}=\lim_{n\to\infty}c_n=L$.
For example, let
$a_{n,m}=\frac{1}{n+m}.$
Since $\lim_{n,m\to\infty}a_{n,m}=0$, $\lim_{n\to\infty}a_{n,m}=0$ for each $m$, and $\lim_{m\to\infty}a_{n,m}=0$ for each $n$, we have
$\lim_{m\to\infty}\lim_{n\to\infty}a_{n,m}=\lim_{n\to\infty}\lim_{m\to\infty}a_{n,m}=0.$
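A numerical sketch of this example in Python (with arbitrary large cutoffs) is consistent with all of these limits being 0:

```python
# Numerical sketch for a_{n,m} = 1/(n + m): the double limit and both iterated limits are all 0.
def a(n, m):
    return 1.0 / (n + m)

print(a(10**6, 10**6))                       # ~5e-07: small once both indices are large
print([a(10**9, m) for m in (1, 10, 100)])   # approximations of lim_{n->inf} a_{n,m}: all near 0
print([a(n, 10**9) for n in (1, 10, 100)])   # approximations of lim_{m->inf} a_{n,m}: all near 0
```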
This theorem requires the single limits $\lim_{n\to\infty}a_{n,m}$ and $\lim_{m\to\infty}a_{n,m}$ to converge. This condition cannot be dropped. For example, consider
$a_{n,m}=\frac{(-1)^m}{n}.$
Then we may see that
$\lim_{n,m\to\infty}a_{n,m}=\lim_{m\to\infty}\lim_{n\to\infty}a_{n,m}=0,$
but $\lim_{n\to\infty}\lim_{m\to\infty}a_{n,m}$ does not exist.
This is because $\lim_{m\to\infty}a_{n,m}$ does not exist for any fixed $n$ in the first place.
For a two-variable function $f:X\times Y\to\mathbb{R}$, there are two other types of limits. One is the ordinary limit, denoted by
$L=\lim_{(x,y)\to(a,b)}f(x,y),$
which means that for all $\varepsilon>0$, there exists $\delta>0$ such that $0<\sqrt{(x-a)^2+(y-b)^2}<\delta$ implies $|f(x,y)-L|<\varepsilon$.[6]
For this limit to exist, f(x, y) can be made as close to L as desired along every possible path approaching the point (a, b). In this definition, the point (a, b) is excluded from the paths. Therefore, the value of f at the point (a, b), even if it is defined, does not affect the limit.
The other type is the double limit, denoted by
$L=\lim_{x\to a,\,y\to b}f(x,y),$
which means that for all $\varepsilon>0$, there exists $\delta>0$ such that $0<|x-a|<\delta$ and $0<|y-b|<\delta$ imply $|f(x,y)-L|<\varepsilon$.[7]
For this limit to exist, f(x, y) can be made as close to L as desired along every possible path approaching the point (a, b), except the lines x = a and y = b. In other words, the value of f along the lines x = a and y = b does not affect the limit. This is different from the ordinary limit, where only the point (a, b) is excluded. In this sense, the ordinary limit is a stronger notion than the double limit:
Theorem 2. If $\lim_{(x,y)\to(a,b)}f(x,y)=L$, then $\lim_{x\to a,\,y\to b}f(x,y)=L$.
Neither of these limits involves first taking one limit and then the other. This contrasts with iterated limits, where the limiting process is taken in the x-direction first, and then in the y-direction (or in the reverse order).
The following theorem states the relationship between the double limit and the iterated limits:
Theorem 3. If $\lim_{x\to a,\,y\to b}f(x,y)=L$ exists, $\lim_{x\to a}f(x,y)$ exists for each $y\ne b$ near $b$, and $\lim_{y\to b}f(x,y)$ exists for each $x\ne a$ near $a$, then both iterated limits exist and
$\lim_{y\to b}\lim_{x\to a}f(x,y)=\lim_{x\to a}\lim_{y\to b}f(x,y)=L.$
For example, let
$f(x,y)=\begin{cases}1&\text{for }xy\ne 0\\0&\text{for }xy=0.\end{cases}$
Since $\lim_{x\to 0,\,y\to 0}f(x,y)=1$, $\lim_{x\to 0}f(x,y)=1$ for each $y\ne 0$, and $\lim_{y\to 0}f(x,y)=1$ for each $x\ne 0$, we have
$\lim_{y\to 0}\lim_{x\to 0}f(x,y)=\lim_{x\to 0}\lim_{y\to 0}f(x,y)=1.$
(Note that in this example, the ordinary limit $\lim_{(x,y)\to(0,0)}f(x,y)$ does not exist.)
This theorem requires the single limits $\lim_{x\to a}f(x,y)$ and $\lim_{y\to b}f(x,y)$ to exist. This condition cannot be dropped. For example, consider
$f(x,y)=x\sin\left(\frac{1}{y}\right)$
near the point (0, 0). Then we may see that
$\lim_{x\to 0,\,y\to 0}f(x,y)=\lim_{y\to 0}\lim_{x\to 0}f(x,y)=0,$
but $\lim_{x\to 0}\lim_{y\to 0}f(x,y)$ does not exist.
This is because $\lim_{y\to 0}f(x,y)$ does not exist for $x$ near 0 in the first place.
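The failure of the inner limit can be seen numerically; in the Python sketch below, the values of $x\sin(1/y)$ for a fixed nonzero $x$ keep oscillating as $y\to 0$, while for a fixed nonzero $y$ they vanish as $x\to 0$:

```python
import math

# Numerical sketch for f(x, y) = x*sin(1/y) near (0, 0).
def f(x, y):
    return x * math.sin(1.0 / y)

# Fixed x = 0.5, y -> 0: the values oscillate inside [-0.5, 0.5] and never settle.
print([round(f(0.5, y), 3) for y in (1e-3, 1e-4, 1e-5, 1e-6)])

# Fixed y = 1e-3, x -> 0: the values shrink towards 0.
print([f(x, 1e-3) for x in (1e-2, 1e-4, 1e-6)])
```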
Combining Theorems 2 and 3, we have the following corollary:
Corollary. If $\lim_{(x,y)\to(a,b)}f(x,y)=L$ exists, $\lim_{x\to a}f(x,y)$ exists for each $y\ne b$ near $b$, and $\lim_{y\to b}f(x,y)$ exists for each $x\ne a$ near $a$, then both iterated limits exist and
$\lim_{y\to b}\lim_{x\to a}f(x,y)=\lim_{x\to a}\lim_{y\to b}f(x,y)=L.$
For a two-variable function $f:X\times Y\to\mathbb{R}$, we may also define the double limit at infinity,
$L=\lim_{x\to\infty,\,y\to\infty}f(x,y),$
which means that for all $\varepsilon>0$, there exists $M>0$ such that $x>M$ and $y>M$ imply $|f(x,y)-L|<\varepsilon$.
Similar definitions may be given for limits at negative infinity.
The following theorem states the relationship between the double limit at infinity and the iterated limits at infinity:
Theorem 4. If $\lim_{x\to\infty,\,y\to\infty}f(x,y)=L$ exists, $\lim_{x\to\infty}f(x,y)$ exists for each sufficiently large $y$, and $\lim_{y\to\infty}f(x,y)$ exists for each sufficiently large $x$, then both iterated limits exist and
$\lim_{y\to\infty}\lim_{x\to\infty}f(x,y)=\lim_{x\to\infty}\lim_{y\to\infty}f(x,y)=L.$
For example, let
$f(x,y)=\frac{x\sin y}{xy+y}.$
Since $\lim_{x\to\infty,\,y\to\infty}f(x,y)=0$, $\lim_{x\to\infty}f(x,y)=\frac{\sin y}{y}$ for each $y>0$, and $\lim_{y\to\infty}f(x,y)=0$ for each $x>0$, we have
$\lim_{y\to\infty}\lim_{x\to\infty}f(x,y)=\lim_{x\to\infty}\lim_{y\to\infty}f(x,y)=0.$
Again, this theorem requires the single limits $\lim_{x\to\infty}f(x,y)$ and $\lim_{y\to\infty}f(x,y)$ to exist. This condition cannot be dropped. For example, consider
$f(x,y)=\frac{\sin x}{y}.$
Then we may see that
$\lim_{x\to\infty,\,y\to\infty}f(x,y)=\lim_{x\to\infty}\lim_{y\to\infty}f(x,y)=0,$
but $\lim_{y\to\infty}\lim_{x\to\infty}f(x,y)$ does not exist.
This is because $\lim_{x\to\infty}f(x,y)$ does not exist for fixed $y$ in the first place.
The converses of Theorems 1, 3 and 4 do not hold, i.e., the existence of the iterated limits, even if they are equal, does not imply the existence of the double limit. A counter-example is
$f(x,y)=\frac{xy}{x^2+y^2}$
near the point (0, 0). On one hand,
$\lim_{y\to 0}\lim_{x\to 0}f(x,y)=\lim_{x\to 0}\lim_{y\to 0}f(x,y)=0.$
On the other hand, the double limit does not exist. This can be seen by taking the limit along the path (x, y) = (t, t) → (0,0), which gives
$\lim_{t\to 0}f(t,t)=\lim_{t\to 0}\frac{t^2}{2t^2}=\frac{1}{2},$
and along the path (x, y) = (t, t^2) → (0,0), which gives
$\lim_{t\to 0}f(t,t^2)=\lim_{t\to 0}\frac{t^3}{t^2+t^4}=0.$
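The path dependence can also be checked numerically, as in the following Python sketch:

```python
# Numerical sketch for f(x, y) = xy/(x^2 + y^2) near (0, 0): two paths give different values.
def f(x, y):
    return x * y / (x**2 + y**2)

ts = [1e-1, 1e-2, 1e-3, 1e-4]
print([f(t, t) for t in ts])      # along (t, t): every value is exactly 0.5
print([f(t, t**2) for t in ts])   # along (t, t^2): values behave like t and tend to 0
```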
In the examples above, we may see that interchanging limits may or may not give the same result. A sufficient condition for interchanging limits is given by the Moore-Osgood theorem.[8] The essence of the interchangeability depends on uniform convergence.
The following theorem allows us to interchange two limits of sequences.
Theorem (Moore-Osgood). Suppose $\lim_{n\to\infty}a_{n,m}=b_m$ uniformly in $m$, and $\lim_{m\to\infty}a_{n,m}=c_n$ for each $n$. Then $\lim_{m\to\infty}b_m$ and $\lim_{n\to\infty}c_n$ both exist and are equal to the double limit, i.e.,
$\lim_{m\to\infty}\lim_{n\to\infty}a_{n,m}=\lim_{n\to\infty}\lim_{m\to\infty}a_{n,m}=\lim_{n,m\to\infty}a_{n,m}.$
A corollary is about the interchangeability of infinite sum.
Similar results hold for multivariable functions.
Note that this theorem does not imply the existence of the ordinary limit $\lim_{(x,y)\to(a,b)}f(x,y)$. A counter-example exists near the point (0, 0).[10]
An important variation of the Moore-Osgood theorem is specifically for sequences of functions.
Theorem. Suppose $f_n(x)\to f(x)$ uniformly on a set $E$, and $\lim_{x\to a}f_n(x)=L_n$ exists for each $n$, where $a$ is a limit point of $E$. Then $\lim_{x\to a}f(x)$ and $\lim_{n\to\infty}L_n$ both exist and are equal, i.e.,
$\lim_{x\to a}\lim_{n\to\infty}f_n(x)=\lim_{n\to\infty}\lim_{x\to a}f_n(x).$
A corollary is the continuity theorem for uniform convergence as follows:
Corollary. If $f_n(x)\to f(x)$ uniformly on a set $E$ and each $f_n$ is continuous at a point $a\in E$, then $f$ is also continuous at $a$.
Another corollary is about the interchangeability of a limit and an infinite sum.
Corollary. If $\sum_{n=1}^{\infty}f_n(x)$ converges uniformly on a set $E$, and $\lim_{x\to a}f_n(x)=L_n$ exists for each $n$, where $a$ is a limit point of $E$, then
$\lim_{x\to a}\sum_{n=1}^{\infty}f_n(x)=\sum_{n=1}^{\infty}L_n.$
Consider the matrix with infinitely many entries
$\begin{bmatrix}1&-1&0&0&\cdots\\0&1&-1&0&\cdots\\0&0&1&-1&\cdots\\\vdots&\vdots&\vdots&\vdots&\ddots\end{bmatrix}.$
Suppose we would like to find the sum of all entries. If we sum them column by column first, we will find that the first column gives 1, while all the others give 0. Hence the sum over all columns is 1. However, if we sum them row by row first, we will find that every row gives 0. Hence the sum over all rows is 0.
The explanation for this paradox is that the vertical sum to infinity and the horizontal sum to infinity are two limiting processes that cannot be interchanged. Let $S_{n,m}=\sum_{i=1}^{n}\sum_{j=1}^{m}a_{i,j}$ be the sum of entries up to the entry $(n,m)$. Then we have $\lim_{m\to\infty}\lim_{n\to\infty}S_{n,m}=1$, but $\lim_{n\to\infty}\lim_{m\to\infty}S_{n,m}=0$. In this case, the double limit $\lim_{n,m\to\infty}S_{n,m}$ does not exist, and thus this problem is not well-defined.
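The partial sums $S_{n,m}$ can be computed directly; the Python sketch below encodes the matrix by its entry rule and confirms that the order of the two summations matters:

```python
# Numerical sketch of the infinite matrix above: entry(i, j) = 1 if j == i, -1 if j == i + 1,
# and 0 otherwise (0-based indices). The partial sum S(n, m) over the top-left n-by-m block
# equals 1 when n >= m and 0 when n < m, so the two iterated limits differ.
def entry(i, j):
    if j == i:
        return 1
    if j == i + 1:
        return -1
    return 0

def S(n, m):
    return sum(entry(i, j) for i in range(n) for j in range(m))

print(S(50, 10), S(10, 50))   # prints "1 0": exhausting the row index first gives 1, the column index first gives 0
```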
By the integration theorem for uniform convergence, once we have $f_n$ converging to $f$ uniformly on $[a,b]$, the limit in $n$ and the integration over the bounded interval can be interchanged:
$\lim_{n\to\infty}\int_a^b f_n(x)\,dx=\int_a^b f(x)\,dx.$
However, such a property may fail for an improper integral over an unbounded interval $[a,\infty)$. In this case, one may rely on the Moore-Osgood theorem.
Consider
$\int_0^{\infty}\frac{x}{e^x-1}\,dx$
as an example.
We first expand the integrand as
$\frac{x}{e^x-1}=\sum_{k=1}^{\infty}xe^{-kx}$ for $x>0$. (Here x = 0 is a limiting case.)
One can prove by calculus that $xe^{-x}\le\frac{1}{e}$ for $x\ge 0$, so that for $k\ge 1$ and $x\ge\delta>0$ we have $xe^{-kx}\le\frac{1}{e}e^{-(k-1)\delta}$. By the Weierstrass M-test, $\sum_{k=1}^{\infty}xe^{-kx}$ converges uniformly on every interval $[\delta,b]$ with $0<\delta<b<\infty$.
Then by the integration theorem for uniform convergence,
$\int_{\delta}^{b}\frac{x}{e^x-1}\,dx=\sum_{k=1}^{\infty}\int_{\delta}^{b}xe^{-kx}\,dx.$
To further interchange the improper-integral limit, i.e. letting $\delta\to 0^{+}$ and $b\to\infty$, with the infinite summation over $k$, the Moore-Osgood theorem requires the infinite series to be uniformly convergent in $\delta$ and $b$.
Note that $0\le\int_{\delta}^{b}xe^{-kx}\,dx\le\int_0^{\infty}xe^{-kx}\,dx=\frac{1}{k^2}$. Again, by the Weierstrass M-test, $\sum_{k=1}^{\infty}\int_{\delta}^{b}xe^{-kx}\,dx$ converges uniformly in $0<\delta<b<\infty$.
Then by the Moore-Osgood theorem,
$\int_0^{\infty}\frac{x}{e^x-1}\,dx=\lim_{\delta\to 0^{+},\,b\to\infty}\sum_{k=1}^{\infty}\int_{\delta}^{b}xe^{-kx}\,dx=\sum_{k=1}^{\infty}\int_0^{\infty}xe^{-kx}\,dx=\sum_{k=1}^{\infty}\frac{1}{k^2}=\zeta(2)=\frac{\pi^2}{6}.$
(Here $\zeta$ denotes the Riemann zeta function.)
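As a numerical cross-check of this value, the following Python sketch (assuming NumPy and SciPy are available; the truncation of the series at k = 10^5 is arbitrary) compares the improper integral, the truncated series, and $\pi^2/6$:

```python
# Numerical cross-check: integral of x/(e^x - 1) over (0, inf) versus zeta(2) = pi^2/6.
import math
import numpy as np
from scipy import integrate

def integrand(x):
    # x/(e^x - 1), rewritten as x*e^(-x)/(1 - e^(-x)) to stay stable for large x;
    # the value extends continuously to 1 at x = 0.
    return x * np.exp(-x) / (-np.expm1(-x)) if x > 0 else 1.0

value, error = integrate.quad(integrand, 0, np.inf)
partial_series = sum(1.0 / k**2 for k in range(1, 100001))    # truncated sum of 1/k^2
print(value, partial_series, math.pi**2 / 6)                  # all agree to about four decimal places
```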