Transcript

Convergence of Series

Absolute convergence, conditional convergence, examples.

Absolute convergence

• In mathematics, a series (or sometimes also an integral) of numbers is said to converge absolutely if the sum (or integral) of the absolute value of the summand or integrand is finite.

• More precisely, a real or complex-valued series ∑ an is said to converge absolutely if ∑ |an| < ∞.

• Absolute convergence is vitally important to the study of infinite series because on the one hand, it is strong enough that such series retain certain basic properties of finite sums—the most important ones being rearrangement of the terms and convergence of products of two infinite series—that are unfortunately not possessed by all convergent series.

• On the other hand absolute convergence is weak enough to occur very often in practice. Indeed, in some (though not all) branches of mathematics in which series are applied, the existence of convergent but not absolutely convergent series is little more than a curiosity.
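The definition is easy to probe numerically: ∑ 1/n² converges absolutely (its absolute partial sums stay bounded by π²/6), while the alternating harmonic series does not (its absolute partial sums are the unbounded harmonic sums). A minimal sketch; the series and the cutoff N are illustrative choices, not from the transcript:

```python
import math

def partial_abs_sum(terms, n):
    """Partial sum of the absolute values of the first n terms."""
    return sum(abs(t) for t in terms[:n])

N = 100_000  # illustrative cutoff
sq = [1.0 / k**2 for k in range(1, N + 1)]              # terms of sum 1/n^2
alt = [(-1) ** (k + 1) / k for k in range(1, N + 1)]    # alternating harmonic terms

# sum |1/k^2| increases toward pi^2/6, so it stays bounded:
assert partial_abs_sum(sq, N) < math.pi**2 / 6
# sum |(-1)^(k+1)/k| is the harmonic partial sum, ~ ln N, unbounded:
assert partial_abs_sum(alt, N) > 10
```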

More general setting for absolute convergence

• One may study the convergence of series whose terms an are elements of an arbitrary abelian topological group. The notion of absolute convergence requires more structure, namely a norm:

• A norm on an abelian group G (written additively, with identity element 0) is a real-valued function on G such that:

• The norm of the identity element of G is zero: ‖0‖ = 0.
• The norm of any nonidentity element is strictly positive: if x ≠ 0, then ‖x‖ > 0.
• For every x in G, ‖−x‖ = ‖x‖.
• For every x, y in G, ‖x + y‖ ≤ ‖x‖ + ‖y‖.

• Then the function d(x, y) = ‖x − y‖ induces on G the structure of a metric space (and in particular, a topology). We can therefore consider G-valued series and define such a series to be absolutely convergent if ∑ ‖an‖ < ∞.

Relations with convergence

• If the metric d on G is complete, then every absolutely convergent series is convergent. The proof is the same as for complex-valued series: use the completeness to derive the Cauchy criterion for convergence—a series is convergent if and only if its tails can be made arbitrarily small in norm—and apply the triangle inequality.

• In particular, for series with values in any Banach space, absolute convergence implies convergence. The converse is also true: if absolute convergence implies convergence in a normed space, then the space is complete; i.e., a Banach space.

• Of course a series may be convergent without being absolutely convergent, the standard example being the alternating harmonic series.

• However, many standard tests which show that a series is convergent in fact show absolute convergence, notably the ratio and root tests. This has the important consequence that a power series is absolutely convergent on the interior of its disk of convergence.

• It is standard in calculus courses to say that a real series which is convergent but not absolutely convergent is conditionally convergent. However, in the more general context of G-valued series a distinction is made between absolute and unconditional convergence, and the assertion that a real or complex series which is not absolutely convergent is necessarily conditionally convergent (i.e., not unconditionally convergent) is then a theorem, not a definition. This is discussed in more detail below.

Rearrangements and unconditional convergence

• Given a series ∑ an with values in a normed abelian group G and a permutation σ of the natural numbers, one builds a new series ∑ aσ(n), said to be a rearrangement of the original series.

• A series is said to be unconditionally convergent if all rearrangements of the series are convergent to the same value.

• When G is complete, absolute convergence implies unconditional convergence.

• Theorem. Let ∑ an be an absolutely convergent series, and let σ be a permutation of the natural numbers. Then ∑ aσ(n) = ∑ an.

• Proof. By the condition of absolute convergence, for every ε > 0 there is an N such that the tail ∑_{n>N} |an| < ε, and the partial sum ∑_{n≤N} an lies within ε of the limit ∑ an.

• Now let M = max σ⁻¹({1, ..., N}), that is to say the index by which the rearranged series has picked up every one of the terms a1, ..., aN. Substituting into the definition, for every m ≥ M the difference ∑_{n≤m} aσ(n) − ∑_{n≤N} an involves only terms an with n > N, so its absolute value is at most ∑_{n>N} |an| < ε.

• It remains only to check that this bound forces the rearranged series to converge to the same limit.

• For this we write the triangle-inequality estimate |∑_{n≤m} aσ(n) − ∑ an| ≤ |∑_{n≤m} aσ(n) − ∑_{n≤N} an| + |∑_{n≤N} an − ∑ an| < 2ε for every m ≥ M, which can be made arbitrarily small. qed.

• The issue of the converse is much more interesting. For real series it follows from the Riemann rearrangement theorem that unconditional convergence implies absolute convergence. Since a series with values in a finite-dimensional normed space is absolutely convergent if each of its one-dimensional projections is absolutely convergent, it follows easily that absolute and unconditional convergence coincide for ℝⁿ-valued series.

• But there is an unconditionally and nonabsolutely convergent series with values in the Hilbert space ℓ²: if {en} is an orthonormal basis, take an = en/n.

• Remarkably, a theorem of Dvoretzky-Rogers asserts that every infinite-dimensional Banach space admits an unconditionally but non-absolutely convergent series.
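The rearrangement phenomenon for real series is easy to see numerically. The sketch below greedily rearranges the terms of the alternating harmonic series so that the very same terms converge to two different targets (the greedy target-chasing scheme and the cutoffs are illustrative, not from the transcript):

```python
def rearranged_sum(target, n_terms=100_000):
    """Greedy rearrangement of the alternating harmonic series toward `target`:
    add positive terms while below the target, negative terms while above."""
    pos = iter(1.0 / k for k in range(1, 10**7, 2))    # +1, +1/3, +1/5, ...
    neg = iter(-1.0 / k for k in range(2, 10**7, 2))   # -1/2, -1/4, -1/6, ...
    s = 0.0
    for _ in range(n_terms):
        s += next(pos) if s <= target else next(neg)
    return s

# The same terms, taken in different orders, approach different values:
assert abs(rearranged_sum(2.0) - 2.0) < 0.01
assert abs(rearranged_sum(0.0) - 0.0) < 0.01
```

The oscillation around the target is bounded by the size of the last term used, which tends to zero, so the rearranged partial sums converge to whatever target was chosen.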

Products of series

• The Cauchy product of two series converges to the product of the sums if at least one of the series converges absolutely.

• That is, suppose ∑ an = A and ∑ bn = B.

• The Cauchy product is defined as the sum of terms cn where cn = a0·bn + a1·bn−1 + ... + an·b0, i.e. cn = ∑_{k=0}^{n} ak·bn−k.

• Then, if either the an or bn sum converges absolutely, ∑ cn = A·B.
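A small sketch of the Cauchy product, using two absolutely convergent geometric series (the ratios 1/2 and 1/3 and the truncation length are illustrative choices):

```python
def cauchy_product(a, b):
    """c_m = sum_{k=0}^{m} a_k * b_{m-k}, the Cauchy product terms."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

N = 60  # truncation length; tails are negligible at this point
a = [0.5**n for n in range(N)]        # sum (1/2)^n = 2
b = [(1 / 3)**n for n in range(N)]    # sum (1/3)^n = 3/2

c = cauchy_product(a, b)
assert abs(sum(a) - 2.0) < 1e-12
assert abs(sum(b) - 1.5) < 1e-12
# The Cauchy product sums to the product of the sums, 2 * 3/2 = 3:
assert abs(sum(c) - 3.0) < 1e-6
```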

Absolute convergence of integrals

• The integral ∫_A f(x) dx of a real or complex-valued function is said to converge absolutely if ∫_A |f(x)| dx < ∞. One also says that f is absolutely integrable.

• When A = [a,b] is a closed bounded interval, every continuous function is integrable, and since f continuous implies |f| continuous, every continuous function is likewise absolutely integrable. It is not generally true that absolutely integrable functions on [a,b] are integrable: let S ⊂ [a,b] be a nonmeasurable subset and take f = χS − 1/2, where χS is the characteristic function of S. Then f is not Lebesgue measurable but |f| is constant. However, it is a standard result that if f is Riemann integrable, so is |f|. This holds also for the Lebesgue integral; see below.

• On the other hand, a function f may be Kurzweil-Henstock integrable (or "gauge integrable") while |f| is not. This includes the case of improperly Riemann integrable functions.

• Similarly, when A is an interval of infinite length it is well known that there are improperly Riemann integrable functions f which are not absolutely integrable. Indeed, given any series ∑ an one can consider the associated step function fa defined by fa([n, n + 1)) = an. Then ∫ fa converges absolutely, converges conditionally or diverges according to the corresponding behavior of ∑ an.

• Another example of a convergent but not absolutely convergent improper Riemann integral is ∫_0^∞ (sin x)/x dx.
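This contrast can be sketched numerically with a simple quadrature rule (composite Simpson's rule; the cutoffs 10π, 20π, 40π are illustrative choices, not from the transcript):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n subintervals (n even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def sinc(x):
    return math.sin(x) / x if x != 0 else 1.0

cuts = (10, 20, 40)  # upper limits, in multiples of pi
I = [simpson(sinc, 0.0, k * math.pi) for k in cuts]
J = [simpson(lambda x: abs(sinc(x)), 0.0, k * math.pi) for k in cuts]

# The improper integral of sin(x)/x converges (to pi/2)...
assert abs(I[-1] - math.pi / 2) < 0.05
# ...while the integral of |sin(x)/x| keeps growing, like a harmonic series:
assert J[0] < J[1] < J[2]
```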

• On any measure space A the Lebesgue integral of a real-valued function is defined in terms of its positive and negative parts, so the facts:

• f integrable implies |f| integrable
• f measurable, |f| integrable implies f integrable

• are essentially built into the definition of the Lebesgue integral. In particular, applying the theory to the counting measure on a set S, one recovers the notion of unordered summation of series developed by Moore-Smith using (what are now called) nets.

• When S is the set of natural numbers, Lebesgue integrability, unordered summability and absolute convergence all coincide.

• Finally, all of the above holds for integrals with values in a Banach space. The definition of a Banach-valued Riemann integral is an evident modification of the usual one.

• For the Lebesgue integral one needs to circumvent the decomposition into positive and negative parts with Daniell's more functional analytic approach, obtaining the Bochner integral.

Conditional convergence

• In mathematics, a series or integral is said to be conditionally convergent if it converges, but it does not converge absolutely.

• Definition. More precisely, a series ∑ an is said to converge conditionally if the limit of the partial sums, lim_{m→∞} ∑_{n=0}^{m} an, exists and is a finite number (not ∞ or −∞), but ∑ |an| = ∞.

• A classical example is given by the alternating harmonic series 1 − 1/2 + 1/3 − 1/4 + ..., which converges to ln 2, but is not absolutely convergent (see Harmonic series).

• The simplest examples of conditionally convergent series (including the one above) are the alternating series.

• Bernhard Riemann proved that a conditionally convergent series may be rearranged to converge to any sum at all, including ∞ or −∞; see Riemann series theorem.
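A numeric illustration of the definition, using the alternating harmonic series (the cutoff N is an illustrative choice):

```python
import math

N = 200_000  # illustrative cutoff
terms = [(-1) ** (n + 1) / n for n in range(1, N + 1)]

# The partial sums converge to ln 2, a finite limit...
assert abs(sum(terms) - math.log(2)) < 1e-5
# ...but the absolute values sum to the harmonic series, which is unbounded
# (the partial sums grow like ln N):
assert sum(abs(t) for t in terms) > math.log(N)
```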

Divergent series

• In mathematics, a divergent series is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a limit.

• If a series converges, the individual terms of the series must approach zero. Thus any series in which the individual terms do not approach zero diverges.

• However, convergence is a stronger condition: not all series whose terms approach zero converge. The simplest counterexample is the harmonic series 1 + 1/2 + 1/3 + 1/4 + ...

• The divergence of the harmonic series was proven by the medieval mathematician Nicole Oresme.

• In specialized mathematical contexts, values can be usefully assigned to certain series whose sequence of partial sums diverges.

• A summability method or summation method is a partial function from the set of sequences of partial sums of series to values.

• For example, Cesàro summation assigns Grandi's divergent series 1 − 1 + 1 − 1 + ... the value 1/2. Cesàro summation is an averaging method, in that it relies on the arithmetic mean of the sequence of partial sums.

• Other methods involve analytic continuations of related series.

• In physics, there are a wide variety of summability methods; these are discussed in greater detail in the article on regularization.
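Cesàro summation of Grandi's series can be checked directly: the partial sums oscillate between 1 and 0, but their running average tends to 1/2. A minimal sketch (the cutoff is an illustrative choice):

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ... oscillate: 1, 0, 1, 0, ...
N = 10_000
partial_sums = []
s = 0
for n in range(N):
    s += (-1) ** n
    partial_sums.append(s)

# Cesaro summation: the arithmetic mean of the partial sums converges to 1/2.
cesaro = sum(partial_sums) / N
assert abs(cesaro - 0.5) < 1e-3
```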

Theorems on methods for summing divergent series

• A summability method M is regular if it agrees with the actual limit on all convergent series.

• A result asserting that a method sums a class of convergent series to their ordinary sums is called an abelian theorem for M, from the prototypical Abel's theorem.

• More interesting and in general more subtle are partial converse results, called tauberian theorems, from a prototype proved by Alfred Tauber.

• Here partial converse means that if M sums the series Σ, and some side-condition holds, then Σ was convergent in the first place; without any side condition such a result would say that M only summed convergent series (making it useless as a summation method for divergent series).

• The operator giving the sum of a convergent series is linear, and it follows from the Hahn-Banach theorem that it may be extended to a summation method summing any series with bounded partial sums.

• This fact is not very useful in practice since there are many such extensions, inconsistent with each other, and also since proving such operators exist requires invoking the axiom of choice or its equivalents, such as Zorn's lemma. They are therefore nonconstructive.

• The subject of divergent series, as a domain of mathematical analysis, is primarily concerned with explicit and natural techniques such as Abel summation, Cesàro summation and Borel summation, and their relationships.

• The advent of Wiener's tauberian theorem marked an epoch in the subject, introducing unexpected connections to Banach algebra methods in Fourier analysis.

• Summation of divergent series is also related to extrapolation methods and sequence transformations as numerical techniques.

• Examples for such techniques are Padé approximants, Levin-type sequence transformations, and order-dependent mappings related to renormalization techniques for large-order perturbation theory in quantum mechanics.

Properties of summation methods

• Summation methods usually concentrate on the sequence of partial sums of the series.

• While this sequence does not converge, we may often find that when we take an average of larger and larger initial terms of the sequence, the average converges, and we can use this average instead of a limit to evaluate the sum of the series.

• So in evaluating a = a0 + a1 + a2 + ..., we work with the sequence s, where s0 = a0 and sn+1 = sn + an. In the convergent case, the sequence s approaches the limit a.

• A summation method can be seen as a function from a set of sequences of partial sums to values.

• If A is any summation method assigning values to a set of sequences, we may mechanically translate this to a series-summation method AΣ that assigns the same values to the corresponding series.

• There are certain properties it is desirable for these methods to possess if they are to arrive at values corresponding to limits and sums, respectively.

• Regularity. A summation method is regular if, whenever the sequence s converges to x, A(s) = x. Equivalently, the corresponding series-summation method evaluates AΣ(a) = x.

• Linearity. A is linear if it is a linear functional on the sequences where it is defined, so that A(r + s) = A(r) + A(s) and A(ks) = k A(s), for k a scalar (real or complex). Since the terms an = sn+1 − sn of the series a are linear functionals on the sequence s and vice versa, this is equivalent to AΣ being a linear functional on the terms of the series.

• Stability. If s is a sequence starting from s0 and s′ is the sequence obtained by omitting the first value and subtracting it from the rest, so that s′n = sn+1 − s0, then A(s) is defined if and only if A(s′) is defined, and A(s) = s0 + A(s′). Equivalently, whenever a′n = an+1 for all n, then AΣ(a) = a0 + AΣ(a′).

• The third condition is less important, and some significant methods, such as Borel summation, do not possess it.

• A desirable property for two distinct summation methods A and B to share is consistency: A and B are consistent if for every sequence s to which both assign a value, A(s) = B(s). If two methods are consistent, and one sums more series than the other, the one summing more series is stronger.

• There are, however, powerful numerical summation methods that are neither regular nor linear, for instance nonlinear sequence transformations like Levin-type sequence transformations and Padé approximants, as well as the order-dependent mappings of perturbative series based on renormalization techniques.

Axiomatic methods

• Taking regularity, linearity and stability as axioms, it is possible to sum many divergent series by elementary algebraic manipulations.

• For instance, whenever r ≠ 1, the geometric series ∑_{k=0}^{∞} a·r^k = a/(1 − r) can be evaluated regardless of convergence. More rigorously, any summation method that possesses these properties and which assigns a finite value to the geometric series must assign this value.

• However, when r is a real number larger than 1, the partial sums increase without bound, and averaging methods assign a limit of ∞.
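The elementary manipulation behind this evaluation can be written out as a short derivation, using only the three axioms (S denotes the value the method assigns to the geometric series):

```latex
\begin{aligned}
S &= a + ar + ar^{2} + ar^{3} + \cdots \\
  &= a + r\left(a + ar + ar^{2} + \cdots\right) && \text{(stability, then linearity)} \\
  &= a + rS \\
\Longrightarrow\quad S &= \frac{a}{1-r}, \qquad r \neq 1.
\end{aligned}
```

With a = 1 and r = −1 this forces the value 1/2 for Grandi's series, consistent with the Cesàro value mentioned earlier.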

Nörlund means

• Suppose pn is a sequence of positive terms, starting from p0. Suppose also that pn / (p0 + p1 + ... + pn) → 0 as n → ∞.

• If now we transform a sequence s by using p to give weighted means, setting tm = (pm s0 + pm−1 s1 + ... + p0 sm) / (p0 + p1 + ... + pm),

• then the limit of tn as n goes to infinity is an average called the Nörlund mean Np(s).

• The Nörlund mean is regular, linear, and stable. Moreover, any two Nörlund means are consistent.

• The most significant of the Nörlund means are the Cesàro sums. Here, if we define the sequence p^k by the binomial coefficients p^k_n = C(n + k − 1, k − 1),

• then the Cesàro sum Ck is defined by Ck(s) = N(p^k)(s).

• Cesàro sums are Nörlund means if k ≥ 0, and hence are regular, linear, stable, and consistent. C0 is ordinary summation, and C1 is ordinary Cesàro summation.

• Cesàro sums have the property that if h > k, then Ch is stronger than Ck.
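The weighted-mean formula can be sketched directly (an illustrative implementation; with constant weights p = (1, 1, 1, ...) it reduces to the ordinary Cesàro mean C1):

```python
def norlund_mean(p, s):
    """t_m = (p_m s_0 + p_{m-1} s_1 + ... + p_0 s_m) / (p_0 + ... + p_m)."""
    means = []
    for m in range(len(s)):
        num = sum(p[m - j] * s[j] for j in range(m + 1))
        means.append(num / sum(p[: m + 1]))
    return means

# With p = (1, 1, 1, ...) the Noerlund mean is the Cesaro mean C1; applied to
# the partial sums 1, 0, 1, 0, ... of Grandi's series it tends to 1/2:
N = 2_000
grandi_partials = [(n + 1) % 2 for n in range(N)]
t = norlund_mean([1] * N, grandi_partials)
assert abs(t[-1] - 0.5) < 1e-3

# Regularity: on a convergent sequence of partial sums, the mean agrees
# with the ordinary limit.
conv = [1 - 0.5**n for n in range(200)]   # converges to 1
t2 = norlund_mean([1] * 200, conv)
assert abs(t2[-1] - 1.0) < 0.05
```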

Abelian means

• Suppose λ = {λ0, λ1, λ2, ...} is a strictly increasing sequence tending towards infinity, and that λ0 ≥ 0. Recall that an = sn+1 − sn is the associated series whose partial sums form the sequence s. Suppose f(x) = ∑ an exp(−λn x) converges for all positive real numbers x. Then the Abelian mean Aλ is defined as Aλ(s) = lim f(x) as x → 0⁺.

• A series of this type is known as a generalized Dirichlet series; in applications to physics, this is known as the method of heat-kernel regularization.

• Abelian means are regular, linear, and stable, but not always consistent between different choices of λ. However, some special cases are very important summation methods.

Abel summation

• If λn = n, then we obtain the method of Abel summation. Here f(x) = ∑ an exp(−nx) = ∑ an z^n, where z = exp(−x).

• Then the limit of f(x) as x approaches 0 through positive reals is the limit of the power series for f(z) as z approaches 1 from below through positive reals, and the Abel sum A(s) is defined as A(s) = lim_{z→1⁻} ∑ an z^n.

• Abel summation is interesting in part because it is consistent with but more powerful than Cesàro summation: A(s) = Ck(s) whenever the latter is defined.

• The Abel sum is therefore regular, linear, stable, and consistent with Cesàro summation.
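Abel summation can be sketched numerically for Grandi's series: evaluate the power series at z just below 1 and watch it approach 1/2 (the truncation length is an illustrative choice):

```python
def power_series(coeff, z, n_terms=20_000):
    """Truncated evaluation of f(z) = sum a_n z^n (cutoff is illustrative)."""
    return sum(coeff(n) * z**n for n in range(n_terms))

def grandi(n):
    return (-1) ** n   # coefficients of 1 - 1 + 1 - 1 + ...

# For |z| < 1 the series sums to 1/(1 + z); as z -> 1- this tends to 1/2,
# the Abel sum of Grandi's series.
for z in (0.9, 0.99, 0.999):
    assert abs(power_series(grandi, z) - 1 / (1 + z)) < 1e-6
assert abs(power_series(grandi, 0.999) - 0.5) < 1e-3
```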

Lindelöf summation

• If λn = n ln(n), then (indexing from one) we have

f(x) = a1 + a2 2^(−2x) + a3 3^(−3x) + ... = Σn an n^(−nx)

• Then L(s), the Lindelöf sum (Volkov 2001), is the limit of ƒ(x) as x goes to zero.

• The Lindelöf sum is a powerful method when applied to power series among other applications, summing power series in the Mittag-Leffler star.

• If g(z) is analytic in a disk around zero, and hence has a Maclaurin series G(z) with a positive radius of convergence, then L(G(z)) = g(z) in the Mittag-Leffler star.

• Moreover, convergence to g(z) is uniform on compact subsets of the star.

Alternating series

• In mathematics, an alternating series is an infinite series of the form

Σn (−1)^n an

• with an ≥ 0 (or an ≤ 0) for all n. A finite sum of this kind is an alternating sum. An alternating series converges if the terms an converge to 0 monotonically.

• The error E introduced by approximating an alternating series with its partial sum to n terms is bounded in magnitude by the first omitted term: |E| < |an+1|.

• A sufficient condition for the series to converge is that it converges absolutely. But this is often too strong a condition to ask: it is not necessary.

• For example, the harmonic series

1 + 1/2 + 1/3 + 1/4 + ...

• diverges, while the alternating version

1 − 1/2 + 1/3 − 1/4 + ...

• converges to the natural logarithm of 2.
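
A quick numerical check of this example (function name illustrative): the partial sums of the alternating harmonic series approach ln 2, and the error after n terms stays below the first omitted term 1/(n + 1).

```python
import math

def alt_harmonic_partial(n):
    """Partial sum 1 - 1/2 + 1/3 - ... + (-1)**(n + 1)/n."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

s = alt_harmonic_partial(1000)
print(s, math.log(2))        # the two values agree to about three decimals
print(abs(s - math.log(2)))  # below the alternating-series bound 1/1001
```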

• A broader test for convergence of an alternating series is Leibniz' test: if the sequence an is monotone decreasing and tends to zero, then the series

Σn (−1)^n an

• converges.
• The partial sum

sm = Σ (n = 0 to m) (−1)^n an

• can be used to approximate the sum of a convergent alternating series. If an is monotone decreasing and tends to zero, then the error in this approximation is less than am+1.
• This last observation is the basis of the Leibniz test. Indeed, if the sequence an tends to zero and is monotone decreasing (at least from a certain point on), it can be easily shown that the sequence of partial sums is a Cauchy sequence. Assuming m < n,

|sn − sm| = |am+1 − am+2 + am+3 − ... ± an| = am+1 − (am+2 − am+3) − ... ≤ am+1.

• (the sequence being monotone decreasing guarantees that ak − ak+1 > 0; note that formally one needs to take into account whether n is even or odd, but this does not change the idea of the proof)

• As am+1 → 0 when m → ∞, the sequence of partial sums is Cauchy, and so the series is convergent.

• Since the estimate above does not depend on n, it also shows that the error of the partial sum sm relative to the full sum is at most am+1.

• Convergent alternating series that do not converge absolutely are examples of conditionally convergent series.

• In particular, the Riemann series theorem applies to their rearrangements.
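
The Riemann series theorem can be illustrated numerically. The greedy rearrangement below is a standard construction (not taken from the lecture): the same terms ±1/n that sum to ln 2 in their natural order can be rearranged to approach any chosen target.

```python
def rearranged_sum(target, n_terms=100_000):
    """Greedily rearrange the terms +-1/n of the alternating harmonic
    series: add positive terms 1/1, 1/3, 1/5, ... while at or below
    the target, otherwise subtract negative terms 1/2, 1/4, 1/6, ...
    The partial sums oscillate around the target with shrinking error.
    """
    total, p, q = 0.0, 1, 2
    for _ in range(n_terms):
        if total <= target:
            total += 1.0 / p  # next unused positive term
            p += 2
        else:
            total -= 1.0 / q  # next unused negative term
            q += 2
    return total

print(rearranged_sum(2.0))  # close to 2.0, though the natural order gives ln 2
```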

Harmonic series

• In mathematics, the harmonic series is the divergent infinite series:

1 + 1/2 + 1/3 + 1/4 + 1/5 + ...

• Its name derives from the concept of overtones, or harmonics, in music: the wavelengths of the overtones of a vibrating string are 1/2, 1/3, 1/4, etc., of the string's fundamental wavelength. Every term of the series after the first is the harmonic mean of the neighboring terms; the term harmonic mean likewise derives from music.

Comparison test

• One way to prove divergence is to compare the harmonic series with another divergent series:

1 + 1/2 + 1/4 + 1/4 + 1/8 + 1/8 + 1/8 + 1/8 + ...

• Each term of the harmonic series is greater than or equal to the corresponding term of the second series, and therefore the sum of the harmonic series must be greater than the sum of the second series. However, the sum of the second series is infinite:

1 + 1/2 + (1/4 + 1/4) + (1/8 + 1/8 + 1/8 + 1/8) + ... = 1 + 1/2 + 1/2 + 1/2 + ... = ∞

• It follows (by the comparison test) that the sum of the harmonic series must be infinite as well. More precisely, the comparison above proves that

1 + 1/2 + 1/3 + ... + 1/2^k ≥ 1 + k/2

• for every positive integer k. This proof, due to Nicole Oresme, is a high point of medieval mathematics.
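
Oresme's grouping bound can be checked directly (helper name illustrative): the first 2^k terms of the harmonic series already exceed 1 + k/2, so the partial sums grow without bound, if slowly.

```python
def harmonic_partial(n):
    """Sum of the first n terms of the harmonic series."""
    return sum(1.0 / i for i in range(1, n + 1))

# Oresme's bound: the first 2**k terms sum to at least 1 + k/2.
for k in range(1, 11):
    print(k, harmonic_partial(2 ** k), ">=", 1 + k / 2)
```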

Integral test

• It is also possible to prove that the harmonic series diverges by comparing the sum with an improper integral. Specifically, consider the arrangement of rectangles shown in the figure to the right.

• Each rectangle is 1 unit wide and 1/n units high, so the total area of the rectangles is the sum of the harmonic series:

1 + 1/2 + 1/3 + 1/4 + ...

• However, the total area under the curve y = 1/x from 1 to infinity is given by an improper integral:

∫ (from 1 to ∞) dx/x = ∞

• Since this area is entirely contained within the rectangles, the total area of the rectangles must be infinite as well. More precisely, this proves that

1 + 1/2 + ... + 1/k ≥ ∫ (from 1 to k+1) dx/x = ln(k + 1)

• The generalization of this argument is known as the integral test.
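
The rectangle-versus-area argument can likewise be verified numerically (a sketch with an illustrative helper): the first k rectangles cover the region under y = 1/x from x = 1 to x = k + 1, so the partial sum dominates ln(k + 1).

```python
import math

def harmonic_partial(k):
    """Total area of the first k unit-width rectangles of height 1/n."""
    return sum(1.0 / n for n in range(1, k + 1))

# The rectangles cover the area under y = 1/x on [1, k + 1],
# so H_k >= ln(k + 1) for every k.
for k in (10, 100, 1000):
    print(k, harmonic_partial(k), ">=", math.log(k + 1))
```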

Rate of divergence

• The harmonic series diverges very slowly. For example, the sum of the first 10^43 terms is less than 100.

• This is because the partial sums of the series have logarithmic growth. In particular,

1 + 1/2 + 1/3 + ... + 1/k = ln(k) + γ + εk

• where γ is the Euler–Mascheroni constant and εk approaches 0 as k goes to infinity.

• This result is due to Leonhard Euler.
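
Euler's asymptotic formula can be used to estimate γ numerically (a sketch; εk behaves roughly like 1/(2k), though the slide only states that it tends to 0):

```python
import math

def gamma_estimate(k):
    """H_k - ln(k), which converges to the Euler-Mascheroni constant."""
    h_k = math.fsum(1.0 / n for n in range(1, k + 1))  # partial sum H_k
    return h_k - math.log(k)

print(gamma_estimate(10 ** 6))  # ~0.577216
```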
