Tricki
a repository of mathematical know-how

Use an appropriate series expansion

Quick description

Suppose we have a general integral of the form

 \int_X f(x) d\mu(x)

and we know how to expand f in a series

f(x) \sim \sum_k c_k h_k(x),

where each of the basis functions h_k is easy to integrate. Then, interchanging the integral and the sum, we can reduce the original integral to something like

 \int_X f(x) d\mu(x) = \sum_k c_k \int_X h_k(x) d\mu(x).

It will usually be enough for the series to converge in some average sense to our original function.

Because polynomials and trigonometric functions tend to be particularly easy to integrate, this technique is often effective if one uses power series expansions or Fourier series expansions.
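As a toy illustration of the trick (this snippet and the names in it are mine, not part of the article), one can compute \int_0^1 \frac{\sin x}{x}\,dx by expanding the sine in its Taylor series and integrating term by term, then compare with a direct Riemann sum:

```python
import math

# Expand sin(x)/x = sum_{k>=0} (-1)^k x^{2k} / (2k+1)!  and integrate
# term by term over [0, 1]: each term contributes (-1)^k / ((2k+1)(2k+1)!).
series_value = sum((-1) ** k / ((2 * k + 1) * math.factorial(2 * k + 1))
                   for k in range(10))

# Direct midpoint-rule quadrature of the original integrand, for comparison.
N = 20_000
quad_value = sum(math.sin((j + 0.5) / N) / ((j + 0.5) / N) for j in range(N)) / N

print(series_value, quad_value)  # both ~0.9460831
```

Ten terms of the series already match the quadrature to many digits, which is the point of the trick: the integral of each basis function (here a monomial) is known exactly.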

Prerequisites

undergraduate calculus, undergraduate real analysis

Contributions wanted: This article could use additional contributions.

Example 1

Suppose we want to calculate the following integral on the unit disc \mathbb D of the complex plane

\int_{\mathbb D} \bigg|\frac{z}{1-\bar w z}\bigg|^2 dm(w),

where z\in\mathbb D and dm is the Lebesgue measure in the plane. Using the geometric series expansion

 f(w)=\frac{z}{1-\bar w z}=z\sum_{n=0} ^\infty {\bar w}^n z^n ,

and polar coordinates we can rewrite the integral in the form

 \begin{align}\int_{\mathbb D} |f(w)|^2 dm(w) &= \int_0^{2\pi} \int_0^1 \frac{|z|^2}{|1- z re^{-i\theta}|^2}\, r\,dr \, d\theta =|z|^2\int_0^{2\pi} \int_0^1\bigg|\sum_{n=0}^\infty z^n r^n e^{-in\theta}\bigg|^2 r\,dr\, d\theta \\&=|z|^2\int_0^1 \int_0^{2\pi} \sum_{n=0}^\infty \sum_{m=0}^\infty z^n \bar z^m r^{n+m} e^{i(m-n)\theta}\, d\theta \, r\,dr\\&=|z|^2\sum_{n=0}^\infty \sum_{m=0}^\infty z^n \bar z^m \int_0^1 r^{n+m+1}dr \int_0^{2\pi}e^{i(m-n)\theta}\, d\theta.\end{align}

There are two or three tricks in the above lines worth mentioning. First, we used polar coordinates as a means of decoupling the radial and angular variables. This becomes apparent in the last line, where we end up with a product of integrals, one involving only the radial variable and one involving only the angular variable. This was combined with the interchanging integrals and sums trick to bring the summation operators outside the integrals. Finally, observe that we used the square and rearrange trick by writing

\bigg|\sum_{n=0}^\infty z^n r^n e^{-in\theta}\bigg|^2=\sum_{n=0}^\infty \sum_{m=0}^\infty z^n\bar z^m r^{n+m} e^{i(m-n)\theta}.

In this case we didn't have to square first since our expression was already squared.
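The angular integrals that remain are easy to sanity-check numerically; the little sketch below (mine, not part of the article) approximates \int_0^{2\pi} e^{i(m-n)\theta}\,d\theta by a uniform Riemann sum:

```python
import cmath
import math

def angular_integral(m, n, N=1000):
    """Uniform Riemann sum approximating the integral of e^{i(m-n)theta}
    over [0, 2*pi]."""
    h = 2 * math.pi / N
    return sum(cmath.exp(1j * (m - n) * k * h) for k in range(N)) * h

print(abs(angular_integral(3, 3)))  # ~ 2*pi, since the integrand is 1
print(abs(angular_integral(5, 2)))  # ~ 0
```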

To finish the calculation, observe that \int_0 ^{2\pi} e^{i(m-n)\theta}d\theta equals 2\pi whenever n=m and 0 otherwise, by the orthogonality of the exponentials e^{in\theta}. Thus we have

\int_{\mathbb D} \bigg|\frac{z}{1-\bar w z}\bigg|^2 dm(w)=2\pi|z|^2\sum_{n=0}^\infty |z|^{2n} \frac{1}{2(n+1)}=\pi \log \frac{1}{1-|z|^2},

where we have used the Taylor series expansion \log\frac{1}{1-r}=\sum_{n=1} ^\infty \frac{r^n}{n} for 0<r<1, with r=|z|^2. Observe that the calculation of the integral reduced to calculating integrals of power functions, since we expanded our function in a power series.
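As a numerical sanity check of the closed form (the code and its names are mine, not the article's): by rotation invariance the integral depends only on |z|, so we may take z real and compare a crude polar-grid quadrature with \pi\log\frac{1}{1-|z|^2}:

```python
import math

def disc_integral(a, Nr=400, Nt=400):
    """Midpoint-rule quadrature, in polar coordinates, of
    |z|^2 / |1 - conj(w) z|^2 over the unit disc, for real z = a."""
    total = 0.0
    for i in range(Nr):
        r = (i + 0.5) / Nr
        for j in range(Nt):
            theta = 2 * math.pi * (j + 0.5) / Nt
            # |1 - a r e^{-i theta}|^2 = 1 - 2 a r cos(theta) + a^2 r^2
            denom = 1 - 2 * a * r * math.cos(theta) + (a * r) ** 2
            total += a ** 2 / denom * r
    return total * (1 / Nr) * (2 * math.pi / Nt)

a = 0.5
print(disc_integral(a), math.pi * math.log(1 / (1 - a ** 2)))  # both ~0.9038
```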

Example 2

Let us now look at a slightly more complicated variant of the integral in Example 1. Here there is an extra logarithmic factor under the integral sign:

\int_{\mathbb D} \bigg|\frac{z}{1-\bar w z}\bigg|^2 \log \frac{1}{1-|w|^2}\ dm(w).

Following exactly the same steps as in Example 1 we end up with the expression

 \begin{align}\int_{\mathbb D} \bigg|\frac{z}{1-\bar w z}\bigg|^2 \log \frac{1}{1-|w|^2}\ dm(w)&= 2\pi|z|^2\sum_{n=0}^\infty |z|^{2n} \int_0^1 r^{2n+1}\log\frac{1}{1-r^2}\, dr \\&=\pi|z|^2\sum_{n=0}^\infty |z|^{2n} \int_0^1 t^n\log\frac{1}{1-t}\, dt,\end{align}

where we have substituted t=r^2 in the last step.

Now in Example 1 we ended up with the integral \int_0^1t^n dt which is trivial to calculate exactly. Here we have to deal with the integral \int_0 ^1 t^n \log \frac{1}{1-t} dt which does not look so trivial. However, we can once again expand the function \log\frac{1}{1-t} in the power series \sum_{k=1} ^\infty \frac{t^k}{k} and calculate

 \int_0 ^1 t^n \log \frac{1}{1-t} dt = \sum_{k=1}^\infty \frac{1}{k}\int_0^1t^{n+k}dt= \sum_{k=1}^\infty \frac{1}{k(n+k+1)}.

One can now simplify this last sum by observing that it is in fact a telescoping sum:

\sum_{k=1}^\infty\frac{1}{k(n+k+1)}=\frac{1}{n+1}\sum_{k=1}^\infty \bigg(\frac{1}{k}-\frac{1}{n+k+1}\bigg)=\frac{1}{n+1} \sum_{k=1}^{n+1} \frac{1}{k} \simeq \frac{\log(n+1)}{n+1}.
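The telescoping identity is easy to check numerically (a quick sketch of mine, not part of the article), say for n = 7:

```python
from fractions import Fraction

n = 7

# Right-hand side of the telescoping identity: (1/(n+1)) * H_{n+1}, exactly.
closed = Fraction(1, n + 1) * sum(Fraction(1, k) for k in range(1, n + 2))

# Partial sum of the left-hand side; the tail beyond K has size O(1/K).
K = 200_000
partial = sum(1.0 / (k * (n + k + 1)) for k in range(1, K + 1))

print(partial, float(closed))  # agree to about 5 decimal places
```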

Thus, ignoring numerical constants, our original integral can be written in the form

\int_{\mathbb D} \bigg|\frac{z}{1-\bar w z}\bigg|^2 \log \frac{1}{1-|w|^2}\ dm(w)\simeq \sum_{n=1} ^\infty \frac{\log n}{n} |z|^{2n}.

One can probably look up the latter series in a table and discover that in fact

 \sum_{n=1} ^\infty \frac{\log n}{n} |z|^{2n} \simeq \bigg(\log\frac{1}{1-|z|^2}\bigg)^2.
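Here \simeq means equality up to bounded constant factors, not exact equality. A rough numerical check (mine, not the article's; write x = |z|^2) suggests that the ratio of the two sides stays of order 1 as x \to 1:

```python
import math

def log_series(x, N=200_000):
    """Partial sum of sum_{n>=1} (log n / n) * x^n, for x close to 1."""
    total, p = 0.0, 1.0
    for n in range(1, N):
        p *= x          # p = x^n
        total += math.log(n) / n * p
    return total

for x in (0.99, 0.999, 0.9999):
    ratio = log_series(x) / math.log(1 / (1 - x)) ** 2
    print(x, ratio)  # stays of order 1 (roughly 0.4-0.45 at these points)
```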

However, there is a Tricki way to see this quite quickly. It essentially uses the Divide and Conquer trick, so we describe the calculation in an example there.

Example 3

(Prove exponential integrability of a function based on its L^p norms.)

General discussion

Comments

change your coordinate system

I am thinking that the use of polar coordinates could be another entry in methods for simplifying integrals (or, in general, methods for estimating integrals). Of course the example in this article is perhaps an elementary one, but one could think of more involved examples in singular integrals (e.g. the method of rotations) where essentially this principle is the basic trick. I understand that 'polar coordinates' is probably quite restrictive; we should probably have an entry along the lines of 'try to change your coordinate system'.

I think it is a standard trick in estimating oscillatory integrals as well. Stein does this in order to prove the multi-dimensional version of the van der Corput lemma with a bump function, and I think Michael Christ has also done it in a couple of papers in order to prove sub-level set estimates (but I can't say I have all the details in my head right now). That is, one adapts the coordinate system to the direction along which the phase function has a derivative that stays bounded away from zero, and uses a one-dimensional estimate along this direction. I guess there are plenty of other cases that I don't know of.

On the other hand, maybe this should be part of an 'exploit symmetries and invariance' article, if one thinks for example of the way Stein proves the dimension-free bounds for the Euclidean ball maximal function. I am getting a bit confused concerning which is the natural or 'right' place for each article.

yannis