Suppose that you are trying to bound the size of a sum of a product of two functions, one reasonably smooth and the other reasonably oscillating, and suppose that you believe that the oscillations give rise to cancellation that causes the sum to be small. Then a technique similar to integration by parts may well give rise to an efficient way of proving this.
Prerequisites: basic real analysis and complex numbers.
Example 1: Abel summation
Let $z$ be a complex number of modulus $1$ but not equal to $1$, and let $a_1 \geq a_2 \geq a_3 \geq \dots$ be a decreasing sequence of positive real numbers tending to zero. Then the sum $\sum_n a_n z^n$ converges.
How might we prove this? Why do we even believe it?
One can give a geometrical answer to the second question. If you plot the partial sums $a_1z$, $a_1z+a_2z^2$, $a_1z+a_2z^2+a_3z^3$, and so on, drawing straight line segments between them, then you obtain a piecewise linear curve that seems to spiral inwards to a point. (The closer $z$ is to $1$, the bigger this "spiral" tends to be.)
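One can also check this numerically. The following sketch (not part of the original argument; the choices $z=e^{i}$ and $a_n=1/n$ are illustrative assumptions) computes the partial sums and observes that the late ones cluster together, as the spiral picture suggests:

```python
import cmath

# Partial sums of sum_n a_n z^n for a sample z on the unit circle (z != 1)
# and a sample decreasing sequence a_n = 1/n tending to zero.
z = cmath.exp(1j)              # modulus 1, not equal to 1
partial = 0.0 + 0.0j
points = []
for n in range(1, 2001):
    partial += z**n / n        # a_n = 1/n
    points.append(partial)

# The late partial sums are close to each other: the spiral closes in.
late_spread = max(abs(p - points[-1]) for p in points[1000:])
print(late_spread)             # small compared with the early oscillations
```

Plotting `points` in the complex plane produces exactly the inward spiral described above.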
How about a rigorous proof? Well, the observation on which the proof is based is that the set of numbers of the form $z^{m+1}+z^{m+2}+\dots+z^n$ is bounded. Indeed, the formula for summing a geometric progression tells us that
$$z^{m+1}+z^{m+2}+\dots+z^n=\frac{z^{n+1}-z^{m+1}}{z-1},$$
which has modulus at most $2/|z-1|$.
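The formula and the bound are easy to verify numerically; here is a small sketch (the sampled angles are arbitrary choices of my own) checking both over a range of $m$ and $n$:

```python
import cmath

# Check the closed form for z^{m+1} + ... + z^n and the uniform bound
# 2/|z - 1| for a few sample points z = e^{i*theta} on the unit circle.
worst = 0.0
for theta in (0.3, 1.0, 2.5):
    z = cmath.exp(1j * theta)
    for m in range(5):
        for n in range(m + 1, 40):
            s = sum(z**k for k in range(m + 1, n + 1))
            closed = (z**(n + 1) - z**(m + 1)) / (z - 1)
            assert abs(s - closed) < 1e-9        # geometric-progression formula
            worst = max(worst, abs(s) * abs(z - 1) / 2)
print(worst)  # stays at most 1: |sum| <= 2/|z-1| uniformly in m and n
```

The key point for what follows is that the bound $2/|z-1|$ does not depend on $m$ or $n$.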
Since we know how to work out sums of the above form, it makes sense to try to use this information to investigate the sum $\sum_n a_n z^n$, which we do by breaking it up into sums of the form we like. We can set aside the convergence issues for now and just look at the finite sum $\sum_{n=1}^N a_n z^n$. In order to split this up into polynomials with constant coefficients (that is, multiples of sums of the form $z^{m+1}+\dots+z^n$), we begin by noting that the sequence $(a_1,a_2,\dots,a_N)$ can be split up as
$$(a_1,a_2,\dots,a_N)=(a_1-a_2)(1,0,\dots,0)+(a_2-a_3)(1,1,0,\dots,0)+\dots+(a_{N-1}-a_N)(1,\dots,1,0)+a_N(1,1,\dots,1).$$
The best motivation for this splitting comes from drawing a picture of the "graph" of the sequence $(a_n)$, which we are chopping up horizontally into layers of constant height.
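The splitting above is exactly Abel summation: writing $S_k = z + z^2 + \dots + z^k$, it says that $\sum_{n=1}^N a_n z^n = \sum_{k=1}^{N-1}(a_k - a_{k+1})S_k + a_N S_N$. Here is a sketch checking this identity numerically, again with the illustrative assumptions $z=e^{2i}$ and $a_n=1/n$:

```python
import cmath

# Abel summation: sum_{n=1}^N a_n z^n
#   = sum_{k=1}^{N-1} (a_k - a_{k+1}) S_k + a_N S_N,
# where S_k = z + z^2 + ... + z^k. Checked for a_n = 1/n.
z = cmath.exp(2j)
N = 50
a = lambda n: 1.0 / n
S = [0.0 + 0.0j]
for k in range(1, N + 1):
    S.append(S[-1] + z**k)     # S[k] = z + ... + z^k

lhs = sum(a(n) * z**n for n in range(1, N + 1))
rhs = sum((a(k) - a(k + 1)) * S[k] for k in range(1, N)) + a(N) * S[N]
print(abs(lhs - rhs))          # agrees up to floating-point rounding
```

Since each $|S_k|$ is at most $2/|z-1|$ and the coefficients $a_k - a_{k+1}$ are nonnegative with a telescoping sum, this rearrangement is what turns the boundedness of the geometric sums into control of the whole sum.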
To be continued soon.