Tricki
a repository of mathematical know-how

Dyadic decomposition

Quick description

When estimating a sum or integral of an expression that depends strongly on the magnitude of some quantity X, try decomposing the sum or integral based on the dyadic regions \{2^n \leq |X| < 2^{n+1}\} for n \in \Z. This is a special case of the divide and conquer strategy. (Occasionally, one also needs to deal with the exceptional set \{ X=0\}.)
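As a concrete (purely illustrative) instance of the strategy, the following Python sketch bounds a sum by grouping indices into dyadic blocks and estimating each block by its size times its largest term; the particular function f(k) = 1/k^2 and the truncation N are assumptions of the demonstration, not part of the general method.

```python
# A minimal sketch, assuming f is nonnegative and decreasing: group the
# indices k into dyadic blocks {2^n <= k < 2^(n+1)} and bound each block
# by (number of terms) x (largest term).  f(k) = 1/k^2 is illustrative.

def dyadic_upper_bound(f, N):
    """Upper bound for sum_{k=1}^{N} f(k), f nonnegative and decreasing."""
    total = 0.0
    n = 0
    while 2**n <= N:
        lo = 2**n
        hi = min(2**(n + 1) - 1, N)     # block {2^n <= k < 2^(n+1)}, clipped at N
        total += (hi - lo + 1) * f(lo)  # f decreasing, so f(lo) is the largest term
        n += 1
    return total

N = 10**6
f = lambda k: 1.0 / k**2
exact = sum(f(k) for k in range(1, N + 1))  # ~ pi^2/6
bound = dyadic_upper_bound(f, N)
assert exact <= bound <= 2 * exact          # the dyadic bound loses only a constant here
```

Here each block contributes 2^n \cdot 2^{-2n} = 2^{-n}, so the dyadic bound sums a geometric series, which is typical of the method.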

Prerequisites

Undergraduate calculus. For more geometric integrals, a good geometric intuition is also useful.

Example 1

The Cauchy condensation test is a classic example of dyadic decomposition in action.
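For readers who want a quick numerical illustration (not a proof), the following sketch compares a decreasing series with its condensed counterpart; the choice f(n) = 1/n^2 and the truncation points are arbitrary.

```python
# Numerical illustration (not a proof) of Cauchy condensation: for a
# nonnegative decreasing f, sum_{n>=1} f(n) is comparable to the condensed
# sum over dyadic scales, sum_{k>=0} 2^k f(2^k).  f(n) = 1/n^2 and the
# truncation points below are purely illustrative choices.

f = lambda n: 1.0 / n**2

S = sum(f(n) for n in range(1, 2**20))      # ordinary partial sum over n < 2^20
T = sum(2**k * f(2**k) for k in range(20))  # condensed sum over scales k < 20

# Grouping n into dyadic blocks {2^k <= n < 2^(k+1)} gives S <= T <= 2*S
# for decreasing nonnegative f:
assert S <= T <= 2 * S
```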

Example 2

Suppose we want to obtain upper and lower bounds for the integral

\int_{S^{n-1}} |x - \omega|^{-\alpha}\ d\omega

on the unit sphere S^{n-1} in \R^n, where d\omega is surface measure, \alpha > 0 is an exponent, and x \in \R^n is a parameter, which we assume does not lie on the unit sphere. For the purposes of this exercise, we are willing to lose multiplicative constants that depend on n and \alpha, but we are interested in the behavior in x (in particular, in how the integral degenerates as |x| \to 1).

Let's first deal with some easy cases. Suppose x is quite small, e.g. |x| \leq 1/2. Then we see that |x-\omega| is comparable to 1. In this case a simple application of the base times height method gives upper and lower bounds of 1 (up to constants).

Similarly, when x is quite large, e.g. |x| \geq 3/2, then |x-\omega| is comparable to |x|, and the base times height method now gives upper and lower bounds of |x|^{-\alpha} (again up to constants).

Now we turn to the intermediate region 1/2 < |x| < 3/2. Here, the base times height method does not work so well. Indeed, if we let r = ||x|-1|, this method gives an upper bound of r^{-\alpha} and a lower bound of 1 (up to constants); these two bounds diverge as r \to 0. The problem here is that the integrand |x-\omega|^{-\alpha} is no longer of the uniformly distributed type that is amenable to the base times height bounds; instead, the integrand is quite large when \omega is nearly parallel to x, and quite small when \omega is distant from x.

After looking at the geometry of the situation, it becomes clear that the most important parameter controlling the magnitude of the integrand is the angle \theta = \angle(x,\omega) between x and \omega. Indeed, using some trigonometry (and taking advantage of useful simplifications that are available when one is willing to lose constants, e.g. that \sin \theta is comparable to \theta for small \theta) we soon see that the distance |x-\omega| between x and \omega is comparable to r+\theta. In particular, we see that the "natural" scale for \theta is that of r.

Once one sees this, it becomes natural to dyadically decompose with respect to \theta, and specifically into the very near region \{ \theta < r \} and dyadic shells \{ 2^j r \leq \theta < 2^{j+1} r \} for j \geq 0. (It is pointless to decompose the very near region any further, because when \theta < r, \theta is no longer the dominant influence on the integrand.) Actually, since \theta cannot exceed \pi, we can stop the decomposition once j exceeds \log_2 \frac{1}{r} + O(1).

We can then apply the base times height method to each region. In the near region \{ \theta < r \}, the integrand has height bounded above and below by r^{-\alpha} (up to constants), and the base has measure r^{n-1} (up to constants), leading to a net bound of r^{n-1-\alpha}. Similarly, on each shell \{ 2^j r \leq \theta < 2^{j+1} r \}, the height is comparable to (2^j r)^{-\alpha}, and the base is comparable to (2^j r)^{n-1} (except possibly for the very last shell, but let's ignore this), leading to a net bound of 2^{j(n-1-\alpha)} r^{n-1-\alpha}. Summing this geometric series, we obtain final upper and lower bounds comparable to r^{n-1-\alpha} when \alpha > n-1, \log \frac{1}{r} when \alpha = n-1, and 1 when \alpha < n-1.
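The predicted asymptotics can be sanity-checked numerically in the plane (n = 2, where the sphere is the unit circle). The sketch below is illustrative only: the quadrature resolution, the choice \alpha = 2, and the comparison constants are assumptions of the demonstration, not part of the argument.

```python
import math

# Numerical sanity check of Example 2 for n = 2 (the unit circle), where
# the prediction for alpha > n-1 = 1 is that the integral is comparable to
# r^(n-1-alpha) = r^(1-alpha).  Quadrature resolution, alpha = 2, and the
# comparison constants below are illustrative assumptions.

def circle_integral(r, alpha, M=100000):
    """Midpoint-rule approximation of int_0^{2pi} |x - omega|^(-alpha) dtheta
    for x = (1 + r, 0) and omega = (cos t, sin t) on the unit circle."""
    h = 2 * math.pi / M
    total = 0.0
    for i in range(M):
        t = (i + 0.5) * h
        d = math.hypot(1.0 + r - math.cos(t), math.sin(t))
        total += d ** (-alpha) * h
    return total

alpha = 2.0
for r in (0.1, 0.01):
    ratio = circle_integral(r, alpha) / r ** (1 - alpha)
    assert 0.5 < ratio < 10   # the ratio stays bounded as r shrinks, as predicted
```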

General discussion

Dyadic decomposition is a fairly efficient technique that generally only "loses a logarithm" or "loses an epsilon exponent" compared to the optimal bound. This is because in many cases, the number of dyadic scales \{ 2^n \leq |X| < 2^{n+1}\} which are "active" or "important" is only logarithmically large with respect to some relevant parameter. For instance, if one is estimating a sum \sum_{k=1}^N c_k by decomposing k into dyadic regions \{ 2^n \leq k < 2^{n+1} \}, the number of such regions is only O(\log N), and so even if one only decides to estimate a single, "worst-case" region rather than handle all the regions at once, one only expects to be off by a factor of O(\log N) at worst.
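The O(\log N) count is easy to see concretely. In the sketch below, each dyadic block of the harmonic sum contributes a comparable amount, so even a worst-case single block is within a factor of the block count of the whole sum; the choice of the harmonic series here is purely illustrative.

```python
# Illustration of the "lose only a logarithm" heuristic: for the harmonic
# sum sum_{k=1}^{N} 1/k, every dyadic block {2^n <= k < 2^(n+1)} contributes
# between 1/2 and 1, so there are only O(log N) blocks, and the worst-case
# block times the block count already dominates the whole sum.

N = 2**20 - 1
blocks = []
n = 0
while 2**n <= N:
    lo, hi = 2**n, min(2**(n + 1), N + 1)
    blocks.append(sum(1.0 / k for k in range(lo, hi)))
    n += 1

assert len(blocks) == 20                         # O(log N) dyadic blocks
assert all(0.5 <= b <= 1.0 for b in blocks)      # each block contributes Theta(1)
assert max(blocks) * len(blocks) >= sum(blocks)  # worst block x #blocks >= total
```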

The only time when dyadic decomposition is truly inefficient is when one expects quite a lot of cancellation between different dyadic regions. In such cases, of course, this sort of decomposition should be avoided.

Sometimes (as one sees in Example 2) one does not want to dyadically decompose the very small or very large values of X; thus, for instance, one could decompose the region \{ |X| \geq 1 \} into dyadic regions \{2^n \leq |X| < 2^{n+1}\} for n \geq 0, but refuse to decompose the region \{|X| < 1\} any further. The choice of when to decompose and when not to varies from problem to problem; one has to weigh the benefits of a decomposition against the costs.

For an integral such as \int_{\R^d} f(x)\ dx, sometimes one wants to dyadically decompose the independent variable x, for instance into dyadic annuli \{ 2^n \leq |x| < 2^{n+1} \}. In other cases, it makes more sense to decompose the dependent variable f(x), for instance into level sets \{ 2^n \leq |f(x)| < 2^{n+1} \}. The optimal choice of decomposition is often determined by the geometry of the situation. If, for instance, one is trying to integrate a function f which is becoming singular on some set S (e.g. a point, a line, a surface, etc.), then it is natural to decompose dyadically based on the distance to that set S, e.g. into the regions \{ 2^n \leq \hbox{dist}(x,S) < 2^{n+1} \}.
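To make the two options concrete, here is an illustrative sketch that estimates \int_0^1 x^{-1/2}\ dx = 2 both ways; the integrand and the comparison constants are assumptions of the demonstration.

```python
# Illustrative sketch of the two decompositions for int_0^1 x^(-1/2) dx = 2
# (the integrand and constants here are illustrative choices).

f = lambda x: x ** -0.5

# (a) independent variable: on the dyadic interval {2^-(n+1) <= x < 2^-n},
# f is comparable to f(2^-n), and the interval has length 2^-(n+1).
est_x = sum(2.0 ** -(n + 1) * f(2.0 ** -n) for n in range(60))

# (b) dependent variable: the level set {2^n <= f(x) < 2^(n+1)} is the
# interval {4^-(n+1) < x <= 4^-n}, of measure 4^-n - 4^-(n+1), on which
# f is comparable to 2^n.
est_f = sum((4.0 ** -n - 4.0 ** -(n + 1)) * 2.0 ** n for n in range(60))

exact = 2.0
for est in (est_x, est_f):
    assert exact / 4 <= est <= 4 * exact  # both recover the answer up to constants
```

Both estimates reduce to geometric series, which is why either decomposition succeeds here; the geometry of the singularity (at the point x = 0) determines which one generalizes better.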

There is nothing sacred about powers of 2 here. Sometimes it is more advantageous to use a sparser dyadic decomposition, e.g. powers of R, where R is a parameter to be optimized later.

It is often advantageous to decompose using smooth cutoffs rather than sharp cutoffs, especially if there is oscillation present and one wishes to maximize the effect of cancellation.
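One standard way to build such smooth cutoffs is to telescope a single smooth bump across dyadic scales. The sketch below shows one such construction; the particular bump \psi used is an illustrative choice, not canonical.

```python
import math

# Sketch of a smooth dyadic partition of unity via telescoping (a standard
# construction; the particular bump psi below is one illustrative choice).
# psi is smooth, equal to 1 for |x| <= 1 and 0 for |x| >= 2; then
# phi_n(x) = psi(x / 2^n) - psi(x / 2^(n-1)) is supported in the dyadic
# shell 2^(n-1) <= |x| <= 2^(n+1), and the phi_n sum to 1 by telescoping.

def psi(x):
    """Smooth cutoff: 1 on [0, 1], 0 on [2, inf)."""
    x = abs(x)
    if x <= 1.0:
        return 1.0
    if x >= 2.0:
        return 0.0
    def g(t):  # smooth transition built from exp(-1/t) bumps
        return math.exp(-1.0 / t) if t > 0 else 0.0
    return g(2.0 - x) / (g(2.0 - x) + g(x - 1.0))

def phi(n, x):
    return psi(x / 2**n) - psi(x / 2**(n - 1))

x = 37.3                                   # any sample point with |x| > 1
partial = psi(x) + sum(phi(n, x) for n in range(1, 10))
assert abs(partial - psi(x / 2**9)) < 1e-12  # the sum telescopes
assert abs(partial - 1.0) < 1e-12            # |x| / 2^9 < 1, so psi there is 1
```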

See also this expository article of Tao.
