Establish an invariance principle first

Quick description

To prove a property P(x) for all x in some parameter space, first show that P(x) and P(y) are equivalent for every pair x, y in that space. Then one need only verify P(x) for a single x, which one can choose so as to make the verification as easy as possible.
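
Schematically, writing \Omega for the parameter space (notation introduced only for this summary), the strategy is to establish

\forall x, y \in \Omega: \ P(x) \iff P(y) \qquad \hbox{(invariance)}

together with P(x_0) for a single, well-chosen x_0 \in \Omega (base case); the two statements combine to give P(x) for all x \in \Omega.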

Prerequisites

Probability theory

Example 1

(Lindeberg replacement trick) Suppose one wants to prove the central limit theorem, viz. that if X_1,X_2,\ldots is a sequence of iid real-valued random variables with mean zero and variance 1, then the random variables S_n := \frac{X_1+\ldots+X_n}{\sqrt{n}} converge in distribution to the standard Gaussian random variable N(0,1). For simplicity let us assume that all moments of the X_i are finite. Lindeberg's proof of the central limit theorem proceeds in two steps:

  • (Base case) Verify the central limit theorem in the special case when the X_i are iid Gaussians, X_i \equiv N(0,1). In this case the theorem is easy, basically because the sum of two independent Gaussians is still a Gaussian; indeed, S_n is then itself a standard Gaussian for every n.

  • (Invariance) Show that for each k, the asymptotic limit \lim_{n \to \infty} {\Bbb E} S_n^k of the k^{th} moments of S_n remains unchanged if one replaces the iid sequence X_i by any other iid sequence, say Y_i, with the same mean and variance. This is done by expanding out {\Bbb E} S_n^k and observing that the terms involving three or more factors of some X_i give a negligible contribution, while the remaining terms (by the iid hypothesis) depend only on the first and second moments of the X_i; a sketch of this computation is given below.

Indeed, once one has the invariance principle, one simply replaces the X_i with an iid Gaussian sequence and uses the base case.
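
To spell out the counting behind the invariance step (a sketch only; the constant C_k below depends on k and on the moments of the X_i and Y_i, and is introduced just for this sketch): expanding the k^{th} power,

{\Bbb E} S_n^k = n^{-k/2} \sum_{1 \le i_1,\ldots,i_k \le n} {\Bbb E}[ X_{i_1} \cdots X_{i_k} ].

By independence, each summand factors into moments of the individual X_i; since the X_i have mean zero, a summand vanishes unless every index appearing in (i_1,\ldots,i_k) appears at least twice. If every index appears exactly twice, the summand is a product of second moments and thus equals 1, which is the same whether one uses the X_i or the Y_i. If some index appears three or more times, the tuple uses at most (k-1)/2 distinct indices, so there are at most C_k n^{(k-1)/2} such tuples, each contributing at most C_k; after dividing by n^{k/2}, their total contribution is O(n^{-1/2}) and vanishes in the limit. Hence \lim_{n \to \infty} {\Bbb E} S_n^k is unchanged when the X_i are replaced by the Y_i.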

General discussion

The replacement trick has also been used in random matrix theory; see this blog post for some further discussion.