
Useful heuristic principles for guessing probabilistic estimates

(Revision of Mon, 15/12/2008 - 21:10)

Quick description

If you are hoping to use the probabilistic method as part of a long and complicated proof, you will probably want to begin by producing a plausible sketch of the argument before you go into the technical details. For this purpose it is very useful to be able to guess upper (and sometimes lower) bounds for the probabilities of various complicated events. This article discusses heuristic principles that can help one do this, and illustrates them with examples.

Principle 1: pretend your variables are independent

This is the single most useful method for guessing probabilistic bounds: if you have some variables that exhibit a reasonable degree of independence, then they will probably give estimates very similar to the ones that would apply if they actually were independent. Of course, one needs to be clear about what "a reasonable degree of independence" might mean, so let us look at a few examples.

Example 1

Let G be a random graph with n vertices in which each edge is chosen independently with probability \lambda n^{-1}. Let \tau be the number of triangles in G. Then \tau is a random variable: how should we expect \tau to be distributed?

Given any triangle T in the complete graph on the n vertices of G, the probability that T belongs to G is \lambda^3n^{-3}. The number of such triangles is \binom n3, which is about n^3/6, so the expectation of \tau is about \lambda^3/6. Also, if T_1,\dots,T_k are edge-disjoint triangles, then the events "T_i belongs to G" are independent; and a typical pair of triangles is disjoint.
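Written out, this is just linearity of expectation:

\mathbb E\,\tau \;=\; \binom n3\bigl(\lambda n^{-1}\bigr)^3 \;\approx\; \frac{n^3}{6}\cdot\lambda^3n^{-3} \;=\; \frac{\lambda^3}{6}.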

It is clear, then, that we have a lot of independence about. What would happen if the events "triangle T belongs to G" were all independent? Then \tau would be a sum of \binom n3 Bernoulli random variables, each with probability \lambda^3n^{-3}. In other words, it would count how many of a large collection of independent, individually unlikely events occur. The probability distribution that is appropriate for this is the Poisson distribution, so we might guess that \tau is distributed roughly like a Poisson random variable of mean \lambda^3/6.
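Before trying to prove anything, one can sanity-check such a guess numerically. The following sketch (the parameters n = 50 and \lambda = 2, the trial count, and the helper name triangle_count are illustrative choices of ours, not from the text) samples random graphs and compares the empirical mean of \tau with \lambda^3/6:

```python
import random
from itertools import combinations

def triangle_count(n, p, rng):
    """Sample G(n, p) and return its number of triangles."""
    adj = [set() for _ in range(n)]
    edges = []
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
            edges.append((u, v))
    # A triangle contains three edges, so summing common neighbours
    # over all edges counts each triangle exactly three times.
    return sum(len(adj[u] & adj[v]) for u, v in edges) // 3

lam, n = 2.0, 50
rng = random.Random(0)
samples = [triangle_count(n, lam / n, rng) for _ in range(500)]
empirical_mean = sum(samples) / len(samples)
predicted_mean = lam ** 3 / 6  # the heuristic guess, about 1.33
```

For finite n the exact mean is \binom n3\lambda^3n^{-3}, slightly below \lambda^3/6, so the empirical mean should fall a little short of the heuristic value.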

This is not a proof of course, and it turns out to be a hard problem to determine the distribution of \tau. Nevertheless, the exercise of pretending that the events are independent is a helpful one: it gives us some idea what to expect, and it also gives us a starting point if we want to prove it. (The starting point would be the thought that we could look very carefully at the proof that \tau is approximately Poisson when the events are independent and try to relax the assumptions we make, allowing a small amount of dependence. And there are indeed important proofs like this in probabilistic combinatorics: see Janson's inequality, for instance.)

Example 2

A certain amount of care is needed even when one is just guessing. For instance, suppose we tried to use the same reasoning to count the number of copies of a graph S that consists of a triangle with one of its vertices joined to a fourth vertex (which is joined to nothing else). And suppose that the edges of G have been chosen with probability \lambda n^{-1} again. Then the number \sigma of copies of S has expectation about \lambda^4/2: each copy appears with probability \lambda^4n^{-4}, and the number of possible copies is about n^4/2 (the factor of 1/2 is there because S has a symmetry between two of its vertices). So do we expect the distribution of \sigma to be approximately Poisson with mean \lambda^4/2?
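One can probe this guess empirically as well. Each copy of S is a triangle together with a pendant edge at one of its vertices, so a triangle on \{u,v,w\} contributes \deg(x)-2 copies for each of its vertices x. A minimal sketch, again with illustrative parameter choices of our own (n = 50, \lambda = 2):

```python
import math
import random
from itertools import combinations

def count_copies_of_S(n, p, rng):
    """Sample G(n, p) and count copies of S (triangle plus pendant edge)."""
    adj = [set() for _ in range(n)]
    edges = []
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
            edges.append((u, v))
    total = 0
    for u, v in edges:  # here u < v
        for w in adj[u] & adj[v]:
            if w > v:  # visit each triangle exactly once
                # Each triangle vertex x has deg(x) - 2 neighbours outside
                # the triangle, and each such neighbour gives a copy of S.
                total += sum(len(adj[x]) - 2 for x in (u, v, w))
    return total

lam, n = 2.0, 50
rng = random.Random(1)
samples = [count_copies_of_S(n, lam / n, rng) for _ in range(400)]
empirical_mean = sum(samples) / len(samples)
zero_fraction = samples.count(0) / len(samples)
# If sigma were Poisson with mean lam**4 / 2 = 8, the value 0 would
# occur with probability about exp(-8), i.e. almost never.
poisson_zero = math.exp(-lam ** 4 / 2)
```

Comparing zero_fraction with poisson_zero is a quick probe: a Poisson variable of mean \lambda^4/2 = 8 would almost never be zero, so a large empirical zero-fraction would be evidence against the guess.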

Example 3

Now let us look at the same problem but

Parent article

Probabilistic combinatorics front page