Useful heuristic principles for guessing probabilistic estimates

Quick description

If you are hoping to use the probabilistic method as part of a long and complicated proof, you will probably want to begin by producing a plausible sketch of the argument before you go into the technical details. For this purpose it is very useful to be able to guess upper (and sometimes lower) bounds for the probabilities of various complicated events. This article discusses heuristic principles that can help one do this, and illustrates them with examples.

Principle 1: pretend your variables are independent

This is the single most useful method for guessing probabilistic bounds: if you have some variables that exhibit a reasonable degree of independence, then calculations that treat them as genuinely independent will probably give estimates very similar to the true ones. Of course, one needs to be clear about what "a reasonable degree of independence" might mean, so let us look at a few examples.

Example 1

Let G be a random graph with n vertices in which each edge is chosen independently with probability \lambda n^{-1}. Let \tau be the number of triangles in G. Then \tau is a random variable: how should we expect \tau to be distributed?

Given any triangle T in the complete graph on the n vertices of G, the probability that T belongs to G is \lambda^3n^{-3}. The number of such triangles is \binom n3, which is about n^3/6, so the expectation of \tau is about \lambda^3/6. Also, if T_1,\dots,T_k are disjoint triangles, then the events "T_i belongs to G" are independent; and a typical pair of triangles is disjoint.
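
In symbols, the calculation is just linearity of expectation:

\mathbb E\,\tau = \binom n3\,(\lambda n^{-1})^3 \approx (n^3/6)\,\lambda^3 n^{-3} = \lambda^3/6.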

It is clear, then, that we have a lot of independence about. What would happen if the events "triangle T belongs to G" were all independent? Then \tau would be a sum of \binom n3 Bernoulli random variables, each with probability \lambda^3n^{-3}. In other words, it would count how many of a large collection of independent unlikely events occur. The appropriate distribution for such a count is the Poisson distribution, so we might guess that \tau is distributed roughly like a Poisson random variable of mean \lambda^3/6.
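
As an informal check on this heuristic, one can simulate. The sketch below (a minimal sketch; the choices n=200, \lambda=1.5 and 1000 trials are illustrative assumptions, not part of the argument) samples random graphs, counts triangles, and compares the empirical distribution of \tau with the Poisson distribution of mean \lambda^3/6.

    # Minimal simulation sketch (illustrative parameters, not from the article):
    # sample G(n, lambda/n) repeatedly, count triangles, and compare the
    # empirical distribution of tau with Poisson(lambda^3 / 6).
    import math
    import random
    from collections import Counter

    def triangle_count(n, p, rng):
        # Sample each of the binom(n, 2) possible edges independently with probability p.
        adj = [set() for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    adj[i].add(j)
                    adj[j].add(i)
        # Count each triangle i < j < k exactly once.
        total = 0
        for i in range(n):
            for j in adj[i]:
                if j > i:
                    total += sum(1 for k in adj[i] & adj[j] if k > j)
        return total

    def poisson_pmf(mean, k):
        return math.exp(-mean) * mean ** k / math.factorial(k)

    n, lam, trials = 200, 1.5, 1000  # illustrative choices
    rng = random.Random(0)
    counts = Counter(triangle_count(n, lam / n, rng) for _ in range(trials))
    mean = lam ** 3 / 6
    for k in range(6):
        print(k, counts[k] / trials, round(poisson_pmf(mean, k), 4))

For these values the empirical frequencies should track the Poisson probabilities fairly closely, which is exactly what the independence heuristic predicts.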

This is not a proof of course, and it turns out to be a hard problem to determine the distribution of \tau. Nevertheless, the exercise of pretending that the events are independent is a helpful one: it gives us some idea what to expect, and it also gives us a starting point if we want to prove it. (The starting point would be the thought that we could look very carefully at the proof that \tau is approximately Poisson when the events are independent and try to relax the assumptions we make, allowing a small amount of dependence. And there are indeed important proofs like this in probabilistic combinatorics: see Janson's inequality, for instance.)

Example 2

A certain amount of care is needed even when one is just guessing. For instance, suppose we tried to use the same reasoning to count the number of copies of a graph S that consists of five vertices x_1,\dots,x_5, with the first four all joined to each other and the fifth joined just to the fourth (so S has seven edges). And suppose that the edges of G have been chosen with probability p=\lambda n^{-5/7}. Let \sigma be the number of copies of S in G. Then the expectation of \sigma is about p^7n^5/6=\lambda^7/6 (the factor of 1/6 is there because S has a symmetry between three of its vertices, namely x_1, x_2 and x_3). So do we expect the distribution of \sigma to be approximately Poisson with mean \lambda^7/6?
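
Spelled out (a supplementary display; writing \mathrm{Aut}(S) for the automorphism group of S):

\mathbb E\,\sigma = \frac{n(n-1)(n-2)(n-3)(n-4)}{|\mathrm{Aut}(S)|}\,p^7 \approx (n^5/6)\,\lambda^7 n^{-5} = \lambda^7/6,

since an ordered choice of x_1,\dots,x_5 determines a copy of S up to the 3!=6 permutations of x_1,x_2,x_3.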

To see that we should not, note that the expected number of copies of K_4 in G is p^6\binom n4, which is about \lambda^6n^{-30/7}n^4/24=\lambda^6n^{-2/7}/24. In other words, it is far less than 1 (for fixed \lambda and large n). What this implies is that although the expected number of copies of S is \lambda^7/6, the norm is to get no copies at all; just occasionally G contains a K_4 with a huge number of extra edges joined to its vertices, each of which completes a copy of S.
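
To quantify the clustering (a supplementary calculation, not in the original argument): condition on a fixed copy of K_4 being present in G. Each copy of S containing it is obtained by adding one pendant edge at one of its four vertices, so the expected number of copies of S through that K_4 is about

4(n-4)p \approx 4\lambda n^{2/7},

which tends to infinity. Reassuringly, multiplying this by the expected number of copies of K_4 gives (4\lambda n^{2/7})(\lambda^6n^{-2/7}/24)=\lambda^7/6, in agreement with the mean computed above.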

Example 3

Let us return to triangles, but this time let us take a larger value of p.

Parent article

Probabilistic combinatorics front page