Tricki
a repository of mathematical know-how

Applying the probabilistic method

Quick description

If you are trying to optimize a parameter associated with a combinatorial structure, and if the extremal examples appear to be highly "spread about" and unstructured, then you may well do best to consider examples that have been generated randomly. You are unlikely to prove an exact formula this way, but impressively sharp results can be obtained, often results that nobody knows how to prove in any other way.

This (unfinished) article is a general introduction to the method. It contains a few easy examples, together with some discussion about when one should expect it to be useful. Links to more detailed articles about specific techniques associated with the probabilistic method can be found on the probabilistic combinatorics front page.

Prerequisites

Basic concepts of combinatorics and graph theory.

Example 1

The Ramsey number R(k,k) is defined to be the smallest n such that if you colour the edges of the complete graph K_n with two colours, then there must be k vertices such that all the edges joining them have the same colour. We call such a collection of vertices a monochromatic K_k. There is a nice inductive argument due to Erdős and Szekeres that shows that R(k,k) is at most \binom{2k}k.
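As a concrete illustration of the definition (a sketch invented for this article, not part of the Erdős–Szekeres argument), one can verify by brute force that R(3,3)=6: some 2-colouring of the edges of K_5 has no monochromatic triangle, while every 2-colouring of K_6 has one. The function names below are hypothetical.

```python
from itertools import combinations, product

def has_mono_clique(colouring, n, k):
    """True if some k of the n vertices have all their joining edges one colour."""
    for subset in combinations(range(n), k):
        if len({colouring[e] for e in combinations(subset, 2)}) == 1:
            return True
    return False

def ramsey_holds(n, k):
    """True if every 2-colouring of the edges of K_n has a monochromatic K_k."""
    edges = list(combinations(range(n), 2))
    for bits in product([0, 1], repeat=len(edges)):
        if not has_mono_clique(dict(zip(edges, bits)), n, k):
            return False
    return True

print(ramsey_holds(5, 3))  # False: K_5 can avoid a monochromatic triangle
print(ramsey_holds(6, 3))  # True: K_6 cannot, so R(3,3) = 6
```

Exhaustive search like this is only feasible for tiny cases; already the exact value of R(5,5) is unknown, which is one reason indirect arguments are needed.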

There is also a simple but revolutionary argument of Erdős that proves that R(k,k) is at least (1+o(1))k2^{k/2}/e\sqrt{2}. It goes as follows. For each edge of K_n we toss a fair coin, and if it comes up heads then we colour the edge red, and if it comes up tails then we colour the edge blue. In other words, we colour the edges randomly. If we can show that for a suitable n the expected number of monochromatic K_ks is less than 1, then some colouring of K_n has no monochromatic K_k at all, and hence R(k,k)>n.

To see that this works, let us work out the expected number of monochromatic K_ks. First, we note that there are \binom nk possible sets of k vertices. Secondly, we note that for any given set of k vertices, the probability that all the edges linking the vertices are red is 2^{-\binom k2}, and so is the probability that these edges are all blue. Therefore, the expected number of monochromatic K_ks is 2\binom nk 2^{-\binom k2}.
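This expectation is easy to check numerically. The following sketch (the helper names are invented here) evaluates the exact formula 2\binom nk 2^{-\binom k2} and compares it with a Monte Carlo estimate obtained by colouring each edge with a fair coin toss:

```python
import math
import random
from itertools import combinations

def expected_mono(n, k):
    """Exact expected number of monochromatic K_k's: 2 * C(n,k) * 2^(-C(k,2))."""
    return 2 * math.comb(n, k) * 2 ** (-math.comb(k, 2))

def simulate_mono(n, k, trials=2000, seed=0):
    """Monte Carlo estimate: colour each edge of K_n by a coin toss, count mono K_k's."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        colouring = {e: rng.randrange(2) for e in combinations(range(n), 2)}
        for subset in combinations(range(n), k):
            if len({colouring[e] for e in combinations(subset, 2)}) == 1:
                total += 1
    return total / trials

print(expected_mono(10, 4))   # 6.5625
print(simulate_mono(10, 4))   # close to the exact value
```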

A straightforward computation shows (You want the details? OK, here are the details. Trivially, \binom nk\leq n^k/k!, and since k!\geq (k/e)^k this is at most (en/k)^k, so certainly at most e(en/k)^k. Therefore, the expectation is at most 2e(en/k)^k2^{-k(k-1)/2}. For this to be less than 1 we need 2e(en/k)^k to be less than 2^{k(k-1)/2}, for which it is sufficient if n<(2e)^{-1/k}k2^{k/2}/e\sqrt{2}. This one can check by taking kth roots: the condition becomes (2e)^{1/k}en/k<2^{(k-1)/2}=2^{k/2}/\sqrt{2}, which rearranges to the stated bound on n. So we really can conclude) that if n<(2e)^{-1/k}k2^{k/2}/e\sqrt{2}, then this expected number is less than 1. But this implies that there is a non-zero probability that the actual number is 0. In other words, there exists a colouring with no monochromatic K_k. Since (2e)^{-1/k}=1+o(1), this gives the bound R(k,k)\geq(1+o(1))k2^{k/2}/e\sqrt{2} stated above.
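One can also compare this sufficient condition with the true threshold. The sketch below (function names invented here) finds, for a given k, the largest n at which the expected number of monochromatic K_ks is still below 1, alongside the quantity (2e)^{-1/k}k2^{k/2}/e\sqrt{2}, which guarantees an expectation below 1 and so should never exceed that largest n:

```python
import math

def expected_mono(n, k):
    """Expected number of monochromatic K_k's: 2 * C(n,k) * 2^(-C(k,2))."""
    return 2 * math.comb(n, k) * 2 ** (-math.comb(k, 2))

def largest_good_n(k):
    """Largest n for which the expected number of monochromatic K_k's is below 1."""
    n = k
    while expected_mono(n + 1, k) < 1:
        n += 1
    return n

def sufficient_bound(k):
    """(2e)^(-1/k) * k * 2^(k/2) / (e*sqrt(2)): any n below this gives expectation < 1."""
    return (2 * math.e) ** (-1 / k) * k * 2 ** (k / 2) / (math.e * math.sqrt(2))

for k in [5, 10, 15, 20]:
    print(k, largest_good_n(k), round(sufficient_bound(k), 1))
```

The gap between the two columns reflects the crude estimate \binom nk\leq e(en/k)^k; the probabilistic argument is sharp only up to such constant-factor losses.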

General discussion

The above argument is an example of an averaging argument, since it depends on the principle that a random variable must have a non-zero probability of being at most its average (and also a non-zero probability of being at least its average). This very simple principle, closely related to Markov's inequality (which states that a non-negative random variable X satisfies \mathbb{P}[X\geq a]\leq\mathbb{E}X/a), is surprisingly powerful in probabilistic combinatorics. However, there are many circumstances where it is not strong enough: it is for this reason that the probabilistic method can be considered an entire area of mathematics rather than a single clever observation.
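A minimal numerical illustration of this averaging principle, using a toy Binomial variable invented for the sketch: for a non-negative integer random variable X, Markov's inequality with a=1 gives \mathbb{P}[X\geq 1]\leq\mathbb{E}X, so a mean below 1 forces a positive probability of the value 0.

```python
import random

rng = random.Random(1)

def sample_x():
    """A toy non-negative integer variable: Binomial(4, 0.2), with mean 0.8 < 1."""
    return sum(rng.random() < 0.2 for _ in range(4))

trials = 100_000
samples = [sample_x() for _ in range(trials)]
mean = sum(samples) / trials
p_at_least_one = sum(s >= 1 for s in samples) / trials

# Markov with a = 1: the probability of being at least 1 never exceeds the mean,
# so a mean below 1 leaves positive probability on the value 0.
print(mean, p_at_least_one)
```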

Comments

There might be a mistake

The article states: "There is also a simple but revolutionary argument of Erdős that proves that R(k,k) is at least (1+o(1))k2^{k/2}/e\sqrt{2}."

If (1+o(1))k2^{k/2}/e\sqrt{2} is the right estimate, then there might be a mistake in the "straightforward computation" part of the article: the sufficient condition on n seems to be missing the factor k, and so the final estimate of n is missing a k as well. (The \sqrt{2} in the denominator comes from writing 2^{(k-1)/2} as 2^{k/2}/\sqrt{2}.)