
Randomized algorithms front page

Quick description

Sometimes a very simple and efficient way of carrying out an algorithmic task is to make random choices. Indeed, there are many examples of tasks that have easy and fast randomized algorithms but no known efficient deterministic algorithms. The obvious disadvantage of randomization, that one cannot be certain that the algorithm will do what one wants, is in many situations not too important, since one can arrange for the probability of failure to be so small that in practice it is negligible. This article contains one very simple example of a randomized algorithm to illustrate the basic idea, and gives links to articles about randomized algorithms of various different types.

Example 1

A Boolean function is a function \phi from \{0,1\}^n to \{0,1\}. That is, \phi takes as its input a string of 0s and 1s of length n and outputs either 0 or 1. We can identify \phi with the set \mathcal{A} of strings on which it takes the value 1. Suppose that you are presented with a function \phi and asked to estimate the size of \mathcal{A}, and suppose also that you do not understand \phi well enough to be able to tell for theoretical reasons even roughly how big \mathcal{A} is. What can you do?

If n is very small, then you can simply work out \phi(\epsilon) for all the strings \epsilon=(\epsilon_1,\epsilon_2,\dots,\epsilon_n) and see how often you get 1. But if n=100, say, then this will take your computer far too long. However, there is a very easy way of getting round this problem if you are ready to make two small sacrifices: you will not try to find the exact answer, but just an approximation, and you will accept a very small probability that even the approximation will be wrong. In that case, you can simply choose m random strings (where the larger m is, the better your approximation will be and the smaller the probability that you do not get a good approximation), work out \phi for each string, and estimate the proportion of strings for which \phi=1 by the proportion of the m chosen strings for which \phi=1.

To see why this works, suppose that |\mathcal{A}|=p2^n; that is, the proportion of strings \epsilon for which \phi(\epsilon)=1 is p. Then if we choose m random strings, the number of those strings for which \phi=1 is binomially distributed with parameters m and p. For any fixed \delta>0 it can be shown that the probability that a binomial variable with parameters m and p differs by more than \delta m from its mean pm is at most 2\exp(-\delta^2m/4). (See Example 3 of the article on bounding probabilities by expectations for a proof.) Therefore, if we want to estimate p to within \delta and are prepared to accept a probability \rho of failure, then we can take m to be 4\log(2\rho^{-1})\delta^{-2}. Note that this bound does not even depend on n.
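
For concreteness, here is a minimal Python sketch of this sampling estimator. The function name estimate_density, its parameters, and the sample \phi used at the end are illustrative assumptions rather than anything from the article; the sample size m = \lceil 4\log(2\rho^{-1})\delta^{-2} \rceil simply follows the bound above.

    import math
    import random

    def estimate_density(phi, n, delta, rho):
        """Estimate the proportion of strings in {0,1}^n on which phi is 1.

        With probability at least 1 - rho, the estimate is within delta of
        the true proportion p, using m = ceil(4*log(2/rho)/delta^2) samples.
        """
        m = math.ceil(4 * math.log(2 / rho) / delta ** 2)
        hits = 0
        for _ in range(m):
            # Choose a uniformly random string of length n.
            epsilon = tuple(random.randint(0, 1) for _ in range(n))
            if phi(epsilon):
                hits += 1
        return hits / m

    # Illustrative phi: 1 exactly when the first two bits agree, so p = 1/2.
    print(estimate_density(lambda e: int(e[0] == e[1]), 100, 0.05, 0.01))

Note that the running time depends on m (and on the cost of evaluating \phi) but not on 2^n, which is the whole point of the method.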

Specific kinds of randomized algorithm

Elementary randomized algorithms. This article is about algorithms that, like the example above, exploit the fact that repeated trials of the same experiment almost always give rise to the same approximate behaviour in the long term. A famous example of such an algorithm is the randomized algorithm of Miller and Rabin for testing whether a positive integer is prime.
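
To give a flavour of the Miller-Rabin test mentioned above, here is a minimal Python sketch. It is an illustrative implementation rather than the linked article's own presentation; the number of rounds k is an assumed parameter, and each round that fails to find a witness of compositeness multiplies the error probability by a factor of at most 1/4.

    import random

    def is_probable_prime(n, k=20):
        """Miller-Rabin test: False means n is certainly composite;
        True means n is prime with error probability at most 4**(-k)."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
            if n % p == 0:
                return n == p
        # Write n - 1 = 2^r * d with d odd.
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(k):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x == 1 or x == n - 1:
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False  # a is a witness that n is composite
        return True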

Random sampling using Markov chains. Suppose that you want to generate, uniformly at random, some combinatorial structure or substructure. If the structure is simple enough, then there may be an easy way of converting n random bits into the structure you want, with the correct distribution. For example, to choose a random graph one can just pick each edge independently at random with probability 1/2. However, a trivial direct approach like this is often not possible: how, for example, would you choose a (labelled) tree uniformly at random? A commonly used technique in more difficult situations is to take a random walk through the "space" of all objects of interest and to prove that the walk is rapidly mixing, which means that after not too long the distribution of where the walk has reached is approximately uniform.
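
As a small illustration of the random-walk idea (not taken from the linked article), here is a Python sketch of single-site Glauber dynamics on the independent sets of a graph. Its stationary distribution is uniform over all independent sets, so after enough steps the returned set is approximately uniform; how many steps are "enough" is exactly the rapid-mixing question, which this sketch does not address. The function name and the adjacency-dictionary representation are assumptions made for the example.

    import random

    def random_independent_set(adj, steps):
        """Random walk on the independent sets of a graph.

        adj: dict mapping each vertex to the set of its neighbours.
        Returns the state of the walk after the given number of steps;
        if steps exceeds the mixing time, this is close to a uniformly
        random independent set.
        """
        current = set()            # start from the empty independent set
        vertices = list(adj)
        for _ in range(steps):
            v = random.choice(vertices)
            if random.random() < 0.5:
                # Try to include v: allowed only if no neighbour is present.
                if not (adj[v] & current):
                    current.add(v)
            else:
                # Otherwise remove v (a no-op if v is absent).
                current.discard(v)
        return current

    # Example: a path on four vertices.
    path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(random_independent_set(path, 1000))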

Property testing
