A fact of fundamental importance in computational number theory is that calculating $a^b$ mod $m$ can be done efficiently on a computer. The reason is simple: by repeatedly squaring $a$, one can work out $a^2$, $a^4$, $a^8$, $a^{16}$, ..., and then other powers can be calculated by taking products of these, chosen according to the binary expansion of $b$.
One example is enough to give the idea. Let us work out $3^{37}$ mod $100$. First, we repeatedly square mod $100$ until we have worked out $3^{2^k}$ for every $k$ such that $2^k \le 37$. We get $3^2 = 9$; $3^4 = 81$; $3^8 \equiv 61$ (because $81^2 = 6561$); $3^{16} \equiv 21$; $3^{32} \equiv 41$. Next, we observe that $37 = 32 + 4 + 1$. Therefore, $3^{37} = 3^{32} \cdot 3^4 \cdot 3 \equiv 41 \cdot 81 \cdot 3 = 9963 \equiv 63$ mod $100$.
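The repeated-squaring method can be sketched in a few lines of Python. This is an illustrative sketch, not a reference implementation; as a concrete check it evaluates $3^{37}$ mod $100$ (numbers chosen here for illustration) and can be compared against Python's built-in three-argument `pow`:

```python
def pow_mod(a, b, m):
    """Compute a**b mod m by repeated squaring.

    The k-th iteration holds a^(2^k) mod m in `square`; the bits of b's
    binary expansion decide which of these powers enter the product.
    """
    result = 1
    square = a % m
    while b > 0:
        if b & 1:  # this bit of b's binary expansion is set
            result = (result * square) % m
        square = (square * square) % m  # next repeated square, a^(2^(k+1)) mod m
        b >>= 1
    return result

print(pow_mod(3, 37, 100))  # agrees with pow(3, 37, 100)
```

Reducing mod $m$ after every multiplication, as in the two `% m` steps above, is what keeps the intermediate numbers small.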
This algorithm is considered efficient because the time it takes depends polynomially on the numbers of digits of $a$, $b$, and $m$. For instance, the number of steps taken by long multiplication of two $d$-digit numbers is roughly proportional to $d^2$ (and there are quicker methods that use the fast Fourier transform), and the number of multiplications we need to do in the above calculation is proportional to $\log b$, which is proportional to the number of digits of $b$. If we reduce mod $m$ every time we multiply two numbers together, then the numbers we have to multiply are always smaller than $m$. And reduction mod $m$ can also be done in time polynomial in the number of digits of the number to be reduced, which will always be at most $m^2$ and will therefore have at most twice as many digits as $m$.
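The claim about the number of multiplications can be made concrete with a small helper (the name `multiplication_count` is ours, introduced only for this illustration). It tallies the two kinds of multiplication the method uses: one squaring per extra binary digit of the exponent, plus one product for each further 1-bit in its binary expansion:

```python
def multiplication_count(b):
    """Multiplications used by repeated squaring for exponent b >= 1:
    squarings to reach the highest needed power a^(2^k), plus products
    to combine the powers selected by b's binary expansion."""
    squarings = b.bit_length() - 1
    products = bin(b).count("1") - 1
    return squarings + products

# The count grows with the number of binary digits of b, not with b itself.
for b in (37, 10**6, 10**100):
    print(b.bit_length(), multiplication_count(b))
```

Even for an exponent with a hundred decimal digits, the loop above reports only a few hundred multiplications, which is why the method is polynomial in the digit counts rather than in the numbers themselves.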