Numerical solution of nonlinear equations

Quick description

Nonlinear equations are ubiquitous in mathematics, and we need practical methods to solve them. Methods vary in their reliability and robustness, their speed, their ease of implementation, and their applicability. For example, the Newton-Raphson method can be applied to systems of nonlinear equations f:\R^n\to\R^n and can converge very fast, while bisection is extremely reliable but much slower, and can only be used for a single scalar equation in one variable.

Prerequisites

Calculus.

Example 1

Bisection is an extremely robust but relatively slow method for finding a root of a continuous function f:\R\to\R, once we have a<b where f(a) and f(b) have opposite signs.

One simply computes c\gets (a+b)/2 and determines the sign of f(c). Depending on that sign, one then updates either a\gets c or b\gets c, so that f(a) and f(b) again have opposite signs, and repeats the process. Of course, if f(c)=0, we stop. Normally we stop when |a-b|<\epsilon, a pre-determined tolerance.

As long as f is continuous, this method will converge thanks to the intermediate value theorem. Since the interval is halved at each step, roughly \log_2((b-a)/\epsilon) iterations are needed.
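
Here is a minimal Python sketch of the bisection loop just described; the test function and tolerance are illustrative choices, not part of the discussion above.

   import math

   def bisect(f, a, b, eps=1e-12):
       # Assumes f is continuous and f(a), f(b) have opposite signs.
       assert f(a) * f(b) < 0
       while abs(b - a) > eps:
           c = (a + b) / 2
           if f(c) == 0:
               return c
           # Keep the half-interval on which f changes sign.
           if f(a) * f(c) < 0:
               b = c
           else:
               a = c
       return (a + b) / 2

   # Example: the root of cos(x) - x in [0, 1].
   print(bisect(lambda x: math.cos(x) - x, 0.0, 1.0))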

Speed can be improved using so-called bracketing methods such as Dekker's or Brent's method.
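
For instance, SciPy exposes Brent's method as scipy.optimize.brentq; a usage sketch for the same illustrative root as above:

   from scipy.optimize import brentq
   import math

   # Brent's method needs only a sign-changing bracket [a, b].
   root = brentq(lambda x: math.cos(x) - x, 0.0, 1.0)
   print(root)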

Example 2

The Newton-Raphson method is based on the linear approximation f(x)\approx f(x_0)+\nabla f(x_0)(x-x_0) for x\approx x_0, where \nabla f is the Jacobian matrix of f. Solving the linearized equation f(x_0)+\nabla f(x_0)(x-x_0)=0 for x gives a new estimate

   x_1 = x_0 - (\nabla f(x_0))^{-1} f(x_0).

This can be repeated to give further improvements:

   x_{k+1} = x_k - (\nabla f(x_k))^{-1} f(x_k),\qquad k=0,1,2,3,\ldots.
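
A minimal Python sketch of this iteration, using NumPy and a user-supplied Jacobian; the example system, starting point, and tolerances are illustrative assumptions. Note that one solves the linear system rather than forming the inverse explicitly.

   import numpy as np

   def newton_raphson(f, jac, x0, eps=1e-10, max_iter=50):
       # Iterate x_{k+1} = x_k - (grad f(x_k))^{-1} f(x_k).
       x = np.asarray(x0, dtype=float)
       for _ in range(max_iter):
           fx = f(x)
           if np.linalg.norm(fx) < eps:
               break
           x = x - np.linalg.solve(jac(x), fx)
       return x

   # Example: x^2 + y^2 = 1 and x = y; converges to (1/sqrt(2), 1/sqrt(2)).
   f = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
   jac = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
   print(newton_raphson(f, jac, [1.0, 0.5]))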

The Newton-Raphson method can be guaranteed to converge provided the Jacobian matrix \nabla f(x^*) is non-singular at the solution x^* and x_0 is "sufficiently close" to x^*. When it does converge to x^* with \nabla f(x^*) non-singular and f sufficiently smooth, convergence is quadratic; that is, \|x_{k+1}-x^*\|/\|x_k-x^*\|^2 tends to a constant.

This rapid convergence is very desirable, and the number of iterations needed for an accuracy of \epsilon (so that \|f(x_k)\|<\epsilon) is O(\log\log(1/\epsilon)), provided everything works nicely.

However, Newton's method is not robust, and there has been a great deal of work on practical modifications of the method to prevent bad behavior. These modifications typically include guarding the step: setting d_k=-(\nabla f(x_k))^{-1}f(x_k) and then x_{k+1}\gets x_k+\alpha_k d_k, where 0<\alpha_k\leq 1 is chosen to ensure that (at least) \|f(x_{k+1})\|<\|f(x_k)\|.
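
A sketch of such a guarded step in Python; halving \alpha_k until the residual norm decreases is just one simple choice of guard, and the lower cutoff on \alpha is an illustrative safeguard.

   import numpy as np

   def guarded_newton(f, jac, x0, eps=1e-10, max_iter=100):
       x = np.asarray(x0, dtype=float)
       for _ in range(max_iter):
           fx = f(x)
           if np.linalg.norm(fx) < eps:
               break
           d = -np.linalg.solve(jac(x), fx)  # full Newton step d_k
           alpha = 1.0
           # Backtrack: halve alpha until ||f|| decreases.
           while np.linalg.norm(f(x + alpha * d)) >= np.linalg.norm(fx):
               alpha /= 2
               if alpha < 1e-16:
                   raise RuntimeError("step guard failed to make progress")
           x = x + alpha * d
       return x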

This indicates that the Newton-Raphson method is essentially a local method; some sort of "globalization" strategy is needed to handle practical situations.

Example 3

Methods that are much more reliable for highly nonlinear problems are homotopy (or continuation) methods. These are based on a homotopy h(x,\lambda) between the problem we want to solve, f(x)=0, and an easy problem g(x)=0: we require, for example, h(x,1)=f(x) and h(x,0)=g(x). We then follow the curve h(x,\lambda)=0 in (x,\lambda)\in\R^n\times[0,1] from the solution(s) of g(x)=0 to one or more solutions of f(x)=0.

Typically people use h(x,\lambda)=\lambda f(x)+(1-\lambda)(x-a) for some (random) a, so that the easy problem g(x)=x-a=0 has the unique solution x=a. Provided no solutions "escape to \infty" on this curve, and the curve is smooth, we can use local techniques (such as a modified version of Newton-Raphson based on linear least squares) to follow the curve. Using random a is useful because the generalized Morse-Sard theorem guarantees that the curve is smooth for almost all a. Conditions recognizable from topology or nonlinear analysis (such as x^T f(x)>0 for all \|x\|=R) are typically used to ensure that curves do not escape to infinity.
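
A naive Python sketch of this idea: step \lambda from 0 to 1 and correct with Newton's method on h(\cdot,\lambda) at each step. Practical path-following codes instead use predictor-corrector schemes with adaptive step sizes (and can traverse turning points in the curve); this simple "embedding" version is only illustrative.

   import numpy as np

   def homotopy_solve(f, jac, a, steps=100, eps=1e-10):
       # h(x, lam) = lam*f(x) + (1 - lam)*(x - a);
       # its Jacobian in x is lam*jac(x) + (1 - lam)*I.
       a = np.asarray(a, dtype=float)
       x = a.copy()                      # at lam = 0 the solution is x = a
       n = len(a)
       for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
           for _ in range(20):           # Newton corrector at this lam
               h = lam * f(x) + (1.0 - lam) * (x - a)
               if np.linalg.norm(h) < eps:
                   break
               dh = lam * jac(x) + (1.0 - lam) * np.eye(n)
               x = x - np.linalg.solve(dh, h)
       return x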

There are also piecewise affine versions of this idea, and in fact this is related to simplex-tableau type algorithms for finding solutions to nonlinear equations and (for example) linear complementarity problems.

General discussion