a repository of mathematical know-how

Numerical solution of ordinary differential equations

Quick description

Usually we think of numerical solution of initial value problems:

  \frac{dx}{dt} = f(t,x(t)),\qquad x(t_0)=x_0,

although sometimes we need to solve two-point boundary value problems:

  \frac{dx}{dt} = f(t,x(t)),\qquad \psi(x(t_0),x(t_1))=0.

The theory and practice of solving initial value problems are generally simpler, and they are the main focus here.

Methods for solving initial value problems can also be used to solve two-point boundary value problems via the shooting method (or variants such as multiple shooting) when combined with a general method for solving nonlinear equations (such as Newton's method or homotopy methods).
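As a small illustration of the shooting idea (the function names and the test problem x'' = -x with x(0)=0, x(\pi/2)=1 below are my own choices, not from the text), the BVP is reduced to root-finding on the unknown initial slope; this sketch pairs a basic Euler integrator with bisection:

```python
import math

def euler_system(f, t0, x0, t1, n=2000):
    # Fixed-step Euler for a first-order system (chosen for brevity;
    # any IVP solver could be substituted here).
    h = (t1 - t0) / n
    t, x = t0, list(x0)
    for _ in range(n):
        dx = f(t, x)
        x = [xi + h * di for xi, di in zip(x, dx)]
        t += h
    return x

def shoot(slope_lo, slope_hi, tol=1e-6):
    # BVP: x'' = -x, x(0) = 0, x(pi/2) = 1; exact solution x = sin t,
    # so the correct initial slope is x'(0) = 1.
    f = lambda t, x: [x[1], -x[0]]          # first-order form (x, x')
    def miss(s):                             # endpoint mismatch as a function of the slope
        return euler_system(f, 0.0, [0.0, s], math.pi / 2)[0] - 1.0
    lo, hi = slope_lo, slope_hi
    while hi - lo > tol:                     # bisection; Newton's method is typical in practice
        mid = 0.5 * (lo + hi)
        if miss(lo) * miss(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(shoot(0.0, 2.0))  # close to the exact slope 1
```

For systems with several unknown initial conditions, the bisection step is replaced by Newton's method (or a homotopy method) on the vector-valued endpoint mismatch.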

Stiff differential equations are common in practice and cause difficulty for numerical simulation. These are equations where the ratio of the largest to the smallest eigenvalue (in magnitude) of \nabla_x f(t,x) is very large. The name comes from the simulation of mechanical systems, but the stiffest systems seem to arise in the simulation of chemical reactions, where this ratio can be of the order of 10^{20} or more.

Stiff differential equations generally require implicit methods, in which computing x_{k+1} from x_k requires the solution of (nonlinear) equations. These are generally more complex to implement, but they give far superior performance on stiff problems and require less "tuning" of the step size to fit the problem at hand.
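As a hedged sketch of what "implicit" means in practice (the scalar test problem, names, and Newton tolerances are my own illustration), backward Euler solves x_{k+1} = x_k + h\,f(t_{k+1},x_{k+1}) at every step, here with Newton's method:

```python
import math

def backward_euler(f, df_dx, t0, x0, t_end, n, newton_iters=20, tol=1e-12):
    # Implicit (backward) Euler: each step solves
    #   y = x_k + h * f(t_{k+1}, y)   for y = x_{k+1}
    # by Newton's method (scalar case for simplicity).
    h = (t_end - t0) / n
    t, x = t0, x0
    for _ in range(n):
        t_next = t + h
        y = x                                   # initial guess: previous value
        for _ in range(newton_iters):
            g = y - x - h * f(t_next, y)        # residual of the implicit equation
            gp = 1.0 - h * df_dx(t_next, y)     # its derivative with respect to y
            step = g / gp
            y -= step
            if abs(step) < tol:
                break
        t, x = t_next, y
    return x

# Stiff test problem: x' = -1000*(x - cos t); the slow solution hugs cos t.
f = lambda t, x: -1000.0 * (x - math.cos(t))
dfdx = lambda t, x: -1000.0
print(backward_euler(f, dfdx, 0.0, 0.0, 1.0, 50))  # stable even with h = 0.02
```

With the same step size h = 0.02, explicit Euler would amplify errors by a factor of about |1 - 1000h| = 19 per step and blow up; the implicit step remains stable.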

Many codes for the numerical solution of differential equations are "automatic" in the sense that they adjust the step size to keep the estimated error within user-prescribed bounds. However, these methods are not completely foolproof, and care should be taken in their implementation to avoid falling into certain traps. Also, in optimal control problems where solution trajectories are used for optimization, the step-size adjustments are often made in discontinuous ways, making optimization of the resulting output extremely difficult.
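A toy sketch of such automatic step-size control (the step-doubling error estimate and the growth/shrink factors below are illustrative choices of mine; production codes use embedded error estimates and more careful safety factors):

```python
import math

def adaptive_euler(f, t0, x0, t_end, tol, h0=0.1):
    # Step-doubling error control: compare one Euler step of size h with
    # two steps of size h/2; their difference estimates the local error.
    t, x, h = t0, x0, h0
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        big = x + h * f(t, x)                          # one full step
        half = x + 0.5 * h * f(t, x)
        small = half + 0.5 * h * f(t + 0.5 * h, half)  # two half steps
        err = abs(small - big)
        if err <= tol * h or h < 1e-12:
            t, x = t + h, small                        # accept the more accurate value
            if err < 0.25 * tol * h:
                h *= 2.0                               # error comfortably small: grow h
        else:
            h *= 0.5                                   # reject the step: shrink h
    return x

print(abs(adaptive_euler(lambda t, x: x, 0.0, 1.0, 1.0, 1e-4) - math.e))
```

Note the doubling/halving of h: exactly the kind of discontinuous adjustment that makes trajectories from such codes awkward inputs for optimization.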


Prerequisites

Calculus, ordinary differential equations.

Example 1

Euler's method is very basic: x_{k+1}=x_k + h\,f(t_k,x_k) for k=0,1,2,\ldots. For a differential equation dx/dt=f(t,x) with f(t,x) Lipschitz (in both t and x) with constant L we can obtain a bound on the error

   \|x(t)-x_h(t)\|\leq \left(\frac{e^{(t-t_0)L}-1}{L}\right)\frac{1}{2}h\max_{\tau\in[t_0,t]}\|x''(\tau)\|.

We can bound x''(t) using x''(t)=(d/dt)f(t,x(t))=\nabla_x f(t,x(t))\,f(t,x(t))+\partial f/\partial t(t,x(t)) and noting that \|\nabla_x f(t,x(t))\|,\ \|\partial f/\partial t(t,x(t))\|\leq L for (almost all) t and x.
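To make the example concrete, here is a minimal fixed-step Euler implementation (the test problem x' = x and the names are my own illustration); the observed error shrinks roughly linearly in h, consistent with the bound above:

```python
import math

def euler(f, t0, x0, t_end, n):
    # Fixed-step Euler: x_{k+1} = x_k + h * f(t_k, x_k).
    h = (t_end - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x = x + h * f(t, x)
        t += h
    return x

# Test problem: x' = x, x(0) = 1, exact solution e^t.
approx = euler(lambda t, x: x, 0.0, 1.0, 1.0, 1000)
print(abs(approx - math.e))  # about 1.4e-3 for h = 0.001
```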

Example 2

Runge-Kutta methods are so-called one-step methods, which involve some intermediate computation in order to compute x_{k+1}\approx x(t_{k+1}) from x_k\approx x(t_k) alone, as opposed to multistep methods, where the computation of x_{k+1} requires x_k,\,x_{k-1},\ldots,x_{k-p} for p\geq 1. The number of intermediate function values needed is called the number of stages of the method.
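For instance, the classical fourth-order Runge-Kutta method uses four stages per step (the test problem and names below are my own illustration):

```python
import math

def rk4_step(f, t, x, h):
    # One step of the classical 4-stage, 4th-order Runge-Kutta method.
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Test problem: x' = x, x(0) = 1; ten steps of h = 0.1.
t, x = 0.0, 1.0
for _ in range(10):
    x = rk4_step(lambda t, x: x, t, x, 0.1)
    t += 0.1
print(abs(x - math.e))  # roughly 2e-6: 4th-order accuracy with only 10 steps
```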

Runge-Kutta methods can be represented by a Butcher tableau (see the Wikipedia article on Butcher tableaux). Runge-Kutta methods can achieve optimal order if the coefficients of the method are generated from the points and weights of Gaussian quadrature in the appropriate way (order 2m for m stages). The computational cost and difficulty of implementing implicit Runge-Kutta methods like these have been a barrier to their widespread adoption, although they are often considered the most powerful methods, especially for high-accuracy solution of stiff differential equations, or when there are special features to be preserved by the numerical method (e.g., symplectic methods).

Example 3

In multistep methods, x_{k+1} is computed in terms of x_k,\,x_{k-1},\ldots,x_{k-p} for p\geq 1. The best-known multistep methods are the Adams-Bashforth, Adams-Moulton, and backward differentiation formula (BDF) methods. All of these methods are based on interpolation: of f(t,x(t)) for the Adams methods, or of x(t) itself (using dx/dt(t_{k+1})=f(t_{k+1},x(t_{k+1})) to obtain the equation for x(t_{k+1})) for the BDF methods.

These methods can achieve very high order (although this is often not the most important consideration). However, they are difficult to get started, as x_{k-j} for j>0 must be computed somehow. The most successful methods of this type are variable-order, variable-step-size methods, such as C. W. Gear's old DIFSUB code. (There are much more modern codes available than this, though.) These start with a first-order method (which reduces to Euler or implicit Euler and requires no previous values) and an extremely small step size, and increase both the order and the step size as rapidly as possible. The order must be increased in order to allow the step size to increase, so the two go together.
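As a sketch of both the method and the startup problem (the names and the test problem are my own illustration), the two-step Adams-Bashforth method needs one value from another method before its recurrence can begin; here a single Euler step supplies it:

```python
import math

def ab2(f, t0, x0, t_end, n):
    # Two-step Adams-Bashforth:
    #   x_{k+1} = x_k + h * (3/2 f(t_k, x_k) - 1/2 f(t_{k-1}, x_{k-1})).
    # Started with one Euler step, since x_1 has no predecessor f value.
    h = (t_end - t0) / n
    t, x = t0, x0
    f_prev = f(t, x)
    x = x + h * f_prev              # Euler startup step
    t += h
    for _ in range(n - 1):
        f_cur = f(t, x)
        x = x + h * (1.5 * f_cur - 0.5 * f_prev)
        f_prev = f_cur
        t += h
    return x

print(abs(ab2(lambda t, x: x, 0.0, 1.0, 1.0, 100) - math.e))  # second-order error
```

Variable-order codes do the same thing more carefully: the low-order startup contaminates the solution less when the initial step size is kept extremely small, which is exactly the strategy described above.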

Care must be taken not to change the step size too often as this can cause some numerical instabilities in itself.

General discussion

