Tricki
a repository of mathematical know-how

To construct a function with several local constraints, start with simple cases and build from there

Quick description

Sometimes one is required to build a function that satisfies several constraints of a similar type in different places. Often the neatest method is to build a particularly simple class of functions that each satisfy just one of the constraints and to use these as building blocks for more complicated examples.

Attention. This article is in need of attention: it needs to be categorized. It is closely related to some of the articles linked to from the page Prove the result for some cases and deduce it for the rest, but the focus here is on finding examples rather than proving general results (even if one can develop these arguments into general results).

Prerequisites

Basic real analysis

Example 1

Suppose you want to find an infinitely differentiable function f:\R\rightarrow\R such that f(x)=0 whenever |x|\geq 2 and f(x)=1 whenever |x|\leq 1. It seems somewhat difficult for an infinitely differentiable function to be constant and then to become non-constant, so there are going to be at least four "difficult places" to think about: the points x=-2, -1, 1 and 2, where f has to change between constant and non-constant behaviour.

If we follow the advice given in the quick description, then we will think first about finding a function that has just one difficult place. That is, we would like any infinitely differentiable function that is constant on some range and non-constant on another range.

For the purposes of this article, we assume that the solution to that problem is given: let us take the best known example, which is f(x)=0 when x\leq 0 and f(x)=e^{-1/x^2} when x>0.
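As a quick numerical sanity check, here is this basic example in Python (the name f follows the article; the sketch itself is an illustration, not part of the article):

```python
import math

def f(x):
    """The standard example: 0 for x <= 0, exp(-1/x^2) for x > 0.
    Infinitely differentiable everywhere, including at x = 0, since
    every derivative of exp(-1/x^2) tends to 0 as x tends to 0 from above."""
    return 0.0 if x <= 0 else math.exp(-1.0 / x**2)

# Identically zero on the left, strictly positive on the right,
# joined smoothly at 0.
print(f(-1.0), f(0.0), f(0.5), f(2.0))
```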

From this one example, we can create a family of examples: the function f(x-t) will be zero up to t and positive after that, and the function f(t-x) will be positive for x<t and zero from t onwards. And if we want we can put coefficients in front of these functions, but let us not bother with this, since later we can take linear combinations.

What building methods do we have at our disposal now that we have a family of basic examples? Well, we have just mentioned linear combinations, and a product of infinitely differentiable functions is infinitely differentiable as well. We can also differentiate or integrate our existing examples. Let's see what we can do with these various tools.

A first step is to get a function that is zero outside an interval such as [-2,2] but not zero everywhere. For this we can take our basic function f above and build the function f(x+2)f(2-x). It is easy to see how to use functions like this to create functions that have many constant parts and many non-constant parts, but it is not so easy to see how to get the constant parts to take different values (unlike here, where the constant parts are both zero), so what else can we do?
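The product construction can be sketched in the same way (the name bump is chosen for illustration and is not from the article):

```python
import math

def f(x):
    """Basic example: 0 for x <= 0, exp(-1/x^2) for x > 0."""
    return 0.0 if x <= 0 else math.exp(-1.0 / x**2)

def bump(x):
    """f(x+2)f(2-x): infinitely differentiable, zero for |x| >= 2,
    strictly positive on (-2, 2)."""
    return f(x + 2.0) * f(2.0 - x)

# Zero outside [-2, 2], positive inside.
print(bump(-3.0), bump(0.0), bump(3.0))
```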

Let's try integration. If we integrate f(x+2)f(2-x), taking the antiderivative that vanishes at -2, that is g(x)=\int_{-2}^x f(t+2)f(2-t)\,dt, then we get a function that is zero up to -2, increasing on (-2,2), and constant but positive from 2 onwards. Now we're in business. By taking functions of the form \lambda g(\mu x+\nu) or \lambda g(\nu-\mu x) we can easily build functions that are 0 up to r and 1 from s onwards, or 1 up to r and 0 from s onwards. So we can solve our original problem by building one function that is 0 up to -2 and 1 from -1 onwards and another function that is 1 up to 1 and 0 from 2 onwards, and multiplying these two functions together.
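The whole construction can be sketched numerically. The integral of the bump has no closed form, so the sketch below approximates it by a Riemann sum; the names g, step_up and plateau, the grid size N, and the normalisation so that g equals 1 beyond 2 are all choices made for illustration:

```python
import math

def f(x):
    return 0.0 if x <= 0 else math.exp(-1.0 / x**2)

def bump(x):
    """Zero for |x| >= 2, strictly positive on (-2, 2)."""
    return f(x + 2.0) * f(2.0 - x)

# Total mass of the bump, for normalising g (Riemann sum on a fine grid).
N = 4000
TOTAL = sum(bump(-2.0 + 4.0 * k / N) for k in range(N + 1)) * 4.0 / N

def g(x):
    """Approximation to the integral of bump from -2 to x, normalised:
    0 up to -2, increasing on (-2, 2), exactly 1 from 2 onwards."""
    if x <= -2.0:
        return 0.0
    if x >= 2.0:
        return 1.0
    n = max(1, int(N * (x + 2.0) / 4.0))
    h = (x + 2.0) / n
    return sum(bump(-2.0 + h * k) for k in range(n + 1)) * h / TOTAL

def step_up(x, r, s):
    """0 up to r, 1 from s onwards (r < s): a rescaled copy of g."""
    return g(-2.0 + 4.0 * (x - r) / (s - r))

def plateau(x):
    """Product of a step from 0 to 1 on [-2,-1] and a step from 1 to 0
    on [1,2]: equals 1 on [-1, 1] and 0 outside (-2, 2)."""
    return step_up(x, -2.0, -1.0) * step_up(-x, -2.0, -1.0)

print(plateau(0.0), plateau(-1.5), plateau(3.0))
```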

Example 2

Suppose you want to build a polynomial of degree at most d that takes prescribed values at d+1 distinct points x_1,\dots,x_{d+1}. That is, you would like P(x_i) to equal a_i for each i.

A first step might be simply to find a polynomial of degree d that vanishes at every x_j apart from x_i and does not vanish at x_i. How might we do that? Well, it has to be a multiple of \prod_{j\ne i}(x-x_j), and ... er ... well, there's an example. By dividing by an appropriate constant (which is equal to \prod_{j\ne i}(x_i-x_j)) we can get this polynomial to be 1 at x_i and 0 at all the other x_j. Let us call the resulting polynomial P_i.

But now we see that \sum_i a_iP_i takes the value a_i at x_i, and we are done.
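The construction of the P_i and of \sum_i a_iP_i, which is Lagrange interpolation, is short enough to sketch directly (the function names and the sample data are illustrative):

```python
def lagrange_basis(xs, i, x):
    """P_i: the polynomial that is 1 at xs[i] and 0 at every other xs[j].
    Computed as prod(x - xs[j]) / prod(xs[i] - xs[j]) over j != i."""
    num, den = 1.0, 1.0
    for j, xj in enumerate(xs):
        if j != i:
            num *= (x - xj)
            den *= (xs[i] - xj)
    return num / den

def interpolate(xs, ys, x):
    """P(x) = sum_i a_i P_i(x): takes the value ys[i] at xs[i]."""
    return sum(a * lagrange_basis(xs, i, x) for i, a in enumerate(ys))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 0.0, 5.0]
print([interpolate(xs, ys, xi) for xi in xs])  # reproduces ys at the nodes
```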

Example 3

Suppose that you want to find a polynomial that approximates some given continuous function on [0,1]. An easy observation is that every continuous function f on [0,1] can be approximated by a piecewise linear function (since f is uniformly continuous, so you can just linearly interpolate between points (j/n,f(j/n)) for some large n). So it is enough to approximate piecewise linear functions.
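A quick numerical check of this observation, using \sin as a stand-in for an arbitrary continuous function (the choices of function and of n are illustrative):

```python
import math

def pl_interp(f, n, x):
    """Piecewise linear interpolant of f through the points
    (j/n, f(j/n)) for j = 0, ..., n, evaluated at x in [0, 1]."""
    j = min(int(x * n), n - 1)
    x0, x1 = j / n, (j + 1) / n
    return f(x0) + (f(x1) - f(x0)) * (x - x0) / (x1 - x0)

# Maximum error over a fine grid: small, and it shrinks as n grows,
# by the uniform continuity of f on [0, 1].
f, n = math.sin, 100
err = max(abs(pl_interp(f, n, k / 1000) - f(k / 1000)) for k in range(1001))
print(err)
```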

Again, one can start with a basic example, such as f(x)=0 when x\leq 0 and f(x)=x when x>0. If you can approximate that on the interval [-1,1], then you know how to approximate any function that is zero up to t and linear with gradient \lambda from t onwards. And it is easy to produce a combination of such functions (plus a constant to get started) that equals any given piecewise linear function on [-1,1].