Derivative test

In calculus, a derivative test uses the derivatives of a function to locate its critical points and to determine whether each such point is a local maximum, a local minimum, or a saddle point. Derivative tests can also give information about the concavity of a function.

The usefulness of derivatives to find extrema is proved mathematically by Fermat's theorem of stationary points.

First-derivative test

The first-derivative test examines a function's monotonic properties (where the function is increasing or decreasing), focusing on a particular point in its domain. If the function "switches" from increasing to decreasing at the point, then the function achieves a highest value at that point. Similarly, if the function "switches" from decreasing to increasing at the point, then it achieves a lowest value at that point. If the function fails to "switch" and remains increasing or remains decreasing, then no highest or lowest value is achieved.

One can examine a function's monotonicity without calculus. However, calculus is usually helpful because there are sufficient conditions that guarantee the monotonicity properties above, and these conditions apply to the vast majority of functions one would encounter.

Precise statement of monotonicity properties

Stated precisely, suppose that f is a real-valued function defined on some open interval containing the point x and suppose further that f is continuous at x.

  • If there exists a positive number r > 0 such that f is weakly increasing on (x − r, x] and weakly decreasing on [x, x + r), then f has a local maximum at x.
  • If there exists a positive number r > 0 such that f is weakly decreasing on (x − r, x] and weakly increasing on [x, x + r), then f has a local minimum at x.
  • If there exists a positive number r > 0 such that f is strictly increasing on (x − r, x] and strictly increasing on [x, x + r), then f is strictly increasing on (x − r, x + r) and does not have a local maximum or minimum at x; the same holds with "strictly decreasing" in place of "strictly increasing".

Note that in the first two cases, f is not required to be strictly increasing or strictly decreasing to the left or right of x, while in the last case, f is required to be strictly increasing or strictly decreasing. The reason is that in the definition of local maximum and minimum, the inequality is not required to be strict: e.g. every value of a constant function is considered both a local maximum and a local minimum.
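
As an illustration of the criterion, the absolute value function f(x) = |x| is weakly decreasing on (−r, 0] and weakly increasing on [0, r) for every r > 0, so it has a local minimum at 0 even though it is not differentiable there. The following sketch checks this numerically; it is only illustrative, and the helper names are not standard library functions.

    def is_weakly_increasing(values):
        """Return True if the sampled values never decrease."""
        return all(a <= b for a, b in zip(values, values[1:]))

    def has_local_min_by_monotonicity(f, x0, r=1e-3, samples=101):
        """Check the criterion above on a grid: f weakly decreasing on [x0 - r, x0]
        and weakly increasing on [x0, x0 + r] suggests a local minimum at x0."""
        step = r / (samples - 1)
        left = [f(x0 - r + k * step) for k in range(samples)]    # grid on [x0 - r, x0]
        right = [f(x0 + k * step) for k in range(samples)]       # grid on [x0, x0 + r]
        return is_weakly_increasing(list(reversed(left))) and is_weakly_increasing(right)

    print(has_local_min_by_monotonicity(abs, 0.0))   # True: |x| has a local minimum at 0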

Precise statement of first-derivative test

The first-derivative test depends on the "increasing–decreasing test", which is itself ultimately a consequence of the mean value theorem. It is a direct consequence of the connection between the sign of the derivative and the local increase or decrease of a function, combined with the monotonicity statements of the previous section.

Suppose f is a real-valued function of a real variable defined on some interval containing the critical point a. Further suppose that f is continuous at a and differentiable on some open interval containing a, except possibly at a itself.

  • If there exists a positive number r > 0 such that for every x in (a − r, a) we have f′(x) ≥ 0, and for every x in (a, a + r) we have f′(x) ≤ 0, then f has a local maximum at a.
  • If there exists a positive number r > 0 such that for every x in (a − r, a) we have f′(x) ≤ 0, and for every x in (a, a + r) we have f′(x) ≥ 0, then f has a local minimum at a.
  • If there exists a positive number r > 0 such that for every x in (a − r, a) ∪ (a, a + r) we have f′(x) > 0, then f is strictly increasing at a and has neither a local maximum nor a local minimum there.
  • If none of the above conditions hold, then the test fails. (Such a condition is not vacuous; there are functions that satisfy none of the first three conditions, e.g. f(x) = x² sin(1/x).)

Again, corresponding to the comments in the section on monotonicity properties, note that in the first two cases, the inequality is not required to be strict, while in the third, strict inequality is required.
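
As a concrete, purely illustrative sketch of these four cases, one can sample the sign of the derivative on either side of a critical point; the helper below is hypothetical and assumes the derivative df is supplied as a vectorized function.

    import numpy as np

    def first_derivative_test(df, a, r=1e-3, samples=200):
        """Classify the critical point a from the sign of the derivative df
        on (a - r, a) and (a, a + r), following the four cases above."""
        left = df(np.linspace(a - r, a, samples, endpoint=False))        # points in [a - r, a)
        right = df(np.linspace(a, a + r, samples, endpoint=False)[1:])   # points in (a, a + r)
        if np.all(left >= 0) and np.all(right <= 0):
            return "local maximum"
        if np.all(left <= 0) and np.all(right >= 0):
            return "local minimum"
        if np.all(left > 0) and np.all(right > 0):
            return "no extremum (strictly increasing through a)"
        return "test fails"

    # f(x) = x^3 - 3x has critical points at x = -1 and x = 1.
    df = lambda x: 3 * x**2 - 3
    print(first_derivative_test(df, -1.0))   # local maximum
    print(first_derivative_test(df, 1.0))    # local minimum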

Applications

The first-derivative test is helpful in solving optimization problems in physics, economics, and engineering. In conjunction with the extreme value theorem, it can be used to find the absolute maximum and minimum of a real-valued function defined on a closed and bounded interval. In conjunction with other information such as concavity, inflection points, and asymptotes, it can be used to sketch the graph of a function.
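
A minimal sketch of the closed-interval procedure (the function and interval are arbitrary choices for illustration): compare the function's values at its critical points and at the endpoints, and pick the largest and smallest.

    import numpy as np

    f = lambda x: x**3 - 3 * x
    # Candidates on [-2, 3]: the endpoints and the critical points x = ±1 (where f'(x) = 3x^2 - 3 = 0).
    candidates = np.array([-2.0, -1.0, 1.0, 3.0])
    values = f(candidates)
    print("absolute maximum:", values.max(), "at x =", candidates[np.argmax(values)])   # 18 at x = 3
    print("absolute minimum:", values.min(), "at x =", candidates[np.argmin(values)])   # -2, attained at x = -2 and x = 1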

Second-derivative test (single variable)

After establishing the critical points of a function, the second-derivative test uses the value of the second derivative at those points to determine whether each is a local maximum or a local minimum.[1] If the function f is twice-differentiable at a critical point x (i.e. a point where f′(x) = 0), then:

  • If f″(x) < 0, then f has a local maximum at x.
  • If f″(x) > 0, then f has a local minimum at x.
  • If f″(x) = 0, the test is inconclusive.

In the last case, Taylor's Theorem may sometimes be used to determine the behavior of f near x using higher derivatives.
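
The procedure can be carried out symbolically; the following SymPy sketch (with an arbitrarily chosen example function) finds the critical points and applies the three cases above.

    import sympy as sp

    x = sp.symbols('x')
    f = x**3 - 3*x                       # example function
    fp, fpp = sp.diff(f, x), sp.diff(f, x, 2)

    for c in sp.solve(sp.Eq(fp, 0), x):  # critical points: x = -1 and x = 1
        curvature = fpp.subs(x, c)
        if curvature < 0:
            print(c, "local maximum")    # x = -1
        elif curvature > 0:
            print(c, "local minimum")    # x = 1
        else:
            print(c, "test inconclusive")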

Proof of the second-derivative test

Suppose we have f″(x) > 0 (the proof for f″(x) < 0 is analogous). By assumption, f′(x) = 0. Then

f″(x) = lim_{h → 0} [f′(x + h) − f′(x)] / h = lim_{h → 0} f′(x + h) / h > 0.

Thus, for h sufficiently small we get

f′(x + h) / h > 0,

which means that f′(x + h) < 0 if h < 0 (intuitively, f is decreasing as it approaches x from the left), and that f′(x + h) > 0 if h > 0 (intuitively, f is increasing as we go right from x). Now, by the first-derivative test, f has a local minimum at x.

Concavity test

A related but distinct use of second derivatives is to determine whether a function is concave up or concave down at a point. It does not, however, provide information about inflection points. Specifically, a twice-differentiable function f is concave up if f″(x) > 0 and concave down if f″(x) < 0. Note that if f(x) = x⁴, then x = 0 has zero second derivative, yet is not an inflection point, so the second derivative alone does not give enough information to determine whether a given point is an inflection point.
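
A short SymPy check of this caveat (the comparison with x³ is an added illustration): both x⁴ and x³ have vanishing second derivative at 0, but only x³ has an inflection point there, because only there does f″ change sign.

    import sympy as sp

    x = sp.symbols('x')
    for f in (x**4, x**3):
        fpp = sp.diff(f, x, 2)
        print(f, "| f'' at 0:", fpp.subs(x, 0), "| f'' just left/right:", fpp.subs(x, -0.1), fpp.subs(x, 0.1))
    # x**4: f'' = 12x**2 is nonnegative on both sides (no sign change, no inflection).
    # x**3: f'' = 6x changes sign from negative to positive (inflection point at 0).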

Higher-order derivative test

The higher-order derivative test or general derivative test is able to determine whether a function's critical points are maxima, minima, or points of inflection for a wider variety of functions than the second-order derivative test. As shown below, the second-derivative test is mathematically identical to the special case of n = 1 in the higher-order derivative test.

Let f be a real-valued, sufficiently differentiable function on an interval I ⊂ ℝ, let c ∈ I, and let n ≥ 1 be a natural number. Also let all the derivatives of f at c be zero up to and including the n-th derivative, but with the (n + 1)-th derivative being non-zero:

f′(c) = f″(c) = ⋯ = f^(n)(c) = 0 and f^(n+1)(c) ≠ 0.

There are four possibilities, the first two cases where c is an extremum, the second two where c is a (local) saddle point:

  • If n is odd and f^(n+1)(c) < 0, then c is a local maximum.
  • If n is odd and f^(n+1)(c) > 0, then c is a local minimum.
  • If n is even and f^(n+1)(c) < 0, then c is a strictly decreasing point of inflection.
  • If n is even and f^(n+1)(c) > 0, then c is a strictly increasing point of inflection.

Since n must be either odd or even, this analytical test classifies any stationary point of f, so long as a nonzero derivative shows up eventually.
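
A sketch of the test in SymPy (the function classify_stationary_point is an illustrative helper, not a library routine): find the smallest k ≥ 1 with f^(k)(c) ≠ 0, set n = k − 1, and read off the four cases above.

    import sympy as sp

    def classify_stationary_point(f, x, c, max_order=10):
        """Higher-order derivative test at x = c."""
        for k in range(1, max_order + 1):
            d = sp.diff(f, x, k).subs(x, c)          # k-th derivative at c
            if d != 0:
                n = k - 1
                if n == 0:
                    return "not a stationary point (f'(c) != 0)"
                if n % 2 == 1:                       # n odd: extremum
                    return "local maximum" if d < 0 else "local minimum"
                # n even: saddle (inflection) point
                return ("strictly decreasing" if d < 0 else "strictly increasing") + " point of inflection"
        return "inconclusive: all derivatives up to max_order vanish at c"

    x = sp.symbols('x')
    print(classify_stationary_point(x**6, x, 0))   # local minimum (n = 5)
    print(classify_stationary_point(x**3, x, 0))   # strictly increasing point of inflection (n = 2)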

Example

Say we want to perform the general derivative test on the function f(x) = x⁶ at the point x = 0. To do this, we calculate the derivatives of the function and then evaluate them at the point of interest until the result is nonzero:

f′(x) = 6x⁵, f′(0) = 0;
f″(x) = 30x⁴, f″(0) = 0;
f‴(x) = 120x³, f‴(0) = 0;
f^(4)(x) = 360x², f^(4)(0) = 0;
f^(5)(x) = 720x, f^(5)(0) = 0;
f^(6)(x) = 720, f^(6)(0) = 720.

As shown above, at the point 0 the function has all of its derivatives at 0 equal to 0 up to and including the 5th, while the 6th derivative is positive. Thus n = 5, and by the test, there is a local minimum at 0.

Multivariable case

For a function of more than one variable, the second-derivative test generalizes to a test based on the eigenvalues of the function's Hessian matrix at the critical point. In particular, assume that all second-order partial derivatives of f are continuous on a neighbourhood of a critical point x. If the eigenvalues of the Hessian at x are all positive, then x is a local minimum; if the eigenvalues are all negative, then x is a local maximum; and if some are positive and some negative, then the point is a saddle point. If the Hessian matrix is singular, then the second-derivative test is inconclusive.
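
A small SymPy sketch of the multivariable test (the two-variable function is an arbitrary example): compute the Hessian, evaluate it at each critical point, and inspect the signs of its eigenvalues.

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**3 - 3*x + y**2                               # critical points at (1, 0) and (-1, 0)

    grad = [sp.diff(f, v) for v in (x, y)]
    H = sp.hessian(f, (x, y))

    for point in sp.solve(grad, (x, y), dict=True):     # solve grad f = 0
        eigs = list(H.subs(point).eigenvals())
        if all(e > 0 for e in eigs):
            verdict = "local minimum"
        elif all(e < 0 for e in eigs):
            verdict = "local maximum"
        elif any(e > 0 for e in eigs) and any(e < 0 for e in eigs):
            verdict = "saddle point"
        else:
            verdict = "inconclusive (singular Hessian)"
        print(point, verdict)    # (-1, 0): saddle point; (1, 0): local minimum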

Further reading

  • Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (Third ed.). New York: McGraw-Hill. pp. 231–267. ISBN 0-07-010813-7.
  • Marsden, Jerrold; Weinstein, Alan (1985). Calculus I (2nd ed.). New York: Springer. pp. 139–199. ISBN 0-387-90974-5.
  • Shockley, James E. (1976). The Brief Calculus : with Applications in the Social Sciences (2nd ed.). New York: Holt, Rinehart & Winston. pp. 77–109. ISBN 0-03-089397-6.
  • Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks Cole Cengage Learning. ISBN 978-0-495-01166-8.
  • Willard, Stephen (1976). Calculus and its Applications. Boston: Prindle, Weber & Schmidt. pp. 103–145. ISBN 0-87150-203-8.

References
