Taylor Series

A Taylor series is a polynomial of infinite degree that can be used to represent many different functions, particularly functions that aren't polynomials. Taylor series have applications ranging from classical and modern physics to the computations your hand-held calculator makes when evaluating trigonometric expressions.

Taylor series are both useful and elegant, as the two examples below show.

\[\int_0^x \frac{\sin t}{t}\,dt = x - \frac{x^3}{3\cdot 3!} + \frac{x^5}{5\cdot 5!} - \frac{x^7}{7\cdot 7!} + \cdots = \sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{(2n+1)\cdot (2n+1)!}\]

Here, a Taylor series is being used to evaluate an integral whose integrand has no antiderivative expressible in terms of elementary functions.
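To make this concrete, here is a minimal Python sketch (the function names are my own) that sums the first few terms of the series and checks the result against a crude numerical integration:

```python
import math

def si_series(x, terms=10):
    """Partial sum of the series above for the integral of sin(t)/t from 0 to x."""
    total = 0.0
    for n in range(terms):
        k = 2 * n + 1
        total += (-1) ** n * x ** k / (k * math.factorial(k))
    return total

def si_midpoint(x, steps=100_000):
    """Midpoint Riemann sum of sin(t)/t; midpoints avoid t = 0."""
    dt = x / steps
    return sum(math.sin((i + 0.5) * dt) / ((i + 0.5) * dt) * dt
               for i in range(steps))

print(si_series(2.0))    # ≈ 1.605413
print(si_midpoint(2.0))  # agrees to several decimal places
```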

\[\sum_{n=0}^{\infty}(-1)^n\frac{4}{2n+1} = 4 - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \cdots = \pi \]

Here, elegant use of a Taylor series gives us the exact value of \(\pi\).
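A direct computation of partial sums illustrates both the correctness and the notoriously slow convergence of this series (a minimal sketch):

```python
# Partial sums of 4 - 4/3 + 4/5 - 4/7 + ..., which converge to pi
def leibniz_pi(terms):
    return sum((-1) ** n * 4 / (2 * n + 1) for n in range(terms))

print(leibniz_pi(10))         # ≈ 3.0418, still rough
print(leibniz_pi(1_000_000))  # ≈ 3.141592, roughly six correct digits
```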


Introduction

Let \(f(x)\) be a real-valued function that is infinitely differentiable at \(x = x_0\). The Taylor series expansion for the function \(f(x)\) centered around the point \(x = x_0\) is given by

\[\sum_{n=0}^{\infty}f^{(n)}(x_0)\frac{(x - x_0)^n}{n!}.\]

Note that \(f^{(n)}(x_0)\) represents the \(n^\text{th}\) derivative of \(f(x)\) at \(x = x_0\).
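The definition translates almost line for line into code. The sketch below (the function name is my own) uses the sympy library to build the degree-\(N\) partial sum of this series for a given \(f\):

```python
import sympy as sp

x = sp.symbols('x')

def taylor_polynomial(f, x0, N):
    """Sum of f^(n)(x0) * (x - x0)^n / n! for n = 0, 1, ..., N."""
    return sum(sp.diff(f, x, n).subs(x, x0) * (x - x0) ** n / sp.factorial(n)
               for n in range(N + 1))

print(taylor_polynomial(sp.cos(x), 0, 4))  # e.g. x**4/24 - x**2/2 + 1
```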

It is not immediately obvious how this definition constructs a polynomial of infinite degree equivalent to the original function, \(f(x)\). Perhaps we can gain an understanding by writing out the first several terms of the Taylor series for \(f(x) = \cos x\) centered at \(x = 0\). Note that there is nothing special about using \(x = 0\) other than its ease of computation; any other center is allowed, and the best choice varies with the application.

We will now use the definition above to construct a polynomial representation of \(\cos x\).

Because the formula for the Taylor series given in the definition above contains \(f^{(n)}(x_0)\), we should build a list containing the values of \(f(x)\) and its first four derivatives at \(x = 0:\)

\[\begin{aligned} f(0) &= \cos 0 &&= 1\\ f'(0) &= -\sin 0 &&= 0\\ f''(0) &= -\cos 0 &&= -1\\ f'''(0) &= \sin 0 &&= 0\\ f^{(4)}(0) &= \cos 0 &&= 1. \end{aligned}\]
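This bookkeeping is mechanical, so it is easy to double-check with a short sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x)

# Print f and its first four derivatives evaluated at x = 0,
# reproducing the list above: 1, 0, -1, 0, 1.
for n in range(5):
    deriv = sp.diff(f, x, n)
    print(n, deriv, deriv.subs(x, 0))
```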

We begin assembling the Taylor series by writing \(f(x) = \) [the first number in our list] \(\cdot \frac{(x - x_0)^0}{0!}\) like so:

\[f(x) = 1\cdot \frac{(x - 0)^0}{0!} = 1.\]

So far, our constructed function \(f(x) = 1 \) looks nothing like \(f(x) = \cos x\). They merely have \(f(0) = 1\) in common, but we shall add more terms. We add the next term from our list above, this time multiplied by \(\frac{(x - x_0)^1}{1!}:\)

\[f(x) = 1\cdot \frac{(x - 0)^0}{0!} + \boxed{0\cdot \frac{(x - 0)^1}{1!}} = 1.\]

Notice the exponent on \((x - 0)\) and the argument inside the factorial are both 1 this time, rather than 0 as they were in the previous term. This is because the summation dictates that we increment \(n\) from 0 to 1. This process will continue by adding the next term from our list above, but again incrementing the power on \((x - 0)\) and the value inside the factorial:

\[f(x) = 1\cdot \frac{(x - 0)^0}{0!} + 0\cdot \frac{(x - 0)^1}{1!} + \boxed{(-1)\cdot \frac{(x - 0)^2}{2!}} = 1 - \frac{x^2}{2!}.\]

Let's stop and look at what we have so far. After three terms, our Taylor series has given us \(f(x) = 1 - \frac{x^2}{2!}\).

Interestingly enough, if we continue taking numbers from our list while appending incremented powers of \((x - 0)\) and incremented factorials, then our Taylor series slowly but surely conforms to the cosine curve:

\[f(x) = 1\cdot \frac{(x - 0)^0}{0!} + 0\cdot \frac{(x - 0)^1}{1!} + (-1)\cdot \frac{(x - 0)^2}{2!} + 0\cdot \frac{(x - 0)^3}{3!} + 1\cdot \frac{(x - 0)^4}{4!} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!}.\]
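The same term-by-term assembly is easy to script. This sketch (the variable names are my own) walks down the list of derivative values, divides each by the matching factorial, and prints the growing partial sum:

```python
import math

# cos and its derivatives at 0, cycling 1, 0, -1, 0, 1, ...
derivs_at_zero = [1, 0, -1, 0, 1]

coeffs = []  # coefficient of x**n is f^(n)(0) / n!
for n, d in enumerate(derivs_at_zero):
    coeffs.append(d / math.factorial(n))
    terms = [f"({c:g})x^{k}" for k, c in enumerate(coeffs) if c]
    print(f"n = {n}:", " + ".join(terms))
```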

At this point, we can guess at the emerging pattern. The powers on \(x\) are even, the factorials in the denominator are even, and the terms alternate signs. Note that more derivatives of the original function may be needed to discover a pattern, but only four derivatives were needed for this example. We encode this pattern into a summation, which finally yields our Taylor series for \(\cos x:\)

\[\cos x = \sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n)!}.\]
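Evaluating partial sums of this formula shows how quickly it homes in on the true cosine values (a minimal numeric sketch):

```python
import math

def cos_series(x, terms):
    """Partial sum of cos x = sum of (-1)^n x^(2n) / (2n)!."""
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

for t in (1, 2, 3, 5):
    print(t, cos_series(1.0, t))  # approaches math.cos(1.0) ≈ 0.540302
```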

In the animation below, each frame represents an additional term appended to the previous frame's Taylor series. As we add more terms, the Taylor series fits the cosine function it approximates more and more closely:
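A sketch along these lines (using matplotlib, with my own choice of range and term counts) reproduces the idea of those frames by overlaying successive partial sums on \(\cos x\):

```python
import math
import matplotlib.pyplot as plt

def cos_partial(x, terms):
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n)
               for n in range(terms))

xs = [i / 100 for i in range(-800, 801)]  # x in [-8, 8]
plt.plot(xs, [math.cos(x) for x in xs], 'k', label='cos x')
for terms in (1, 2, 3, 4, 6):  # each curve adds more terms of the series
    plt.plot(xs, [cos_partial(x, terms) for x in xs], label=f'{terms} term(s)')
plt.ylim(-2, 2)
plt.legend()
plt.show()
```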

Important note: Because this series expansion was centered at \(x = 0\), this is also known as a Maclaurin series. A Maclaurin series is simply a Taylor series centered at \(x = 0\).

So how does this work exactly? What is the intuition for this formula? Let's solidify our understanding of the Taylor series with a slightly more abstract demonstration. For the purposes of this next example, let \(T(x)\) represent the Taylor series expansion of \(f(x)\).

\[\begin{aligned} T(x) &= \sum_{n=0}^{\infty}f^{(n)}(x_0)\frac{(x - x_0)^n}{n!} \\ &= f(x_0) + f'(x_0)(x - x_0) + f''(x_0)\frac{(x - x_0)^2}{2!} + f'''(x_0)\frac{(x - x_0)^3}{3!} + \cdots. \end{aligned}\]

It is important to note that the value of this summation at \(x = x_0\) is simply \(f(x_0)\), because all terms after the first will contain a 0 in their product. This means the value of the power series agrees with the value of the function at \(x_0\) \(\big(\)or that \(T(x_0) = f(x_0)\big).\) Surely this is what we'd want from a series that purports to agree with the function! After all, if our claim is that the Taylor series \(T(x)\) equals the function \(f(x)\), then it should agree in value at \(x = x_0\). Granted, uncountably many other functions share the same value at \(x_0\), so this agreement is nothing special so far. Let's investigate by taking the derivative of the terms in the power series we have listed:

\[T'(x) = 0 + f'(x_0) + f''(x_0)(x-x_0) + f'''(x_0)\frac{(x-x_0)^2}{2!} + f^{(4)}(x_0)\frac{(x-x_0)^3}{3!}+ \cdots. \]

If we evaluate the differentiated summation at \(x = x_0\), then all terms after \(f'(x_0)\) vanish (again due to containing 0 in their product), leaving us with only \(f'(x_0)\). So, in addition to \(T(x_0) = f(x_0)\), we also have that \(T'(x_0) = f'(x_0)\), meaning the Taylor series and the function it represents agree in the value of their derivatives at \(x_0\). One can repeatedly differentiate \(T(x)\) and \(f(x)\) at \(x = x_0\) and find that this pattern continues. Indeed, the next derivative \(T''(x)\) takes on the value \(f''(x_0)\), the derivative after that \(T'''(x)\) takes on the value \(f'''(x_0),\) and so on, all at \(x = x_0\).
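This pattern can be checked mechanically. The sympy sketch below (the names and the sample function are my own) builds a degree-6 Taylor polynomial about \(x_0\) and confirms that its first several derivatives agree with those of \(f\) there:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) * sp.sin(x)      # any smooth sample function works here
x0, N = sp.Rational(1, 2), 6

# Degree-N Taylor polynomial T of f about x0
T = sum(sp.diff(f, x, n).subs(x, x0) * (x - x0) ** n / sp.factorial(n)
        for n in range(N + 1))

# T^(k)(x0) = f^(k)(x0) for k = 0, ..., N
for k in range(N + 1):
    diff_at_x0 = sp.diff(T, x, k).subs(x, x0) - sp.diff(f, x, k).subs(x, x0)
    assert sp.simplify(diff_at_x0) == 0
print("derivatives 0 through", N, "agree at x0")
```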

This is a promising result! If we can ensure that the \(n^\text\) derivative of \(T(x)\) agrees with the \(n^\text\) derivative of \(f(x)\) at \(x = x_0\) for all values of \(n\), then we can expect the behavior of the Taylor series and \(f(x)\) to be identical.

Now, there are rare, pathological exceptions to this conclusion: a function can be infinitely differentiable at \(x_0\) and still fail to equal its Taylor series anywhere but at \(x_0\) itself. Infinite differentiability is what lets us write the series down at all; functions that moreover equal their Taylor series near \(x_0\) are called analytic.
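A classic such pathology (a standard textbook example, not part of the walkthrough above) is

\[f(x) = \begin{cases} e^{-1/x^2}, & x \neq 0 \\ 0, & x = 0. \end{cases}\]

Every derivative of this function is \(0\) at \(x = 0\), so its Maclaurin series is identically zero, even though \(f(x) > 0\) for every \(x \neq 0\).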

Compute the first three non-zero terms of the Taylor series for \(f(x) = \sin x\) centered at \(x = 0.\)

\[x + \frac{x^3}{3!} + \frac{x^5}{5!}\] \[x - \frac{x^3}{3!} + \frac{x^5}{5!}\] \[x - \frac{x^3}{3} + \frac{x^5}{5}\] \[x + \frac{x^3}{3} + \frac{x^5}{5}\]