# A brief introduction to Taylor series

Today marks the day where I finally do it. The day that I explain the most powerful and wonderful mathematical formula birthed from calculus, what I feel is one of the most beautiful ideas the human mind has ever conceived. This beautiful idea is known as the Taylor series. The Taylor series is a **method of turning non-polynomial functions into polynomials.** When I say functions, I mean all of the nice kinds: trigonometric, exponential, logarithmic. Now, this all probably sounds too good to be true, and you may scratch your head in disbelief and ask yourself, “How could it be possible to do such a thing?” And I say to that: sometimes dreams can come true.

**0. A deep relation between derivatives and coefficients of a polynomial**

Consider a general polynomial p(x) of degree ‘m’, in the form

p(x) = a_0 + a_1 x + a_2 x² + a_3 x³ + … + a_m xᵐ

Now, the question I propose is: how do the coefficients of a polynomial relate to its derivatives? To start, we evaluate our polynomial at x = 0. Then all terms carrying a nonzero power of ‘x’ die off, leaving

p(0) = a_0

Suppose we take the derivative of both sides of our original polynomial:

p′(x) = a_1 + 2a_2 x + 3a_3 x² + … + m a_m xᵐ⁻¹

If we again put x = 0, we get a result similar to what happened in the original polynomial:

p′(0) = a_1

And we do that again, to find the third coefficient, a_2. Taking the derivative once more,

p″(x) = 2a_2 + 6a_3 x + 12a_4 x² + …

but this time it is slightly different, because we have a two downstairs:

a_2 = p″(0)/2

Right, but how would we get the coefficient of the ‘kth’ term? To motivate it, I introduce another induction-based result.

Let us consider

g(x) = xᵏ

Now, the first derivative of this is

g′(x) = k xᵏ⁻¹

And the second derivative of our original expression is

g″(x) = k(k−1) xᵏ⁻²

…

With some inductive thinking, we can say that the ‘jth’ derivative would be as follows:

g⁽ʲ⁾(x) = k(k−1)(k−2)…(k−j+1) xᵏ⁻ʲ

(note: the superscript (j) on the left means the jth derivative, not g raised to the jth power)

So, for the kth coefficient, we need the kth derivative; for that, just put j = k, and alas we get

g⁽ᵏ⁾(x) = k(k−1)(k−2)…(2)(1) = k!

Right, so bringing back our original polynomial p(x),

p(x) = a_0 + a_1 x + a_2 x² + … + a_k xᵏ + … + a_m xᵐ

Taking the kth derivative of both sides,

p⁽ᵏ⁾(x) = k! a_k + (terms from exponents greater than k, each still carrying a power of x)

So, the thing to notice here is that the terms which had exponent greater than ‘k’ still have an ‘x’ on them, so when we evaluate the kth derivative at x = 0, all of those terms vanish. Hence we get a formula relating the kth coefficient to the evaluation of the kth derivative of the polynomial at zero:

a_k = p⁽ᵏ⁾(0)/k!

Fantastic. So here we can see that, in this form, each coefficient of a monomial term is the kth derivative of the polynomial evaluated at zero, divided by k factorial. Now let’s rewrite the polynomial using this general formula for its coefficients:

p(x) = p(0) + p′(0) x + p″(0)/2! x² + … + p⁽ᵐ⁾(0)/m! xᵐ

So, our last term is xᵐ because our premise was that the polynomial has degree ‘m’.
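To see this relation concretely, here is a small Python sketch (the polynomial p(x) = 3 + 2x + 5x² − 4x³ and the helper `deriv` are my own hypothetical examples, not from the text above): we represent a polynomial by its coefficient list, differentiate it repeatedly, and check that the kth derivative at zero divided by k! recovers the kth coefficient.

```python
import math

def deriv(coeffs):
    """Differentiate once a polynomial given as [a0, a1, a2, ...]."""
    return [k * c for k, c in enumerate(coeffs)][1:]

# Hypothetical example: p(x) = 3 + 2x + 5x^2 - 4x^3
p = [3, 2, 5, -4]

for k, a_k in enumerate(p):
    d = p
    for _ in range(k):                 # take the kth derivative
        d = deriv(d)
    p_k_at_0 = d[0] if d else 0        # at x = 0 only the constant term survives
    assert a_k == p_k_at_0 / math.factorial(k)
print("a_k == p^(k)(0) / k! holds for every coefficient")
```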

**1. Motivating the Taylor Theorem.**

Consider the function

f(x) = sin x

and suppose I wanted to break this function into a polynomial, a sum of monomial terms. To start, we consider the graph of sin x as shown below.

Now, I propose that we can write this as a weighted sum of monomials:

sin x = a_0 + a_1 x + a_2 x² + a_3 x³ + …

So, now the question is how to figure out the weights, and how many terms would be needed in this polynomial formulation of our sin x curve. We could start by equating sin x to an ‘nth’-degree polynomial:

sin x ≈ a_0 + a_1 x + a_2 x² + … + a_n xⁿ

The problem for us is finding the “weights”, as in the coefficients of the polynomial… well, why don’t we fall back on the previous result and say that the **polynomial coefficients are determined by sin x and its derivatives?** As in, if sine really had a polynomial form, then we should be able to set up the same relation between its derivatives evaluated at zero and the coefficients of the polynomial.

So, the smart way to calculate the coefficients is to create a table of the derivatives of sin x evaluated at zero, as I have shown below:

| k | f⁽ᵏ⁾(x) | f⁽ᵏ⁾(0) | a_k = f⁽ᵏ⁾(0)/k! |
|---|---------|---------|-------------------|
| 0 | sin x   | 0       | 0                 |
| 1 | cos x   | 1       | 1                 |
| 2 | −sin x  | 0       | 0                 |
| 3 | −cos x  | −1      | −1/3!             |
| 4 | sin x   | 0       | 0                 |

Right, so we have found the coefficients up to the fourth term. Plugging them back into the polynomial expression we had developed initially,

sin x ≈ x − x³/3!

And, with some clever induction,

sin x = x − x³/3! + x⁵/5! − x⁷/7! + …

Now, if you really sat down and tried calculating this, you would see the series goes on forever. No one wants to calculate an infinite sum, and no one really has to, because the series converges to the right value in just a few terms! Let’s take a finite number of terms and see how the ‘fit’ of the polynomial curve improves as we take more and more of them. Below I have shown a gif of how the polynomial fits the sine curve better and better as we take more terms.
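If you’d like to watch the convergence numerically, here is a minimal Python sketch (the helper name `sin_taylor` is my own): it sums the first few terms of the series above and compares them against `math.sin`.

```python
import math

def sin_taylor(x, n_terms):
    """Partial Maclaurin sum for sin x: terms (-1)^n x^(2n+1) / (2n+1)!."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(n_terms))

x = 2.0
for n in (1, 2, 3, 5):
    print(f"{n} term(s): {sin_taylor(x, n):+.6f}")
print(f"math.sin(2) = {math.sin(x):+.6f}")
```

Already by five terms the partial sum is essentially indistinguishable from the true value.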

Now, I wish to discuss a common misconception: that the value of the series should be infinite because we take infinitely many terms. But one may notice that as we take higher and higher terms, slowly and steadily, the factorial downstairs starts growing faster than the power of x upstairs, so the terms xᵏ/k! shrink toward zero.

So what happens is that after some kth term, the sum of the series isn’t much affected whether you take the higher terms or not. For example, suppose x = 2. Then already at k = 4 the inequality k! > xᵏ becomes true (4! = 24 > 2⁴ = 16), so after the fourth term in our polynomial we can start neglecting stuff, because the terms become very small.
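A quick Python check of this claim for x = 2, comparing k! against xᵏ term by term:

```python
import math

x = 2  # the example value from the text
for k in range(8):
    fact, power = math.factorial(k), x ** k
    print(k, fact, power, "k! > x^k" if fact > power else "")
```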

What we have done here is something called the Maclaurin series, but what we wanted was the Taylor series. For the Taylor series, instead of having the higher-order polynomial terms go to zero at x = 0, we instead make them go to zero at a point x = a, so that our coefficients are modeled by the derivatives of our function at the point x = a.

That is, if we shift the polynomial such that all of the **nonconstant terms go to zero at a point x = a,**

p(x) = c_0 + c_1 (x−a) + c_2 (x−a)² + c_3 (x−a)³ + …

And, in a motivated spirit, put x = a:

p(a) = c_0

Skipping the formalities of algebra similar to what we have done before, we can show that the coefficients of this polynomial are

c_k = f⁽ᵏ⁾(a)/k!

Now, using this formula, we can write the Taylor series of sin x about x = a as

sin x = sin a + cos a (x−a) − sin a (x−a)²/2! − cos a (x−a)³/3! + …

Now, how does our polynomial change as we change ‘a’ relative to sin x? Well, in the original polynomial we put a = 0 (**the Maclaurin series**), and hence the polynomial fit the function best around x = 0. But suppose we choose some other ‘a’, say a = 1; then the polynomial fits the sin x curve best around x = 1. Below I have shown a gif of how that plays out.

The purple line is x = a; as it moves, the point about which we build the series changes. The red curve is the sine curve, over which the polynomial curve takes a wobbly ride.

And finally, in full generality, I present the formula for turning a general ‘nice’ function f(x) into a polynomial that approximates it well near x = a:

f(x) = f(a) + f′(a)(x−a) + f″(a)/2! (x−a)² + f‴(a)/3! (x−a)³ + …
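As a sanity check of the general formula, here is a small Python sketch (the helper `sin_taylor_at` is my own): it builds the Taylor polynomial of sin about an arbitrary point a, using the fact that the derivatives of sin cycle through sin, cos, −sin, −cos.

```python
import math

def sin_taylor_at(x, a, n_terms):
    """Taylor polynomial of sin about x = a; the kth derivative of sin
    cycles through sin, cos, -sin, -cos, so it is derivs[k % 4]."""
    derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    return sum(derivs[k % 4] / math.factorial(k) * (x - a) ** k
               for k in range(n_terms))

a = 1.0
print(sin_taylor_at(1.3, a, 8))   # series built around a = 1, evaluated at 1.3
print(math.sin(1.3))              # the library value, for comparison
```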

**3. Applications**

The most profound application of this is that we can easily derive the famous Euler’s formula using it. Doing some work, we find the Maclaurin series of some famous functions (all of these are **nice functions**, which give the correct output value for a given input if we take enough terms of the series):

eˣ = 1 + x + x²/2! + x³/3! + …

sin x = x − x³/3! + x⁵/5! − …

cos x = 1 − x²/2! + x⁴/4! − …

Now suppose we plug x = it, where i is the imaginary unit, into the series of the exponential and then do some sneaky ‘rearrangement’…

eⁱᵗ = 1 + it + (it)²/2! + (it)³/3! + (it)⁴/4! + …
    = (1 − t²/2! + t⁴/4! − …) + i(t − t³/3! + t⁵/5! − …)
    = cos t + i sin t

Ta-da! Oh, wait HOLY SHIT! DID WE JUST PROVE EULER’S FORMULA?!?!

Now, to get the identity they often reference in pop math, replace every ‘t’ you see with π:

eⁱᵖⁱ + 1 = 0
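We can even let the computer re-enact the derivation. Here is a small Python sketch (the helper `exp_series` is my own) that feeds z = iπ into a partial sum of the exponential series and watches −1 fall out:

```python
import cmath
import math

def exp_series(z, n_terms=30):
    """Partial Maclaurin sum of e^z; works for complex z as well."""
    total, term = 0j, 1 + 0j
    for n in range(n_terms):
        total += term
        term *= z / (n + 1)   # turns z^n/n! into z^(n+1)/(n+1)!
    return total

print(exp_series(1j * math.pi))   # the series evaluated at z = i*pi
print(cmath.exp(1j * math.pi))    # the library's answer, for comparison
```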

Ehm, further discussion would get ‘complex’, so we will avoid that. But… BONUS TIME!!!

**4. BONUS**

A cool application of the Taylor series is in how it allows us to explore the motion of objects around an equilibrium as we give them tiny disturbances. Consider a single-variable potential function V(x), and let us Taylor expand it around a point x = a, where ‘a’ is the position of equilibrium. Then,

V(x) = V(a) + V′(a)(x−a) + V″(a)/2! (x−a)² + V‴(a)/3! (x−a)³ + …

Now, the force on a particle at equilibrium is zero, hence V′(a) = 0, and we can shift our coordinate system so that the potential at the equilibrium is zero, i.e. V(a) = 0. Finally, we will assume that our disturbances are small enough that we can neglect the terms after (x−a)²:

V(x) ≈ V″(a)/2 (x−a)²

Then, the force near the equilibrium is given by

F = −dV/dx = −V″(a)(x−a)

So, now we have F = −k(x−a) with k = V″(a), and this is exactly the force equation for a shifted harmonic oscillator! Physically, this means that for ‘small perturbations’ near an equilibrium point x = a, the motion undergone is simple harmonic!
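A numerical sketch of this idea in Python (the potential V(x) = cosh(x−a) − 1 is a hypothetical example of mine, chosen because its equilibrium sits at x = a with V″(a) = 1): near the equilibrium, the exact force from V is nearly indistinguishable from the linear restoring force −k(x−a).

```python
import math

a = 2.0  # hypothetical equilibrium position

def V(x):
    # Example potential with V'(a) = 0 and V''(a) = 1 (an assumption of mine)
    return math.cosh(x - a) - 1.0

def force(x, h=1e-5):
    # F = -dV/dx, estimated with a central finite difference
    return -(V(x + h) - V(x - h)) / (2 * h)

k = 1.0  # V''(a) for this potential
for dx in (0.2, 0.1, 0.05):
    print(f"x-a={dx}: exact force {force(a + dx):+.6f}, "
          f"harmonic approx {-k * dx:+.6f}")
```

The smaller the displacement, the closer the two columns agree, which is precisely the ‘small perturbation’ regime.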

Now let’s try to interpret this mathematical idea using physics! If an object has some excess energy which prevents it from settling at the mean position x = a, this means that when the body crosses the mean position it has some velocity. Even though the force at equilibrium is zero, the particle comes in with velocity, reaches the next extreme position, and from there falls and oscillates back through our least-potential state, crossing to the other side again. Oho! Could you find the time period of such a motion? (note: this may be easier for someone familiar with physics)

So, we could even generalize this further, to see how the motion changes as we increase the perturbation to the system! What if we had motion where we didn’t neglect the (x−a)³ term... Now that’s something to ponder!

End.

Hope you enjoyed my article, and that you now share the same love for the Taylor series as I do.