# What’s an intuitive way to understand Taylor and Maclaurin series?

## What's the point?

By adding together enough polynomial terms you can mimic just about any function you'll meet in practice. If this weren't true, most of grade-school mathematics would be useless, because you'll never encounter e.g. a pure sine function in real life.

But since we can approximate realistic functions with what's essentially just multiplication and addition (e.g. $\mathrm{series}_1(x) = 0.3x + 0.9x^2 - 0.01x^3 + \ldots$), we can use simple mathematics on the real world.

## INTUITION

I think of the Taylor terms like this:

• $x^0$ — up/down location
• $x^1$ — tilt
• $x^2$ — curve
• $x^3$ — wiggle
• $x^4$ — warp
• $x^5$ — sfleegn
• (…from here on out you just have to make up words, I think…)

As I enlarge or shrink the constants on each of these terms I'm adding more tilt, more curve, less wiggle, etc.
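To make the enlarging-and-shrinking concrete, here's a minimal sketch (the coefficient values are made up for illustration) showing that the $x^2$ "curve" term does nothing at $x=0$ but bends the line more and more as you move away from it:

```python
# A tiny sketch: evaluate a polynomial term by term and watch how
# each coefficient (these particular values are invented) moves the curve.

def poly(x, coeffs):
    """Evaluate c0 + c1*x + c2*x^2 + ... at x."""
    return sum(c * x**n for n, c in enumerate(coeffs))

base   = [1.0, 0.5, 0.0, 0.0]  # location 1, tilt 0.5, no curve, no wiggle
curved = [1.0, 0.5, 0.3, 0.0]  # same line, plus some curve

for x in (-2, -1, 0, 1, 2):
    print(x, poly(x, base), poly(x, curved))

# At x = 0 the two agree exactly; away from 0 the x^2 term bends the
# straight line upward on both sides.
```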

## PLAY

It's a good idea to play around with this on Wolfram|Alpha: just adjust the constants and see what the curve does.

(You have to divide the constant on $x^n$ by $n!$, for reasons that become clear if you take sixty derivatives of $x^{100}$. Do NOT combine the constants, i.e. do NOT multiply 100 × 99 × 98 × 97 out to 94109400 … rather just leave it written as 100 × 99 × 98 × 97 … you'll see the pattern … or leave a comment if not.)
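Here's a quick sketch of where that pattern comes from. Each derivative of $x^n$ pulls the current power down as a factor, so after $k$ derivatives you're left with $n(n-1)\cdots(n-k+1)$ in front; after $n$ derivatives that's exactly $n!$, which is why each Taylor coefficient gets divided by $n!$:

```python
from math import factorial

def kth_derivative_coeff(n, k):
    """Constant left in front after differentiating x**n exactly k times:
    n * (n-1) * ... * (n-k+1), a 'falling factorial'."""
    c = 1
    for i in range(k):
        c *= n - i
    return c

# Four derivatives of x**100 leave 100 * 99 * 98 * 97 (times x**96):
print(kth_derivative_coeff(100, 4))  # 94109400

# Differentiating x**n exactly n times leaves n! -- the factor the
# 1/n! in each Taylor term is there to cancel.
print(kth_derivative_coeff(5, 5), factorial(5))
```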

## local approximation

A Maclaurin series is just a Taylor series centred at x=0. What does that mean? Well, we're talking about approximating things, i.e. getting better and better as we add more terms.

(the colours represent adding successively more terms)

You can see from the picture that the approximation is better in some places than others. We can extend the range in which the series approximation is good by adding more terms to the series.

But in the last picture, for example, the approximation still isn't very good around x=8, even though we've already improved the estimate by adding terms to the polynomial.

We could take the same range and build the approximation around x=8 instead; then it would converge faster around x=8, although it would take more terms to work well near x=0.

With infinitely many terms (i.e., in theory) this isn't a problem, but in practice you want to know how many terms will give you how good an approximation, and where. And this depends on the centre point, where everything is the most accurate (accuracy drops off away from that centre point, which for Maclaurin is x=0).
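You can see the effect of the centre numerically. As a sketch (using $\sin$, whose derivatives cycle through $\sin, \cos, -\sin, -\cos$), compare an eight-term expansion centred at 0 against one centred at 8, both evaluated near $x=8$:

```python
from math import sin, cos, factorial

def sin_taylor(x, centre, n_terms):
    """Partial Taylor series for sin, expanded around `centre`.
    The derivatives of sin cycle with period 4: sin, cos, -sin, -cos."""
    derivs = [sin(centre), cos(centre), -sin(centre), -cos(centre)]
    return sum(derivs[k % 4] * (x - centre)**k / factorial(k)
               for k in range(n_terms))

# Same number of terms, wildly different accuracy near x = 8:
print(abs(sin(8.5) - sin_taylor(8.5, centre=0, n_terms=8)))  # huge error
print(abs(sin(8.5) - sin_taylor(8.5, centre=8, n_terms=8)))  # tiny error
```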

## TAYLOR POLYNOMIALS

Taylor gives you an "easy" way (if you know derivatives) to figure out how to approximate some function. If you've measured velocity f′, acceleration f″, jerk f‴, then you already know what constants to put by your x, x², x³ — and as a bonus Taylor will also tell you how good this approximation is.
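As a sketch of that recipe (the kinematic numbers here are invented): the measured derivatives become the coefficients, each divided by its factorial, exactly as Taylor prescribes:

```python
from math import factorial

def position(f0, v, a, j, t):
    """Third-order Taylor polynomial for position at time t, built from
    measurements at t = 0: position f0, velocity v (= f'),
    acceleration a (= f''), and jerk j (= f''')."""
    derivs = [f0, v, a, j]
    return sum(d * t**k / factorial(k) for k, d in enumerate(derivs))

# e.g. start at 0 m, moving at 3 m/s, accelerating at 2 m/s^2, no jerk:
print(position(0.0, 3.0, 2.0, 0.0, 2.0))  # 3*2 + 2*(2**2)/2 = 10.0
```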

This sort of justifies doing calculus (in addition to find-the-argmax problems) because "With calculus, we can usefully and practically approximate real-world functions. We'll know what constants to put on and we'll have an idea of how far we are off."

## series can be other than polynomials

Now the next step is that not only are polynomials sufficient to cover any function, but so are sinusoids; that's the idea behind Fourier series. Sums of sinusoids are better at approximating things that fluctuate around the same value, whereas polynomials are better at approximating things that don't.
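The classic illustration is building a square wave out of sines; a sketch of the standard partial Fourier sum, $\frac{4}{\pi}\sum_k \frac{\sin((2k+1)x)}{2k+1}$:

```python
from math import sin, pi

def square_wave_approx(x, n_terms):
    """Partial Fourier sum for a square wave jumping between -1 and +1:
    (4/pi) * sum over k of sin((2k+1)*x) / (2k+1)."""
    return (4 / pi) * sum(sin((2 * k + 1) * x) / (2 * k + 1)
                          for k in range(n_terms))

# More sinusoids -> values closer to the flat top of the square wave (+1):
for n in (1, 5, 50):
    print(n, square_wave_approx(pi / 2, n))
```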

## a tool to play with series

I'm actually working on a visualisation tool for exploring this space. I can post it here when I've finished if anyone's interested.
