Guessing Fourier Coefficients

A ‘nice enough’ periodic function can be expressed as an infinite sum of sine and cosine functions, known as its Fourier series.

Video: Approximating f(x) = x^2-x by calculating the first six terms in its Fourier series representation. (interactive plot)

In some cases, the coefficients a_n and b_n can be computed by hand using the Euler-Fourier formulas. When these methods are inconvenient, approximate coefficients can be found using a random search in the space of all possible coefficients.
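For reference, with the convention used in the sum below (where a_0 is the constant term), the Euler-Fourier formulas on [-L, L] are

a_0 = \frac{1}{2L}\int_{-L}^{L} f(x)\,dx, \qquad a_k = \frac{1}{L}\int_{-L}^{L} f(x)\cos\left(\frac{k\pi x}{L}\right)dx, \qquad b_k = \frac{1}{L}\int_{-L}^{L} f(x)\sin\left(\frac{k\pi x}{L}\right)dx \quad (k \geq 1).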

The Coefficient Arrays

The random search works like this: first, start with two arrays, A and B, of n arbitrary values each, where n is the number of trigonometric terms in the approximation. These arrays contain the cosine and sine coefficients, respectively.

The approximation corresponding to these arrays for a function periodic on the interval [-L, L] is f(x) \approx S(x) = \sum_{k=0}^{n-1}\left[a_k\cos\left(\frac{k\pi x}{L}\right) + b_k \sin\left(\frac{k\pi x}{L}\right)\right].
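As a minimal sketch (assuming NumPy; the names partial_sum, a, and b are illustrative, not the linked code), S can be evaluated like this:

import numpy as np

def partial_sum(x, a, b, L):
    # S(x) = sum_k a_k cos(k pi x / L) + b_k sin(k pi x / L)
    k = np.arange(len(a))                  # term indices 0, ..., n-1
    angles = np.outer(x, k) * np.pi / L    # shape (len(x), n)
    return np.cos(angles) @ a + np.sin(angles) @ b

n = 6                                      # number of trigonometric terms
a = np.random.uniform(-1, 1, n)            # arbitrary cosine coefficients (A)
b = np.random.uniform(-1, 1, n)            # arbitrary sine coefficients (B)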

Perturbing the Arrays

Next, randomly pick one of these 2n elements in A or B and modify it slightly by adding a random \varepsilon such that -0.5<\varepsilon<0.5.
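One perturbation step might be sketched as follows, continuing the illustrative arrays a and b from above:

def perturb(a, b):
    # Copy the arrays and nudge one randomly chosen entry by eps.
    a_new, b_new = a.copy(), b.copy()
    eps = np.random.uniform(-0.5, 0.5)     # -0.5 < eps < 0.5
    i = np.random.randint(2 * len(a))      # pick one of the 2n elements
    if i < len(a):
        a_new[i] += eps                    # element of A (cosine coefficients)
    else:
        b_new[i - len(a)] += eps           # element of B (sine coefficients)
    return a_new, b_new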

Quantifying Error

How do we know that our arrays give rise to a good approximation? In the continuous setting, the error can be described as the total (unsigned) area between the curves of f and S on the interval [-L,L], that is, \int_{-L}^{L}\left|f(x)-S(x)\right|\,dx. This can often be computed exactly with mathematical software packages.

The error between f(x) = x^2 - x and one of its trigonometric approximations is the total shaded area.

When these integrals are difficult to compute, it is often useful to use a discrete proxy for this area. Split up the interval [-L,L] with a mesh M = (t_0, t_1, t_2, \dots, t_m), where t_0 = -L, t_m = L, and t_i < t_j whenever i < j. The area is then usually smaller when

E = \sum_{i=0}^m \left(f(t_i) - S(t_i)\right)^2 is smaller.
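In code, this discrete error is a few lines (again a sketch, using a uniform mesh and the partial_sum helper from above; the linked code may differ):

def squared_error(f, a, b, L, m=200):
    # Sum of squared differences between f and S on a mesh of [-L, L].
    t = np.linspace(-L, L, m + 1)          # t_0 = -L, ..., t_m = L
    return np.sum((f(t) - partial_sum(t, a, b, L)) ** 2)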

Iterating

Altering the coefficient arrays will produce a different S and therefore a different value for E. The new error can be smaller than, equal to, or greater than the error of the arrays before they were altered.

If the error is reduced, the random search algorithm accepts the new coefficient array and discards the previous one.

If the error is not reduced, the program keeps the current coefficient array until it finds a perturbation that improves the error. This is the procedure shown in the animation at the top, and a sketch of the full loop appears below. Feel free to play with different settings and come up with pathological examples!
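Putting the pieces together, the accept-or-reject loop might look like this (reusing the illustrative helpers above; the iteration count and starting values are arbitrary choices, not the author's):

def random_search(f, L, n=6, iterations=10_000):
    a, b = np.zeros(n), np.zeros(n)        # arbitrary starting coefficients
    best = squared_error(f, a, b, L)
    for _ in range(iterations):
        a_new, b_new = perturb(a, b)
        err = squared_error(f, a_new, b_new, L)
        if err < best:                     # accept only improvements
            a, b, best = a_new, b_new, err
    return a, b

a, b = random_search(lambda x: x**2 - x, L=1)   # approximate f(x) = x^2 - x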

Link to the code (requires Python, NumPy, and matplotlib)

As a Map from \mathbb{R}^n \to \mathbb{R}

This problem of optimizing a function \mathbb{R}^n \to \mathbb{R} has been studied extensively. For example, training the neural networks popular today boils down to finding sets of numbers (the weights and biases of neurons) that minimize cost functions.

Computer scientists frequently tackle this problem with gradient descent techniques. Given a function E(\vec{x}), its gradient \nabla E(\vec{x}) returns the direction of greatest increase, and conveniently, -\nabla E(\vec{x}) returns the direction of fastest decrease.

The main idea: to find input values that minimize a function, start at some position \vec{x}_0. Then calculate \nabla E(\vec{x}_0) and define a better approximation \vec{x}_1 = \vec{x}_0 - \eta\,\nabla E(\vec{x}_0), pushing \vec{x}_0 a tiny bit (the step size \eta) in the opposite direction. Iterating this process should output a local minimum in the search space, if everything goes right.

The code for this method is provided in the link above, using a simple forward difference approximation for the gradient.
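The linked code isn't reproduced here, but a minimal forward-difference gradient descent sketch might look like the following, where the step sizes h and lr are illustrative choices:

def gradient_descent(E, x0, lr=1e-3, h=1e-6, steps=5_000):
    # Minimize E: R^n -> R by stepping opposite a forward-difference gradient.
    x = x0.astype(float)
    for _ in range(steps):
        Ex = E(x)
        grad = np.empty_like(x)
        for i in range(len(x)):
            xh = x.copy()
            xh[i] += h                     # nudge coordinate i
            grad[i] = (E(xh) - Ex) / h     # forward difference
        x -= lr * grad                     # move in the direction of fastest decrease
    return x

# For example, minimize the discrete error over the stacked coefficient vector:
E = lambda c: squared_error(lambda x: x**2 - x, c[:6], c[6:], L=1)
coeffs = gradient_descent(E, np.zeros(12))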
