Some Examples of the Method of Frobenius


The method of Frobenius is a powerful tool for studying differential equations. Many of the “special” functions of applied mathematics, such as the Bessel and Legendre functions, are defined as solutions of a differential equation and then studied using it. The method works by assuming that the solution is an infinite power series, which reduces the problem to algebra or to solving a recurrence relation.

The method is less well known because solving the resulting algebraic equation or recurrence relation is usually very tedious. However, modern computer algebra software makes this much easier.
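As an illustration of how a computer algebra system can take over the tedious part, here is a short SymPy sketch that solves a recurrence of the kind this method produces. The recurrence \((n+1)\,a(n+1) = a(n)\) is a hypothetical stand-in, not one taken from the examples below.

```python
# Solving a recurrence relation with SymPy instead of by hand.
# The recurrence (n + 1) * a(n + 1) = a(n) is a hypothetical example of
# the kind the method of Frobenius produces.
from sympy import Function, factorial, rsolve, symbols

n = symbols('n', integer=True, nonnegative=True)
a = Function('a')

# Solve (n + 1) * a(n + 1) - a(n) = 0 with the initial condition a(0) = 1.
closed_form = rsolve((n + 1) * a(n + 1) - a(n), a(n), {a(0): 1})
print(closed_form)
```

For this stand-in the closed form is \(a(n) = 1/n!\), which `rsolve` finds automatically.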


As a first step, assume that the function we wish to solve for, say \(f\), has the form:

\[ f(x) = \sum_{i=0}^{\infty} a(i)\, x^i \]

Here, \(a(i)\) is the \(i\)th coefficient of the power series. We must define \(a(i)\) for each \(i\) so that \(f\) solves the equation.

To do this, replace \(f\) in the equation with the sum. In many cases, we can then rearrange the equation to solve for the values of \(a(i)\).

A Simple Differential Equation

Consider this basic differential equation:

Replacing \(y(x)\) with \(\sum _{i=0}^{\infty} a(i)\, x^i\) gives us this equation:

Combine all the summations into one. This requires playing with the index of summation.
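To see what this index shifting looks like concretely, suppose the equation were the hypothetical \(y'(x) - y(x) = 0\) (a stand-in, since the point is the manipulation rather than the particular equation). Substituting the series and shifting the index of the derivative term so that both sums run over \(x^i\) gives:

\[ \sum_{i=1}^{\infty} i\, a(i)\, x^{i-1} - \sum_{i=0}^{\infty} a(i)\, x^i = \sum_{i=0}^{\infty} \bigl[ (i+1)\, a(i+1) - a(i) \bigr] x^i = 0 \]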

Because the power series is equal to zero for every \(x\), each of its coefficients must be equal to zero. So we have for each \(i\):

Giving us the recurrence relation:

This recurrence relation is solvable, and the formula for the \(i\)th coefficient is:

Finally, we plug this back into the infinite summation:

If we evaluate the infinite sum, we find the solution to the differential equation:
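As a concrete end-to-end check, again using the hypothetical equation \(y' = y\) as a stand-in: its recurrence \(a(i+1) = a(i)/(i+1)\) has the solution \(a(i) = a(0)/i!\), and SymPy can evaluate the resulting infinite sum.

```python
# Summing the power series produced by the hypothetical recurrence
# a(i+1) = a(i)/(i+1), whose closed form is a(i) = a0 / i!.
from sympy import exp, factorial, oo, simplify, summation, symbols

x, a0 = symbols('x a0')
i = symbols('i', integer=True, nonnegative=True)

series_sum = summation((a0 / factorial(i)) * x**i, (i, 0, oo))
print(series_sum)

# The series sums to a0 * e^x, the general solution of y' = y.
assert simplify(series_sum - a0 * exp(x)) == 0
```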

A Slightly Harder Differential Equation

To solve this, it is important to write \(\cos(x)\) as a Taylor series and do some algebra with it.

To combine the summations, they must all run over \(x^n\) instead of \(x^{n-1}\) and \(x^{2n}\).
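The pieces involved are the Taylor series \(\cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}\, x^{2n}\) and two generic reindexing moves (shown here in isolation, since they apply regardless of the particular equation): a sum over \(x^{n-1}\) is shifted, and a sum over \(x^{2n}\) is spread over all powers with zero coefficients at the odd ones:

\[ \sum_{n=1}^{\infty} a(n)\, x^{n-1} = \sum_{n=0}^{\infty} a(n+1)\, x^{n}, \qquad \sum_{n=0}^{\infty} b(n)\, x^{2n} = \sum_{m=0}^{\infty} c(m)\, x^{m}, \quad c(m) = \begin{cases} b(m/2) & m \text{ even} \\ 0 & m \text{ odd} \end{cases} \]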

All the coefficients must equal zero for all \(n\).

Solve for \(a(n+1)\) to get a recurrence relation.

The recurrence relation has a closed form.

Putting this into the infinite sum, we can get a closed form as well.

A Functional Equation

This method also works on some functional equations like the one below.

Replace \(f\) with an infinite sum and replace \(\sin\) with its Taylor series. Then combine them all.

The coefficients must be equal to zero as always.

To simplify things, substitute \(n = 2k+1\) and simplify:

This equation doesn’t simplify like the others, but the formula may still provide a useful numerical approximation.
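To show how a coefficient formula like this can drive a numerical approximation, here is a sketch for a hypothetical functional equation of the same flavor, \(f(x) = f(x/2) + x\) (chosen purely for illustration; it is not the equation above). Matching the coefficient of \(x^n\) on both sides gives \(a(n)\,(1 - 2^{-n}) = [n = 1]\), which we can evaluate numerically and check against the equation:

```python
# Numerically approximating the solution of the hypothetical functional
# equation f(x) = f(x/2) + x by matching power-series coefficients.
N = 12  # truncation order of the power series

# Matching the coefficient of x^n on both sides gives
#   a(n) = a(n) / 2**n + (1 if n == 1 else 0),
# so for n >= 1:  a(n) * (1 - 2**(-n)) = (1 if n == 1 else 0).
a = [0.0] * N  # a[0] is unconstrained by the equation; pick 0
for n in range(1, N):
    rhs = 1.0 if n == 1 else 0.0
    a[n] = rhs / (1.0 - 2.0 ** -n)

def f(x):
    """Truncated power-series approximation of the solution."""
    return sum(c * x**k for k, c in enumerate(a))

# The residual of the functional equation should be numerically zero.
print(f(0.7) - (f(0.35) + 0.7))
```

For this particular stand-in the series terminates (\(f(x) = 2x\) solves it exactly), but the same coefficient-by-coefficient evaluation works when, as above, no closed form appears.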

Why Does This Work?

Short Answer: \(x^i\) is an eigenfunction (or nearly one) of the operators used in the equations in this article.

In the differential equations, we had the differentiation operator, which we’ll call \(D\): \(D(y(x)) = \frac{d y(x)}{d x}\). In the functional equations we had a “halving” operator, which we’ll call \(H\): \(H(y(x)) = y(x/2)\). Rewriting the equations using these operators, we get:

Both of these are linear operators. One of the key takeaways from linear algebra is that linear things are good because they are easy to work with. Linear operators are just like linear functions except that they operate on functions instead of vectors or numbers. Linear operators have eigenfunctions, which are defined in the same way as eigenvectors are defined for matrices. An operator \(O\) has an eigenfunction \(f\) with eigenvalue \(a\) if applying the operator to the function returns the same function multiplied by \(a\):

\[ O(f) = a\, f \]

Our operator \(H\) has \(x^i\) as an eigenfunction for every value of \(i\), since \(H(x^i) = (x/2)^i = 2^{-i}\, x^i\), with eigenvalue \(2^{-i}\). The operator \(D\) comes close: \(D(x^i) = i\, x^{i-1}\) is a multiple of the neighboring monomial, which is why differentiation links adjacent coefficients into a recurrence.

This fact is what makes all the previous examples work. If we can represent the answer as a summation of \(x^i\) terms, then we can eliminate the \(x\) values as we did in the examples and convert the differential or functional equation into an algebra problem.

Thanks to the fact that \(D\) and \(H\) are linear, we can nest and compose them together and \(x^i\) will still be an eigenfunction.
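A quick SymPy check of how \(D\) and \(H\) act on monomials (the lambda definitions are just illustrative shorthand for the operators in the text):

```python
# Checking how the operators D (differentiation) and H (halving) act on x**i.
from sympy import diff, symbols

x = symbols('x', positive=True)
i = symbols('i', integer=True, positive=True)

D = lambda g: diff(g, x)        # D(y)(x) = y'(x)
H = lambda g: g.subs(x, x / 2)  # H(y)(x) = y(x/2)

# H has x**i as a true eigenfunction: H(x**i) = (x/2)**i = 2**(-i) * x**i.
print(H(x**i))

# D shifts the monomial instead: D(x**i) = i * x**(i-1), and composing the
# two linear operators still acts monomial-by-monomial.
print(D(x**i))
print(H(D(x**i)))
```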

This is why decomposing a function into a sum with \(x^i\) is useful, but are there any other useful ways to decompose functions?