Fibonacci numbers five ways
The Fibonacci sequence is a list of numbers where each number is the sum of the two before it. It begins 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, \ldots
Writing a function which calculates the nth Fibonacci number f_n seems like a very straightforward programming exercise. However, writing this function in the most straightforward way possible leads to a nasty surprise: exponential running time. In this post I give several other methods of calculating f_n, each very different from the others, and comment on their running time^{1}, correctness, and other relevant background about the solution.
Fibonacci numbers grow very fast: f_{48} overflows a 32-bit unsigned integer, and f_{94} overflows a 64-bit unsigned integer. If you are implementing this yourself and want exact answers, stick to programming languages like Python or Haskell which use true integers by default, or use a big-integer library of your choice.
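As a quick sanity check on those cutoffs, a few lines of Python (whose integers are arbitrary-precision, so nothing here overflows) confirm exactly where the 32-bit and 64-bit boundaries fall. The helper name fib is my own:

```python
def fib(n):
    # Iteratively compute the nth Fibonacci number exactly.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# f_47 still fits in an unsigned 32-bit integer; f_48 does not.
assert fib(47) < 2**32 <= fib(48)
# f_93 still fits in an unsigned 64-bit integer; f_94 does not.
assert fib(93) < 2**64 <= fib(94)
```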
1: Naive recursion (Python)
```python
def fibonacci(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)
```
Running time: O(\varphi^n) where \varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618 is the golden ratio.
This solution is obviously correct, and is more or less a direct translation of the definition of the Fibonacci sequence given above.
However, it has exponential running time, so it is far too slow to be useful except in very small cases, say n \leq 40 or so.
In fact, it is not too hard to see that the running time of fibonacci(n) is going to be roughly proportional to f_n itself: if fibonacci(n-1) takes time f_{n-1} and fibonacci(n-2) takes time f_{n-2}, then fibonacci(n), which calls both of those, will take time approximately f_{n-1} + f_{n-2}, which is f_n.
The easiest way to fix this problem is to save each value of f_n computed and not recompute it each time: in Python this is as easy as importing functools and then adding @functools.cache just before the function definition.
Applying this fix gives an O(n) running time, while also using O(n) space (which we were using anyway by recursing).
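Concretely, the memoised version is the naive function with a single decorator added (functools.cache requires Python 3.9+; on older versions, functools.lru_cache(maxsize=None) does the same job):

```python
import functools

@functools.cache
def fibonacci(n):
    # Identical to the naive version, but every computed value is
    # cached, so each f_k is calculated only once: O(n) time and space.
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fibonacci(n - 1) + fibonacci(n - 2)
```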
It is not obvious that the Fibonacci sequence grows exponentially, f_n \sim \varphi^n, with base \varphi the golden ratio. We can half-verify this in a very handwavy way: if we assume that f_{n-1} \approx \varphi^{n-1} and f_{n-2} \approx \varphi^{n-2}, then we have that f_n \approx \varphi^n(\varphi^{-1} + \varphi^{-2}), and it is easily checked (using a calculator, or algebraically) that \varphi^{-1} + \varphi^{-2} = 1. My favourite actual proof of this is that f_n can be obtained by raising a diagonalisable matrix to the nth power, and therefore grows like the eigenvalues raised to the nth power. I go further into this in another solution.
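The algebraic check of the identity \varphi^{-1} + \varphi^{-2} = 1 is a one-liner from the defining equation of the golden ratio:

```latex
% \varphi is a root of x^2 - x - 1 = 0, so:
\varphi^2 = \varphi + 1
\quad\Longrightarrow\quad
1 = \varphi^{-1} + \varphi^{-2}
\quad \text{(dividing both sides by } \varphi^2\text{)}.
```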
It’s hard to overstate how bad exponential running time is for those who haven’t seen it before. Even when written in C, this algorithm already takes around 110 seconds on my computer to calculate f_{50}. Calculating f_{60} will take \varphi^{10} \approx 123 times longer, almost four hours. Attempting to calculate f_{130} would take my computer approximately 14 billion years, the estimated age of the universe. (Every other solution on this page calculates f_{130} instantly.)
2: Straightforward iteration (Python)
```python
def fibonacci(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```
Running time: O(n).
This solution starts out with the base case (a, b) = (f_0, f_1) = (0, 1), and then repeatedly applies a recurrence rule (a, b) \mapsto (b, a + b) in order to “drive” the pair forward, producing (f_1, f_2) after the first loop iteration, (f_2, f_3) after the second, and so on, until after the nth iteration we have (f_n, f_{n+1}). This is an example of a general pattern in recurrence relations: often holding on to as many terms as are mentioned in the recurrence is a good idea. Since the Fibonacci recurrence depends on the two previous consecutive terms, we should always be holding two consecutive terms throughout the algorithm.
The fix suggested to the previous solution was to store all of the calculated values of f_n so that they don’t need to be recalculated each time, but this leads to some wasted space: for example when calculating f_{10} we only need to look at f_{9} and f_{8}, and we have no use for f_0, \ldots, f_7 anymore. This solution can be viewed as a straightforward optimisation of that method, where we store only the previous values we actually need.
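As a side note, the same driving step adapts naturally into a generator that yields the whole sequence of consecutive pairs lazily. This is a sketch of my own (fib_pairs is a hypothetical helper, not one of the numbered solutions):

```python
import itertools

def fib_pairs():
    # Repeatedly apply (a, b) -> (b, a + b), yielding each
    # consecutive pair (f_n, f_{n+1}) along the way.
    a, b = 0, 1
    while True:
        yield a, b
        a, b = b, a + b

# The first five pairs: (0, 1), (1, 1), (1, 2), (2, 3), (3, 5).
print(list(itertools.islice(fib_pairs(), 5)))
```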
3: An inexact exact solution (C)
```c
#include <math.h>

long fibonacci(long n) {
    double phi = (1.0 + sqrt(5.0)) / 2.0;
    /* Round to the nearest integer rather than truncating: the ignored
       psi^n term is negative for odd n, so flooring would be off by one. */
    return (long) round(pow(phi, (double) n) / sqrt(5.0));
}
```
Running time: O(1). Returns exact results up to n = 70, but is off by one for f_{71} = 308,061,521,170,129.
This solution, which looks almost like cheating, uses Binet’s formula for the value of f_n:

f_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}.
Here \varphi \approx 1.618 is the golden ratio, and \psi \approx -0.618 is its “conjugate” (\varphi and \psi are the two roots of the equation x^2 - x - 1 = 0). Binet’s formula can be derived in a pleasant (but long) exercise with generating functions, or by diagonalising a matrix, or proved by induction if you like knowing things that are true but not knowing why they are.
The function fibonacci(n) above uses only the first term in Binet’s formula and ignores the second completely.
This works because \lvert \psi \rvert^n \leq 1, and so \lvert \psi \rvert^n / \sqrt{5} \leq 1/\sqrt{5} \approx 0.447 for all n \geq 0, and so rounding the first term will always give the correct integer answer.
In fact, the second term decays exponentially and so only taking the first term gives answers that are closer and closer to integers as n gets higher: already by n = 10 we have \varphi^{10} / \sqrt{5} \approx 55.004, very close to the correct answer f_{10} = 55.
This can be used to approximate f_n on your pocket scientific calculator.
Unfortunately 64-bit floating point numbers only go so far, and this solution stops giving the right answer after n = 70. A method of implementing Binet’s formula exactly is given below.
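You can find the breaking point experimentally by comparing the floating-point formula against the exact iterative method. This is a sketch of my own (the names fib_float and fib_exact are not from the solutions above); the post reports n = 71 as the first failure with C doubles, and CPython's doubles typically behave similarly, though the exact cutoff can vary by platform:

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def fib_float(n):
    # First term of Binet's formula, rounded to the nearest integer.
    return round(PHI ** n / math.sqrt(5))

def fib_exact(n):
    # Exact iterative computation for comparison.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Find the first n where the floating-point formula goes wrong.
n = 0
while fib_float(n) == fib_exact(n):
    n += 1
print(n)
```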
4: Matrix exponentiation (Julia)
```julia
fibonacci(n) = ([0 1; 1 1]^n * [0; 1])[1]
```
Running time: O(\log n).
This is by far the shortest and most concise^{2} solution here, calculating f_n as the first element of the matrix-vector product F^n (0, 1), where F is a specific 2 \times 2 matrix. Almost identical code would apply to MATLAB, Octave, or any other programming environment with matrices available by default. Furthermore, this solution runs in O(\log n) arithmetic operations due to the fact that F^n can be calculated using exponentiation by squaring.
This solution is conceptually identical to the iterative solution above. We have already seen there that the map (a, b) \mapsto (b, a + b) has the effect of driving a pair (f_n, f_{n+1}) of consecutive Fibonacci numbers forward to the next pair (f_{n+1}, f_{n+2}); the iterative solution applies this function n times to the starting point (f_0, f_1) = (0, 1) to arrive at (f_n, f_{n+1}) and then returns f_n. The key insight here is that the map (a, b) \mapsto (b, a + b) is linear, meaning it can be written as a matrix:

\begin{pmatrix} b \\ a + b \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix}, \qquad \text{so } F = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix}.
This solution also gives a lot of insight into the Fibonacci sequence itself. The matrix F satisfies the equation F^2 - F - I = 0, the matrix version of the same equation x^2 - x - 1 = 0 which has roots \varphi and \psi. Consequently, F can be diagonalised with \varphi and \psi as eigenvalues, showing that F^n is of the form

F^n = P \begin{pmatrix} \varphi^n & 0 \\ 0 & \psi^n \end{pmatrix} P^{-1}

for some fixed invertible matrix P, and therefore that the entries of F^n (and with them f_n) grow like \varphi^n.
5: Field extensions (Haskell)
```haskell
import Data.Ratio

-- (Q5 a b) represents a + b√5, with a, b rational.
data Q5 = Q5 Rational Rational
  deriving (Eq, Show)

-- Ring operations on the quadratic field.
zero = Q5 0 0
one = Q5 1 0
add (Q5 a b) (Q5 c d) = Q5 (a + c) (b + d)
sub (Q5 a b) (Q5 c d) = Q5 (a - c) (b - d)
mul (Q5 a b) (Q5 c d) = Q5 (a*c + 5*b*d) (a*d + b*c)

-- Exponentiation by squaring.
pow :: Q5 -> Integer -> Q5
pow base exp
  | exp == 0 = one
  | exp `mod` 2 == 0 = half `mul` half
  | otherwise = (half `mul` half) `mul` base
  where half = pow base (exp `div` 2)

-- nth Fibonacci number.
fibonacci :: Integer -> Integer
fibonacci n = numerator b
  where
    phi = Q5 (1 % 2) (1 % 2)
    psi = Q5 (1 % 2) ((-1) % 2)
    (Q5 a b) = (pow phi n) `sub` (pow psi n)
```
Running time: O(\log n).
This method is an exact implementation of Binet’s formula f_n = (\varphi^n  \psi^n)/\sqrt{5}, which was mentioned previously in the “inexact exact” solution above. Here we have implemented arithmetic in the quadratic field \mathbb{Q}[\sqrt{5}], the set of numbers we get by “mixing in” the square root of 5 to all rational numbers. In this field we can express both \varphi and \psi without any loss of precision, and hence use Binet’s formula to get an exact result.
Numbers of the form a + b \sqrt{5} with a and b rational are closed under addition, subtraction, and multiplication (and even division, though we don’t need this here). For example, we can see that the product of two such numbers is

(a + b\sqrt{5})(c + d\sqrt{5}) = (ac + 5bd) + (ad + bc)\sqrt{5},

which is exactly what mul implements above. Each such number is stored as a pair of Rationals (fractions built with the % operator), and so to calculate f_n = (\varphi^n - \psi^n)/\sqrt{5}, we just need to take the second element in the pair for \varphi^n - \psi^n = 0 + f_n \sqrt{5}.
The powers \varphi^n and \psi^n are calculated in O(\log n) arithmetic operations using exponentiation by squaring, done by the pow function above. The pow function itself is fairly simple: it knows that x^0 = 1 for the base case, and then it rewrites x^{2n} = x^n \cdot x^n or x^{2n + 1} = x^n \cdot x^n \cdot x, where the x^n is calculated once and then reused in each case.
The fibonacci function then just does the computation \varphi^n - \psi^n and extracts the relevant part of the answer: the numerator of the fraction in the \sqrt{5} part of the number.
Afterword
It’s worth pointing out that the last two solutions, which can be made exact and run in O(\log n) time, are as good as the first solution is bad. If you want to calculate the last, say, 100 decimal digits of f_n for n absolutely enormous, either of the last two methods will serve you well. The matrix method is probably the easiest, since it does not involve any division or require any fractions. A straightforward unoptimised Python implementation, using only exponentiation by squaring of 2 \times 2 matrices modulo 10^{100}, can find the last 100 digits of f_n for n = 10^{10000} instantly.
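That claim is easy to try out. Here is a minimal sketch of my own (not the post's original code) of exponentiation by squaring of 2 \times 2 matrices modulo 10^{100}:

```python
MOD = 10 ** 100

def mat_mul(A, B, mod=MOD):
    # Multiply two 2x2 matrices, reducing every entry modulo `mod`.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return (((a * e + b * g) % mod, (a * f + b * h) % mod),
            ((c * e + d * g) % mod, (c * f + d * h) % mod))

def mat_pow(M, n, mod=MOD):
    # Exponentiation by squaring: O(log n) matrix multiplications.
    result = ((1, 0), (0, 1))  # identity matrix
    while n > 0:
        if n % 2 == 1:
            result = mat_mul(result, M, mod)
        M = mat_mul(M, M, mod)
        n //= 2
    return result

def fib_mod(n, mod=MOD):
    # f_n is the top-right entry of [[0, 1], [1, 1]]^n.
    return mat_pow(((0, 1), (1, 1)), n, mod)[0][1]

print(fib_mod(10 ** 10000))  # last 100 digits of an absurdly large f_n
```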
Exponentiation by squaring is a powerful technique which can be used in any monoid, roughly meaning a datatype with an associative “multiplication” and an identity element for that multiplication. In this post we’ve seen two^{3} such monoids: 2 \times 2 matrices with matrix multiplication, and \mathbb{Q}[\sqrt{5}] with its own multiplication. It is worth looking around for monoids when solving a problem, since they can be used in many practical ways — they are also incredibly useful when trying to parallelise algorithms or distribute computation.
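To illustrate, a generic squaring routine needs nothing beyond the monoid's multiplication and identity, and then works unchanged for both of the monoids in this post. This is my own sketch (monoid_pow and the pair-of-Fractions encoding of a + b\sqrt{5} are my names, not the post's):

```python
from fractions import Fraction

def monoid_pow(x, n, mul, identity):
    # Exponentiation by squaring in an arbitrary monoid: correctness
    # needs only that `mul` is associative with unit `identity`.
    result = identity
    while n > 0:
        if n % 2 == 1:
            result = mul(result, x)
        x = mul(x, x)
        n //= 2
    return result

# Monoid 1: 2x2 integer matrices under matrix multiplication.
def mat_mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

F = ((0, 1), (1, 1))
I2 = ((1, 0), (0, 1))
print(monoid_pow(F, 10, mat_mul, I2)[0][1])  # f_10 = 55

# Monoid 2: pairs (a, b) representing a + b*sqrt(5), under the
# multiplication rule (a, b)(c, d) = (ac + 5bd, ad + bc).
def q5_mul(x, y):
    (a, b), (c, d) = x, y
    return (a*c + 5*b*d, a*d + b*c)

one = (Fraction(1), Fraction(0))
phi = (Fraction(1, 2), Fraction(1, 2))
psi = (Fraction(1, 2), Fraction(-1, 2))
_, b10 = monoid_pow(phi, 10, q5_mul, one)
_, d10 = monoid_pow(psi, 10, q5_mul, one)
print(b10 - d10)  # sqrt(5) coefficient of phi^10 - psi^10, i.e. f_10 = 55
```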
1. I am counting addition and multiplication as taking constant time for the purposes of this analysis. This could be true if you are calculating f_n modulo some fixed big number, for example. If calculating f_n using true integers, you can use the fact that the number of digits in f_n is proportional to n to modify the analysis. ↩

2. Since Julia defaults to 64-bit integers when it sees an integer matrix, we had to sacrifice correctness to make this solution as concise as possible. Changing the matrix [0 1; 1 1] to read [BigInt(0) 1; 1 1] will make this an exact solution. ↩

3. Python functions which take in pairs of integers and return pairs of integers, such as (a, b) \mapsto (b, a + b), also form a monoid under composition. However, this monoid is not helpful for speeding up algorithms using exponentiation by squaring, since the composition f \circ g of two functions still has to run both of those functions every time to produce an output. Compare this to two matrices F and G, where calculating FG once gives a new matrix which “shortcuts” the need to ever run F and G separately again. ↩