It is good to remember why well-designed algorithms matter, so I decided to experiment with the well-known Fibonacci numbers. I have this sticky note plastered on my monitor at work:

“Good Algorithms beat Supercomputers.” Let's take a glimpse into why that is the case.

The Fibonacci recurrence is defined as:

F(n) = F(n-1) + F(n-2), for n > 1

and F(n) = n for n = 0 or 1.

This gives the familiar sequence 0, 1, 1, 2, 3, 5, 8, …
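As a quick sanity check on the recurrence, the sequence can also be generated with a simple loop (the class and method names here are my own, just for illustration):

```java
public class FiboSequence {
    // Iterative application of the recurrence: returns F(n).
    public static long fib(int n) {
        long a = 0, b = 1; // a = F(i), b = F(i+1)
        for (int i = 0; i < n; i++) {
            long next = a + b;
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 7; i++) {
            System.out.print(fib(i) + (i < 6 ? ", " : "\n"));
        }
        // prints: 0, 1, 1, 2, 3, 5, 8
    }
}
```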

Translating this to code, we have the naive algorithm:

public static long naiveFibo(int n) {
    if (n == 1 || n == 0) {
        return n;
    }
    return naiveFibo(n - 1) + naiveFibo(n - 2);
}

However, this naive algorithm suffers from a fatal flaw: as n gets larger, we end up recomputing Fibonacci values for the same m < n over and over. This becomes prohibitively expensive as n grows.

e.g. Fib(5) = Fib(4) + Fib(3)

Fib(4) = Fib(3) + Fib(2)

Fib(3) = Fib(2) + Fib(1)

Fib(2) = Fib(1) + Fib(0)

We can already see Fib(2) being computed thrice and Fib(3) twice with this input.
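To make the duplication concrete, here is a small sketch (the class name and counter are my own additions) that counts every invocation the naive version makes. The totals grow exponentially with n, which is exactly the redundant work memoization will eliminate:

```java
public class NaiveCallCount {
    private static long calls;

    private static long naiveFibo(int n) {
        calls++; // count every invocation, including repeated subproblems
        if (n == 0 || n == 1) {
            return n;
        }
        return naiveFibo(n - 1) + naiveFibo(n - 2);
    }

    // Total number of naiveFibo invocations needed to compute F(n).
    public static long countCalls(int n) {
        calls = 0;
        naiveFibo(n);
        return calls;
    }

    public static void main(String[] args) {
        for (int n = 5; n <= 30; n += 5) {
            System.out.println("F(" + n + ") took " + countCalls(n) + " calls");
        }
    }
}
```

For Fib(5) this reports 15 calls; in general the count is 2·F(n+1) − 1, so it blows up at the same exponential rate as the Fibonacci numbers themselves.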

So what now?

Let's apply an optimization. This particular technique is called memoization. In simple terms, we store each Fibonacci value in an array, at its index, the first time it is computed; e.g. arr[5] holds the value of Fib(5).

Now, each time we want the Fibonacci value for a given n, we first do a lookup in our array; if the value is not there yet, we compute and store it. While this is a fairly well-known technique found in any decent CS textbook, it is good to experiment and see the true power of well-designed algorithms. Let's see what the code looks like:

public static long memoizedFibo(int n) {
    if (arr[n] == 0) {
        if (n == 0 || n == 1) {
            arr[n] = n;
        } else {
            arr[n] = memoizedFibo(n - 1) + memoizedFibo(n - 2);
        }
    }
    return arr[n];
}
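One subtlety in the code above: it uses 0 as the "not yet computed" sentinel, which works only because the single value that is legitimately 0, Fib(0), is a cheap base case to recompute. A variant that makes the sentinel explicit by caching in a map, where absence of a key unambiguously means "not computed" (class and method names here are my own sketch, not the author's code):

```java
import java.util.HashMap;
import java.util.Map;

public class MemoFibo {
    // Cache keyed by n; a missing key means "not yet computed",
    // so a cached value of 0 can never be confused with the sentinel.
    private static final Map<Integer, Long> cache = new HashMap<>();

    public static long fib(int n) {
        if (n == 0 || n == 1) {
            return n;
        }
        Long cached = cache.get(n);
        if (cached != null) {
            return cached;
        }
        long value = fib(n - 1) + fib(n - 2);
        cache.put(n, value);
        return value;
    }

    public static void main(String[] args) {
        // F(92) overflows a signed 64-bit long, so stay at or below 90 here.
        System.out.println(fib(90));
    }
}
```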

The client code follows in case you want to reproduce this on your machine:

public class Fibo {
    private static long[] arr = new long[100];

    public static void main(String[] args) {
        long startTime;
        double elapsedTime;
        for (int i = 0; i < 91; i++) {
            startTime = System.nanoTime();
            System.out.println(memoizedFibo(i));
            // Divide by a double literal; integer division would truncate to whole seconds.
            elapsedTime = (System.nanoTime() - startTime) / 1_000_000_000.0;
            System.out.println("Time taken to compute Fib(" + i + ") memoized: " + elapsedTime + " seconds");
        }
        for (int j = 0; j < 91; j++) {
            startTime = System.nanoTime();
            System.out.println(naiveFibo(j));
            elapsedTime = (System.nanoTime() - startTime) / 1_000_000_000.0;
            System.out.println("Time taken to compute Fib(" + j + ") naive: " + elapsedTime + " seconds");
        }
    }
    // Add the naiveFibo and memoizedFibo methods here
}

This seemingly simple change gives a massive speedup. I did some experimental runs on my 1.7 GHz Core i5 with 4 GB RAM.

Now that is wicked. Up to about n = 40, the JVM and processor keep the naive version looking competitive. Beyond that, the gap grows exponentially: by the time n was 53, the naive algorithm took 449 seconds to compute the Fibonacci value, while the memoized version was still effectively at 0 seconds. Truly awesome. Next time you are writing any code, remember that good algorithms beat supercomputers.

I like your post, but I'd much rather treat this as a good example of how to improve an algorithm irrespective of the underlying system, be it a supercomputer or a PC.

I personally think that in order to support the 'good algorithms beat supercomputers' theory, you'd need to do something a little more elaborate.

I agree; the point, albeit indirect, was to illustrate how exponentially expensive operations will ultimately exhaust the resources of even a supercomputer without a well-designed algorithm.

Transient would probably have taken you to, say, 80 runs with naiveFibo before the deviation started looking obvious. Since you are right on this one, maybe you could post numbers with that as well.

Hello,

Thanks for reading. Could you please elaborate on what I should try making transient?