
Wednesday, July 2, 2025

Tribonaccis as an example for all K-bonaccis

 Tribonacci numbers (and, in general, K-bonacci numbers) are defined not with 2 base cases (1, 1) and the rule to add the two predecessors, but with K base cases and the rule to add the K predecessors.


The good news is that they look very similar to traditional Fibonacci, so the intuitive solution (the elegant recursive solution) comes naturally to almost anyone who has solved traditional Fibonacci.  

The bad news is that they look very similar to traditional Fibonacci, so the same problems encountered with it also apply to these extended Fibonacci-like sequences, namely, that the complexity grows way out of hand even more quickly for K-bonacci than for Fibonacci.

The naive Fibonacci recursion grew at roughly 2^n because the rule was to add the 2 previous terms. Tribonacci (add the previous 3), then, will grow at roughly 3^n (2^10 is about 1,000; 3^10 is about 59,000); 4-bonacci will grow at roughly 4^n (4^10 is about 1,050,000); and so on.

Any Fibonacci-type sequence like this will see its naive recursion grow at a rate of roughly K^n, where K is the number of previous terms you add to get the next one (2 for Fibonacci; 3 for Tribonacci; 4 for 4-bonacci; etc.)
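You can watch this blow-up directly by instrumenting the naive recursion with a call counter. Here's a quick sketch (the class name and counter are my own, purely for illustration):

```java
public class CallCounter {
    static long calls = 0;

    // Naive Tribonacci, counting every recursive call made
    static long trib(int n) {
        calls++;
        if (n == 0) return 0;
        if (n == 1 || n == 2) return 1;
        return trib(n - 1) + trib(n - 2) + trib(n - 3);
    }

    public static void main(String[] args) {
        trib(20);
        // Well over 100,000 calls just to reach n = 20
        System.out.println(calls);
    }
}
```

Run the same experiment with a 2-term recursion and the count is far smaller, which is exactly the 2^n-versus-3^n gap described above.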

However, you don’t need to panic if you ever see K-bonacci sequences in a homework assignment or interview. You know the strategy that makes Fibonacci more efficient, and if it works for Fibonacci with 2 base cases, then it’ll work for Tribonacci with 3 base cases, and so on: don’t throw away the work you’ve already done. This becomes especially critical with bigger K values, since K^n blows up at lightning speed as K grows. (Recall that 2^10 was about a thousand, 3^10 is about 59 thousand, and 4^10 is more than a million.)

Here's what a Tribonacci solution would look like, using the same principle of saving our work as we go:

import java.util.HashMap;
import java.util.Map;

class Solution {

    // Memo seeded once with the three base cases
    private final Map<Integer, Integer> tribonacciMap = new HashMap<>();

    public Solution() {
        tribonacciMap.put(0, 0);
        tribonacciMap.put(1, 1);
        tribonacciMap.put(2, 1);
    }

    public int tribonacci(int n) {
        // If we haven't computed this term yet, compute it once and save it
        if (!tribonacciMap.containsKey(n)) {
            tribonacciMap.put(n, tribonacci(n - 1) + tribonacci(n - 2) + tribonacci(n - 3));
        }
        return tribonacciMap.get(n);
    }
}
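The same memoization pattern generalizes to any K. Here's a hedged sketch (the class name and parameterization are my own; I seed 0 for the first term and 1 for the rest, mirroring the Tribonacci base cases above):

```java
import java.util.HashMap;
import java.util.Map;

class KBonacci {
    private final Map<Integer, Long> memo = new HashMap<>();
    private final int k;

    KBonacci(int k) {
        this.k = k;
        // Base cases: first term 0, the remaining K-1 terms 1,
        // matching the Tribonacci seeding (0, 1, 1)
        memo.put(0, 0L);
        for (int i = 1; i < k; i++) {
            memo.put(i, 1L);
        }
    }

    long term(int n) {
        if (!memo.containsKey(n)) {
            long sum = 0;
            for (int i = 1; i <= k; i++) {
                sum += term(n - i);  // add the K predecessors
            }
            memo.put(n, sum);
        }
        return memo.get(n);
    }
}
```

With k = 3 this reproduces Tribonacci exactly (0, 1, 1, 2, 4, 7, 13, 24, ...), and with k = 2 it falls back to plain Fibonacci, each term computed only once.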





Saturday, June 28, 2025

A much-improved Fibonacci approach

 In the last article, we looked in depth at the enormous cost of even fib(7) by drawing out the tree of calls for it and realizing that going even to fib(8) would be much more expensive. The traditional recursive way, we concluded, was beautiful and simple, but terribly inefficient due to its exponential runtime.

Of course, almost anything is better than an exponential runtime, so we’ll take any improvement we can get.  But here’s the good news: I claim (and will soon prove) that I have a method that takes O(2^n) down to O(n). Those are enormous savings: in the first case, if n is 10, you do 1024 operations, whereas in the second case, with the same n, you only do 10—saving >=99% of the work from the exponential version, getting the answer that much more quickly and efficiently.

Here's how we can do it:

  • Set up an array
  • Use the array to remember values you’ve already calculated
  • Look them up, instead of recalculating them
  • Keep adding values as you calculate new ones


For comparison’s sake, let’s go calculate fib(7). For this, we need an array storing 8 integers, so we can index them from 0 to 7.

Initially, of course, the array looks like this:

Index: 0   1   2   3   4   5   6   7
Value: 0   0   0   0   0   0   0   0


But we can start by populating the base cases given in the recursive definition: fib[0] (square brackets now, because we're accessing an index of an array rather than calling a function) is 1, and so is fib[1].

Index: 0   1   2   3   4   5   6   7
Value: 1   1   0   0   0   0   0   0

 

Now, from fib[2] onward, let’s simply loop through the array and do one of two things:

  • Use an existing value
  • Update a 0 into a real value, where necessary

fib[2] is not a base case, but we know the rule is to add the previous two, so this is as simple as fib[2] = fib[0] + fib[1];

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   0   0   0   0   0

Now, we have fib[2] saved somewhere, and we’ll save ourselves a lot of time by not throwing that result away.

We do the same for fib[3], which is fib[2] + fib[1]. fib[1] was a base case, so we hard-coded it earlier, and we just calculated and saved fib[2], so we can use it:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   0   0   0   0

 
Likewise, to get fib[4] by looking up (rather than recalculating) fib[2] and fib[3]:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   5   0   0   0

 
Likewise, to get fib[5] by looking up (rather than recalculating) fib[3] and fib[4]:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   5   8   0   0

 

Likewise, to get fib[6] by looking up (rather than recalculating) fib[4] and fib[5]:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   5   8   13  0

 

Likewise, to get fib[7]—our final answer— by looking up (rather than recalculating) fib[5] and fib[6]:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   5   8   13  21

This technique of storing a value and using it later is called memoizing (for as much trouble as my word processor gives me, there is no “r” as in “memoRizing”—it’s “memoizing” as in “writing a memo”) and is key to a pattern of programming called dynamic programming, which we’ll start covering explicitly soon.

This way, you still have the relationship that f(n) = f(n-1) + f(n-2), but, thanks to the fact that you’re storing these values somewhere, you don’t need to waste any time redoing past calculations. 
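The table-filling walkthrough above translates directly into a bottom-up array version. Here's a minimal sketch (the class name is my own, and it follows this article's convention that fib(0) and fib(1) are both 1):

```java
public class FibonacciArray {

    // Bottom-up: fill the array from the base cases upward,
    // exactly as in the tables above
    public static long fibonacci(int n) {
        long[] fib = new long[Math.max(n + 1, 2)];
        fib[0] = 1;  // base case, per this article's definition
        fib[1] = 1;  // base case
        for (int i = 2; i <= n; i++) {
            fib[i] = fib[i - 1] + fib[i - 2];  // look up stored values, never recompute
        }
        return fib[n];
    }

    public static void main(String[] args) {
        System.out.println(fibonacci(7));  // 21, matching the final table
    }
}
```

One loop, one pass: each index is filled once, which is where the O(n) claim comes from.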


Using a HashMap, we can write this much-improved solution like this:

import java.util.HashMap;
import java.util.Map;

public class FibonacciMemoization {

    // Cache to store already computed Fibonacci values
    private static final Map<Integer, Long> memo = new HashMap<>();

    public static long fibonacci(int n) {
        // Base cases, per this article's definition: fib(0) = fib(1) = 1
        if (n == 0 || n == 1) return 1;

        // Check if already computed
        if (memo.containsKey(n)) {
            return memo.get(n);
        }

        // Compute, store, and return
        long result = fibonacci(n - 1) + fibonacci(n - 2);
        memo.put(n, result);
        return result;
    }
}


Issues with a Naive Fibonacci

 Let’s look in detail at a naïve (that is, simple and elegant, but computationally expensive, at O(2^n)) way to implement Fibonacci. This, perhaps, is the way you learned to implement it when you first came across the idea of recursive programming.


Fibonacci is defined very simply:

  • Fib(0) is 1
  • Fib(1) is 1
  • Fib(n) is the sum of Fib(n-1) and Fib(n-2) for all other n >=2

This last step is what makes it recursive and, unfortunately, what makes the elegant recursive solution so expensive. Let’s look at why.

The following is as literal (and correct) a translation of the 3 bullet points above as possible:

public static int fibonacci(int n) {
    // Base cases: Fib(0) and Fib(1) are both 1, per the definition above
    if (n == 0 || n == 1) {
        return 1;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

There’s a problem with this code, if not in theory then in practice: it repeats a lot of work unnecessarily.

Suppose we need fib(10). Then we need

  • fib(9)
      • for which we need fib(8)
          • for which we need fib(7)
              • for which we need fib(6)
              • and fib(5)
          • and fib(6)
              • for which we need fib(5)
              • and fib(4)
      • and fib(7)
  • fib(8)
      • for which we need fib(7)
      • and fib(6)

 

The tree of calls isn’t even complete (this is why Fibonacci takes so long: O(2^n)), and look at how many times the same subproblems reappear. And realize that if, for example, I need fib(100), I need to do that whole tree’s worth of calculation before I can find the answer, even if fib(100) is just an intermediate goal.

Of course, fib(100) is quite an extreme example, but it does its job: showing just how much work you repeat due to the exponential nature of the problem.

Let’s actually look at a tree showing the sequence of calls for fib(7) to see just how much work is repeated, on our way, in the next article, to a better (if less obvious and elegant) way to calculate Fibonacci that saves orders of magnitude of time and memory:

[Figure: tree of recursive calls for fib(7)]

Fib(7), by definition, requires fib(6) and fib(5). But here’s the problem: the traditional approach calculates fib(6) and then throws away all of its intermediate results, including fib(5) itself, an answer we need next but which was foolishly discarded.

We get the O(2^n) complexity from the fact that the naïve formula splits each call into 2 subproblems and calculates each one independently, as the branching of the tree shows. 2^n grows very, very quickly, so while this approach always remains correct, its efficiency degrades just as quickly, and the method we’ll propose in the next article takes over as the better way to approach this problem.
