Wednesday, July 2, 2025

Tribonaccis as an example for all K-bonaccis

 Tribonacci numbers (and, in general, K-bonacci numbers) are defined not with 2 base cases (1, 1) and the rule to add the two predecessors, but K base cases and the rule to add the K predecessors.


The good news is that they look very similar to traditional Fibonacci, so the intuitive solution (the elegant recursive solution) comes naturally to almost anyone who has solved traditional Fibonacci.  

The bad news is that they look very similar to traditional Fibonacci, so the same problems encountered with it also apply to these extended Fibonacci-like sequences, namely, that the complexity grows way out of hand even more quickly for K-bonacci than for Fibonacci.

Fibonacci grew at 2^n because the rule was to add the 2 previous terms. Tribonacci (add the previous 3), then, will grow at 3^n (2^10 is about 1,000; 3^10 is about 59,000); 4-bonacci will grow at 4^n (2^10 is about 1,000; 4^10 is about 1,050,000) and so on.

Any Fibonacci-type sequence like this will grow at a rate of K^n, where K is the number of terms you have to compute to get the next one (2 for Fibonacci; 3 for Tribonacci; 4 for 4-bonacci; etc.)
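A quick way to feel that growth is to count the calls the naive recursion actually makes. This is just an illustrative sketch (the class and method names are mine), with fib using this blog's base cases of 1 and 1, and trib using the 0, 1, 1 convention:

```java
public class CallCounter {
    static long calls;

    // Naive Fibonacci: each call fans out into 2 more calls
    static long fib(int n) {
        calls++;
        if (n == 0 || n == 1) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    // Naive Tribonacci: each call fans out into 3 more calls
    static long trib(int n) {
        calls++;
        if (n == 0) return 0;
        if (n == 1 || n == 2) return 1;
        return trib(n - 1) + trib(n - 2) + trib(n - 3);
    }

    public static void main(String[] args) {
        calls = 0;
        fib(20);
        System.out.println("naive fib(20) made " + calls + " calls");
        calls = 0;
        trib(20);
        System.out.println("naive trib(20) made " + calls + " calls");
    }
}
```

Even at n = 20, the tribonacci call count is already several times larger than the Fibonacci count, and the gap widens fast as n grows.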

However, you don’t need to panic if you ever see K-bonacci sequences in a homework assignment or interview. You know the strategy for making Fibonacci more efficient, and if it works for Fibonacci with 2 base cases, it’ll work for Tribonacci with 3 base cases, and so on: don’t throw away the work you’ve already done. This becomes especially critical for larger values of K, since increasing K drives K^n up at lightning speed. (Recall that 2^10 was about a thousand, 3^10 is about 59 thousand, and 4^10 is more than a million.)

Here's what a tribonacci solution would look like, using the same principle of saving our work as we go:

import java.util.HashMap;
import java.util.Map;

class Solution {
    Map<Integer, Integer> tribonacciMap = new HashMap<>();

    // Seed the three base cases: T(0) = 0, T(1) = 1, T(2) = 1
    public void setTribonacciMap() {
        tribonacciMap.put(0, 0);
        tribonacciMap.put(1, 1);
        tribonacciMap.put(2, 1);
    }

    public int tribonacci(int n) {
        setTribonacciMap();
        if (n == 0 || n == 1 || n == 2) {
            return tribonacciMap.get(n);
        }
        // Only compute n if we haven't already saved it
        if (!tribonacciMap.containsKey(n)) {
            tribonacciMap.put(n, tribonacci(n - 1) + tribonacci(n - 2) + tribonacci(n - 3));
        }
        return tribonacciMap.get(n);
    }
}
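The same pattern generalizes to any K. Here's a hedged sketch (the KBonacci class and its naming are mine), assuming base cases of 0 followed by 1s, which mirrors the tribonacci convention above:

```java
import java.util.HashMap;
import java.util.Map;

class KBonacci {
    private final int k;
    private final Map<Integer, Long> memo = new HashMap<>();

    // Base cases: term 0 is 0, and terms 1 through k-1 are 1,
    // mirroring the tribonacci convention used above.
    KBonacci(int k) {
        this.k = k;
        memo.put(0, 0L);
        for (int i = 1; i < k; i++) {
            memo.put(i, 1L);
        }
    }

    // Memoized: each value is computed at most once, then looked up.
    long compute(int n) {
        if (!memo.containsKey(n)) {
            long sum = 0;
            for (int back = 1; back <= k; back++) {
                sum += compute(n - back);
            }
            memo.put(n, sum);
        }
        return memo.get(n);
    }
}
```

With k = 3 this reproduces the tribonacci values above (0, 1, 1, 2, 4, 7, 13, 24, ...), and with k = 2 it falls back to Fibonacci with the 0, 1 base cases.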





Tuesday, July 1, 2025

Binary and Hex basics

Every good programmer needs to know—of course—how to count and do basic math in the decimal system with digits 0123456789. But good programmers should also know how to count and do basic math in two other bases: binary and hexadecimal. The two are related, and it's especially easy to go between them, so we’ll cover conversions before we wrap up here.

First, binary. We normally count with 0123456789, so we have 10 symbols, and our numbers revolve around powers of 10: 1 is 1, 10 is ten times more; 100 is ten times more than that; 1000 is ten times more still; etc. On the other hand, 0.1 is one tenth of 1; 0.01 is one tenth of 0.1; 0.001 is one tenth of 0.01; and so on.

Binary reduces the symbol load to only two—01. Anything you can do in decimal, you can do in binary, and vice versa. The only difference is that place values aren’t 1s, 10s, 100s, 1000s, etc.—they’re 1s, 2s, 4s, 8s, 16s, etc. So binary numbers roll over to more places sooner than base 10 does (about 3.32 times faster), but how the numbers behave when counting, adding, multiplying, dividing, and subtracting is exactly the same.

And just like 9,999,999 is one less than 10,000,000 (which is a perfect power of 10), the equivalent is that 1111 1111 is one less than 1 0000 0000, which is a perfect power of 2. (In decimal, that’s equivalent to saying that 255 is one less than 256, by the way.)
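If you ever want to sanity-check claims like this, Java's built-in base converters make it quick. A small sketch:

```java
public class BinaryCheck {
    public static void main(String[] args) {
        // 255 in binary: all eight bits set
        System.out.println(Integer.toBinaryString(255)); // 11111111
        // One more gives a perfect power of 2: a 1 followed by eight 0s
        System.out.println(Integer.toBinaryString(256)); // 100000000
        // And parsing goes the other way
        System.out.println(Integer.parseInt("11111111", 2)); // 255
    }
}
```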

Here's how I like to convert numbers from decimal to binary. Let’s convert 485.

  • I happen to know the biggest power of 2 less than or equal to 485 is 256 (which is 2*2*2*2… 8 times, or 2^8)
  • Subtract that number from 485, and we’re left with 229.
    • We have 1 grouping of 256, plus 229
  • Now, look at the biggest power of 2 less than 229. That’s 128. Make a note. Subtracting it off leaves 101.
    • 1 group of 256
    • 1 group of 128
  • Now, the biggest power left is 64, and subtracting it off leaves 37.
    • 1 group of 256
    • 1 group of 128
    • 1 group of 64
  • Now do the same for 37. The biggest power is 32. Subtracting it leaves 5.
    • 1 group of 256
    • 1 group of 128
    • 1 group of 64
    • 1 group of 32
  • 5 is smaller than 16 and 8, so we won’t have any groups of those
    • 1 group of 256
    • 1 group of 128
    • 1 group of 64
    • 1 group of 32
    • 0 groups of 16
    • 0 groups of 8
  • But we will have 1 group of 4, and that leaves us with 1 left
    • 1 group of 256
    • 1 group of 128
    • 1 group of 64
    • 1 group of 32
    • 0 groups of 16
    • 0 groups of 8
    • 1 group of 4
    • 0 groups of 2
    • 1 group of 1

We then write the number by converting the vertical list into a horizontal one: 485 in binary is 1 1110 0101. Checking our math is very easy: we can simply add all the powers where we have 1s, and ignore the powers where we have 0s, and the sum should be the number we started with. 256+128+64+32+4+1 is, in fact, 485, so we did everything right. Notice something: because we only have 2 digits, we can immediately know our answer is wrong if the parity does not match as expected. If a supposedly even number comes back with a last digit of 1, we did something wrong; if a supposedly odd number comes back with a last digit of 0, we did something wrong.
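The subtract-the-biggest-power walkthrough above translates into code fairly directly. A sketch (the toBinary helper is my own illustration):

```java
public class DecimalToBinary {
    // Convert n to a binary string by repeatedly subtracting the
    // biggest power of two that still fits, as in the walkthrough.
    static String toBinary(int n) {
        if (n == 0) return "0";
        // Find the biggest power of 2 less than or equal to n
        int power = 1;
        while (power * 2 <= n) power *= 2;
        StringBuilder bits = new StringBuilder();
        while (power > 0) {
            if (n >= power) {   // 1 group of this power
                bits.append('1');
                n -= power;
            } else {            // 0 groups of this power
                bits.append('0');
            }
            power /= 2;
        }
        return bits.toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary(485)); // 111100101
    }
}
```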

Adding and subtracting work much the same way as in decimal, only there are more frequent carries or borrowings, since there are fewer symbols.
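You can always let Java do the carrying to check a binary sum done by hand. For example, 0110 (6) plus 0111 (7):

```java
public class BinaryAddition {
    public static void main(String[] args) {
        int a = Integer.parseInt("0110", 2); // 6
        int b = Integer.parseInt("0111", 2); // 7
        // 6 + 7 = 13, which is 1101 in binary
        System.out.println(Integer.toBinaryString(a + b)); // 1101
    }
}
```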

Hexadecimal uses 16 symbols, as opposed to just 10. In addition to 0123456789, we also use ABCDEF.

So counting in hexadecimal looks like 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F, 10, 11, 12…, and that is equivalent in decimal to 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18…

Hexadecimal-binary conversions are nice because 16 is a power of 2, so the mapping between the bases is really easy: much, much easier than either binary or hexadecimal to decimal, or vice versa.

You’ll notice that, earlier in the article, I split up a binary number like 1 1110 0101. This makes seeing things in 4-bit groupings (as opposed to decimal’s 3-digit groupings of thousands, millions, billions, etc.) so much easier, and lets the hexadecimal practically jump off the page.

The conversion becomes almost trivial:

  • Starting from the rightmost place, break up the number into 4-bit segments, as done above: 1 1110 0101.
  • If the left-most segment (the most significant) is fewer than 4 bits, pad it on the left with zeroes: 0001 1110 0101
  • Now, just treat every 4-bit section as a single hex digit—because it is—and convert it, remembering that A is decimal 10, B is decimal 11, C is decimal 12, D is decimal 13, E is decimal 14, and F is decimal 15

  • The number becomes:
    • Left-most group: 0001 in binary is 1 in hex
    • Second group: 1110 in binary is E in hex (because 1110 is 14 in decimal, which is E in hex)
    • Third group: 0101 in binary is 5 in hex
  • So the final number is 1E5 in hexadecimal
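The whole 4-bit-grouping conversion can be sketched as a helper method (toHex is my own illustration, not library code):

```java
public class BinaryToHex {
    // Convert a binary string to hex by padding to a multiple of 4 bits
    // and mapping each 4-bit group to one hex digit.
    static String toHex(String bits) {
        int pad = (4 - bits.length() % 4) % 4;
        bits = "0".repeat(pad) + bits; // String.repeat needs Java 11+
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < bits.length(); i += 4) {
            int value = Integer.parseInt(bits.substring(i, i + 4), 2);
            hex.append(Character.toUpperCase(Character.forDigit(value, 16)));
        }
        return hex.toString();
    }

    public static void main(String[] args) {
        System.out.println(toHex("111100101")); // 1E5
    }
}
```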

To convert this back into decimal, we know that our columns now are 1s, 16s, 256s, etc., so we have

  • 5 1s
  • E 16s (that is, 14 16s), or 224 in this column
  • 1 256

And the sum of all that (5 + 224 + 256) is in fact our original number of 485.
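A quick check of that place-value arithmetic, along with the round trip through Java's built-ins:

```java
public class HexCheck {
    public static void main(String[] args) {
        // Place values are 1s, 16s, 256s: 5*1 + 14*16 + 1*256
        System.out.println(5 * 1 + 14 * 16 + 1 * 256);   // 485
        System.out.println(Integer.parseInt("1E5", 16)); // 485
        System.out.println(Integer.toHexString(485));    // 1e5 (Java prints lowercase)
    }
}
```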

 

Monday, June 30, 2025

Harshad Numbers and the Ternary operator

Today’s problem has less to do with computing Harshad numbers—I was unaware of the name until I solved the problem, just a few days before this went live—and much more to do with a piece of Java syntax that tightens up conditionals. By now, you are certainly advanced enough in Java to have come across it if you’ve looked anywhere else for tutorials, so I certainly should address it here.

The problem is the following:

A number is a Harshad number if it’s divisible by the sum of its digits. Given a number, figure out if it is one of these numbers. If it is, return the sum of the digits. If it isn’t—as is convention when things fail, are not found, etc.—return -1.

Here's the logic that gets it done:

public int sumOfTheDigitsOfHarshadNumber(int x) {
    int num = x;          // keep the original value for the final divisibility check
    int currentDigit;
    int sumOfDigits = 0;

    while (x > 0) {
        currentDigit = x % 10;  // peel off the rightmost digit
        x /= 10;
        sumOfDigits += currentDigit;
    }

    return num % sumOfDigits == 0 ? sumOfDigits : -1;
}

I need the parameter number x to do things to it—get its digits, etc.—but I also need that value somewhere else for safekeeping. So I created another variable, num, to also store the same value.

The variable currentDigit will store, as I’m looping through, the digit I’m currently working on.

sumOfDigits is declared and initialized to 0 outside the loop, naturally, to store the sum of the digits of the number, which will determine if the number is a Harshad number or not.

Successively modding by 10 and dividing by 10 exposes and captures one digit at a time from the right, i.e., the least significant places. I’m doing addition here, so I don’t care much about the order of the numbers. This way works, and is much easier than coming from the left, so I might as well do what I know well. As I capture digits, I’m adding them to the running total of the sum of all the number’s digits.

There aren’t any leading zeroes, so while the value of x (which is initially the same as my parameter num) is bigger than zero, I can keep looping—I have to keep looping—because I still have digits to process.

Once I finish looping, I can do my Harshad calculation. This is the whole point of this article—that last line of code.

This is a ternary operator, a syntactical choice in Java and other languages that allows programmers to condense if-else logic into one line.

The assignment result = a ? b : c is exactly equivalent to the more verbose

if (a) {
     result = b;
} else {
     result = c;
}

One subtlety: a ? b : c is an expression, not a statement. It produces a value, which is why you can return it or assign it directly, as the Harshad code above does.

Get used to reading and writing both styles. They are interchangeable, and a good programmer knows and uses both.

Given this, that last line means the following: if num % sumOfDigits == 0 is true (that is, if num is divisible by sumOfDigits; recall that num is the copy I made of x before all the digit manipulation), then return sumOfDigits; if not, return -1.
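One more small example, since it's worth internalizing that the ternary is an expression producing a value (the names here are mine):

```java
public class TernaryDemo {
    // The expression form of "return the larger of a and b"
    static int max(int a, int b) {
        return a > b ? a : b;
    }

    public static void main(String[] args) {
        int x = 7;
        // Assigned directly: no if-else block needed
        String parity = (x % 2 == 0) ? "even" : "odd";
        System.out.println(max(x, 3)); // 7
        System.out.println(parity);    // odd
    }
}
```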


 

Sunday, June 29, 2025

A classic easy problem

A classic LeetCode Easy problem will be yet another gateway into the fantastic world of optimization, which we have only begun to unlock with Fibonacci recently.

The problem is the following:

  • You have a 1d array of integers
  • You are given a target number
  • There may or may not be a pair of numbers (only one pair, and it will not be the same number twice) whose sum is the target number
  • If such a pair exists, return an array with the indices of the numbers that sum to the target
  • If no such pair exists, return [-1, -1] (recall that -1 is the traditionally-used index for when something is not found in an indexed collection)

It seems natural—naïve, yes, but natural—to check the pairwise sum of every number with every other number. This does work and will produce the correct result, but it is much less efficient than it could be.

The code might look something like

public static int[] twoSum(int[] nums, int target) {
    for (int i = 0; i < nums.length - 1; i++) {
        for (int j = i + 1; j < nums.length; j++) {
            if (nums[i] + nums[j] == target) {
                return new int[] { i, j };
            }
        }
    }
    // Return [-1, -1] if no solution is found
    return new int[] { -1, -1 };
}

No one reinvents the wheel here; this is the standard brute force. Let's trace it:

If this is our array:

2   3   5   2   1   4   6

 And the target is 10,

the algorithm will check 2 and 3; 2 and 5; 2 and 2; 2 and 1; 2 and 4; 2 and 6; 3 and 5; 3 and 2; 3 and 1; 3 and 4; 3 and 6; 5 and 2; 5 and 1; 5 and 4; 5 and 6; 2 and 1; 2 and 4; 2 and 6; 1 and 4; 1 and 6; and finally 4 and 6. Only on that last check will it find 4 + 6 = 10, so it’ll return {5, 6}, since the match is at indices 5 and 6 (the 6th and 7th positions, out of 7, in the array).

Again, this is correct, but wildly inefficient.

Let me propose a better solution: use a structure that we’ve already covered to store both the current number and the number that would need to be found to pair with it, to sum to the target.


Again, whatever structure we would use would need to do something like this, storing pairs:

Found:        2   3   5   2   1   4   6
Need to see:  8   7   5   8   9   6   4


Can you think of any structures? How about, since we need to maintain the pairs, a HashMap, whose key is an Integer and whose value is an Integer? Each key-value pair then encodes what we’ve actually seen, and what we would need to see to make a valid target-sum.

In code, we flip that table around slightly: we use each number we’ve seen as a key, with its index as the value. For each new number, we calculate its complement (target minus the number) and use containsKey() to ask whether that complement is already in the Map. If it is, get() hands back the index where we saw it, and we return both indices; if we finish the array without ever finding a pair, as before, we return {-1, -1}.

That looks like:

import java.util.HashMap;

public static int[] twoSum(int[] nums, int target) {
    // Create a HashMap to store numbers and their indices
    HashMap<Integer, Integer> map = new HashMap<>();
    for (int i = 0; i < nums.length; i++) {
        // Calculate the complement needed to reach the target
        int complement = target - nums[i];
        // Check if the complement has already been seen
        if (map.containsKey(complement)) {
            // If found, return the indices of the complement and current number
            return new int[] { map.get(complement), i };
        }
        // Otherwise, store the current number and its index in the map
        map.put(nums[i], i);
    }
    // If no solution is found, return [-1, -1]
    return new int[] { -1, -1 };
}

Notice the enormous time savings: the nested-loop version examines every possible pair, which is O(n^2), while this version makes a single O(n) pass through the array, storing each number (and its index) in the map as it goes and checking the map for the complement it needs.

This problem is a classic for a reason: the naïve solution isn’t difficult at all, but having the experience to know you can improve (and then actually writing the improved solution) demonstrates you are no longer just a novice programmer and have made real progress in learning how to systematically approach problems.

Saturday, June 28, 2025

A much-improved Fibonacci approach

 In the last article, we looked in depth at the enormous cost of even fib(7) by drawing out the tree of calls for it and realizing that going even to fib(8) would be much more expensive. The traditional recursive way, we concluded, was beautiful and simple, but terribly inefficient due to its exponential runtime.

Of course, almost anything is better than an exponential runtime, so we’ll take any improvement we can get.  But here’s the good news: I claim (and will soon prove) that I have a method that takes O(2^n) down to O(n). Those are enormous savings: in the first case, if n is 10, you do 1024 operations, whereas in the second case, with the same n, you only do 10—saving >=99% of the work from the exponential version, getting the answer that much more quickly and efficiently.

Here's how we can do it:

  • Set up an array
  • Use the array to remember values you’ve already calculated
  • Look them up, instead of recalculating them
  • Keep adding values as you calculate new ones


For comparison’s sake, let’s go calculate fib(7). For this, we need an array storing 8 integers, so we can index them from 0 to 7.

Initially, of course, the array looks like

Index: 0   1   2   3   4   5   6   7
Value: 0   0   0   0   0   0   0   0


But we can start by populating the values of the base cases, given in the recursive formula, namely, that fib(0)—or, here, in this context, fib[0] with square brackets because we’re accessing an index of an array, not calling a function—is 1, and so is fib[1].

Index: 0   1   2   3   4   5   6   7
Value: 1   1   0   0   0   0   0   0

 

Now, from fib[2] onward, let’s simply loop through the array and do one of two things:

  • Use an existing value
  • Update a 0 into a real value, where necessary

fib[2] is not a base case, but we know the rule is to add the previous two, so this is as simple as fib[2] = fib[0] + fib[1];

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   0   0   0   0   0

Now, we have fib[2] saved somewhere, and we’ll save ourselves a lot of time by not throwing that result away.

We do the same for fib[3], which is fib[2] + fib[1]. fib[1] was a base case, so we hard-coded that earlier, and we just calculated and saved fib[2], so we can use it:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   0   0   0   0

 
Likewise, to get fib[4] by looking up (rather than recalculating) fib[2] and fib[3]:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   5   0   0   0

 
Likewise, to get fib[5] by looking up (rather than recalculating) fib[3] and fib[4]:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   5   8   0   0

 

Likewise, to get fib[6] by looking up (rather than recalculating) fib[4] and fib[5]:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   5   8   13  0

 

Likewise, to get fib[7]—our final answer— by looking up (rather than recalculating) fib[5] and fib[6]:

Index: 0   1   2   3   4   5   6   7
Value: 1   1   2   3   5   8   13  21

This technique of storing a value and using it later is called memoizing (for as much trouble as my word processor gives me, there is no “r” as in “memoRizing”—it’s “memoizing” as in “writing a memo”) and is key to a pattern of programming called dynamic programming, which we’ll start covering explicitly soon.

This way, you still have the relationship that f(n) = f(n-1) + f(n-2), but, thanks to the fact that you’re storing these values somewhere, you don’t need to waste any time redoing past calculations. 
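The table-filling walk above translates almost line for line into a bottom-up array version. A sketch, using this article's base cases of fib[0] = fib[1] = 1:

```java
public class FibonacciBottomUp {
    // Fill the array left to right, exactly as in the tables above,
    // so every value is computed once and then looked up.
    static long fib(int n) {
        if (n < 2) return 1;
        long[] table = new long[n + 1];
        table[0] = 1;
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2]; // look up, don't recompute
        }
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(7)); // 21
    }
}
```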


Using a HashMap, we can write this much-improved solution like this:

import java.util.HashMap;
import java.util.Map;

public class FibonacciMemoization {
    // Cache to store already computed Fibonacci values
    private static Map<Integer, Long> memo = new HashMap<>();

    public static long fibonacci(int n) {
        // Base cases, matching the tables above: fib(0) = fib(1) = 1
        if (n == 0 || n == 1) return 1;

        // Check if already computed
        if (memo.containsKey(n)) {
            return memo.get(n);
        }

        // Compute, store, and return
        long result = fibonacci(n - 1) + fibonacci(n - 2);
        memo.put(n, result);
        return result;
    }
}


Issues with a Naive Fibonacci

 Let’s look in detail at a naïve way to implement Fibonacci: simple and elegant, but computationally expensive, at O(2^n). This, perhaps, was the way you learned to implement it when you first came across the idea of recursive programming.


Fibonacci is defined very simply:

  • Fib(0) is 1
  • Fib(1) is 1
  • Fib(n) is the sum of Fib(n-1) and Fib(n-2) for all other n >=2

This last step is what makes it recursive, and, unfortunately, it is also what makes the elegant recursive solution so expensive. Let’s look at why.

The following is as literal (and correct) a translation of the 3 bullet points above as possible:

public static int fibonacci(int n) {
    if (n == 0 || n == 1) {
        return 1;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

There’s a problem with this code, if not in theory then in practice: it repeats a lot of work unnecessarily.

Suppose we need fib(10). Then we need

  • Fib(9)
    • For which we need fib(8)
      • For which we need fib(7)
        • For which we need fib(6)
        • And fib(5)
      • And fib(6)
        • For which we need fib(5)
        • And fib(4)
    • And fib(7)
  • Fib(8)
    • For which we need fib(7)
    • And fib(6)

 

The tree of calls isn’t even complete (this is why Fibonacci takes so long, O(2^n)), and look at how many times the same calls repeat. And realize that if, for example, I need fib(100), I need to do that whole tree’s worth of calculation before I can find the answer, even if fib(100) is just an intermediate goal.

Of course, using fib(100) is quite an extreme example, but it does its job: show just how much work you repeat due to the exponential nature of the problem.

Let’s actually look at a tree showing the sequence of calls for fib(7) to see just how much work is repeated, on our way, in the next article, to a better (if less obvious and elegant) way to calculate Fibonacci that saves orders of magnitude of time and memory:

[Figure: tree of recursive calls for fib(7)]

Fib(7), by definition, requires fib(6) and fib(5). But here’s the problem: the traditional approach calculates fib(6), then throws away all of its intermediate results, including fib(5), an answer we need anyway and had already computed as part of fib(6).

We get the O(2^n) complexity from the fact that the naïve formula splits every call into 2 more calls, each computed independently, and we see that in the branching of the tree. 2^n grows very, very quickly, so while this approach always remains correct, it degrades in efficiency just as quickly, and the method we propose in the next article takes over as the better way to approach this problem.
