Monday, June 30, 2025

Harshad Numbers and the Ternary Operator

Today’s problem has less to do with computing Harshad numbers—I was unaware of the name until I solved the problem, just a few days before this went live—and much more to do with a piece of Java syntax that tightens up conditionals. By now, you are certainly advanced enough in Java to have come across it if you’ve looked anywhere else for tutorials, so I certainly should address it here.

The problem is the following:

A number is a Harshad number if it’s divisible by the sum of its digits. Given a number, figure out if it is one of these numbers. If it is, return the sum of the digits. If it isn’t—as is convention when things fail, are not found, etc.—return -1.

Here's the logic that gets it done:

public int sumOfTheDigitsOfHarshadNumber(int x) {
    int num = x;          // safekeeping copy of the original value
    int currentDigit;
    int sumOfDigits = 0;  // assumes x > 0, so this ends up nonzero

    while (x > 0) {
        currentDigit = x % 10;
        x /= 10;
        sumOfDigits += currentDigit;
    }

    return num % sumOfDigits == 0 ? sumOfDigits : -1;
}

I need the parameter number x to do things to it—get its digits, etc.—but I also need that value somewhere else for safekeeping. So I created another variable, num, to also store the same value.

The variable currentDigit will store, as I’m looping through, the digit I’m currently working on.

sumOfDigits is declared and initialized to 0 outside the loop, naturally, to store the sum of the digits of the number, which will determine if the number is a Harshad number or not.

Successively modding by 10 and dividing by 10 exposes and captures one digit at a time from the right, i.e., the least significant places. I’m doing addition here, so I don’t care much about the order of the numbers. This way works, and is much easier than coming from the left, so I might as well do what I know well. As I capture digits, I’m adding them to the running total of the sum of all the number’s digits.
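As a standalone sketch of just this digit-extraction step (the class name DigitTrace is mine, for illustration), the mod-and-divide loop can be isolated like so:

```java
import java.util.ArrayList;
import java.util.List;

public class DigitTrace {
    // Collects the digits of a positive number, rightmost first,
    // using the mod-10 / divide-by-10 technique described above.
    public static List<Integer> digitsRightToLeft(int x) {
        List<Integer> digits = new ArrayList<>();
        while (x > 0) {
            digits.add(x % 10); // expose the rightmost remaining digit
            x /= 10;            // drop that digit
        }
        return digits;
    }
}
```

For 754, this collects 4, then 5, then 7.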

There aren’t any leading zeroes, so while the value of x (which is initially the same as my parameter num) is bigger than zero, I can keep looping—I have to keep looping—because I still have digits to process.

Once I finish looping, I can do my Harshad calculation. This is the whole point of this article—that last line of code.

This is the ternary operator (formally, the conditional operator), a piece of syntax in Java and many other languages that allows programmers to condense if-else logic into a single expression.

The ternary form a ? b : c is an expression: it produces a value. So a statement like result = a ? b : c; is exactly equivalent to the more verbose

if (a) {
     result = b;
} else {
     result = c;
}

Get used to reading and writing both styles. They are interchangeable, and a good programmer knows and uses both.
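As a quick side-by-side (the class name TernaryDemo is mine), here is the same "larger of two numbers" logic written both ways:

```java
public class TernaryDemo {
    // One-line ternary form: the whole thing is an expression.
    public static int maxTernary(int a, int b) {
        return a > b ? a : b;
    }

    // Verbose if-else form: same logic, same result.
    public static int maxIfElse(int a, int b) {
        if (a > b) {
            return a;
        } else {
            return b;
        }
    }
}
```

Both methods return the same answer for any pair of inputs; the ternary just says it in one line.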

Given this, that last line means the following: if num % sumOfDigits == 0 is true (that is, if num, my safekeeping copy of the original x, which I manipulated heavily in the loop, is divisible by sumOfDigits), then return sumOfDigits; if not, return -1.
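To see the whole method in action, here's a hypothetical harness (the class name HarshadDemo is mine) around a lightly condensed copy of the method above:

```java
public class HarshadDemo {
    // Condensed copy of the article's method; assumes a positive input.
    public static int sumOfTheDigitsOfHarshadNumber(int x) {
        int num = x;
        int sumOfDigits = 0;
        while (x > 0) {
            sumOfDigits += x % 10; // capture the rightmost digit
            x /= 10;               // drop it
        }
        return num % sumOfDigits == 0 ? sumOfDigits : -1;
    }

    public static void main(String[] args) {
        System.out.println(sumOfTheDigitsOfHarshadNumber(18)); // 18 % (1+8) == 0, prints 9
        System.out.println(sumOfTheDigitsOfHarshadNumber(19)); // 19 % (1+9) != 0, prints -1
    }
}
```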


 

Sunday, June 29, 2025

A classic easy problem

A classic LeetCode Easy problem will be yet another gateway into the fantastic world of optimization, which we have only begun to unlock with Fibonacci recently.

The problem is the following:

  • You have a 1-D array of integers
  • You are given a target number
  • There may or may not be a pair of numbers (only one pair, and it will not be the same number twice) whose sum is the target number
  • If such a pair exists, return an array with the indices of the numbers that sum to the target
  • If no such pair exists, return [-1, -1] (recall that -1 is the traditionally-used index for when something is not found in an indexed collection)

It seems natural—naïve, yes, but natural—to check the pairwise sum of every number with every other number. This does work and will produce the correct result, but it is much less efficient than it could be.

The code might look something like

public static int[] twoSum(int[] nums, int target) {
    for (int i = 0; i < nums.length - 1; i++) {
        for (int j = i + 1; j < nums.length; j++) {
            if (nums[i] + nums[j] == target) {
                return new int[] { i, j };
            }
        }
    }
    // Return [-1, -1] if no solution is found
    return new int[] { -1, -1 };
}

No one reinvents the wheel here:

If this is our array:

Index:  0  1  2  3  4  5  6
Value:  2  3  5  2  1  4  6

and the target is 10,

the algorithm will check 2 and 3; 2 and 5; 2 and 2; 2 and 1; 2 and 4; 2 and 6; 3 and 5; 3 and 2; 3 and 1; 3 and 4; 3 and 6; 5 and 2; 5 and 1; 5 and 4; 5 and 6; 2 and 1; 2 and 4; 2 and 6; 1 and 4; 1 and 6; 4 and 6—and only on that last check will it find 4 + 6 = 10, so it'll return {5, 6}, since the match is at indices 5 and 6 (the 6th and 7th positions, out of 7, in the array).

Again, this is correct, but wildly inefficient.

Let me propose a better solution: use a structure that we’ve already covered to store both the current number and the number that would need to be found to pair with it, to sum to the target.


Again, whatever structure we would use would need to do something like this, storing pairs:

Found:        2  3  5  2  1  4  6
Need to see:  8  7  5  8  9  6  4

Can you think of any structures? How about a HashMap whose key is an Integer and whose value is an Integer? Conceptually, each pair encodes what we've actually seen and what we would need to see to make a valid target-sum; in the code below, the key is a number we've seen and the value is the index where we saw it, since the problem asks us to return indices, and the needed complement is always just the target minus the current number.

For each number we come across, we calculate its complement. We can then use containsKey() to see if that complement is already in the Map; if it is, get() hands us the index where we first saw it, and we return that index together with the current one. If the complement isn't there yet, we put the current number and its index into the map and keep going. If we never find a pair, as before, we return {-1, -1}.

That looks like:

import java.util.HashMap;

public static int[] twoSum(int[] nums, int target) {
    // Create a HashMap to store numbers and their indices
    HashMap<Integer, Integer> map = new HashMap<>();
    for (int i = 0; i < nums.length; i++) {
        // Calculate the complement needed to reach the target
        int complement = target - nums[i];
        // Check if the complement has already been seen
        if (map.containsKey(complement)) {
            // If found, return the indices of the complement and current number
            return new int[] { map.get(complement), i };
        }
        // Otherwise, store the current number and its index in the map
        map.put(nums[i], i);
    }
    // If no solution is found, return [-1, -1]
    return new int[] { -1, -1 };
}

Notice the enormous time savings this allows us to achieve: we've gone from O(n^2), checking every possible pair, down to O(n), a single pass through the array, because the map lets us check in constant time whether the complement of the number we're looking at has already been seen.
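As a sanity check on the example array from earlier, here's a hypothetical harness (the class name TwoSumDemo is mine) around the HashMap version:

```java
import java.util.Arrays;
import java.util.HashMap;

public class TwoSumDemo {
    // The HashMap-based solution: one pass, constant-time complement lookups.
    public static int[] twoSum(int[] nums, int target) {
        HashMap<Integer, Integer> map = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            int complement = target - nums[i];
            if (map.containsKey(complement)) {
                return new int[] { map.get(complement), i };
            }
            map.put(nums[i], i);
        }
        return new int[] { -1, -1 };
    }

    public static void main(String[] args) {
        int[] nums = { 2, 3, 5, 2, 1, 4, 6 };
        System.out.println(Arrays.toString(twoSum(nums, 10)));  // prints [5, 6]
        System.out.println(Arrays.toString(twoSum(nums, 100))); // prints [-1, -1]
    }
}
```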

This problem is a classic for a reason: the naïve solution isn't difficult at all, but having the experience to know you can improve (and then actually writing the improved solution) demonstrates you are no longer just a novice programmer and have made real progress in learning how to systematically approach optimization.

Saturday, June 28, 2025

A much-improved Fibonacci approach

 In the last article, we looked in depth at the enormous cost of even fib(7) by drawing out the tree of calls for it and realizing that going even to fib(8) would be much more expensive. The traditional recursive way, we concluded, was beautiful and simple, but terribly inefficient due to its exponential runtime.

Of course, almost anything is better than an exponential runtime, so we’ll take any improvement we can get.  But here’s the good news: I claim (and will soon prove) that I have a method that takes O(2^n) down to O(n). Those are enormous savings: in the first case, if n is 10, you do 1024 operations, whereas in the second case, with the same n, you only do 10—saving >=99% of the work from the exponential version, getting the answer that much more quickly and efficiently.

Here's how we can do it:

  • Set up an array
  • Use the array to remember values you’ve already calculated
  • Look them up, instead of recalculating them
  • Keep adding values as you calculate new ones


For comparison’s sake, let’s go calculate fib(7). For this, we need an array storing 8 integers, so we can index them from 0 to 7.

Initially, of course, the array looks like

Index:  0  1  2  3  4  5  6  7
Value:  0  0  0  0  0  0  0  0

But we can start by populating the values of the base cases, given in the recursive formula, namely, that fib(0)—or, here, in this context, fib[0] with square brackets because we’re accessing an index of an array, not calling a function—is 1, and so is fib[1].

Index:  0  1  2  3  4  5  6  7
Value:  1  1  0  0  0  0  0  0
 

Now, from fib[2] onward, let’s simply loop through the array and do one of two things:

  • Use an existing value
  • Update a 0 into a real value, where necessary

fib[2] is not a base case, but we know the rule is to add the previous two, so this is as simple as fib[2] = fib[0] + fib[1];

Index:  0  1  2  3  4  5  6  7
Value:  1  1  2  0  0  0  0  0
Now, we have fib[2] saved somewhere, and we’ll save ourselves a lot of time by not throwing that result away.

We do the same for fib[3], which is fib[2] + fib[1]. fib[1] was a base case, so we hard-coded that earlier, and we just calculated and saved fib[2], so we can use it.

Index:  0  1  2  3  4  5  6  7
Value:  1  1  2  3  0  0  0  0
 
Likewise, to get fib[4] by looking up (rather than recalculating) fib[2] and fib[3]:

Index:  0  1  2  3  4  5  6  7
Value:  1  1  2  3  5  0  0  0
 
Likewise, to get fib[5] by looking up (rather than recalculating) fib[3] and fib[4]:

Index:  0  1  2  3  4  5  6  7
Value:  1  1  2  3  5  8  0  0
 

Likewise, to get fib[6] by looking up (rather than recalculating) fib[4] and fib[5]:

Index:  0  1  2  3  4  5  6   7
Value:  1  1  2  3  5  8  13  0
 

Likewise, to get fib[7]—our final answer— by looking up (rather than recalculating) fib[5] and fib[6]:

Index:  0  1  2  3  4  5  6   7
Value:  1  1  2  3  5  8  13  21
This technique of storing a value and using it later is called memoizing (for as much trouble as my word processor gives me, there is no “r” as in “memoRizing”—it’s “memoizing” as in “writing a memo”) and is key to a pattern of programming called dynamic programming, which we’ll start covering explicitly soon.

This way, you still have the relationship that f(n) = f(n-1) + f(n-2), but, thanks to the fact that you’re storing these values somewhere, you don’t need to waste any time redoing past calculations. 
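The array walkthrough above translates directly to code like this (the class name FibonacciArray is mine; it follows this article's convention that fib(0) and fib(1) are both 1):

```java
public class FibonacciArray {
    // Bottom-up Fibonacci: fill an array of saved values left to right,
    // looking up past results instead of recalculating them.
    public static long fib(int n) {
        long[] table = new long[Math.max(n + 1, 2)];
        table[0] = 1; // base case
        table[1] = 1; // base case
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2]; // look up, don't recompute
        }
        return table[n];
    }
}
```

fib(7) comes out to 21, matching the final table above, and the whole computation is a single O(n) pass.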


Using a HashMap, we can write this much-improved solution like this:

import java.util.HashMap;
import java.util.Map;

public class FibonacciMemoization {

    // Cache to store already computed Fibonacci values
    private static Map<Integer, Long> memo = new HashMap<>();

    public static long fibonacci(int n) {
        // Base cases (this article's convention: fib(0) and fib(1) are both 1)
        if (n == 0 || n == 1) return 1;

        // Check if already computed
        if (memo.containsKey(n)) {
            return memo.get(n);
        }

        // Compute, store, and return
        long result = fibonacci(n - 1) + fibonacci(n - 2);
        memo.put(n, result);
        return result;
    }
}


Issues with a Naive Fibonacci

 Let’s look in detail at a naïve—that is, simple and elegant, but computationally expensive (you’ll get O(2^n)) way to implement Fibonacci. This, perhaps, was the way you learned how to implement it when you first came across the idea of recursive programming.


Fibonacci is defined very simply:

  • Fib(0) is 1
  • Fib(1) is 1
  • Fib(n) is the sum of Fib(n-1) and Fib(n-2) for all other n >=2

This last step is what makes it recursive—and, unfortunately, what makes the elegant recursive solution so expensive. Let's look at why.

The following is as literal (and correct) a translation of the 3 bullet points above as possible:

public static int fibonacci(int n) {
    if (n == 0 || n == 1) {
        return 1;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

There’s a problem with this code; even if not in theory, in practice: It repeats a lot  of work unnecessarily.

Suppose we need fib(10). Then we need

  • fib(9)
      • for which we need fib(8)
          • for which we need fib(7)
              • for which we need fib(6)
              • and fib(5)
          • and fib(6)
              • for which we need fib(5)
              • and fib(4)
      • and fib(7)
  • fib(8)
      • for which we need fib(7)
      • and fib(6)

The tree of calls isn't even complete—this is why Fibonacci takes so long, O(2^n)—and look at how many times the same subproblems (fib(7), fib(6), fib(5), and so on) reappear. And realize that if, for example, I need fib(100), I need to do that whole tree's worth of calculation before I can find the answer, even if fib(100) is just an intermediary goal.

Of course, using fib(100) is quite an extreme example, but it does its job: show just how much work you repeat due to the exponential nature of the problem.
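To put a rough number on all this repetition, here's a hypothetical instrumented copy of the naive method (the class name CallCounter and the calls field are mine) that counts every call it makes:

```java
public class CallCounter {
    static long calls = 0;

    // Naive recursive Fibonacci, with a counter bumped on every call.
    // Uses this article's convention: fib(0) and fib(1) are both 1.
    public static int fibonacci(int n) {
        calls++;
        if (n == 0 || n == 1) {
            return 1;
        }
        return fibonacci(n - 1) + fibonacci(n - 2);
    }

    public static void main(String[] args) {
        fibonacci(20);
        // A single fib(20) triggers over twenty thousand calls.
        System.out.println("calls for fib(20): " + calls);
    }
}
```

Over twenty thousand calls to produce one five-digit answer is exactly the exponential blowup the tree above illustrates.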

Let’s actually look at a tree showing the sequence of calls for fib(7) to see just how much work is repeated, on our way, in the next article, to a better (if less obvious and elegant) way to calculate Fibonacci that saves orders of magnitude of time and memory:

[Figure: tree of recursive calls for fib(7)]

Fib(7), by definition, requires fib(6) and fib(5). But here's the problem: the traditional approach calculates fib(6) and throws away all of its intermediate results—including fib(5), an answer we need anyway, which was already computed as part of fib(6) but which we foolishly discarded.

We get the O(2^n) complexity from the fact that the naïve formula splits each call into 2 subproblems and calculates each one independently, and we see that in the branching of the tree. 2^n grows very, very quickly, so while this approach always remains correct, its efficiency degrades just as quickly, and the method we'll propose in the next article takes over as the best way to approach this problem.
