Finding the number of reduced fractions with denominator d

Part 1: isPrime and GCD

I recently came across a Codewars problem that I really enjoyed:

Given a number ‘d’, find out how many reduced fractions can be generated with it as the denominator. https://www.codewars.com/kata/number-of-proper-fractions-with-denominator-d . (Note that the author first named them proper fractions and later corrected himself, but the name of the kata stuck. As a former math teacher, I can’t not use the proper terminology :) )

For example, if the input is the number 5, the output is 4 because 1/5, 2/5, 3/5, and 4/5 are all reduced fractions. 5/5 is not a reduced fraction because it can be reduced to 1/1.

If the input is 8, the output is 4 because 1/8, 3/8, 5/8, 7/8 are reduced fractions, but 2/8 (= 1/4), 4/8 (= 1/2), 6/8 (=3/4), and 8/8 (=1/1) are not.

If the input is 15, what is the output? If you got 8 you’ve got the hang of this. If not, try again.

The prompt includes two useful tips for passing the specs:

1) A fraction is considered reduced if and only if the greatest common divisor (GCD) between the numerator and denominator is 1.

2) Be ready to handle big numbers!

I always love/hate that second hint/warning because I know it means I’m going to be doing some optimizations… In this blog post I’ll be sharing my thought process, code snippets, and sometimes pseudocode so that my final solution can’t just be copied and pasted and kata points scored willy-nilly. Still, this post will obviously be a spoiler, which might be exactly what you’re looking for. In any case, you’ve been warned!

Without any optimization in mind, I went for a brute force solution that first called for writing a GCD(n, d) function, where n = numerator and d = denominator, though it works for any two numbers.
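A minimal sketch of that brute-force GCD (the post doesn’t show the original snippet, so the naming here is mine): loop over every candidate divisor up to the smaller of the two numbers and keep the largest one that divides both.

```javascript
// Brute-force GCD: test every candidate divisor from 2 up to the
// smaller input and remember the largest one that divides both.
function bruteForceGCD(n, d) {
  let gcd = 1;
  const limit = Math.min(n, d);
  for (let i = 2; i <= limit; i++) {
    if (n % i === 0 && d % i === 0) gcd = i;
  }
  return gcd;
}
```

For example, bruteForceGCD(378, 868) checks all 378 candidates before returning 14.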

From there, I incorporated it into a brute force numReducedFractions(d) function where d is the denominator:
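A sketch of that count (again, my naming rather than the author’s exact code): by the kata’s first hint, n/d is reduced exactly when GCD(n, d) is 1, so count the numerators from 1 to d that satisfy that.

```javascript
// Brute-force GCD helper, as before.
function bruteForceGCD(n, d) {
  let gcd = 1;
  for (let i = 2; i <= Math.min(n, d); i++) {
    if (n % i === 0 && d % i === 0) gcd = i;
  }
  return gcd;
}

// Brute-force count of reduced fractions with denominator d:
// n/d is reduced exactly when GCD(n, d) === 1.
function numReducedFractions(d) {
  let count = 0;
  for (let n = 1; n <= d; n++) {
    if (bruteForceGCD(n, d) === 1) count++;
  }
  return count;
}
```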

This brute force method got the job done for small numbers but timed out for large numbers. From here I went through a string of optimizations.

Optimization 1: isPrime?

I wrote a helper function to see if a number was prime. Before doing anything else I checked to see if the passed-in denominator was prime, and if it was, the function returned one less than the denominator. I did this after I observed that for prime denominators (for example 3, 5, 7) the number of reduced fractions was one less than the number itself, since the only reducible fraction would be n/n. If a number is prime, its GCD with every smaller positive number is 1.
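That shortcut might look something like this sketch (naming is mine; the brute-force count is inlined to keep the block self-contained):

```javascript
// First-pass isPrime: trial-divide by every number below num.
function isPrime(num) {
  if (num < 2) return false;
  for (let i = 2; i < num; i++) {
    if (num % i === 0) return false;
  }
  return true;
}

// Prime shortcut in front of the brute-force count: for a prime d,
// every numerator from 1 to d - 1 is coprime to d, so return d - 1.
function numReducedFractions(d) {
  if (isPrime(d)) return d - 1;
  let count = 0;
  for (let n = 1; n <= d; n++) {
    let gcd = 1;
    for (let i = 2; i <= n; i++) {
      if (n % i === 0 && d % i === 0) gcd = i;
    }
    if (gcd === 1) count++;
  }
  return count;
}
```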

Optimization 2: a better isPrime?

My isPrime method looped through all numbers from 2 up to the number itself looking for factors. I only needed to loop up to and including the square root of the number. You can’t have two factors that are both larger than the square root, because their product would be larger than the number.
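Here’s a sketch of the tightened-up check; using `i * i <= num` as the loop condition stops at the square root without actually computing one.

```javascript
// Better isPrime: only trial-divide up to (and including) √num.
// If num = a·b with a ≤ b, then a ≤ √num, so every factor pair
// has a member at or below the square root.
function isPrime(num) {
  if (num < 2) return false;
  for (let i = 2; i * i <= num; i++) {
    if (num % i === 0) return false;
  }
  return true;
}
```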

Obviously the whole isPrime-check cut my runtime for prime inputs way down regardless of how large the input was, but it did nothing for composite numbers. Though this was the first optimization, I knew I would eventually come back to it, since primality seems to be central to reducibility.

Optimization 3: a better GCD?

Because the isPrime-check only cut down my runtime in very specific cases, it wasn’t good enough. So, I became interested in optimizations for GCD and did some research. I found two GCD optimizations of particular interest. One used recursion and bitwise operators and can be read about here. I won’t describe it in detail since I didn’t end up using it in my solution or really in my thinking. I did include it in my time tests though, which can be found on a repl here.

The second optimization was also recursive (and, I later found out, is called Euclid’s algorithm) but did not use bitwise operators. Based on my time tests, it was the best option. Sometimes, for very large numbers (9 digits and above), the bitwise recursive solution exceeded the maximum call stack size. Here’s the recursiveGCD function:

  • For base case: denominator = 0, return the numerator
  • For the recursive case: the denominator != 0, return recursiveGCD(denominator, the remainder of n / d)
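The two cases above translate directly into JavaScript (a sketch based on the pseudocode, with my parameter names):

```javascript
// Euclid's algorithm: when the second argument hits 0, the first
// argument is the GCD; otherwise recurse on (d, n % d).
function recursiveGCD(n, d) {
  if (d === 0) return n;
  return recursiveGCD(d, n % d);
}
```

For example, recursiveGCD(378, 868) returns 14.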

As is often the case, the recursive solution is mind-blowingly simple and elegant, but it might take a walkthrough to really understand. Let’s step through recursiveGCD(378, 868):

  1. 868 != 0, run recursiveGCD(868, 378)
  2. 378 != 0, run recursiveGCD(378, 112)
  3. 112 != 0, run recursiveGCD(112, 42)
  4. 42 != 0, run recursiveGCD(42, 28)
  5. 28 != 0, run recursiveGCD(28, 14)
  6. 14 != 0, run recursiveGCD(14, 0)
  7. 0 == 0, return 14

So this walkthrough actually demonstrates how quickly the function works (only seven function calls, as opposed to a for-loop iterating up to 378 times), but it still might not make sense. Why does this work?

The base case makes sense if you think about it in the context of the recursive case: if the second number is divisible by the first number (i.e. the remainder is 0), then the first number is the GCD. Think of GCD(2, 20): 20 is divisible by 2, and while there are larger numbers that 20 is divisible by (4, 5, and 10), 2 is the GCD, since the largest number any number is evenly divisible by is itself.

So the base case should never be hit on the very first call (we don’t start by asking for the GCD of a number and 0), but as soon as we find a pair of numbers where the larger is divisible by the smaller, the smaller is the GCD.

But it’s not really as simple as that since we aren’t just checking any two random numbers — if the larger number is not divisible by the smaller number, on the next function call we make the smaller number the ‘new’ large number, and the remainder of that division the new small number.

So why does running the GCD function on the smaller number and the remainder of the division work? Let’s flesh out what we are actually doing with our previous example of recursiveGCD(378, 868):

  • Divide 378 by 868, and get the result 0 with remainder 378 → whenever you run the algorithm with the second number larger than the first, it begins by flipping the numbers
  • Divide 868 by 378, and get the result 2 with remainder 112, so 868=2·378+112.
  • Divide 378 by 112, and get the result 3 with remainder 42, so 378=3·112+42.
  • Divide 112 by 42, and get the result 2 with remainder 28, so 112=2·42+28.
  • Divide 42 by 28, and get the result 1 with remainder 14, so 42=1·28+14.
  • Divide 28 by 14, and get the result 2 with remainder 0, so 28=2·14+0.
  • The greatest common divisor of 378 and 868 is 14.

Using some substitutions, we can work back up the chain and write each number in it as a whole multiple of 14:

28 = 2·14

42 = 1·28 + 14 = 2·14 + 14 = 3·14

112 = 2·42 + 28 = 6·14 + 2·14 = 8·14

378 = 3·112 + 42 = 24·14 + 3·14 = 27·14

868 = 2·378 + 112 = 54·14 + 8·14 = 62·14

If you notice, both 868 and 378 are broken down into how many times 14 fits into them. We don’t need to break 14 down any further because we know that it divides both 868 and 378 evenly: that is exactly what the final remainder of 0 tells us.

In effect, we ‘start big’ and keep getting smaller and smaller until we find the GCD, but we do this in a very smart way (waiting for when we no longer have any remainders). In the iterative brute force method, we mindlessly loop through small numbers to larger ones to see if both numerator and denominator are divisible by the given number and return the largest ‘successful’ divisor.

This blog post has already become much longer than I anticipated, and we haven’t even gotten to the optimization for the numReducedFractions function! To be truthful, it leaves the GCD function behind. Still, I hope you enjoyed learning about Euclid’s algorithm as much as I did. Up next: optimizing our numReducedFractions algorithm!