The Seen and The Unseen, Part 3: Exponential Function

Shweta Barge
Published in AlgoAsylum · Jul 21, 2020

This is part 3 of The Seen and The Unseen series. Part 2 covered how the seen and the unseen are affected by different implementation choices, using the geometric series as an example. In part 3, I use the exponential function.

Exponential function

In the exponential series, every term contains a factorial. Python provides the math.factorial(x) function, which calculates the factorial of x, and this adds one more case to the optimization of the function.
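For example, math.factorial computes the product directly:

```python
import math

print(math.factorial(5))  # 5! = 5 * 4 * 3 * 2 * 1 = 120
```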
The following are the optimizations that I will be working on for this function.

In the end, I will put all the optimizations together and find the speedup with respect to naïve implementation.

Just like part 2 — Geometric Series, I need to find the terminating condition for the exponential function. To get an idea of how the exponential function works, I will plot its series for different values of x.

In the exponential function, the terms go as 1 + x + x²/2! + x³/3! + … We saw in the geometric series that for |𝑥| < 1 the terms shrink as the power increases. Here, for 𝑥 > 1 the numerator grows with the power, but the division by the factorial eventually dominates and drives the terms to zero. That is why in the first plot, for 𝑥 = 0.1, the terms simply decay, whereas in the other three plots they rise to a peak and then decrease: past the peak, the denominator grows faster than the numerator. Also note that as 𝑥 increases, the series converges more slowly.
After a certain number of iterations the terms become so small that they no longer change the sum. Therefore I will terminate the loop once the value of the sum stops changing.
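The termination rule above can be sketched as follows. The original code embed is not shown here, so this body is my reconstruction; exp_naive is the name the article uses later:

```python
import math

def exp_naive(x, max_iter=1000):
    """Sum the Taylor series of e**x term by term, stopping once
    adding a term no longer changes the running sum."""
    total = 0.0
    for n in range(max_iter):
        term = x ** n / math.factorial(n)  # each term computed from scratch
        if total + term == total:          # sum stopped changing: terminate
            break
        total += term
    return total
```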

After writing the naïve implementation, I will test it using the generic test function theSeenAndTheUnseenTest(), and also check the runtime.

The runtime is a function of x: as x increases, the runtime increases. The plot_term() graph also showed that as the value of 𝑥 increases the series converges more slowly, which implies that more iterations are needed for higher values of 𝑥.

Case 1

As seen in Part 2 — Geometric Series, the number of iterations was reduced by terminating once the error dropped below 0.01%. I will use the same approach for the exponential function.
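One way to sketch this termination rule (my reconstruction; the 0.01% threshold comes from the text, the loop structure is an assumption) is to stop once the current term contributes less than 0.01% of the running sum:

```python
import math

def exp_error(x, tolerance=1e-4):
    """Stop summing once a term falls below 0.01% (1e-4) of the
    running sum, instead of waiting for the sum to stop changing."""
    total = 0.0
    n = 0
    while True:
        term = x ** n / math.factorial(n)
        total += term
        if term < tolerance * total:  # relative contribution below 0.01%
            break
        n += 1
    return total
```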

The iteration optimization passes all test cases. I need to check the speedup, and also see by how much the number of iterations has been reduced.

In the exponential function, the series converges in far fewer iterations than the geometric series, because of the division by the factorial.

Case 2

Just like in Part 1’s calculation optimization case, instead of computing the power from scratch at every iteration, I will multiply the previous power by x. After implementing the function I will test it for accuracy.
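A minimal sketch of this case (my reconstruction of exp_power, keeping the Case 1 termination): the power is carried forward with one multiplication, but math.factorial is still called every iteration, which matches the article's observation below that this case alone gives no speedup:

```python
import math

def exp_power(x, tolerance=1e-4):
    """Carry the power forward: multiply the previous numerator by x
    instead of recomputing x**n at every iteration."""
    total = 0.0
    power = 1.0  # x**0
    n = 0
    while True:
        term = power / math.factorial(n)  # factorial still computed fresh
        total += term
        if term < tolerance * total:
            break
        power *= x  # x**(n+1) from x**n
        n += 1
    return total
```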

Using plot_time() I will check the speedup of exp_power().

The power optimization alone gives no speedup, probably because each iteration still computes the factorial and divides the numerator by it. The table below compares the performance of the geometric series from Part 2 with the exponential function from this article, so that it is easier to compare and understand.

I can find the factorial in the same way as I am finding the power and see if it has any effect on runtime.

Case 3

The factorial of any given number 𝑛 is given as
𝑛! = 𝑛 ⋅ (𝑛−1) ⋅ (𝑛−2) ⋯ 2 ⋅ 1

Just as I carried the power forward from one iteration to the next, I can use the same logic to build the factorial incrementally.

Iteration 1: i = 1; fact = 1

Iteration 2: i = 2; fact = 1⋅𝑖

Iteration 3: i = 3; fact = 1⋅2⋅𝑖

……

I call theSeenAndTheUnseenTest() from utils.py to test correctness, and plot_time() to check the speedup of the factorial optimization.

The factorial optimization also gives no speedup on its own. Instead of keeping the power and factorial optimizations in two separate cases, I can put them together and check for a speedup. This will be my case 4.

Case 4

The calculation for every term will look like this. I’ll call this function exp_power_factorial() and check it for correctness.

This function also gives an error below 0.01%; therefore, I can check the speedup with respect to exp_naive().

Identifying where I could restructure the calculation gave me a speedup of 4x.

Case 5

Case 5 puts all the optimizations together into one function, exp_allots(). Using utils.plot_time() I will find its speedup.

Aha! Putting all the optimizations together gave me a speedup of 10x while maintaining an accuracy of 99.99%. Visualizing the results in graphs helped me understand the impact that each change in implementation had on the performance of the function. Working on the power and factorial optimization cases individually suggested that the two could be combined to remove some calculation overhead and gain a speedup.

The table below summarizes different optimizations, runtime, and speedup from Part 2 and 3.

Part 4 of The Seen and The Unseen is explained using the example of trigonometric series: sine and cosine.
