SICP 1.2: “Procedures and the Processes They Generate”

(Structure and Interpretation of Computer Programs) 1.2

My 1.2 exercise solutions are also on GitHub here: https://github.com/bolducp/SICP/tree/master/exercises/chapter_01/1.2_exercises


Points of Note

Concise explanation of processes and procedures, and the focus of the section:

A procedure is a pattern for the local evolution of a computational process. It specifies how each stage of the process is built upon the previous stage. We would like to be able to make statements about the overall, or global, behavior of a process whose local evolution has been specified by a procedure. This is very difficult to do in general, but we can at least try to describe some typical patterns of process evolution.
In this section we will examine some common “shapes” for processes generated by simple procedures. We will also investigate the rates at which these processes consume the important computational resources of time and space. (pg. 31–32)

Linear Recursion and Iteration

A consideration of the “shapes” of the two processes (using Factorial as an example), and how “they evolve quite differently” (pg. 34)

Recursion — “The substitution model reveals a shape of expansion followed by contraction […] The expansion occurs as the process builds up a chain of deferred operations (in this case, a chain of multiplications). The contraction occurs as the operations are actually performed. This type of process, characterized by a chain of deferred operations, is called a recursive process. Carrying out this process requires that the interpreter keep track of the operations to be performed later on.” (pg. 34)

Linear Recursion — “In the computation of n!, the length of the chain of deferred multiplications, and hence the amount of information needed to keep track of it, grows linearly with n (is proportional to n), just like the number of steps. Such a process is called a linear recursive process.” (pg. 34)

Iteration — “By contrast, the second process does not grow and shrink.[…]We call this an iterative process. In general, an iterative process is one whose state can be summarized by a fixed number of state variables, together with a fixed rule that describes how the state variables should be updated as the process moves from state to state and an (optional) end test that specifies conditions under which the process should terminate.” (pg. 34)
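
For a concrete contrast, here are the two factorial procedures from section 1.2.1 of the book (lightly paraphrased); the first generates a linear recursive process, the second a linear iterative one:

(define (factorial n)        ; recursive process: builds a chain of deferred multiplications
  (if (= n 1)
      1
      (* n (factorial (- n 1)))))

(define (factorial n)        ; iterative process: all state lives in fact-iter's arguments
  (fact-iter 1 1 n))
(define (fact-iter product counter max-count)
  (if (> counter max-count)
      product
      (fact-iter (* counter product)
                 (+ counter 1)
                 max-count)))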

Tree Recursion — “In general, the number of steps required by a tree-recursive process will be proportional to the number of nodes in the tree, while the space required will be proportional to the maximum depth of the tree […] One should not conclude from this that tree-recursive processes are useless. When we consider processes that operate on hierarchically structured data rather than numbers, we will find that tree recursion is a natural and powerful tool.” (pg. 39)
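
The book's canonical example of a tree-recursive process is the naive Fibonacci procedure from section 1.2.2; each call fans out into two more, so the number of steps grows exponentially with n while the space (the maximum depth of the tree) grows only linearly:

(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))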

Orders of Growth (Θ notation) — “the notion of order of growth to obtain a gross measure of the resources required by a process as the inputs become larger” (pg. 42)

“In computers that do only a fixed number of operations at a time, the time required will be proportional to the number of elementary machine operations performed. […] Orders of growth provide only a crude description of the behavior of a process […] On the other hand, order of growth provides a useful indication of how we may expect the behavior of the process to change as we change the size of the problem.” (pg. 43)
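
For reference, the book's definition (1.2.3): R(n) has order of growth Θ(f(n)) if there are positive constants k1 and k2, independent of n, such that k1*f(n) <= R(n) <= k2*f(n) for sufficiently large n. So, for example, the linear recursive factorial above requires Θ(n) steps and Θ(n) space, while the iterative version requires Θ(n) steps but only Θ(1) space.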

1.2 Section Exercises

My solutions are provided in gray text blocks below each exercise question.

Exercise 1.9

Each of the following two procedures defines a method for adding two positive integers in terms of the procedures inc, which increments its argument by 1, and dec, which decrements its argument by 1.
(define (+ a b)
  (if (= a 0)
      b
      (inc (+ (dec a) b))))

(define (+ a b)
  (if (= a 0)
      b
      (+ (dec a) (inc b))))
Using the substitution model, illustrate the process generated by each procedure in evaluating (+ 4 5). Are these processes iterative or recursive?
; Procedure #1
(define (+ a b)
  (if (= a 0)
      b
      (inc (+ (dec a) b))))

(+ 4 5)
= (inc (+ 3 5))
= (inc (inc (+ 2 5)))
= (inc (inc (inc (+ 1 5))))
= (inc (inc (inc (inc (+ 0 5)))))
= (inc (inc (inc (inc 5))))
= (inc (inc (inc 6)))
= (inc (inc 7))
= (inc 8)
= 9
; Procedure #2
(define (+ a b)
  (if (= a 0)
      b
      (+ (dec a) (inc b))))

(+ 4 5)
= (+ 3 6)
= (+ 2 7)
= (+ 1 8)
= (+ 0 9)
= 9
;; The first procedure generates a recursive process: the interpreter must keep track of a growing chain of deferred inc operations and pass each result back up to the call that is waiting on it. The second generates an iterative process: all of the state lives in the two arguments, so once the base case (= a 0) is reached the answer is returned directly, with nothing passed back up through the earlier calls.

Exercise 1.10.

The following procedure computes a mathematical function called Ackermann’s function.
(define (A x y)
  (cond ((= y 0) 0)
        ((= x 0) (* 2 y))
        ((= y 1) 2)
        (else (A (- x 1)
                 (A x (- y 1))))))
What are the values of the following expressions?
(A 1 10)
(A 2 4)
(A 3 3)
Consider the following procedures, where A is the procedure defined above:
(define (f n) (A 0 n))
(define (g n) (A 1 n))
(define (h n) (A 2 n))
(define (k n) (* 5 n n))
Give concise mathematical definitions for the functions computed by the procedures f, g, and h for positive integer values of n. For example, (k n) computes 5n^2.
(A 1 10)  ; 1024
(A 2 4)   ; 65536
(A 3 3)   ; 65536

; (f n) = (A 0 n): the ((= x 0) (* 2 y)) clause applies immediately,
; so (f n) computes 2n.

; (g n) = (A 1 n): for n > 1 the else clause gives
; (A 1 n) = (A 0 (A 1 (- n 1))) = 2 * (A 1 (- n 1)),
; and unwinding n - 1 times down to (A 1 1) = 2 yields
; (g n): 2^n

; (h n) = (A 2 n): similarly (A 2 n) = (A 1 (A 2 (- n 1))) = 2^(A 2 (- n 1)),
; so (h n) computes 2^2^...^2 with n twos (a tower of exponents).

; (k n): 5n^2
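
A few quick checks of these formulas against the procedures:

(f 5)  ; 10 = 2*5
(g 5)  ; 32 = 2^5
(h 4)  ; 65536 = 2^2^2^2, matching (A 2 4) above
(k 4)  ; 80 = 5*4^2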

Exercise 1.11.

A function f is defined by the rule that f(n) = n if n < 3 and f(n) = f(n−1) + 2f(n−2) + 3f(n−3) if n >= 3. Write a procedure that computes f by means of a recursive process.
Write a procedure that computes f by means of an iterative process.
(define (f-recursive n)
  (if (< n 3)
      n
      (+ (f-recursive (- n 1))
         (* 2 (f-recursive (- n 2)))
         (* 3 (f-recursive (- n 3))))))
(f-recursive 4)
; = (+ (f-recursive 3)         ; f(3) = f(2) + 2f(1) + 3f(0) = 2 + 2 + 0 = 4
;      (* 2 (f-recursive 2))   ; 2 * 2 = 4
;      (* 3 (f-recursive 1)))  ; 3 * 1 = 3
; = 4 + 4 + 3 = 11
(define (f-iterative n)
  (define (f-iter count n1 n2 n3)
    (if (< count 4)
        (+ (* n1 (- count 1))
           (* n2 (- count 2))
           (* n3 (- count 3)))
        (f-iter (- count 1) (+ n2 n1) (+ n3 (* 2 n1)) (* 3 n1))))
  (if (< n 3)   ; guard: the coefficient trick below assumes n >= 3
      n
      (f-iter n 1 2 3)))
(define (f-iterative2 n)
  (define (f-iter count n1 n2 n3)
    (if (= count 0)
        n1
        (f-iter (- count 1)
                n2
                n3
                (+ n3 (* 2 n2) (* 3 n1)))))
  (f-iter n 0 1 2))   ; n1, n2, n3 start as f(0), f(1), f(2)
;(f-iterative 30)
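
A few spot checks that all three versions agree, using f(5) = f(4) + 2*f(3) + 3*f(2) = 11 + 8 + 6 = 25:

(f-recursive 5)   ; 25
(f-iterative 5)   ; 25
(f-iterative2 5)  ; 25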

Exercise 1.12.

The following pattern of numbers is called Pascal's triangle.

    1
   1 1
  1 2 1
 1 3 3 1
1 4 6 4 1
   ...

The numbers at the edge of the triangle are all 1, and each number inside the triangle is the sum of the two numbers above it.
Write a procedure that computes elements of Pascal's triangle by means of a recursive process. If a number lies outside of the triangle, return 0 (this makes sense if we view pascal as the combination function C(row, col)). Start counting rows and columns from 0.
(define (pascal row col)
  (cond ((or (< col 0) (> col row)) 0)   ; outside the triangle
        ((or (= col 0) (= col row)) 1)   ; edges
        (else (+ (pascal (- row 1) (- col 1))
                 (pascal (- row 1) col)))))

;(pascal 0 0) = 1
;(pascal 1 1) = 1
;(pascal 2 1) = 2
;(pascal 3 1) = 3
;(pascal 4 1) = 4
;(pascal 4 2) = 6

Exercise 1.14.

Draw the tree illustrating the process generated by the count-change procedure of section 1.2.2 in making change for 11 cents. What are the orders of growth of the space and number of steps used by this process as the amount to be changed increases?
My 1.14 exercise solution
It can easily be seen that there's a lot of repetition here -- for example, (cc 6 1) and all of its recursive steps are computed at least twice.
Bill the Lizard shows (http://www.billthelizard.com/2009/12/sicp-exercise-114-counting-change.html) that the order of growth in steps is Θ(n^k), where k is the number of kinds of coins. This makes sense if you look at my diagram too -- for each kinds-of-coins value (i.e. 5, 4, 3, 2, 1) there is an entire sub-tree of recursive calls, within which many of the same calculations are repeated. The space, as with any tree-recursive process, is proportional to the maximum depth of the tree, which here grows as Θ(n).
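
For reference, here is the count-change procedure from section 1.2.2 whose process the diagram traces:

(define (count-change amount) (cc amount 5))
(define (cc amount kinds-of-coins)
  (cond ((= amount 0) 1)
        ((or (< amount 0) (= kinds-of-coins 0)) 0)
        (else (+ (cc amount
                     (- kinds-of-coins 1))
                 (cc (- amount
                        (first-denomination kinds-of-coins))
                     kinds-of-coins)))))
(define (first-denomination kinds-of-coins)
  (cond ((= kinds-of-coins 1) 1)
        ((= kinds-of-coins 2) 5)
        ((= kinds-of-coins 3) 10)
        ((= kinds-of-coins 4) 25)
        ((= kinds-of-coins 5) 50)))

(count-change 11) ; 4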

Exercise 1.15.

The sine of an angle (specified in radians) can be computed by making use of the approximation sin(x)=x if x is sufficiently small, and the trigonometric identity
sin(x) = 3*sin(x/3) - 4*sin^3(x/3)
to reduce the size of the argument of sin. (For purposes of this exercise an angle is considered “sufficiently small” if its magnitude is not greater than 0.1 radians.) These ideas are incorporated in the following procedures:
How many times is the procedure p applied when (sine 12.15) is evaluated?
What is the order of growth in space and number of steps (as a function of a) used by the process generated by the sine procedure when (sine a) is evaluated?
(define (cube x) (* x x x))
(define (p x) (- (* 3 x) (* 4 (cube x))))
(define (sine angle)
  (if (not (> (abs angle) 0.1))
      angle
      (p (sine (/ angle 3.0)))))
(sine 12.15)
= (p (sine (/ 12.15 3.0)))                 ; p application #1
= (p (sine 4.05))
= (p (p (sine (/ 4.05 3.0))))              ; #2
= (p (p (sine 1.35)))
= (p (p (p (sine (/ 1.35 3.0)))))          ; #3
= (p (p (p (sine 0.45))))
= (p (p (p (p (sine (/ 0.45 3.0))))))      ; #4
= (p (p (p (p (sine 0.15)))))
= (p (p (p (p (p (sine (/ 0.15 3.0)))))))  ; #5
= (p (p (p (p (p (sine 0.05))))))
= (p (p (p (p (p 0.05)))))                 ; 0.05 <= 0.1, so sine returns its argument
How many times is the procedure p applied when (sine 12.15) is evaluated?
5 times
What is the order of growth in space and number of steps (as a function of a) used by the process generated by the sine procedure when (sine a) is evaluated?
Θ(log a) for both space and steps: each recursive call divides the angle by 3, so p is applied roughly log₃(a/0.1) times, and the chain of deferred p applications is equally deep.

Exercise 1.16.

Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the observation that (b^(n/2))^2 = (b^2)^(n/2), keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product a * b^n is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
(define (even? n)
  (= (remainder n 2) 0))
(define (square n) (* n n))

(define (fast-expt-iterative b n)
  (define (expt-iter counter b n)   ; counter plays the role of a in the hint
    (cond ((= n 0) counter)
          ((even? n) (expt-iter counter (square b) (/ n 2)))
          (else (expt-iter (* counter b) b (- n 1)))))
  (expt-iter 1 b n))
;(fast-expt-iterative 2 4)
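
Tracing (fast-expt-iterative 2 10) shows the invariant at work: the quantity counter * b^n stays fixed at the final answer through every state transition.

; counter  b     n    counter * b^n
; 1        2     10   1024
; 1        4     5    1024   (n even: square b, halve n)
; 4        4     4    1024   (n odd: fold one b into counter)
; 4        16    2    1024
; 4        256   1    1024
; 1024     256   0    1024   -> return counter = 1024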

Exercise 1.17.

The exponentiation algorithms in this section are based on performing exponentiation by means of repeated multiplication. In a similar way, one can perform integer multiplication by means of repeated addition. The following multiplication procedure (in which it is assumed that our language can only add, not multiply) is analogous to the expt procedure:
(define (* a b)
  (if (= b 0)
      0
      (+ a (* a (- b 1)))))
This algorithm takes a number of steps that is linear in b. Now suppose we include, together with addition, operations double, which doubles an integer, and halve, which divides an (even) integer by 2. Using these, design a multiplication procedure analogous to fast-expt that uses a logarithmic number of steps.
(define (even? n)
  (= (remainder n 2) 0))
(define (double x) (+ x x))
(define (halve x) (/ x 2))

(define (fast-mult a b)
  (cond ((= b 0) 0)
        ((even? b) (fast-mult (double a) (halve b)))
        (else (+ a (fast-mult a (- b 1))))))

(fast-mult 7 3) ; 21

Exercise 1.18.

Using the results of exercises 1.16 and 1.17, devise a procedure that generates an iterative process for multiplying two integers in terms of adding, doubling, and halving and uses a logarithmic number of steps.
(define (even? n)
  (= (remainder n 2) 0))
(define (double x) (+ x x))
(define (halve x) (/ x 2))

(define (fast-iterative-mult a b)
  (define (mult-iter acc a b)
    (cond ((= b 0) acc)
          ((even? b) (mult-iter acc (double a) (halve b)))
          (else (mult-iter (+ acc a) a (- b 1)))))
  (mult-iter 0 a b))

(fast-iterative-mult 2 6) ; 12
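
The invariant here is acc + a*b, which stays equal to the final product from state to state. Tracing (fast-iterative-mult 2 6):

; acc  a  b    acc + a*b
; 0    2  6    12
; 0    4  3    12   (b even: double a, halve b)
; 4    4  2    12   (b odd: add a into acc)
; 4    8  1    12
; 12   8  0    12   -> return acc = 12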

Exercise 1.19:

There is a clever algorithm for computing the Fibonacci numbers in a logarithmic number of steps. Recall the transformation of the state variables a and b in the fib-iter process of 1.2.2: a ← a + b and b ← a. Call this transformation T, and observe that applying T over and over again n times, starting with 1 and 0, produces the pair Fib(n+1) and Fib(n). In other words, the Fibonacci numbers are produced by applying T^n, the nth power of the transformation T, starting with the pair (1, 0). Now consider T to be the special case of p = 0 and q = 1 in a family of transformations Tpq, where Tpq transforms the pair (a, b) according to a ← bq + aq + ap and b ← bp + aq. Show that if we apply such a transformation Tpq twice, the effect is the same as using a single transformation Tp'q' of the same form, and compute p' and q' in terms of p and q.
This gives us an explicit way to square these transformations, and thus we can compute T^n using successive squaring, as in the fast-expt procedure. Put this all together to complete the following procedure, which runs in a logarithmic number of steps:
1.) a ← bq + aq + ap = a(p + q) + bq
    b ← bp + aq

2.) Applying Tpq twice, with a' = a(p + q) + bq and b' = bp + aq:
    a ← b'q + a'q + a'p
      = (bp + aq)q + (a(p + q) + bq)(q + p)
      = a(p^2 + 2pq + 2q^2) + b(2pq + q^2)
    b ← b'p + a'q
      = (bp + aq)p + (a(p + q) + bq)q
      = a(2pq + q^2) + b(p^2 + q^2)

Matching against the Tp'q' form:
p' = p^2 + q^2
q' = 2pq + q^2

(define (square n) (* n n))
(define (even? n) (= (remainder n 2) 0))

(define (fib n)
  (fib-iter 1 0 0 1 n))
(define (fib-iter a b p q count)
  (cond ((= count 0)
         b)
        ((even? count)
         (fib-iter a
                   b
                   (+ (square p) (square q))
                   (+ (* 2 p q) (square q))
                   (/ count 2)))
        (else
         (fib-iter (+ (* b q) (* a q) (* a p))
                   (+ (* b p) (* a q))
                   p
                   q
                   (- count 1)))))
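
Spot-checking against known Fibonacci values:

(fib 10) ; 55
(fib 20) ; 6765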

Exercise 1.20

The process that a procedure generates is of course dependent on the rules used by the interpreter. As an example, consider the iterative gcd procedure given above. Suppose we were to interpret this procedure using normal-order evaluation, as discussed in section 1.1.5. (The normal-order-evaluation rule for if is described in exercise 1.5.) Using the substitution method (for normal order), illustrate the process generated in evaluating (gcd 206 40) and indicate the remainder operations that are actually performed. How many remainder operations are actually performed in the normal-order evaluation of (gcd 206 40)? In the applicative-order evaluation?
Normal Order:
(gcd 206 40)
= (if (= 40 0)
      206
      (gcd 40 (remainder 206 40)))
= (gcd 40 (remainder 206 40))
= (if (= (remainder 206 40) 0)                    ; (= 6 0): 1 remainder evaluated
      40
      (gcd (remainder 206 40)
           (remainder 40 (remainder 206 40))))
= (if (= (remainder 40 (remainder 206 40)) 0)     ; (= 4 0): 2 more
      (remainder 206 40)
      (gcd (remainder 40 (remainder 206 40))
           (remainder (remainder 206 40)
                      (remainder 40 (remainder 206 40)))))
= (if (= (remainder (remainder 206 40)
                    (remainder 40 (remainder 206 40))) 0)   ; (= 2 0): 4 more
      (remainder 40 (remainder 206 40))
      (gcd (remainder (remainder 206 40) (remainder 40 (remainder 206 40)))
           (remainder (remainder 40 (remainder 206 40))
                      (remainder (remainder 206 40)
                                 (remainder 40 (remainder 206 40))))))
= (if (= (remainder (remainder 40 (remainder 206 40))
                    (remainder (remainder 206 40)
                               (remainder 40 (remainder 206 40)))) 0)   ; (= 0 0): 7 more
      (remainder (remainder 206 40) (remainder 40 (remainder 206 40)))  ; evaluated: 4 more
      ...)
= 2

Applicative Order:
(gcd 206 40)
= (gcd 40 (remainder 206 40))
= (gcd 40 6)
= (gcd 6 (remainder 40 6))
= (gcd 6 4)
= (gcd 4 (remainder 6 4))
= (gcd 4 2)
= (gcd 2 (remainder 4 2))
= (gcd 2 0)
= 2
Under normal order, remainder is evaluated 18 times: 1 + 2 + 4 + 7 = 14 times inside the if predicates, plus 4 more when the final consequent is evaluated. Under applicative order, remainder is evaluated only 4 times.

Exercise 1.21

Use the smallest-divisor procedure to find the smallest divisor of each of the following numbers: 199, 1999, 19999.
(define (square x) (* x x))
(define (smallest-divisor n)
  (find-divisor n 2))
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))
(define (divides? a b)
  (= (remainder b a) 0))
(smallest-divisor 199)
; 199
(smallest-divisor 1999)
; 1999
(smallest-divisor 19999)
;7

Exercise 1.22

Most Lisp implementations include a primitive called runtime that returns an integer that specifies the amount of time the system has been running (measured, for example, in microseconds). The following timed-prime-test procedure, when called with an integer n, prints n and checks to see if n is prime. If n is prime, the procedure prints three asterisks followed by the amount of time used in performing the test.
Using this procedure, write a procedure search-for-primes that checks the primality of consecutive odd integers in a specified range. Use your procedure to find the three smallest primes larger than 1000; larger than 10,000; larger than 100,000; larger than 1,000,000. Note the time needed to test each prime. Since the testing algorithm has order of growth of Θ(√n), you should expect that testing for primes around 10,000 should take about √10 times as long as testing for primes around 1000. Do your timing data bear this out? How well do the data for 100,000 and 1,000,000 support the √n prediction? Is your result compatible with the notion that programs on your machine run in time proportional to the number of steps required for the computation?
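These solutions assume the smallest-divisor, prime?, and timing procedures from section 1.2.6 of the book; I use a slightly modified report-prime that also prints n, so that only the primes get printed:

(define (square x) (* x x))
(define (smallest-divisor n)
  (find-divisor n 2))
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))
(define (divides? a b)
  (= (remainder b a) 0))
(define (prime? n)
  (= n (smallest-divisor n)))
(define (timed-prime-test n)
  (start-prime-test n (runtime)))
(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime n (- (runtime) start-time))))
(define (report-prime n elapsed-time)
  (newline)
  (display n)
  (display " *** ")
  (display elapsed-time))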
(define (search-for-primes start end)
  (cond ((<= start end)
         (timed-prime-test start)
         (search-for-primes (+ start 1) end))
        (else (display "done"))))

(define (search-for-primes-num-primes minimum num-primes)
  (cond ((= num-primes 0) (display "done"))
        ((prime? minimum)
         (timed-prime-test minimum)
         (search-for-primes-num-primes (+ minimum 2) (- num-primes 1)))
        ((even? minimum) (search-for-primes-num-primes (+ minimum 1) num-primes))
        (else (search-for-primes-num-primes (+ minimum 2) num-primes))))

(search-for-primes-num-primes 1000 3)
(newline)
(search-for-primes-num-primes 10000 3)
(newline)
(search-for-primes-num-primes 100000 3)
(newline)
(search-for-primes-num-primes 1000000 3)
; I wrote two versions of this program: one taking specific start and end values as parameters, and one taking a starting value and the number of primes to find. I also limited the printing to only the primes in the second case.
; I didn't see much difference in time with the suggested runs of 1,000, 10,000, 100,000, and 1,000,000, but I was able to see a marginal difference with even higher starting values, as shown in the screenshot below.

Exercise 1.23

The smallest-divisor procedure shown at the start of this section does lots of needless testing: After it checks to see if the number is divisible by 2 there is no point in checking to see if it is divisible by any larger even numbers. This suggests that the values used for test-divisor should not be 2, 3, 4, 5, 6, … but rather 2, 3, 5, 7, 9, … To implement this change, define a procedure next that returns 3 if its input is equal to 2 and otherwise returns its input plus 2. Modify the smallest-divisor procedure to use (next test-divisor) instead of (+ test-divisor 1). With timed-prime-test incorporating this modified version of smallest-divisor, run the test for each of the 12 primes found in exercise 1.22. Since this modification halves the number of test steps, you should expect it to run about twice as fast. Is this expectation confirmed? If not, what is the observed ratio of the speeds of the two algorithms, and how do you explain the fact that it is different from 2?
(define (square x) (* x x))
(define (smallest-divisor n)
  (find-divisor n 2))
(define (divides? a b)
  (= (remainder b a) 0))
(define (prime? n)
  (= n (smallest-divisor n)))
(define (timed-prime-test n)
  (start-prime-test n (runtime)))
(define (start-prime-test n start-time)
  (if (prime? n)
      (report-prime n (- (runtime) start-time))))
(define (report-prime n elapsed-time)
  (newline)
  (display n)
  (display " *** ")
  (display elapsed-time))
(define (search-for-primes-num-primes minimum num-primes)
  (cond ((= num-primes 0) (display "done"))
        ((prime? minimum)
         (timed-prime-test minimum)
         (search-for-primes-num-primes (+ minimum 2) (- num-primes 1)))
        ((even? minimum) (search-for-primes-num-primes (+ minimum 1) num-primes))
        (else (search-for-primes-num-primes (+ minimum 2) num-primes))))
;; new/adjusted code below
(define (next num)
  (if (= 2 num) 3 (+ num 2)))
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (next test-divisor)))))

(newline)
(search-for-primes-num-primes 10000000000000 3)
;(search-for-primes-num-primes 100 3)
;(newline)
;(search-for-primes-num-primes 1000000 3)
;(newline)
;(search-for-primes-num-primes 100000000000 3)
;(newline)
;(search-for-primes-num-primes 10000000000000 3)
;100000000003 *** .3299999999999983 vs. .20000000000000284 (1.649999999999968 times faster)
;100000000019 *** .3400000000000034 vs. .20000000000000284 (1.6999999999999929 times faster)
;100000000057 *** .3399999999999892 vs. .20000000000000284 (1.6999999999999218 times faster)
;10000000000037 *** 3.280000000000001 vs. 1.980000000000004 (1.6565656565656537 times faster)
;10000000000051 *** 3.259999999999991 vs. 1.9699999999999989 (1.654822335025377 times faster)
;10000000000099 *** 3.280000000000001 vs. 1.9799999999999898 (1.6565656565656657 times faster)
;10000000000099 *** 3.1000000000000014 vs. 2.030000000000001 (1.5270935960591132 times faster)
; With the new next procedure we're seeing around a 1.5x to 1.7x speedup, as opposed to the predicted 2x.
; Part of the gap is that next does more work per call than (+ test-divisor 1): every call now pays for an extra (= 2 num) test.

Exercise 1.24

Modify the timed-prime-test procedure of exercise 1.22 to use fast-prime? (the Fermat method), and test each of the 12 primes you found in that exercise. Since the Fermat test has Θ(log n) growth, how would you expect the time to test primes near 1,000,000 to compare with the time needed to test primes near 1000? Do your data bear this out? Can you explain any discrepancy you find?
(define (square x) (* x x))
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))
(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (try-it (+ 1 (random (- n 1)))))
(define (fast-prime? n times)
  (cond ((= times 0) true)
        ((fermat-test n) (fast-prime? n (- times 1)))
        (else false)))
(define (timed-prime-test n)
  (start-prime-test n (runtime)))
(define (start-prime-test n start-time)
  (if (fast-prime? n 100)
      (report-prime n (- (runtime) start-time))))
(define (report-prime n elapsed-time)
  (newline)
  (display n)
  (display " *** ")
  (display elapsed-time))
(timed-prime-test 1009)
(timed-prime-test 1013)
(timed-prime-test 1019)
(timed-prime-test 100000000003)
(timed-prime-test 100000000019)
(timed-prime-test 100000000057)
(timed-prime-test 10000000000037)
(timed-prime-test 10000000000051)
(timed-prime-test 10000000000099)

Exercise 1.25

Alyssa P. Hacker complains that we went to a lot of extra work in writing expmod. After all, she says, since we already know how to compute exponentials, we could have simply written:
(define (expmod base exp m)
  (remainder (fast-expt base exp) m))
Is she correct? Would this procedure serve as well for our fast prime tester? Explain.
(define (square m) (* m m))
(define (fast-expt b n)
  (cond ((= n 0) 1)
        ((even? n) (square (fast-expt b (/ n 2))))
        (else (* b (fast-expt b (- n 1))))))
(define (expmod base exp m)
  (remainder (fast-expt base exp) m))
(define (new-expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (new-expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (new-expmod base (- exp 1) m))
                    m))))
(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (try-it (+ 1 (random (- n 1)))))
(define (fast-prime? n times)
  (cond ((= times 0) true)
        ((fermat-test n) (fast-prime? n (- times 1)))
        (else false)))
(define (timed-prime-test n)
  (start-prime-test n (runtime)))
(define (start-prime-test n start-time)
  (if (fast-prime? n 100)
      (report-prime n (- (runtime) start-time))))
(define (report-prime n elapsed-time)
  (newline)
  (display n)
  (display " *** ")
  (display elapsed-time))
Using Alyssa's version of expmod, which calls fast-expt, does work; however, it is so slow that timed-prime-test runs for numbers much beyond 1019 appear to run indefinitely. Because this expmod only calls remainder once fast-expt has finished the entire exponentiation, the intermediate values inside fast-expt grow astronomically large and become very expensive to multiply. The book's expmod, by contrast, takes the remainder at every step, so the values being squared never exceed the candidate prime being tested.

Exercise 1.26.

Louis Reasoner is having great difficulty doing exercise 1.24. His fast-prime? test seems to run more slowly than his prime? test. Louis calls his friend Eva Lu Ator over to help. When they examine Louis’s code, they find that he has rewritten the expmod procedure to use an explicit multiplication, rather than calling square:
(define (expmod base exp m)
  (cond ((= exp 0) 1)
    ((even? exp)
    (remainder (* (expmod base (/ exp 2) m)
               (expmod base (/ exp 2) m))
                m))
    (else
      (remainder (* base (expmod base (- exp 1) m))
                    m))))
“I don’t see what difference that could make,” says Louis. “I do.” says Eva. “By writing the procedure like that, you have transformed the Θ(log n) process into a Θ(n) process.” Explain.
Because (expmod base (/ exp 2) m) is computed twice at each step, Louis's version no longer trades one problem for a single half-sized subproblem: each halving now spawns two half-sized subproblems, making the process tree-recursive. The tree has depth log n but roughly 2^(log n) = n nodes, so the Θ(log n) process becomes a Θ(n) process.

Exercise 1.27.

Demonstrate that the Carmichael numbers listed in footnote 47 really do fool the Fermat test. That is, write a procedure that takes an integer n and tests whether a^n is congruent to a modulo n for every a<n, and try your procedure on the given Carmichael numbers.
(define (square x) (* x x))
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))
(define (fermat-test n)
  (define (fermat-iter count n)
    (if (< count n)
        (if (= (expmod count n n) count)
            (fermat-iter (+ count 1) n)
            false)
        true))
  (fermat-iter 1 n))
(fermat-test 561)  ; Carmichael
(fermat-test 1105) ; Carmichael
(fermat-test 1729) ; Carmichael
(fermat-test 2465) ; Carmichael
(fermat-test 2821) ; Carmichael
(fermat-test 6601) ; Carmichael
The Fermat test returns true for every one of these Carmichael numbers, even though all of them are composite, so they really do fool the test.
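
For contrast, a couple of spot checks of my own: the same exhaustive test classifies ordinary primes and composites correctly, and only the Carmichael numbers slip through.

(fermat-test 7)   ; true: 7 is prime
(fermat-test 9)   ; false: (expmod 2 9 9) = 8, not 2
(fermat-test 561) ; true, even though 561 = 3 * 11 * 17 is composite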