Number (part 1)

Aug 10

Back when I was starting out in programming (late 60s), I wondered why scientific/math computing so often got things wrong. In those days, we didn’t have double-precision numbers, so actual errors could really mount up. On the other hand, we were able to send men to the moon, so clearly some people knew how to deal with the problem. But there was no general way I knew of even to keep track of possible errors in numeric calculations. It seemed like something was missing. I figured I would try to solve that sooner or later. In around 2002, I did start work on a set of Java libraries to track errors and units. But there were sufficient issues and limitations in Java that, although it was working pretty well, I stopped trying to improve it.

The first time I heard about Scala, it seemed like I would be able to fix some of those Java problems, given the enhanced type system. And indeed this is true. So, this article (part 1) is about the fuzziness of Number. How do we make a type fuzzy? Well, Fuzzy[X] is a trait which gives fuzziness to a type X. Because it’s a typeclass, we don’t have to own X or even have its source anywhere local. That’s one of the most important features of Scala that enables this sort of thing.

So, how about an example? Let’s say there’s a case class called Color. You didn’t write it yourself but it looks like this:

```scala
case class Color(r: Short, g: Short, b: Short) {
  def difference(x: Color): Color =
    Color(math.abs(r - x.r), math.abs(g - x.g), math.abs(b - x.b))

  def whiteness: Double = math.sqrt(r * r + g * g + b * b) / 255
}
```

We can see how close two colors are by taking their difference and then getting the whiteness of the result. If it’s zero, then our two colors are the same. But what if we can’t even perceive small differences of color? Let’s add fuzziness. But, first we’d better define our Fuzzy trait:

```scala
/**
  * Type class which adds fuzzy behavior to a type X.
  *
  * @tparam X the type to be made fuzzy.
  */
trait Fuzzy[X] {
  /**
    * Method to determine if x1 and x2 can be considered the same with a probability of p.
    *
    * @param p  a probability between 0 and 1 -- 0 would always result in true; 1 will result in false unless x1 actually is x2.
    * @param x1 a value of X.
    * @param x2 a value of X.
    * @return true if x1 and x2 are considered equal with probability p.
    */
  def same(p: Double)(x1: X, x2: X): Boolean
}
```

As you can see, there’s really only one method here. It’s called same and it takes two parameter sets: p (a Double representing probability) and a pair of Xs. The resulting Boolean indicates whether x1 and x2 should be considered the same. That’s all there is in this, the most general, case. We’ll find fuzziness more useful when we have numerical values, but we’ll get to that later.

Now, let’s see how we can add some fuzz to Color:

```scala
object Color {
  def apply(r: Int, g: Int, b: Int): Color = Color(r.toShort, g.toShort, b.toShort)

  trait FuzzyColor extends Fuzzy[Color] {
    def same(p: Double)(x1: Color, x2: Color): Boolean =
      -math.log(x1.difference(x2).whiteness) / 3 > p
  }

  implicit object FuzzyColor extends FuzzyColor
}
```

As you can see, we’ve somewhat arbitrarily defined two colors to be the same if one third of the (negative) log of the whiteness of the difference is greater than p. Using this definition, Color(255,255,255) and Color(242,242,242) are considered the same with 80% confidence. If we need 100% confidence (p = 1) then only colors that are identical are considered the same. For 0% confidence, all colors are the same. So much for this rather artificial example. Now, let’s see how it can work to consider numbers.
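To see the type class in action, here is a minimal, self-contained sketch (condensed from the definitions above) that exercises same at a few confidence levels:

```scala
// Self-contained sketch condensed from the definitions above.
trait Fuzzy[X] {
  def same(p: Double)(x1: X, x2: X): Boolean
}

case class Color(r: Short, g: Short, b: Short) {
  def difference(x: Color): Color =
    Color(math.abs(r - x.r), math.abs(g - x.g), math.abs(b - x.b))

  def whiteness: Double = math.sqrt(r * r + g * g + b * b) / 255
}

object Color {
  def apply(r: Int, g: Int, b: Int): Color = Color(r.toShort, g.toShort, b.toShort)

  implicit object FuzzyColor extends Fuzzy[Color] {
    def same(p: Double)(x1: Color, x2: Color): Boolean =
      -math.log(x1.difference(x2).whiteness) / 3 > p
  }
}

val white = Color(255, 255, 255)
val nearWhite = Color(242, 242, 242)
val fuzzy = implicitly[Fuzzy[Color]]

fuzzy.same(0.8)(white, nearWhite)  // true: the colors differ imperceptibly
fuzzy.same(1.0)(white, nearWhite)  // false: 100% confidence demands identity
```

Note that the implicit instance is summoned via implicitly; any code with Color in scope can use the fuzzy comparison without Color itself knowing anything about fuzziness.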

Along with the nominal value of a number, we also store some (optional) fuzziness. When there is no fuzziness, our number is exact. Two exact numbers can only be considered equal if they actually are equal. But fuzzy numbers can be “equal” according to their difference: both the nominal value and the probability distribution function (pdf).

Without getting too deeply into the mathematics of these pdfs, we note that if a nominal value is within the extent of the pdf, then there is some probability that it is the true value. Suppose that the pdf is a step function. It starts (for very negative values) at zero; at a certain value, the probability density jumps to 1/a (where a is the range of possible values, also known as the “tolerance”); this continues for a distance a along the value axis; thereafter (for more positive values) the density is zero again. I call this pdf a “box.” It is a hypothetical pdf, of course, but it could arise from a process which, for example, forms steel nails of random length (with a uniform distribution). The nails which are out of tolerance (too short or too long) are discarded and melted down. The nails which emerge would have a pdf of length like the box just described.

When two fuzzy numbers (x ± a and y ± b), both with box-like pdfs, are subtracted (or added), the resulting pdf is trapezoidal: flat in the middle, with linearly sloping sides (in the special case where a = b, it degenerates to a triangle).

Somewhat arbitrarily, Number does not use such trapezoidal pdfs. Instead, if we are adding two numbers, we approximate their pdfs as Gaussian (“normal”) distributions. That’s because it is very easy to combine such distributions: the mean is the sum (or difference if we subtract) of the means; the variance is the sum of the variances.
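That combination rule can be sketched in a few lines of Scala (these are hypothetical types for illustration, not the Number library's own classes):

```scala
// Hypothetical type for illustration -- not the Number library's actual API.
case class Gauss(mean: Double, sigma: Double) {
  // Under addition, the means add and the variances (sigma squared) add.
  def +(that: Gauss): Gauss =
    Gauss(mean + that.mean, math.sqrt(sigma * sigma + that.sigma * that.sigma))

  // Under subtraction, the means subtract but the variances still add.
  def -(that: Gauss): Gauss =
    Gauss(mean - that.mean, math.sqrt(sigma * sigma + that.sigma * that.sigma))
}

val x = Gauss(10.0, 0.3)
val y = Gauss(4.0, 0.4)
val sum = x + y  // Gauss(14.0, 0.5) since sqrt(0.09 + 0.16) = 0.5
```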

On the other hand, if we multiply (or divide) two such numbers with box-type fuzziness, the result has a box-like distribution (with width given by the sum of the input box widths), provided that the pdfs were relative to the nominal values, and not absolute. This follows from simple calculus.
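For multiplication with relative box-type fuzz, the tolerances simply add, which can again be sketched with a hypothetical type (an illustration of the arithmetic only):

```scala
// Hypothetical type: a nominal value with a relative box half-width ("tolerance").
case class RelBox(nominal: Double, tolerance: Double) {
  // To first order, relative tolerances add under multiplication and division.
  def *(that: RelBox): RelBox = RelBox(nominal * that.nominal, tolerance + that.tolerance)
  def /(that: RelBox): RelBox = RelBox(nominal / that.nominal, tolerance + that.tolerance)
}

val a = RelBox(2.0, 0.01)  // 2.0, plus or minus 1%
val b = RelBox(3.0, 0.02)  // 3.0, plus or minus 2%
val c = a * b              // 6.0, plus or minus 3%
```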

So, as you can see, we have four types of numerical fuzziness: absolute or relative; box or Gaussian. Once we have combined fuzzy numbers together, their pdfs will typically be in Gaussian form. Furthermore, physical constants also start out with a Gaussian pdf, for example, the gravitational constant G:

G = 6.67430(15) × 10⁻¹¹ m³ kg⁻¹ s⁻². The parentheses around the “15” imply a Gaussian distribution in the final two decimal places, with a mean of 30 and a standard deviation of 15. I will describe how to input fuzzy numbers below. A number of constants, including G, are defined in the Constants type; G itself is defined by Number(“6.67430(15)E-11”), i.e. using one of the apply methods.

For dyadic operations (particularly multiplication and raising to a power), relative pdfs are most convenient. For monadic operations (natural log, for example), or addition, absolute pdfs are more convenient. But, of course, all of the conversions and convolutions are taken care of automatically by the Number code.
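The conversion between the two forms is straightforward; here is a sketch (my own illustration, not the library's code), assuming a non-zero nominal value:

```scala
// Sketch of relative <-> absolute conversion, assuming a non-zero nominal value.
def toAbsolute(nominal: Double, relativeFuzz: Double): Double =
  relativeFuzz * math.abs(nominal)

def toRelative(nominal: Double, absoluteFuzz: Double): Double =
  absoluteFuzz / math.abs(nominal)

toAbsolute(9.81, 0.0005)  // about 0.0049: a 0.05% relative fuzz on 9.81
```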

So, how do we implement the same method described above? We take the difference of the two nominal values and determine if that number (x) could be considered to be zero, given the resulting pdf. For a box-type pdf (which, typically, results only when a fuzzy number is compared to an exact number), we essentially ignore the p value (unless it’s 0 or 1) and simply return true (i.e. same) if the number x is within the box. For a Gaussian distribution, we make use of the “inverse erf” function (into which we pass the complementary probability, i.e. 1-p). That effectively converts the Gaussian pdf into a box.
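Here is a rough sketch of that logic for the Gaussian case, using Winitzki's approximation for the inverse erf (the library's actual implementation will differ):

```scala
// Approximate inverse error function (Winitzki's approximation, accurate to about 2e-3).
def erfInv(x: Double): Double = {
  val a = 0.147
  val ln1mx2 = math.log(1 - x * x)
  val t1 = 2 / (math.Pi * a) + ln1mx2 / 2
  math.signum(x) * math.sqrt(math.sqrt(t1 * t1 - ln1mx2 / a) - t1)
}

// Is a difference of x (with Gaussian fuzz sigma) indistinguishable from zero at confidence p?
// The Gaussian is converted to an equivalent box of half-width sqrt(2) * sigma * erfInv(1 - p).
def sameGaussian(p: Double)(x: Double, sigma: Double): Boolean =
  math.abs(x) <= math.sqrt(2) * sigma * erfInv(1 - p)

sameGaussian(0.5)(0.5, 1.0)  // true: 0.5 is within about 0.674 sigma
sameGaussian(0.5)(1.0, 1.0)  // false
```

The higher the required confidence p, the narrower the equivalent box, which matches the behavior described above: at p close to 1, only near-identical values count as the same.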

An alternative way of looking at this determination is that p corresponds to the probability that zero belongs to the fuzzy set (as in Lotfi Zadeh’s “Fuzzy Logic”) which comprises all the possible fuzzy numbers.

As I will describe in part 2, the nominal value of a number is either an integer, a rational (based on a BigInt divided by a BigInt), or a double-precision value. If we have to convert an integer or rational to a Double, we add a small amount of relative fuzziness (1.6E-16) to any existing fuzziness.

So, how do we deal with introducing fuzziness into our calculations? Well, first of all, we don’t arbitrarily add fuzziness. A number such as phi, the golden ratio, is defined thus: Number(“1.618033988749894”). The result has absolute fuzziness of box shape with value 0.5E-15. This follows from the 15 decimal places given in the definition. By convention, a string with only one or two decimal places is considered exact (as if it were dollars and cents). However, you can override this convention simply by following the string with “*” or “…”. Furthermore, if you have a decimal representation of a number which is in fact exact, represent it as an integer with a negative exponent (for now at least, we must precede the “E” with a decimal point). Or, more simply, just add “00” to the end of the string.

You can make the box-shape fuzziness explicit by suffixing a decimal string with “[x]” or “[xx]” where x or xx stands for the maximum possible deviation in any one direction. Similarly, for Gaussian fuzziness (as we showed with the constant G above), we suffix the decimal string with “(x)” or “(xx).” In this case x or xx represent standard deviations of possible differences in the digits indicated. Fuzziness must precede any exponent, by the way.
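To make the syntax concrete, here is a toy regex (my own illustration, not the library's parser) that separates the pieces of such a string:

```scala
// Toy pattern: decimal digits, then optional [box] or (gaussian) fuzz, then optional exponent.
val FuzzyFormat = """(-?\d+\.\d+)(?:\[(\d\d?)\]|\((\d\d?)\))?(?:E(-?\d+))?""".r

"6.67430(15)E-11" match {
  case FuzzyFormat(digits, box, gaussian, exponent) =>
    // box is null here, since this string uses the Gaussian "(..)" form
    println(s"digits=$digits gaussian=$gaussian exponent=$exponent")
}
```

Note that, as the article says, the fuzz suffix precedes the exponent, which the pattern reflects.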

As I’ll explain next time, the numbers π and e are exact. So, for example, if you calculate atan(tan(π)), the result will be exactly π. Similarly, if you calculate e^(ln(e)), the result will be exactly e. However, more general irrational numbers such as √2 cannot be represented exactly. But, as mentioned in my article Composable Matchers, if you evaluate √2 × √2, you will get back exactly 2; that is achieved through a mechanism called ExpressionMatchers, which does not operate at the low level of the numbers described here. More on that in part 2.

So, what’s the point of all this? At the simplest level, suppose you have a pendulum, such as Foucault’s pendulum. It’s very high and it would be extremely awkward to measure its length. Nevertheless, you can quite easily measure the period t with a stopwatch. The formula for the length l of the pendulum is:

l = g (t / 2π)²

where g is the acceleration due to gravity where the pendulum is found. To two decimal places, g = 9.81 m/s².

In the first scenario, we have an accurate stopwatch but we simply ignore the hundredths of seconds. Our code looks like this:

```scala
import com.phasmidsoftware.number.core.{Expression, Number}
import Number._

val g = Number("9.81*")
val t = Number("16.5*")
val length: Number = g * ((t / twoPi) ^ 2)
```

We need to input the numbers as strings with the asterisk because, having only one or two decimal places, they would otherwise be interpreted as exact. The result we get for the length is 67.65[44]. This implies that we are almost confident in the first two significant figures (67) but the actual value could be anywhere between 67.21 and 68.09 meters. Let’s see how we can calculate the contribution from the uncertainties of g and t.

First, let’s look at the general formula for the error bound for the multiplication of two uncertain quantities, where f = x y (this comes straight from taking the derivative of f):

Δf = y Δx + x Δy

If we divide both sides by f, we get:

Δf / f = Δx / x + Δy / y

This implies that we simply sum the relative error bounds of x and y to get the relative error bound of f. In our example above, we have three terms: g, and t twice (since it is squared). The relative error bound of g is about 0.0005 and that of t is about 0.003. The relative error bound of l, therefore, is approximately 0.0065.
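We can check this arithmetic with plain Doubles (no Number library needed; the values come from the example above):

```scala
// Plain-Double check of the error propagation above.
val g = 9.81            // half-width 0.005
val t = 16.5            // half-width 0.05
val twoPi = 2 * math.Pi

val length = g * math.pow(t / twoPi, 2)  // about 67.65 meters

val relG = 0.005 / g                     // about 0.00051
val relT = 0.05 / t                      // about 0.00303
val relLength = relG + 2 * relT          // t appears squared, so it counts twice
val halfWidth = length * relLength       // about 0.44, matching 67.65[44]
```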

A somewhat more realistic situation is when we time the swing of the pendulum perhaps 1000 times and we calculate the mean and standard deviation. Let’s say the mean is 16.487 and the standard deviation is 0.041 seconds. We write this quantity as 16.487(41). The result we get now is 67.54(35), which is to say that we have 68% confidence that the true length is between 67.19 and 67.89 meters.

The Number package is an ongoing development. What’s described here is version 1.0.10. You can find it at https://github.com/rchillyard/Number. In the second part, I will describe the lazy evaluation mechanism which is designed to avoid, where possible, unnecessary loss of precision, such as in the case of the expression (√3+1)(√3−1) which we know to have the exact value of 2.

CodeX

Everything connected with Tech & Code


Written by

Robin Hillyard

I’ve been addicted to programming for over 50 years now. I currently teach two classes to graduates at Northeastern University, Boston, USA.
