The IEEE 754 Standard: How Computers Represent Floating-Point Numbers

Sachintha Punchihewa
3 min read · Oct 5, 2021


When we use a floating-point number in a program, the computer represents it using the IEEE 754 standard, so we need a solid understanding of how floating-point numbers behave in a computer system. In industry we run into many problems when dealing with floating-point numbers, especially when developing financial applications, point-of-sale applications, and the like.

When we create a float or double value in a program, it is divided into three parts: a sign bit, an exponent, and a mantissa. The total number of bits differs by type: a single-precision float uses 1 sign bit, 8 exponent bits, and 23 mantissa bits (32 bits in total), while a double uses 1, 11, and 52 bits (64 in total); long double has its own, platform-dependent layout.
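
To see these three fields in practice, here is a minimal C sketch (assuming a platform where `float` is an IEEE 754 single-precision value, which is true on virtually all modern hardware). It copies a float's raw bits into an integer and masks out each field:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 9.1f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);          /* reinterpret the 32 raw bits */

    uint32_t sign     = bits >> 31;          /* 1 sign bit     */
    uint32_t exponent = (bits >> 23) & 0xFF; /* 8 exponent bits  */
    uint32_t mantissa = bits & 0x7FFFFF;     /* 23 mantissa bits */

    printf("sign = %u, exponent = %u, mantissa = 0x%06X\n",
           sign, exponent, mantissa);
    return 0;
}
```

For 9.1f this prints `sign = 0, exponent = 130, mantissa = 0x11999A`; the rest of the article derives exactly those numbers by hand.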

To convert a number into IEEE 754 format, we follow three steps. I will explain each step below.

  1. Convert the given value into binary.
  2. Convert that binary value into scientific notation in binary.
  3. Convert the scientific-notation value into the IEEE 754 layout: sign bit, exponent, and mantissa.

Let’s take 9.1 as an example.

Converting the given number into binary

First, we convert the two parts of the number separately: 9 and 0.1. Let’s take the binary form of 9 first.

When we convert 0.1 into binary (by repeatedly multiplying the fraction by 2 and taking the integer bit each time), a repeating pattern appears, 0001100110011001100…, and it never ends.

9 => 1001

0.1 => 00011001100110011001100… (this never ends)

Finally, the binary form of 9.1 is:

9.1 => 1001.0001100110011001100…
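
If you want to check this yourself, here is a small C sketch of the classic multiply-by-2 method for the fractional part. Note that the `double` literal 0.1 is itself already an approximation, but its first couple of dozen bits match the true repeating pattern:

```c
#include <stdio.h>

/* Print the first n binary digits of a fraction in [0, 1)
   by repeatedly doubling it and taking the integer bit.   */
static void print_fraction_bits(double frac, int n) {
    for (int i = 0; i < n; i++) {
        frac *= 2.0;
        if (frac >= 1.0) {
            putchar('1');
            frac -= 1.0;
        } else {
            putchar('0');
        }
    }
}

int main(void) {
    printf("9   => 1001\n");
    printf("0.1 => 0.");
    print_fraction_bits(0.1, 24);   /* prints 000110011001100110011001 */
    printf("...\n");
    return 0;
}
```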

Converting the binary into scientific notation

Now we have to move the binary point so that only a single 1 remains in front of it (this is called normalizing). In our scenario, we have to move the point 3 places to the left.

9.1 => 1.0010001100110011001100…*2³

  • Why do we get 2³? Because we moved the binary point 3 places to reach the leading 1 bit, the normalized value must be multiplied back by 2³ (see the sketch after this list).
  • The power 3 of 2³ is our exponent value.
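
As a quick sanity check, the standard C library can report this exponent directly: `ilogb` (from `<math.h>`, C99) returns the unbiased binary exponent of its argument.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* 9.1 = 1.00100011... x 2^3, so the unbiased exponent is 3. */
    printf("ilogb(9.1) = %d\n", ilogb(9.1));   /* prints 3 */
    return 0;
}
```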

Converting the scientific notation into the IEEE 754 format the computer understands

Now, since we have the value in binary scientific notation, we need to convert it into the IEEE 754 layout.

The IEEE 754 standard says the first bit is the sign bit: if the number is negative, the sign bit is 1, and if it is positive, the sign bit is 0.

(-) => 1

(+) => 0

In the scientific notation we wrote 2³, whose power is the exponent of our value. The exponent field has 8 bits allocated, which gives 2⁸ = 256 possible patterns. We have to represent both negative and positive exponents, not just positive ones, so IEEE 754 does not store the exponent directly: it stores the exponent shifted by a fixed offset of +127. This offset value, +127, is called the exponent bias.

Finally, we take the 3 from 2³ and add it to the exponent bias: 127 + 3 = 130.

  • 127 (exponent bias) + 3 = 130

Now we use those 8 exponent bits to store this value, 130.

To do that, we convert 130 into binary:

130 -> 10000010
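
Here is a short C sketch (same IEEE 754 single-precision assumption as before) that pulls those 8 bits out of the stored value of 9.1f and confirms they hold 130:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 9.1f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint32_t stored = (bits >> 23) & 0xFF;   /* the 8 exponent bits */
    printf("stored exponent = %u, binary = ", stored);
    for (int i = 7; i >= 0; i--)
        putchar('0' + ((stored >> i) & 1));
    printf(", unbiased = %d\n", (int)stored - 127);   /* 130 - 127 = 3 */
    return 0;
}
```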

Let’s put our binary value, its scientific notation, and the exponent value together:

9.1 => 1001.0001100110011001100…

9.1 => 1.0010001100110011001100…*2³

130 -> 10000010

Let’s write out the IEEE 754 form of 9.1, the way the computer stores it:

  1. Since our number is positive, we take 0 as the sign bit.
  2. Then we put in the exponent value: 130 -> 10000010.
  3. The scientific notation always starts with a leading 1, so that 1 is left implicit and is not stored. We take the next 23 bits as the mantissa; because the pattern never ends, the leftover bits are rounded to the nearest value, which bumps the last bits up from …0011001 to …0011010.

This is the IEEE 754 value; all modern computers understand this format internally.

0(sign bit) 10000010(exponent) 00100011001100110011010(mantissa)
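
We can let the machine confirm this. The C sketch below (again assuming IEEE 754 single precision) prints all 32 bits of 9.1f grouped as sign | exponent | mantissa:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float f = 9.1f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    /* Print the 32 bits grouped as sign | exponent | mantissa. */
    for (int i = 31; i >= 0; i--) {
        putchar('0' + ((bits >> i) & 1));
        if (i == 31 || i == 23)
            putchar(' ');
    }
    putchar('\n');   /* prints: 0 10000010 00100011001100110011010 */
    return 0;
}
```

In hexadecimal, the same bit pattern is 0x4111999A.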
