Decimal vs Double in C#

Mahabubul Hasan · Published in .NET in 2 minutes · 2 min read · Mar 21, 2021

double is useful for scientific computations; decimal is useful for financial computations.

decimal is about 10 times slower than double.
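You can verify the performance gap with a rough micro-benchmark sketch like the one below. The exact ratio depends on hardware, runtime, and JIT settings; the loop bound N and the use of Stopwatch are just illustrative choices, not from the original article.

```csharp
using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        const int N = 10_000_000;

        // Time N additions of a double.
        var sw = Stopwatch.StartNew();
        double dSum = 0;
        for (int i = 0; i < N; i++) dSum += 0.1;
        sw.Stop();
        Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

        // Time N additions of a decimal. decimal arithmetic is done in
        // software (base-10), so it is typically several times slower.
        sw.Restart();
        decimal mSum = 0;
        for (int i = 0; i < N; i++) mSum += 0.1m;
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");
    }
}
```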

float and double internally represent numbers in base 2. For this reason, only numbers expressible in base 2 are represented precisely. Practically, this means most literals with a fractional component (which are written in base 10) will not be represented precisely. This is why float and double are bad for financial calculations. In contrast, decimal works in base 10 and so can precisely represent numbers expressible in base 10 (as well as in base 2 and base 5, the factors of 10). Because real literals are written in base 10, decimal can precisely represent numbers such as 0.1.

float x = 0.1f; // Not quite 0.1 
Console.WriteLine(x + x + x + x + x + x + x + x + x + x); // 1.0000001
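For contrast, here is a minimal sketch of the same ten-term sum done with decimal, where 0.1 is representable exactly, so the total comes out to exactly 1 (the variable names are my own, not from the article):

```csharp
using System;

class Program
{
    static void Main()
    {
        float x = 0.1f;   // nearest float to 0.1 — not exact
        decimal m = 0.1m; // exactly 0.1 in base 10

        float fSum = x + x + x + x + x + x + x + x + x + x;
        decimal mSum = m + m + m + m + m + m + m + m + m + m;

        Console.WriteLine(fSum); // close to 1, but not exactly 1
        Console.WriteLine(mSum); // exactly 1.0
    }
}
```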

However, neither double nor decimal can precisely represent a fractional number whose base-10 representation is recurring.

decimal m = 1M / 6M; // 0.1666666666666666666666666667M 
double d = 1.0 / 6.0; // 0.16666666666666666

This leads to accumulated rounding errors:

decimal notQuiteWholeM = m + m + m + m + m + m; 
// 1.0000000000000000000000000002M
double notQuiteWholeD = d + d + d + d + d + d;
// 0.99999999999999989

These accumulated errors break equality and comparison operations:

Console.WriteLine(notQuiteWholeM == 1M); // False
Console.WriteLine(notQuiteWholeD < 1.0); // True
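A common workaround for this (my addition, not part of the original article) is to avoid exact equality on floating-point values and instead compare within a small tolerance. The epsilon value here is an arbitrary illustrative choice; a real application would pick it based on the scale of its data:

```csharp
using System;

class Program
{
    static void Main()
    {
        double d = 1.0 / 6.0;
        double notQuiteWholeD = d + d + d + d + d + d; // 0.99999999999999989

        // Exact equality fails because of the accumulated rounding error:
        Console.WriteLine(notQuiteWholeD == 1.0); // False

        // Comparing within a tolerance succeeds:
        const double epsilon = 1e-9; // assumed tolerance for this example
        bool nearlyEqual = Math.Abs(notQuiteWholeD - 1.0) < epsilon;
        Console.WriteLine(nearlyEqual); // True
    }
}
```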
