Last week I wrote about the "making change for a dollar" project. It got me thinking about how hard dealing with money in programming really is. The problem with money is that 1/10 is an infinitely repeating fraction in binary, so a floating point variable can only hold an approximation of it. If one adds 1/10 over and over, those tiny rounding errors accumulate.
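To see why, work out the expansion the same way you would by hand: 0.1 in decimal comes out to 0.000110011001100110011... in binary, with the group 0011 repeating forever, just as 1/3 = 0.333... repeats forever in decimal. A 64-bit double has to cut that off somewhere, so it stores a value slightly different from 0.1.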
Take the following C# code.
double penny = 1.0 / 10.0;
double dime = 0.0;
Console.WriteLine(penny);
for (int i = 0; i < 100; i++)
{
    dime += penny;
}
if (dime == 10.0) Console.WriteLine("Dime");
Console.WriteLine(dime);
One might expect that the word "Dime" would be printed, but it is not. What this code actually prints is the following.
0.1
9.99999999999998
The 0.1 is expected, but as you can see, adding 0.1 a hundred times doesn't give exactly 10.0. The problem often doesn't show up right away. This sort of thing confuses beginners because the programming language "helpfully" hides the issue by rounding the value it displays. When I had the program display the value of "dime" after every addition, the discrepancy didn't show up until around 6.0.
5.8
5.9
5.99999999999999
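Here is a sketch of that experiment (not the exact program I ran); the "G17" format string tells C# to show enough digits to reveal the value actually stored, instead of the nicely rounded display.
double penny = 1.0 / 10.0;
double dime = 0.0;
Console.WriteLine(penny.ToString("G17")); // prints 0.10000000000000001
for (int i = 0; i < 100; i++)
{
    dime += penny;
    Console.WriteLine(dime); // with default formatting, the error only becomes visible around 6.0
}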
It's not only beginners who struggle with this. The problem is common enough that some languages provide a currency data type for exactly this reason. That type adds some overhead to arithmetic, so not everyone is a fan of using it. An alternative is to store amounts as integers (counting cents, for example) and insert a decimal point only for display.
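In C#, the decimal type plays the role of that currency type, and the integer approach can be as simple as counting cents. A quick sketch of both:
// decimal is a base-10 type, so it stores 0.1 exactly and the comparison works
decimal penny = 0.1m;
decimal dime = 0.0m;
for (int i = 0; i < 100; i++)
{
    dime += penny;
}
if (dime == 10.0m) Console.WriteLine("Dime"); // this time "Dime" is printed

// or keep everything in integer cents and format only for display
long cents = 0;
for (int i = 0; i < 100; i++)
{
    cents += 10;
}
Console.WriteLine($"{cents / 100}.{cents % 100:D2}"); // prints 10.00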
One of the first programming languages I used for business-related software, DIBOL, didn't support floating point numbers at all; it gave you 18 digits of integer precision. It was an interesting language, and using it made it easier for me to think about using integers for money later in my career. Most beginners don't have that sort of helpful start. They are used to thinking in decimal rather than binary, which reinforces my belief that learning binary is an important part of computer science education.