
When to use float, double, and decimal in C#

If you're dealing with numbers that have fractions, you need a number type that can handle that. When you define a variable in C# and assign it a fractional value, for example var myValue = 2.5;, the default type is double.

In many cases double works fine, but sometimes you want to use float or decimal instead. The major difference between float and double is that float uses 32 bits while double uses 64 bits. This lets double represent a much larger range of values with higher precision, so if memory is not an issue you can safely skip float and use double instead.
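
To make the difference concrete, here is a minimal sketch (the variable names are made up, and the exact printed digits may vary slightly between runtimes):

float aFloat = 1.23456789f;    // 32 bits, roughly 6-9 significant digits
double aDouble = 1.23456789;   // 64 bits, roughly 15-17 significant digits

Console.WriteLine(aFloat);           // 1.2345679 - the float has already lost precision
Console.WriteLine(aDouble);          // 1.23456789
Console.WriteLine(sizeof(float));    // 4 (bytes)
Console.WriteLine(sizeof(double));   // 8 (bytes)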

One major drawback with float and double is that they use a binary representation for the fractional part of the value. This becomes an issue when you need the fractional part to be exact. Not all fractional values that look exact in the code can be represented in binary. One example is 0.2, which gets rounded off when stored as a float or double.
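
A minimal sketch of this effect (the exact digits printed depend on the runtime's formatting):

double a = 0.1;
double b = 0.2;

Console.WriteLine(a + b == 0.3);  // False - neither 0.1, 0.2 nor 0.3 is exact in binary
Console.WriteLine(a + b);         // 0.30000000000000004 on recent .NET versions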

What this means is that you may get unexpected rounding errors when using float or double, and checking if two variables are equal may unexpectedly return false because some tiny fraction differs. Because of this you should always allow for a small difference (a tolerance) when comparing variables of type float or double.

Instead of: if (aDouble == anotherDouble) { ... }
Use: if (Math.Abs(aDouble - anotherDouble) < tolerance) { ... }
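
Putting it together, a small illustration (the tolerance value here is made up, pick one that makes sense for your application):

double expected = 0.3;
double actual = 0.1 + 0.2;

// Naive comparison - fails because of the binary rounding described above
Console.WriteLine(expected == actual);                       // False

// Comparison with a tolerance
const double tolerance = 1e-9;
Console.WriteLine(Math.Abs(expected - actual) < tolerance);  // True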

However, if it is important in your application that you don't get any rounding errors, for example when handling currency, you should use the decimal type. The big difference with decimal is that the value is stored using a base-10 representation instead of a binary one, so fractions like 0.2 can be stored exactly.
The drawback of decimal is that it uses 128 bits, and still its range is much smaller than that of float and double, making it the most precise but also the least efficient of the three.
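
As a quick illustration of the difference (the m suffix makes the literal a decimal, and the values are just examples):

decimal price = 0.1m;
Console.WriteLine(price * 3 == 0.3m);  // True - decimal stores the fraction exactly

double dPrice = 0.1;
Console.WriteLine(dPrice * 3 == 0.3);  // False - binary rounding strikes again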

For more details, see the official C# specification.
The decimal keyword
The double keyword
The float keyword
