If you're dealing with numbers that have fractional parts, you need a number type that can handle them. When you declare a variable in C# and assign it a fractional value, for example var myValue = 2.5; the inferred type is double.
In many cases double works fine, but sometimes you want to use float or decimal instead. The major difference between float and double is that float uses 32 bits while double uses 64, which lets double represent a much larger range of values with higher precision. So unless memory is a concern, you can safely skip float and use double.
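The literal suffix decides which of the three types you get. A minimal sketch (the variable names are just illustrative):

double d = 2.5;      // no suffix (or 'd') gives a double: 64-bit binary floating point
float f = 2.5f;      // the 'f' suffix gives a float: 32-bit binary floating point
decimal m = 2.5m;    // the 'm' suffix gives a decimal: 128-bit base-10 type

var inferred = 2.5;  // var with a plain fractional literal is also inferred as double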
One major drawback of float and double is that they use a binary representation for the fractional part of the value. This becomes an issue when you need the fractional part to be exact. Not every fractional value that looks exact in the code can be represented in binary. One example is 0.2, which gets rounded when stored as a float or double.
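You can see the rounding in action by storing the same literal as a float and as a double; the two roundings of 0.2 don't even agree with each other. A small sketch:

float f = 0.2f;
double d = 0.2;

Console.WriteLine((double)f == d);   // False: float and double round 0.2 to different values
Console.WriteLine(0.1 + 0.2 == 0.3); // False: the individual rounding errors add up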
This means you may get unexpected rounding errors when using float or double, and checking whether two variables are equal may return false because of a tiny difference in the fractions. For that reason you should always allow for a small difference when comparing variables of type float or double.
Instead of: if (aDouble == anotherDouble) { ... }
Use: if (Math.Abs(aDouble - anotherDouble) < tolerance) { ... }
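Put together, a comparison with a tolerance could look like the sketch below. The tolerance value here is an assumption; pick one that fits the magnitude of your data.

const double tolerance = 1e-9;

double aDouble = 0.1 + 0.2;
double anotherDouble = 0.3;

Console.WriteLine(aDouble == anotherDouble);                      // False: exact comparison trips on the rounding error
Console.WriteLine(Math.Abs(aDouble - anotherDouble) < tolerance); // True: equal within the tolerance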
However, if it is important in your application that you don't get any rounding errors, for example when handling currency, you should use the decimal type. The big difference with decimal is that the value is stored using decimal digits (0-9) instead of a binary representation, so fractions like 0.2 are stored exactly.
The drawback of decimal is that it uses 128 bits, and yet its range is much smaller than that of float and double, making it the most precise but also the least efficient of the three.
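As a rough sketch of the difference for currency-style arithmetic (the amounts are just illustrative):

double sumAsDouble = 0.1 + 0.2;
decimal sumAsDecimal = 0.1m + 0.2m;

Console.WriteLine(sumAsDouble == 0.3);   // False: binary rounding error
Console.WriteLine(sumAsDecimal == 0.3m); // True: decimal stores these fractions exactly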
For more details, see the official C# specification.
The decimal keyword
The double keyword
The float keyword