What is the difference between decimal, float and double in .NET? When would someone use one of these?


Interesting article: zetcode.com/lang/csharp/datatypes

Related: sandbox.mc.edu/~bennet/cs110/flt/dtof.html

float/double usually do not represent numbers as 101.101110; normally they are represented as something like 1101010 * 2^(01010010), i.e. a significand times 2 raised to an exponent
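
A minimal sketch of that layout in C# (the sample value is arbitrary), pulling the sign, exponent, and significand fields out of a double's raw bits:

```csharp
using System;

class DoubleBits
{
    static void Main()
    {
        double value = 101.1015625; // arbitrary sample value
        long bits = BitConverter.DoubleToInt64Bits(value);

        long sign     = (bits >> 63) & 0x1;
        long exponent = (bits >> 52) & 0x7FF;    // 11-bit biased exponent
        long mantissa = bits & 0xFFFFFFFFFFFFFL; // 52-bit fraction

        // value = (-1)^sign * 1.mantissa * 2^(exponent - 1023)
        Console.WriteLine($"sign={sign}, exponent={exponent - 1023}, mantissa=0x{mantissa:X13}");
    }
}
```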

Hazzard: That's what the "and the location of the binary point" part of the answer means.

I'm surprised it hasn't been said already: float is a C# alias keyword and isn't a .NET type; it's System.Single. Single and Double are floating binary point types.

BKSpurgeon: Well, only in the same way that you can say that everything is a binary type, at which point it becomes a fairly useless definition. Decimal is a decimal type in that it's a number represented as an integer significand and a scale, such that the result is significand * 10^scale, whereas float and double are significand * 2^scale. You take a number written in decimal, and move the decimal point far enough to the right that you've got an integer to work out the significand and the scale. For float/double you'd start with a number written in binary.
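
That representation can be observed directly with decimal.GetBits; a minimal sketch:

```csharp
using System;

class DecimalInternals
{
    static void Main()
    {
        decimal d = 123.45m;
        int[] parts = decimal.GetBits(d);

        // parts[0..2] hold the 96-bit integer significand (lo, mid, hi);
        // parts[3] packs the sign (bit 31) and the scale (bits 16-23).
        int scale = (parts[3] >> 16) & 0xFF;
        bool negative = (parts[3] & int.MinValue) != 0;

        // 123.45m is stored as significand 12345 with scale 2, i.e. 12345 / 10^2.
        Console.WriteLine($"significand={parts[0]}, scale={scale}, negative={negative}");
    }
}
```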

Another difference: float 32-bit; double 64-bit; and decimal 128-bit.
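
Easy to confirm: sizeof is permitted in safe code for these built-in types.

```csharp
using System;

class Sizes
{
    static void Main()
    {
        Console.WriteLine(sizeof(float));   // 4 bytes  (32 bits)
        Console.WriteLine(sizeof(double));  // 8 bytes  (64 bits)
        Console.WriteLine(sizeof(decimal)); // 16 bytes (128 bits)
    }
}
```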

Thecrocodilehunter: sorry, but no. Decimal can represent all numbers that can be represented in decimal notation, but not 1/3 for example. 1.0m / 3.0m will evaluate to 0.33333333... with a large but finite number of 3s at the end. Multiplying it by 3 will not return an exact 1.0.
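
This is straightforward to verify:

```csharp
using System;

class ThirdDemo
{
    static void Main()
    {
        decimal third = 1.0m / 3.0m;
        Console.WriteLine(third);                // 0.3333333333333333333333333333
        Console.WriteLine(third * 3.0m);         // 0.9999999999999999999999999999
        Console.WriteLine(third * 3.0m == 1.0m); // False
    }
}
```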

Thecrocodilehunter: I think you're confusing accuracy and precision. They are different things in this context. Precision is the number of digits available to represent a number. The more precision, the less you need to round. No data type has infinite precision.

Thecrocodilehunter: You're assuming that the value that is being measured is exactly 0.1 -- that is rarely the case in the real world! Any finite storage format will conflate an infinite number of possible values to a finite number of bit patterns. For example, float will conflate 0.1 and 0.1 + 1e-8, while decimal will conflate 0.1 and 0.1 + 1e-29. Sure, within a given range, certain values can be represented in any format with zero loss of accuracy (e.g. float can store any integer up to 1.6e7 with zero loss of accuracy) -- but that's still not infinite accuracy.

Thecrocodilehunter: You missed my point. 0.1 is not a special value! The only thing that makes 0.1 "better" than 0.10000001 is that human beings like base 10. And even with a float value, if you initialize two values with 0.1 the same way, they will both be the same value. It's just that that value won't be exactly 0.1 -- it will be the closest value to 0.1 that can be exactly represented as a float. Sure, with binary floats, 0.1 + 0.2 != 0.3, but with decimal floats, (1.0m / 3) * 3 != 1.0m either. Neither is perfectly precise.

Thecrocodilehunter: You still don't understand. I don't know how to say this any more plainly: in C, if you do double a = 0.1; double b = 0.1; then a == b will be true. It's just that neither a nor b will exactly equal 0.1. In C#, if you do decimal a = 1.0m / 3.0m; decimal b = 1.0m / 3.0m; then a == b will also be true. But in that case, neither a nor b will exactly equal 1/3 -- they will both equal 0.3333.... In both cases, some accuracy is lost due to representation. You stubbornly say that decimal has "infinite" precision, which is false.
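
A small sketch of both cases:

```csharp
using System;

class EqualityDemo
{
    static void Main()
    {
        double a = 0.1, b = 0.1;
        Console.WriteLine(a == b);            // True: both get the same nearest double
        Console.WriteLine(a.ToString("G17")); // 0.10000000000000001 -- not exactly 0.1

        decimal x = 1.0m / 3.0m, y = 1.0m / 3.0m;
        Console.WriteLine(x == y);            // True: both get the same rounded value
        Console.WriteLine(x * 3.0m == 1.0m);  // False: neither is exactly 1/3
    }
}
```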

If you're doing financial calculations, you absolutely have to roll your own datatypes or find a good library that matches your exact needs. Accuracy in a financial setting is defined by (human) standards bodies, and they have very specific localized (both in time and geography) rules about how to do calculations. Things like correct rounding aren't captured in the simple numeric datatypes in .NET. The ability to do calculations is only a very small part of the puzzle.
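
One concrete example: .NET's default midpoint rounding is banker's rounding (round half to even), which a given financial standard may or may not allow.

```csharp
using System;

class RoundingRules
{
    static void Main()
    {
        Console.WriteLine(Math.Round(2.5m)); // 2 -- banker's rounding, the default
        Console.WriteLine(Math.Round(3.5m)); // 4

        // Many financial rules instead expect "round half away from zero":
        Console.WriteLine(Math.Round(2.5m, MidpointRounding.AwayFromZero)); // 3
    }
}
```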

You left out the biggest difference, which is the base used for the decimal type (decimal is stored as base 10, while all the other numeric types listed are base 2).

The value ranges for the Single and Double are not depicted correctly in the above image or the source forum post. Since we can't easily superscript the text here, use the caret character: Single should be 10^-45 and 10^38, and Double should be 10^-324 and 10^308. Also, MSDN has the float with a range of -3.4x10^38 to +3.4x10^38. Search MSDN for System.Single and System.Double in case of link changes. Single: msdn.microsoft.com/en-us/library/b1e65aza.aspx Double: msdn.microsoft.com/en-us/library/678hzkk9.aspx
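
The ranges can also be printed straight from the framework constants rather than read off an image:

```csharp
using System;

class Ranges
{
    static void Main()
    {
        Console.WriteLine($"float  : up to ±{float.MaxValue:G9}, smallest positive {float.Epsilon}");
        Console.WriteLine($"double : up to ±{double.MaxValue:G17}, smallest positive {double.Epsilon}");
        Console.WriteLine($"decimal: {decimal.MinValue} .. {decimal.MaxValue}");
    }
}
```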

Decimal is 128 bits ... which means it occupies 16 bytes, not 12.

RogerLipscombe: I would consider double proper in accounting applications in those cases (and basically only those cases) where no integer type larger than 32 bits was available, and the double was being used as though it were a 53-bit integer type (e.g. to hold a whole number of pennies, or a whole number of hundredths of a cent). Not much use for such things nowadays, but many languages gained the ability to use double-precision floating-point values long before they gained 64-bit (or in some cases even 32-bit!) integer math.
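
A quick demonstration of double behaving as a 53-bit integer type (2^53 = 9007199254740992):

```csharp
using System;

class WholeNumberDemo
{
    static void Main()
    {
        double exact = 9007199254740992;       // 2^53
        Console.WriteLine(exact == exact + 1); // True: 2^53 + 1 is not representable

        double below = 9007199254740991;       // 2^53 - 1
        Console.WriteLine(below == below + 1); // False: still exact at this magnitude
    }
}
```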

Your answer implies precision is the only difference between these data types. Given binary floating point arithmetic is typically implemented in hardware FPU, performance is a significant difference. This may be inconsequential for some applications, but is critical for others.
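
A rough sketch of how one might measure this -- not a rigorous benchmark (the loop body and count are arbitrary, and a tool like BenchmarkDotNet would be the proper way):

```csharp
using System;
using System.Diagnostics;

class PerfSketch
{
    static void Main()
    {
        const int N = 1_000_000;

        var sw = Stopwatch.StartNew();
        double d = 1.0;
        for (int i = 0; i < N; i++) d = d * 1.0000001 + 0.0000001; // hardware FPU
        sw.Stop();
        Console.WriteLine($"double : {sw.ElapsedMilliseconds} ms (result {d})");

        sw.Restart();
        decimal m = 1.0m;
        for (int i = 0; i < N; i++) m = m * 1.0000001m + 0.0000001m; // software arithmetic
        sw.Stop();
        Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms (result {m})");
    }
}
```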

supercat: double is never proper in accounting applications, because double can only approximate decimal values (even within the range of its own precision). This is because double stores values in a base-2 (binary) format.

BrainSlugs83: Use of floating-point types to hold non-whole-number quantities would be improper, but it was historically very common for languages to have floating-point types that could precisely represent larger whole-number values than their integer types could represent. Perhaps the most extreme example was Turbo-87, whose only integer types were limited to -32768 to +32767, but whose Real could IIRC represent values up to 1.8E+19 with unit precision. I would think it would be much saner for an accounting application to use Real to represent a whole number of pennies than...

...for it to try to perform multi-precision math using a bunch of 16-bit values. For most other languages the difference wasn't that extreme, but for a long time it has been very common for languages not to have any integer type that went beyond 4E9 but to have a double type with unit accuracy up to 9E15. If one needs to store whole numbers which are bigger than the largest available integer type, using double is apt to be simpler and more efficient than trying to fudge multi-precision math, especially given that while processors have instructions to perform 16x16->32 or...

I really like this answer, especially the question "do we count or measure money?" However, other than money, I can't think of anything that is "counted" that is not simply integer. I have seen some applications that use decimal simply because double has too few significant digits. In other words, decimal might be used because C# does not have a quadruple type: en.wikipedia.org/wiki/Quadruple-precision_floating-point_format

float.MaxValue + 1 == float.MaxValue, just as decimal.MaxValue + 0.1m == decimal.MaxValue. Perhaps you meant something like float.MaxValue * 2?

supercat: But it is not true that decimal.MaxValue + 1 == decimal.MaxValue

supercat: decimal.MaxValue + 0.1m == decimal.MaxValue, OK.

The System.Decimal throws an exception just before it becomes unable to distinguish whole units, but if an application is supposed to be dealing with e.g. dollars and cents, that could be too late.
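
The contrast in overflow behavior, sketched (values go through variables so everything happens at runtime rather than in compile-time constant folding):

```csharp
using System;

class OverflowBehavior
{
    static void Main()
    {
        float fmax = float.MaxValue;
        // float silently absorbs the +1 (it is below the precision at that
        // magnitude) and overflows to Infinity rather than throwing:
        Console.WriteLine(fmax + 1 == fmax); // True
        Console.WriteLine(fmax * 2);         // Infinity

        decimal max = decimal.MaxValue;      // runtime value, not a constant expression
        try
        {
            _ = max + 1m;
        }
        catch (OverflowException)
        {
            Console.WriteLine("decimal throws instead of silently losing whole units");
        }
    }
}
```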

They sure can! They also have a couple of "magic" values such as Infinity, Negative Infinity, and NaN (not a number), which makes them very useful for detecting vertical lines while computing slopes... Further, if you need to decide between calling float.TryParse, double.TryParse, and decimal.TryParse (to detect if a string is a number, for example), I recommend using double or float, as they will parse "Infinity", "-Infinity", and "NaN" properly, whereas decimal will not.
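
A sketch of that difference; note that parsing "Infinity" and "NaN" for float/double is the behavior on modern .NET (Core 3.0 and later), and older runtimes may differ:

```csharp
using System;
using System.Globalization;

class ParseSpecials
{
    static void Main()
    {
        var inv = CultureInfo.InvariantCulture;

        Console.WriteLine(double.TryParse("Infinity", NumberStyles.Float, inv, out double d)); // True
        Console.WriteLine(double.TryParse("NaN", NumberStyles.Float, inv, out d));             // True

        // decimal has no representation for these special values:
        Console.WriteLine(decimal.TryParse("NaN", NumberStyles.Float, inv, out decimal m));    // False
    }
}
```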

Compilation only fails if you attempt to divide a literal decimal by zero (CS0020), and the same is true of integral literals. However if a runtime decimal value is divided by zero, you'll get an exception not a compile error.
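
A minimal illustration of the compile-time vs runtime distinction:

```csharp
using System;

class DivideByZeroDemo
{
    static void Main()
    {
        // decimal bad = 1m / 0m;  // does not compile: CS0020
        // int bad2    = 1 / 0;    // does not compile: CS0020

        decimal zero = 0m;         // runtime value, invisible to the compiler
        try
        {
            _ = 1m / zero;
        }
        catch (DivideByZeroException)
        {
            Console.WriteLine("decimal division by zero throws at runtime");
        }

        double dzero = 0.0;
        Console.WriteLine(1.0 / dzero); // Infinity -- binary floats never throw here
    }
}
```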

BrainSlugs83: However, you might not want to parse "Infinity" or "NaN" depending on the context. It seems like a good exploit for user input if the developer isn't rigorous enough.

Upvoted for using .42 and .007 in your example. : D

The "point something" you mentioned is generally referred to as "the fractional part" of a number."Floating point" does not mean "a number with a point something on the end"; but instead "Floating Point" distinguishes the type of number, as opposed to a "Fixed Point" number (which can also store a fractional value); the difference is whether the precision is fixed, or floating. -- Floating point numbers give you a much bigger dynamic range of values (Min and Max), at the cost of precision, whereas a fixed point numbers give you a constant amount of precision at the cost of range.

Out of curiosity, what was the raw value of cellValue.ToString()? Decimal.TryParse("0.00006317592", out val) seems to work...

-1 Don't get me wrong: if true, it's very interesting, but this is a separate question; it's certainly not an answer to this question.

Maybe because the Excel cell was returning a double, and the ToString() value was "6.31759E-05", so decimal.Parse() didn't like the notation. I bet if you checked the return value of Decimal.TryParse() it would have been false.

weston: Answers often complement other answers by filling in nuances they have missed. This answer highlights a difference in terms of parsing. It is very much an answer to the question!

Er... decimal.Parse("0.00006317592") works -- you've got something else going on. -- Possibly scientific notation?
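
If scientific notation was indeed the culprit, the fix is to opt in to NumberStyles.Float; a sketch, assuming the Excel-style string from the earlier comment:

```csharp
using System;
using System.Globalization;

class SciNotation
{
    static void Main()
    {
        string s = "6.31759E-05"; // scientific notation, as Excel might emit

        // Plain decimal.Parse/TryParse uses NumberStyles.Number, which rejects exponents:
        Console.WriteLine(decimal.TryParse(s, out decimal plain)); // False

        // NumberStyles.Float allows the exponent:
        decimal d = decimal.Parse(s, NumberStyles.Float, CultureInfo.InvariantCulture);
        Console.WriteLine(d); // 0.0000631759
    }
}
```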

The difference is more than just precision. -- decimal is actually stored in decimal format (as opposed to base 2; so it won't lose or round digits due to conversion between the two numeric systems); additionally, decimal has no concept of special values such as NaN, -0, ∞, or -∞.

Pretty much all modern systems, even cell phones, have hardware support for double; and if your game has even simple physics, you will notice a big difference between double and float. (For example, when calculating velocity/friction in a simple Asteroids clone, doubles allow acceleration to flow much more fluidly than float. -- Seems like it shouldn't matter, but it totally does.)
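
A minimal sketch of that drift, accumulating a small per-frame delta the way a naive integrator would (the step size and frame count are arbitrary):

```csharp
using System;

class DriftDemo
{
    static void Main()
    {
        float  vf = 0f;
        double vd = 0.0;
        for (int frame = 0; frame < 1_000_000; frame++)
        {
            vf += 0.001f; // e.g. acceleration * dt each frame
            vd += 0.001;
        }
        Console.WriteLine(vf); // drifts visibly away from the exact 1000
        Console.WriteLine(vd); // stays much closer to 1000
    }
}
```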

Doubles are also double the size of floats, meaning you need to chew through twice as much data, which hurts your cache performance. As always, measure and proceed accordingly.

What does this answer add that isn't already covered in the existing answers? BTW, your "or" in the "decimal" line is incorrect: the slash in the web page that you're copying from indicates division rather than an alternative.

And I'd dispute strongly that precision is the main difference. The main difference is the base: decimal floating-point versus binary floating-point. That difference is what makes Decimal suitable for financial applications, and it's the main criterion to use when deciding between Decimal and Double. It's rare that Double precision isn't enough for scientific applications, for example (and Decimal is often unsuitable for scientific applications because of its limited range).
