Is super-accuracy matters?
10-24-2023, 01:33 PM
Post: #32
RE: Is super-accuracy matters?
(10-07-2023 07:24 AM)EdS2 Wrote: I think this is related: we might ask ourselves, where in the real world might we need unexpectedly great accuracy?

Which is why Microsoft introduced the "decimal" type:

"The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The default value of a Decimal is 0. The Decimal value type is appropriate for financial calculations that require large numbers of significant integral and fractional digits and no round-off errors."

"The binary representation of a Decimal value is 128 bits, consisting of a 96-bit integer number and a 32-bit set of flags representing things such as the sign and the scaling factor used to specify what portion of it is a decimal fraction. Therefore, the binary representation of a Decimal value is of the form ((-2^96 to 2^96) / 10^(0 to 28)), where -(2^96 - 1) is equal to MinValue and 2^96 - 1 is equal to MaxValue."

https://learn.microsoft.com/en-us/dotnet...ew=net-7.0
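To make the round-off point concrete, here is a small sketch using Python's decimal module (an analogue of .NET's Decimal, not the same library). The decimal_from_parts helper is hypothetical, added only to mimic the coefficient / 10^scale form described in the quoted documentation.

Code:
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so round-off appears:
print(0.1 + 0.2)                     # 0.30000000000000004
print(0.1 + 0.2 == 0.3)              # False

# A decimal type stores an integer coefficient plus a power-of-ten scale,
# so short decimal fractions stay exact -- the property the quoted
# documentation highlights for financial work.
print(Decimal("0.1") + Decimal("0.2"))                     # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True

def decimal_from_parts(coefficient, scale, negative=False):
    """Hypothetical helper: build +/- coefficient / 10**scale, using
    .NET Decimal's limits (96-bit coefficient, scale 0..28)."""
    assert 0 <= coefficient < 2**96 and 0 <= scale <= 28
    sign = 1 if negative else 0
    # Python's Decimal accepts a (sign, digit-tuple, exponent) triple directly.
    return Decimal((sign, tuple(int(d) for d in str(coefficient)), -scale))

print(decimal_from_parts(79228162514264337593543950335, 0))  # the quoted MaxValue
print(decimal_from_parts(12345, 2))                          # 123.45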