
When you store a double in memory it's really an approximation: it only has 53 bits of precision, which works out to roughly 15-17 significant decimal digits. 1.7 can't be represented exactly in binary, so when you compute 1.7 * 1.7 you're really multiplying something more like 1.6999999999999999556 * 1.6999999999999999556. As you multiply values that each carry a tiny error, the errors compound, and eventually the difference gets big enough to show up in the value that's actually stored and printed.

At least I think that's what it is. You can see it if you initialize a double to a value like 1.7 in something like VS Code: hover over the variable in the debugger and it'll show you a number that's slightly off.
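
Here's a small C++ sketch of the idea (assuming IEEE-754 doubles, which is what basically every mainstream compiler uses); the exact digits in the comments are approximate:

    #include <iomanip>
    #include <iostream>

    int main() {
        double x = 1.7;          // stored value is the closest double, ~1.6999999999999999556
        double product = x * x;  // the tiny error in x compounds here

        std::cout << std::setprecision(20);
        std::cout << "x       = " << x << "\n";        // prints roughly 1.6999999999999999556
        std::cout << "product = " << product << "\n";  // prints roughly 2.8899999999999996803
        std::cout << std::boolalpha
                  << "product == 2.89 ? " << (product == 2.89) << "\n";  // false
        return 0;
    }

Comparing doubles with == is the usual trap here; checking against a small tolerance instead (e.g. std::abs(product - 2.89) < 1e-9) is the common workaround.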