Is super-accuracy matters?
10-07-2023, 03:44 PM
Post: #9
RE: Is super-accuracy matters?
The need for a more accurate π comes into play when you are implementing something like sin(π + ε) and trying to produce correct results for small values of ε (without knowing in advance how small).

As you noted, sin(π + ε) = −sin(ε) ≈ −ε, so it's an easy answer... Except you are given x = π + ε in floating point, so how do you calculate ε? ε = x − π, of course, but using the "exact" value of π, not the approximate floating-point version. If x is very close to π and that subtraction is off by 1 ULP, that 1 ULP can be of huge relative magnitude with respect to ε. Say the 1-ULP error is 10 times smaller than ε: then only 1 significant digit of the answer is correct, which is a "bad" answer if you are trying to implement sin() correctly.

Now, when people tell me "but the input already comes with an uncertainty", my answer is: that by itself doesn't justify needlessly introducing more uncertainty by being sloppy with the calculations. 1 ULP doesn't seem like much, but when you are solving a system of 6000 equations, those tiny errors pile up like crazy over thousands of operations. The idea here is that calculations should be as accurate as possible, so that the answer after many operations still has an uncertainty based on the input uncertainty, not the input uncertainty plus all the garbage random uncertainty I introduced because I was lazy in my implementation.

It's my opinion, of course, and I'm an engineer, so I regularly use 3 or 4 digits and that's plenty for hand calculations. But when I solve an initial value problem with 100k integration time steps... I need to trust that those ULPs are fine, otherwise my results can quickly diverge into garbage territory.
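The ε = x − π subtraction is exactly where an extended-precision π earns its keep. Here's a minimal sketch in Python (assuming IEEE-754 doubles; the names `PI_HI`/`PI_LO` are mine, this is the standard split-constant trick, not code from any particular library): π is stored as a head word plus a small correction word, so the reduction recovers bits that `x - math.pi` alone would throw away.

```python
import math

# pi split into two doubles: PI_HI + PI_LO carries pi to roughly twice
# double precision. PI_HI is math.pi, the double nearest to pi; PI_LO is
# (pi - PI_HI) rounded to a double.
PI_HI = 3.141592653589793
PI_LO = 1.2246467991473532e-16

def sin_near_pi(x):
    """Approximate sin(x) for x very close to pi, via sin(pi + eps) = -eps + O(eps^3)."""
    # The first subtraction is exact for x near PI_HI (Sterbenz lemma);
    # subtracting PI_LO then restores the bits of pi that PI_HI lacks.
    eps = (x - PI_HI) - PI_LO
    return -eps

x = math.pi               # the double closest to pi; the true eps is -PI_LO, not 0
naive_eps = x - math.pi   # 0.0 -- the naive reduction loses all information about eps
print(sin_near_pi(x))     # tiny nonzero value, matching math.sin(x)
print(math.sin(x))
```

With the naive single-constant reduction, sin(math.pi) would come out as exactly 0 instead of ≈ 1.22e−16, i.e. 100% relative error, which is precisely the failure mode described above.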
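The "errors pile up over thousands of operations" point can be made concrete with compensated (Kahan) summation, a standard technique (my example, not something from this thread): each addition's rounding error is captured and fed back into the next step instead of being discarded.

```python
import math

def kahan_sum(values):
    """Sum floats while compensating for the rounding error of each addition."""
    total = 0.0
    comp = 0.0                   # running compensation: low-order bits lost so far
    for v in values:
        y = v - comp             # re-inject the error lost on the previous step
        t = total + y
        comp = (t - total) - y   # algebraically 0; in FP, the part just rounded away
        total = t
    return total

vals = [0.1] * 1_000_000
exact = math.fsum(vals)                 # correctly rounded reference sum
print(abs(sum(vals) - exact))           # naive loop: error accumulates over 10^6 adds
print(abs(kahan_sum(vals) - exact))     # compensated: dramatically smaller error
```

A million additions is small compared to a 6000-equation solve or a 100k-step integration, and the naive loop already visibly drifts; that is the pile-up being described.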