Error propagation in adaptive Simpson algorithm
07-31-2018, 05:02 PM (This post was last modified: 07-31-2018 05:05 PM by Claudio L..)
Post: #4
RE: Error propagation in adaptive Simpson algorithm
(07-31-2018 09:35 AM)Dieter Wrote:  Maybe a few general thoughts on this kind of algorithm may help.

First of all, adaptive Simpson methods have already been discussed here. Take a look at this thread and at this program.

The idea basically is to estimate the error of Simpson approximations with n and 2n intervals. This yields an even more accurate result with the value (16·S2n – Sn)/15. That's where the 15 comes from.

For more details on this take a look at the German Wikipedia article where this is explained. The error term E(N)(f) with the 16/15 factor is discussed earlier in that article.

Thanks, now I know where the 15 comes from, but we are talking about very different algorithms from the ones you linked to. I'm not even sure why the adaptive one is still called "Simpson", since it doesn't use the summation that is the hallmark of the composite Simpson method, but the error analysis still applies.
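For anyone following along, here is a rough sketch in Python (not the newRPL code; the function and interval are just placeholders) of the estimate behind that 15: compute the composite Simpson sum with n and with 2n intervals, estimate the error of S2n as (S2n - Sn)/15, and take the improved value (16·S2n - Sn)/15 by Richardson extrapolation.

Code:
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule with n subintervals (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def simpson_with_estimate(f, a, b, n):
    s_n = simpson(f, a, b, n)
    s_2n = simpson(f, a, b, 2 * n)
    err_est = (s_2n - s_n) / 15       # estimated error of S_2n
    better = (16 * s_2n - s_n) / 15   # Richardson-extrapolated value
    return better, err_est

# e.g. exp(-x) on [0, 20], true value 1 - exp(-20):
# print(simpson_with_estimate(lambda x: math.exp(-x), 0.0, 20.0, 64))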


(07-31-2018 09:35 AM)Dieter Wrote:  As far as the threshold for quitting the iteration is concerned: this is a general problem with any method that produces successive approximations of the true result. Since the latter is unknown all you can do is compare the last two approximations, i.e. the last and the previous one. If these two agree within the desired accuracy, the iteration stops and the result is returned. But – if the final and the previous value agree in the desired, say, 8 places, this means that already the previous value was sufficiently accurate! And since the last approximation is more accurate than this, it may be correct to e.g. 11 places. So its true accuracy is greater than required. That's what you observe here.

I understand, and I agree with your logic more than with the Wikipedia article on adaptive Simpson: if the difference between the area calculated with step h and the one calculated with h/2 is less than the target error, then we should be good, right?
The thing is, every time it halves the step size it also halves the target error, and that is what I'm questioning: perhaps it is too conservative, leading to the overshoot in accuracy. When you add all the tiny pieces back together, the error of the sum will never be the sum of the estimated errors, because they will tend to cancel each other randomly; it's more like:

Error_whole ≅ √N * AvgError

Couldn't we at least say that (for one single "split" section)

Error_whole ≅ √2 * AvgError

So we could say our target error wouldn't be halved at each split, but instead:

AvgError = Error_whole/√2 ≅ 0.7 * Error_whole

So we accept slightly more error as we reduce the step, knowing that when we look at the whole interval we will be very close to meeting the required error target. I guess the other approach tries to guarantee you'll meet the target, whereas this approach only aims to get close to it without necessarily guaranteeing it, but it could help reduce the number of unnecessary subdivisions.
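To make the comparison concrete, here is a minimal recursive adaptive-Simpson sketch in Python (again not the newRPL implementation, just an illustration with made-up names) where the only knob is the factor the tolerance is divided by at each split: 2 for the usual conservative rule, √2 for the relaxation I'm describing above.

Code:
import math

def _simpson(f, a, fa, b, fb):
    # Simpson estimate over [a, b]; also returns the midpoint and f(midpoint)
    m = (a + b) / 2
    fm = f(m)
    return m, fm, (b - a) / 6 * (fa + 4 * fm + fb)

def _adaptive(f, a, fa, b, fb, whole, m, fm, tol, split):
    lm, flm, left = _simpson(f, a, fa, m, fm)
    rm, frm, right = _simpson(f, m, fm, b, fb)
    delta = left + right - whole
    if abs(delta) <= 15 * tol:            # same (S2n - Sn)/15 error estimate
        return left + right + delta / 15  # Richardson correction
    child_tol = tol / split               # split = 2 (strict) or sqrt(2) (relaxed)
    return (_adaptive(f, a, fa, m, fm, left, lm, flm, child_tol, split) +
            _adaptive(f, m, fm, b, fb, right, rm, frm, child_tol, split))

def adaptive_simpson(f, a, b, tol, split=2.0):
    fa, fb = f(a), f(b)
    m, fm, whole = _simpson(f, a, fa, b, fb)
    return _adaptive(f, a, fa, b, fb, whole, m, fm, tol, split)

# e.g. exp(-x) on [0, 20] with a 1E-8 target:
# adaptive_simpson(lambda x: math.exp(-x), 0.0, 20.0, 1e-8)                 # tol/2
# adaptive_simpson(lambda x: math.exp(-x), 0.0, 20.0, 1e-8, math.sqrt(2))   # tol/sqrt(2)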


(07-31-2018 09:35 AM)Dieter Wrote:  Here and there the exit condition can be improved. Assume you know for sure (!) that the iteration converges quadratically and the difference between the last two approximations was 1E–2, 1E–4 and 1E–8. If the next approximation can be expected to change the result only in the 16th digit this means that the current (!) approximation already is this precise. So if you need 12 places you may stop now although the last two approximations only agreed in 8 places. Because one more iteration will only change the result in the 16th place.

Isn't the error proportional to h^4? Anyway, I'm not so sure how this one behaves, because the interval is subdivided more where the function is less well-behaved and less elsewhere, as long as the target error is met.
It's basically "minimal effort to locally achieve a target error".

You do have a good point: perhaps the overshoot in accuracy comes from the fact that halving the step increases the accuracy a lot more than we needed.
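A quick numeric illustration of that point (again a Python sketch, not output from the calculator): for a smooth integrand the composite Simpson error scales like h^4, so each halving of the step buys roughly a factor of 16 in accuracy while the local tolerance only shrinks by 2 (or √2), which leaves plenty of room for overshoot.

Code:
import math

def simpson(f, a, b, n):
    # Composite Simpson's rule with n subintervals (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

exact = 1.0 - math.exp(-20.0)
for n in (32, 64, 128, 256):
    err = abs(simpson(lambda x: math.exp(-x), 0.0, 20.0, n) - exact)
    print(n, err)   # each doubling of n shrinks the error by roughly 16x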


(07-31-2018 09:35 AM)Dieter Wrote:  BTW, integrating exp(–x) up to 500 doesn't make sense. At x=20 the function value has dropped to about 2E–9 so anything beyond this will not affect the first 8 decimals.

Dieter

I know, it came up in another thread as a speed test, and I wanted to compare speeds as well. But I tested with several other functions (polynomials, exponentials combined with trig, trig alone, etc.) and I always get a few more digits than I requested, at the expense of a huge amount of wasted time.
For that example, using 1E-8 as the target error I get 11 good digits in 3 seconds (741 function evaluations); if I request 11 digits I get 13 good digits, but it takes 15 seconds and 4293 function evaluations.
That is the real reason I'm seeking advice: to see if perhaps we can do better.