Accuracy of Integral with epsilon
10-10-2021, 02:07 PM
Post: #21
RE: Accuracy of Integral with epsilon
(10-10-2021 11:51 AM)Albert Chan Wrote:  
(10-09-2021 09:50 PM)robve Wrote:  You do realize that it is a fallacy to assume exp(x) and cosh(x)+sinh(x) equate numerically, in addition to algebraically?

It is interesting that my MAPM implementation of raw_exp() actually uses the above "identity".
Of course, |x| is limited to a tiny value, below 1E-4, so catastrophic cancellation is not an issue.

sinh(x) converges with half as many terms compared with exp(x):

exp(x) = 1 + x + x^2/2! + x^3/3! + ...
sinh(x) = x + x^3/3! + x^5/5! + ...

You're right. To summarize:

1. cosh(x)+sinh(x) suffers from catastrophic cancellation for negative x, with roughly |x| digits lost, e.g. cosh(-10)+sinh(-10)-exp(-10) = -0.000000099, and it is worse when taking exp(-10)/(cosh(-10)+sinh(-10)) = 1.0022 (depending on the calculator's precision); a small C sketch illustrating this follows the list.
2. sinh(x) implementations may use sinh(x) = (z+z/(z+1))/2 with z=expm1(x), which improves the accuracy of sinh(x) for x<0 whereas cosh(x) is unaffected, another example of an algebraic identity that need not hold numerically.
3. Additional floating-point noise not attributable to the above.
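
For concreteness, here is a minimal C sketch of point 1 (illustration only; the loop range and output formatting are mine). With an IEEE 754 double libm it shows the relative error of cosh(x)+sinh(x) against exp(x) growing to roughly 10^-8 at x = -10, consistent with the ~|x| digits lost noted in point 1:

Code:
/* Sketch of point 1: for negative x, cosh(x) and sinh(x) are two nearly
 * opposite numbers of size ~exp(-x)/2, so their sum loses on the order of
 * |x| decimal digits relative to exp(x). */
#include <stdio.h>
#include <math.h>

int main(void) {
  for (double x = -2; x >= -10; x -= 2) {
    double s = cosh(x) + sinh(x);  /* algebraically exp(x), numerically noisy */
    double e = exp(x);
    printf("x = %5.1f  cosh+sinh = %.17g  exp = %.17g  rel.err = %.1e\n",
           x, s, e, fabs(s/e - 1));
  }
  return 0;
}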

This integrand is a really bad one for comparing numerical integration across calculators; the results are not all that meaningful. Picking eps=10^-4 does not necessarily prevent the issues (see point 1). I might wager that a less sophisticated calculator may produce a more "accurate" result close to 40.

As a funny consequence of playing around with this for a bit, I found a bug in Boost Tanh-Sinh that produces 20 as the answer to the integral \( \int_{-10}^{10} 2\,dx \). I could not believe it at first, but it is a specific problem.

- Rob

"I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx...
10-12-2021, 04:47 PM
Post: #22
RE: Accuracy of Integral with epsilon
(10-09-2021 08:37 PM)robve Wrote:  
(10-09-2021 02:27 AM)Paul Dale Wrote:  It's neither complex nor state of the art.

How many function evaluations should be performed?

Clearly one evaluation is sufficient to get the correct answer here. Two might be prudent to verify that it is a constant function. Three perhaps?

Probing a few points can never guarantee that a given function is constant. That should be obvious, no?

Naive implementations of Gauss-Legendre quadrature may stop at two or three points, which already fit a constant (a low-order polynomial) exactly. However, it would be detrimental to give up that early; evidently, calculators run quite a bit longer on this function for a good reason.

My point about Tanh-Sinh was simply that this method doesn't require a huge number of points to converge accurately, and it is used in the WP 34S... so draw your own conclusions. Tanh-Sinh simply works well for this type of function based on its syntactical structure (duh). More importantly, the function is NOT numerically constant in its present non-simplified form, which is to be expected: noise increases quickly towards -10 and beyond, and is already on the order of 10^-2 at x=-18. Tabulating the error in Excel from x=-10 to -8.9 in steps of 0.1 gives:

-5.31248E-08
-6.01538E-08
-5.39791E-08
-1.77564E-08
-1.86315E-08
-1.64802E-08
-5.18465E-10
-8.54069E-09
-1.47008E-08
1.40072E-08
-6.26628E-09
-5.77673E-09

This amount of noise is sufficient to trip up quadrature methods. Of course, the noise may differ with non-IEEE 754 double precision and with different implementations of the constituent functions EXP, SINH and COSH. However, as a general consequence of the noise and floating-point limitations, you can either get lucky and reach the exact result 40 with a few points, or get unlucky, which can cost a great deal of time wasted evaluating points in the presence of noise. The WP 34S and qthsh points reported show exactly what I mean. Also, attempting to push the accuracy beyond about 10^-8 is pointless. There is too much noise to make definitive conclusions.
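
The table above can be reproduced, at least in order of magnitude, with a short C program. The actual non-simplified integrand is defined earlier in the thread and not repeated here, so the sketch below uses exp(x)/(cosh(x)+sinh(x)) - 1 as a stand-in for the noise; the magnitudes, not the exact Excel values, are what matter:

Code:
/* Tabulate floating-point noise of a quantity that is algebraically zero:
 * exp(x)/(cosh(x)+sinh(x)) - 1, for x = -10 to -8.9 in steps of 0.1.
 * Stand-in for the integrand's noise; exact values depend on the libm. */
#include <stdio.h>
#include <math.h>

int main(void) {
  for (int i = 0; i <= 11; i++) {
    double x = -10.0 + i*0.1;
    printf("%5.1f  % .5E\n", x, exp(x)/(cosh(x) + sinh(x)) - 1);
  }
  return 0;
}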

- Rob

Going to variable precision shows that COSH()-SINH() typically creates noise in the last 8 or 9 digits. So if your precision is 16 digits (like Excel), you end up with the 10^-8 that you correctly recorded.
So I guess the trick is: as long as you don't ask the integration algorithm for an error smaller than the system precision less 9 digits, you'll be fine and get a nice result, with an error on the order of (precision-9) digits, regardless of the tolerance you requested. For example, I set newRPL to 120 digits and request 1E-60 tolerance, and adaptive Simpson very quickly returns 40 with 112 correct digits and noise in the last 8. With the standard 32 digits I get an error of 10^-23. If I request a tolerance below 10^-23, the algorithm will literally "hang" in an infinite loop, trying to refine the step unsuccessfully.
10-14-2021, 02:43 AM
Post: #23
RE: Accuracy of Integral with epsilon
(10-12-2021 04:47 PM)Claudio L. Wrote:  So I guess the trick is: as long as you don't ask the integration algorithm for an error smaller than the system precision less 9 digits, you'll be fine and get a nice result, with an error on the order of (precision-9) digits, regardless of the tolerance you requested.

Catastrophic cancellation errors increase exponentially towards negative infinity for this integrand.

I may be going off on a tangent in this thread, but what other sources of noise are there, besides cancellation and roundoff/implementation differences in cosh/sinh/exp? Can the integration method itself be noisy? I'd expect that to be insignificant. It may depend on the accuracy of the weight calculations. Simple weights pose no problem; Simpson and Romberg, for example, are exact for (low-order) polynomials anyway. Weights in table-based methods such as Gauss-Kronrod may accumulate small errors in the weight calculations. I was curious to test this with the simple constant integral \( \int_{-10}^{10} 2\,dx \) and eps=10^-6 tolerance (or thereabouts, whatever works to get comparable results w.r.t. the number of evaluation points):

- Gauss-Kronrod G10K21 with IEEE 754 doubles: 40.000000000000007
- WP-34S Tanh-Sinh C version with IEEE 754 doubles: 39.999999999987885 (23 points)
- qthsh Tanh-Sinh with IEEE 754 doubles: 40.000000000000057 (29 points)
- Boost Math Tanh-Sinh with IEEE 754 doubles: 40.000000000001464 (25 points)

Tanh-Sinh methods may suffer "weight issues" due to their point distributions. There are also numerical differences in the implementations. WP-34S and qthsh use the more efficient Michalski-Mosig quadrature method. Boost Math uses the standard Takahashi-Mori Tanh-Sinh quadrature method. And there are other internal differences. WP-34S computes the weights as follows (see forum thread):

Code:
      // t is the abscissa parameter of the Tanh-Sinh transformation x = tanh(pi/2*sinh(t))
      double ch = cosh(t);
      double ecs = cosh(M_PI/2 * sqrt(ch*ch - 1)); // = cosh(pi/2*sinh(t))
      double w = 1/(ecs*ecs);
      double r = sqrt(ecs*ecs - 1)/ecs; // = sinh(pi/2*sinh(t))/cosh(pi/2*sinh(t)) = tanh(pi/2*sinh(t))

Whereas qthsh computes the weights as follows:

Code:
      // here t = exp(j*h), so 1/t - t = -2*sinh(j*h)
      double u = exp(1/t-t); // = exp(-2*sinh(j*h)) = 1/exp(sinh(j*h))^2
      double r = 2*u/(1+u);  // = 1 - tanh(sinh(j*h))
      double w = (t+1/t)*r/(1+u); // = cosh(j*h)/cosh(sinh(j*h))^2

The noise here is still rather insignificant compared to the cancellation, and it diminishes toward 10^-15 for smaller given tolerances as Tanh-Sinh evaluates more points.
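
As an aside on the qthsh weight code above: the reason for computing r as 2*u/(1+u) rather than as 1 - tanh(sinh(j*h)) directly is that the latter cancels catastrophically near the interval endpoints. Here is a small sketch (my own illustration, not taken from either code base), where s stands in for sinh(j*h):

Code:
/* Near the endpoints, 1 - tanh(s) computed by direct subtraction loses more
 * and more digits as s grows (and gives 0 once tanh(s) rounds to 1), while
 * the algebraically equivalent 2u/(1+u) with u = exp(-2s) keeps full
 * relative precision. */
#include <stdio.h>
#include <math.h>

int main(void) {
  for (double s = 1; s <= 21; s += 5) {  /* s plays the role of sinh(j*h) */
    double direct = 1 - tanh(s);         /* cancellation for large s */
    double u = exp(-2*s);
    double stable = 2*u/(1 + u);         /* same quantity, no subtraction */
    printf("s = %4.1f  1-tanh(s) = %.17g  2u/(1+u) = %.17g\n",
           s, direct, stable);
  }
  return 0;
}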

- Rob

"I count on old friends" -- HP 71B,Prime|Ti VOY200,Nspire CXII CAS|Casio fx-CG50...|Sharp PC-G850,E500,2500,1500,14xx,13xx,12xx...