A (quite) accurate ln1+x function, or "how close can you get" part II - Printable Version

+- HP Forums (https://www.hpmuseum.org/forum)
+-- Forum: HP Calculators (and very old HP Computers) (/forum-3.html)
+--- Forum: General Forum (/forum-4.html)
+--- Thread: A (quite) accurate ln1+x function, or "how close can you get" part II (/thread-1072.html)

A (quite) accurate ln1+x function, or "how close can you get" part II - Dieter - 04-09-2014 06:44 PM

Over the last days there has been some discussion regarding the accurate evaluation of the TVM equation. For small interest rates a dedicated ln1+x function allows significantly better accuracy than the standard method. But such a function is missing on many calculators, as is its counterpart, a dedicated e^x–1 function.

There have been some suggestions on how to emulate a sufficiently accurate ln1+x function on calculators that do not offer one in their function set. On the one hand there is the widely known classic approach suggested by W. Kahan in the HP-15C Advanced Functions Handbook; on the other hand some other methods based on hyperbolic functions have been suggested, both for ln1+x and e^x–1.

I did some accuracy tests with these methods; the results have been posted in another thread. Among 100.000 random numbers between 1 and 10^–15 these methods showed errors of about 5...9 units in the last place (ULP).

I wanted to know if this can be improved, so I tried a new approach. It does not require any exotic hyperbolics and is based on a Taylor series:

Let $$u = 1+x$$, rounded to working precision. Then

$$\ln(1+x) \simeq \ln u - \frac{(u-1) - x}{u}$$

I did a test with this method, using the WP34s emulator with 16-digit standard precision. The following program was used to generate 100.000 random numbers between 1 and 10^–16. Depending on Flag A, either the classic HP/Kahan method (Flag A set) or the new method (Flag A clear) is used.
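For readers without a WP34s at hand, the same correction can be sketched in Python, using the decimal module to emulate 16-digit decimal arithmetic like the 34s. This is only a minimal sketch, and the helper name log1p_corrected is mine, not from any library or from the original post:

```python
# Sketch of the Taylor-based correction above, emulated with Python's
# decimal module at 16 significant digits (as on the WP34s).
# The helper name log1p_corrected is my own, not a standard function.
from decimal import Decimal, getcontext

getcontext().prec = 16  # 16-digit working precision

def log1p_corrected(x: Decimal) -> Decimal:
    u = Decimal(1) + x              # u = 1+x, rounded to 16 digits
    # ln(1+x) ~ ln(u) - ((u-1) - x)/u
    # Note: if 1+x rounds to exactly 1, then ln(u) = 0 and the
    # correction term reduces to x, so no special case is needed.
    return u.ln() - ((u - Decimal(1)) - x) / u

print(log1p_corrected(Decimal('1E-20')))   # tiny x: result equals x
print(log1p_corrected(Decimal('0.5')))     # ~ ln(1.5)
```

Each intermediate operation rounds to the context precision, which is exactly the situation the correction term is designed for.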
Code:

001 LBL D
002 CLSTK
003 STO 01
004 STO 02
005 # 001
006 SDL 005      ' 100.000 loops
007 STO 00
008 ,
009 4
010 7
011 1
012 1            ' set seed = 0,4711
013 SEED
014 LBL 55
015 RAN#
016 # 016
018 ×
019 +/-
020 10^x
021 STO 03
022 XEQ 88       ' call approximation
023 RCL 03
024 LN1+x
025 -
026 RCL L
027 ULP
028 /
029 STO↓ 01      ' largest negative error in R01
030 STO↑ 02      ' largest positive error in R02
031 DSE 00
032 GTO 55
033 RCL 01
034 RCL 02       ' return largest errors
035 RTN
036 LBL 88
037 FS? A        ' select method based on Flag A
038 GTO 89
039 ENTER        ' new method as suggested above
040 INC X
041 ENTER
042 DEC X
043 RCL- Z
044 x<>Y
045 /
046 +/-
047 RCL L
048 LN
049 +
050 RTN
051 LBL 89       ' HP/Kahan method as suggested in the HP-15C AFH
052 ENTER
053 INC X
054 LN
055 x<>Y
056 RCL L
057 1
058 x≠? Y
059 -
060 /
061 ×
062 RTN

And here are the results.

Select the HP/Kahan method: f [SF] [A] (the "=" symbol appears)
Start: [D] ("Running PrOGrAM"...)
Result in X and Y:

largest positive error: +8 ULP
largest negative error: –5 ULP

This matches the error level reported earlier. Now let's see how the new method compares.

Select the new method: g [CF] [A] (the "=" symbol disappears)
Start: [D] ("Running PrOGrAM"...)
Result in X and Y:

largest positive error: +2 ULP
largest negative error: –1 ULP

That looks much better. Further tests showed the following error distribution:

–2 ULP:      0
–1 ULP:  10553
±0 ULP:  74440
+1 ULP:  14996
+2 ULP:     11

Edit: Another run with 1 million random numbers shows the same pattern:

–2 ULP:       0
–1 ULP:  105163
±0 ULP:  744963
+1 ULP:  149766
+2 ULP:     108

So nearly 99,99% of the results are within ±1 ULP.

What do you think?

Dieter

RE: A (quite) accurate ln1+x function, or "how close can you get" part II - htom trites - 04-10-2014 04:47 AM

I'm curious about monotonic and inverse behaviors. For a and a+ULP, is (f(a+ULP) - f(a)) positive, zero, or negative?
Hopefully they'd all be positive, but it can happen that there's a place where you get a string of zeros, where f(a) is changing much more slowly than a. You shouldn't ever find a negative.

The value of the error of a – inverse(function(a)) and a – function(inverse(a)) would ideally always be zero, of course, but unless both the function and the inverse are absolutely monotonic that won't happen.

RE: A (quite) accurate ln1+x function, or "how close can you get" part II - Dieter - 04-11-2014 07:01 PM

Thank you very much for your reply.

(04-10-2014 04:47 AM)htom trites Wrote:  For a and a+ULP, is (f(a+ULP) - f(a)) positive, zero, or negative? Hopefully they'd all be positive

I assume you mean "in this case", i.e. for f(x) = ln(1+x).

(04-10-2014 04:47 AM)htom trites Wrote:  but it can happen that there's a place where you get a string of zeros, where f(a) is changing much more slowly than a. You shouldn't ever find a negative.

In a monotonically increasing function, yes. Hm, what about a test with a million random numbers? I did one just out of curiosity. There were no negatives.

(04-10-2014 04:47 AM)htom trites Wrote:  The value of the error of a – inverse(function(a)) and a – function(inverse(a)) would ideally always be zero, of course, but unless both the function and the inverse are absolutely monotonic that won't happen.

It won't happen in real-life calculators with limited accuracy either. ;-) Consider for instance sqrt(x) and its inverse x² with, say, 10 digits:

1,414213562 < sqrt(2) < 1,414213563
1,414213562² = 1,999999998
1,414213563² = 2,000000001

So a – inverse(function(a)) is either 2 ULP low or 1 ULP high, although both f(a) and its inverse are strictly monotonic.

I did some more tests of the ln1+x approximation suggested above. There is one weak point for negative x between –9,5 · 10^n and –10^(n–1), where n is the working precision (number of significant digits).
Here the suggested approximation is typically 5 ULP off, so in this small interval it's not better than the original HP/Kahan method. Otherwise it seems to work fine.

Dieter

RE: A (quite) accurate ln1+x function, or "how close can you get" part II - Albert Chan - 01-31-2019 07:04 PM

(04-11-2014 07:01 PM)Dieter Wrote:  I did some more tests of the ln1+x approximation suggested above. There is one weak point for negative x between –9,5 · 10^n and –10^(n–1), where n is the working precision (number of significant digits). Here the suggested approximation is typically 5 ULP off, so in this small interval it's not better than the original HP/Kahan method. Otherwise it seems to work fine.

The excess ULP error is due to the correction *lowering* the decimal exponent. It is not limited to the edge of working precision. (Note: the exponents above had the sign wrong.)

Example, crossing the -0.001 boundary: -0.001 = LN(1 - 0.0009995001666...), so try around the edge, say X = -0.00099950016:

LN(1+X) = LN(0.9990004998) = -1.000000033e-3        (error ~ 0.4 ULP)
correction = -(X+1-1-X) / (1+X) = +4.004002001e-11  (all digits correct)
log1p(X) ~ LN(1+X) + correction = -9.999999930e-4   (error = 4 ULP, exponent down 1)

The actual error, whether absolute (4e-13) or relative (4e-10), is not affected.

RE: A (quite) accurate ln1+x function, or "how close can you get" part II - Albert Chan - 02-01-2019 04:26 PM

(01-31-2019 07:04 PM)Albert Chan Wrote:  The excess ULP error is due to the correction *lowering* the decimal exponent.

To avoid the excess ULP error, we want the correction to have the same sign as X:

Y = 1+X, rounded toward 1.0
log1p(X) ~ LN(Y) - (Y-1-X)/Y

Previous example, log1p(X = -0.00099950016):

Y = round-toward-1 of 1+X = 0.9990004999 (10 digits)
log1p(X) ~ LN(Y) - (Y-1-X)/Y
         = -9.999999333e-4 - 6.006003001e-11
         = -9.999999934e-4 (all digits correct)
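The round-toward-1 rule above can likewise be sketched with Python's decimal module at 10 digits, which reproduces the worked example. Rounding 1+X toward 1.0 means rounding up when X is negative and down when X is positive; the helper name log1p_toward1 is mine, not from the post:

```python
# Sketch of the round-toward-1 variant, using Python's decimal module
# at 10 significant digits to mimic a 10-digit calculator.
# The helper name log1p_toward1 is my own, not a standard function.
from decimal import (Decimal, getcontext, localcontext,
                     ROUND_CEILING, ROUND_FLOOR)

getcontext().prec = 10  # 10-digit working precision

def log1p_toward1(x: Decimal) -> Decimal:
    # Y = 1+X, rounded toward 1.0: for x < 0 we have 1+x < 1, so round
    # up (toward 1); for x > 0, 1+x > 1, so round down (toward 1).
    with localcontext() as ctx:
        ctx.rounding = ROUND_CEILING if x < 0 else ROUND_FLOOR
        y = Decimal(1) + x
    # log1p(X) ~ LN(Y) - (Y - 1 - X)/Y; the correction now has the
    # same sign as X, so it cannot pull the result across a power of 10.
    return y.ln() - (y - Decimal(1) - x) / y

print(log1p_toward1(Decimal('-0.00099950016')))  # the example from the post
```

With this input the sketch follows the post's arithmetic step by step: Y rounds up to 0.9990004999, and the final subtraction stays on the correct side of the 10^–3 boundary.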