Approximating function derivatives
01-24-2024, 02:31 PM (This post was last modified: 01-24-2024 02:36 PM by Pekis.)
Post: #1
Approximating function derivatives
Here is a subject slightly extended from the book "La calculatrice scientifique" Vangeluwe/Glorieux, 1979:

The point is to approximate function derivatives (targets are f(x0) or f'(x0) or f"(x0) or f"'(x0)) with the help of neighboring points (at distance h and h/2 from both sides of x0)

So, knowing h, find a, b, c, d such that
a * f(x0 + h) + b * f(x0 - h) + c * f(x0 + h/2) + d * f(x0 - h/2) ~= f(x0) or f'(x0) or f"(x0) or f"'(x0), i.e. each target needs its own (a,b,c,d) set

In any case, with Taylor:
f(x0 + h) = f(x0) + h * f'(x0) + h^2/2! * f"(x0) + h^3/3! * f"'(x0) + ...
f(x0 - h) = f(x0) - h * f'(x0) + h^2/2! * f"(x0) - h^3/3! * f"'(x0) + ...
f(x0 + h/2) = f(x0) + h/2 * f'(x0) + h^2/(2^2*2!) * f"(x0) + h^3/(2^3*3!) * f"'(x0) + ...
f(x0 - h/2) = f(x0) - h/2 * f'(x0) + h^2/(2^2*2!) * f"(x0) - h^3/(2^3*3!) * f"'(x0) + ...

=> Solve system:
a + b + c + d = 1 if target f(x0), else 0
a - b + c/2 - d/2 = 1/h if target f'(x0), else 0
a + b + c/4 + d/4 = 2!/(h^2) if target f"(x0), else 0
a - b + c/8 - d/8 = 3!/(h^3) if target f"'(x0), else 0

Results:
If target f(x0): a = -1/6, b = -1/6, c = 2/3, d = 2/3
If target f'(x0): a = -1/(6*h), b = 1/(6*h), c = 4/(3*h), d = -4/(3*h)
If target f"(x0): a = 4/(3*h^2), b = 4/(3*h^2), c = -4/(3*h^2), d = -4/(3*h^2)
If target f"'(x0): a = 4/(h^3), b = -4/(h^3), c = -8/(h^3), d = 8/(h^3)
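
These can be verified with a minimal sympy sketch (sympy assumed available; the variable names are illustrative) that solves the 4×4 Taylor system for each target in turn:

Code:
from sympy import symbols, linsolve

h = symbols("h", positive=True)
a, b, c, d = symbols("a b c d")

# The four bracketed sums from the system above:
# combined coefficients of f, h*f', (h^2/2!)*f'', (h^3/3!)*f'''
rows = [a + b + c + d,
        a - b + (c - d)/2,
        a + b + (c + d)/4,
        a - b + (c - d)/8]

# Right-hand sides when the target is f, f', f'', f''' respectively
rhs = [1, 1/h, 2/h**2, 6/h**3]

for i, name in enumerate(["f(x0)", "f'(x0)", "f''(x0)", "f'''(x0)"]):
    eqs = [rows[j] - (rhs[i] if j == i else 0) for j in range(4)]
    print(name, linsolve(eqs, a, b, c, d))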

Here are the results for f(x) = ln(x), h = 0.001, at x0 = 3, from Excel:

[Image: Excel results table]

It seems nice for f(x0) and f'(x0), acceptable for f"(x0), but loses precision for f"'(x0).

Did you already know that method?
01-25-2024, 11:38 PM (This post was last modified: 01-26-2024 12:01 AM by Thomas Klemm.)
Post: #2
RE: Approximating function derivatives
(01-24-2024 02:31 PM)Pekis Wrote:  Did you already know that method?

It was new to me. Thank you for sharing it with us here.

I used the following Python code to print the formulas below:
Code:
from sympy import series, latex, symbols, Function, Matrix

x_0, h = symbols("x_0 h")
f = Function("f")

for x in [x_0 - 2*h, x_0 - h, x_0 + h, x_0 + 2*h]:
    print(f"{latex(f(x))} &= {latex(series(f(x), h, 0, 4))} \\\\")
    print("\\\\")

\(
\begin{align}
f{\left(- 2 h + x_{0} \right)} &= f{\left(x_{0} \right)} - 2 h \left. \frac{d}{d \xi_{1}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }} + 2 h^{2} \left. \frac{d^{2}}{d \xi_{1}^{2}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }} - \frac{4 h^{3} \left. \frac{d^{3}}{d \xi_{1}^{3}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }}}{3} + O\left(h^{4}\right) \\
\\
f{\left(- h + x_{0} \right)} &= f{\left(x_{0} \right)} - h \left. \frac{d}{d \xi_{1}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }} + \frac{h^{2} \left. \frac{d^{2}}{d \xi_{1}^{2}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }}}{2} - \frac{h^{3} \left. \frac{d^{3}}{d \xi_{1}^{3}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }}}{6} + O\left(h^{4}\right) \\
\\
f{\left(h + x_{0} \right)} &= f{\left(x_{0} \right)} + h \left. \frac{d}{d \xi_{1}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }} + \frac{h^{2} \left. \frac{d^{2}}{d \xi_{1}^{2}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }}}{2} + \frac{h^{3} \left. \frac{d^{3}}{d \xi_{1}^{3}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }}}{6} + O\left(h^{4}\right) \\
\\
f{\left(2 h + x_{0} \right)} &= f{\left(x_{0} \right)} + 2 h \left. \frac{d}{d \xi_{1}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }} + 2 h^{2} \left. \frac{d^{2}}{d \xi_{1}^{2}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }} + \frac{4 h^{3} \left. \frac{d^{3}}{d \xi_{1}^{3}} f{\left(\xi_{1} \right)} \right|_{\substack{ \xi_{1}=x_{0} }}}{3} + O\left(h^{4}\right) \\
\end{align}
\)

This leads to the following matrix that can be inverted:
Code:
M = Matrix([
    [1, -2, 2, '-4/3'],
    [1, -1, '1/2', '-1/6'],
    [1, 1, '1/2', '1/6'],
    [1, 2, 2, '4/3'],
])
M.inv()

\(
\left[\begin{matrix}
- \frac{1}{6} & \frac{2}{3} & \frac{2}{3} & - \frac{1}{6}\\
\frac{1}{12} & - \frac{2}{3} & \frac{2}{3} & - \frac{1}{12}\\
\frac{1}{3} & - \frac{1}{3} & - \frac{1}{3} & \frac{1}{3}\\
- \frac{1}{2} & 1 & -1 & \frac{1}{2}
\end{matrix}\right]
\)
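
As a numeric cross-check, here is a minimal sketch (assuming numpy; f = ln, x0 = 3, h = 0.001 as in the example further down). Applying this inverse to the four samples yields [f, f'·h, f''·h², f'''·h³], so the powers of h are divided out at the end:

Code:
import numpy as np

x0, h = 3.0, 1e-3
f = np.log
samples = np.array([f(x0 - 2*h), f(x0 - h), f(x0 + h), f(x0 + 2*h)])
Minv = np.array([[-1/6,  2/3,  2/3, -1/6],
                 [1/12, -2/3,  2/3, -1/12],
                 [ 1/3, -1/3, -1/3,  1/3],
                 [-1/2,    1,   -1,  1/2]])
# rows give f, f'*h, f''*h^2, f'''*h^3; divide by h^0..h^3
print(Minv @ samples / h**np.arange(4))
# -> approx [1.098612289, 0.333333333, -0.111111111, 0.0740741]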

It is used in this program for the HP-42S:
Code:
00 { 109-Byte Prgm }
01▸LBL "DIFF"
02 STO 01
03 R↓
04 STO 00
05 RCL 01
06 2
07 ×
08 -
09 XEQ 00
10 STO 02
11 RCL 00
12 RCL 01
13 -
14 XEQ 00
15 STO 03
16 RCL 00
17 RCL 01
18 +
19 XEQ 00
20 STO 04
21 RCL 00
22 RCL 01
23 2
24 ×
25 +
26 XEQ 00
27 STO 05
28 RCL 04
29 RCL 03
30 +
31 4
32 ×
33 RCL 05
34 RCL 02
35 +
36 -
37 6
38 ÷
39 STOP
40 RCL 04
41 RCL 03
42 -
43 4
44 ×
45 RCL 05
46 RCL 02
47 -
48 2
49 ÷
50 -
51 6
52 ÷
53 RCL 01
54 ÷
55 STOP
56 RCL 05
57 RCL 02
58 +
59 RCL 04
60 RCL 03
61 +
62 -
63 3
64 ÷
65 RCL 01
66 X↑2
67 ÷
68 STOP
69 RCL 05
70 RCL 02
71 -
72 2
73 ÷
74 RCL 04
75 RCL 03
76 -
77 -
78 RCL 01
79 3
80 Y↑X
81 ÷
82 RTN
83▸LBL 00
84 LN
85 END

Registers

R00: \(x_0\)
R01: \(h\)
R02: \(f(x_0 - 2h)\)
R03: \(f(x_0 - h)\)
R04: \(f(x_0 + h)\)
R05: \(f(x_0 + 2h)\)

Example

3
1E-3
XEQ "DIFF"

1.098612289

R/S

0.333333333

R/S

-0.111111142

R/S

0.074074099
01-26-2024, 02:36 AM (This post was last modified: 01-26-2024 02:44 AM by Albert Chan.)
Post: #3
RE: Approximating function derivatives
Let F(x) = f(x0 + x*h), so that F's arguments are nice integers ±1, ±2
Let D = the derivatives of F(x) at x = 0

F(1) ≈ F(0) + F'(0)*1 + F''(0)*1²/2! + F'''(0)*1³/3! --> coef = 1, 1, 1/2, 1/6
F(-1), by symmetry --> coef = 1, -1, 1/2, -1/6
F(2) ≈ F(0) + F'(0)*2 + F''(0)*2²/2! + F'''(0)*2³/3! --> coef = 1, 2, 2, 4/3
F(-2), by symmetry --> coef = 1, -2, 2, -4/3

Apply some symmetry before matrix inversion:
Code:
| 2 0 1  0  |           | F(1) + F(-1) |    
| 0 2 0 1/3 | × D = B = | F(1) - F(-1) |
| 2 0 4  0  |           | F(2) + F(-2) |    
| 0 4 0 8/3 |           | F(2) - F(-2) |

XCas> R := inv([[2,0,1,0],[0,2,0,1/3],[2,0,4,0],[0,4,0,8/3]])

\(\left(\begin{array}{cccc}
\frac{2}{3} & 0 & \frac{-1}{6} & 0 \\
0 & \frac{2}{3} & 0 & \frac{-1}{12} \\
\frac{-1}{3} & 0 & \frac{1}{3} & 0 \\
0 & -1 & 0 & \frac{1}{2}
\end{array}\right) \)

XCas> f, x0, h := ln, 3., 0.001/2
XCas> F(x) := f(x0 + x*h)
XCas> X := [1, -1, 2, -2]
XCas> B := map(F, X)

[1.09877894145, 1.09844560811, 1.09894556646, 1.09827889977]

XCas> B := [B[0]+B[1], B[0]-B[1], B[2]+B[3], B[2]-B[3]]

[2.19722454956, 0.00033333333642, 2.19722446623, 0.000666666691358]

XCas> D := R * B; /* = transpose(R * transpose(B)) */

[1.09861228867, 0.000166666666667, -2.7777779632e-08, 9.25926002537e-12]

Scale the derivatives of F(x) at x = 0 to those of f(x) at x = x0 (divide the p-th derivative by h^p):

XCas> D / h^range(len(D))

[1.09861228867, 0.333333333333, -0.111111118528, 0.074074080203]
01-26-2024, 03:29 AM
Post: #4
RE: Approximating function derivatives
The WP 34S was the first calculator I'm aware of that included a numerical derivative function (actually several flavours, from memory).

The very first "unexpected" issue was using the solver to find where the derivative was zero. I do not know why I had never thought about this possibility. More so, since it is the completely natural outcome of supporting both solve and f'.

As for numeric algorithms for this, check Wikipedia. There are different solutions depending on whether they are one-sided, two-sided, etc.

I must note that numeric algorithms for a function's derivative are inherently unstable. I suspect that this is the reason there has been *far* more effort put into integration (which is intractable in general).


Pauli
01-26-2024, 08:33 AM
Post: #5
RE: Approximating function derivatives
Following Albert's approach we can disentangle the calculation of odd and even derivatives.

\(
\begin{align}
f_{+1} &= f(x_0 + h) &= f(x_0) + {f}'(x_0) h + \frac{1}{2}{f}''(x_0) h^2 + \frac{1}{6}{f}'''(x_0) h^3 + \mathcal{O}(h^4) \\
\\
f_{-1} &= f(x_0 - h) &= f(x_0) - {f}'(x_0) h + \frac{1}{2}{f}''(x_0) h^2 - \frac{1}{6}{f}'''(x_0) h^3 + \mathcal{O}(h^4) \\
\\
f_{+2} &= f(x_0 + 2h) &= f(x_0) + 2{f}'(x_0) h + 2{f}''(x_0) h^2 + \frac{4}{3}{f}'''(x_0) h^3 + \mathcal{O}(h^4) \\
\\
f_{-2} &= f(x_0 - 2h) &= f(x_0) - 2{f}'(x_0) h + 2{f}''(x_0) h^2 - \frac{4}{3}{f}'''(x_0) h^3 + \mathcal{O}(h^4) \\
\\
\end{align}
\)

Even derivatives

\(
\begin{align}
f_{+1} + f_{-1} &= 2f(x_0) + {f}''(x_0) h^2 + \mathcal{O}(h^4) \\
f_{+2} + f_{-2} &= 2f(x_0) + 4{f}''(x_0) h^2 + \mathcal{O}(h^4) \\
\end{align}
\)

Or:

\(
\begin{bmatrix}
f_{+1} + f_{-1} \\
f_{+2} + f_{-2} \\
\end{bmatrix}
\approx
\begin{bmatrix}
2 & 1 \\
2 & 4 \\
\end{bmatrix}
\cdot
\begin{bmatrix}
f(x_0) \\
{f}''(x_0) h^2 \\
\end{bmatrix}
\)


This leads to:

\(
\begin{bmatrix}
f(x_0) \\
{f}''(x_0) h^2 \\
\end{bmatrix}
\approx
\begin{bmatrix}
\frac{2}{3} & - \frac{1}{6} \\
- \frac{1}{3} & \frac{1}{3} \\
\end{bmatrix}
\cdot
\begin{bmatrix}
f_{+1} + f_{-1} \\
f_{+2} + f_{-2} \\
\end{bmatrix}
\)

Odd derivatives

\(
\begin{align}
f_{+1} - f_{-1} &= 2{f}'(x_0) h + \frac{1}{3}{f}'''(x_0) h^3 + \mathcal{O}(h^4) \\
f_{+2} - f_{-2} &= 4{f}'(x_0) h + \frac{8}{3}{f}'''(x_0) h^3 + \mathcal{O}(h^4) \\
\end{align}
\)

Or:

\(
\begin{bmatrix}
f_{+1} - f_{-1} \\
f_{+2} - f_{-2} \\
\end{bmatrix}
\approx
\begin{bmatrix}
2 & \frac{1}{3} \\
4 & \frac{8}{3} \\
\end{bmatrix}
\cdot
\begin{bmatrix}
{f}'(x_0) h \\
{f}'''(x_0) h^3 \\
\end{bmatrix}
\)


This leads to:

\(
\begin{bmatrix}
{f}'(x_0) h \\
{f}'''(x_0) h^3 \\
\end{bmatrix}
\approx
\begin{bmatrix}
\frac{2}{3} & - \frac{1}{12} \\
- 1 & \frac{1}{2} \\
\end{bmatrix}
\cdot
\begin{bmatrix}
f_{+1} - f_{-1} \\
f_{+2} - f_{-2} \\
\end{bmatrix}
\)
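
The same even/odd split as a minimal Python sketch (plain math module; f = ln, x0 = 3, h = 0.001 are illustrative):

Code:
from math import log

x0, h = 3.0, 1e-3
f = log
s1 = f(x0 + h) + f(x0 - h)        # f_{+1} + f_{-1}
s2 = f(x0 + 2*h) + f(x0 - 2*h)    # f_{+2} + f_{-2}
d1 = f(x0 + h) - f(x0 - h)        # f_{+1} - f_{-1}
d2 = f(x0 + 2*h) - f(x0 - 2*h)    # f_{+2} - f_{-2}

print(2*s1/3 - s2/6)              # f(x0)
print((2*d1/3 - d2/12) / h)       # f'(x0)
print((-s1/3 + s2/3) / h**2)      # f''(x0)
print((-d1 + d2/2) / h**3)        # f'''(x0)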

This allows us to optimise the program for the HP-42S a bit:
Code:
00 { 103-Byte Prgm }
01▸LBL "DIFF"
02 STO 01
03 R↓
04 STO 00
05 RCL 01
06 -
07 XEQ 00
08 STO 02
09 RCL 00
10 RCL 01
11 +
12 XEQ 00
13 ENTER
14 X<> 02
15 STO+ 02
16 -
17 STO 03
18 RCL 00
19 RCL 01
20 2
21 ×
22 -
23 XEQ 00
24 STO 04
25 RCL 00
26 RCL 01
27 2
28 ×
29 +
30 XEQ 00
31 ENTER
32 X<> 04
33 STO+ 04
34 -
35 STO 05
36 RCL 02
37 4
38 ×
39 RCL 04
40 -
41 6
42 ÷
43 STOP
44 RCL 03
45 8
46 ×
47 RCL 05
48 -
49 12
50 ÷
51 RCL 01
52 ÷
53 STOP
54 RCL 04
55 RCL 02
56 -
57 3
58 ÷
59 RCL 01
60 X↑2
61 ÷
62 STOP
63 RCL 05
64 2
65 ÷
66 RCL 03
67 -
68 RCL 01
69 3
70 Y↑X
71 ÷
72 RTN
73▸LBL 00
74 LN
75 END

Registers

R00: \(x_0\)
R01: \(h\)
R02: \(f_{+1} + f_{-1}\)
R03: \(f_{+1} - f_{-1}\)
R04: \(f_{+2} + f_{-2}\)
R05: \(f_{+2} - f_{-2}\)
01-26-2024, 01:32 PM
Post: #6
RE: Approximating function derivatives
(01-24-2024 02:31 PM)Pekis Wrote:  It seems nice for f(x0) and f'(x0), acceptable for f"(x0), but loses precision for f"'(x0).

f(x0+h) = f(x0) + f'(x0)*h + f''(x0)*h²/2! + f'''(x0)*h³/3! + O(h^4)

If we fit the points f(x0±h), f(x0±2h) and estimate f(x0), the error = O(h^4)

Rewrite the same Taylor series, Horner style:

f(x0+h) = f(x0) + h*( f'(x0) + h/2*( f''(x0) + h/3*( f'''(x0) + O(h) )))

The estimated f'''(x0) has error = O(h), much worse.

(01-26-2024 08:33 AM)Thomas Klemm Wrote:  Even derivatives

\(
\begin{align}
f_{+1} + f_{-1} &= 2f(x_0) + {f}''(x_0) h^2 + \mathcal{O}(h^4) \\
f_{+2} + f_{-2} &= 2f(x_0) + 4{f}''(x_0) h^2 + \mathcal{O}(h^4) \\
\end{align}
\)

Let f±k = fk + f-k,      where fk = f(x0 + k*h)

Ignoring O(h^4) and solving for f''(x0), we have:

\(\displaystyle
{f}''(x_0) ≈ \left(-\frac{1}{3}f_{±1} + \frac{1}{3}f_{±2}\right) /\,h^2
\)

If we also have the missing middle point, (x0, f(x0)), f''(x0) can be estimated slightly better.

\(
\begin{align}
(f_{±1} - 2 f_{0}) &= {f}''(x_0)\; h^2 + \mathcal{O}(h^4) \\
(f_{±2} - 2 f_{0}) &= {f}''(x_0)\; (2h)^2 + \mathcal{O}((2h)^4) \\
\end{align}
\)

Instead of ignoring the error terms, this time we cancel them:

\( 16×(f_{±1} - 2 f_0) - (f_{±2} - 2 f_0) ≈ {f}''(x_0)\; h^2 × (16 - 4) \)

\(\displaystyle
{f}''(x_0) ≈ \left(-\frac{5}{2}f_0 + \frac{4}{3}f_{±1} - \frac{1}{12}f_{±2}\right) /\,h^2
\)
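
A minimal Python sketch (illustrative: f = ln, x0 = 3, h = 0.001; exact f''(3) = -1/9) comparing the two estimates:

Code:
from math import log

x0, h = 3.0, 1e-3
f = lambda k: log(x0 + k*h)       # sample on the grid x0 + k*h
s1 = f(1) + f(-1)                 # f_{±1}
s2 = f(2) + f(-2)                 # f_{±2}

d2_without_f0 = (-s1/3 + s2/3) / h**2
d2_with_f0 = (-5*f(0)/2 + 4*s1/3 - s2/12) / h**2
print(d2_without_f0, d2_with_f0, -1/9)
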
01-26-2024, 09:51 PM (This post was last modified: 01-27-2024 12:39 PM by Albert Chan.)
Post: #7
RE: Approximating function derivatives
(01-26-2024 01:32 PM)Albert Chan Wrote:  Let f±k = fk + f-k,      where fk = f(x0 + k*h)

Ignoring O(h^4) and solving for f''(x0), we have:

\(\displaystyle
{f}''(x_0) ≈ \left(-\frac{1}{3}f_{±1} + \frac{1}{3}f_{±2}\right) /\,h^2
\)

If h is tiny, the above setup may cause massive cancellation.
It is better to write the formula using finite difference operators.
Bonus: it explains why the denominator is 1 - (-2) = 3

\(\displaystyle D^2\;h^2 ≈ \frac{Δ_{1} - Δ_{-2}}{3}\)

lua> x0, h = 3, 0.001/2
lua> f = setmetatable({}, {__index = fn'T,k: rawset(T,k,log(x0+k*h))[k]'}) -- memoize

lua> ((f[2]-f[1]) - (f[-1]-f[-2])) / (3*h^2)
-0.11111111912024776

Quote:If we also have the missing middle point, (x0, f(x0)), f''(x0) can be estimated slightly better.

\(
\begin{align}
(f_{±1} - 2 f_{0}) &= {f}''(x_0)\; h^2 + \mathcal{O}(h^4) \\
(f_{±2} - 2 f_{0}) &= {f}''(x_0)\; (2h)^2 + \mathcal{O}((2h)^4) \\
\end{align}
\)

Instead of ignoring the error terms, this time we cancel them:

\( 16×(f_{±1} - 2 f_0) - (f_{±2} - 2 f_0) ≈ {f}''(x_0)\; h^2 × (16 - 4) \)

\(\displaystyle
{f}''(x_0) ≈ \left(-\frac{5}{2}f_0 + \frac{4}{3}f_{±1} - \frac{1}{12}f_{±2}\right) /\,h^2
\)

We can use divided differences (from both sides) to estimate derivatives.
One-sided also works, but has an error of O(h) instead of O(h^2).

\(\displaystyle \underset{^h}{⍋}^p = D^{\,p} + \mathcal{O}(h^2) \)

Then, apply Richardson extrapolation to improve the estimate.

\(\displaystyle D^{\,p} ≈
\underset{^h}{⍋}^p + (\underset{^h}{⍋}^p - \underset{^{2h}}{⍋}^p)/3
\)

1st derivatives, set p=1

lua> d1 = (f[1]-f[-1]) / (2*h)
lua> d2 = (f[2]-f[-2]) / (4*h)
lua> d1, d2
0.33333333641971663      0.33333334567897666
lua> d1 + (d1-d2)/3
0.3333333333332966

2nd derivatives, set p=2

lua> d1 = ((f[1]-f[0]) - (f[0]-f[-1])) / (h^2)
lua> d2 = ((f[2]-f[0]) - (f[0]-f[-2])) / (4*h^2)
lua> d1, d2
-0.11111111319905831      -0.11111111763995041
lua> d1 + (d1-d2)/3
-0.11111111171876094
01-27-2024, 08:04 AM (This post was last modified: 01-27-2024 08:06 AM by Pekis.)
Post: #8
RE: Approximating function derivatives
I've found a more iterative way, and it seems interesting:

f'(x0) ~= a * f(x0-2h) + b * f(x0-h) + c * f(x0+h) + d * f(x0+2h):
=> a = 1/(12h), b = -8/(12h), c = 8/(12h), d = -1/(12h)

But after all, if the first derivative f'(x0) ~= a * f(x0-2h) + b * f(x0-h) + c * f(x0+h) + d * f(x0+2h) already has very good precision, then the 2nd derivative f"(x0) ~= a * f'(x0-2h) + b * f'(x0-h) + c * f'(x0+h) + d * f'(x0+2h) will also have good precision, and so on, with the same a, b, c, d coefficients from above:

So, if you want it, f"(x0) ~= a * f'(x0-2h) + b * f'(x0-h) + c * f'(x0+h) + d * f'(x0+2h):
the f'(x0-2h) term has to be calculated as a * f(x0-4h) + b * f(x0-3h) + c * f(x0-h) + d * f(x0) (since f'(x0) ~= a * f(x0-2h) + b * f(x0-h) + c * f(x0+h) + d * f(x0+2h))
the f'(x0-h) term has to be calculated as a * f(x0-3h) + b * f(x0-2h) + c * f(x0) + d * f(x0+h)
the f'(x0+h) term has to be calculated as a * f(x0-h) + b * f(x0) + c * f(x0+2h) + d * f(x0+3h)
the f'(x0+2h) term has to be calculated as a * f(x0) + b * f(x0+h) + c * f(x0+3h) + d * f(x0+4h)
We can reuse previous calculations but also have to add new ones:
f(x0-4h), f(x0-3h), f(x0+3h), f(x0+4h).

And if you want it, f"'(x0) ~= a * f"(x0-2h) + b * f"(x0-h) + c * f"(x0+h) + d * f"(x0+2h):
the f"(x0-2h) term has to be calculated as a * f'(x0-4h) + b * f'(x0-3h) + c * f'(x0-h) + d * f'(x0)
the f"(x0-h) term has to be calculated as a * f'(x0-3h) + b * f'(x0-2h) + c * f'(x0) + d * f'(x0+h)
the f"(x0+h) term has to be calculated as a * f'(x0-h) + b * f'(x0) + c * f'(x0+2h) + d * f'(x0+3h)
the f"(x0+2h) term has to be calculated as a * f'(x0) + b * f'(x0+h) + c * f'(x0+3h) + d * f'(x0+4h)
So, we can reuse previous calculations but also have to add new ones:
f'(x0-4h) = a * f(x0-6h) + b * f(x0-5h) + c * f(x0-3h) + d * f(x0-2h)
f'(x0-3h) = a * f(x0-5h) + b * f(x0-4h) + c * f(x0-2h) + d * f(x0-h)
f'(x0+3h) = a * f(x0+h) + b * f(x0+2h) + c * f(x0+4h) + d * f(x0+5h)
f'(x0+4h) = a * f(x0+2h) + b * f(x0+3h) + c * f(x0+5h) + d * f(x0+6h)
We can reuse previous calculations but also have to add new ones:
f(x0-6h), f(x0-5h), f(x0+5h), f(x0+6h).
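
A minimal Python sketch of this nesting (illustrative: f = ln, x0 = 3, h = 0.001; values are recomputed rather than cached, unlike the spreadsheet):

Code:
from math import log

x0, h = 3.0, 1e-3
f = lambda k: log(x0 + k*h)                 # f on the grid x0 + k*h

def stencil(g, k):
    # the same (a, b, c, d) coefficients, applied to any grid function g
    return (g(k - 2) - 8*g(k - 1) + 8*g(k + 1) - g(k + 2)) / (12*h)

d1 = lambda k: stencil(f, k)                # f'   on the grid
d2 = lambda k: stencil(d1, k)               # f''  on the grid (nested)
d3 = lambda k: stencil(d2, k)               # f''' on the grid (nested twice)

print(d1(0), d2(0), d3(0))                  # approx 1/3, -1/9, 2/27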

The precision seems good, as shown in the Excel attachment here, which also shows the building tree:

[Attachment: Excel spreadsheet]

What do you think of it?
01-27-2024, 03:34 PM (This post was last modified: 01-27-2024 03:35 PM by Albert Chan.)
Post: #9
RE: Approximating function derivatives
Hi, Pekis

Yes, your way works!

I think you just re-discovered divided finite differences (see post #7).

Think of a divided finite difference as a slope.
The slope of the slope is the 2nd derivative.
The slope of the 2nd gives the 3rd.
...

That's why your Excel table has a triangular shape.
01-27-2024, 03:46 PM
Post: #10
RE: Approximating function derivatives
Thanks, Albert, for your insights, and as we say in French, "Tous les chemins mènent à Rome" ("all roads lead to Rome"). It was fun!
01-28-2024, 12:55 AM
Post: #11
RE: Approximating function derivatives
Interesting subject! Whenever I calculate the root (or optimum) of a single-variable equation, I approximate the first derivative (and sometimes the second derivative, depending on the algorithm). This means that the approximations to the derivatives translate into function calls. I usually keep track of the total number of function calls needed by the algorithm to obtain a satisfactory approximation to the root (or optimum). I regard the number of function calls as a measure of the efficiency of the algorithms. Many new root-seeking algorithms that have come out in the last two decades use iterations where the guess for the root is refined multiple times in each iteration, and thus require several function calls. These new root-seeking algorithms claim orders of convergence higher than those of Newton or Halley. They also claim fewer iterations. But when it comes to the total number of function calls, these new algorithms exceed the legacy root-seeking methods.

I doubt that 4 function calls to calculate the first derivative (plus another one to evaluate f(x)) will greatly reduce the number of iterations compared to the 2 function calls used by Newton's method. My bet is that the reduction will be 1, or at most 2, iterations (maybe???) compared to Newton's method.

Code:

given f(x) = 0, initial guess x0, tolerance, and max_iterations
x(1) = x0
n = 1
do 
  fx = f(x(n))
  h = 0.001*(1 + abs(x(n)))
  diff = h*fx/(f(x(n)+h) - fx)
  x(n+1) = x(n) - diff
  n = n + 1
until abs(diff) < tolerance or n > max_iterations
return x(n)
01-28-2024, 01:37 AM
Post: #12
RE: Approximating function derivatives
Hi, Namir

You could still do 2 function calls per Newton step, yet get a much better correction!

\(\displaystyle
\frac{f(x)}{f'(x)}
= \frac
{\frac{f(x+h) + f(x-h)}{2} + \mathcal{O}(h^2)}
{\frac{f(x+h) - f(x-h)}{2h} + \mathcal{O}(h^2)}
≈ \frac{f(x+h) + f(x-h)}{f(x+h) - f(x-h)} ×h
\)

see thread (HP71B) Newton's method
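
A minimal Python sketch of such a step (the helper name is hypothetical; f(x) = (2x+3)·x - 12 is just a test function):

Code:
def newton_central(f, x, h=1e-3, eps=1e-12, max_iter=50):
    # Newton iteration with f/f' replaced by h*(f(x+h)+f(x-h))/(f(x+h)-f(x-h)):
    # two function calls per step, no explicit derivative needed
    for _ in range(max_iter):
        fp, fm = f(x + h), f(x - h)
        dx = h * (fp + fm) / (fp - fm)
        x -= dx
        if abs(dx) < eps:
            break
    return x

print(newton_central(lambda x: (2*x + 3)*x - 12, 5.0))   # approx 1.8117377
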
01-28-2024, 01:32 PM (This post was last modified: 01-28-2024 01:47 PM by Thomas Klemm.)
Post: #13
RE: Approximating function derivatives
These programs are written for the HP-42S but were executed on Free42.

Namir

Code:
00 { 52-Byte Prgm }
01▸LBL "NAMIR"  # (x_0 h -- dx_n)
02 STO 01       # h
03 R↓
04 STO 00       # x_0
05▸LBL 00       # loop
06 RCL 00       # x_n
07 RCL+ 01      # x_n + h
08 XEQ 01       # f(x_n + h)
09 STO 02       # f(x_n + h)
10 RCL 00       # x_n
11 XEQ 01       # f(x_n)
12 RCL 02       # f(x_n + h)
13 RCL- ST Y    # f(x_n + h) - f(x_n)
14 ÷            # f(x_n) / (f(x_n + h) - f(x_n))
15 RCL× 01      # h * f(x_n) / (f(x_n + h) - f(x_n))
16 STOP
17 STO- 00      # x_{n+1} = x_n - h * f(x_n)/(f(x_n + h) - f(x_n))
18 GTO 00       # loop
19▸LBL 01       # f(x)
20 2
21 RCL× ST Y    # 2 * x
22 3
23 +            # 2 * x + 3
24 ×            # (2 * x + 3) * x
25 12
26 -            # (2 * x + 3) * x - 12
27 END

Example

5 ENTER 1E-3 XEQ "NAMIR" R/S …

2.304147465437788018433179723502304
0.770582759117462734739653197597686
1.111023035061702437701327055382557e-1
2.428155751994818176546315444651607e-3
1.624379123244663423625637280518858e-6
3.174993748595121727106213314729139e-10
6.195745943475227358884079854367656e-14
1.209049801358665977248358523936021e-17
2.359363078168320091909037111452332e-21
4.604106574179830863422014948995832e-25
8.984529451135328979667386689678538e-29
1.756277341070306049325741843354146e-32
0


Albert

Code:
00 { 59-Byte Prgm }
01▸LBL "ALBERT" # (x_0 h -- dx_n)
02 STO 01       # h
03 R↓
04 STO 00       # x_0
05▸LBL 00       # loop
06 RCL 00       # x_n
07 RCL- 01      # x_n - h
08 XEQ 01       # f(x_n - h)
09 STO 02       # f(x_n - h)
10 RCL 00       # x_n
11 RCL+ 01      # x_n + h
12 XEQ 01       # f(x_n + h)
13 RCL+ 02      # f(x_n - h) + f(x_n + h)
14 LASTX        # f(x_n + h)
15 RCL- 02      # f(x_n - h) - f(x_n + h)
16 ÷            # (f(x_n - h) + f(x_n + h)) / (f(x_n - h) - f(x_n + h))
17 RCL× 01      # h * (f(x_n - h) + f(x_n + h)) / (f(x_n - h) - f(x_n + h))
18 STOP
19 STO- 00      # x_{n+1} = x_n - h * (f(x_n - h) + f(x_n + h)) / (f(x_n - h) - f(x_n + h))
20 GTO 00       # repeat
21▸LBL 01       # f(x)
22 2
23 RCL× ST Y    # 2 * x
24 3    
25 +            # 2 * x + 3
26 ×            # (2 * x + 3) * x
27 12   
28 -            # (2 * x + 3) * x - 12
29 END

Example

5 ENTER 1E-3 XEQ "ALBERT" R/S …

2.304347913043478260869565217391304
7.705390983101360703696388627092945e-1
1.109729765704264257764977158661139e-1
2.401390226528627812090476848659566e-3
1.125539305782882856562088581234399e-6
2.472616204210897690330978112713164e-13
1.193297714696925294598932151109756e-26
4.87950073651416474628973288871039e-34
4.87950073651416474628973288871039e-34



Newton

Code:
00 { 51-Byte Prgm }
01▸LBL "NEWTON" # (x_0 -- dx_n)
02 STO 00       # x_0
03▸LBL 00       # loop
04 RCL 00       # x_n
05 XEQ 02       # f'(x_n)
06 STO 01       # f'(x_n)
07 RCL 00       # x_n
08 XEQ 01       # f(x_n)
09 RCL÷ 01      # f(x_n) / f'(x_n)
10 STOP
11 STO- 00      # x_{n+1} = x_n - f(x_n) / f'(x_n)
12 GTO 00       # repeat
13▸LBL 01       # f(x)
14 2
15 RCL× ST Y    # 2 * x
16 3
17 +            # 2 * x + 3
18 ×
19 12
20 -            # (2 * x + 3) * x - 12
21 RTN
22▸LBL 02       # f'(x)
23 4
24 ×            # 4 * x
25 3
26 +            # 4 * x + 3
27 END

5 XEQ "NEWTON" R/S …

2.304347826086956521739130434782609
7.705390207104649567960499245645317e-1
1.109729473922707134927056532859122e-1
2.401388782294756614038691927472916e-3
1.125537866194583738614244483824163e-6
2.472609690769991318136999747343159e-13
1.193291280428526545827413545935046e-26
0


Conclusion

Both ALBERT and NEWTON provide quadratic convergence while NAMIR only provides linear convergence.
ALBERT may not end up with \(f(x) = 0\) as this is never evaluated.
All programs use two function evaluations per iteration.
For NEWTON an implementation of \({f}'(x)\) has to be provided.
01-28-2024, 01:42 PM
Post: #14
RE: Approximating function derivatives
(09-07-2018 11:32 AM)Massimo Gnerucci Wrote:  
Code:
  diff = 2.0 * h * FF(guess) / (FF(guess + h) - FF(guess - h))

(01-28-2024 12:55 AM)Namir Wrote:  I regard the number of function calls as a measure of the efficiency of the algorithms.

Code:
  diff =  h*fx/(f(x(n)+h)-fx)

Albert shows us that we can have our cake and eat it too.
01-29-2024, 01:46 AM
Post: #15
RE: Approximating function derivatives
(01-28-2024 01:37 AM)Albert Chan Wrote:  Hi, Namir

You could still do 2 function calls per Newton step, yet get a much better correction!

\(\displaystyle
\frac{f(x)}{f'(x)}
= \frac
{\frac{f(x+h) + f(x-h)}{2} + \mathcal{O}(h^2)}
{\frac{f(x+h) - f(x-h)}{2h} + \mathcal{O}(h^2)}
≈ \frac{f(x+h) + f(x-h)}{f(x+h) - f(x-h)} ×h
\)

see thread (HP71B) Newton's method

I found your approach interesting. I tested it with f(x)=exp(x)-3*x^2 and used two versions of Newton's and Halley's methods--one using the calculated f(x) and one approximating f(x) with f(x+h) and f(x-h). I came to the conclusion that your approach attempts to find the root of (f(x+h)+f(x-h))/2 and NOT f(x) itself. Your approach works only if f(x) represents a straight line. The results of your averaging of f(x) yield roots with fewer accurate decimals.

Namir
01-29-2024, 02:59 AM
Post: #16
RE: Approximating function derivatives
Everyone needs to remember that numerical derivative algorithms are generally very unstable.

Personally, I'd follow the literature (which is generally very well researched).


Pauli
01-29-2024, 12:59 PM
Post: #17
RE: Approximating function derivatives
(01-29-2024 01:46 AM)Namir Wrote:  I found your approach interesting. I tested it with f(x)=exp(x)-3*x^2 and used two versions of Newton's and Halley's methods--one using the calculated f(x) and one approximating f(x) with f(x+h) and f(x-h). I came to the conclusion that your approach attempts to find the root of (f(x+h)+f(x-h))/2 and NOT f(x) itself. Your approach works only if f(x) represents a straight line. The results of your averaging of f(x) yield roots with fewer accurate decimals.

FYI, this is *exactly* the secant method, from 2 points:

(x1,y1), (x2,y2) = ((x-h), f(x-h)), ((x+h), f(x+h))

We assume f from x1 to x2 is a straight line (only a tiny section, not the full f(x)).
Thus, it makes no difference which point we pick, 1 or 2. Both give the same secant root.

x' = x1 - y1 * (x1-x2)/(y1-y2)
x' = x2 - y2 * (x1-x2)/(y1-y2)

Take the mean of these equivalent formula:

x' = (x1+x2)/2 - (y1+y2)/2 * (x1-x2)/(y1-y2) = x - (y1+y2)/(y2-y1) * h

Newton correction dx = f(x)/f'(x) ≈ (y1+y2)/(y2-y1) * h

If |dx| is bigger than h (i.e. extrapolation), then the correction estimate is good.
If |dx| is smaller than h, we get into interpolation (note: f(x) is never evaluated):

(x-h) ---- (x') -- (x) ------------ (x+h)

The whole section is assumed to be a straight line, so interpolating to x' is not good, but closer.
With 1 more function call, we can correct for this with another Newton step:

x'' = x' - f(x') / f'(x') ≈ x' - f(x') / (slope used to get from x to x')

We can of course set h small enough that x' is acceptable, skipping x''.
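
A minimal Python sketch of one step of this scheme (hypothetical function name; the Lua version in the next post adds the adaptive h and an iteration count):

Code:
def secant_mean_step(f, x, h):
    # secant through (x-h, f(x-h)) and (x+h, f(x+h)), taken as a Newton step
    y1, y2 = f(x - h), f(x + h)
    rs = 2*h / (y2 - y1)              # reciprocal slope
    dx = (y1 + y2) / 2 * rs           # correction; secant root = x - dx
    if abs(dx) < h:                   # interpolation case: one extra call
        dx += f(x - dx) * rs
    return x - dx
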
01-29-2024, 01:02 PM
Post: #18
RE: Approximating function derivatives
Instead of a fixed h, we can base it on a fraction of the previous dx.

Code:
function S.newton(f,x,h0,eps,verbal)
    h0 = h0 or 1e-3
    eps = abs(eps or 1e-9)
    local n, h = 0, h0
    repeat
        if verbal then print(x) end
        local f1 = f(x-h)
        local f2 = f(x+h)
        local rs = 2*h / (f2-f1)    -- reciprocal slope
        local dx = (f2+f1)/2 * rs   -- secant root= x-dx
        if abs(dx) < h then dx = dx + f(x-dx)*rs; n=n+1 end
        x = x - dx
        h = h0 * dx
        n = n+2
    until not (abs(dx) > eps)
    return x, n
end

lua> S = require'solver'
lua> f = fn'x:exp(x) - 3*x^2'
lua> S.newton(f, -1, 1e-3, 1e-9, true)
-1
-0.5866562214529973
-0.4698017256933633
-0.45905390325056966
-0.4589622741012877
-0.4589622675369419
-0.4589622675369485      12
lua> f(_)
-1.1102230246251565e-16

lua> S.newton(f, 1, 1e-3, 1e-9, true)
1
0.914154769981483
0.9100176589772936
0.9100075725386966
0.910007572488709          9
lua> f(_)
0
01-29-2024, 01:25 PM (This post was last modified: 01-29-2024 01:26 PM by Namir.)
Post: #19
RE: Approximating function derivatives
I used an initial guess of 6 for exp(x)-3*x^2 and 1e-16 as the tolerance for x. I also made another code version where the tolerance is applied to abs(f(x)).

Here is the Excel VBA code that I used to test for convergence of the guess refinement:

Code:
Option Explicit

Function Fx(ByVal X As Double) As Double
  Fx = Exp(X) - 3 * X ^ 2
End Function

Sub go()
  Dim X As Double, h As Double, diff As Double, toler As Double
  Dim f0 As Double, fp As Double, fm As Double
  Dim Deriv1 As Double, Deriv2 As Double
  Dim row As Integer
  
  Range("B2:Z10000").Clear
  ' Newton's method
  X = [A2].Value
  toler = [A4].Value
  row = 2
  Do
    h = 0.001 * (1 + Abs(X))
    f0 = Fx(X)
    diff = h * f0 / (Fx(X + h) - f0)
    X = X - diff
    Cells(row, 2) = Fx(X)
    Cells(row, 3) = X
    row = row + 1
  Loop Until Abs(diff) < toler Or row > 100
  
  'Newton's method using fx(x) approximation
  X = [A2].Value
  toler = [A4].Value
  row = 2
  Do
    h = 0.001 * (1 + Abs(X))
    fp = Fx(X + h)
    fm = Fx(X - h)
    diff = (fp + fm) / (fp - fm) * h
    X = X - diff
    Cells(row, 4) = Fx(X)
    Cells(row, 5) = X
    row = row + 1
  Loop Until Abs(diff) < toler Or row > 100
  MsgBox diff
  
  ' Halley's method
  X = [A2].Value
  toler = [A4].Value
  row = 2
  Do
    h = 0.001 * (1 + Abs(X))
    f0 = Fx(X)
    fp = Fx(X + h)
    fm = Fx(X - h)
    Deriv1 = (fp - fm) / 2 / h
    Deriv2 = (fp - 2 * f0 + fm) / h ^ 2
    diff = 2 * f0 * Deriv1 / (2 * Deriv1 ^ 2 - f0 * Deriv2)
    X = X - diff
    Cells(row, 6) = Fx(X)
    Cells(row, 7) = X
    row = row + 1
  Loop Until Abs(diff) < toler Or row > 100
  
  
   ' Halley's method using fx(x) approximation
  X = [A2].Value
  toler = [A4].Value
  row = 2
  Do
    h = 0.001 * (1 + Abs(X))
    fp = Fx(X + h)
    fm = Fx(X - h)
    f0 = (fp + fm) / 2
    Deriv1 = (fp - fm) / 2 / h
    Deriv2 = (fp - 2 * f0 + fm) / h ^ 2
    diff = 2 * f0 * Deriv1 / (2 * Deriv1 ^ 2 - f0 * Deriv2)
    X = X - diff
    Cells(row, 8) = Fx(X)
    Cells(row, 9) = X
    row = row + 1
  Loop Until Abs(diff) < toler Or row > 100
  
End Sub

Here is the output with the calculated Fx(x):

Code:

       Fx               X
100.0016498    5.198995948
31.47661127    4.534269054
8.480862488    4.058729991
1.53371327    3.806921961
0.096624566    3.738034628
0.000853428    3.733122997
3.75091E-06    3.733079222
1.63358E-08    3.733079029

Here is the output with the approximated value of Fx(x):

Code:

       Fx              X
99.54394976    5.195937045
31.16026661    4.529457029
8.320740809    4.053939407
1.470250565    3.804052193
0.083803449    3.737379665
-7.03483E-05    3.733075404
-0.000401073    3.733058364
-0.00040107    3.733058364

The second set shows reduced accuracy in the refined guess for the root.

Here is the Excel VBA code that I used when applying the tolerance to abs(Fx(x)):

Code:
Option Explicit

Function Fx(ByVal X As Double) As Double
  Fx = Exp(X) - 3 * X ^ 2
End Function

Sub go()
  Dim X As Double, h As Double, diff As Double, toler As Double
  Dim f0 As Double, fp As Double, fm As Double
  Dim Deriv1 As Double, Deriv2 As Double
  Dim row As Integer
  
  Range("B2:Z10000").Clear
  X = [A2].Value
  toler = [A4].Value
  row = 2
  Do
    h = 0.001 * (1 + Abs(X))
    f0 = Fx(X)
    diff = h * f0 / (Fx(X + h) - f0)
    X = X - diff
    Cells(row, 2) = Fx(X)
    Cells(row, 3) = X
    row = row + 1
  Loop Until Abs(Fx(X)) < toler Or row > 100
  
  X = [A2].Value
  toler = [A4].Value
  row = 2
  Do
    h = 0.001 * (1 + Abs(X))
    fp = Fx(X + h)
    fm = Fx(X - h)
    diff = (fp + fm) / (fp - fm) * h
    X = X - diff
    Cells(row, 4) = Fx(X)
    Cells(row, 5) = X
    row = row + 1
  Loop Until Abs(Fx(X)) < toler Or row > 100
  
  ' Halley's method
  X = [A2].Value
  toler = [A4].Value
  row = 2
  Do
    h = 0.001 * (1 + Abs(X))
    f0 = Fx(X)
    fp = Fx(X + h)
    fm = Fx(X - h)
    Deriv1 = (fp - fm) / 2 / h
    Deriv2 = (fp - 2 * f0 + fm) / h ^ 2
    diff = 2 * f0 * Deriv1 / (2 * Deriv1 ^ 2 - f0 * Deriv2)
    X = X - diff
    Cells(row, 6) = Fx(X)
    Cells(row, 7) = X
    row = row + 1
  Loop Until Abs(Fx(X)) < toler Or row > 100
  
  
   ' Halley's method 2
  X = [A2].Value
  toler = [A4].Value
  row = 2
  Do
    h = 0.001 * (1 + Abs(X))
    fp = Fx(X + h)
    fm = Fx(X - h)
    f0 = (fp + fm) / 2
    Deriv1 = (fp - fm) / 2 / h
    Deriv2 = (fp - 2 * f0 + fm) / h ^ 2
    diff = 2 * f0 * Deriv1 / (2 * Deriv1 ^ 2 - f0 * Deriv2)
    X = X - diff
    Cells(row, 8) = Fx(X)
    Cells(row, 9) = X
    row = row + 1
  Loop Until Abs(Fx(X)) < toler Or row > 100
  
End Sub

Again, the results I get tell me that approximating f(x) with (f(x+h)+f(x-h))/2 reduces the accuracy of the result.

We can use whichever version fits our conviction.
01-29-2024, 02:27 PM
Post: #20
RE: Approximating function derivatives
(01-29-2024 01:25 PM)Namir Wrote:  I used an initial guess of 6 for exp(x)-3*x^2 and 1e-16 as the tolerance for x.

Ah, I missed the big root.

Since I expected quadratic convergence, I set eps = 1e-9 and the initial h = 1e-3.
Again, h decreases along with dx; it is not a fixed number.

lua> S = require'solver'
lua> f = fn'x:exp(x) - 3*x^2'
lua> S.newton(f, 6, 1e-3, 1e-9, true)
6
5.195955942003588
4.52948873087134
4.053975908526381
3.80408413181784
3.737403183742807
3.7330961934908533
3.7330790288874782
3.7330790286328144      17
lua> f(_)
7.105427357601002e-15

This is very similar to the true Newton's method:

lua> df = fn'x:exp(x) - 6*x'
lua> x = 6
lua> for i=1,9 do print(x); x = x-f(x)/df(x) end
6
5.195956335690451
4.529489328402691
4.053976528531787
3.8040845309684834
3.7374032842820784
3.733096198370094
3.733079028904745
3.7330790286328144