HP Forums

Side benefit from Runge-Kutta methods for ODE
I have been looking at various methods to numerically solve ODEs (Ordinary Differential Equations). Among them (see Wikipedia) is the family of Runge-Kutta methods. The 4th-order Runge-Kutta method is perhaps the most popular; it has been used by HP in various calculator Stat Pacs.

The family of Runge-Kutta methods solves ODEs of the form:

dy/dx = f(x,y)

One interesting "side effect" is that if:

dy/dx = f(x)

then you can obtain methods for numerical integration. The 4th-order Runge-Kutta method, in this case, becomes Simpson's rule. The interesting windfall is that the various Runge-Kutta variants give you additional methods for numerical integration, essentially a "free bonus" that you can explore and use.
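To make the reduction concrete, here is a minimal Python sketch (the helper names and the cos test integrand are illustrative, not from any HP program): one RK4 step applied to dy/dx = f(x) yields the same value as Simpson's rule on that interval.

Code:
import math

def rk4_step(f, x, y, h):
    # One classical RK4 step for dy/dx = f(x); with no y-dependence,
    # k2 and k3 coincide.
    k1 = f(x)
    k2 = f(x + h / 2)
    k3 = f(x + h / 2)
    k4 = f(x + h)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def simpson(f, a, b):
    # Simpson's rule on the single interval [a, b].
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

f = math.cos
a, b = 0.0, 1.0
print(rk4_step(f, a, 0.0, b - a))   # integral of cos over [0, 1] via RK4
print(simpson(f, a, b))             # same value (up to rounding) via Simpson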

Namir
It goes the other way too. Integration methods lead to solutions to differential equations.

The predictor-corrector methods consist of using an "open" integration formula (one that does not use the interval's end point) to predict a solution and a "closed" formula (one that does) to correct the result.
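A minimal Python sketch of the idea, assuming the 2-step Adams-Bashforth formula as the open predictor and the trapezoidal Adams-Moulton formula as the closed corrector (the test equation and step size are illustrative):

Code:
import math

def solve_pc(f, t0, y0, h, n):
    # 2-step Adams-Bashforth (open) predictor with a trapezoidal
    # Adams-Moulton (closed) corrector; a single Euler step bootstraps
    # the history the two-step predictor needs.
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + h * f_prev              # Euler bootstrap
    t += h
    for _ in range(n - 1):
        f_now = f(t, y)
        y_pred = y + h / 2 * (3 * f_now - f_prev)    # predict (open)
        y = y + h / 2 * (f_now + f(t + h, y_pred))   # correct (closed)
        f_prev = f_now
        t += h
    return t, y

# Example: dy/dt = -y, y(0) = 1; the exact value at t = 1 is exp(-1).
t, y = solve_pc(lambda t, y: -y, 0.0, 1.0, 0.1, 10)
print(t, y, math.exp(-1.0))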
Good to know that. I will check it out sometime in the future.

Namir
.
Hi, Namir:

(09-23-2018 07:27 AM)Namir Wrote: [...] you can obtain methods for numerical integration.

Sure you can.

Quote: The 4th-order Runge-Kutta method, in this case, becomes Simpson's rule.

Simpson's rule is a 3rd-order method, i.e., it gives exact results for polynomials up to the 3rd degree.

Quote: The interesting windfall is that the various Runge-Kutta variants give you additional methods for numerical integration, essentially a "free bonus" that you can explore and use.

You can, but you'd be ill-advised to do so, because using Runge-Kutta methods to perform numerical integration of some f(x) is very inefficient. That's why no one uses them for that purpose, and why no book on numerical analysis would recommend them or even mention the possibility.

Why the inefficiency? Well, taking as an example the 4th-order Runge-Kutta method you mention: it performs 4 evaluations of f(x) per step and, being exactly equivalent to Simpson's rule, it's exact only for polynomials up to degree 3.

On the other hand, using Gaussian integration, as in this small program here in the General Library:

Gaussian integration

will perform just 3 evaluations of f(x) and yet be exact for polynomials up to degree 5, so you get much greater accuracy (5th order vs. 3rd order) while performing 25% fewer evaluations of f(x) (3 vs. 4), i.e., also running faster.
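The linked program is written for HP calculators; as a language-neutral illustration of the same 3-point Gauss-Legendre rule, here is a small Python sketch (the degree-5 test integrand is illustrative):

Code:
import math

def gauss3(f, a, b):
    # 3-point Gauss-Legendre rule on [a, b]: three evaluations of f,
    # exact for polynomials up to degree 5.
    m, r = (a + b) / 2, (b - a) / 2   # midpoint and half-width
    x = math.sqrt(3 / 5)              # nodes at m and m +/- r*sqrt(3/5)
    return r * (8 / 9 * f(m) + 5 / 9 * (f(m - r * x) + f(m + r * x)))

# Degree-5 test: the integral of x^5 over [0, 2] is 2^6 / 6.
print(gauss3(lambda x: x**5, 0.0, 2.0), 2**6 / 6)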

If in doubt, try the two examples in the linked article with a 4th-order RK implementation and see what precision you get for 1 subinterval (1 step in RK).

The same considerations apply to other RK methods. That's why using Runge-Kutta methods for integration is just wasting time to get an inferior result. They simply weren't designed for the task and aren't suited to it; there are much, much better and well-known methods available, such as the aforementioned Gaussian integration and many others.

Regards.
V.
.
(09-26-2018 10:45 PM)Valentin Albillo Wrote: Why the inefficiency? Well, taking as an example the 4th-order Runge-Kutta method you mention: it performs 4 evaluations of f(x) per step and, being exactly equivalent to Simpson's rule, it's exact only for polynomials up to degree 3.

The 4 evaluations are:

\(\begin{aligned}
k_{1}&=f(t_{n},y_{n}),\\
k_{2}&=f\left(t_{n}+\frac{h}{2},y_{n}+\frac{h}{2}k_{1}\right),\\
k_{3}&=f\left(t_{n}+\frac{h}{2},y_{n}+\frac{h}{2}k_{2}\right),\\
k_{4}&=f\left(t_{n}+h,y_{n}+hk_{3}\right).
\end{aligned}\)

But since the function \(f\) depends only on \(t\), we end up with:

\(\begin{aligned}
k_{1}&=f(t_{n}),\\
k_{2}&=f\left(t_{n}+\frac{h}{2}\right),\\
k_{3}&=f\left(t_{n}+\frac{h}{2}\right),\\
k_{4}&=f\left(t_{n}+h\right).
\end{aligned}\)

Thus \(k_{2}=k_{3}\).

This reduces:

\(y_{n+1}=y_{n}+\tfrac{h}{6}\left(k_{1}+2k_{2}+2k_{3}+k_{4}\right)\)

to:

\(y_{n+1}=y_{n}+\tfrac{h}{6}\left(k_{1}+4k_{2}+k_{4}\right)\)


Compare this to the function evaluations of the 3rd-order method:

\(\begin{aligned}
k_{1}&=f(t_{n},y_{n}),\\
k_{2}&=f\left(t_{n}+\frac {h}{2},y_{n}+\frac {h}{2}k_{1}\right),\\
k_{3}&=f(t_{n}+h,y_{n}-hk_{1}+2hk_{2}).
\end{aligned}\)

Here we end up with:

\(\begin{aligned}
k_{1}&=f(t_{n}),\\
k_{2}&=f\left(t_{n}+{\frac {h}{2}}\right),\\
k_{3}&=f(t_{n}+h).
\end{aligned}\)

\(y_{n+1}=y_{n}+\tfrac{h}{6}\left(k_{1}+4k_{2}+k_{3}\right)\)

Therefore we can see that, when \(f\) depends only on \(t\), RK4 degrades to RK3: both reduce to Simpson's rule.
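A quick numerical check of this reduction, assuming the two tableaux above (the cos test function is illustrative): with an integrand that ignores y, both steppers return the same value.

Code:
import math

def rk3_step(f, t, y, h):
    # Kutta's third-order method for dy/dt = f(t, y).
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h, y - h * k1 + 2 * h * k2)
    return y + h / 6 * (k1 + 4 * k2 + k3)

def rk4_step(f, t, y, h):
    # Classical fourth-order Runge-Kutta method.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

g = lambda t, y: math.cos(t)        # f depends on t only
print(rk3_step(g, 0.0, 0.0, 1.0))   # both give Simpson's rule for cos
print(rk4_step(g, 0.0, 0.0, 1.0))   # on [0, 1], agreeing to rounding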

Cheers
Thomas
Thanks, Thomas, for the clarification with equations.
Has anyone tried to fake an f(x) into an f(x,y) by doing something like this:

f(x,y) = g(x) + y - y

where g(x) is the original function of x?

:-)
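For what it's worth, the trick would work: since the y terms cancel, the slopes never actually depend on y, and an RK solver fed such an f(x,y) just reproduces the x-only reduction derived above. A tiny illustrative Python sketch, with one floating-point caveat:

Code:
import math

g = math.cos
# Parenthesizing matters in floating point: written as (g(x) + y) - y,
# the expression can lose low-order bits of g(x) when |y| >> |g(x)|.
f = lambda x, y: g(x) + (y - y)
print(f(0.5, 1e12) == g(0.5))   # True: the solver never "feels" y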

Namir .... a truly original thinker .... LOL

PS: Maybe I am pulling a Ramanujan on this one. He presumably proved that:

c = 1 + 2 + 3 + 4 + ... = -1/12

In his proof, Ramanujan manipulated the divergent series he was using in calculating c - 4c (= -3c) and finagled the corrupted equations into giving -1/12, which of course is very wrong!
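For reference, a sketch of the manipulation in question (the fallacy is doing ordinary term-by-term algebra on divergent series; the value 1/4 below is the Abel sum of the alternating series, not a sum in the ordinary sense):

\(\begin{aligned}
c &= 1 + 2 + 3 + 4 + 5 + 6 + \cdots,\\
4c &= 4 + 8 + 12 + \cdots \quad\text{(subtracted under the even terms)},\\
c - 4c = -3c &= 1 - 2 + 3 - 4 + 5 - 6 + \cdots = \tfrac{1}{4},\\
c &= -\tfrac{1}{12}.
\end{aligned}\)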