HP Forums

Full Version: numerical integration algorithm details
A few years ago Bernard Parisse shared this link to an article that explains the numerical integration algorithm used by the Prime. The algorithm described works quite well. I'm curious about the mathematical reasons behind a couple of decisions made in the algorithm.

The method uses a standard 15-point Gaussian Quadrature and then a 14-point and a 6-point calculation to estimate the error. The 14 and 6 point calculations reuse nodes from the 15-point method but with different weights. I've tried to show this visually below using X to represent the nodes used. (df=degrees of freedom)

Code:
XXXXXXXXXXXXXXX - 15 nodes, df=30, exact for degree 29 polynomial
XXXXXXX XXXXXXX - 14 nodes, df=14, exact for degree 13 polynomial
 X X X   X X X  -  6 nodes, df=6,  exact for degree 5 polynomial
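One way to see where the df counts come from: once the nodes are fixed (reused from the 15-point rule), only the weights are free, and n free weights can be chosen to integrate polynomials up to degree n-1 exactly by solving the moment equations. The sketch below is my own illustration of that principle, not the Prime's actual code; `embedded_weights` is a hypothetical helper, demonstrated on a toy 2-node subset of the 3-point Gauss-Legendre rule.

```python
# Illustration only (not the Prime's algorithm): for a fixed node set,
# solve the moment equations  sum_i w_i * x_i^k = integral of x^k on [-1,1]
# for k = 0..n-1.  With n free weights the rule is exact for degree n-1,
# which is the "df" counting used in the diagrams above.
import math

def embedded_weights(nodes):
    """Solve the Vandermonde moment system for weights on [-1, 1]."""
    n = len(nodes)
    # moments of x^k over [-1, 1]: 2/(k+1) for even k, 0 for odd k
    A = [[x ** k for x in nodes] for k in range(n)]
    b = [2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))
        w[r] = s / A[r][r]
    return w

# Toy case: reuse only the two outer nodes of the 3-point Gauss-Legendre
# rule (dropping the middle node, as the 14- and 6-point rules drop nodes).
nodes = [-math.sqrt(3 / 5), math.sqrt(3 / 5)]
w = embedded_weights(nodes)
# By symmetry the moment equations give w = [1, 1]
```

The same construction with 6 of the 15 Kronrod-style nodes yields the df=6 rule in the diagram.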

My question relates to why 6 nodes were chosen. Since this is the least accurate part of the calculation, it seems like you'd want to use more nodes. Using 7 nodes seems more natural: why skip the middle node when you've already skipped the two adjacent nodes? The original 15 nodes are already distributed non-uniformly, clustered towards the endpoints.

Code:
 X X X X X X X  -  7 nodes, df=7,  exact for degree 6 polynomial

Or why not use even more, say 13 nodes?

Code:
XXXXXX X XXXXXX - 13 nodes, df=13, exact for degree 12 polynomial

So my question boils down to, "Why 6?"


My second question pertains to the error calculation. Using ERR1 as the difference between the 15 and 14 point calculation, and ERR2 as the difference between the 15 and 6 point calculation, the error is estimated as

error = ERR1*(ERR1/ERR2)^2.

This seems to work well, but I don't see how it was derived. Why squared? Or perhaps it was simply found by experimentation and happened to work reasonably well.

Thanks.
I think the reason it works well is that ERR1*(ERR1/ERR2)^2 is O(h^30), where h is the step size, the same order as the theoretical error of the most accurate method.
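The order arithmetic behind that reply can be spelled out, assuming (this is my reading, not stated in the thread) that ERR1 = O(h^14) because the 14-node rule is exact through degree 13, and ERR2 = O(h^6) because the 6-node rule is exact through degree 5. Then ERR1*(ERR1/ERR2)^2 = ERR1^3/ERR2^2 = O(h^(3*14 - 2*6)) = O(h^30), matching the 15-point rule. A quick scaling check with made-up constants:

```python
# Model the two error terms with the assumed orders; the constants c1, c2
# are arbitrary, since only the exponents matter for the scaling.
a, b = 14, 6  # assumed orders of ERR1 and ERR2 in the step h

def estimate(h, c1=3.0, c2=5.0):
    err1 = c1 * h ** a
    err2 = c2 * h ** b
    return err1 * (err1 / err2) ** 2  # = (c1**3 / c2**2) * h**(3*a - 2*b)

ratio = estimate(0.05) / estimate(0.1)
# halving h shrinks the estimate by a factor of 2**30, i.e. order h**30
```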
For comparison, the numerical integration methods of NIntegrate are described in the Wolfram reference documentation:

* NIntegrate uses symbolic preprocessing to resolve function symmetries, expand piecewise functions into cases, and decompose regions specified by inequalities into cells.
* With Method->Automatic, NIntegrate uses "GaussKronrod" in one dimension, and "MultiDimensional" otherwise.
* If an explicit setting for MaxPoints is given, NIntegrate by default uses Method->"QuasiMonteCarlo".
* "GaussKronrod": adaptive Gaussian quadrature with error estimation based on evaluation at Kronrod points.
* "DoubleExponential": non-adaptive double-exponential quadrature.
* "Trapezoidal": elementary trapezoidal method.
* "Oscillatory": transformation to handle certain integrals containing trigonometric and Bessel functions.
* "MultiDimensional": adaptive Genz-Malik algorithm.
* "MonteCarlo": non-adaptive Monte Carlo.
* "QuasiMonteCarlo": non-adaptive Halton-Hammersley-Wozniakowski algorithm.
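The essence of the "GaussKronrod" approach, an embedded pair of rules sharing nodes whose difference drives adaptive subdivision, can be sketched in a few lines. For brevity this toy uses a trapezoid/Simpson pair (Simpson reuses the trapezoid's endpoint evaluations) rather than actual Gauss and Kronrod nodes, so it illustrates only the adaptive mechanism, not the real rule.

```python
# Minimal adaptive quadrature sketch: two rules sharing function values,
# their difference estimates the local error, and intervals are split
# where the estimate exceeds the (halved) tolerance.
import math

def adaptive(f, a, b, tol=1e-8, depth=50):
    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    coarse = 0.5 * (b - a) * (fa + fb)             # trapezoid rule
    fine = (b - a) / 6.0 * (fa + 4.0 * fm + fb)    # Simpson reuses fa, fb
    if depth == 0 or abs(fine - coarse) < tol:
        return fine
    return (adaptive(f, a, m, tol / 2, depth - 1)
            + adaptive(f, m, b, tol / 2, depth - 1))

val = adaptive(math.sin, 0.0, math.pi)
# the exact integral of sin on [0, pi] is 2
```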
In some areas the symbolic calculation kernel (and other parts) of Xcas cannot be compared with the Mathematica kernel: Wolfram has many people working on it, while Xcas has only Bernard Parisse and a few other volunteers. Enthusiasts are sought to increase the power of Xcas, for example so that Xcas can read external data from the console the way most languages can.

contributions to Giac computer algebra system
https://github.com/marohnicluka/giac

Quote: marohnicluka's contributions to the Giac/Xcas computer algebra system include:

graph theory (a comprehensive set of algorithms with over 150 commands, see doc/graphtheory-user_manual.pdf for details)
signal processing and audio tools (creating wave files, convolution and (auto)correlation, filtering, windowing, resampling, noise removal)
optimization in multivariate calculus (finding local and global extrema)
calculus of variations
(mixed integer) linear programming, transportation problem solver
curve interpolation (Thiele's algorithm, trigonometric polynomial fitting, Remez algorithm)
simplification of trigonometric expressions
statistics (continuous and discrete random variables with sampling, kernel density estimation, maximum-likelihood fitting)
ordinary differential equations (an implementation of Kovacic's algorithm, a solver for second-order boundary value problems)