numerical integration algorithm details
05-31-2019, 04:12 AM
Post: #1
 Wes Loewer Member Posts: 170 Joined: Jan 2014
numerical integration algorithm details
A few years ago Bernard Parisse shared this link to an article that explains the numerical integration algorithm used by the Prime. The algorithm described works quite well. I'm curious about the mathematical reasons behind a couple of decisions made in the algorithm.

The method uses a standard 15-point Gaussian Quadrature and then a 14-point and a 6-point calculation to estimate the error. The 14 and 6 point calculations reuse nodes from the 15-point method but with different weights. I've tried to show this visually below using X to represent the nodes used. (df=degrees of freedom)

Code:
XXXXXXXXXXXXXXX - 15 nodes, df=30, exact for degree 29 polynomial
XXXXXXX XXXXXXX - 14 nodes, df=14, exact for degree 13 polynomial
 X X X   X X X  -  6 nodes, df=6,  exact for degree 5 polynomial
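As a quick sanity check of the 15-point row, NumPy can generate the Gauss-Legendre nodes and weights (a sketch only; the Prime's actual tables may be ordered or scaled differently):

```python
import numpy as np

# 15-point Gauss-Legendre rule on [-1, 1]
x15, w15 = np.polynomial.legendre.leggauss(15)

# The nodes are symmetric about 0 and cluster toward the endpoints;
# the middle (8th) node sits at 0 to machine precision.
print(np.round(x15, 3))

# 15 nodes, 30 degrees of freedom: exact for polynomials up to degree 29.
exact = 2.0 / 29.0                   # integral of x^28 over [-1, 1]
approx = np.sum(w15 * x15 ** 28)
print(abs(approx - exact))           # on the order of machine epsilon
```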

My question relates to why 6 nodes were chosen. As the least accurate part of the calculation, it seems like you'd want to use more nodes. Using 7 nodes seems more natural. Why skip the middle node when you've already skipped the two adjacent nodes? The original 15 nodes are already distributed non-uniformly towards the endpoints.

Code:
 X X X X X X X  -  7 nodes, df=7,  exact for degree 6 polynomial

Or why not use even more, say 13 nodes:

Code:
XXXXXX X XXXXXX - 13 nodes, df=13, exact for degree 12 polynomial

So my question boils down to, "Why 6?"

My second question pertains to the error calculation. Using ERR1 as the difference between the 15 and 14 point calculation, and ERR2 as the difference between the 15 and 6 point calculation, the error is estimated as

error = ERR1*(ERR1/ERR2)^2.

This seems to work well, but I don't see how it was derived. Why squared? Or perhaps it was simply found by experimentation to work reasonably well.
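The error estimate can be tried out numerically. I don't have the Prime's actual 14- and 6-point weight tables, so the sketch below reconstructs weights by solving the exactness conditions on the node subsets shown in the diagrams; the 6-point subset (nodes 2, 4, 6, 10, 12, 14 of the 15) is read off the "X X X   X X X" row, and the Runge test integrand is an arbitrary choice — all of this is assumption, not the article's method:

```python
import numpy as np

x15, w15 = np.polynomial.legendre.leggauss(15)

def embedded_weights(nodes, degree):
    """Weights making the rule on `nodes` exact for x^0 .. x^degree on [-1, 1]."""
    V = np.vander(nodes, degree + 1, increasing=True).T
    # moment of x^k over [-1, 1]: 2/(k+1) for even k, 0 for odd k
    m = np.array([2.0 / (k + 1) if k % 2 == 0 else 0.0
                  for k in range(degree + 1)])
    return np.linalg.lstsq(V, m, rcond=None)[0]

x14 = np.delete(x15, 7)            # drop the center node
w14 = embedded_weights(x14, 13)    # 14 nodes, exact through degree 13
x6 = x15[[1, 3, 5, 9, 11, 13]]     # subset guessed from the diagram
w6 = embedded_weights(x6, 5)       # 6 nodes, exact through degree 5

f = lambda x: 1.0 / (1.0 + 25.0 * x * x)   # Runge's function
exact = 2.0 / 5.0 * np.arctan(5.0)

I15 = np.sum(w15 * f(x15))
ERR1 = I15 - np.sum(w14 * f(x14))
ERR2 = I15 - np.sum(w6 * f(x6))
est = ERR1 * (ERR1 / ERR2) ** 2

print("true error:", abs(I15 - exact))
print("estimate  :", abs(est))
```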

Thanks.
05-31-2019, 06:10 AM
Post: #2
 parisse Senior Member Posts: 1,016 Joined: Dec 2013
RE: numerical integration algorithm details
I think the reason it works well is that the error estimate ERR1*(ERR1/ERR2)^2 is in h^30, where h is the step, like the theoretical error of the most accurate method.
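A sketch of why the exponents work out, reading the remark above through the df values in the diagram (my own back-of-the-envelope, not from the article): suppose ERR1 and ERR2 scale with the step h according to the degrees of freedom of the embedded rules,

Code:
ERR1 ~ C1*h^14    (14-point rule, df=14)
ERR2 ~ C2*h^6     ( 6-point rule, df=6)

Then

Code:
ERR1*(ERR1/ERR2)^2 = ERR1^3/ERR2^2 ~ (C1^3/C2^2)*h^(3*14 - 2*6) = C*h^30

which matches df=30 of the 15-point rule. Read backwards, this also suggests an answer to "Why 6?": with the 14-point rule fixed, matching order 30 forces 3*14 - 2*b = 30, i.e. b = 6.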
05-31-2019, 11:22 AM (This post was last modified: 05-31-2019 11:22 AM by yangyongkang.)
Post: #3
 yangyongkang Junior Member Posts: 36 Joined: Dec 2018
RE: numerical integration algorithm details
Numerical integration algorithms are described in the Wolfram reference documentation for NIntegrate:

* NIntegrate uses symbolic preprocessing to resolve function symmetries, expand piecewise functions into cases, and decompose regions specified by inequalities into cells.
* With Method->Automatic, NIntegrate uses "GaussKronrod" in one dimension, and "MultiDimensional" otherwise.
* If an explicit setting for MaxPoints is given, NIntegrate by default uses Method->"QuasiMonteCarlo".
* "GaussKronrod": adaptive Gaussian quadrature with error estimation based on evaluation at Kronrod points.
* "Trapezoidal": elementary trapezoidal method.
* "Oscillatory": transformation to handle certain integrals containing trigonometric and Bessel functions.

study hard, improve every day
05-31-2019, 01:26 PM
Post: #4
 compsystems Senior Member Posts: 1,164 Joined: Dec 2013
RE: numerical integration algorithm details
In some areas Xcas's symbolic computation kernel cannot be compared with the Mathematica kernel; Wolfram has many people working on it, while Xcas has only BP and a few other volunteers. Enthusiasts are sought to increase the power of Xcas, for example so that Xcas can read external data from the console like most languages.

contributions to Giac computer algebra system
https://github.com/marohnicluka/giac

Quote:marohnicluka contributions to Giac/Xcas computer algebra system, including:

graph theory (a comprehensive set of algorithms with over 150 commands, see doc/graphtheory-user_manual.pdf for details)
signal processing and audio tools (creating wave files, convolution and (auto)correlation, filtering, windowing, resampling, noise removal)
optimization in multivariate calculus (finding local and global extrema)
calculus of variations
(mixed integer) linear programming, transportation problem solver
curve interpolation (Thiele's algorithm, trigonometric polynomial fitting, Remez algorithm)
simplification of trigonometric expressions
statistics (continuous and discrete random variables with sampling, kernel density estimation, maximum-likelihood fitting)
ordinary differential equations (an implementation of Kovacic's algorithm, a second-degree boundary value problems solver)

GOOD ENOUGH IS THE ENEMY OF EXCELLENCE.