11-10-2018, 10:22 PM

Hi All,

I stumbled on an article discussing the advantages of using Bernstein polynomials for curve fitting. Unlike ordinary polynomials, Bernstein polynomials give a smooth fit without the wild oscillations that classical polynomial fits exhibit when the order is high.

In summary, a Bernstein polynomial fit of order n works as follows:

1) Map the x observations into [0, 1] using the minimum and maximum of the original observations:

x = (x_orig - min(x_orig))/(max(x_orig) - min(x_orig))

2) Build the X matrix where the matrix elements are:

element(i,j) = combination(n, j-1) * x(i)^(j-1) * (1-x(i))^(n-j+1), for j = 1 to n+1 and i = 1 to number_of_observations

3) Append an extra last column of ones to matrix X; it is needed to generate the constant term.

4) Solve X*c = y in the least-squares sense to obtain the regression coefficients c. The entities c and y are vectors.
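For anyone who wants to try this outside Excel, here is a minimal Python/NumPy sketch of the four steps above. The function names (`bernstein_design`, `bernstein_fit`) are my own; note that `lstsq` is used for step 4 because the Bernstein basis already sums to 1, so the appended ones column makes X rank-deficient and a plain solve would fail:

```python
import numpy as np
from math import comb

def bernstein_design(x, n):
    # Steps 2-3: Bernstein basis columns B(j) = C(n,j) x^j (1-x)^(n-j)
    # for j = 0..n, plus a trailing column of ones for the constant term
    cols = [comb(n, j) * x**j * (1 - x)**(n - j) for j in range(n + 1)]
    cols.append(np.ones_like(x))
    return np.column_stack(cols)

def bernstein_fit(x_orig, y, n):
    x_orig = np.asarray(x_orig, dtype=float)
    # Step 1: map the observations into [0, 1]
    x = (x_orig - x_orig.min()) / (x_orig.max() - x_orig.min())
    X = bernstein_design(x, n)
    # Step 4: least-squares solve; lstsq returns the minimum-norm
    # solution, which copes with the rank deficiency from the ones column
    c, *_ = np.linalg.lstsq(X, y, rcond=None)
    return c, x
```

Fitted values can then be recovered as `bernstein_design(x, n) @ c` on the mapped x.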

I have tested the above concept in Excel and obtained satisfactory Bernstein polynomial fits.
