Handy Polynomial Fitting with Bernstein Polynomials
11-12-2018, 05:39 AM
Post: #7
RE: Handy Polynomial Fitting with Bernstein Polynomials
(11-11-2018 01:51 PM)Namir Wrote:  I was doing a regression with Bernstein basis polynomials terms.

There's nothing wrong with using them as a basis for regression.
However, there's no benefit compared to using the ordinary polynomial basis \(1, x, x^2, \cdots\).

To see why, let's assume \(M\) is the matrix that describes the change of basis.
As before, we try to solve the overdetermined equation \(A\ x = b\).
Instead we solve the normal equations: \(A^\top \ A\ x = A^\top \ b\)

Given that \(A = C\ M\) we conclude that \(A^\top = M^\top \ C^\top\).
Therefore:

\(A^\top \ A = M^\top \ C^\top \ C\ M\)

and

\( A^\top \ b = M^\top \ C^\top\ b\)

And so the equation becomes:

\(M^\top \ C^\top \ C\ M\ x = M^\top \ C^\top\ b\)

Since \(M\) is a change of basis it is invertible, and so is \(M^\top\); thus we can drop \(M^\top\) on both sides:

\(C^\top \ C\ M\ x = C^\top\ b\)

And with the definition \(y = M\ x\) this becomes:

\(C^\top \ C\ y = C^\top\ b\)

Thus it doesn't matter which basis is used to calculate the regression.
You could also use Chebyshev polynomials if you prefer them.
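A quick numerical sanity check of this claim (my own sketch; variable names are mine): fitting the same data in the monomial basis and in the Chebyshev basis gives different coefficients but identical fitted values, since both bases span the same space of polynomials.

```python
import numpy as np
from numpy.polynomial import chebyshev

# Sample sin(pi*x/2) on [0, 1], as in the example below
m, n = 10, 2
xs = np.array([j / m for j in range(m + 1)])
b = np.sin(xs * np.pi / 2)

# Monomial design matrix: columns 1, x, x^2
C = np.vander(xs, n + 1, increasing=True)
# Chebyshev design matrix: columns T_0(x), T_1(x), T_2(x)
T = chebyshev.chebvander(xs, n)

y = np.linalg.solve(C.T @ C, C.T @ b)   # monomial coefficients
c = np.linalg.solve(T.T @ T, T.T @ b)   # Chebyshev coefficients

# Different coefficients, same fitted values
assert np.allclose(C @ y, T @ c)
```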

Example:

To make it a bit more interesting we will try to find a best fit for the sine function.
Thus we use:
Code:
from math import sin, pi
from numpy import array

m = 10    # 11 sample points 0, 0.1, ..., 1.0
b = array([sin(x*pi/2) for j in range(m+1) for x in [j/m]])

>>> b
array([0.        , 0.15643447, 0.30901699, 0.4539905 , 0.58778525,
       0.70710678, 0.80901699, 0.89100652, 0.95105652, 0.98768834,
       1.        ])

The transformation matrix \(M\) is defined as:
Code:
M = array([[ 1,  0,  0],
           [-2,  2,  0],
           [ 1, -2,  1]])
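As a check (my own sketch, not from the original post): column \(j\) of \(M\) holds the monomial coefficients of the degree-2 Bernstein polynomial \(B_j\), where \(B_0 = (1-x)^2\), \(B_1 = 2x(1-x)\) and \(B_2 = x^2\).

```python
import numpy as np

M = np.array([[ 1,  0,  0],
              [-2,  2,  0],
              [ 1, -2,  1]])

x = 0.3  # any test point
monomials = np.array([1, x, x**2])
bernstein = np.array([(1 - x)**2, 2*x*(1 - x), x**2])

# The monomial row vector times M reproduces the Bernstein row vector,
# which is exactly the relation A = C M applied to a single row.
assert np.allclose(monomials @ M, bernstein)
```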

And then we use the ordinary polynomials instead of the Bernstein polynomials:
Code:
n = 2     # degree of the fitted polynomial
C = array([[x**k for k in range(n+1)] for j in range(m+1) for x in [j/m]])

>>> C
array([[1.  , 0.  , 0.  ],
       [1.  , 0.1 , 0.01],
       [1.  , 0.2 , 0.04],
       [1.  , 0.3 , 0.09],
       [1.  , 0.4 , 0.16],
       [1.  , 0.5 , 0.25],
       [1.  , 0.6 , 0.36],
       [1.  , 0.7 , 0.49],
       [1.  , 0.8 , 0.64],
       [1.  , 0.9 , 0.81],
       [1.  , 1.  , 1.  ]])


We can then solve the system with:
Code:
from numpy.linalg import solve

y = solve(C.T @ C, C.T @ b)
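As an aside (my own sketch, not part of the original post): forming the normal equations squares the condition number of \(C\), so for larger or more ill-conditioned problems `numpy.linalg.lstsq` solves the same least-squares problem more stably. Both give the same result here:

```python
import numpy as np

m, n = 10, 2
xs = np.array([j / m for j in range(m + 1)])
b = np.sin(xs * np.pi / 2)
C = np.vander(xs, n + 1, increasing=True)  # columns 1, x, x^2

# Normal equations vs. direct least-squares solver
y_normal = np.linalg.solve(C.T @ C, C.T @ b)
y_lstsq, *_ = np.linalg.lstsq(C, b, rcond=None)

assert np.allclose(y_normal, y_lstsq)
```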

The result is:

>>> y
array([-0.01699491, 1.85988312, -0.82839241])


But we can do the same using the Bernstein polynomials (using the definition of A from my previous post):

Code:
x = solve(A.T @ A, A.T @ b)
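The matrix \(A\) from the previous post isn't repeated here, but since \(A = C\,M\) (as stated above), it can be rebuilt from the matrices already defined; a self-contained sketch, with the relation \(y = M\,x\) checked at the end:

```python
import numpy as np

m, n = 10, 2
xs = np.array([j / m for j in range(m + 1)])
b = np.sin(xs * np.pi / 2)

C = np.array([[x**k for k in range(n + 1)] for x in xs])
M = np.array([[ 1,  0,  0],
              [-2,  2,  0],
              [ 1, -2,  1]])
A = C @ M  # Bernstein design matrix, since A = C M

x = np.linalg.solve(A.T @ A, A.T @ b)   # Bernstein coefficients
y = np.linalg.solve(C.T @ C, C.T @ b)   # monomial coefficients

# The two solutions are related by the change of basis: y = M x
assert np.allclose(M @ x, y)
```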

Here the result is:

>>> x
array([-0.01699491, 0.91294665, 1.0144958 ])


Now we can check that \(y = M\ x\) holds:

>>> M @ x
array([-0.01699491, 1.85988312, -0.82839241])



We can use WolframAlpha to plot this function and get:

[Image: attachment.php?aid=6586]


(11-10-2018 10:22 PM)Namir Wrote:  I stumbled on an article discussing the advantages of using Bernstein polynomials for curve fitting. Unlike regular polynomials, the Bernstein polynomials offer smooth fitting with no wild deviations that occur when the order of the fitting classical polynomial is high.

This happens e.g. with Lagrange interpolation and is called Runge's phenomenon.
To mitigate the problem, both least-squares fitting and approximation by Bernstein polynomials are mentioned.
However, these methods are not the same.

Cheers
Thomas


RE: Handy Polynomial Fitting with Bernstein Polynomials - Thomas Klemm - 11-12-2018 05:39 AM