(11C) Bairstow's Method
06-14-2015, 06:27 PM
(This post was last modified: 08-14-2024 05:36 PM by Thomas Klemm.)
Bairstow's Method
Links

Wikipedia
MathWorld
Numerical Recipes in Fortran 77, 9.5 Roots of Polynomials

Algorithm

We are looking for a quadratic factor T(x) of the polynomial P(x). In general there will be a linear remainder R(x) after the division:

\(P(x) = Q(x) \cdot T(x) + R(x)\)

\(P(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0\)
\(T(x) = x^2 + p x + q\)
\(Q(x) = b_n x^{n-2} + b_{n-1} x^{n-3} + \cdots + b_4 x^2 + b_3 x + b_2\)
\(R(x) = b_1 ( x + p ) + b_0\)

If we set \(b_{n+1}\equiv b_{n+2}\equiv 0\), then \(\forall k\leq n\):

\(b_k = a_k - b_{k+1} p - b_{k+2} q\)

The two coefficients of the remainder R(x) depend on p and q:

\(b_0 = b_0(p, q)\)
\(b_1 = b_1(p, q)\)

To make T(x) a factor of P(x), the remainder R(x) must vanish. Let's assume that we already have p and q close to the solution. We want to know how to change these values to get a better one. Therefore we set the first-order Taylor series expansion to 0:

\(b_0(p + \delta p, q + \delta q) \approx b_0(p, q) + \frac{\partial b_0}{\partial p} \delta p + \frac{\partial b_0}{\partial q} \delta q = 0\)
\(b_1(p + \delta p, q + \delta q) \approx b_1(p, q) + \frac{\partial b_1}{\partial p} \delta p + \frac{\partial b_1}{\partial q} \delta q = 0\)

It turns out that the partial derivatives of \(b_k\) with respect to p and q can be calculated with a similar recurrence formula:

\(c_{k+1} = - \frac{\partial b_k}{\partial p}\)
\(c_{k+2} = - \frac{\partial b_k}{\partial q}\)
\(c_k = b_k - c_{k+1} p - c_{k+2} q\)

Thus we have to solve:

\(\begin{matrix} c_1 \delta p + c_2 \delta q &= b_0 \\ c_2 \delta p + c_3 \delta q &= b_1 \end{matrix}\)

Using Linear Regression

We have to solve a 2-dimensional linear equation:

\(\begin{bmatrix} c_1 & c_2 \\ c_2 & c_3 \end{bmatrix}\cdot\begin{bmatrix} \delta p \\ \delta q \end{bmatrix}=\begin{bmatrix} b_0 \\ b_1 \end{bmatrix}\)

But since the matrix is symmetric we can use the built-in function L.R. for linear regression. We know that it solves the following equation for A and B:

\(\begin{bmatrix} \sum x^2 & \sum x \\ \sum x & n \end{bmatrix}\cdot\begin{bmatrix} A \\ B \end{bmatrix}=\begin{bmatrix} \sum xy \\ \sum y \end{bmatrix}\)

Cramer's Rule gives the solutions:

\(A=\frac{\begin{vmatrix} \sum xy & \sum x \\ \sum y & n \end{vmatrix}}{\begin{vmatrix} \sum x^2 & \sum x \\ \sum x & n \end{vmatrix}}=\frac{n\sum xy-\sum x\sum y}{n\sum x^2-(\sum x)^2}\)

\(B=\frac{\begin{vmatrix} \sum x^2 & \sum xy \\ \sum x & \sum y \end{vmatrix}}{\begin{vmatrix} \sum x^2 & \sum x \\ \sum x & n \end{vmatrix}}=\frac{\sum y\sum x^2-\sum x\sum xy}{n\sum x^2-(\sum x)^2}\)

For this to work we have to fill the registers R0 - R5 with the corresponding values of the equation:

\(\begin{matrix} n & \rightarrow & R_0 \\ \sum x & \rightarrow & R_1 \\ \sum x^2 & \rightarrow & R_2 \\ \sum y & \rightarrow & R_3 \\ \sum y^2 & \rightarrow & R_4 \\ \sum xy & \rightarrow & R_5 \end{matrix}\)

This mapping leaves us with the following linear equation:

\(\begin{bmatrix} R_2 & R_1 \\ R_1 & R_0 \end{bmatrix}\cdot\begin{bmatrix} Y \\ X \end{bmatrix}=\begin{bmatrix} R_5 \\ R_3 \end{bmatrix}\)

Example:

\(\begin{bmatrix} 2 & -3 \\ -3 & 5 \end{bmatrix}\cdot\begin{bmatrix} 2 \\ -3 \end{bmatrix}=\begin{bmatrix} 13 \\ -21 \end{bmatrix}\)

Code:
CLEAR Σ
2 STO 2
-3 STO 1
5 STO 0
13 STO 5
-21 STO 3
L.R.

Y: 2
X: -3
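We can double-check this off the calculator. The snippet below is just a Python sketch (not part of the 11C program): it plugs the register values of the example into the Cramer's Rule formulas above and reproduces Y: 2 and X: -3.

Code:
# Registers of the example: n = R0, Σx = R1, Σx² = R2, Σy = R3, Σxy = R5
n, Sx, Sx2, Sy, Sxy = 5, -3, 2, -21, 13

det = n * Sx2 - Sx * Sx          # common denominator of A and B
A = (n * Sxy - Sx * Sy) / det    # slope, returned in Y
B = (Sy * Sx2 - Sx * Sxy) / det  # intercept, returned in X

print(A, B)                      # 2.0 -3.0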
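Putting the whole iteration together, here is a minimal Python sketch of the algorithm above, solving the symmetric 2x2 system with Cramer's Rule just like the L.R. trick does. The function names bairstow_step and bairstow are mine, the coefficients are expected in decreasing powers, and eps plays the role of the FIX-n stop criterion; treat it as an illustration, not a port of the 11C program.

Code:
from math import hypot

def bairstow_step(a, p, q):
    # Synthetic division of P(x) by x^2 + p x + q:
    #   b_k = a_k - b_{k+1} p - b_{k+2} q,  with b_{n+1} = b_{n+2} = 0
    # a = [a_n, ..., a_0] in decreasing powers; returns b = [b_n, ..., b_0].
    b, bk1, bk2 = [], 0.0, 0.0
    for ak in a:
        bk = ak - bk1 * p - bk2 * q
        b.append(bk)
        bk1, bk2 = bk, bk1
    # The same recurrence applied to the b's gives the c's, but it
    # stops at c_1 while the b's go down to b_0:
    #   c_k = b_k - c_{k+1} p - c_{k+2} q
    c, ck1, ck2 = [], 0.0, 0.0
    for bk in b[:-1]:
        ck = bk - ck1 * p - ck2 * q
        c.append(ck)
        ck1, ck2 = ck, ck1
    c1, c2 = c[-1], c[-2]
    c3 = c[-3] if len(c) > 2 else 0.0   # c_3 = 0 for a plain quadratic
    b0, b1 = b[-1], b[-2]
    # Solve the symmetric system by Cramer's Rule:
    #   c1 dp + c2 dq = b0
    #   c2 dp + c3 dq = b1
    det = c1 * c3 - c2 * c2
    dp = (b0 * c3 - b1 * c2) / det
    dq = (c1 * b1 - c2 * b0) / det
    return dp, dq, b

def bairstow(a, p=0.0, q=0.0, eps=1e-9, max_iter=100):
    # Improve p and q until |(dp, dq)| < eps (cf. lines 032 - 037).
    for _ in range(max_iter):
        dp, dq, b = bairstow_step(a, p, q)
        p, q = p + dp, q + dq
        if hypot(dp, dq) < eps:
            break
    # Repeat the division at the converged p, q (cf. lines 039 - 061)
    # and return the quotient Q(x) = b_n x^(n-2) + ... + b_2.
    _, _, b = bairstow_step(a, p, q)
    return p, q, b[:-2]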
Registers

\(\begin{matrix} R_0 & c = c_j \\ R_1 & {c}' = c_{j+1} \\ R_2 & {c}'' = c_{j+2} \\ R_3 & b = b_i \\ R_4 & \\ R_5 & {b}' = b_{i+1} \\ R_6 & \text{index} = 9.fff \\ R_7 & p \\ R_8 & q \\ R_9 & a_n \\ R_{.0} & a_{n-1} \\ R_{.1} & a_{n-2} \\ R_{.2} & \cdots \\ R_{.3} & \cdots \\ \end{matrix}\)

Program

Code:
001 - 42,21,11  LBL A
...
032 - 44,40, 7  STO + 7
...

Description

Initialization: k = n
Lines 002 - 004:
\((b, {b}', c, {c}', {c}'') \leftarrow (0, 0, 0, 0, 0)\)
\(I \leftarrow\) index

Polynomial division
Lines 005 - 030: At the end of the loop \(b = b_0\) but \(c = c_1\)! That's why, within the loop, the second division (for the c's) is performed before the first (for the b's).

Iteration k+1 → k
Lines 006 - 017: \((c, {c}', {c}'') \leftarrow (b - c p - {c}' q, c, {c}')\)
Lines 018 - 028: \((b, {b}') \leftarrow (a_k - b p - {b}' q, b)\)

Solve the linear equation
Line 031: The trick using linear regression only works since the matrix is symmetric.

\(\begin{bmatrix} c & {c}' \\ {c}' & {c}'' \end{bmatrix}\cdot\begin{bmatrix} \delta p \\ \delta q \end{bmatrix}=\begin{bmatrix} b \\ {b}' \end{bmatrix}\)

Find improved values for p and q
Lines 032 - 034:
\(p \leftarrow p + \delta p\)
\(q \leftarrow q + \delta q\)

Stop criterion
Lines 035 - 037:
\(\left | (\delta p, \delta q) \right | < \varepsilon\)
\(\varepsilon = 10^{-n}\) if the display is set to FIX n

Polynomial division
Lines 039 - 061: We have to repeat the division here since we didn't keep \(b_k\) in lines 018 - 028.

Quadratic Solver

This program solves the quadratic equation:

\(T(x)=x^2+px+q=0\)

Code:
063 - 42,21,12  LBL B
...

Just be aware that this program can't find complex roots; instead, an Error 0 will be displayed. However, it's easy to find the complex solutions. Just use:

CHS
\(\sqrt{x}\)

The solutions then are: \(Y \pm iX\)

(A Python cross-check of this solver and of the complete run follows after the Summary.)

Example

\(P(x)=2x^5-9x^4+15x^3+65x^2-267x+234=0\)

Insert coefficients
Code:
2 STO 9
-9 STO .0
15 STO .1
65 STO .2
-267 STO .3
234 STO .4

Initialization
Code:
9.014 STO 6

Run program
Code:
GSB A
-52.0000

Code:
GSB B
1.5000

Conclusion

\(2x^5-9x^4+15x^3+65x^2-267x+234=\)
\((x^2+1.5x-4.5)(2x^3-12x^2+42x-52)\)

Solutions

For \(x^2+1.5x-4.5=0\):
\(x_1=1.5\)
\(x_2=-3\)

Initialize guess
Code:
1 STO 7

Run program again
Code:
GSB A
-4.0000

Code:
GSB B
Error 0

Code:
RCL .0
-4.0000

Conclusion

\(2x^3-12x^2+42x-52=\)
\((x^2-4x+13)(2x-4)\)

Solutions

For \(x^2-4x+13=0\):
\(x_3=2+3i\)
\(x_4=2-3i\)

For \(2x-4=0\):
\(x_5=2\)

Summary

Factors

\(2x^5-9x^4+15x^3+65x^2-267x+234=\)
\((x^2+1.5x-4.5)(x^2-4x+13)(2x-4)=\)
\((x-1.5)(x+3)(x^2-4x+13)\,2(x-2)=\)
\((2x-3)(x+3)(x^2-4x+13)(x-2)\)

Solutions

\(x_1=1.5\)
\(x_2=-3\)
\(x_3=2+3i\)
\(x_4=2-3i\)
\(x_5=2\)
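To complete the picture off-calculator, here is a Python counterpart of the quadratic solver LBL B (the name quadratic_roots is mine). Where the 11C would show Error 0 it applies the CHS, \(\sqrt{x}\) trick directly and returns the complex pair \(Y \pm iX\).

Code:
def quadratic_roots(p, q):
    # Solve T(x) = x^2 + p x + q = 0.
    d = (p / 2) ** 2 - q             # discriminant of the monic quadratic
    if d >= 0:
        r = d ** 0.5
        return -p / 2 + r, -p / 2 - r
    # Negative discriminant: negate (CHS) and take the root (sqrt x);
    # the solutions are then Y +/- iX.
    r = (-d) ** 0.5
    return complex(-p / 2, r), complex(-p / 2, -r)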
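Finally, a run of these sketches on the example above. The starting values are my reading of the posted keystrokes: the first call starts from p = q = 0 (R7 and R8 presumably cleared), and the second reuses q = -4.5, which is still in R8 when 1 STO 7 is pressed. With those guesses the output should match the factorization found above, up to rounding.

Code:
# P(x) = 2x^5 - 9x^4 + 15x^3 + 65x^2 - 267x + 234
a = [2, -9, 15, 65, -267, 234]

p, q, quotient = bairstow(a)              # start from p = q = 0
print(p, q)                               # approx 1.5 -4.5: x^2 + 1.5x - 4.5
print(quotient)                           # approx [2, -12, 42, -52]
print(quadratic_roots(p, q))              # approx (1.5, -3.0)

p, q, quotient = bairstow(quotient, p=1.0, q=q)   # "1 STO 7", q unchanged
print(p, q)                               # approx -4.0 13.0: x^2 - 4x + 13
print(quotient)                           # approx [2, -4]
print(quadratic_roots(p, q))              # approx ((2+3j), (2-3j))

# The remaining linear factor 2x - 4 gives x_5 = 2.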