(15C) Bairstow's Method
02-25-2022, 01:20 AM
(This post was last modified: 08-16-2024 10:36 AM by Thomas Klemm.)
Post: #1
(15C) Bairstow's Method
Bairstow's Method
Links

Wikipedia
MathWorld

Algorithm

We are looking for a quadratic factor T(x) of the polynomial P(x). In general there will be a linear remainder R(x) after the division:

\(P(x) = Q(x) \cdot T(x) + R(x)\)

\(P(x) = a_n x^n + a_{n-1} x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0\)
\(T(x) = x^2 + p x + q\)
\(Q(x) = b_n x^{n-2} + b_{n-1} x^{n-3} + \cdots + b_4 x^2 + b_3 x + b_2\)
\(R(x) = b_1 ( x + p ) + b_0\)

If we set \(b_{n+1}\equiv b_{n+2}\equiv 0\), then \(\forall k\leq n\):

\(b_k = a_k - b_{k+1} p - b_{k+2} q\)

The two coefficients of the remainder R(x) depend on p and q:

\(b_0 = b_0(p, q)\)
\(b_1 = b_1(p, q)\)

To make T(x) a factor of P(x), the remainder R(x) must vanish. Let's assume that we already have p and q close to the solution. We want to know how to change these values to get a better solution. Therefore we set the first-order Taylor series expansion to 0:

\(b_0(p + \delta p, q + \delta q) \approx b_0(p, q) + \frac{\partial b_0}{\partial p} \delta p + \frac{\partial b_0}{\partial q} \delta q = 0\)
\(b_1(p + \delta p, q + \delta q) \approx b_1(p, q) + \frac{\partial b_1}{\partial p} \delta p + \frac{\partial b_1}{\partial q} \delta q = 0\)

It turns out that the partial derivatives of \(b_k\) with respect to p and q can be calculated with a similar recurrence formula:

\(c_{k+1} = - \frac{\partial b_k}{\partial p}\)
\(c_{k+2} = - \frac{\partial b_k}{\partial q}\)
\(c_k = b_k - c_{k+1} p - c_{k+2} q\)

Thus we have to solve:

\(\begin{matrix} c_1 \delta p + c_2 \delta q &= b_0 \\ c_2 \delta p + c_3 \delta q &= b_1 \end{matrix}\)

Using Linear Regression

We have to solve a 2-dimensional linear equation:

\(\begin{bmatrix} c_1 & c_2 \\ c_2 & c_3 \end{bmatrix}\cdot\begin{bmatrix} \delta p \\ \delta q \end{bmatrix}=\begin{bmatrix} b_0 \\ b_1 \end{bmatrix}\)

But since the matrix is symmetric we can use the built-in function L.R. for linear regression.
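The recurrences above fit in a few lines of Python. This is only an illustration, not part of the HP-15C program: it solves the 2x2 system directly with Cramer's rule instead of the L.R. trick.

```python
def bairstow_correction(a, p, q):
    """Corrections (dp, dq) for the trial quadratic factor x^2 + p*x + q.

    a -- polynomial coefficients [a_n, ..., a_1, a_0], highest degree first.
    """
    n = len(a) - 1
    b = [0.0] * (n + 3)          # b[k], padded so that b[n+1] = b[n+2] = 0
    c = [0.0] * (n + 3)          # c[k], same padding
    for k in range(n, -1, -1):
        b[k] = a[n - k] - b[k + 1] * p - b[k + 2] * q
        c[k] = b[k] - c[k + 1] * p - c[k + 2] * q
    # Solve  c1*dp + c2*dq = b0,  c2*dp + c3*dq = b1  by Cramer's rule
    det = c[1] * c[3] - c[2] * c[2]
    dp = (b[0] * c[3] - b[1] * c[2]) / det
    dq = (b[1] * c[1] - b[0] * c[2]) / det
    return dp, dq
```

At an exact factor both remainder coefficients vanish, so the corrections are zero; iterating p ← p + δp, q ← q + δq from a nearby guess converges quadratically, as Newton's method does.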
We know that the following equation is solved for A and B:

\(\begin{bmatrix} \sum x^2 & \sum x \\ \sum x & n \end{bmatrix}\cdot\begin{bmatrix} A \\ B \end{bmatrix}=\begin{bmatrix} \sum xy \\ \sum y \end{bmatrix}\)

Cramer's Rule gives the solutions:

\(A=\frac{\begin{vmatrix} \sum xy & \sum x \\ \sum y & n \end{vmatrix}}{\begin{vmatrix} \sum x^2 & \sum x \\ \sum x & n \end{vmatrix}}=\frac{n\sum xy-\sum x\sum y}{n\sum x^2-(\sum x)^2}\)

\(B=\frac{\begin{vmatrix} \sum x^2 & \sum xy \\ \sum x & \sum y \end{vmatrix}}{\begin{vmatrix} \sum x^2 & \sum x \\ \sum x & n \end{vmatrix}}=\frac{\sum y\sum x^2-\sum x\sum xy}{n\sum x^2-(\sum x)^2}\)

For this to work we have to fill the registers R2 - R7 with the corresponding values of the equation:

\(\begin{matrix} n & \rightarrow & R_2 \\ \sum x & \rightarrow & R_3 \\ \sum x^2 & \rightarrow & R_4 \\ \sum y & \rightarrow & R_5 \\ \sum y^2 & \rightarrow & R_6 \\ \sum xy & \rightarrow & R_7 \end{matrix}\)

This mapping leaves us with the following linear equation:

\(\begin{bmatrix} R_4 & R_3 \\ R_3 & R_2 \end{bmatrix}\cdot\begin{bmatrix} Y \\ X \end{bmatrix}=\begin{bmatrix} R_7 \\ R_5 \end{bmatrix}\)

Example:

\(\begin{bmatrix} 2 & -3 \\ -3 & 5 \end{bmatrix}\cdot\begin{bmatrix} 2 \\ -3 \end{bmatrix}=\begin{bmatrix} 13 \\ -21 \end{bmatrix}\)

CLEAR Σ
2 STO 4
-3 STO 3
5 STO 2
13 STO 7
-21 STO 5
L.R.

Y: 2
X: -3

Registers

\(\begin{matrix} R_0 & p \\ R_1 & q \\ R_2 & c = c_j \\ R_3 & {c}′ = c_{j+1} \\ R_4 & {c}'' = c_{j+2} \\ R_5 & b = b_i \\ R_6 & \\ R_7 & {b}′ = b_{i+1} \\ R_8 & \text{index} = 9.fff \\ R_9 & a_n \\ R_{.0} & a_{n-1} \\ R_{.1} & a_{n-2} \\ R_{.2} & \cdots \\ R_{.3} & \cdots \\ \end{matrix}\)

Program

Code:
001 { 42 21 11 } f LBL A

Description

Initialization
k = n

Lines 002 - 004
\((b, {b}′, c, {c}′, {c}'') \leftarrow (0, 0, 0, 0, 0)\)
\(I \leftarrow\) index

Partial Derivatives

Lines 005 - 026
At the end of the loop \(b = b_0\) but \(c = c_1\)! That's why the second division is performed before the first.
Iteration k+1 → k

Lines 006 - 015
\((c, {c}′, {c}'') \leftarrow (b - c p - {c}′ q, c, {c}′)\)

Lines 016 - 024
\((b, {b}′) \leftarrow (a_k - b p - {b}′ q, b)\)

Solve the linear equation

Line 027
The trick using linear regression only works since the matrix is symmetric.

\(\begin{bmatrix} c & {c}' \\ {c}' & {c}'' \end{bmatrix}\cdot\begin{bmatrix} \delta p \\ \delta q \end{bmatrix}=\begin{bmatrix} b \\ {b}' \end{bmatrix}\)

Find improved values for p and q

Lines 028 - 030
\(p \leftarrow p + \delta p\)
\(q \leftarrow q + \delta q\)

Stop Criterion

Lines 031 - 033
\(\left | (\delta p, \delta q) \right |< \varepsilon\)
\(\varepsilon = 10^{-n}\) if the display is set to FIX n

Polynomial Division

Lines 035 - 057
We have to repeat the division here since we didn't keep \(b_k\) in lines 016 - 024.

Quadratic Solver

This program solves the quadratic equation:
\(T(x)=x^2+px+q=0\)

Code:
059 { 42 21 12 } f LBL B

Just be aware that this program can't find complex roots. Instead an Error 0 will be displayed. However, it's easy to find the complex solutions. Just use:

CHS
\(\sqrt{x}\)

The solutions then are: \(Y \pm iX\)

Example

\(P(x)=2x^5-9x^4+15x^3+65x^2-267x+234=0\)

Insert coefficients

Code:
2 STO 9

Initialization

Code:
9.014 STO 8

Alternatively use:

Code:
MATRIX 1

Run program

Code:
GSB A

-52.0000

Code:
GSB B

1.5000

Conclusion

\(2x^5-9x^4+15x^3+65x^2-267x+234=\)
\((x^2+1.5x-4.5)(2x^3-12x^2+42x-52)\)

Solutions

For \(x^2+1.5x-4.5=0\):
\(x_1=1.5\)
\(x_2=-3\)

Initialize guess

Code:
1 STO 0

Alternatively use:

Code:
MATRIX 1

Run program again

Code:
GSB A

-4.0000

Code:
GSB B

Error 0

Code:
RCL .0

-4.0000

Conclusion

\(2x^3-12x^2+42x-52=\)
\((x^2-4x+13)(2x-4)\)

Solutions

For \(x^2-4x+13=0\):
\(x_3=2+3i\)
\(x_4=2-3i\)

For \(2x-4=0\):
\(x_5=2\)

Summary

Factors

\(2x^5-9x^4+15x^3+65x^2-267x+234=\)
\((x^2+1.5x-4.5)(x^2-4x+13)(2x-4)=\)
\((x-1.5)(x+3)(x^2-4x+13)2(x-2)=\)
\((2x-3)(x+3)(x^2-4x+13)(x-2)\)

Solutions

\(x_1=1.5\)
\(x_2=-3\)
\(x_3=2+3i\)
\(x_4=2-3i\)
\(x_5=2\)
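The factorization and the complex pair are easy to cross-check on a computer. A quick sketch using numpy (polynomial multiplication is just coefficient convolution; this is a verification aid, not part of the 15C program):

```python
import numpy as np

# Multiply the three factors found above and compare with P(x)
f1 = [1, 1.5, -4.5]     # x^2 + 1.5x - 4.5
f2 = [1, -4, 13]        # x^2 - 4x + 13
f3 = [2, -4]            # 2x - 4
product = np.polymul(np.polymul(f1, f2), f3)
print(product)          # coefficients 2, -9, 15, 65, -267, 234

# The quadratic formula on f2 gives the complex pair 2 +/- 3i
print(np.roots(f2))
```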
02-25-2022, 01:58 AM
Post: #2
RE: (15C) Bairstow's Method
Thomas, hello!
It's good to see you're re-engaged here, with predictably impressive posts. I have some Fortran stuff for Bairstow's method from the '70s in college; I've not seen much of this since then. I hope to take a closer look this weekend and maybe even find the old Fortran listings.

--Bob Prosperi
02-25-2022, 07:04 AM
Post: #3
RE: (15C) Bairstow's Method
It is primarily a copy of the corresponding page of its sibling HP-11C.
Some of the registers had to be adjusted, and thanks to RCL arithmetic a few steps could even be saved. I used Torsten's excellent simulator to create the listing and a description in the attachment.

(02-25-2022 01:58 AM)rprosperi Wrote: I have some Fortran stuff for Bairstow's method from the '70s in college, I've not seen much of this since then. I hope to take a closer look this weekend and maybe even find the old Fortran listings.

You may search for "Numerical Recipes in Fortran 77" and find in chapter 9.5 Roots of Polynomials, on page 370 (page 400 of the PDF), a short description together with a Fortran program.
02-27-2022, 12:40 PM
Post: #4
RE: (15C) Bairstow's Method
(02-25-2022 01:20 AM)Thomas Klemm Wrote: \(P(x) = Q(x) \cdot T(x) + R(x)\)

quo(P*x², T) = Q*x² + quo(R*x², T) = Q*x² + b1*x + b0

This gets all the b's. The same idea gets all the c's.

HP Prime code (note: index 1 = head of list, index 0 = end of list)

Code:
#cas
// P,T = polynomial, test quadratic

Example from Gerald's "Applied Numerical Analysis", p. 33:

CAS> [1,1,1] // guess T = x^2+x+1
CAS> bairstow([1,-1.1,2.3,0.5,3.3],Ans)
[1,0.890309886867,1.06345302509]
[1,0.900024696323,1.10016130857]
[1,0.899999998585,1.09999999975]
[1,0.9,1.1]
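For readers without a Prime at hand, the quotient trick translates almost line for line into Python with numpy's polydiv. This bairstow_step is a sketch of the same idea, not the original #cas program (only a fragment of which is shown above):

```python
import numpy as np

def bairstow_step(P, T):
    """One Bairstow iteration via the polynomial-quotient trick.

    P -- coefficients, highest degree first; T = [1, p, q].
    """
    # quo(P*x^2, T) = Q*x^2 + b1*x + b0: the b's are the quotient coefficients
    b, _ = np.polydiv(np.append(P, [0.0, 0.0]), T)
    # the same trick applied to the b's yields the c's
    c, _ = np.polydiv(np.append(b, [0.0, 0.0]), T)
    b0, b1 = b[-1], b[-2]
    c1, c2, c3 = c[-2], c[-3], c[-4]
    det = c1 * c3 - c2 * c2          # solve the symmetric 2x2 system
    dp = (b0 * c3 - b1 * c2) / det
    dq = (b1 * c1 - b0 * c2) / det
    return [1.0, T[1] + dp, T[2] + dq]
```

Starting from the guess [1, 1, 1] and iterating on [1, -1.1, 2.3, 0.5, 3.3] reproduces the iterates shown above, converging to [1, 0.9, 1.1].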