Eigenvectors
12-27-2018, 11:40 AM
Post: #1
Eigenvectors
Commands exist to return the eigenvectors of a given matrix, but difficulties arise with these because of the distinction between exact and approximate entries within a matrix.

exact(matrix) can be helpful here, but it adds a layer of confusion, delaying productivity and becoming a frustrating part of the Prime CAS. For example, take the (Markov) matrix:

Code:
a:=[[0.9,0.2],[0.1,0.8]];        // entries with approximate values
a1:=[[9/10,2/10],[1/10,8/10]];   // the same matrix with exact values

eigenvects(a);
// ==> [[0.894427191,-0.7453559925],[0.4472135955,0.7453559925]]

eigenvects(exact(a));  // or, equivalently:
eigenvects(a1);
// ==> [[2,-1],[1,1]]  MUCH nicer to work with!

Wolfram Alpha returns [[2,-1],[1,1]] in BOTH cases: visibly easier to work with, and quickly comparable to the result obtained by hand when deriving the eigenvectors manually. Comments?

-Dale-
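For comparison, the same contrast can be sketched in Python, with NumPy standing in for the numeric route and SymPy for the exact one. This is only an illustration, not what the Prime runs internally; eigenvector ordering and signs may differ from the Prime's output.

Code:
import numpy as np
from sympy import Matrix, Rational

# Approximate entries: the numeric routine returns eigenvectors
# scaled to Euclidean norm 1 (column order and signs may vary).
a = np.array([[0.9, 0.2],
              [0.1, 0.8]])
vals, vecs = np.linalg.eig(a)
print(vecs)  # columns ~ [0.894, 0.447] and [-0.707, 0.707]

# Exact entries: the symbolic routine factors the characteristic
# polynomial and returns small integer eigenvectors.
a1 = Matrix([[Rational(9, 10), Rational(2, 10)],
             [Rational(1, 10), Rational(8, 10)]])
print(a1.eigenvects())  # eigenvalue 7/10 -> [-1, 1]; eigenvalue 1 -> [2, 1]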
12-27-2018, 12:58 PM
(This post was last modified: 12-27-2018 01:08 PM by parisse.)
Post: #2
RE: Eigenvectors
That's because the algorithms are not the same. If you compute the eigenvectors of an approximate matrix, they are deduced from the Schur factorization SCHUR(m), not by factoring the characteristic polynomial. The first matrix P returned by SCHUR is unitary; its first column is an eigenvector with Euclidean norm 1, and the second eigenvector is obtained as a linear combination of both column vectors of P that cancels the off-diagonal coefficient of the second matrix returned by SCHUR (so it is not normalized).

If there is something to improve here mathematically speaking, it is certainly not converting to exact like Wolfram Alpha does. It would be to normalize the first eigenvector (the one corresponding to eigenvalue 1) with respect to the L1 norm (the sum of absolute values), to get the invariant probability vector (2/3., 1/3.). You can check this by raising your matrix to a high power, like m^100; you can also get it by solving m*v=v if you know that m is a stochastic matrix (if m is approximate, the best way is to iterate). This also demonstrates that having too much confidence in Wolfram Alpha is not a good idea: it may look simpler, but it is not always appropriate. (This is of course true for any CAS, but more so for those with an interface where everything is supposed to be done for you.)
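To see concretely where the Prime's numbers come from, here is a sketch of that computation in Python with SciPy. It mirrors the description above but is only an illustration, not the Prime's code; the eigenvalue ordering and column signs produced by schur are not guaranteed.

Code:
import numpy as np
from scipy.linalg import schur

m = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# Real Schur factorization m = Z @ T @ Z.T, with Z orthogonal (unitary)
# and T upper triangular; the eigenvalues sit on the diagonal of T.
T, Z = schur(m)

# The first column of Z is a unit-norm eigenvector for T[0,0].
v1 = Z[:, 0]

# The second eigenvector combines the two columns of Z so that the
# off-diagonal entry T[0,1] is cancelled; note it is NOT normalized.
alpha = T[0, 1] / (T[1, 1] - T[0, 0])
v2 = alpha * Z[:, 0] + Z[:, 1]
print(v1, v2)  # ~ [0.8944, 0.4472] and [-0.7454, 0.7454] (order/sign may vary)

# Invariant probability vector: rescale the eigenvector for eigenvalue 1
# to unit L1 norm (its components share a sign, so dividing by the plain
# sum is the same thing and also fixes a possible sign flip).
w = v1 if abs(T[0, 0] - 1.0) < abs(T[1, 1] - 1.0) else v2
pi = w / w.sum()
print(pi)                              # [0.6667, 0.3333] = (2/3, 1/3)
print(np.linalg.matrix_power(m, 100))  # every column -> (2/3, 1/3)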
12-27-2018, 03:58 PM
Post: #3
RE: Eigenvectors
(12-27-2018 12:58 PM)parisse Wrote: ... This also demonstrates that having too much confidence in Wolfram Alpha is not a good idea, it may look simpler, but it's not always appropriate (this is of course true for any CAS, but more true for those with an interface where everything is supposed to be done for you)

By extension, 'if a consumer of a product must somehow know not to place too much confidence in that product, just because it has an interface where everything is supposed to be done for you,' that says a lot about the product's suitability for purpose, doesn't it? Perhaps you're right, and by that reasoning it will never be possible to design a safe autonomous vehicle for use on public roads. In general, 'suitability for purpose can never be obtained,' can it? On the other hand, it just seems like a good thing when airlines have the latest technology, maximizing confidence that flights will be safe.

More directly: when the Prime CAS delivers results similar to those obtained by hand (and by competing products), confidence in the product is confirmed. As a perpetual student, when the results of eigenvects(a) are the same as those of eigenvects(exact(a)), less time is spent in frustration trying to reconcile the differences.
12-27-2018, 04:32 PM
Post: #4
RE: Eigenvectors
If you are a perpetual student, then you should at least consider my mathematical expertise and arguments before deciding that you are right and I am wrong. Let me illustrate with an example why approximate algorithms should not be the same as exact algorithms. Take a 30x30 random matrix a:=ranm(30,30) and compute the characteristic polynomial p:=charpoly(a). Of course you cannot solve this polynomial exactly, but even in approximate mode, look at the size of the coefficients, for example evalf(p[30]), and compare with the leading coefficient of p. How do you think one can compute the roots of this polynomial accurately? This is an ill-conditioned problem, and that is why numeric algorithms do not follow the same path as exact algorithms for eigenvalue/eigenvector computation. The same is true for many other algorithms.
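A quick Python sketch of the coefficient growth (the integer range used for the random entries is an assumption, standing in for ranm):

Code:
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for ranm(30,30): a random 30x30 integer matrix
# (the entry range here is an assumption for illustration).
a = rng.integers(-99, 100, size=(30, 30)).astype(float)

# Coefficients of the monic characteristic polynomial;
# np.poly builds it from the computed eigenvalues of a.
p = np.poly(a)

print(p[0])        # leading coefficient: 1.0
print(abs(p[-1]))  # |constant term| = |det(a)|: enormous, tens of
                   # orders of magnitude larger than the leading 1

With coefficients spanning so many orders of magnitude, even a relative perturbation of one unit in the last place of a coefficient moves the roots substantially, so extracting accurate eigenvalues from the characteristic polynomial is hopeless in floating point; numeric routines therefore work on the matrix itself instead.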
12-27-2018, 04:42 PM
Post: #5
RE: Eigenvectors
Interestingly, the HP 50 gets different answers:
Code:
Curious, since the CASs are basically the same.
12-27-2018, 04:50 PM
Post: #6
RE: Eigenvectors
The algorithms for numerical computation are different from those for symbolic computation.
If you enter a matrix with some entries in rational (exact) format and others in approximate format, the CAS may or may not use the same numerical algorithm; the exact entries are converted to approximate ones. It is better to have independent algorithms for each case.
12-27-2018, 04:51 PM
Post: #7
RE: Eigenvectors
The CASs are not the same: Giac is much more powerful than what I implemented on the HP49. In addition, I myself implemented some numeric algorithms that were already present in the 48g.
12-27-2018, 05:30 PM
Post: #8
RE: Eigenvectors
GIAC is light-years ahead of the HP50 CAS. =)
Thanks BP for developing GIAC and its UI (Xcas).
08-20-2021, 08:21 PM
Post: #9
RE: Eigenvectors
(12-27-2018 04:32 PM)parisse Wrote: ⋮

Very true! There's no end to the fun one can have designing and implementing algorithms. As well as considering the (ill- / well-)conditioning of a problem, one can also consider the arithmetics used in implementation (e.g., how round-off or range limits in an employed floating-point arithmetic will interplay with the numerical computations carried out by the algorithm).

When I was adding the Sketch feature to the Function Plot view, I had concerns over the use of the already-present floating-point arithmetics when solving 3x3 and 4x4 matrix equations. Since I had a full plate of things to work on, I added higher-precision signed-magnitude arithmetics (1 sign bit + 128 binary digits, and 1 sign bit + 160 binary digits, for just a few arithmetic operations) to rule out catastrophic cancellation in sums. The extended-precision arithmetics allowed for absolutely no round-off to be introduced until very near the end of calculation chains, where bounding it was entirely straightforward.
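As a minimal illustration of the cancellation being guarded against, in plain Python (not the Prime's wide signed-magnitude arithmetic): subtracting two nearly equal products in double precision can lose every correct digit, while carrying exact intermediates, here rationals standing in for the wider format, keeps the answer.

Code:
from fractions import Fraction

# All four values are exactly representable in double precision...
a, d = 1e8 + 1, 1e8 - 1
b = c = 1e8

# ...but both products round to the same float 1e16, so their
# difference vanishes entirely.
print(a * d - b * c)   # 0.0 (the true value is -1)

# Carrying exact intermediates until the final subtraction recovers it.
A, B, C, D = map(Fraction, (a, b, c, d))
print(A * D - B * C)   # -1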
08-21-2021, 07:48 AM
Post: #10
RE: Eigenvectors
DrD wrote:
Quote: a:=[[0.9,0.2], [0.1,0.8]]; Entries with approximate values

I do not see any problem here; all of the solutions are correct. If a vector v is an eigenvector, then c*v is also an eigenvector, for any real number c different from zero. The solution starting with 0.89 is normalized, the length of the eigenvector being 1. The solution starting with 2 is not normalized, but has the advantage of being an exact solution.

Best
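A quick numeric check (Python, for illustration) that the two answers differ only by such a scalar factor:

Code:
import numpy as np

v_exact = np.array([2.0, 1.0])                   # exact-mode answer
v_prime = np.array([0.894427191, 0.4472135955])  # approximate-mode answer

print(v_prime / v_exact)  # both ratios ~0.44721 = 1/sqrt(5):
                          # the same eigenvector, merely rescaled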