Commands exist to return the eigenvectors of a given matrix, but difficulties arise with them because of the difference between exact and approximate entries within a matrix.

exact(matrix) can be helpful here, but it adds a layer of confusion, delays productivity, and becomes a frustrating part of the Prime CAS.

For example, take this (Markov) matrix:

Code:

a:=[[0.9,0.2], [0.1,0.8]];      // Entries with approximate values

a1:=[[9/10,2/10], [1/10,8/10]]; // The same matrix with exact values

eigenvects(a); // ==> [[0.894427191,−0.7453559925],[0.4472135955,0.7453559925]]

// Using either of these:

eigenvects(exact(a));

eigenvects(a1); // ==> [[2,-1],[1,1]]; MUCH nicer to work with

Wolfram Alpha returns [[2,-1],[1,1]] in BOTH cases. That form is visibly easier to work with, and it compares directly with the result obtained by hand when manually deriving the eigenvectors.
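As an independent cross-check of the exact result, here is a sketch in plain Python (using only the standard library's fractions module; this is not the Prime's code, and the helper names are mine). For a 2x2 matrix the characteristic polynomial can be solved by hand, and the eigenvectors reduced to the smallest integer representatives:

```python
# Cross-check (not the Prime's algorithm): exact eigenvectors of the 2x2
# Markov matrix via its characteristic polynomial, using exact rationals.
from fractions import Fraction as F
from math import gcd

a = [[F(9, 10), F(2, 10)],
     [F(1, 10), F(8, 10)]]

# Characteristic polynomial of a 2x2 matrix: x^2 - tr*x + det
tr = a[0][0] + a[1][1]                       # 17/10
det = a[0][0] * a[1][1] - a[0][1] * a[1][0]  # 7/10

disc = tr * tr - 4 * det                     # 9/100, a perfect square here
root = F(3, 10)                              # exact square root of 9/100
eigenvalues = [(tr + root) / 2, (tr - root) / 2]   # [1, 7/10]

def lcm(p, q):
    return p * q // gcd(p, q)

def eigenvector(lam):
    # Solve (a - lam*I) v = 0: v = (a01, lam - a00), then clear
    # denominators and divide by the gcd to get integer entries.
    x, y = a[0][1], lam - a[0][0]
    m = lcm(x.denominator, y.denominator)
    xi, yi = int(x * m), int(y * m)
    g = gcd(abs(xi), abs(yi))
    return [xi // g, yi // g]

print([eigenvector(lam) for lam in eigenvalues])
```

This prints [[2, 1], [1, -1]]; eigenvectors are only determined up to a nonzero scale, so [1, -1] and the [-1, 1] reported by the CAS are the same eigenvector.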

Comments?

-Dale-

That's because the algorithms are not the same. If you compute the eigenvectors of an approximate matrix, they are deduced from the Schur factorization SCHUR(m), not by factoring the characteristic polynomial. The first matrix P returned by SCHUR is unitary; its first column is an eigenvector with Euclidean norm 1. The second eigenvector is deduced from a linear combination of both column vectors of P that cancels the off-diagonal coefficient of the second matrix returned by SCHUR (and it is not normalized).
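The Schur-based construction described above can be traced by hand for this 2x2 case. The following is a sketch in plain Python (not the actual SCHUR implementation; here the known unit eigenvector (2,1)/sqrt(5) is taken as P's first column rather than computed numerically), and it reproduces both the normalized first eigenvector and the unnormalized second one shown earlier:

```python
# Sketch of the numeric path for the 2x2 example: build a unitary P whose
# first column is a unit eigenvector, form the triangular T = P^T a P,
# then combine P's columns to cancel T's off-diagonal entry.
from math import sqrt

a = [[0.9, 0.2], [0.1, 0.8]]

s5 = sqrt(5.0)
p1 = [2 / s5, 1 / s5]       # unit eigenvector for eigenvalue 1 (assumed known)
p2 = [-1 / s5, 2 / s5]      # orthogonal complement, so P = [p1 p2] is unitary

def matvec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# Entries of the upper-triangular Schur factor T = P^T a P
ap1, ap2 = matvec(a, p1), matvec(a, p2)
t11 = dot(p1, ap1)          # ~ 1.0 (the first eigenvalue)
t12 = dot(p1, ap2)          # ~ 0.1 (the off-diagonal coefficient)
t22 = dot(p2, ap2)          # ~ 0.7 (the second eigenvalue)

# Second eigenvector w = alpha*p1 + p2 with a*w = t22*w, which requires
# alpha*t11 + t12 = t22*alpha, i.e. alpha = t12 / (t22 - t11).
alpha = t12 / (t22 - t11)
w = [alpha * p1[0] + p2[0], alpha * p1[1] + p2[1]]

print(p1)   # ~ [0.894427191, 0.4472135955]
print(w)    # ~ [-0.7453559925, 0.7453559925], not normalized
```

The printed values match the eigenvects(a) output quoted at the top of the thread: a unit-norm first eigenvector and an unnormalized second one.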

If there is something to improve here mathematically speaking, it is certainly not converting to exact like Wolfram Alpha does. It would be to normalize the first eigenvector (the one corresponding to eigenvalue 1) with respect to the L1 norm (sum of absolute values), to get the invariant probability vector (2/3., 1/3.). You can check this by raising your matrix to a high power, like m^100. You can also get it by solving m*v=v if you know that m is a stochastic matrix (the best way to do that, when m is approximate, is to iterate).

This also demonstrates that having too much confidence in Wolfram Alpha is not a good idea: it may look simpler, but it is not always appropriate. (This is of course true for any CAS, but more so for those with an interface where everything is supposed to be done for you.)

(12-27-2018 12:58 PM)parisse Wrote: ... This also demonstrates that having too much confidence in Wolfram Alpha is not a good idea, it may look simpler, but it's not always appropriate (this is of course true for any CAS, but more true for those with an interface where everything is supposed to be done for you)

By extension: 'If a consumer of a product must somehow know not to place too much confidence in that product, just because it has an interface where everything is supposed to be done for you,' that says a lot about the product's suitability for purpose, doesn't it?

Perhaps you're right, and by that reasoning, it will never be possible to design a safe autonomous vehicle to be used on public roads. In general, 'suitability for purpose can never be obtained,' can it? On the other hand, it just seems like a good thing when airlines have the latest technology, maximizing confidence that flights will be safe.

More directly, when the Prime CAS delivers results similar to those obtained by hand (and by competing products), confidence in the product is confirmed. As a perpetual student, when eigenvects(a) and eigenvects(exact(a)) return the same result, less frustration is spent trying to reconcile differences.

If you are a perpetual student, then you should at least consider my mathematical expertise and arguments before deciding that you are right and I am wrong. Let me illustrate with an example why approximate algorithms should not be the same as exact algorithms. Take a 30x30 random matrix a:=ranm(30,30) and compute the characteristic polynomial p:=charpoly(a). Of course you cannot solve this polynomial exactly, but even in approximate mode, look at the size of the coefficients (for example evalf(p[30])) and compare with the leading coefficient of p. How do you think one can compute the roots of this polynomial accurately? This is an ill-conditioned problem, and that is why numeric algorithms do not follow the same path as exact algorithms for eigenvalue/eigenvector computation. The same is true of many other algorithms.
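The conditioning argument can be reproduced outside the CAS. Below is a sketch in plain Python (ranm and charpoly are Xcas functions; here a seeded random integer matrix and the Faddeev-LeVerrier recurrence stand in for them, under the assumption that ranm produces small integer entries). The characteristic polynomial is computed exactly, and the constant coefficient turns out to be tens of digits long while the leading coefficient is just 1:

```python
# Illustrating the ill-conditioning: exact characteristic polynomial of a
# random 30x30 integer matrix via the Faddeev-LeVerrier recurrence.
import random

random.seed(42)                   # fixed seed so the run is reproducible
n = 30
a = [[random.randint(-99, 99) for _ in range(n)] for _ in range(n)]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(x):
    return sum(x[i][i] for i in range(n))

# Faddeev-LeVerrier: p(x) = x^n + c[1]*x^(n-1) + ... + c[n]
ident = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
m = [row[:] for row in ident]
coeffs = [1]                      # leading coefficient of x^n
for k in range(1, n + 1):
    am = matmul(a, m)
    c = -trace(am) // k           # exact: the trace is divisible by k here
    coeffs.append(c)
    m = [[am[i][j] + (c if i == j else 0) for j in range(n)]
         for i in range(n)]

print(coeffs[0])                        # leading coefficient: 1
print(len(str(abs(coeffs[-1]))))        # constant term: tens of digits
```

With coefficients spanning that many orders of magnitude, computing the polynomial's roots accurately in floating point is hopeless, which is the point: the numeric eigenvalue path (Schur) avoids ever forming this polynomial.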

Interestingly, the HP 50 gets different answers:

Code:

Approx. mode:
[[  1.   1. ]
 [  .5  -1. ]]

Exact mode:
[[ 1/5   1/5 ]
 [ 1/10 -1/5 ]]

Curious, since the CASs are basically the same.

The algorithms used for numerical computation are different from those used for symbolic computation.

If you enter a matrix with some entries in rational format and others in approximate format, the CAS may or may not use the numerical algorithm; the exact entries are then treated as approximate. It is better to have independent algorithms for each case.

The CASs are not the same. Giac is much more powerful than what I implemented on the HP49. In addition, I myself implemented some numeric algorithms that were already present in the 48G.

GIAC is light years away from the HP50-CAS. =)

Thanks BP, for developing GIAC and its UI (Xcas)