# HP Forums

Full Version: [VA] SRC#003- New Year 2019 Special
You're currently viewing a stripped down version of our content. View the full version with proper formatting.
Pages: 1 2
.
Hi, all:

Welcome to my brand new SRC#003, this time commemorating the New Year 2019.

There will be some (hopefully) interesting things discussed here but first of all there's a simple assignment for you to tackle with your preferred HP calculator (not Excel, Matlab, Mathematica, Python, Java, Haskell, Wolfram Alpha, etc., there are other forums/threads for that), namely:

Write a program which implements the following procedure:

1) Create this 3x3 matrix M:

$$M=\begin{bmatrix} \pi & 2019 & 2019\\ 1 & \pi & 2019\\ 1 & 1 & \pi \end{bmatrix}$$

2) Compute the successive powers of M (i.e.: M^2, M^3, etc.) and for each power M^n compute the value of M^n(1,3) / M^n(2,3) until it converges to some limit.

What is the numerical value of this limit ?   What do you think this procedure is actually computing ?

You can use any HP calculator of your choice (Minimum Recommended Model would be the HP-11C or better) but, again, write your code only in RPN, RPL or 71-BASIC, please, and it would be better if you don't peruse the Internet, just your programming skills and math intuition.

In a few days I'll give a 5-line, 144-byte program for the HP-71B (easiest code to understand), plus extensive comments and further discussion.

Meanwhile, let's see what you come up with.

V.
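For readers who want to cross-check results off-calculator, the procedure sketches in a few lines of plain Python (a verification aid only, since the challenge itself asks for RPN/RPL/71-BASIC; the helper name `matvec` and the rescaling step are mine, not part of the assignment):

```python
import math

N = 2019

# The matrix from the assignment: pi on the diagonal, N above it, 1 below it.
M = [[math.pi, N,       N],
     [1.0,     math.pi, N],
     [1.0,     1.0,     math.pi]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Iterating M on (0,0,1)^T tracks the last column of M^n, so the ratio of
# the first two components equals M^n(1,3)/M^n(2,3).
v = [0.0, 0.0, 1.0]
ratio, last = 0.0, None
for _ in range(500):
    v = matvec(M, v)
    s = max(abs(x) for x in v)
    v = [x / s for x in v]          # rescale to avoid overflow
    ratio = v[0] / v[1]
    if ratio == last:
        break
    last = ratio

print(ratio)   # converges to about 12.6389823194
```

The rescaling by the largest component does not change the ratio but keeps the entries from overflowing, which the calculator solutions below also have to deal with.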
.
I gain insight.

Pauli
Hello all,

After just silently reading these challenges for a while, I have decided to take a stab at this problem using UserRPL on my 50g. First, I store the array in a global variable named 'M'. Then I execute the following code.

Code:
SPOILERS!!!

<< -3. SF -105. SF
   0. 'R1' STO 0. 'N' STO
   0. 0.
   DO
     DROP 'R1' STO
     1. 'N' STO+
     M N ^ DUP { 1. 3. } GET SWAP { 2. 3. } GET /
     'R2' STO
   UNTIL
     R2 R1 == LASTARG ROT
   END
>>

It took my 50g 239.15 seconds to complete, getting a result of 12.6389823195. I believe that this is correct.

However, I don't have an answer to the more important question of "What do you think this procedure is actually computing?"
12.63898231939055286048607387871531

Code:
SPOILERS!!!

...which is the cube root of 2019

It's late and I'm tired, I'll try to work out the "why" tomorrow. :)

My program (assumes Q and R are initialized to the matrix in question):

00 { 29-Byte Prgm }
01▸LBL 00
02 RCL "Q"
03 STO× "R"
04 INDEX "R"
05 1
06 3
07 STOIJ
08 RCLEL
09 I+
10 RCLEL
11 ÷
12 ENTER
13 X<> 00
14 X≠Y?
15 GTO 00
16 END

Trying the exercise with a 4x4 matrix and looking at the top two elements of the 4th column, the ratio converges to the 4th root of 2019. I bet there's a pattern there, and it has something to do with the eigenvalues of M...
The vector

$$e=\begin{bmatrix} u^2\\ u\\ 1 \end{bmatrix}$$

is an eigenvector of the matrix

$$M=\begin{bmatrix} a & u^3 & u^3\\ 1 & a & u^3\\ 1 & 1 & a \end{bmatrix}$$

with eigenvalue $$\lambda=u^2+u+a$$ since

$$\begin{bmatrix} a & u^3 & u^3\\ 1 & a & u^3\\ 1 & 1 & a \end{bmatrix} \begin{bmatrix} u^2\\ u\\ 1 \end{bmatrix}= \begin{bmatrix} au^2+u^4+u^3\\ u^2+au+u^3\\ u^2+u+a \end{bmatrix}= (u^2+u+a) \begin{bmatrix} u^2\\ u\\ 1 \end{bmatrix}$$

For the given values $$a=\pi$$ and $$u^3=2019$$, $$\lambda$$ appears to be the eigenvalue of largest modulus.
Thus the vector $$M^nv$$ will eventually converge to a multiple of the eigenvector $$e$$ for every vector $$v\neq 0$$.
Since we look only at values in the last column of $$M^n$$ we can just as well calculate:

$$M^n\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$$

Thus the ratio

$$\frac{M^n_{1,3}}{M^n_{2,3}}$$

converges to the value

$$\frac{e_1}{e_2}=\frac{u^2}{u}=u$$

For the given value $$u^3=2019$$ this means $$u=\sqrt[3]{2019}\doteq 12.63898232$$.
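The eigenvector identity above is easy to check numerically. A minimal Python sketch (variable names are mine; this just multiplies out $$Me$$ and compares it with $$\lambda e$$):

```python
import math

# a = pi on the diagonal, u chosen so that u**3 = 2019.
a = math.pi
u = 2019 ** (1 / 3)
lam = u * u + u + a              # claimed eigenvalue u^2 + u + a

M = [[a,    u**3, u**3],
     [1.0,  a,    u**3],
     [1.0,  1.0,  a]]
e = [u * u, u, 1.0]              # claimed eigenvector (u^2, u, 1)^T

Me = [sum(m * x for m, x in zip(row, e)) for row in M]
ok = all(abs(me - lam * x) < 1e-9 * lam for me, x in zip(Me, e))
print(ok)   # True: M e agrees with lambda e to rounding error
```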

Here's a program for the HP-48GX:
Code:
«   @ ( n M - ratio )
   [ 0 0 1 ] ROT
   1 SWAP START
     OVER SWAP *
   NEXT
   SWAP DROP
   DUP 1 GET
   SWAP 2 GET /
»

Example:

With the given matrix M in a variable:

200 M

12.6389823194

Or then using what we know from above:

M EGV DROP { 2 1 } GET INV
(12.6389823194,0)

This works since the first component $$e_1$$ of the eigenvectors returned by EGV appears to be normalized to 1.

Cheers
Thomas
(01-16-2019 04:24 AM)Thomas Okken Wrote: [ -> ]
Code:
SPOILERS!!!

Trying the exercise with a 4x4 matrix and looking at the top two elements of the 4th column, the ratio converges to the 4th root of 2019. I bet there's a pattern there, and it has something to do with the eigenvalues of M...

A similar calculation can be done:

$$\begin{bmatrix} a & u^4 & u^4 & u^4\\ 1 & a & u^4 & u^4\\ 1 & 1 & a & u^4\\ 1 & 1 & 1 & a \end{bmatrix} \begin{bmatrix} u^3\\ u^2\\ u\\ 1 \end{bmatrix}= \begin{bmatrix} au^3+u^6+u^5+u^4\\ u^3+au^2+u^5+u^4\\ u^3+u^2+au+u^4\\ u^3+u^2+u+a \end{bmatrix}= (u^3+u^2+u+a) \begin{bmatrix} u^3\\ u^2\\ u\\ 1 \end{bmatrix}$$

Cheers
Thomas
Not being well-versed in the mathematics behind this, my attempt was simply a naïve/brute force loop to compare successive results until the first duplicate was encountered (50g/RPL):
Code:
\<<
   [['\pi' 2019. 2019.][1. '\pi' 2019.][1. 1. '\pi']] \->NUM
   DUP
   0.
   DO
      UNROT
      OVER *
      DUP 3. GET
      OVER 6. GET
      /
      4. ROLL
   UNTIL
      OVER SAME
   END
   NIP NIP
\>>
Result: 12.6389823194
Execution time: about 12s on a real 50g.
Size: 146.5 bytes, of which 84 bytes is spent building the initial matrix. If I take that part out and require the initial matrix being left on the stack as an argument, the size drops to 62.5 bytes.

A similar approach using Thomas' first optimization:
Code:
\<<
   [['\pi' 2019. 2019.][1. '\pi' 2019.][1. 1. '\pi']] \->NUM
   0. DUP 1. \->V3
   0.
   DO
      UNROT
      OVER SWAP *
      DUP OBJ\-> DROP2
      /
      4. ROLL
   UNTIL
      OVER SAME
   END
   NIP NIP
\>>
Result: 12.6389823194
Execution time: about 8.5s on a real 50g
Size: 149 bytes (with embedded matrix build), 65 bytes (initial matrix as argument)

All of which pales in comparison, of course, to Thomas' ultimate eigenvalue solution. If I assume the initial matrix is on the stack as an argument, the following code gives the result in the same format as the other routines (I optimized the retrieval of the needed matrix element as well as adding "RE" to standardize the result):
Code:
\<<    EGV DROP 4. GET INV RE \>>
Result: 12.6389823194
Execution time: about 3.8s on a real 50g
Size: 28 bytes

Great job Thomas! (as usual). And thanks to Valentin for an interesting problem -- I hope to see more about the underlying concepts. I think this is the first time I've actually encountered eigenvalues being used outside of a Linear Algebra course many moons ago. I've forgotten most everything I was supposed to have learned in that class, I'm afraid.
(01-15-2019 10:42 PM)Valentin Albillo Wrote: [ -> ]Minimum Recommended Model would be the HP-11C or better

This program is for the HP-11C:
Code:
001-   42 34    CLEAR REG
002-   44 25    STO I
003-      33    R↓
004-    44 0    STO 0
005-       1    1
006-    44 3    STO 3
007- 42,21, 0   ▸LBL 0
008-    45 3    RCL 3
009-    45 0    RCL 0
010-      20    ×
011-   42 16    π
012- 44,20, 3   STO× 3
013-    45 2    RCL 2
014- 44,40, 3   STO+ 3
015-      20    ×
016-    44 2    STO 2
017-      33    R↓
018- 44,40, 2   STO+ 2
019-   43 36    LSTx
020-    45 0    RCL 0
021-      20    ×
022-      40    +
023-    45 1    RCL 1
024- 44,40, 2   STO+ 2
025- 44,40, 3   STO+ 3
026-   42 16    π
027-      20    ×
028-      40    +
029- 44,10, 2   STO÷ 2
030- 44,10, 3   STO÷ 3
031-       1    1
032-    44 1    STO 1
033-    42 5    DSE
034-    22 0    GTO 0
035-    45 2    RCL 2
036-      15    1/x

Example:

2019
ENTER
200
R/S

(this may take some time …)

12.63898232

Registers:

0: 2019 (or whatever you entered)
1: x
2: y
3: z

Starting with:

$$\begin{bmatrix} x\\ y\\ z \end{bmatrix}=\begin{bmatrix} 0\\ 0\\ 1 \end{bmatrix}$$

The following matrix multiplication is iterated:

$$\begin{bmatrix} x\\ y\\ z \end{bmatrix}\rightarrow \begin{bmatrix} \pi & 2019 & 2019\\ 1 & \pi & 2019\\ 1 & 1 & \pi \end{bmatrix}\cdot\begin{bmatrix} x\\ y\\ z \end{bmatrix}$$

The resulting vector is reduced in lines 29-31 to avoid overflow.

$$\begin{bmatrix} x\\ y\\ z \end{bmatrix}\rightarrow \begin{bmatrix} 1\\ \frac{y}{x}\\ \frac{z}{x} \end{bmatrix}$$

Cheers
Thomas
The translation for the HP-67 is straightforward:
Code:
001: 31 43     :    CL REG
002: 35 33     :    ST I
003: 35 53     :    R↓
004: 33 00     :    STO 0
005: 01        :    1
006: 33 03     :    STO 3
007: 31 25 00  :   ▸LBL 0
008: 34 03     :    RCL 3
009: 34 00     :    RCL 0
010: 71        :    ×
011: 35 73     :    π
012: 33 71 03  :    STO× 3
013: 34 02     :    RCL 2
014: 33 61 03  :    STO+ 3
015: 71        :    ×
016: 33 02     :    STO 2
017: 35 53     :    R↓
018: 33 61 02  :    STO+ 2
019: 35 82     :    LST x
020: 34 00     :    RCL 0
021: 71        :    ×
022: 61        :    +
023: 34 01     :    RCL 1
024: 33 61 02  :    STO+ 2
025: 33 61 03  :    STO+ 3
026: 35 73     :    π
027: 71        :    ×
028: 61        :    +
029: 33 81 02  :    STO÷ 2
030: 33 81 03  :    STO÷ 3
031: 01        :    1
032: 33 01     :    STO 1
033: 31 33     :    DSZ
034: 22 00     :    GTO 0
035: 34 02     :    RCL 2
036: 35 62     :    1/x

Example:

2019
ENTER↑
200
R/S

(lots of blinkenlichten …)

12.63898232

Cheers
Thomas
(01-18-2019 05:04 AM)Thomas Klemm Wrote: [ -> ]This program is for the HP-11C:
...
(this may take some time …)

12.63898232

2019/200 completed in about 476 seconds on my 11C.
Looking at the eigenvalues of the matrix $$M$$:

M
EGVL

[ (175.524449043,0) (-83.049835541,127.396573277) (-83.049835541,-127.396573277) ]

or rather at their absolute values:

[ 175.524449043, 152.076171921, 152.076171921 ]

We can estimate the number of iterations $$n$$ needed for a 10-digit calculator like the HP-11C to return the exact value from:

$$\left (\frac{152.076171921}{175.524449043} \right )^n = 10^{-10}$$

This leads to:

$$n=\frac{-10}{\log_{10} \left (\frac{152.076171921}{175.524449043} \right )}\approx 160.5744$$

Or then for a 12-digit calculator like the HP-48GX to:

$$n=\frac{-12}{\log_{10} \left (\frac{152.076171921}{175.524449043} \right )}\approx 192.6892$$
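The same estimate reproduces in a few lines of Python (purely a cross-check; the eigenvalue moduli are taken from the EGVL output above):

```python
import math

# Dominant eigenvalue and one of the complex-conjugate pair, from EGVL.
lam1 = 175.524449043
lam2 = complex(-83.049835541, 127.396573277)
r = abs(lam2) / lam1            # per-iteration decay of the error term

n10 = -10 / math.log10(r)       # iterations for a 10-digit machine
n12 = -12 / math.log10(r)       # iterations for a 12-digit machine
print(round(n10, 4), round(n12, 4))   # ~160.57 and ~192.69
```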

(01-18-2019 07:15 PM)DavidM Wrote: [ -> ]2019/200 completed in about 476 seconds on my 11C.

Using 160 instead of 200 would take about 380 seconds.

Cheers
Thomas
This program is for the HP-25:
Code:
01: 14 33     :    CLEAR REG
02: 23 04     :    STO 4
03: 22        :    R↓
04: 23 00     :    STO 0
05: 01        :    1
06: 23 03     :    STO 3
07: 24 03     :    RCL 3
08: 24 00     :    RCL 0
09: 61        :    ×
10: 15 73     :    π
11: 23 61 03  :    STO× 3
12: 24 02     :    RCL 2
13: 23 51 03  :    STO+ 3
14: 61        :    ×
15: 23 02     :    STO 2
16: 22        :    R↓
17: 23 51 02  :    STO+ 2
18: 14 73     :    LSTx
19: 24 00     :    RCL 0
20: 61        :    ×
21: 51        :    +
22: 24 01     :    RCL 1
23: 23 51 02  :    STO+ 2
24: 23 51 03  :    STO+ 3
25: 15 73     :    π
26: 61        :    ×
27: 51        :    +
28: 23 71 02  :    STO÷ 2
29: 23 71 03  :    STO÷ 3
30: 01        :    1
31: 23 01     :    STO 1
32: 23 41 04  :    STO- 4
33: 24 04     :    RCL 4
34: 15 61     :    x≠0
35: 13 07     :    GTO 07
36: 24 02     :    RCL 2
37: 15 22     :    1/x

Example:

2019
ENTER↑
200
R/S
12.63898232

Cheers
Thomas
This program is for the HP-29C:
Code:
01: 14 33    :    CLEAR REG
02: 23 00    :    STO 0
03: 22       :    R↓
04: 23 04    :    STO 4
05: 01       :    1
06: 23 03    :    STO 3
07: 15 13 00 :   ▸LBL 0
08: 24 03    :    RCL 3
09: 24 04    :    RCL 4
10: 61       :    ×
11: 15 73    :    π
12: 23 61 03 :    STO× 3
13: 24 02    :    RCL 2
14: 23 51 03 :    STO+ 3
15: 61       :    ×
16: 23 02    :    STO 2
17: 22       :    R↓
18: 23 51 02 :    STO+ 2
19: 14 73    :    LAST x
20: 24 04    :    RCL 4
21: 61       :    ×
22: 51       :    +
23: 24 01    :    RCL 1
24: 23 51 02 :    STO+ 2
25: 23 51 03 :    STO+ 3
26: 15 73    :    π
27: 61       :    ×
28: 51       :    +
29: 23 71 02 :    STO÷ 2
30: 23 71 03 :    STO÷ 3
31: 01       :    1
32: 23 01    :    STO 1
33: 15 23    :    DSZ
34: 13 00    :    GTO 0
35: 24 02    :    RCL 2
36: 15 74    :    1/x

Registers:

0: looping counter
1: x
2: y
3: z
4: 2019 (or whatever you entered)

Example:

2019
ENTER↑
200
R/S
12.63898232

Cheers
Thomas

Hi all:

First of all, many thanks to all of you who contributed your various RPN/RPL solutions and valuable comments. As Thomas Okken suspected and Thomas Klemm explained, the reason this procedure works and converges to the cubic root of 2019 has everything to do with the eigenvalues of the matrix M.

My original code for the HP-71B (easiest to understand), which exactly follows the steps given in my OP (i.e.: computing the powers of M instead of repeatedly multiplying by a vector) is the following 5-line, 144-byte snippet of code, which also keeps count of the number of iterations needed:

1   DESTROY ALL @ OPTION BASE 1 @ DIM M(3,3),B(3,3)
2   DATA PI,2019,2019,1,PI,2019,1,1,PI
3   READ M @ MAT B=M @ R=0 @ I=0
4   REPEAT @ MAT B=B*M @ L=R @ R=B(1,3)/B(2,3)
5   I=I+1 @ UNTIL L=R @ DISP I;R;R^3

>RUN

183      12.6389823194      2019

so after 183 iterations the limit is found to be 12.6389823194, which is 2019^(1/3), the cubic root of 2019. Now for a few comments:

The procedure can be generalized in many ways. For instance:

1) My example used Pi in the main diagonal just for aesthetics, but the procedure will actually converge for other positive values K in the main diagonal, always resulting in the same limit but greatly affecting the number of iterations needed for convergence. For instance:

K  Iterations
1      198
2      200
Pi     183
10     130
20      83
40      51
80      39
120     34
160     33
200     37

As you may see in the table above, the value of K which results in the lowest number of iterations seems to be around 120-140. In fact, the theoretically optimum value for K in the main diagonal, which results in the minimum number of iterations to converge for a given number N (2019 in my OP), is:

K = (N^(1/3)+N) / (N^(1/3)+1)

which for N=2019 would be

(2019^(1/3)+2019)/(2019^(1/3)+1) = 148.958253244

thus placing, say, 148 or 149 in the main diagonal instead of Pi will result in the lowest number of iterations needed, about 33-34 instead of the 183 needed when K=Pi.
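The claimed optimum K is easy to verify numerically (Python, as a cross-check only):

```python
# Optimal diagonal value K = (N^(1/3) + N) / (N^(1/3) + 1) for N = 2019.
N = 2019
u = N ** (1 / 3)
K = (u + N) / (u + 1)
print(K)   # ~148.958253244, between the K=120 and K=160 rows of the table
```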

2) The procedure will converge for numbers N other than 2019, be they integer, real (or even complex !), and the limit will be N^(1/3), the cubic root of N. For instance:

- using  K = 1, N = 5:

      | 1   5   5 |
M  =  | 1   1   5 |
      | 1   1   1 |

will converge in 25 iterations to 1.70997594668 = 5^(1/3), the cube root of 5

- using K = 2, N = 2:

      | 2   2   2 |
M  =  | 1   2   2 |
      | 1   1   2 |

will converge in 14 iterations to 1.25992104989 = 2^(1/3), the cube root of 2.

- using K = 1, N = 1 + 2 i: (remember to define M, B, L and R as COMPLEX)

      | 1    (1,2)  (1,2) |
M  =  | 1     1     (1,2) |
      | 1     1      1    |

will converge in 23 iterations to 1.21961650797 + 0.471711267789 i = (1 + 2 i)^(1/3), the cube root of 1 + 2 i.

3) Though my OP specified the ratio B(1,3)/B(2,3), the procedure will converge using many other ratios. For instance:

B(1,1)/B(2,1), B(2,1)/B(3,1), B(1,2)/B(2,2), B(2,2)/B(3,2), ..., B(3,3)/B(3,2), B(3,2)/B(3,1)

will converge to N^(1/3), the cubic root of N, while the ratios:

B(1,1)/B(3,1), ... , B(3,3)/B(3,1)

will converge to N^(2/3), the square of the cubic root of N (or the cubic root of the square of N, your choice)

4) My OP used a 3x3 matrix and the limit was the cube root of N, but using DxD matrices with the same pattern will make the various ratios converge to N^(1/D), N^(2/D), N^(3/D), ..., i.e., the Dth root of N and its powers.
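Generalization 4) can be checked with a short Python sketch (the helper name `dth_root` and the fixed iteration count are mine, not from the thread):

```python
def dth_root(N, D, K=1.0, iters=600):
    """Power iteration on the DxD pattern matrix: K on the diagonal,
    N above it, 1 below it. The ratio of the first two components of
    M^n (0,...,0,1)^T converges to N**(1/D)."""
    M = [[K if i == j else (N if j > i else 1.0) for j in range(D)]
         for i in range(D)]
    v = [0.0] * (D - 1) + [1.0]
    for _ in range(iters):
        v = [sum(m * x for m, x in zip(row, v)) for row in M]
        s = max(abs(x) for x in v)
        v = [x / s for x in v]      # rescale to avoid overflow
    return v[0] / v[1]

print(dth_root(2019, 3))   # ~12.639, the cube root of 2019
print(dth_root(2019, 4))   # ~6.703, the 4th root of 2019
```

Taking v[1]/v[2] or v[0]/v[2] instead would give the other powers N^(1/D), N^(2/D), ..., matching point 3) above.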

5) Besides computing cubic roots, the procedure can also be made to converge to the root of a given cubic equation of a particular form. For instance:

Find a root of:    x^3 - 1.2*x - 2.1 = 0

We'll first create the following 3x3 initial matrix M:

      | K   P   Q |
M  =  | 1   K   0 |
      | 0   1   K |

where we'll use K=1, P=1.2, Q=2.1, so the initial matrix will be:

      | 1   1.2   2.1 |
M  =  | 1   1     0   |
      | 0   1     1   |

which converges in 24 iterations to 1.58816816249, which is the real root of the given cubic equation. This can be checked like this:

>FNROOT(1,2,FVAR^3-1.2*FVAR-2.1)

1.5881681625
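Point 5) can be cross-checked the same way in Python (a sketch only; the rescaling loop is mine, not Valentin's code):

```python
# Matrix from point 5) with K=1, P=1.2, Q=2.1, whose dominant eigenvector
# is (x^2, x, 1)^T for x a root of x^3 - P*x - Q = 0.
K, P, Q = 1.0, 1.2, 2.1
M = [[K,   P,   Q],
     [1.0, K,   0.0],
     [0.0, 1.0, K]]

v = [0.0, 0.0, 1.0]
for _ in range(300):
    v = [sum(m * x for m, x in zip(row, v)) for row in M]
    s = max(abs(x) for x in v)
    v = [x / s for x in v]          # rescale to avoid overflow
root = v[0] / v[1]
print(root)   # ~1.58816816249, the real root of x^3 - 1.2x - 2.1 = 0
```

The eigenvector claim follows by multiplying out: M·(x², x, 1)ᵀ = (Kx²+Px+Q, x²+Kx, x+K)ᵀ, which equals (x+K)·(x², x, 1)ᵀ exactly when x³ = Px + Q.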

6) As was the case for Dth roots, it's possible to create an initial DxD matrix to have the limit converge to a root of a given polynomial of degree D. The details are somewhat lengthy and will not be discussed here.

As a final observation, notice the fact that for integer N and K this procedure will produce rational approximations (integer numerator and denominator) that will converge arbitrarily close to the floating point real value of the Dth root using just integer additions and multiplications and nothing else: no subtractions, no divisions except the optional one at the end if you wish to convert the fraction to a floating point value. Same for polynomial equations with integer coefficients.

Thanks again to all of you for your interest and valuable contributions, much appreciated, keep them coming !

Regards.
V.

(01-20-2019 07:40 PM)Valentin Albillo Wrote: [ -> ]2) The procedure will converge for numbers N other than 2019, be they integer, real (or even complex !), and the limit will be N^(1/3), the cubic root of N.

Some N might not converge, say -1
(01-20-2019 09:28 PM)Albert Chan Wrote: [ -> ]
(01-20-2019 07:40 PM)Valentin Albillo Wrote: [ -> ]2) The procedure will converge for numbers N other than 2019, be they integer, real (or even complex !), and the limit will be N^(1/3), the cubic root of N.

Some N might not converge, say -1

Yes, I know, thanks for your post, Albert.

Funny, I thought that people would post some comments about the 6 generalizations and assorted details I gave in my latest, long, concluding post, but a full week has elapsed and, save for yours, no one posted a thing or seemed to notice. Go figure ...

Regards.
V.
.
I appreciate the content. I thought Thomas had covered most of it, but it turns out there were many more patterns and properties.

Then again, the less trivial the content, the less the chance to discuss it due to the needed effort to understand it. https://en.wikipedia.org/wiki/Law_of_triviality

"The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved."

For technical discussions one can change it so
"The time spent on any item of the agenda will be in inverse proportion to the knowledge and understanding effort involved."
(01-28-2019 11:56 AM)pier4r Wrote: [ -> ]"The time spent on any item of the agenda will be in inverse proportion to the knowledge and understanding effort involved."

+1
I PM'd Thomas last week asking how his iteration estimate works.
Sadly, I still don't understand the geometric intuition ...

Thomas Klemm Wrote:
Albert Chan Wrote: How is the above formula derived?

If the matrix $$M$$ is diagonalized the powers can be calculated easily as:

$$\begin{bmatrix} \lambda_1 & 0 & 0\\ 0 & \lambda_2 & 0\\ 0 & 0 & \lambda_3 \end{bmatrix}^n= \begin{bmatrix} \lambda_1^n & 0 & 0\\ 0 & \lambda_2^n & 0\\ 0 & 0 & \lambda_3^n \end{bmatrix}$$

Thus assuming the initial vector $$v$$ in a general position the ratio of the 2nd to the 1st coordinate after iterating $$n$$ times will be roughly:

$$\frac{\lambda_2^n}{\lambda_1^n}$$

We want that to be so small that it doesn't affect the display:

$$1+10^{-10}=1$$

For the given example I have a geometric intuition.
A vector is stretched in the direction of the 1st eigenvector by a factor of 175.524449043.
The other two eigenvectors define a plane; the projection onto it is rotated and stretched by 152.076171921.
Now you can ask how often you have to iterate until the stretching in the dominant direction is so much bigger that the rest becomes irrelevant.

I assumed you are familiar with the concepts of eigenvalue and eigenvectors from linear algebra.
Otherwise it's not possible to explain the formula.

HTH
Thomas
(01-28-2019 12:46 PM)Albert Chan Wrote: [ -> ]Sadly, I still don't understand the geometric intuition ...

Sorry about that.

Let's start with a vector

$$v=\begin{bmatrix} 1\\ 1 \end{bmatrix}$$

and a matrix

$$M=\begin{bmatrix} 3 & 0\\ 0 & 2 \end{bmatrix}$$

If we calculate $$M^n v$$ we end up with:

$$M^n v = \begin{bmatrix} 3 & 0\\ 0 & 2 \end{bmatrix}^n \begin{bmatrix} 1\\ 1 \end{bmatrix} = \begin{bmatrix} 3^n & 0\\ 0 & 2^n \end{bmatrix} \begin{bmatrix} 1\\ 1 \end{bmatrix} = \begin{bmatrix} 3^n\\ 2^n \end{bmatrix}$$

Thus the ratio of the coordinates is:

$$\frac{y}{x}=\frac{2^n}{3^n}=\left (\frac{2}{3} \right )^n$$

If we want to know when the $$y$$ component can be neglected compared to the $$x$$ component on a 12-digit calculator, we see that's when:

$$\frac{y}{x} < 10^{-12}$$

And therefore when:

$$\left (\frac{2}{3} \right )^n < 10^{-12}$$

Solve that for $$n$$ to get:

$$n > \frac{\log_{10} 10^{-12}}{\log_{10}\frac{2}{3}}=\frac{-12}{\log_{10}2-\log_{10}3}\approx 68.1465$$

Indeed:

2 ENTER 3 ÷ 69 yx
7.0746e-13

But of course we could get a bit closer by using 5e-12 instead of 1e-12.
This leads to $$n>64.1771$$ and indeed $$n=65$$ is enough:

2 ENTER 3 ÷ 65 yx
3.5815e-12
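The same arithmetic in Python (a cross-check of the keystroke results above):

```python
import math

# When does (2/3)^n drop below the 12-digit threshold 1e-12?
n = -12 / (math.log10(2) - math.log10(3))
print(round(n, 4))   # ~68.1465, so n = 69 is the first integer that works

print((2 / 3) ** 69)   # just below 1e-12
print((2 / 3) ** 68)   # still above 1e-12
```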

Using the eigenvectors as a basis just adds some coordinate transformations.
We don't know the details (or rather I ignored them) but for a general vector that shouldn't matter much.

But of course this calculation changes if we start with something like:

$$v=\begin{bmatrix} 1\\ 10^k \end{bmatrix}$$

for a big positive value of $$k$$.
But even that would just add $$k$$ to the result.

That's why I consider it only a rough estimate.

Does this make more sense?

Cheers
Thomas
Thomas Klemm Wrote: I assumed you are familiar with the concepts of eigenvalues and eigenvectors from linear algebra.

Maybe this video can give some geometric intuition:

Reference URLs
• HP Forums: https://www.hpmuseum.org/forum/index.php