
Could someone please expand on

Method 9. Lagrange interpolation (page 17) at

http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf

because the summation runs from $0$ to $n-1$, but the eigenvalues are defined from $1$ to $n$.

Also, is it true that this is an analytic solution? Is it practical for numerical methods? I have to compute $\exp(tA)$ for SEVERAL values of $t$, and it's expensive. I thought that if I could precompute the product part of the equation (in the paper), then for each $t$ I would only need to "plug and chug" the $\exp(\lambda_j t)$ part, which is really cheap.

Alternatively, I could do a symbolic calculation in $t$. Ultimately I have to compute $\exp(tA)v$, then plot the elements of $v$ against time.
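To make the "precompute once, plug in $\exp(\lambda_j t)$ per $t$" idea concrete, here is a minimal sketch of Method 9 in NumPy. This is my own illustration, not code from the paper: the function names are mine, and it assumes $A$ has $n$ distinct eigenvalues (the confluent / near-confluent case the later comments warn about will fail or lose accuracy here):

```python
import numpy as np

def lagrange_expm_factors(A):
    """Precompute the Lagrange spanning matrices A_j, assuming A has
    n distinct eigenvalues: A_j = prod_{k != j} (A - lam_k I) / (lam_j - lam_k)."""
    lam = np.linalg.eigvals(A)
    n = A.shape[0]
    I = np.eye(n)
    factors = []
    for j in range(n):
        Aj = I.copy()
        for k in range(n):
            if k != j:
                Aj = Aj @ (A - lam[k] * I) / (lam[j] - lam[k])
        factors.append(Aj)
    return lam, factors

def expm_from_factors(t, lam, factors):
    """exp(tA) = sum_j exp(lam_j * t) * A_j -- cheap for each new t."""
    return sum(np.exp(l * t) * F for l, F in zip(lam, factors))
```

The precomputation is the $O(n^4)$ part; each new $t$ then costs only $n$ scalar exponentials and a weighted sum of the stored matrices (which is the $n^3$-storage drawback quoted below).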

  • 0
    See [this](http://books.google.com/books?hl=en&id=S6gpNn1JmbgC&pg=PA4) for a discussion on the interpolation approach to defining matrix functions.2012-04-26
  • 2
    The sum should probably be from $j=1$ to $n$. Often eigenvalues are indexed starting with $0$, so the author might have switched up accidentally.2012-04-26
  • 0
    "Methods 9, 10, and 11 suffer on several accounts. They are $O(n^4)$ algorithms making them prohibitively expensive except for small $n$. If the spanning matrices $A_0,\cdots,A_{n-1}$ are saved, then storage is $n^3$ which is an order of magnitude greater than the amount of storage required by any “nonpolynomial” method..."2012-04-26
  • 0
    "...Furthermore, even though the formulas which define Methods 9, 10, and 11 have special form in the confluent case, we do not have a satisfactory situation. The “gray” area of near confluence poses difficult problems which are best discussed in the next section on decomposition techniques."2012-04-26
  • 0
    .... well, it seems like a stupid question, but what should I do then? My matrix is such that each diagonal element equals the negative sum of the other elements in its row (so one eigenvalue is always zero). I'm not sure how best to compute this, since it's the slowest part of my program. Would a Schur decomposition $A = QUQ^{-1}$ followed by exponentiation of the middle factor be best? The problem I have with this, though, is that I'd still have to repeat the exponentiation for every time value, and I have to check convergence!2012-04-26
  • 0
    Matrix exponentials are *always* expensive to compute. As already mentioned in that article, your best bet is either Padé approximation or Schur decomposition. You might want to consider using [Expokit](http://www.maths.uq.edu.au/expokit/).2012-04-26
  • 0
    If your matrix is large, you should look at Krylov methods (method 20 in the paper); this is what the Expokit package that J.M. refers to uses. If you do a (Schur) decomposition, then you do not need to re-compute the decomposition as you vary $t$; you only need to re-compute $\exp(tU)$. If your matrix is symmetric, then $U$ is diagonal, so $\exp(tU)$ is very easy to compute.2012-04-27
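Following the last comment, here is a minimal sketch of reusing one Schur decomposition across many values of $t$. The function name is mine; it uses SciPy's `schur` and `expm`, and for real $A$ the real Schur form gives $A = QTQ^{\mathsf T}$ with $Q$ orthogonal, so only $\exp(tT)$ of the (quasi-)triangular $T$ is recomputed per $t$:

```python
import numpy as np
from scipy.linalg import schur, expm

def expm_times_v_many_t(A, v, ts):
    """Compute exp(t*A) @ v for each t in ts, reusing one real Schur
    decomposition A = Q @ T @ Q.T. Only expm(t*T) is redone per t."""
    T, Q = schur(A)        # T (quasi-)upper triangular, Q orthogonal
    w = Q.T @ v            # transform v into the Schur basis once
    return [Q @ (expm(t * T) @ w) for t in ts]
```

If $A$ is symmetric, `numpy.linalg.eigh` gives a diagonal $T$ and `expm(t*T)` reduces to elementwise scalar exponentials, which is cheaper still.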
