Consider the quadratic form $x^T(B^TP + PB)x$, where $B = DA$, $D$ is diagonal with positive (or nonnegative) entries, and $A$ is Hurwitz. Let $Q$ be a symmetric positive definite matrix and let $P$ be the (positive definite) solution to the Lyapunov equation $A^TP + PA = -Q$. It is clear that if $D = I$ (the identity matrix), then $x^T(B^TP + PB)x = -x^TQx \le -\lambda_{min}(Q) \|x\|^2$. Is there any way to give bounds, possibly depending on $D$, when this is not the case? That is, I want to find some function of $D$ such that $x^T(B^TP + PB)x \le -f(D)\lambda_{min}(Q)\|x\|^2$. The matrix $A$ may even have some particular structure.
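(For concreteness, here is a small numerical sketch of the $D = I$ baseline in Python with NumPy/SciPy; the particular $A$ and $Q$ are arbitrary illustrations.)

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative Hurwitz A (eigenvalues -2 and -3) and a symmetric Q > 0.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# solve_continuous_lyapunov solves M X + X M^H = N; with M = A^T, N = -Q
# this yields the P satisfying A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

# For D = I: x^T (A^T P + P A) x = -x^T Q x <= -lambda_min(Q) ||x||^2.
lam_min = np.linalg.eigvalsh(Q).min()
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.standard_normal(2)
    assert x @ (A.T @ P + P @ A) @ x <= -lam_min * (x @ x) + 1e-9
```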
Quadratic bounds
1 Answer
Hm, well, I am not sure how much this helps, but it's the best I was able to come up with.
Let $D = I + \epsilon G$, where $G$ is another diagonal matrix and $\epsilon$ is some number. Let's keep it simple, with just real $G$.
$B = DA = A + \epsilon GA$
$B^T = A^T + \epsilon (GA)^T = A^T(I + \epsilon G)$, since $G$ is diagonal and hence symmetric.
$x^T(B^T P + PB)x = x^T (A^T P + PA) x + \epsilon x^T (A^T GP + PGA)x$
$= -x^T Q x + \epsilon x^T R x$, where $R = A^T GP + PGA$.
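Since $B$ is linear in $\epsilon$, this split is exact, not just first order. A quick check of the identity (a sketch in Python/SciPy; the randomly generated Hurwitz $A$ and the shift used to build it are just for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 4

# Illustrative Hurwitz A: shift a random matrix into the left half-plane.
M = rng.standard_normal((n, n))
A = M - (np.abs(np.linalg.eigvals(M).real).max() + 1.0) * np.eye(n)

Q = np.eye(n)
P = solve_continuous_lyapunov(A.T, -Q)       # A^T P + P A = -Q

G = np.diag(rng.uniform(-1.0, 1.0, size=n))  # diagonal G, mixed signs
eps = 0.3
B = (np.eye(n) + eps * G) @ A                # B = D A with D = I + eps G
R = A.T @ G @ P + P @ G @ A

# B^T P + P B = -Q + eps R holds exactly, since B is linear in eps.
assert np.allclose(B.T @ P + P @ B, -Q + eps * R)
```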
So, our new quadratic form is a sum of two different ones. Your original bound works for the $-x^T Q x$ term, but now we have a new quadratic form involving the matrix $R$. Comparing with the Lyapunov equation for $Q$ shows that $R$ is a modified version of $-Q$, with $G$ inserted between the $A$'s and $P$.
I am not 100% sure, but I am pretty sure the following holds:
$A$ is Hurwitz, but we'll stick to the simplest case: $A$ symmetric with strictly real, negative eigenvalues, so $A$ is negdef. $P$ is posdef. Thus, if $G$ is posdef, $A^T GP$ will be negdef; if $G$ is negdef, it will be posdef. This is just analogous to multiplying positive and negative numbers, or two negative numbers. Adding posdef matrices gives new posdef matrices, and the same holds for negdef matrices.
So, if $G$ is posdef, $R$ is negdef; if $G$ is negdef, $R$ is posdef. If $R$ is negdef, the new bound will be even lower than the old one you computed. If $R$ is posdef, it will be higher.
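Since I am not certain of this definiteness claim, here is a sketch (same assumed Python/SciPy setup, with random symmetric negdef $A$ and posdef diagonal $G$) that stress-tests it and prints any trial where $R$ fails to be negdef:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 4
for trial in range(200):
    # Random symmetric negative definite A (hence Hurwitz).
    S = rng.standard_normal((n, n))
    A = -(S @ S.T + 0.1 * np.eye(n))
    P = solve_continuous_lyapunov(A.T, -np.eye(n))  # A^T P + P A = -I
    G = np.diag(rng.uniform(0.1, 2.0, size=n))      # posdef diagonal G
    R = A.T @ G @ P + P @ G @ A                     # R is symmetric
    max_eig = np.linalg.eigvalsh(R).max()
    if max_eig >= 0:
        print(f"trial {trial}: R not negdef (max eigenvalue = {max_eig:.3g})")
```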
I haven't worked it out, but more general forms of $D$ could be accommodated by a higher-order expansion like:
$D = I + \epsilon G_1 + \epsilon^2 G_2 + \cdots$
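Carrying the same algebra through term by term (again using that $D$ is diagonal, hence symmetric) gives $B^TP + PB = (A^TP + PA) + \epsilon(A^TG_1P + PG_1A) + \epsilon^2(A^TG_2P + PG_2A) + \cdots = -Q + \epsilon R_1 + \epsilon^2 R_2 + \cdots$, where $R_i = A^TG_iP + PG_iA$, so each order contributes a quadratic form of the same shape as $R$ above.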
Unfortunately, this approach still appears to require explicit knowledge of $A$ (and hence $P$), which the original $D = I$ bound did not.
Hope I was able to help, or at least give some kind of nudge.