8

I came across this post lying dormant on some online forum. I am putting it here verbatim; it seems to me to be worth a lot.


By Prof. S. D. Agashe, IIT Bombay

(Source: Vector Calculus, by Durgaprasanna Bhattacharyya, University Studies Series, Griffith Prize Thesis, 1918, published by the University of Calcutta, India, 1920, 90 pp)

Chapter IV: The Linear Vector Function, article 15, p.24:

"The most general vector expression linear in $r$ can contain terms only of three possible types, $r$, $(a.r)b$ and $c\times r$, $a$, $b$, $c$ being constant unit vectors. Since $r$, $(a.r)b$ and $c\times r$ are in general non-coplanar,it follows from the theorem of the parallelepiped of vectors that the most general linear vector expression can be written in the form $\lambda . r + \mu (a.r)b + \nu (c\times r)$, where $\lambda, \mu, \nu$ are scalar constants".

Bhattacharyya does not prove this. Has anyone seen a similar result and its proof?

Bhattacharyya uses this to show that the divergence of the linear function is $3\lambda + \mu(a\cdot b)$ and that the curl is $\mu(a\times b) + 2\nu c$. He goes on to define the div and curl of a differentiable function as the div and curl of its (linear) derivative function. The div and curl of a linear function are defined in terms of certain surface integrals.
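For what it's worth, here is my own quick term-by-term check of those two formulas (not from the book), using the standard identities for divergence and curl:

$$
\nabla\cdot(\lambda r)=3\lambda,\qquad
\nabla\cdot\big(\mu(a\cdot r)b\big)=\mu\,(a\cdot b),\qquad
\nabla\cdot\big(\nu(c\times r)\big)=0,
$$
$$
\nabla\times(\lambda r)=0,\qquad
\nabla\times\big(\mu(a\cdot r)b\big)=\mu\,\nabla(a\cdot r)\times b=\mu\,(a\times b),\qquad
\nabla\times\big(\nu(c\times r)\big)=2\nu c.
$$

Summing the three terms gives the stated divergence and curl.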

I am excited about this result because it seems to provide an excellent route to div and curl, as Bhattacharyya himself remarks.

Sorry for a rather long and "technical" communication.

  • 1
    A cross product is more properly viewed as an antisymmetric matrix. An antisymmetric $n \times n$ matrix has $\frac{n(n-1)}{2}$ components. In 3 dimensions this is 3 and we can call it a (pseudo)vector, but in 4 dimensions it is 6. The general linear transformation in 3 dimensions needs 9 components, which you have, but in 4 dimensions you need 16. With 4 each from $r$ and $b$ and 6 from the "cross product" you are still 2 short. There is some discussion at http://en.wikipedia.org/wiki/Cross_product – 2011-03-01

2 Answers

6

The claim is true. Any $3\times3$ matrix can be expressed as $ A= \lambda I+ a b^T + B $ where $\lambda$ is real, $a$ and $b$ are 3-vectors and $B$ is skew (so that $Bx=c\times x$ for some vector $c$).

To prove this, choose an orthogonal matrix $Q$ that diagonalizes the symmetric part of $A$. Then $Q^TAQ=D+K$ where $D$ is diagonal and $K$ is skew. If the diagonal entries of $D$ are not all distinct, it is easy to write $D=\lambda I+\hat a \hat b^T$ and we finish as below. If the entries are all distinct, we can suppose that $Q$ was chosen so that the largest eigenvalue of $D$ comes first, the smallest second, and the middle last. Then for some positive $\mu$ and $\nu$, the matrix $D$ can be written
$$ D = \lambda I + \mu \begin{pmatrix} 1 & 0 & 0\cr 0 &-\nu^2&0\cr 0&0&0 \end{pmatrix} =\lambda I + \hat a\hat b^T+\hat K, $$
with
$$ \hat a= \mu\begin{pmatrix}1\cr \nu\cr0\end{pmatrix}, \quad \hat b= \begin{pmatrix} 1 \cr -\nu\cr 0 \end{pmatrix}, \quad \hat K=\mu\begin{pmatrix} 0&\nu&0\cr -\nu & 0&0\cr 0&0&0 \end{pmatrix}. $$
Let $a=Q\hat a$, $b=Q\hat b$, and $B=Q(\hat K+K)Q^T$, and you're done.
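As a sanity check, the construction can be carried out numerically. This is my own NumPy sketch of the steps above (variable names are mine), assuming the symmetric part has distinct eigenvalues, which holds almost surely for a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # an arbitrary real 3x3 matrix

# Split A into symmetric and skew parts, diagonalize the symmetric part.
S = (A + A.T) / 2
K_full = (A - A.T) / 2
eigvals, Qe = np.linalg.eigh(S)  # eigenvalues in ascending order

# Reorder columns: largest eigenvalue first, smallest second, middle last.
order = [2, 0, 1]
Q = Qe[:, order]
d = eigvals[order]
K = Q.T @ K_full @ Q  # skew part in the rotated frame

lam = d[2]                        # the middle eigenvalue plays the role of lambda
mu = d[0] - lam                   # positive: largest minus middle
nu = np.sqrt((lam - d[1]) / mu)   # positive: from middle minus smallest

a_hat = mu * np.array([1.0, nu, 0.0])
b_hat = np.array([1.0, -nu, 0.0])
K_hat = mu * np.array([[0.0, nu, 0.0], [-nu, 0.0, 0.0], [0.0, 0.0, 0.0]])

# Rotate back to the original frame.
a = Q @ a_hat
b = Q @ b_hat
B = Q @ (K_hat + K) @ Q.T

print(np.allclose(lam * np.eye(3) + np.outer(a, b) + B, A))  # True
print(np.allclose(B, -B.T))                                  # True: B is skew
```

The two checks confirm $A=\lambda I + ab^T + B$ with $B$ skew.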

  • 0
    I think I've got the proof of it. Still, any further reference is much appreciated. – 2011-03-02
1

I'm not sure it is such a good approach. The most general vector linear in $r$ is $Mr$ where $M$ is a $3\times 3$ matrix. The number of unfixed constants in your formula is 12, while you only need 9 in general. If you set $b=c$ it seems more sensible, assuming $r\times c\neq0$. Then it's essentially just the expansion of a vector in the basis $r$, $c$, $r\times c$.

The matrix $M$ can be divided into 3 parts: the trace part, a traceless symmetric part and an antisymmetric part:
$$ M = \frac13\mathrm{tr}(M)\,I + \frac12\left(M+M^t-\frac23\mathrm{tr}(M)\,I\right) + \frac12(M-M^t), $$
where $I$ is the identity matrix and ${}^t$ denotes transpose.
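This three-way split is easy to check numerically; here is a quick NumPy sketch (the matrix is arbitrary, names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))  # an arbitrary 3x3 matrix

I = np.eye(3)
trace_part = np.trace(M) / 3 * I              # (1/3) tr(M) I
sym_traceless = (M + M.T) / 2 - trace_part    # symmetric, trace removed
antisym = (M - M.T) / 2                       # antisymmetric part

print(np.allclose(trace_part + sym_traceless + antisym, M))  # True
print(np.isclose(np.trace(sym_traceless), 0.0))              # True
print(np.allclose(antisym, -antisym.T))                      # True
```

The three parts carry 1, 5 and 3 independent components respectively, accounting for all 9 entries of $M$.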

So, the terms in $M\cdot r$ correspond to the terms
$$ x r + y (r\cdot a)c + z (r\times c)\ . $$
You can see exactly how by just taking the gradient of both sides to get
$$ M = x\, I + y\, c a^t + z\, \varepsilon\cdot c, $$
where $\varepsilon$ is the totally antisymmetric tensor and $c a^t$ can be chosen to be traceless, i.e. $a \cdot c=0$. In terms of components, this reads
$$ M_{ij} = x\, \delta_{ij} + y\, c_i a_j + z\, \sum_k\varepsilon_{ijk} c_k \ , $$
where $\delta$ is the Kronecker delta symbol.
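For instance, one can build $M$ from the component formula and confirm that $Mr$ reproduces the three terms. The values of $x$, $y$, $z$, $a$, $c$ below are hypothetical example choices of mine, with $a\cdot c=0$ as required:

```python
import numpy as np

# Hypothetical example values; a is chosen orthogonal to c so that
# the dyadic term y * c a^t is traceless, as in the text.
x, y, z = 2.0, -1.5, 0.7
c = np.array([0.0, 0.0, 1.0])
a = np.array([1.0, 2.0, 0.0])   # a . c == 0

# Totally antisymmetric (Levi-Civita) tensor eps[i, j, k]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

# M_ij = x d_ij + y c_i a_j + z sum_k eps_ijk c_k
M = x * np.eye(3) + y * np.outer(c, a) + z * np.einsum('ijk,k->ij', eps, c)

# M r should equal x r + y (r . a) c + z (r x c) for any r
r = np.array([0.3, -0.8, 1.1])
lhs = M @ r
rhs = x * r + y * np.dot(r, a) * c + z * np.cross(r, c)
print(np.allclose(lhs, rhs))  # True
```

The check works because $\sum_{j,k}\varepsilon_{ijk}\,r_j c_k = (r\times c)_i$.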

  • 0
    There's something not quite right about this.... – 2011-02-28