18

There seem to be two kinds of transposes of a linear mapping:

  1. If $f : V \to W$ is a linear map between vector spaces $V$ and $W$ with nondegenerate bilinear forms, we define the transpose of $f$ to be the linear map $^tf : W \to V$, determined by $$B_V(v,{}^tf(w))=B_W(f(v),w) \quad \forall\ v \in V, w \in W.$$ Here, $B_V$ and $B_W$ are the bilinear forms on $V$ and $W$ respectively. The matrix of the transpose of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms.

    I was wondering: what does "the bases are orthonormal with respect to their bilinear forms" mean, specifically?

    Does the transpose depend on the choice of the nondegenerate bilinear forms $B_W$ and $B_V$, given that there can be many choices? (I sketch a small numeric experiment after this list.)

  2. If $f : V \to W$ is a linear map, then the transpose (or dual) $f^* : W^* \to V^*$ is defined by $$f^*(\varphi) = \varphi \circ f$$ for every $\varphi \in W^*$. If the linear map $f$ is represented by the matrix $A$ with respect to two bases of $V$ and $W$, then $f^*$ is represented by the transpose matrix $A^T$ with respect to the dual bases of $W^*$ and $V^*$.

    Is this a different definition from the previous one, or are they essentially the same? Can they be directly related in some way?

  3. In the fundamental theorem of linear algebra, for each matrix $A \in \mathbf{R}^{m \times n}$:

    $\mathrm{ker}(A) = (\mathrm{im}(A^T))^\perp$, that is, the nullspace is the orthogonal complement of the row space.

    $\mathrm{ker}(A^T) = (\mathrm{im}(A))^\perp$, that is, the left nullspace is the orthogonal complement of the column space.

    Can the theorem be rephrased in terms of a linear mapping instead of a matrix? Which of the above two definitions of the transpose of a linear mapping is used in the rephrasing to replace $A^T$?
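To make this concrete, here is a small numeric experiment I put together (a NumPy sketch; the matrix `A` and the helper `random_gram` are arbitrary choices of mine, not taken from any text). Writing $G_V, G_W$ for the Gram matrices of the forms and $T$ for the matrix of $^tf$, the defining identity in coordinates reads $v^T G_V T w = v^T A^T G_W w$ for all $v, w$, which forces $G_V T = A^T G_W$; so the transpose does seem to depend on the chosen forms, and reduces to $A^T$ exactly when both Gram matrices are the identity:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 3, 4                       # dim V = 3, dim W = 4
A = rng.standard_normal((m, n))   # matrix of f : V -> W in some fixed bases

# Hypothetical Gram matrices of nondegenerate symmetric bilinear forms on
# V and W (arbitrary choices; any invertible symmetric matrices would do).
def random_gram(k):
    M = rng.standard_normal((k, k))
    return M + M.T + 2 * k * np.eye(k)   # symmetric and comfortably invertible

G_V, G_W = random_gram(n), random_gram(m)

# The identity B_V(v, tf(w)) = B_W(f(v), w) forces G_V @ T = A.T @ G_W:
T = np.linalg.solve(G_V, A.T @ G_W)   # matrix of the transpose tf : W -> V

# Spot-check the defining identity on random vectors.
v, w = rng.standard_normal(n), rng.standard_normal(m)
assert np.isclose(v @ G_V @ (T @ w), (A @ v) @ G_W @ w)

# With orthonormal bases (G_V = I, G_W = I), T reduces to A.T:
assert np.allclose(np.linalg.solve(np.eye(n), A.T @ np.eye(m)), A.T)
```

(Changing `random_gram` changes $T$, which is what prompted my question about the dependence on the forms.)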

Thanks and regards!

2 Answers

12
  1. Given a bilinear map $H\colon V\times V\to F$, a basis $[v_1,\ldots,v_n]$ of $V$ is orthonormal relative to $H$ if $H(v_i,v_j) = 0$ whenever $i\neq j$ and $H(v_i,v_i) = 1$ for each $i$.

    The transpose of the transformation is defined in terms of the bilinear forms. Change the forms, and the "transpose" may change. This is just like the way that, if you change the inner product on a vector space, then whether a given projection is an "orthogonal projection" may change as well.

  2. This is really the definition of the dual transformation; it holds for any vector spaces (both finite and infinite dimensional). In the infinite dimensional case, you have no hope of relating it to the previous definition, because $W^*$ is not isomorphic (not even non-canonically) with $W$, nor $V^*$ with $V$. In the finite dimensional case, you can define a bilinear form specifically so that the given bases of $V$ and $W$ are orthonormal, and then identify $W^*$ with $W$ by identifying the basis with the dual basis in the obvious way, and similarly for $V^*$ and $V$. Then the transpose defined here coincides with the transpose defined in 1.

  3. Given inner product spaces $V$ and $W$, with inner products $\langle-,-\rangle_V$ and $\langle -,-\rangle_W$ respectively, the adjoint of a linear transformation $T\colon V\to W$ is a function $T^*\colon W\to V$ such that for all $v\in V$ and $w\in W$, $$\langle T(v),w\rangle_W = \langle v,T^*(w)\rangle_V.$$ It is not hard to show that if the adjoint exists, then it is unique and linear; and that if $V$ and $W$ are finite dimensional, then the adjoint always exists. If $\beta$ and $\gamma$ are orthonormal bases for $V$ and $W$, then it turns out that $[T^*]_{\gamma}^{\beta} = ([T]_{\beta}^{\gamma})^*$, where $A^*$ is the conjugate transpose of the matrix $A$. If the vector spaces are real vector spaces, then the matrix of the adjoint is just the transpose.

    It is a theorem that $$\mathrm{ker}(T) = (\mathrm{Im}(T^*))^{\perp}.$$ If the spaces are finite dimensional, then you also have $$\mathrm{Im}(T^*)=(\mathrm{ker}(T))^{\perp}.$$ If you consider the matrices relative to orthonormal bases over the reals, this translates to the equations you have for matrices.

    This theorem uses the first definition, with the bilinear forms being the inner products of the spaces, assuming they are real vector spaces (as opposed to complex ones), and the bases used are orthonormal bases.
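Over the reals with the standard inner products, all of this is easy to check numerically. Here is a quick NumPy sketch (the rank-deficient matrix `A` is an arbitrary example of mine; the SVD is just a convenient way to extract orthonormal bases of the row space and the null space):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 4, 6, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r matrix of T

# Adjoint identity: <T(v), w> = <v, T*(w)>, with T* represented by A.T.
v, w = rng.standard_normal(n), rng.standard_normal(m)
assert np.isclose((A @ v) @ w, v @ (A.T @ w))

# Orthonormal bases of im(A^T) (the row space) and ker(A) via the SVD.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
row_space = Vt[:rank]    # spans im(A^T)
null_space = Vt[rank:]   # spans ker(A)

# ker(T) = (Im(T*))^perp: the two spans are orthogonal and together fill R^n.
assert np.allclose(row_space @ null_space.T, 0)
assert rank + null_space.shape[0] == n

# rank(T) = rank(T*), as noted in the comments below.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
```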

  • 0
    Thanks! As to 3, is there a similar theorem for vector spaces instead of for inner product spaces? (2011-08-17)
  • 1
    @Tim: You cannot talk about orthogonal complements in the absence of an inner product. (An "inner product space" is just a vector space with an inner product.) (2011-08-18)
  • 0
    The transpose of a linear mapping is a pure vector space concept, and doesn't require an inner product. So is there some theorem that relates the images and kernels of $T$ and $T^*$ or $T^T$? (2011-08-18)
  • 1
    @Tim: The transpose of a *matrix* can be defined in the *apparent* absence of an inner product, but you are actually using the standard inner product in Euclidean space. There is no concept of "transpose of a linear transformation" in the abstract (i.e., between abstract vector spaces), except by invoking (non-canonical) isomorphisms to $F^n$ by choosing bases. Likewise, there is no concept of "adjoint of a linear transformation" in the absence of an inner product. So, no, you cannot talk about "$T^T$" or "$T^*$" absent an inner product for an arbitrary linear transformation. (2011-08-18)
  • 1
    @Tim: If by "transpose" in your comment you were referring to the definition in 2 (usually called the "dual"), then note that the kernel of $T$ and the image of $T^*$ don't even "live" in the same space: the first is in $V$, the second in $V^*$.2011-08-18
  • 0
    (1) Isn't it the case that the first definition of the transpose of a linear mapping depends on two bilinear forms, not on inner products on the domain and codomain? (2) Consider each kind of transpose definition and assume there are no inner products. Can the coimage of $T$ and the image of $T^*$ be isomorphic? Similarly for the cokernel of $T$ and the kernel of $T^*$? (2011-08-18)
  • 1
    @Tim: I'm thinking of the bilinear map as defining a possibly non-definite inner product (satisfying all but the condition that $\langle x,x\rangle \geq 0$, with equality if and only if $x=0$); you still cannot talk about an "orthogonal complement" without referring to some inner product, so there is no way to make sense of the theorem in 3 without it. (cont) (2011-08-18)
  • 1
    @Tim: For $T^*$ in (2), the dimension of the image of $T$ in $W$ is equal to the dimension of the image of $T^*$ in $V^*$. So the coimage of $T$ has dimension $\dim W-\mathrm{rank}(T)$ (assuming I'm interpreting what you are saying correctly), while the image of $T^*$ has dimension $\mathrm{rank}(T)$. (2011-08-18)
  • 0
    I might have been wrong before, but what I meant was the following. (1) Is there some isomorphism relation, in the sense of vector spaces (without referring to inner products, and therefore not considering orthogonal complements), between the images and kernels of $T$ and $T^*$? (2) Is it right that the two definitions of the transpose of a linear mapping don't depend on inner products, and that they are pure vector space concepts? (2011-08-18)
  • 1
    @Tim: $\mathrm{rank}(T) = \mathrm{rank}(T^*)$ in the finite dimensional case. With the kernels you are not so lucky in general, since $\dim(\mathrm{ker}(T)) = \dim V - \mathrm{rank}(T)$, but $\dim(\mathrm{ker}(T^*)) = \dim W - \mathrm{rank}(T^*)$. So in general, no (at least as far as I am aware). Note also that the theorem in (3) need not hold for non-definite inner products (hence does not necessarily hold for bilinear forms). (2011-08-18)
  • 0
    Thanks! Are the following correct? (1) Any two vector spaces with the same dimension are isomorphic. (2) The two definitions of the transpose of a linear mapping don't depend on inner products, and they are pure vector space concepts. (2011-08-18)
  • 1
    @Tim: (1) Two vector spaces of the same dimension over the same field are (non-canonically) isomorphic. (2) The first definition is *not* a "pure vector space concept", because it requires you to have *specified* bilinear forms (i.e., "extra information" beyond the two vector spaces and the linear transformation). The second definition (which is *usually* called the "dual", not the "transpose") indeed does not require information beyond the vector spaces and the linear transformations, but you don't have anything like the theorem in (3), only that the ranks are the same. (2011-08-18)
  • 0
    Thanks! (1) I was wondering, in your reply to part 3, when talking about the existence and uniqueness of the adjoint, can the inner products on the domain and codomain be relaxed to more general bilinear forms? (2) In your rephrasing of the fundamental theorem, is the second formula $\mathrm{Im}(T^*)=(\mathrm{ker}(T))^{\perp}$ supposed to be $\mathrm{Ker}(T^*)=(\mathrm{Im}(T))^{\perp}$ instead? (2011-08-18)
  • 1
    @Tim: (1) Not in general for uniqueness (any nondefinite bilinear map will cause problems), though linearity will follow if the adjoint exists. Adjoints need not even exist for inner products in the infinite dimensional case. (2) No, it's exactly what I wrote. In the infinite dimensional case, you can have $(W^{\perp})^{\perp}\neq W$ (take square summable sequences, and $W$ the almost null sequences), so $\mathrm{Im}(T^*) = (\mathrm{ker}(T))^{\perp}$ does not follow from the first equality. But since $(T^*)^*=T$, what you propose *does* follow (by using $T^*$ instead of $T$). (2011-08-19)
  • 0
    @ArturoMagidin I stumbled upon this post whilst searching for "dual"+"transpose"+"adjoint" and now I understand better the distinction between these terms. I am very familiar with (1) and (3) but am not so clear on (2). Can you suggest a reference that would elaborate on the "dual mapping" as it is defined in this sense? (2012-01-03)
  • 1
    @3Sphere: Friedberg, Insel, and Spence have a section on dual spaces; they deal with the contravariant functor induced by them, as well as with dual bases. (2012-01-04)
  • 0
    @ArturoMagidin Thanks for the reference. (2012-01-04)
  • 0
    @Arturo Magidin: [there is the transpose](https://en.wikipedia.org/wiki/Transpose_of_a_linear_map) for an abstract operator, but it acts between dual spaces: if $f : V_1 \to V_2$ is a linear map and $v : V_2 \to F$ is a linear functional on $V_2$ over the ground field $F$ (i.e., an element of $V_2^*$), then their composition $v \circ f$ maps $V_1$ to $F$, and hence belongs to $V_1^*$. In short, $f^* : V_2^* \to V_1^*$. (2015-04-23)
1

If $f:V\to W$ is a linear map between vector spaces $V$ and $W$ with non-degenerate bilinear forms, we define the transpose of $f$ to be the linear map ${}^tf:W\to V$, determined by $$B_V(v,{}^tf(w))=B_W(f(v),w)$$ for all $v\in V$ and $w\in W$. Here, $B_V$ and $B_W$ are the bilinear forms on $V$ and $W$ respectively. The matrix of the transpose of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms.
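To illustrate the last sentence, here is a small NumPy sketch (the matrix and the form are arbitrary choices, not taken from the answer). Taking $V = W$ with the same form $B$ on both sides, the defining identity forces $G\,T = A^T G$ in coordinates, where $G$ is the Gram matrix of $B$; so the matrix of ${}^tf$ equals $A^T$ only when $G$ commutes with $A^T$, which holds in particular for $G = I$ (an orthonormal basis):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))   # matrix of f : V -> V in a fixed basis

# Gram matrix of a nondegenerate form for which the basis is orthogonal
# but NOT orthonormal (the diagonal entries are not all 1).
G = np.diag([1.0, 2.0, 3.0])

# B(v, tf(w)) = B(f(v), w) forces G @ T = A.T @ G in coordinates:
T = np.linalg.solve(G, A.T @ G)

print(np.allclose(T, A.T))   # False: the matrix of the transpose map is not A^T
print(np.allclose(np.linalg.solve(np.eye(3), A.T @ np.eye(3)), A.T))  # True once G = I
```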

  • 0
    You should use MathJax to format your posts properly: http://meta.math.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference. (2016-09-16)
  • 0
    @mulumba: thanks. (2016-09-16)