
There seem to be two kinds of transposes of a linear mapping:

  1. If $f: V→W$ is a linear map between vector spaces $V$ and $W$ with nondegenerate bilinear forms, we define the transpose of $f$ to be the linear map $^tf : W→V$, determined by $B_V(v,{}^tf(w))=B_W(f(v),w) \quad \forall\ v \in V, w \in W.$ Here, $B_V$ and $B_W$ are the bilinear forms on $V$ and $W$ respectively. The matrix of the transpose of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms.

    I was wondering what "the bases are orthonormal with respect to their bilinear forms" means, specifically.

    Does the transpose depend on the choice of the nondegenerate bilinear forms $B_W$ and $B_V$, given that there can be many choices?

  2. If $f : V → W$ is a linear map, then the transpose (or dual) $f^* : W^* → V^*$ is defined by $f^*(\varphi) = \varphi \circ f \, $ for every $\varphi \in W^*$. If the linear map $f$ is represented by the matrix $A$ with respect to two bases of $V$ and $W$, then $f^*$ is represented by the transpose matrix $A^T$ with respect to the dual bases of $W^*$ and $V^*$.

    Is this a different definition from the previous one, or are they essentially the same? Can they be directly related in some way?

  3. In the fundamental theorem of linear algebra, for each matrix $A \in \mathbf{R}^{m \times n}$,

    $\mathrm{ker}(A) = (\mathrm{im}(A^T))^\perp$, that is, the nullspace is the orthogonal complement of the row space;

    $\mathrm{ker}(A^T) = (\mathrm{im}(A))^\perp$, that is, the left nullspace is the orthogonal complement of the column space.

    Can the theorem be rephrased in terms of a linear mapping instead of a matrix? Which of the two definitions of the transpose of a linear mapping above would replace $A^T$ in such a rephrasing? (For concreteness, I have written out a small numerical check of the theorem after this list.)
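For concreteness, here is a small check of the theorem that I worked out myself (the numbers are my own, but the arithmetic is easy to verify): take $A = \begin{pmatrix} 1 & 2 \\ 3 & 6 \end{pmatrix}$. The row space is $\mathrm{im}(A^T) = \mathrm{span}\{(1,2)\}$ and the nullspace is $\mathrm{ker}(A) = \mathrm{span}\{(-2,1)\}$, and indeed $(1,2)\cdot(-2,1) = 0$. Likewise the column space is $\mathrm{im}(A) = \mathrm{span}\{(1,3)\}$ and the left nullspace is $\mathrm{ker}(A^T) = \mathrm{span}\{(-3,1)\}$, with $(1,3)\cdot(-3,1) = 0$, as the theorem predicts.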

Thanks and regards!

2 Answers

  1. Given a bilinear map $H\colon V\times V\to F$, a basis $[v_1,\ldots,v_n]$ of $V$ is orthonormal relative to $H$ if $H(v_i,v_j) = 0$ whenever $i\neq j$ and $H(v_i,v_i) = 1$ for each $i$.

    The transpose of the transformation is defined in terms of the bilinear forms; change the forms, and the "transpose" may change. This is just like the situation with inner products: if you change the inner product of a vector space, then whether a projection is an "orthogonal projection" or not may change as well. (A small worked example is written out after this list.)

  2. This is really the definition of the dual transformation; it makes sense for arbitrary vector spaces (both finite and infinite dimensional). In the infinite dimensional case, you have no hope of relating it to the previous definition, because $W^*$ is not isomorphic (not even non-canonically) to $W$, nor $V^*$ to $V$. In the finite dimensional case, you can define a bilinear form specifically so that the given bases of $V$ and $W$ are orthonormal, and then identify $W^*$ with $W$ by identifying the basis with the dual basis in the obvious way, and similarly for $V^*$ and $V$. Then the transpose defined here coincides with the transpose defined in 1 (a sketch of this identification is written out after this list).

  3. Given inner product spaces $V$ and $W$, with inner products $\langle-,-\rangle_V$ and $\langle -,-\rangle_W$ respectively, the adjoint of a linear transformation $T\colon V\to W$ is a function $T^*\colon W\to V$ such that for all $v\in V$ and $w\in W$, $\langle T(v),w\rangle_W = \langle v,T^*(w)\rangle_V.$ It is not hard to show that if the adjoint exists, then it is unique and linear, and that if $V$ and $W$ are finite dimensional, then the adjoint always exists. If $\beta$ and $\gamma$ are orthonormal bases for $V$ and $W$, then it turns out that $[T^*]_{\gamma}^{\beta} = ([T]_{\beta}^{\gamma})^*$, where $A^*$ denotes the conjugate transpose of the matrix $A$ (a quick check of this formula is written out after this list). If the vector spaces are real vector spaces, then the matrix of the adjoint is just the transpose.

    It is a theorem that $\mathrm{ker}(T) = (\mathrm{Im}(T^*))^{\perp}.$ If the spaces are finite dimensional, then you also have $\mathrm{Im}(T^*)=(\mathrm{ker}(T))^{\perp}.$ If you consider the matrices relative to orthonormal bases over the reals, this translates to the equations you have for matrices.

    This theorem uses the first definition, with the bilinear forms being the inner products of the spaces, assuming they are real vector spaces (as opposed to complex ones), and the bases used are orthonormal bases.
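Here is a small worked example of the dependence on the form in point 1 (the particular form below is chosen purely for illustration). Take $V = W = \mathbb{R}^2$ with the standard basis, let $f$ have matrix $A$, and use on both spaces the bilinear form $B(x,y) = x^T D y$ for a fixed symmetric invertible $D$. If $M$ is the matrix of ${}^tf$, the defining condition $B(v,{}^tf(w)) = B(f(v),w)$ reads $v^T D M w = v^T A^T D w$ for all $v,w$, so $DM = A^TD$, that is, $M = D^{-1}A^TD$. With the dot product ($D = I$, standard basis orthonormal) this is the familiar $A^T$; but with, say, $D = \begin{pmatrix}1&0\\0&2\end{pmatrix}$ and $A = \begin{pmatrix}0&1\\0&0\end{pmatrix}$, one gets $M = \begin{pmatrix}0&0\\1/2&0\end{pmatrix} \neq A^T$, precisely because the standard basis is orthogonal but not orthonormal relative to this $B$ (indeed $B(e_2,e_2) = 2$).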
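To spell out the identification in point 2 (the maps $\Phi_V$, $\Phi_W$ below are notation I am introducing for the sketch): declare the chosen basis $[v_1,\ldots,v_n]$ of $V$ orthonormal, that is, define $B_V(v_i,v_j)=\delta_{ij}$, and let $\Phi_V\colon V\to V^*$ send $v_i$ to the dual basis vector $v_i^*$; then $\Phi_V(v) = B_V(-,v)$ for every $v\in V$, and similarly for $\Phi_W\colon W\to W^*$. For $w\in W$ and $v\in V$ we have $f^*(\Phi_W(w))(v) = \Phi_W(w)(f(v)) = B_W(f(v),w)$, while $\Phi_V({}^tf(w))(v) = B_V(v,{}^tf(w))$. The defining equation in 1 says exactly that these two functionals agree, so ${}^tf = \Phi_V^{-1}\circ f^*\circ \Phi_W$.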
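As a quick check of the matrix formula in point 3 in the complex case (using the standard Hermitian inner product $\langle x,y\rangle = \sum_i x_i\overline{y_i}$ on $\mathbb{C}^n$, which is my choice of convention here): if $T$ has matrix $A$ in the standard bases, then $\langle T(v),w\rangle = (Av)^T\overline{w} = v^T A^T\overline{w}$, while $\langle v,T^*(w)\rangle = v^T\,\overline{[T^*]w} = v^T\,\overline{[T^*]}\,\overline{w}$; these agree for all $v,w$ exactly when $\overline{[T^*]} = A^T$, that is, $[T^*] = \overline{A}^T$, the conjugate transpose. Over $\mathbb{R}$ the conjugation does nothing and one recovers the plain transpose.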

  • @Arturo Magidin: [there is the transpose](https://en.wikipedia.org/wiki/Transpose_of_a_linear_map) for an abstract operator, but it acts between dual spaces: if $f\colon V_1\to V_2$ is a linear map and $v$ is a linear functional on $V_2$ (an element of $V_2^*$), then the composition $v\circ f$ maps $V_1$ to the ground field, and hence belongs to $V_1^*$. In short, $f^*\colon V_2^*\to V_1^*$.

If $f:V\to W$ is a linear map between vector spaces $V$ and $W$ with non-degenerate bilinear forms, we define the transpose of $f$ to be the linear map $t_f:W\to V$, determined by $B_V(v,t_f(w))=B_W(f(v),w)$ for all $v\in V$ and $w\in W$. Here, $B_V$ and $B_W$ are the bilinear forms on $V$ and $W$ respectively. The matrix of the transpose of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms.
