5

Why do we need QR factorization? Is this used in any particular field?

  • 0
    A QR-decomposition-based algorithm developed in the 1950s is an efficient method for computing the eigenvalues and eigenvectors of a matrix. A wiki link is provided [here](http://en.wikipedia.org/wiki/QR_algorithm). – 2012-09-18

2 Answers

4

$\def\R{\mathbb R}\def\norm#1{\left\|#1\right\|}\def\Mat{\mathrm{Mat}}$ QR-factorisation can, for example, be used to solve linear least squares problems as follows: given $n \le m$, $A \in \Mat_{m,n}(\R)$ with full rank $n$, and $b \in \R^m$, \[ \mathrm{find}\ x \in \R^n : \norm{Ax - b}_2 = \min. \] Computing the QR-factorisation of $A$, we write $A = QR$ with $Q \in O(m)$ and $R \in \Mat_{m,n}(\R)$ upper triangular. Now, as the $2$-norm is invariant under multiplication by orthogonal matrices, \[ \norm{Ax- b}_2 = \norm{Q^{-1}Ax - Q^{-1}b}_2 = \norm{Rx - Q^{-1}b}_2. \] As $m \ge n$, $R$ is of the form $R = \begin{pmatrix} R' \\ 0\end{pmatrix}$ with $R' \in \Mat_n(\R)$ upper triangular. Writing $Q^{-1}b = \begin{pmatrix} y_1 \\ y_2\end{pmatrix} \in \R^n \times \R^{m-n}$, we have \[ \norm{Ax - b}_2 = \sqrt{\norm{R'x - y_1}^2 + \norm{y_2}^2}. \] As $\mathop{\mathrm{rank}} R' = \mathop{\mathrm{rank}} A = n$, $R'$ is invertible, and hence the minimum is attained exactly at $x = R'^{-1}y_1$.
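For concreteness, here is a minimal sketch of this recipe in Python with NumPy; the matrix `A`, the vector `b`, and the dimensions $m = 5$, $n = 3$ are made-up test data, not anything from the problem above.

```python
import numpy as np

# Made-up overdetermined system: m = 5 equations, n = 3 unknowns.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

# Reduced QR: Q (5x3) has orthonormal columns, R (3x3) is the square R' above.
Q, R = np.linalg.qr(A, mode="reduced")

# Solve R' x = y_1 = Q^T b; R is upper triangular and invertible since rank A = n.
x = np.linalg.solve(R, Q.T @ b)

# Sanity check against NumPy's own least-squares solver.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))  # True
```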


Another thing QR-factorisation can be useful for is the numerical approximation of eigenvalues and eigenvectors of a symmetric matrix $A$. Let $A_0 = A$ and $A_{k+1} = R_kQ_k$, where $A_k = Q_kR_k$ is the QR-decomposition of $A_k$. One can show that $A_k \to \Lambda$, where $\Lambda$ is a diagonal matrix containing the eigenvalues of $A$, and that the accumulated products $Q_0Q_1\cdots Q_k$ converge to an orthogonal matrix $Q$ holding the corresponding eigenvectors in its columns.
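As a quick illustration, a minimal sketch of this iteration in Python; the symmetric test matrix and the fixed iteration count of 500 are arbitrary choices for demonstration, not part of the answer.

```python
import numpy as np

# Made-up symmetric test matrix.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M + M.T

Ak = A.copy()
Qacc = np.eye(4)              # accumulated product Q_0 Q_1 ... Q_k
for _ in range(500):
    Q, R = np.linalg.qr(Ak)   # A_k = Q_k R_k
    Ak = R @ Q                # A_{k+1} = R_k Q_k
    Qacc = Qacc @ Q

# diag(Ak) approximates the eigenvalues, the columns of Qacc the eigenvectors;
# compare against NumPy's symmetric eigensolver.
print(np.sort(np.diag(Ak)))
print(np.sort(np.linalg.eigvalsh(A)))
```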

  • 0
    @yzhao You are right, but IIRC it needn't converge for a general $A$, which is why I restricted the above to symmetric $A$s. – 2012-09-18
0

QR factorization records the orthogonalization of a matrix, namely the construction of an orthonormal set that spans the column space of $A$. Doing calculations with orthogonal matrices is preferable because (1) they are easy to invert, since $Q^{-1} = Q^T$, and (2) they do not magnify errors: multiplication by an orthogonal matrix preserves the $2$-norm, so its condition number is $1$.
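A minimal sketch illustrating both properties in Python; the $4 \times 4$ random matrix is just made-up test data.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
Q, R = np.linalg.qr(A)        # columns of Q: orthonormal basis for col(A)

v = rng.standard_normal(4)
print(np.allclose(Q.T @ Q, np.eye(4)))                        # Q^{-1} = Q^T
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))   # norm preserved
```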