
Let $v$ be a vector in $\mathbb{R}^n$, and let $n_i$ for $i \in 1..k$ (with $k < n$) be the normal vectors of $k$ hyperplanes through the origin in $\mathbb{R}^n$. The problem is to compute the orthogonal projection of $v$ onto the intersection of all those hyperplanes.

I'm looking for an $n \times n$ matrix $M$ such that $vM = \text{Projection}(v)$.

Here are three examples:

- $n=3$, $k=2$, $n_1=(1,0,0)$, $n_2=(0,1,0)$ $\Rightarrow$ $p(1,1,1) = (0,0,1)$
- $n=3$, $k=2$, $n_1=(1,0,-1)$, $n_2=(0,1,-1)$ $\Rightarrow$ $p(0,0,1)=({1 \over 3},{1 \over 3},{1 \over 3})$
- $n=3$, $k=1$, $n_1=(0,0,1)$ $\Rightarrow$ $p(1,1,1) = (1,1,0)$
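For concreteness, here is a minimal numpy sketch that checks the second example. It uses the standard closed-form projector $M = I - N^\top(NN^\top)^{-1}N$, where the rows of $N$ are the normals; this formula is an assumption not spelled out in the question and it requires the normals to be linearly independent.

```python
import numpy as np

def projection_matrix(normals):
    """M = I - N^T (N N^T)^{-1} N, where the rows of N are the normals.

    For a row vector v, vM is the orthogonal projection of v onto the
    intersection of the hyperplanes (normals must be linearly independent).
    """
    N = np.array(normals, dtype=float)
    n = N.shape[1]
    return np.eye(n) - N.T @ np.linalg.solve(N @ N.T, N)

# second example: v = (0,0,1), normals (1,0,-1) and (0,1,-1)
M = projection_matrix([[1, 0, -1], [0, 1, -1]])
print(np.array([0, 0, 1]) @ M)  # -> [0.3333... 0.3333... 0.3333...]
```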

  • If you are describing hyperplanes by their normal vectors and you then want to consider the vectors that lie within those hyperplanes, you actually want the vectors that are orthogonal to the original (normal) vectors. Perhaps your question should be worded as: "the component of a vector $v$ that is orthogonal to a set of vectors $n_i$". Such a question could easily be answered here. (2012-12-12)

2 Answers


Orthonormalize the normal vectors (e.g. using Gram–Schmidt), then project them out. In vector notation, projecting out a normal vector $n_i$ from a vector $v$ yields

$ v\to v-(v\cdot n_i)n_i\;; $

in matrix notation, this is

$ v\to\left(I-n_in_i^\top\right)v\;; $

and the matrix operation for projecting out all $k$ orthonormal vectors in one go is

$ v\to\left(I-\sum_in_in_i^\top\right)v\;. $

Note that this only works for orthonormal normal vectors $n_i$; you can't apply it directly to your second example, in which the normal vectors aren't orthonormal.

P.S.: It's been pointed out that the question uses a row vector convention. To transform the answer to that convention, simply transpose everything and toggle the transposition markers, which yields

$ v\to v\left(I-\sum_in_i^\top n_i\right)\;. $
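A minimal numpy sketch of this recipe in the row-vector convention, using a QR factorization as a numerically stable stand-in for Gram–Schmidt (and assuming the normals are linearly independent):

```python
import numpy as np

def projector(normals):
    """I - sum_i q_i^T q_i, where the q_i orthonormalize the normals."""
    # thin QR of the normals-as-columns: the columns of Q are an
    # orthonormal basis of span{n_1, ..., n_k} (Gram-Schmidt equivalent)
    Q, _ = np.linalg.qr(np.array(normals, dtype=float).T)
    n = Q.shape[0]
    return np.eye(n) - Q @ Q.T

# second example from the question: the normals are not orthonormal,
# so they must be orthonormalized before being projected out
M = projector([[1, 0, -1], [0, 1, -1]])
print(np.array([0, 0, 1]) @ M)  # -> [0.3333... 0.3333... 0.3333...]
```

Since the resulting matrix is symmetric, it serves both the row-vector and the column-vector convention unchanged.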

  • Of course it is just a convention that we use column vectors; I happen to think of vectors as rows myself. I find myself editing after the fact many times just to conform to what I know others expect. (2012-12-13)

If you are looking for a software solution, you may use the singular value decomposition (SVD), which is implemented in a vast number of software libraries. See the "external links" section of the Wikipedia page on SVD.

(As the row vector convention is used in the question, in the sequel all vectors are row vectors; column vectors are represented as vector transposes.) Let $A=(n_1^T,\,\ldots,\,n_k^T)$, i.e. $A$ has your $k$ normal vectors as its columns. Now, if $w$ is in the intersection of the planes normal to the $n_i$s, we have $w\cdot n_i=0$ for every $i$. In other words,

$ wA=0, $

where "$0$" means the zero $k$-vector $(0,0,\ldots,0)$. Now perform an SVD on $A$. This means

$ A = \underbrace{U}_{n\times n} \underbrace{\begin{pmatrix} \sigma_1&&&0&\ldots&0\\ &\ddots&&\vdots&&\vdots\\ &&\sigma_r&\vdots&&\vdots\\ 0&\ldots&0&0&\ldots&0\\ \vdots&&\vdots&\vdots&&\vdots\\ 0&\ldots&0&0&\ldots&0\\ \end{pmatrix}}_{n\times k} \underbrace{V^T}_{k\times k} $

for some $n\times n$ real orthogonal matrix $U$, some $k\times k$ real orthogonal matrix $V$, and some $\sigma_1\ge\ldots\ge\sigma_r>0$, where $r=\mathrm{rank}(A)$ is the number of linearly independent vectors among $n_1, n_2, \ldots, n_k$. (Most computer implementations of SVD will decompose $A$ in this way, but you should read the relevant documentation carefully.)

Now, let the $n$ columns of $U$ be $u_1^T, \ldots, u_n^T$. Since they are orthonormal, the equation $wA=0$ implies that $w\cdot u_i=0$ whenever $i\le r$. In other words, $w$ lies in the span of $u_{r+1}, u_{r+2}, \ldots, u_n$. So the whole problem reduces to finding the orthogonal projection of a vector $v$ onto the subspace spanned by $u_{r+1}, u_{r+2}, \ldots, u_n$, and the answer is

$ v\leftarrow \sum_{i=r+1}^n (v\cdot u_i)u_i, $

or equivalently,

$ v\leftarrow v - \sum_{i=1}^r (v\cdot u_i)u_i. $

Depending on the actual value of $r$, one of these two formulations will be more efficient than the other.
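A minimal numpy sketch of this procedure (the rank tolerance `tol` is an assumption; numpy's `svd` returns the decomposition in exactly the form above):

```python
import numpy as np

def project_onto_intersection(v, normals, tol=1e-12):
    # A has the normal vectors as its columns, as above
    A = np.array(normals, dtype=float).T
    U, s, _ = np.linalg.svd(A)      # full SVD: U is n x n orthogonal
    r = int(np.sum(s > tol))        # numerical rank r = rank(A)
    # second formulation: subtract the components along u_1, ..., u_r
    Ur = U[:, :r]
    return v - (v @ Ur) @ Ur.T

# second example from the question
print(project_onto_intersection(np.array([0.0, 0.0, 1.0]),
                                [[1, 0, -1], [0, 1, -1]]))
# -> [0.3333... 0.3333... 0.3333...]
```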

  • I see, thanks for the explanation. (2012-12-13)