
Suppose I have a vector $v_1 \in \mathbb{R}^n$ of dimension $n$.

I want to describe the set of "all the vectors that are orthogonal to $v_1$".

  • Would it be correct to call this set the "null space of $v_1$"?
  • Or would "the orthogonal complement of $V=\{v_1\}$ in the ambient space $\mathbb{R}^n$" be a better / more correct description?

Suppose I have the set of "all the vectors that are orthogonal to $v_1$" at hand and have also found a basis $B$ for it: $B = \{b_1, b_2, \dots, b_{n-1}\}$.

And suppose further that another $n$-dimensional vector $v_2 \in \mathbb{R}^n$, $v_2 \neq v_1$, is expressed in this basis $B$: $v_2 = \sum_{k=1}^{n-1}\beta_k b_k$, i.e. $[v_2]_B = (\beta_1, \dots, \beta_{n-1})$.

  • What would be an intuitive explanation for the relationship between $v_1$ and $[v_2]_B$? Did this process of obtaining $[v_2]_B$ somehow "remove traces" of $v_1$ in $v_2$? Is there any intuition for this?
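For concreteness, this "removing traces" idea can be sketched numerically (a NumPy illustration; the SVD-based construction of $B$ and all variable names are my own, not from any particular source):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
v1 = rng.standard_normal(n)
v2 = rng.standard_normal(n)

# Orthonormal basis B = {b_1, ..., b_{n-1}} of the set of vectors
# orthogonal to v1: the last n-1 right singular vectors of the
# 1 x n matrix [v1] span its null space.
_, _, Vt = np.linalg.svd(v1.reshape(1, -1))
B = Vt[1:].T                      # shape (n, n-1), columns are the b_k

# Coordinates of (the projection of) v2 in the basis B; this is
# exactly [v2]_B when v2 already lies in the orthogonal complement.
coords = B.T @ v2

# Reconstructing from these coordinates removes every trace of v1:
v2_clean = B @ coords
assert np.allclose(v2_clean @ v1, 0.0)                   # orthogonal to v1
removed = v2 - v2_clean
assert np.allclose(removed, (v2 @ v1) / (v1 @ v1) * v1)  # exactly the v1-component
```

In this sketch, passing through the basis $B$ and back discards precisely the component of $v_2$ along $v_1$, which seems to be the intuition being asked about.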

Thanks in advance!

EDIT

Sorry some further question:

  • If $v_1 = v_2$, then $[v_2]_B = 0$. So can $|[v_2]_B|$ be seen as some sort of similarity measure?
  • @coffeemath: thank you for your input! Any advice on what would be the "better" definition? I tend towards the "orthogonal complement" one, but in either case the vector needs to be represented in a different way (as the only element of a subspace or as a $1 \times n$ matrix). (2012-12-08)

2 Answers


An $m \times n$ matrix $M$ represents a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$ in the standard bases of these spaces. As such, if a vector $v=(v_1,\dots,v_n)$ is turned into the $1 \times n$ matrix $M=[v_1,v_2,\dots,v_n]$, then $M$ represents a linear map from $\mathbb{R}^n$ to $\mathbb{R}^1$, and the null space of $M$ is exactly the set of vectors $w$ for which $v \cdot w=0$.

Another term used for this is the orthogonal complement of the vector $v$ in $R^n$.
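This identification (null space of the $1 \times n$ matrix $=$ vectors orthogonal to $v$) can be checked numerically; here is a small sketch (NumPy, with an example vector chosen purely for illustration):

```python
import numpy as np

v = np.array([1.0, 2.0, -1.0, 3.0])
M = v.reshape(1, -1)              # the 1 x n matrix [v_1, ..., v_n]

# Orthonormal basis of null(M) from the SVD: the right singular
# vectors beyond the rank (here rank 1) span the null space.
_, _, Vt = np.linalg.svd(M)
N = Vt[1:].T                      # columns span null(M)

# Every null-space vector is orthogonal to v ...
assert np.allclose(M @ N, 0.0)

# ... and every w with v . w = 0 lies in null(M): projecting w
# onto the column span of N leaves it unchanged.
w = np.array([2.0, -1.0, 0.0, 0.0])   # v . w = 2 - 2 + 0 + 0 = 0
assert np.isclose(v @ w, 0.0)
assert np.allclose(N @ (N.T @ w), w)
```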

  • I accepted this answer since I feel it is closer to what is meant by the concept of a "null space of a vector". @Berci's answer is also really useful, though. About the intuition behind this whole process I am still unsure, and I think I will need to ask a more specific question for that. (2012-12-12)
  1. Perhaps it is cleanest to talk about the linear functional induced by the given vector $v_1$ (and by the inner product!): $\Bbb R^n\to \Bbb R,\quad x\mapsto \langle v_1,x\rangle.$ Then the subspace ${v_1}^\perp=\{x\mid \langle v_1,x\rangle=0\}$ is just the null space of this mapping.
  2. Note that only orthonormal bases satisfy that, for any vector $v_2$, its coordinates can be calculated by the inner product: $v_2=\sum_k \langle v_2,b_k\rangle\, b_k\,.$ The general definition for $[v_2]_{B}=(\beta_k)_k$ is not less and not more than this one: $v_2=\sum_k \beta_k\cdot b_k\ .$
  3. And note that in this case, if $(b_k)_k$ is indeed a basis for ${v_1}^\perp$, this expressibility of $v_2$ says exactly that $v_2$ is in there, i.e. $v_2\in{v_1}^\perp$, i.e. $v_2\perp v_1$.
  4. In this case, if $v_1=v_2$, then indeed $v_2=0$ follows.
  5. If you want to measure something like how orthogonal $v_2$ is to $v_1$, then the inner product itself will do just fine: $\langle v_1,v_2\rangle$; or, if lengths are to be ignored (and neither vector is $0$), you might want to use $\frac{\langle v_1,v_2\rangle}{|v_1|\cdot |v_2|}$, where $|v|=\sqrt{\langle v,v\rangle}$. Anyway, this is just the cosine of the angle between $v_1$ and $v_2$...
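As a quick numerical sanity check of this cosine quantity (a NumPy sketch; the helper name is mine):

```python
import numpy as np

def cos_angle(u, v):
    """Cosine of the angle between nonzero vectors u and v."""
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

e1 = np.array([1.0, 0.0, 0.0])
assert np.isclose(cos_angle(e1, np.array([0.0, 5.0, 0.0])), 0.0)  # orthogonal
assert np.isclose(cos_angle(e1, np.array([3.0, 0.0, 0.0])), 1.0)  # parallel
# 45 degrees between e1 and (1, 1, 0):
assert np.isclose(cos_angle(e1, np.array([1.0, 1.0, 0.0])), np.cos(np.pi / 4))
```

Note that the value is scale-invariant: replacing either argument by a positive multiple of itself leaves the cosine unchanged.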
  • BTW no, I am not trying to measure orthogonality. I saw this method of expressing $v_2$ in the orthonormal basis $B$ spanning the "null space of $v_1$" (sorry, I am using this term again, but that's how it was described) somewhere, and was wondering what the reasoning behind doing so is. Think of $v_1$ and $v_2 \in \mathbb{R}^n$ as time series of grey values. Still, I am lacking the intuition for what's going on. (2012-12-08)