3

Recently, while reading a paper, I came across the claim that if we have $n$ distinct vectors $\vec{x}_i \in \mathbb{R}^d$ (they didn't clarify what they mean by 'distinct', but I assume it means no two vectors agree in all coordinates), then it is always possible to select another vector $\vec{a} \in \mathbb{R}^d$ such that $i \neq j$ implies $\langle \vec{x}_i, \vec{a}\rangle \neq \langle \vec{x}_j, \vec{a}\rangle$, where $\langle \cdot , \cdot \rangle$ denotes the inner product (i.e., dot product).

While I believe them that it's true, I can't figure out how to prove it for myself. Note that $n$ is finite, and that we know all $n$ vectors before we have to select $\vec{a}$.

3 Answers

1

For any pair of distinct vectors $x_i \neq x_j \in \mathbb{R}^d$, the probability that $\langle x_i, a\rangle = \langle x_j, a\rangle$ for $a$ drawn uniformly at random from the unit sphere in $\mathbb{R}^d$ is $0$.

You can see that this is true intuitively by recalling the relationship between the dot product and the angle between two vectors. It can be proved formally by integrating over the unit sphere.

Thus, since a finite union of probability-$0$ events still has probability $0$, you can choose a uniformly random vector $a$ on the unit sphere and it will work with probability $1$.
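As a quick numerical sanity check (a sketch of my own, not part of the argument), one can draw a random unit vector with NumPy and confirm that the $n$ inner products come out pairwise distinct; the example vectors, the seed, and the rounding precision are all arbitrary choices:

```python
import numpy as np

# Hypothetical example: n = 4 distinct vectors in R^3 (made up for illustration).
X = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])

# Draw a uniformly random direction: normalize a standard Gaussian sample.
rng = np.random.default_rng(0)
a = rng.standard_normal(3)
a /= np.linalg.norm(a)

# With probability 1, the n inner products <x_i, a> are pairwise distinct.
products = X @ a
assert len(np.unique(np.round(products, 12))) == len(X)
```

Rounding to 12 decimal places is just a crude guard against floating-point noise; the underlying measure-zero argument is exact.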

3

The idea is to consider the level sets of the function $f_{\vec{a}}:\mathbb{R}^d \rightarrow \mathbb{R}$ given by $f_{\vec{a}}(\vec{x}) = \langle \vec{x}, \vec{a}\rangle$.

The sets $A(\vec{a},t) = \{\vec{x} \in \mathbb{R}^d \mid \langle \vec{x}, \vec{a}\rangle = t\}$, for $t \in \mathbb{R}$, are affine subspaces of codimension $1$ orthogonal to $\vec{a}$. So to get the desired $\vec{a}$, you have to find a family of parallel codimension-$1$ affine subspaces such that no two of those $n$ distinct vectors lie on the same subspace.

Having only finitely many vectors, this can always be done. Just consider the set $L = \{t(\vec{x}_i-\vec{x}_j) \in \mathbb{R}^d \mid t \in \mathbb{R};\ i,j\in\{1,2,\dots,n\},\ i \neq j \}$ of forbidden directions. The vector $\vec{a}$ cannot be orthogonal to any of those lines. Therefore, it lies in the intersection of finitely many dense open sets, which can't be empty (this idea works even if you have countably many vectors $\{\vec{x}_i, i\in\mathbb{N}\}$).
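The "forbidden directions" condition is easy to check numerically. Below is a small sketch (my own illustration; the vectors, the candidate $\vec{a}$, and the $10^{-12}$ tolerance are all made up) verifying that a candidate $\vec{a}$ is orthogonal to none of the differences $\vec{x}_i - \vec{x}_j$, which is exactly the condition for the inner products to separate the points:

```python
import numpy as np
from itertools import combinations

# Hypothetical example: three distinct vectors in R^2 and a candidate a.
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 2.0]])
a = np.array([1.0, 3.0])  # candidate direction, chosen for illustration

# a works iff it is orthogonal to none of the difference vectors x_i - x_j.
ok = all(abs((xi - xj) @ a) > 1e-12 for xi, xj in combinations(X, 2))
assert ok

# Equivalently, the inner products <x_i, a> are pairwise distinct.
products = X @ a  # [1, 3, 8]
assert len(set(products.tolist())) == len(X)
```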

  • 0
    Thank you for this! While I am not fully able to comprehend the terminology (apologies!), I think I understand at least a basic bit of this, namely, $\vec{a}$ will satisfy our requirement provided that $\vec{a}$ is not orthogonal to any of the $n^2-n$ vectors created by $\vec{x}_i-\vec{x}_j$ (which is sensible, since an orthogonal $\vec{a}$ would map at least one pair of vectors to the same scalar). And, since the number of $\vec{x}_i$ vectors is finite, we can always select such a vector $\vec{a}$.2017-02-22
  • 0
    Ah, actually, now that I think about it more, your point at the end is that *for any pair* of vectors $(\vec{x}_i, \vec{x}_j)$, the set of all $\vec{a}$ that are *not* orthogonal to $\vec{x}_i - \vec{x}_j$ is a dense open set, and the intersection of these sets cannot be empty. If that is correct, why is that true? Surely you can have dense open sets that do not intersect. However, in this case, it seems like they're all the same set except for the single omitted 'forbidden direction' in each, yes?2017-02-22
  • 0
    You're welcome! Any two dense open sets on $\mathbb{R}^d$ **do** intersect. This is a result from Baire's Category Theorem [link](https://en.wikipedia.org/wiki/Baire_category_theorem). Because $\mathbb{R}^d$ is a Baire space, the intersection of countably many dense open sets is itself dense. That's why @Qudit said that you have probability 1 of picking an appropriate $\vec{a}$.2017-02-22
  • 0
    Ah, I think I got the definition of "dense" wrong. For instance, I thought $(0, 1)$ was a "dense" set since (naively) I thought it meant that a given set contains every point in some nonzero region (I apparently invented this out of whole cloth). Instead, it seems like a dense set in $\mathbb{R}^d$ is actually more like *all* of $\mathbb{R}$ but with some number of tiny holes poked out of it. Is this correct?2017-02-22
  • 0
    Yes, the definition of a dense set $Y$ in a topological space $X$ is that for any element $x \in X$ and any neighbourhood of $x$ (i.e. an open set containing $x$; in the case of $\mathbb{R}^d$ you can imagine small $d$-dimensional balls) there is an element of $Y$ in that neighbourhood. If you take out some points as you suggested, you have a dense set. You can also take out any subspace of dimension less than $d$ and still have a dense set (in my solution I'm taking out subspaces of dimension $d-1$). You can even consider $\mathbb{Q}^d$; it is also dense in $\mathbb{R}^d$.2017-02-22
  • 0
    I just realized I slipped up in the comment where I mention Qudit. The reason why the probability of picking an appropriate $\vec{a}$ is $1$ is that we have a **dense and open** set. Just having a dense set is not enough; for example, $\mathbb{Q}^d$ is dense in $\mathbb{R}^d$ and it has measure $0$. Fortunately, as we took a _finite_ intersection of dense open sets, we had a dense open set from which to pick our $\vec{a}$. In other words, we had to take out countably many closed sets of measure $0$, and their union also has measure $0$. This also works for countably many vectors.2017-02-22
2

Set $\vec{a} = (a_1,\dots,a_d)$ and consider the function $f \colon \mathbb{R}^d \rightarrow \mathbb{R}$ given by

$$ f(\vec{a}) = f(a_1,\dots,a_d) := \prod_{i \neq j} \left( \left< \vec{x}_i, \vec{a} \right> - \left< \vec{x}_j, \vec{a} \right> \right) = \prod_{i \neq j} \left< \vec{x}_i - \vec{x}_j, \vec{a} \right>. $$

This is a polynomial function in $a_1,\dots,a_d$ and you are looking for $\vec{a} \in \mathbb{R}^d$ such that $f(\vec{a}) \neq 0$. By assumption, for $i \neq j$ the function $\vec{a} \mapsto \left< \vec{x}_i - \vec{x}_j, \vec{a} \right>$ is a non-zero linear polynomial. Hence $f$ is a non-zero polynomial (being the product of non-zero polynomials) and so we can find (even infinitely many) $\vec{a}$ with $f(\vec{a}) \neq 0$.
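A small symbolic sketch (my own, with made-up vectors in $\mathbb{R}^2$) can make this concrete: build $f$ as the product of the linear forms $\left< \vec{x}_i - \vec{x}_j, \vec{a} \right>$ with SymPy and observe that it is a non-zero polynomial, hence non-zero at some point:

```python
import sympy as sp
from itertools import combinations

# Hypothetical example: three distinct vectors in R^2.
a1, a2 = sp.symbols('a1 a2')
a = sp.Matrix([a1, a2])
X = [sp.Matrix([1, 0]), sp.Matrix([0, 1]), sp.Matrix([2, 2])]

# f(a) = product over i < j of <x_i - x_j, a>: each factor is a non-zero
# linear polynomial, so the product is a non-zero polynomial.
f = sp.Integer(1)
for xi, xj in combinations(X, 2):
    f *= ((xi - xj).T * a)[0]

assert sp.expand(f) != 0            # f is not the zero polynomial
assert f.subs({a1: 1, a2: 3}) != 0  # e.g. a = (1, 3) separates all three points
```

(Taking the product over $i < j$ rather than all $i \neq j$ loses nothing here: the two orderings of a pair give factors differing only in sign.)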

  • 0
    Awesome! This is an excellent way to think about it. Everything makes sense except the sentence that begins with "By assumption,". It's clear that given an appropriate choice of $\vec{a}$, $\vec{a} \mapsto \left< \vec{x}_i - \vec{x}_j, \vec{a} \right>$ will always be nonzero. But, doesn't this assume that an appropriate choice of $\vec{a}$ exists?2017-02-22
  • 0
    You are given two vectors $\vec{x} = (x_1,\dots,x_d)$ and $\vec{y} = (y_1,\dots,y_d)$ such that $\vec{x} \neq \vec{y}$ (in your case, $\vec{x} = \vec{x}_i, \vec{y} = \vec{x}_j$, I have renamed them to make the notation less heavy). The function $a \mapsto \left< \vec{x} - \vec{y}, \vec{a} \right>$ is then written explicitly as $a \mapsto \sum_{i=1}^d (x_i - y_i) a_i$. Since for some $1 \leq i \leq d$ we have $x_i \neq y_i$, this is a non-zero polynomial.2017-02-22
  • 0
    BTW, I have used the fact that a product of non-zero polynomials in $d$ variables is non-zero. This is clear if $d = 1$ but might seem less clear for $d > 1$. In any case, this is true and a useful fact to know (in algebra jargon, this means the ring $\mathbb{R}[x_1,\dots,x_d]$ is an integral domain).2017-02-22
  • 0
    You don't know in advance that you can find one $\vec{a}$ that works for all the functions at the same time, but you do know that for each function $\vec{a} \mapsto \left< \vec{x}_i - \vec{x}_j, \vec{a} \right>$ you can find some (possibly different) $\vec{a}$ that makes it non-zero. Then use the fact that a product of non-zero polynomials is non-zero, and that a non-zero polynomial (over $\mathbb{R}$) is a non-zero function.2017-02-22
  • 1
    Ah! The last comment solidified everything, thanks so much!2017-02-22