
Question: (From an Introduction to Convex Polytopes)

Let $(x_{1},...,x_{n})$ be an $n$-family of points from $\mathbb{R}^d$, where $x_{i} = (\alpha_{1i},...,\alpha_{di})$, and $\bar{x_{i}} =(1,\alpha_{1i},...,\alpha_{di})$, where $i=1,...,n$. Show that the $n$-family $(x_{1},...,x_{n})$ is affinely independent if and only if the $n$-family $(\bar{x_{1}},...,\bar{x_{n}})$ of vectors from $\mathbb{R}^{d+1}$ is linearly independent.


Here is what I have so far; it is mostly just writing out definitions. If you can give me some hints toward how to start the problem, that would be great.

$(\Rightarrow)$ Assume that for $x_{i} = (\alpha_{1i},...,\alpha_{di})$, the $n$-family $(x_{1},...,x_{n})$ is affinely independent. Then the only way a linear combination $\lambda_{1}x_{1} + ... + \lambda_{n}x_{n} = 0$ with $\lambda_{1} + ... + \lambda_{n} = 0$ can hold is with $\lambda_{1} = ... = \lambda_{n} = 0$. An equivalent characterization of affine independence is that the $(n-1)$-families $(x_{1}-x_{i},...,x_{i-1}-x_{i},x_{i+1}-x_{i},...,x_{n}-x_{i})$ are linearly independent. We want to prove that for $\bar{x}_{i}=(1,\alpha_{1i},...,\alpha_{di})$, the $n$-family $(\bar{x}_{1},...,\bar{x}_{n})$ of vectors from $\mathbb{R}^{d+1}$ is linearly independent.

  • Hint: Consider each position in the vectors $x_i$, $\overline{x_i}$ individually. For example, in the case of the $x_i$'s we have: $$ \sum_{i=1}^n\lambda_ix_i =0 \quad\Leftrightarrow\quad \sum_{i=1}^n\lambda_i\alpha_{ji}=0 \;\text{for } j=1,2,...,d $$ (2011-11-19)
  • Isn't this a duplicate of http://math.stackexchange.com/questions/82189/affine-independence-and-linear-independence ? (2011-11-19)
  • @matt, I've been able to express these, but I'm not sure how to relate the two; maybe it is very obvious, but I am not seeing it. A deeper hint would be helpful. (2011-11-22)
  • This is probably a silly thing to ask, but why is affine independence of the $n$-family equivalent to linear independence of the $(n-1)$-family of difference vectors? (2015-10-20)

2 Answers


So, we want to prove that these two statements are equivalent:

  • (a) The points $x_1, \dots , x_n \in \mathbb{R}^d$ are affinely independent.

  • (b) The vectors $\overline{x}_1, \dots , \overline{x}_n \in \mathbb{R}^{d+1}$ are linearly independent.

Here $\overline{x}_i = (1, x_i)$ for $i = 1, \dots , n$.

Let's go.

$\mathbf{(a)\Longrightarrow (b)}$. Let $\lambda_1, \dots , \lambda_n \in \mathbb{R}$ be such that

$$ \lambda_1 \overline{x}_1 + \dots + \lambda_n \overline{x}_n = 0 \ . \qquad \qquad \qquad [1] $$

We have to show that $\lambda_1 = \dots = \lambda_n = 0$. But $[1]$ means

$$ \lambda_1 (1, x_1) + \dots + \lambda_n (1, x_n) = (0, 0) \ , $$

where $(0,0) \in \mathbb{R} \times \mathbb{R}^d$. And this is equivalent to

$$ \lambda_1 x_1 + \dots + \lambda_n x_n = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \ . $$

Now, $x_i = x_i - 0 = \overrightarrow{0x_i} , \ i = 1, \dots , n$. (Here, $0 \in \mathbb{R}^d$.) So, since we are assuming $(a)$, it follows that

$$ \lambda_1 = \dots = \lambda_n = 0 \ . $$
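This direction is easy to sandbox numerically. Below is a minimal Python sketch (the triangle vertices and the `rank` helper are illustrative, not from the book): for one affinely independent family it checks that the difference vectors have rank $n-1$ and the lifted vectors $(1, x_i)$ have rank $n$, i.e. are linearly independent.

```python
from fractions import Fraction

def rank(rows):
    """Row rank via exact Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0]) if m else 0):
        # Find a pivot in column c at or below row r.
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Affinely independent points in R^2 (vertices of a triangle).
pts = [(0, 0), (1, 0), (0, 1)]

# The difference vectors x_i - x_1 are linearly independent ...
diffs = [[a - b for a, b in zip(p, pts[0])] for p in pts[1:]]
assert rank(diffs) == len(pts) - 1

# ... and so are the lifted vectors (1, x_i) in R^3.
lifted = [[1, *p] for p in pts]
assert rank(lifted) == len(pts)
```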

$\mathbf{(b)\Longrightarrow (a)}$. Let $p \in \mathbb{R}^d$ be any point. We have to show that, for scalars $\lambda_1, \dots , \lambda_n \in \mathbb{R}$,

$$ \lambda_1 \overrightarrow{ px}_1 + \dots + \lambda_n \overrightarrow{ px}_n = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \qquad \qquad \qquad [2] $$

implies $\lambda_1 = \dots = \lambda_n = 0$.

If the point $p$ were $0 \in \mathbb{R}^d$, the conclusion would be clear because, in this case, $\overrightarrow{px_i} = x_i, \ i = 1, \dots , n$, and $[2]$ reads as follows:

$$ \lambda_1 x_1 + \dots + \lambda_n x_n = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \ . \qquad \qquad \qquad [3] $$

From here, we do the same reasoning as in the previous proof, but backwards: these two conditions entail

$$ \lambda_1 (1, x_1) + \dots + \lambda_n (1, x_n) = (0, 0) \ . $$

Which is the same as

$$ \lambda_1 \overline{x}_1 + \dots + \lambda_n \overline{x}_n = 0 \ . $$

And this implies

$$ \lambda_1 = \dots = \lambda_n = 0\ , $$

since we are assuming $(b)$.

Hence, we have to show that the particular case $[3]$ already implies the general one $[2]$, for every $p\in \mathbb{R}^d$. But this is obvious:

$$ \lambda_1 \overrightarrow{ px}_1 + \dots + \lambda_n \overrightarrow{ px}_n = \lambda_1 (x_1 -p ) + \dots + \lambda_n (x_n - p) $$

Which, since $\lambda_1 + \dots + \lambda_n = 0$ by $[2]$, is

$$ \lambda_1 x_1 + \dots + \lambda_n x_n - (\lambda_1 + \dots + \lambda_n)p = \lambda_1 x_1 + \dots + \lambda_n x_n = 0 \ . $$
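The cancellation used here can be made concrete: once the coefficients sum to zero, $\sum_i \lambda_i \overrightarrow{px_i}$ does not depend on $p$ at all. A small Python check, with arbitrarily chosen integer points and coefficients (none of these values come from the problem):

```python
# Coefficients summing to zero, and some arbitrary integer points in R^2.
lam = [2, -3, 1]
assert sum(lam) == 0
pts = [(0, 1), (2, 5), (4, -1)]

def comb(p):
    """Compute sum_i lam[i] * (x_i - p), componentwise."""
    return tuple(sum(l * (x[k] - p[k]) for l, x in zip(lam, pts))
                 for k in range(len(p)))

# The (lam_1 + ... + lam_n) * p term cancels, so the result
# is the same for every choice of base point p:
assert comb((0, 0)) == comb((7, -3)) == comb((100, 100))
```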

  • I will award the bounty if you can give a brief explanation of why "$$ \lambda_1 (1, x_1) + \dots + \lambda_n (1, x_n) = (0, 0) \ , $$ where $(0,0) \in \mathbb{R} \times \mathbb{R}^d$. And this is equivalent to $$ \lambda_1 x_1 + \dots + \lambda_n x_n = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \ . $$" is a valid inference. I simply do not see why they are equivalent, although it is likely something very obvious. Thank you for the response. (2011-12-30)
  • I'm also confused as to why you introduce the point $p$ going from (b) to (a). It cancels out neatly, but can't you do without it? (2011-12-30)
  • As for your first question: $\lambda_1(1,x_1) + \dots + \lambda_n(1,x_n) = (\lambda_1 + \dots + \lambda_n , \lambda_1 x_1+ \dots + \lambda_n x_n) = (0,0)$ means exactly that each component is zero. (2011-12-30)
  • As for the second one: condition $[2]$ implying $\lambda_1 = \dots = \lambda_n = 0$ at the beginning of the implication $(b) \Longrightarrow (a)$ is one of the definitions of the points $x_1, \dots , x_n$ being affinely independent in an abstract affine space. If your book only works with the affine space $\mathbb{R}^d$, this condition is equivalent to the same without $p$, so you can safely delete that $p$ from the start. Otherwise said: begin at $[3]$ and forget everything after my "Hence". (But, being trained myself to work with abstract affine spaces, I wanted to check that I could do it.) (2011-12-30)
  • Also, if your book says that, for $\mathbb{R}^d$, the definition of being affinely independent is $[3]$ implying $\lambda_1 = \dots = \lambda_n = 0$, you can delete my "Now, $x_i=x_i−0=\overrightarrow{0x_i}, i=1,…,n$. (Here, $0 \in \mathbb{R}^d$.)" because I put it there for the same reason: I was thinking that, in an abstract affine space, the definition is $[2]$ implying $\lambda_1 = \dots = \lambda_n = 0$. (2011-12-30)
  • I appreciate the response; this cleared up a lot of later issues I had understanding hyperplanes and open and closed half-spaces. Thank you. (2011-12-30)
  • How would this work if, instead of assuming the points $x_1, \ldots, x_n$ to be affinely independent, we assumed the family of vectors $(x_1 - x_0, \ldots, x_n - x_0)$, where $x_0$ is an arbitrary vector, to be linearly independent instead? (2015-10-20)

($\Rightarrow$): Suppose $(\bar{x_1},\ldots,\bar{x_n})$ is linearly dependent, so we have $$\left(\sum\limits_{i=1}^n c_i,\sum\limits_{i=1}^n c_i\alpha_{1i},\ldots,\sum\limits_{i=1}^n c_i\alpha_{di}\right) = \sum\limits_{i=1}^nc_i\bar{x_i} = 0$$ for some coefficients $c_i\in\mathbb{R}$, not all zero. Thus $$\sum\limits_{i=1}^nc_ix_i = \left(\sum\limits_{i=1}^n c_i\alpha_{1i},\ldots,\sum\limits_{i=1}^n c_i\alpha_{di}\right) = 0\text{ and }\sum\limits_{i=1}^n c_i = 0,$$ so $(x_1,\ldots,x_n)$ is affinely dependent. Hence if $(x_1,\ldots,x_n)$ is affinely independent, $(\bar{x_1},\ldots,\bar{x_n})$ must be linearly independent.

($\Leftarrow$): Suppose $(x_1,\ldots,x_n)$ is affinely dependent, so we have $$\sum\limits_{i=1}^nc_ix_i = 0\text{ and }\sum\limits_{i=1}^nc_i=0$$ for some coefficients $c_i\in\mathbb{R}$, not all zero. Then $$\sum\limits_{i=1}^nc_i\bar{x_i} = \left(\sum\limits_{i=1}^n c_i,\sum\limits_{i=1}^n c_i\alpha_{1i},\ldots,\sum\limits_{i=1}^n c_i\alpha_{di}\right) = (0,0,\ldots,0) = 0,$$ so $(\bar{x_1},\ldots,\bar{x_n})$ is linearly dependent. Hence if $(\bar{x_1},\ldots,\bar{x_n})$ is linearly independent, $(x_1,\ldots,x_n)$ must be affinely independent.
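For a concrete instance of this contrapositive argument, here is a short Python check; the collinear points and the coefficients $c = (1, -2, 1)$ are chosen purely for illustration. The same coefficients that witness affine dependence of the points also witness linear dependence of the lifted vectors.

```python
# Three collinear points in R^2: affinely dependent via c = (1, -2, 1),
# since 1 - 2 + 1 = 0 and 1*x1 - 2*x2 + 1*x3 = 0.
pts = [(0, 0), (1, 1), (2, 2)]
c = [1, -2, 1]
assert sum(c) == 0
assert all(sum(ci * p[k] for ci, p in zip(c, pts)) == 0 for k in range(2))

# The same coefficients give a nontrivial vanishing combination
# of the lifted vectors (1, x_i) in R^3:
lifted = [(1, *p) for p in pts]
assert all(sum(ci * v[k] for ci, v in zip(c, lifted)) == 0 for k in range(3))
```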