
Question: (From an Introduction to Convex Polytopes)

Let $(x_{1},...,x_{n})$ be an $n$-family of points from $\mathbb{R}^d$, where $x_{i} = (\alpha_{1i},...,\alpha_{di})$, and $\bar{x_{i}} =(1,\alpha_{1i},...,\alpha_{di})$, where $i=1,...,n$. Show that the $n$-family $(x_{1},...,x_{n})$ is affinely independent if and only if the $n$-family $(\bar{x_{1}},...,\bar{x_{n}})$ of vectors from $\mathbb{R}^{d+1}$ is linearly independent.


Here is what I have so far; it is mostly just writing out the definitions. Any hints on how to start the problem would be great.

$(\Rightarrow)$ Assume that for $x_{i} = (\alpha_{1i},...,\alpha_{di})$, the $n$-family $(x_{1},...,x_{n})$ is affinely independent. Then the only linear combination $\lambda_{1}x_{1} + ... + \lambda_{n}x_{n} = 0$ with $\lambda_{1} + ... + \lambda_{n} = 0$ is the trivial one, i.e. $\lambda_{1} = \dots = \lambda_{n} = 0$. An equivalent characterization of affine independence is that for each $i$, the $(n-1)$-family $(x_{1}-x_{i},...,x_{i-1}-x_{i},x_{i+1}-x_{i},...,x_{n}-x_{i})$ is linearly independent. We want to prove that for $\bar{x}_{i}=(1,\alpha_{1i},...,\alpha_{di})$, the $n$-family $(\bar{x}_{1},...,\bar{x}_{n})$ of vectors from $\mathbb{R}^{d+1}$ is linearly independent.

  • This is probably a silly thing to ask, but why is affine independence of the $n$-family equivalent to linear independence of the $(n-1)$-families of difference vectors? (2015-10-20)

2 Answers


So, we want to prove that these two statements are equivalent:

  • (a) The points $x_1, \dots , x_n \in \mathbb{R}^d$ are affinely independent.

  • (b) The vectors $\overline{x}_1, \dots , \overline{x}_n \in \mathbb{R}^{d+1}$ are linearly independent.

Here $\overline{x}_i = (1, x_i)$ for $i = 1, \dots , n$.

Let's go.

$\mathbf{(a)\Longrightarrow (b)}$. Let $\lambda_1, \dots , \lambda_n \in \mathbb{R}$ be such that

$ \lambda_1 \overline{x}_1 + \dots + \lambda_n \overline{x}_n = 0 \ . \qquad \qquad \qquad [1] $

We have to show that $\lambda_1 = \dots = \lambda_n = 0$. But $[1]$ means

$ \lambda_1 (1, x_1) + \dots + \lambda_n (1, x_n) = (0, 0) \ , $

where $(0,0) \in \mathbb{R} \times \mathbb{R}^d$. And this is equivalent to

$ \lambda_1 x_1 + \dots + \lambda_n x_n = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \ . $

Now, $x_i = x_i - 0 = \overrightarrow{0x_i} , \ i = 1, \dots , n$. (Here, $0 \in \mathbb{R}^d$.) So, since we are assuming $(a)$, it follows that

$ \lambda_1 = \dots = \lambda_n = 0 \ . $
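As a quick numerical sanity check of this direction (the points below are an arbitrary illustrative choice, and numpy is assumed available): lifting affinely independent points to $\overline{x}_i = (1, x_i)$ produces a matrix of full row rank, i.e. linearly independent vectors.

```python
import numpy as np

# Three affinely independent points in R^2 (an illustrative example).
x = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Lift each x_i to (1, x_i) in R^3.
x_bar = np.hstack([np.ones((3, 1)), x])

# Linear independence of the lifted vectors <=> full row rank.
print(np.linalg.matrix_rank(x_bar))  # 3: the lifted vectors are linearly independent
```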

$\mathbf{(b)\Longrightarrow (a)}$. Let $p \in \mathbb{R}^d$ be any point. We have to show that

$ \lambda_1 \overrightarrow{px_1} + \dots + \lambda_n \overrightarrow{px_n} = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \qquad \qquad \qquad [2] $

implies $\lambda_1 = \dots = \lambda_n = 0$.

If the point $p$ were $0 \in \mathbb{R}^d$, the conclusion would be clear because, in this case, $\overrightarrow{px_i} = x_i, \ i = 1, \dots , n$, and $[2]$ reads as follows:

$ \lambda_1 x_1 + \dots + \lambda_n x_n = 0 \qquad \text{and} \qquad \lambda_1 + \dots + \lambda_n = 0 \ . \qquad \qquad \qquad [3] $

From here, we run the same reasoning as in the previous proof, but backwards: these two conditions entail

$ \lambda_1 (1, x_1) + \dots + \lambda_n (1, x_n) = (0, 0) \ . $

Which is the same as

$ \lambda_1 \overline{x}_1 + \dots + \lambda_n \overline{x}_n = 0 \ . $

And this implies

$ \lambda_1 = \dots = \lambda_n = 0\ , $

since we are assuming $(b)$.

It remains to show that the general case $[2]$ reduces to the particular case $[3]$, for every $p\in \mathbb{R}^d$. But this is immediate:

$ \lambda_1 \overrightarrow{px_1} + \dots + \lambda_n \overrightarrow{px_n} = \lambda_1 (x_1 - p) + \dots + \lambda_n (x_n - p) \ , $

which is

$ \lambda_1 x_1 + \dots + \lambda_n x_n - (\lambda_1 + \dots + \lambda_n)p = \lambda_1 x_1 + \dots + \lambda_n x_n = 0 \ . $
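The translation-invariance step above, $\sum_i \lambda_i (x_i - p) = \sum_i \lambda_i x_i$ whenever $\sum_i \lambda_i = 0$, can also be checked numerically; the following sketch uses random example data and assumes numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(4, 3))   # four points in R^3 (arbitrary example data)
lam = rng.normal(size=4)
lam -= lam.mean()             # force sum(lam) == 0

s0 = lam @ x                  # sum_i lam_i * x_i  (the p = 0 case)
for _ in range(5):
    p = rng.normal(size=3)    # arbitrary base point
    s_p = lam @ (x - p)       # sum_i lam_i * (x_i - p)
    assert np.allclose(s_p, s0)  # independent of p when sum(lam) == 0
```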

  • How would this work if, instead of assuming the points $x_1, \ldots, x_n$ to be affinely independent, we assumed the family of vectors $(x_1 - x_0, \ldots, x_n - x_0)$, where $x_0$ is an arbitrary vector, to be linearly independent instead? (2015-10-20)

($\Rightarrow$): We prove the contrapositive. Suppose $(\bar{x}_1,\ldots,\bar{x}_n)$ is linearly dependent, so we have $\left(\sum\limits_{i=1}^n c_i,\sum\limits_{i=1}^n c_i\alpha_{1i},\ldots,\sum\limits_{i=1}^n c_i\alpha_{di}\right) = \sum\limits_{i=1}^nc_i\bar{x}_i = 0$ for some coefficients $c_i\in\mathbb{R}$, not all zero. Thus $\sum\limits_{i=1}^nc_ix_i = \left(\sum\limits_{i=1}^n c_i\alpha_{1i},\ldots,\sum\limits_{i=1}^n c_i\alpha_{di}\right) = 0\text{ and }\sum\limits_{i=1}^n c_i = 0$, so $(x_1,\ldots,x_n)$ is affinely dependent. Hence if $(x_1,\ldots,x_n)$ is affinely independent, $(\bar{x}_1,\ldots,\bar{x}_n)$ must be linearly independent.

($\Leftarrow$): Again by contraposition. Suppose $(x_1,\ldots,x_n)$ is affinely dependent, so we have $\sum\limits_{i=1}^nc_ix_i = 0\text{ and }\sum\limits_{i=1}^nc_i=0$ for some coefficients $c_i\in\mathbb{R}$, not all zero. Then $\sum\limits_{i=1}^nc_i\bar{x}_i = \left(\sum\limits_{i=1}^n c_i,\sum\limits_{i=1}^n c_i\alpha_{1i},\ldots,\sum\limits_{i=1}^n c_i\alpha_{di}\right) = (0,0,\ldots,0) = 0$, so $(\bar{x}_1,\ldots,\bar{x}_n)$ is linearly dependent. Hence if $(\bar{x}_1,\ldots,\bar{x}_n)$ is linearly independent, $(x_1,\ldots,x_n)$ must be affinely independent.
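The contrapositive can be illustrated on a concrete affinely dependent family (collinear points, an arbitrary example), again assuming numpy: a dependence with coefficients summing to zero passes directly to the lifted vectors.

```python
import numpy as np

# Three collinear points in R^2: affinely dependent (illustrative example).
x = np.array([[0.0, 0.0],
              [1.0, 1.0],
              [2.0, 2.0]])

# Coefficients c = (1, -2, 1): sum(c) == 0 and c1*x1 + c2*x2 + c3*x3 == 0.
c = np.array([1.0, -2.0, 1.0])
assert np.allclose(c @ x, 0) and np.isclose(c.sum(), 0)

# The lifted vectors (1, x_i) inherit the same linear dependence:
x_bar = np.hstack([np.ones((3, 1)), x])
assert np.allclose(c @ x_bar, 0)
print(np.linalg.matrix_rank(x_bar))  # 2 < 3: the lifted vectors are linearly dependent
```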