
I'm reading de Boor's (wonderful) book "A Practical Guide to Splines", revised edition. I'm doing some of the exercises at the end of each chapter just to fix the main ideas before moving on...

Let's go to the question: Exercise 4 of chapter VII, point (c) claims that the linear independence of $n$ functions $\varphi_i$ belonging to a finite-dimensional linear space (quoted) "is invariably shown by exhibiting a corresponding sequence $\lambda_1,\ldots,\lambda_n$ of linear functionals for which the matrix $(\lambda_i\varphi_j:i,j=1,\ldots,n)$ is obviously invertible, that is, is triangular with non-zero diagonal elements."

Why is this so in general? I never encountered such an argument in my one elementary linear algebra course...

  • @QiaochuYuan You got it! By contraposition: if it is possible to find $\{\lambda_i\}_{i=1,\ldots,n}$ such that $\det(\lambda_i\varphi_j)\neq0$, then every non-trivial linear combination satisfies $\sum_j c_j\varphi_j\neq 0$, i.e. the $\varphi_i$ are linearly independent. If you move your comment into an answer I will accept it. Thank you. – 2012-06-10

1 Answer


Moving the comment by Qiaochu Yuan to its proper place:

Suppose $\sum_j c_j\phi_j=0$ is a nontrivial linear dependence. Then $\sum_j c_j\lambda_i\phi_j=0$ for all $i$, so the matrix whose entries are $\lambda_i\phi_j$ (or possibly its transpose) has the nontrivial vector $(c_1,\dots,c_n)$ in its nullspace; in particular, it cannot be invertible.
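To make this concrete, here is a small numerical sketch (my own illustration, not from the book): the functions $\phi_1(x)=1$, $\phi_2(x)=x$, $\phi_3(x)=1+x$ are obviously dependent, and with point-evaluation functionals $\lambda_i f = f(t_i)$ at $t=0,1,2$ the coefficient vector of the dependence lands in the nullspace of $(\lambda_i\phi_j)$, so the matrix is singular.

```python
import numpy as np

# Hypothetical illustration (not from the book): phi_1(x) = 1, phi_2(x) = x,
# phi_3(x) = 1 + x satisfy the dependence phi_1 + phi_2 - phi_3 = 0.
phis = [lambda x: 1.0, lambda x: x, lambda x: 1.0 + x]

# Point-evaluation functionals: lambda_i f = f(t_i) at t = 0, 1, 2.
ts = [0.0, 1.0, 2.0]
M = np.array([[phi(t) for phi in phis] for t in ts])  # M[i, j] = lambda_i phi_j

c = np.array([1.0, 1.0, -1.0])  # coefficients of the dependence
print(M @ c)                    # [0. 0. 0.]: c lies in the nullspace of M
print(np.linalg.det(M))         # 0 (up to rounding), so M is not invertible
```

The same happens for any choice of functionals $\lambda_i$: a dependence among the $\phi_j$ forces a nullspace vector, which is exactly the contrapositive used in the question.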

And adding another version:

Linear maps do not increase the dimension of vector spaces. The map $\Lambda \varphi = (\lambda_i \varphi )_{i=1,\dots, n}$ is a linear map into $\mathbb R^n$. We want to show that the linear span of $(\varphi_j)_{j=1,\dots,n}$ has dimension $n$. Since $\Lambda$ cannot increase dimension, it suffices to show that the image of this span under $\Lambda$ is all of $\mathbb R^n$. That image is spanned by the vectors $\Lambda\varphi_j$, i.e. by the columns of the matrix $(\lambda_i\varphi_j)$, so invertibility of that matrix is exactly what is needed.
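And to illustrate the triangular-matrix technique the exercise describes, a sketch with monomials (again my own example, not de Boor's): take $\varphi_j(x)=x^{j-1}$ and $\lambda_i f = f^{(i-1)}(0)$. Then $\lambda_i\varphi_j=0$ for $i\neq j$ and $\lambda_i\varphi_i=(i-1)!\neq 0$, so the matrix $(\lambda_i\varphi_j)$ is diagonal (in particular triangular) with non-zero diagonal entries, hence invertible, and the monomials are linearly independent.

```python
import sympy as sp

x = sp.symbols('x')
n = 4

# Hypothetical example: phi_j(x) = x^(j-1) and lambda_i f = f^{(i-1)}(0).
# With 0-based indices i, j below, entry (i, j) is the i-th derivative
# of x^j evaluated at 0.
M = sp.Matrix(n, n, lambda i, j: sp.diff(x**j, x, i).subs(x, 0))

print(M)        # diagonal matrix with 0!, 1!, 2!, 3! on the diagonal
print(M.det())  # 12 = 0! * 1! * 2! * 3!, non-zero: the monomials are independent
```

The point of choosing derivative-at-a-point functionals is that each one "sees" only one monomial, which is what makes the matrix obviously invertible without computing a determinant by hand.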