I am new here, so I hope this question hasn't been asked before. And I hope that it is not out of place to ask this kind of question.
My question is basically the following. Suppose we are given a collection of linear maps $\phi_{1}, \ldots, \phi_{m}$ from $k^n$ to $k$, where $k$ is a field. Suppose moreover that $\phi:k^n\rightarrow{k}$ satisfies $\phi(v)=0$ for all $v$ in the intersection of the kernels of the $\phi_{i}$'s. I am supposed to show that $\phi$ then has to be in $Span\{\phi_{1}, \ldots, \phi_{m}\}$. Translated to matrix language, if the $\phi_{i}$'s are the rows of a matrix with entries in $k$, the result I'm trying to prove says that if a vector is "orthogonal" to the nullspace of the matrix, then it is in the row space. If $k=\mathbb{R}$ this is known, since then multiplying a row vector with a column vector is the same as taking the standard inner product on $\mathbb{R}^n$. But in this more general situation I assume I have to argue differently.
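Not a proof, of course, but here is a quick numerical sanity check of the matrix formulation over $\mathbb{R}$ using numpy (the matrix $A$, the combination defining $\phi$, and the coefficients are all made up for illustration):

```python
import numpy as np

# Hypothetical example: the rows of A are the functionals phi_1, phi_2 on R^4.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 3.]])

# phi is a linear combination of the rows, so it should kill the nullspace.
phi = 2 * A[0] - 5 * A[1]

# Orthonormal basis of the nullspace of A via SVD.
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[2:]          # rows spanning ker(A), since rank(A) = 2

# phi vanishes on ker(A) ...
assert np.allclose(null_basis @ phi, 0)

# ... and conversely it lies in the row space: solve c[0]*A[0] + c[1]*A[1] = phi.
c, *_ = np.linalg.lstsq(A.T, phi, rcond=None)
assert np.allclose(c, [2., -5.])
```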
In the case $m=1$ I have been able to prove it, as follows:
Let $V=ker(\phi_{1})$. If $V=k^n$ then $\phi_{1}=0$ so the result is trivial, since then $\phi=0$ also. So we can find an element $v_{1}\notin{V}$, i.e. one satisfying $\phi_{1}(v_{1})\neq{0}$. Choose $a_{1}\in{k}$ such that $\phi(v_{1})=a_{1}\phi_{1}(v_{1})$. Now assume $\phi$ is not generated by $\phi_{1}$, so that we can find $w$ with $\phi(w)\neq{a_{1}\phi_{1}(w)}$; necessarily $w\notin{V}$, since both sides vanish on $V$. Choose $\gamma_{1}\in{k}, \gamma_{1}\neq{0}$ with $\gamma_{1}\phi_{1}(w)=\phi_{1}(v_{1})$. Let $x=\gamma_{1}w-v_{1}$. Then $\phi_{1}(x)=0$, so $x\in{V}$. But $\phi(x)=\gamma_{1}\phi(w)-\phi(v_{1})=\gamma_{1}\phi(w)-a_{1}\phi_{1}(v_{1})=\gamma_{1}\phi(w)-a_{1}\gamma_{1}\phi_{1}(w)=\gamma_{1}(\phi(w)-a_{1}\phi_{1}(w))\neq{0}$, a contradiction.
Now I have been really struggling to generalize this, even in the case $m=2$. I started out with the following:
Choose $v_{1}, \ldots, v_{m}$ such that $\phi_{i}(v_{i})\neq{0}$, $i=1,\ldots, m$. To see that this is ok, note that if for some $i$ we had $\phi_{i}(v)=0$ for all $v$, then $\phi_{i}=0$ and we could throw it out and assume we had $m-1$ functions. Now IF we could also choose the $v_{i}$ such that $\phi_{j}(v_{i})=0$ for $j\neq{i}$, then an argument similar to the one above can be used. However, it is certainly not always possible to do this. For example, if $m=2$ then we could have $ker(\phi_{1})\subset{ker(\phi_{2})}$, etc. But maybe the proof could be divided into cases, depending on which of the $\phi_{j}(v_{i})$ are zero. For $m=2$ I have been able to do it provided not both $\phi_{1}(v_{2})$ and $\phi_{2}(v_{1})$ are different from zero. Even if I could show it in the last case too, I am at a loss how to generalize it to general $m$. The main obstacle is constructing an element $x\in{k^n}$ at which all the $\phi_{i}$ vanish, while at the same time $\phi(x)\neq{0}$.
Is this the right way to go, or is there some much simpler way of proof which I am missing? (I assume the result is true; at least I've been told so).
Any useful help would be greatly appreciated!
UPDATE: I think I have succeeded in proving it using Marc's method:
We claim that we may take the $\phi_{i}$ to be linearly independent. For if, say, $\phi_{j}$ could be written as a linear combination of the others, then $V=\bigcap_{i=1}^{m} ker(\phi_{i})=\bigcap_{i=1, i\neq{j}}^{m} ker(\phi_{i})$, and so we could throw out $\phi_{j}$ and reduce to the $m-1$ case (by induction).
Assuming linear independence, we can choose elements $v_{1}, \ldots, v_{m}\in{k^n}$ satisfying $\phi_{i}(v_{i})\neq{0}$ and $\phi_{j}(v_{i})=0$ for $j\neq{i}$, $i=1, \ldots, m$. First note that for each $i$ we can find some $v$ with $\phi_{i}(v)\neq{0}$, for if not then $\phi_{i}$ would be the zero map, contradicting the linear independence of the $\phi_{i}$. Fix an index $j$ and suppose we could not choose the element $v_{j}$ as described above. Then every $v$ with $\phi_{i}(v)=0$ for all $i\neq{j}$ would also satisfy $\phi_{j}(v)=0$; that is, $\bigcap_{i\neq{j}} ker(\phi_{i})\subseteq ker(\phi_{j})$. By the case of $m-1$ functionals (available by induction), $\phi_{j}$ would then lie in $Span\{\phi_{i} : i\neq{j}\}$, contradicting linear independence. Hence the desired $v_{j}$ exists.
We claim that $k^{n}=V+Span\{v_{1}, \ldots, v_{m}\}$. Suppose $v\notin{V}$. Then $\phi_{i}(v)\neq{0}$ for at least one $i$; let $i$ be the least such index, so that $\phi_{k}(v)=0$ for $k<i$. Choose an element $a\in{k}$ satisfying $\phi_{i}(v)=a\phi_{i}(v_{i})$. Then by construction $v-av_{i}$ is in the kernel of $\phi_{i}$. But if $k<i$ then we also have $\phi_{k}(v-av_{i})=\phi_{k}(v)-a\phi_{k}(v_{i})=0-0=0$. Hence $v-av_{i}\in{\bigcap_{j=1}^{i} ker(\phi_{j})}$. If it is also in the kernel of the remaining $\phi_{j}$'s, then we are done. If not, let $l$ be the least index such that $\phi_{l}(v-av_{i})\neq{0}$ (note $l>i$). Applying the same procedure once more, we can choose $b\in{k}$ with $\phi_{l}(v-av_{i})=b\phi_{l}(v_{l})$. Then the element $v-av_{i}-bv_{l}$ will be in $\bigcap_{j=1}^{l} ker(\phi_{j})$. We continue like this, and the process must terminate since $m$ is finite; in the end, $v$ minus a linear combination of the $v_{i}$'s lies in $V$.
For the final part of the proof, we construct a linear combination of the $\phi_{i}$'s which takes the same values as $\phi$ on all the $v_{i}$. Since both $\phi$ and this combination vanish on $V$, and $k^{n}=V+Span\{v_{1}, \ldots, v_{m}\}$, they must then be equal. First choose $a_{m}\in{k}$ such that $a_{m}\phi_{m}(v_{m})=\phi(v_{m})$ (possible since $\phi_{m}(v_{m})\neq{0}$). Next choose $a_{m-1}$ such that $a_{m-1}\phi_{m-1}(v_{m-1})=\phi(v_{m-1})-a_{m}\phi_{m}(v_{m-1})$. In general, choose $a_{k}$ such that: \begin{align*} a_{k}\phi_{k}(v_{k})=\phi(v_{k})-a_{k+1}\phi_{k+1}(v_{k})-a_{k+2}\phi_{k+2}(v_{k})-\ldots-a_{m}\phi_{m}(v_{k}) \end{align*} One then checks, from the bottom up, that this choice of $a_{i}$'s does the job.
END PROOF
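As a sanity check on the back-substitution in the last step, here is a small numerical sketch with numpy. The data are hypothetical: I take $v_{i}=e_{i}$ and a lower-triangular matrix of values $\phi_{j}(v_{i})$, which mimics the property $\phi_{j}(v_{i})=0$ for $j<i$ that the chosen $v_{i}$ satisfy (the proof secures the stronger $\phi_{j}(v_{i})=0$ for all $j\neq{i}$, but only $j<i$ is used by the recursion):

```python
import numpy as np

# Hypothetical data: row j of Phi is phi_{j+1}, and v_i = e_i, so that
# phi_j(v_i) = Phi[j, i].  Phi is lower triangular (phi_j(v_i) = 0 for j < i)
# with nonzero diagonal (phi_k(v_k) != 0), as required by the recursion.
Phi = np.array([[2., 0., 0.],
                [1., 3., 0.],
                [4., 1., 5.]])
phi = np.array([4., 3., 12.])   # the functional to be expressed

# Back-substitution from a_m down to a_1, exactly as in the displayed recursion:
# a_k phi_k(v_k) = phi(v_k) - sum_{j > k} a_j phi_j(v_k).
m = 3
a = np.zeros(m)
for k in range(m - 1, -1, -1):
    tail = sum(a[j] * Phi[j, k] for j in range(k + 1, m))
    a[k] = (phi[k] - tail) / Phi[k, k]

# The combination sum_k a_k phi_k agrees with phi on each v_i = e_i, and since
# the v_i span the whole space in this toy example, it equals phi.
assert np.allclose(a @ Phi, phi)
```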
Does this look ok? Anyway, thanks for all the useful hints and comments. Maybe I will try to look at the more general situation too (hinted at by Blah) later.