Edit: Consider this solved. I used the wrong definition of linear function: I originally used the affine (polynomial) definition, $f(x) = a\cdot x + b$, rather than the linear map definition, $f(\alpha x) = \alpha f(x)$ and $f(x + y) = f(x) + f(y)$.
This claim comes from *Convex Optimization* by Boyd and Vandenberghe, where it is used to recharacterize a convex optimization problem that has only equality constraints (Section 4.2).
Claim: If a linear function is nonnegative on a subspace, then it must be identically zero on that subspace.
Context: This seems to be a general statement, but I've had surprising difficulty proving it in general. The specific problem is:
Minimize $f_0(x)$ subject to $Ax = b$ where $x \in \mathbb{R}^n$ and $A$ is some $n \times n$ matrix
The claim is applied to the function $v \mapsto \nabla f_0(x)^T v = \nabla f_0(x) \cdot v$, which is a linear function that is nonnegative over $v \in \operatorname{Null}(A) = N(A)$.
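As a numeric illustration of this setup (not from the book; the matrix, seed, and tolerances below are my own choices), the key point is that $N(A)$ is a subspace, so $v \in N(A)$ implies $-v \in N(A)$: a linear functional $g(v) = a \cdot v$ that is nonnegative on $N(A)$ must therefore vanish on it, since otherwise $g(-v) = -g(v) < 0$ for some $v$.

```python
import numpy as np

# Sketch: if a is NOT orthogonal to N(A), then the linear functional
# g(v) = a . v takes both signs on N(A), so it cannot be nonnegative there.

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))   # hypothetical 2x4 matrix, so N(A) is nontrivial

# Orthonormal basis of N(A) via SVD: the rows of Vt beyond rank(A) span N(A).
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
N_basis = Vt[rank:].T             # columns span N(A)

a = rng.standard_normal(4)        # a plays the role of the gradient at x

# Project a onto N(A); almost surely this projection is nonzero here.
v = N_basis @ (N_basis.T @ a)
assert np.allclose(A @ v, 0, atol=1e-8)   # v really lies in N(A)

# g(v) > 0 but g(-v) = -g(v) < 0, so g is not nonnegative on N(A)
# unless the projection (and hence g on N(A)) is zero.
assert a @ v > 0 and a @ (-v) < 0
```

So requiring $\nabla f_0(x) \cdot v \geq 0$ for all $v \in N(A)$ forces $\nabla f_0(x)$ to be orthogonal to $N(A)$, i.e., the functional is zero on the subspace.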
Work so far: Let $f(x) = a \cdot x + b$ be a nonnegative linear function on some subspace $S$. Since $0 \in S$, we get $b \geq 0$. Suppose there exists $x_0 \in S$ such that $a \cdot x_0 + b > 0$. Since $S$ is a subspace, $-x_0 \in S$, so $-a \cdot x_0 + b \geq 0 \rightarrow b \geq a \cdot x_0 \rightarrow 2b \geq a \cdot x_0 + b > 0$.
By the last relation, if $b = 0$ we reach a contradiction, so $b > 0$. At this point, I get the gut feeling I'm doing something wrong.
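With the linear-map definition mentioned in the edit, the proof is a one-line homogeneity argument; here is a sketch:

```latex
Let $f$ be linear and suppose $f(v) \geq 0$ for all $v$ in a subspace $S$.
Fix $v \in S$. Since $S$ is a subspace, $-v \in S$, so
\[
  f(-v) = -f(v) \geq 0 \quad\Longrightarrow\quad f(v) \leq 0,
\]
and together with $f(v) \geq 0$ this gives $f(v) = 0$ for every $v \in S$.
```

Note the affine version of the claim is false: for example, $f(x) = x + 1$ is nonnegative on the subspace $S = \{0\} \subset \mathbb{R}$ but is not zero on it, which is consistent with the trouble encountered above.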