3

Ever since I solved my first equation for $x$, it was drilled firmly into my head that, generally, to "solve" for $n$ variables $\{x_1, \ldots, x_n\}$ you need to specify $n$ functions $\{f_i : \mathbb{R}^n \to \mathbb{R}\}$ that vanish at your solutions. This was, generally speaking, a necessary and sufficient condition for having a finite (nonzero) number of solutions.

Some years down the road, I learned linear algebra, and the case of systems of linear equations was clear: "well-posed" solve-for-$x$ problems were exactly those with full rank. For rank-deficient matrices, there were either zero or infinitely many solutions, and in the latter case the solution space could still be quantified by its dimension.
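For concreteness, here is the linear picture I have in mind (a small sketch; the matrices are just examples I made up):

```python
import numpy as np

# Full-rank 2x2 system: the solution is an isolated point (dimension 0).
A_full = np.array([[1.0, 2.0],
                   [3.0, 4.0]])

# Rank-deficient system: second row is twice the first, so (when the
# system is consistent) the solutions form a line (dimension 1).
A_def = np.array([[1.0, 2.0],
                  [2.0, 4.0]])

n = A_full.shape[1]
dim_full = n - np.linalg.matrix_rank(A_full)  # 2 - 2 = 0
dim_def = n - np.linalg.matrix_rank(A_def)    # 2 - 1 = 1
print(dim_full, dim_def)  # 0 1
```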

Is there a way to formalize this idea a little more rigorously for continuous functions in general? I sort of understand the idea of dimension in algebraic geometry: my idea is that the "effectiveness" of a system of equations is measured by the dimension of its variety, since each time you mod the coordinate ring out by one equation, this is like "substituting" one equation into another, as we all did when we were kids. Does this algebraic dimension agree with the linear-algebraic intuition for dimension (the dimension of the tangent space)? Is there a concrete (possibly differential) way to compute this dimension efficiently by hand?

  • 0
    This is false. Take the equation $\sum_{i=1}^n (x_i -c_i)^2=0$. This has $n$ unknowns and only $1$ equation, but the solution is completely determined: $x_i=c_i$ for every $i$. And the function $x_1,\ldots,x_n\mapsto \sum (x_i-c_i)^2$ is perfectly continuous. (2017-01-24)
  • 3
    If you work over $\mathbf R$. Over $\mathbf C$, it's quite different. (2017-01-24)
  • 0
    Usually, each equation reduces the dimension of the set of possible solutions by $1$. Without any equations, you have all of $\Bbb R^n$ at your disposal, which is $n$-dimensional. You therefore need to reduce the dimension by $n$ in order to get discrete points, and this is done by using $n$ equations. This argument can be made more rigorous, but it is in these terms that I think about it. (2017-01-25)
  • 2
    Level sets of continuous functions can be quite wild. If you restrict yourself to differentiable ones, then you can make some headway by requiring certain conditions: look into topics like the regular value theorem and transverse intersections. (2017-01-25)

2 Answers

4

One way to formalize this is by appealing to Sard's theorem: if $F: R^n\to R^m$ is a $C^\infty$ function, then for "generic" $b\in R^m$ the solution set of the equation $F(x)=b$ is a smooth manifold of dimension $n-m$ (possibly empty!). In particular, if $m=n$, then for "generic" $b$ the solution set is a discrete subset of $R^n$, and if $m>n$, then for "generic" $b$ the solution set is necessarily empty. Here "generic" means "the complement of a measure-zero subset". Instead of taking "generic" $b$ one can consider "generic" $F$, but defining this rigorously requires more (though not much more) work.
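At a regular point, this dimension count $n - m$ is just $n$ minus the rank of the Jacobian, which you can check numerically. A minimal sketch (the function `numerical_jacobian` and the example map are my own, chosen so that $F=0$ cuts out the unit circle in the plane $z=0$ inside $R^3$):

```python
import numpy as np

def numerical_jacobian(F, x, eps=1e-6):
    """Central-difference Jacobian of F: R^n -> R^m at the point x."""
    x = np.asarray(x, dtype=float)
    m, n = len(F(x)), len(x)
    J = np.zeros((m, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = eps
        J[:, j] = (np.asarray(F(x + step)) - np.asarray(F(x - step))) / (2 * eps)
    return J

# F(x, y, z) = (x^2 + y^2 + z^2 - 1, z): the solution set of F = 0 is the
# unit circle in the plane z = 0, a manifold of dimension n - m = 3 - 2 = 1.
F = lambda p: np.array([p[0]**2 + p[1]**2 + p[2]**2 - 1, p[2]])

p = np.array([1.0, 0.0, 0.0])        # a point on the solution set
J = numerical_jacobian(F, p)
rank = np.linalg.matrix_rank(J)      # full rank 2: p is a regular point
local_dim = p.size - rank            # 3 - 2 = 1, the circle's dimension
print(local_dim)  # 1
```

This is exactly the "dimension of the tangent space" intuition from the question: the kernel of the Jacobian at a regular point is the tangent space of the level set.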

One can improve the regularity hypothesis here (reduce the required order of smoothness, see the link), but there are no such theorems (as far as I know) for functions which are merely continuous.

1
  • The solution set of one linear equation $f(x)=0$ in a vector space is a hyperplane.

  • The solution set of one nonlinear equation $f(x)=0$ in a Euclidean space is a hypersurface.

In both cases, the dimension of the solution set is one less than the dimension of the ambient space, provided the equation is not the zero equation in the linear case and that $0$ is a regular value of $f$ in the nonlinear case.

From this, it follows that every time you add an equation, the dimension of the solution set goes down by 1, provided each equation is regular with respect to the previous solution set. That translates into independence of the equations in the linear case and transversality in the nonlinear case.

So, after $n$ equations the solution set has dimension $0$ and so is a single point in the linear case and a set of isolated points in the nonlinear case.