I have asked a lot of "soft questions", but this is probably the "softest" yet, so please bear with me.
Example: Consider the quadratic equation $ax^2 + bx + c = 0$. When we "consider a specific equation", clearly $a,b,c$ are parameters rather than variables ($x$ is of course a variable). But when we "consider the whole family of quadratic equations", then arguably $a,b,c$ are variables too.
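One way to make this shift concrete is with a toy Python sketch (the function names here are just illustrative): binding $a,b,c$ in an outer scope makes them parameters of the inner function of $x$, while abstracting over them again turns them back into variables.

```python
def quadratic(a, b, c):
    """Fix a, b, c as parameters; return the resulting function of x alone."""
    def p(x):
        return a * x**2 + b * x + c
    return p

# "A specific equation": a, b, c are fixed, only x varies.
f = quadratic(1, -3, 2)          # x^2 - 3x + 2
roots = [x for x in (0, 1, 2, 3) if f(x) == 0]

# "The whole family": the same letters now vary; quadratic itself
# is a function of (a, b, c).
family = [quadratic(a, 0, -1) for a in (1, 2, 4)]
```

So "parameter vs. variable" here is just a question of which binder a symbol belongs to, not an intrinsic property of the symbol.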
Is this interpretation correct? And if so, how can the intuitive notions of "considering a specific equation" and "considering the whole family of all quadratic equations" be made rigorous?
Question: In other words, how can one make rigorous the notion of "context" in which objects are being considered, in order to be able to distinguish between "parameters" and "variables"?
Or perhaps more specifically, how can one make rigorous notions of "levels of generalization" or "levels of abstraction" or of "contextual hierarchies"? Is it only possible to do so in a relative sense, or is it possible to make absolute measures of "abstractness"?
References to books, or the name of the field that discusses or answers questions like these, would not only suffice as an answer but might even be preferable, since I find it hard to imagine answering this question rigorously without a lot of groundwork setting up and defining all of the formal logical notions.
Attempt: In this answer to a related question, the argument/assertion is made that "variables" is short-hand for "elements of the set which is the domain of a given/fixed relation". So seemingly a rigorous notion of "context" corresponds to specifying a relation.
However, this seems somewhat circular, since relations are themselves elements of sets (of relations), which can in turn be made the domains of relations (of relations), as the quadratic-equation example above shows.
(Or consider "collections" which are "sets of sets" or "operators" that are "functions of functions".)
In any case, to identify which objects are elements of the domain of the "given/fixed" relation, we need to be able to "give, fix, specify, choose" a relation (which would otherwise be "arbitrary").
This seems like it is just pushing the problem up a level, from relations to relations of relations. How can that be made precise?
This other answer to the aforementioned question seems to assert something similar, saying that "They [variables, parameters] are all the same sort of thing on different levels of abstraction/generalization." But how can we make sense of "context/level of abstraction"?
Is it possible to do so in any absolute sense? Or only in a relative sense?
(E.g. relations of relations of widgets should intuitively be in some sense more abstract than relations of widgets, but perhaps it is not possible to assign an absolute "value" to how abstract either one is. This would be analogous to how we can prove that the cardinality of the real numbers is strictly greater than that of the natural numbers, yet cannot prove, without additional assumptions, whether there is any cardinality strictly between the two.)
Background:
This seems like a problem that can be blithely ignored in mathematics. For example, all the time in proofs one will "fix" or "choose" an "arbitrary" object, without needing to specify an explicit relation whose domain contains that object, and without specifying the relation of relations whose domain contains that relation, and so on ad infinitum.
However, at least when programming, it seems like the distinction would be more important. Although I don't actually know or understand type theory, formal languages/grammars, or the Curry-Howard correspondence, I am trying to practice thinking about proofs as if they were computer programs. This seems to mean in particular specifying the data type of every object considered. However, I am at a loss for how to specify what are the variables, and what are the parameters, for example, in the proof of the Inverse Function Theorem. And I am also at a loss to specify the various levels of "arbitrariness" being used throughout the proof.
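To illustrate the "proofs as programs" intuition in the simplest possible terms (a toy Python sketch, not a real formalization; the type names are hypothetical): a theorem of the form "for every point $a$ there is some datum depending on $a$" corresponds to a function taking $a$ as a typed argument.

```python
from typing import Callable

# Hypothetical type names, purely for illustration.
Point = float
Radius = float

# "For every point a there is a radius r(a) with some property"
# corresponds, under this view, to a function of type Point -> Radius.
LocalData = Callable[[Point], Radius]

def toy_local_radius(a: Point) -> Radius:
    """Toy stand-in for 'given a, produce the radius of a good neighborhood'."""
    return 1.0 / (1.0 + abs(a))

# Inside the body, `a` is fixed (a parameter); to the caller, `a` is
# a variable ranging over Point. Same symbol, different binder.
r = toy_local_radius(3.0)
```

Under this reading, "parameter" and "variable" describe the same argument viewed from inside and outside the function body, respectively.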
Edit: Example: Given a point $a \in \mathbb{R}^n$, the proof of the inverse function theorem gives us a neighborhood $U$ of $a$ such that the function $f$ restricted to $U$ is injective, and that $f(U)$ is open. So seemingly $a$, being "given" or "fixed", is a parameter. But then we can show that $f$ restricted to $U$ is even an open mapping (and thus a homeomorphism onto its image) by essentially applying what we have already shown to every other point in $U$ (since by construction every point in $U$ satisfies the hypotheses of the inverse function theorem). So all of a sudden the fixed $a$ in the proof becomes variable, ranging over all points in $U$, a set which was itself defined in terms of $a$. So if I were "programming" the inverse function theorem, would I set the point $a$ as a parameter, since it is fixed at first, or as a variable, because I then wind up recursively applying the earlier portion of the proof at every point of a set? It seems almost "recursively variable".
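This "recursively variable" pattern can be mimicked in a toy Python sketch (hypothetical names, no actual analysis): run the step once at a fixed $a$, then map the same step over the set that the first run produced.

```python
def step(a):
    """Toy stand-in for 'given a point a, return a neighborhood U of a'."""
    return [round(a - 0.1, 2), a, round(a + 0.1, 2)]

# Pass 1: a is a parameter, fixed once.
U = step(1.0)

# Pass 2: the same symbol a now varies over U -- the fixed point of
# the first pass becomes a bound variable of the comprehension.
covers = {a: step(a) for a in U}
```

In programming terms the tension dissolves: `a` is a parameter of `step`, and whether it is "fixed" or "varying" depends only on the call site, so the same routine serves both passes.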
Possibly related: (1)(2)(3)(4)(5)(6)(7)
Note: If possible, please don't mention the related difficulty of conceptually distinguishing between "random variable" and "parameter" in statistics -- I have had enough philosophical debate about the difference between Bayesian and frequentist statistics for a lifetime.