The set of well-formed formulas (wffs) in first-order logic (FOL) is decidable, because it's straightforward to translate the standard recursive syntax rules into a context free grammar, and all context free languages are decidable.
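For concreteness, here is the sort of grammar I have in mind, restricted to a toy signature with a single monadic predicate letter $F$ and only $\neg$, $\rightarrow$, and $\exists$ (the real grammar just adds analogous productions for the remaining connectives, quantifiers, and predicate letters of each arity):

$$
\begin{aligned}
\textit{Var} &\;\to\; x \;\mid\; \textit{Var}\,' \\
\textit{Wff} &\;\to\; F\,\textit{Var} \;\mid\; \neg\,\textit{Wff} \;\mid\; (\textit{Wff} \rightarrow \textit{Wff}) \;\mid\; (\exists\,\textit{Var})\,\textit{Wff}
\end{aligned}
$$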
The set of sentences -- i.e. closed formulas -- of FOL is also decidable, because it's straightforward to decide whether a wff is a sentence.
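For the same toy signature, the decision procedure is just the usual recursion computing the free variables of a wff:

$$
\begin{aligned}
\mathrm{FV}(F v) &= \{v\} \\
\mathrm{FV}(\neg\varphi) &= \mathrm{FV}(\varphi) \\
\mathrm{FV}((\varphi \rightarrow \psi)) &= \mathrm{FV}(\varphi) \cup \mathrm{FV}(\psi) \\
\mathrm{FV}((\exists v)\,\varphi) &= \mathrm{FV}(\varphi) \setminus \{v\}
\end{aligned}
$$

and a wff $\varphi$ is a sentence iff $\mathrm{FV}(\varphi) = \emptyset$.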
But the set of sentences of FOL is not context free: if variables are encoded, say, with prime marks (so the variables are $x$, $x'$, $x''$, etc.), an easy application of the pumping lemma for context free languages to a sentence like $(\exists \mu)(F\mu\rightarrow F\mu)$, for a sufficiently long variable $\mu$, yields a contradiction.
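Spelling that out: let $p$ be the pumping length and take $\mu$ to be $x$ followed by more than $p$ prime marks. The lemma gives a decomposition

$$
s = \alpha\beta\gamma\delta\eta, \qquad |\beta\gamma\delta| \le p, \qquad \beta\delta \neq \varepsilon,
$$

and since each of the three occurrences of $\mu$ in $s$ is longer than $p$, the window $\beta\gamma\delta$ overlaps at most two of them. So pumping to $\alpha\beta^2\gamma\delta^2\eta$ either changes the number of primes in some but not all occurrences of $\mu$ (leaving a free variable) or duplicates a non-prime symbol (breaking well-formedness); either way the result is not a sentence, contradicting the lemma.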
So the best we can hope for, given the Chomsky hierarchy, is that the set of sentences of FOL is context sensitive. Whether it is comes down to whether there is a linear bounded automaton -- a nondeterministic Turing machine whose tape is bounded linearly in the length of the input -- that decides it.
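The correspondence I'm appealing to here is Kuroda's characterization of the context sensitive languages as exactly those accepted by nondeterministic machines in linear space:

$$
\mathsf{CSL} = \mathsf{NSPACE}(n).
$$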
I have two questions, then:
(1) What is the weakest Turing machine (TM), in terms of the space it needs, that decides the set of sentences of FOL? (If the answer is a linear bounded one, then the set of sentences of FOL is context sensitive; otherwise the best we can say is that it is decidable.)
and, more generally:
(2) Is there a hierarchical structure for languages between context sensitive (type 1) and decidable? E.g., are there grammars naturally associated with polynomially bounded TMs, exponentially bounded TMs, etc.? Obviously I don't expect there to be as nice a structure here as elsewhere in the Chomsky hierarchy -- precisely because I can't seem to find a reference that talks about it. But since there is a natural ordering of Turing machines based on the amount of space they need, I'm curious about what this ordering might track on the side of grammars.
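To make the ordering I have in mind explicit, it is something like

$$
\mathsf{CSL} = \mathsf{NSPACE}(n) \subseteq \mathsf{NSPACE}(n^2) \subseteq \cdots \subseteq \mathsf{NSPACE}(2^n) \subseteq \cdots \subseteq \mathsf{R},
$$

where $\mathsf{R}$ is the class of decidable languages, and I'd like to know whether the intermediate levels (or some coarsening of them) come with natural grammar formalisms attached.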