Quantifying over variables that represent predicates of first-order logic is called second-order logic. Quantifying over variables that represent predicates of second-order logic is called third-order logic. Iterating this gives higher-order logic.
Kinds of logic are susceptible to the same sorts of paradoxes that kinds of set theory are susceptible to; e.g., if unary predicates were allowed to have themselves in their domain, you could define $Q(P) :\equiv \neg P(P)$, and then $Q(Q)$ would be a Liar-style paradox (the predicate form of Russell's paradox).
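Spelling the contradiction out: substituting $Q$ itself for $P$ in the definition gives

```latex
Q(Q) \;\equiv\; \neg Q(Q),
```

so $Q(Q)$ is true exactly when it is false.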
The typed formulation of higher-order logic is the main way people deal with this, and it is very reminiscent of how ZFC avoids the paradoxes of naive set theory.
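A minimal sketch in Lean of how the type discipline blocks $Q(P) :\equiv \neg P(P)$; the names `Individual`, `isEven`, and `holdsOfZero` are illustrative choices, not standard library names:

```lean
-- Predicates on individuals and predicates on predicates live at
-- different levels of the type hierarchy.
abbrev Individual := Nat            -- the first-order domain (illustrative)
abbrev Pred := Individual → Prop    -- unary predicates: second-order objects
abbrev PredPred := Pred → Prop      -- predicates on predicates: third-order

-- The paradoxical Q(P) :≡ ¬P(P) is ill-typed and rejected by the checker:
-- `P : Pred` expects an `Individual` argument, so `P P` does not typecheck.
-- def Q : PredPred := fun P => ¬ P P   -- error: type mismatch

-- What the stratified types do allow:
def isEven : Pred := fun n => n % 2 = 0
def holdsOfZero : PredPred := fun P => P (0 : Individual)
example : holdsOfZero isEven := rfl    -- 0 % 2 = 0 holds by computation
```

The stratification plays the same role as ZFC's cumulative hierarchy: nothing is ever in a position to be applied to itself.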
If you set the details up right, second-order logic is nothing more than the first-order theory of first-order logic.
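One standard way to make this precise (a sketch; the exact axioms vary by presentation): use a two-sorted first-order language with a sort $\iota$ of individuals, a sort $\pi$ of unary predicates, a binary application relation $E$ between them, and a comprehension schema asserting that every definable collection of individuals is represented by some predicate:

```latex
E \subseteq \pi \times \iota,
\qquad
\exists p{:}\pi \;\, \forall x{:}\iota \;\bigl( E(p, x) \leftrightarrow \varphi(x) \bigr)
\quad \text{for each formula } \varphi \text{ not mentioning } p.
```

Models of this ordinary first-order theory are precisely Henkin models of second-order logic.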
However, when using higher-order logic with set theory, people often$^1$ like to restrict the kinds of semantics allowed. In particular, if $X$ is a type, and $\mathcal{P}(X)$ is the type of all unary predicates on $X$, then in an interpretation where the type $X$ is mapped to a set $S$, they insist that the type $\mathcal{P}(X)$ is mapped to the power set $\mathcal{P}(S)$.
This is the so-called full semantics for higher-order logic. It does not have the familiar nice logical properties; for example, the analog of Gödel's completeness theorem fails (as do compactness and the Löwenheim–Skolem theorems). For this reason, second-order logic with full semantics tends to be avoided when doing formal logic.
1: By "often" I mean to the point where people find it weird if you don't do so and instead just use what you would normally use for the semantics of a first-order theory. In the case of second-order logic, an intermediate option called Henkin semantics is more commonly used, where the interpretation of $\mathcal{P}(X)$ is only required to be a subset of $\mathcal{P}(S)$ (one containing enough sets to satisfy the comprehension axioms).
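In symbols, writing $\llbracket\cdot\rrbracket$ for the interpretation map and $\llbracket X \rrbracket = S$, the two choices are:

```latex
\text{full semantics:}\quad \llbracket \mathcal{P}(X) \rrbracket = \mathcal{P}(S)
\qquad\qquad
\text{Henkin semantics:}\quad \llbracket \mathcal{P}(X) \rrbracket \subseteq \mathcal{P}(S)
```

Completeness and compactness are recovered under Henkin semantics, at the price that "for all subsets" only quantifies over the subsets the model happens to include.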