The conditional PDF of $Z$ given $[X\gt Y]$ is the unique function $g$ (up to almost-everywhere equality) such that, for every bounded measurable function $u$,
$$\int u(z)g(z)\,\mathrm dz=\mathrm E(u(Z)\mid X\gt Y)\propto\mathrm E(u(X); X\gt Y),$$
where $\propto$ means that a factor independent of $u$ was omitted.

Assume that $X$ and $Y$ are independent. Write $f_X$ for the PDF of $X$ (when this exists) and $G_Y$ for the function defined by $G_Y(y)=\mathrm P(Y\lt y)$ for every $y$ (thus, $G_Y$ is a modified CDF of $Y$ where the inequality sign is strict). Then
$$\mathrm E(u(X); X\gt Y)=\iint u(x)\,[x\gt y]\,f_X(x)\,\mathrm dx\,\mathrm d\mathrm P_Y(y)=\int u(x)G_Y(x)f_X(x)\,\mathrm dx.$$
By identification, $g(z)\propto f_X(z)G_Y(z)$, hence
$$\color{red}{g(z)=c^{-1}f_X(z)G_Y(z)},\qquad c=\int f_X(x)G_Y(x)\,\mathrm dx=\mathrm E(G_Y(X))=\mathrm P(X\gt Y).$$
In particular, $g=f_X$ holds if and only if $G_Y$ is constant on $S_X=\{x\mid f_X(x)\ne0\}$. Two examples: (i) if $S_X$ is the whole real line, this is impossible, since $G_Y(y)\to0$ as $y\to-\infty$ and $G_Y(y)\to1$ as $y\to+\infty$; (ii) if $S_X=(0,+\infty)$, this means that $Y\leqslant0$ with full probability.
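As a quick sanity check (not part of the derivation), here is a minimal Monte Carlo sketch under an assumed toy model: $X$ and $Y$ independent standard normals. It conditions on $[X>Y]$ by rejection and compares the histogram of the retained $X$-values against $c^{-1}\varphi(z)\Phi(z)$, where $f_X=\varphi$, $G_Y=\Phi$ (since $Y$ is continuous, $\mathrm P(Y\lt z)=\Phi(z)$), and $c=\mathrm P(X\gt Y)=1/2$ by symmetry.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000_000

# Independent draws of X and Y (standard normals, an illustrative choice).
x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Condition on the event [X > Y]: keep the X-values of the accepted pairs.
z = x[x > y]

# Empirical density of Z given [X > Y].
hist, edges = np.histogram(z, bins=80, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Predicted density g(z) = c^{-1} f_X(z) G_Y(z): here f_X = phi, and since Y
# is continuous, G_Y(z) = P(Y < z) = Phi(z); c = P(X > Y) = 1/2 by symmetry.
g = norm.pdf(centers) * norm.cdf(centers) / 0.5

print(f"accepted fraction ~ {len(z) / n:.4f}  (theory: c = 0.5)")
print(f"max |empirical - g| ~ {np.max(np.abs(hist - g)):.4f}")
```

With $10^6$ pairs the maximal histogram deviation is at the level of Monte Carlo noise, consistent with the red formula above.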
More generally, if the distribution of $X$ is $\mu$ (not necessarily with a density), the same computation shows that the distribution of $Z$ conditionally on $[X\gt Y]$ is the measure $\nu$ given by
$$\color{blue}{\mathrm d\nu(z)=c^{-1}G_Y(z)\,\mathrm d\mu(z)},\qquad c=\int G_Y(x)\,\mathrm d\mu(x)=\mathrm E(G_Y(X))=\mathrm P(X\gt Y).$$
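The strict inequality in $G_Y$ matters as soon as $Y$ has atoms in the support of $\mu$. A minimal sketch under assumed toy distributions ($X$ uniform on $\{1,2,3\}$, $Y$ uniform on $\{0,1,2\}$, independent): the blue formula gives $\nu(\{1\},\{2\},\{3\})=(1/6,\,2/6,\,3/6)$ with $c=2/3$, which simulation confirms.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Toy atomic distributions (an assumed example): X uniform on {1,2,3},
# Y uniform on {0,1,2}, independent.
x = rng.integers(1, 4, size=n)
y = rng.integers(0, 3, size=n)
z = x[x > y]

support = np.array([1, 2, 3])
mu = np.full(3, 1 / 3)

# G_Y uses a STRICT inequality: G_Y(k) = P(Y < k), not P(Y <= k).
G_Y = np.array([(np.arange(3) < k).mean() for k in support])  # Y ~ U{0,1,2}

# nu(k) = c^{-1} G_Y(k) mu(k), with c = sum_k G_Y(k) mu(k) = P(X > Y).
c = np.sum(G_Y * mu)
nu = G_Y * mu / c

print("theoretical nu:", nu)                          # [1/6, 2/6, 3/6]
print("empirical  nu:", [np.mean(z == k) for k in support])
print("c =", c, " empirical P(X > Y) =", len(z) / n)
```

Replacing `G_Y` with the usual CDF $\mathrm P(Y\leqslant k)$ would wrongly count the ties $\{X=Y\}$ and produce a different (incorrect) $\nu$, which is exactly why the modified CDF is needed.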