Notation: we have iid variables $Y,Y_1,\dots,Y_n$ and we observe $X_i=Y_i-Y$.
In general this is not true. If the distribution has a sharp peak at $0$ and a heavy tail, then the a posteriori distribution of $Y$ given the observations $x_1,\dots,x_n$ will have distinct peaks near $0,-x_1,\dots,-x_n$, rather than a single large peak at, e.g., their empirical average (if $Y$ has a Gaussian distribution) or their median (if $Y$ has a Laplace distribution). So the a posteriori distribution of $Y$ won't be unimodal, in the sense that its density won't be quasiconcave.
One might wonder whether passing to $X_1=Y_1-Y$ (i.e. adding an independent copy $Y_1$ to $-Y$) might "blur" these peaks, but precisely because $Y_1$ itself has a sharp peak at $0$, this does not change anything.
A simple example: for $\delta<\tfrac{1}{2n}$ and small $\varepsilon>0$, let $Y$ be distributed uniformly in $[-\delta/2,\delta/2]$ with probability $1-\varepsilon$, and uniformly in $[-1/2,1/2]$ with probability $\varepsilon$.
The corresponding density is
$$\psi=\frac{1-\varepsilon}{\delta}\,\chi_{[-\delta/2,\delta/2]}+\varepsilon\,\chi_{[-1/2,1/2]}.$$
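If it helps, here is a quick numerical sketch of this density. The concrete values $n=3$, $\delta=0.08$, $\varepsilon=10^{-3}$ are arbitrary choices of mine, only constrained by $\delta<\tfrac{1}{2n}$:

```python
import numpy as np

# Hypothetical parameters: we need delta < 1/(2n), and epsilon "small".
n, delta, eps = 3, 0.08, 1e-3

def psi(y):
    """Mixture density: a tall narrow uniform plus a shallow wide one."""
    narrow = (np.abs(y) <= delta / 2) / delta   # density of U[-delta/2, delta/2]
    wide = (np.abs(y) <= 0.5).astype(float)     # density of U[-1/2, 1/2]
    return (1 - eps) * narrow + eps * wide

# Sanity check: psi integrates to 1 (plain Riemann sum).
ys = np.linspace(-0.6, 0.6, 600_001)
dy = ys[1] - ys[0]
print((psi(ys) * dy).sum())  # ~1.0
```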
Defining $x_0=0$, we have the equalities given by leonbloy in his answer:
$$\psi(y)\,f(x_1,x_2,\dots,x_n\mid y)=\prod_{i=0}^n \psi(x_i+y)$$
$$F(x_1,x_2,\dots,x_n)=\int_{\mathbb R} \psi(y)\,f(x_1,x_2,\dots,x_n\mid y)\,dy$$
So when all the $x_i$ (including $x_0$) are more than $\delta$ apart from each other, at most one of the $n+1$ factors can sit on the narrow peak, and therefore
$$\psi(y)\,f(x_1,x_2,\dots,x_n\mid y)=O(\varepsilon^{n}),\qquad F(x_1,x_2,\dots,x_n) = O(\varepsilon^{n}).$$
But when we relax the condition to allow $x_1=x_k$ (for some $k=0,2,3,\dots,n$), two factors can sit on the narrow peak simultaneously, and
$$\psi(y)\,f(x_1,x_2,\dots,x_n\mid y)=\frac{\varepsilon^{n-1}}{\delta^2} \Big[|x_k+y|\le\delta/2\Big] + O(\varepsilon^{n}),\qquad F(x_1,x_2,\dots,x_n) = \frac{\varepsilon^{n-1}}{\delta} + O(\varepsilon^{n}).$$
So, for small enough $\varepsilon$ the conditional density of $X_1$ has $n$ peaks (near $x_0,x_2,\dots,x_n$) with up to $n-1$ valleys between them (this bound is reached when the peaks are more than $2\delta$ apart), so it cannot be unimodal when $n\ge 2$.
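Here is a rough numerical check of the multi-peak claim, continuing the snippet above (it reuses `np`, `psi`, `delta` and `eps`). The conditioning values $x_2=-0.2$, $x_3=0.2$ are arbitrary, chosen more than $2\delta$ apart:

```python
# Unnormalised conditional density of X_1 given x_2, ..., x_n:
# F(x_1) = integral over y of psi(y) psi(x_1+y) psi(x_2+y) ... psi(x_n+y).
x_cond = [0.0, -0.2, 0.2]        # x_0 = 0 plus arbitrary x_2, x_3

ys = np.linspace(-0.75, 0.75, 30_001)
dy = ys[1] - ys[0]

def F(x1):
    prod = psi(x1 + ys)
    for xi in x_cond:            # the xi = 0 term is the psi(y) factor
        prod = prod * psi(xi + ys)
    return (prod * dy).sum()

x1s = np.linspace(-0.5, 0.5, 1001)
dens = np.array([F(x) for x in x1s])

# Local maxima above half the global maximum: one peak near each of
# x_0 = 0, x_2 and x_3, so the conditional density is not unimodal.
thresh = 0.5 * dens.max()
peaks = [round(x1s[i], 3) for i in range(1, len(x1s) - 1)
         if dens[i] > thresh and dens[i] > dens[i - 1] and dens[i] >= dens[i + 1]]
print(peaks)  # ~ [-0.2, 0.0, 0.2]
```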
This counterexample can also be made smooth, and made to have a unique mode, by convolving $\psi$ with a normal distribution of small variance; the resulting distribution of $Y$ is then as well behaved as we could hope for: symmetric, smooth and unimodal.
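For illustration, that smoothing step can be sketched numerically as well, continuing the first snippet; the kernel width $\sigma=\delta/10$ is an arbitrary choice:

```python
# Convolve psi with a narrow Gaussian kernel; the result should be a
# symmetric, smooth density with a unique mode at 0.
sigma = delta / 10
ys = np.linspace(-0.7, 0.7, 14_001)
dy = ys[1] - ys[0]

ks = np.linspace(-5 * sigma, 5 * sigma, 801)  # kernel grid, same spacing as ys
kernel = np.exp(-ks**2 / (2 * sigma**2))
kernel /= kernel.sum()                        # normalise: mass is preserved

psi_smooth = np.convolve(psi(ys), kernel, mode="same")

print((psi_smooth * dy).sum())                # ~1.0: still a density
right_half = psi_smooth[ys >= 0]
print(np.all(np.diff(right_half) <= 1e-9))    # True: nonincreasing in |y|
```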
However, a weaker form of your question is true: if $Y$ has a log-concave distribution, then the a posteriori distribution of $Y$ is log-concave (since a product of log-concave functions is log-concave), and so is the conditional distribution of $X_1$ (the joint density is log-concave in $(x_1,y)$, and integrating a multivariate log-concave function along an axis yields a log-concave function; see e.g. Theorem 1.8.3 of this book). So, if I'm not mistaken, your statement is true when "unimodal" is replaced with "log-concave". This explains why the Gaussian and Laplace examples made sense!
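As a sanity check of the log-concave case, here is a standalone numerical experiment with a standard Gaussian $\psi$ (the conditioning values are again arbitrary): the log of the conditional density of $X_1$ should have nonpositive second differences.

```python
import numpy as np

def phi(y):
    return np.exp(-0.5 * y * y) / np.sqrt(2 * np.pi)  # standard normal density

ys = np.linspace(-6.0, 6.0, 4001)
dy = ys[1] - ys[0]
x_cond = [0.0, -1.0, 2.0]            # x_0 = 0 plus arbitrary x_2, x_3

def F_gauss(x1):
    prod = phi(x1 + ys)
    for xi in x_cond:
        prod = prod * phi(xi + ys)
    return (prod * dy).sum()

x1s = np.linspace(-4.0, 4.0, 801)
logF = np.log([F_gauss(x) for x in x1s])
second_diff = logF[2:] - 2 * logF[1:-1] + logF[:-2]
print(second_diff.max() <= 1e-9)     # True: the log-density is concave
```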
Edit: Henry was wondering about the large-$n$ limit, for a fixed distribution. It is an interesting question, and the answer is yes for a larger class of distributions, but I don't think it holds in general. For example, if $Y$ is absolutely continuous with a density $\psi$ that diverges at $0$ in such a way that $\psi(y)^2$ is not integrable, then the conditional density of $X_1$ will diverge at each of $x_2,\dots,x_n$, so the conditional distribution is never unimodal unless $x_2=\dots=x_n$. A formal statement of the large-$n$ limit problem could be: "Does the probability that $x_2,\dots,x_n$ satisfy the question converge to $1$ as $n\to\infty$?" (according to the distribution of $(X_2,\dots,X_n)$)
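To illustrate the diverging-density example, here is a rough standalone sketch with one concrete choice, $\psi(y)=\frac{1}{4\sqrt{|y|}}$ on $[-1,1]$ (a valid density whose square is not integrable near $0$) and a single arbitrary observation $x_2=0.6$; the conditional density of $X_1$ blows up, roughly like $\log(1/h)$, as $x_1\to x_2$:

```python
import numpy as np

def psi_div(y):
    # Density proportional to |y|^(-1/2) on [-1, 1]; its square is NOT integrable.
    a = np.abs(y)
    return np.where((a <= 1) & (a > 0), 0.25 / np.sqrt(a + (a == 0)), 0.0)

# Midpoint grid: avoids evaluating exactly at the singular points.
edges = np.linspace(-1.5, 1.5, 1_500_001)
ys = 0.5 * (edges[:-1] + edges[1:])
dy = edges[1] - edges[0]

x_cond = [0.0, 0.6]                  # x_0 = 0 and one arbitrary x_2 (so n = 2)

def F_div(x1):
    prod = psi_div(x1 + ys)
    for xi in x_cond:
        prod = prod * psi_div(xi + ys)
    return (prod * dy).sum()

for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(h, F_div(0.6 + h))         # the values keep growing as h -> 0
```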