There's a subtle point here, which bothered me the first time I saw this problem.
Henry's answer has the essential idea, which is to use symmetry. User Did's comment points out that the symmetry comes from the fact that $(\xi, \eta)$ and $(\eta, \xi)$ are identically distributed. But, starting straight from the definition of conditional expectation, it isn't obvious that this symmetry of the joint distribution is enough to give the result. I ended up having to prove the following lemma:
Lemma. Let $X,Y$ be random variables with $X$ integrable. There is a Borel-measurable function $f$ such that $E[X\mid Y] = f(Y)$ a.s. Moreover, if $(X', Y')$ is identically distributed to $(X,Y)$, then $E[X' \mid Y'] = f(Y')$ a.s., for the same function $f$.
Proof. The existence of $f$ is a consequence of the Doob-Dynkin Lemma. For the second part, we use the definition of conditional expectation. $f(Y')$ is clearly $\sigma(Y')$-measurable (and integrable, since $f(Y)$ is integrable and $Y'$ has the same distribution as $Y$), so it remains to show that for any $A \in \sigma(Y')$, we have $E[1_A f(Y')] = E[1_A X']$. Since $A \in \sigma(Y')$, we can write $A = (Y')^{-1}(B)$ for some Borel set $B$ (this fact is part of the proof of Doob-Dynkin). Using the assumption that $(X',Y')$ has the same distribution as $(X,Y)$, we get
$$\begin{align*} E[1_A f(Y')] &= E[1_B(Y') f(Y')] \\ &= E[1_B(Y) f(Y)] && \text{since $Y'$ and $Y$ are identically distributed} \\ &= E[1_B(Y)\, E[X \mid Y]] \\ &= E[1_B(Y)\, X] && \text{since $1_B(Y)$ is $\sigma(Y)$-measurable} \\ &= E[1_B(Y')\, X'] && \text{since $(X',Y')$ and $(X,Y)$ are identically distributed} \\ &= E[1_A X'] \end{align*}$$
as desired. Note that the second equality only needs the distributions of $Y$ and $Y'$ to agree, while the fifth is where the joint distribution of the pair is really used.
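To spell out how the lemma settles the original question (assuming, as in the question, that $\xi$ and $\eta$ are integrable and $(\xi,\eta)$ is distributed as $(\eta,\xi)$): apply the lemma with $(X,Y) = (\xi,\ \xi+\eta)$ and $(X',Y') = (\eta,\ \eta+\xi)$. These two pairs are identically distributed, since they are the images of $(\xi,\eta)$ and $(\eta,\xi)$ under the same map $(x,y) \mapsto (x,\ x+y)$. So, with $f$ as in the lemma,
$$E[\xi \mid \xi+\eta] = f(\xi+\eta) = E[\eta \mid \xi+\eta] \quad \text{a.s.},$$
and adding the two sides gives
$$2\,E[\xi \mid \xi+\eta] = E[\xi+\eta \mid \xi+\eta] = \xi+\eta, \qquad \text{so} \quad E[\xi \mid \xi+\eta] = \frac{\xi+\eta}{2} \quad \text{a.s.}$$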
It is worth noting that the function $f$ is generally not unique. In particular, we can modify $f$ on any Borel set $C \subseteq \mathbb{R}$ with $P(Y \in C)=0$ (in any way that keeps $f$ measurable) without changing $f(Y)$ almost surely.
Also, to address the point in kkk's comment: just knowing that $\xi$ and $\eta$ are identically distributed is not sufficient. Here is a counterexample. Let $\Omega = \{a,b,c\}$ have three outcomes, each with probability $1/3$ (and $\mathcal{F} = 2^\Omega$). Let $X(a) = 0$, $X(b)=1$, $X(c)=2$; and $Y(a)=1$, $Y(b)=2$, $Y(c)=0$. Thus $X$ is uniformly distributed on $\{0,1,2\}$, and $Y = (X + 1) \bmod 3$, so $Y$ is also uniformly distributed on $\{0,1,2\}$.
Now we have $(X+Y)(a) = 1$, $(X+Y)(b)=3$, $(X+Y)(c)=2$. So $X+Y$ is a one-to-one function on $\Omega$, hence $\sigma(X+Y) = \mathcal{F}$, and both $X$ and $Y$ are $\sigma(X+Y)$-measurable. Thus $E[X\mid X+Y]=X$ and $E[Y\mid X+Y]=Y$. However, $X$, $Y$, and $\frac{X+Y}{2}$ are three distinct random variables, so in particular $E[X \mid X+Y] \neq \frac{X+Y}{2}$.
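For a quick sanity check of this counterexample: on a finite probability space, $E[X \mid X+Y]$ is just the probability-weighted average of $X$ over each level set of $X+Y$. Here is a short Python sketch of that computation (the helper `cond_exp` and the variable names are my own, purely for illustration):

```python
from collections import defaultdict

# The three equally likely outcomes from the counterexample.
outcomes = ['a', 'b', 'c']
p = {w: 1/3 for w in outcomes}
X = {'a': 0, 'b': 1, 'c': 2}
Y = {'a': 1, 'b': 2, 'c': 0}

def cond_exp(f, g):
    """E[f | g] on a finite space: average f over each level set of g."""
    num = defaultdict(float)  # sum of p*f over {g = value}
    den = defaultdict(float)  # probability of {g = value}
    for w in outcomes:
        num[g[w]] += p[w] * f[w]
        den[g[w]] += p[w]
    return {w: num[g[w]] / den[g[w]] for w in outcomes}

S = {w: X[w] + Y[w] for w in outcomes}   # X + Y takes values 1, 3, 2

print(cond_exp(X, S))                    # {'a': 0.0, 'b': 1.0, 'c': 2.0} == X
print(cond_exp(Y, S))                    # {'a': 1.0, 'b': 2.0, 'c': 0.0} == Y
print({w: S[w] / 2 for w in outcomes})   # {'a': 0.5, 'b': 1.5, 'c': 1.0}
```

Since $X+Y$ is one-to-one here, every level set is a single outcome, so the conditional expectations reproduce $X$ and $Y$ exactly, and both differ from $\frac{X+Y}{2}$.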