Let us go for hints then. A crucial tool is $Y_n$, the number of tosses between the $(n-1)$th head and the $n$th head of coin $2$. Then $(Y_n)_{n\geqslant1}$ is an i.i.d. sequence and one can easily determine its distribution. For every positive integer $n$, let $S_n=\sum\limits_{k=1}^nY_k$.
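For readers who like to see these objects concretely, here is a minimal simulation sketch of $(Y_n)$ and $(S_n)$. Since this hint does not restate the tossing scheme, the sketch uses a geometric law with a hypothetical parameter `p` as a stand-in for the distribution of $Y_1$ you determined; the argument below only ever uses $\mathrm E(Y_1)$.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 0.5      # hypothetical parameter of the stand-in law of Y_1
N = 10**5    # number of inter-head gaps to simulate

# Y_k: i.i.d. gaps (in tosses) between consecutive heads of coin 2
Y = rng.geometric(p, size=N)
# S_n = Y_1 + ... + Y_n: toss count at the moment of the n-th head of coin 2
S = np.cumsum(Y)

print(Y.mean())   # close to E(Y_1) = 1/p for the geometric stand-in
```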
Now $[X_n\geqslant2k]=[S_k\leqslant n]$, hence, to evaluate $\mathrm P(X_n\geqslant(1+\varepsilon)\mathrm E(X_n))$, one can choose some $k$ depending on $n$ such that $2k\leqslant (1+\varepsilon)\mathrm E(X_n)$ and use the upper bound $ \mathrm P(X_n\geqslant(1+\varepsilon)\mathrm E(X_n))\leqslant\mathrm P(S_k\leqslant n). $ Likewise, to evaluate $\mathrm P(X_n\leqslant(1-\varepsilon)\mathrm E(X_n))$, one can choose some $\ell$ depending on $n$ such that $2\ell\geqslant (1-\varepsilon)\mathrm E(X_n)$ and use the upper bound $ \mathrm P(X_n\leqslant(1-\varepsilon)\mathrm E(X_n))\leqslant\mathrm P(S_\ell\geqslant n). $

So one must estimate the probabilities of these $S_k$ and $S_\ell$ events. This is the realm of large deviations theory, but a simple version will be enough for us. Assume for example that one wants to bound $ A=\mathrm P(S_k\leqslant n). $ Note that for every nonnegative $t$, $[S_k\leqslant n]=[\mathrm e^{-tS_k}\geqslant\mathrm e^{-tn}]$. This remark is the basis of the so-called Chernoff bounds; in our case, by Markov's inequality and the independence of the $Y_k$, $ A=\mathrm P(\mathrm e^{-tS_k}\geqslant\mathrm e^{-tn})\leqslant\mathrm e^{tn}\mathrm E(\mathrm e^{-tS_k})=\mathrm e^{tn}\mathrm E(\mathrm e^{-tY_1})^k. $

Assume for a moment that $k=\alpha n+o(n)$ when $n\to\infty$; then $A\leqslant\mathrm e^{-\Omega(n)}$ as soon as there exists a nonnegative $t$ such that $ \mathrm e^{t}\,\mathrm E(\mathrm e^{-tY_1})^\alpha \lt1. $ When $t\to0$, the LHS of this inequality is $1+t-\alpha t\mathrm E(Y_1)+o(t)$, hence such a parameter $t$ exists as soon as $1-\alpha \mathrm E(Y_1)\lt0$, that is, as soon as $\alpha\gt1/\mathrm E(Y_1)$. You know the value of $\mathrm E(Y_1)$, and the only other condition on $\alpha$ is $ 2\alpha n+o(n)\leqslant (1+\varepsilon)\mathrm E(X_n), $ that is, $2\alpha\leqslant(1+\varepsilon)\xi$ with $ \xi=\lim\limits_{n\to\infty}\frac{\mathrm E(X_n)}{n}. $ Both conditions on $\alpha$ are compatible, and the proof that $A\leqslant\mathrm e^{-\Omega(n)}$ is complete, as soon as $ (1+\varepsilon)\cdot\mathrm E(Y_1)\cdot\xi\gt2. $

As you probably already guessed, it happens that $ \mathrm E(Y_1)\cdot\xi=2, $ hence this strategy succeeds for every positive $\varepsilon$. Likewise for $B=\mathrm P(S_\ell\geqslant n)$.
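To see the Chernoff step numerically, here is a sketch using the same hypothetical geometric stand-in for $Y_1$, for which $\mathrm E(\mathrm e^{-tY_1})=p\mathrm e^{-t}/(1-(1-p)\mathrm e^{-t})$ in closed form and $\mathrm E(Y_1)=1/p$. It scans $t\gt0$ and checks that $\mathrm e^{t}\,\mathrm E(\mathrm e^{-tY_1})^\alpha$ dips below $1$ exactly in the regime $\alpha\,\mathrm E(Y_1)\gt1$, as the small-$t$ expansion predicts.

```python
import numpy as np

p = 0.5           # hypothetical parameter of the geometric stand-in
EY = 1.0 / p      # E(Y_1)

def mgf_neg(t):
    """E(e^{-t * Y_1}) for a geometric(p) variable on {1, 2, ...}."""
    s = np.exp(-t)
    return p * s / (1.0 - (1.0 - p) * s)

ts = np.linspace(1e-4, 5.0, 10**4)
for alpha in (0.8 / EY, 1.2 / EY):    # alpha * E(Y_1) = 0.8 vs 1.2
    vals = np.exp(ts) * mgf_neg(ts) ** alpha
    print(f"alpha*E(Y_1) = {alpha * EY:.1f}: min_t e^t E(e^-tY)^alpha = {vals.min():.6f}")
# The minimum stays >= 1 when alpha*E(Y_1) <= 1 (no useful t exists),
# and drops below 1 when alpha*E(Y_1) > 1.
```

With $k=\alpha n$, any $t$ achieving a value $c\lt1$ gives $A\leqslant c^n$, that is, the claimed $\mathrm e^{-\Omega(n)}$ decay.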
The relation $\mathrm E(Y_1)\cdot\xi=2$ is standard renewal theory, but let me finish by explaining why it holds. In the long run, by the most classical version of the law of large numbers, it takes about $n=\mathrm E(Y_1)\cdot N$ tosses for $X$ to increase by $2N$ units, that is, $X_n\approx 2N\approx 2n/\mathrm E(Y_1)$. Since $X_n\approx\mathrm E(X_n)$, this yields $\xi=2/\mathrm E(Y_1)$.
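Finally, since $[X_n\geqslant2k]=[S_k\leqslant n]$ for every $k$, one has $2K_n\leqslant X_n\lt2K_n+2$ where $K_n=\#\{k\geqslant1:S_k\leqslant n\}$, so $X_n/n$ and $2K_n/n$ have the same limit. Here is a quick Monte Carlo check of $\xi=2/\mathrm E(Y_1)$, again with the hypothetical geometric stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)

p = 0.5        # hypothetical geometric stand-in for Y_1, E(Y_1) = 1/p
n = 10**6

# Draw enough gaps that S almost surely passes n, then count renewals.
Y = rng.geometric(p, size=n)              # the sum has mean n/p = 2n > n
S = np.cumsum(Y)
K = np.searchsorted(S, n, side="right")   # K_n = #{k : S_k <= n}

print(2 * K / n)    # ~ X_n / n
print(2 * p)        # xi = 2 / E(Y_1)
```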