Assuming the $X_i$ are independent exponential random variables with mean $E[X_i] = \lambda^{-1}$, $Y = \sum_i X_i = n\bar{X}$ is a Gamma random variable with mean $n\lambda^{-1}$ and density function
$$f_Y(y) = \lambda \frac{(\lambda y)^{n-1}}{\Gamma(n)}\exp(-\lambda y)\mathbf{1}_{(0,\infty)}(y).$$
Define $\alpha$ and $\beta$ as the solutions to $F_Y(\alpha)=0.025$
and $F_Y(\beta) = 0.975$ so that
$$P\{\alpha \leq Y \leq \beta\} = F_Y(\beta) - F_Y(\alpha) = 0.95$$
and note that
$$F_Y(\alpha) = \int_0^{\alpha}
\lambda \frac{(\lambda y)^{n-1}}{\Gamma(n)}\exp(-\lambda y) \mathrm dy
= \int_0^{\lambda\alpha}
\frac{t^{n-1}}{\Gamma(n)}\exp(-t) \mathrm dt$$ (substituting $t = \lambda y$), so that $a = \lambda\alpha$ is the $0.025$ quantile of a $\mathrm{Gamma}(n,1)$ random variable, and similarly $b = \lambda\beta$ is its $0.975$ quantile. Crucially, $a$ and $b$ depend only on $n$, not on $\lambda$. Verify for yourself that the following bounds hold:
$$ \alpha < n\lambda^{-1} < \beta; ~~ a < n < b.$$
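For concreteness, $a$ and $b$ can be computed numerically; a minimal sketch in Python, assuming SciPy is available (the sample size $n = 10$ is an arbitrary illustration):

```python
from scipy.stats import gamma

n = 10  # illustrative sample size

# a and b are the 0.025 and 0.975 quantiles of a Gamma(n, 1)
# random variable; they depend only on n, not on lambda.
a = gamma.ppf(0.025, n)
b = gamma.ppf(0.975, n)

# The bounds stated above: a < n < b.
assert a < n < b
print(a, b)
```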
Now suppose that the value of $\lambda$ is unknown.
We observe the values of the $X_i$, compute the value
of $Y = \sum_i X_i = n\bar{X}$, and have $95\%$ confidence
that $Y$ is in the interval $[\alpha, \beta]$ whose end-points
are, unfortunately, unknown to us. But, if we assume that
the known value of $Y$ is at the endpoint $\alpha = a/\lambda$,
then we are in effect estimating
that the unknown value of $\lambda$ is $a/Y = a/(n\bar{X})$
while if we assume that $Y$ is at the other endpoint
$\beta = b/\lambda$, then we are
in effect estimating
that the unknown value of $\lambda$ is $b/Y = b/(n\bar{X})$.
More generally, if we simply assume that $Y$ has taken
on the expected value $n\lambda^{-1}$, we in effect estimate
the value of $\lambda$ as
$$\hat{\lambda} = \frac{n}{Y} = \frac{n}{X_1 + X_2 + \cdots + X_n}$$
which will be readily recognized as the maximum-likelihood
estimate of the parameter $\lambda$ of an exponential random variable $X$
based on $n$ independent observations of $X$. But now we also
have a $95\%$ confidence interval for our estimate.
If the sample mean is $\bar{X}$, then
$\left(\frac{a}{n\bar{X}},\frac{b}{n\bar{X}}\right)$ is an exact $95\%$
confidence interval for the unknown parameter $\lambda$.
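The exactness claim can be checked by simulation; a sketch assuming SciPy, with the sample size $n$ and true rate $\lambda$ chosen arbitrarily for illustration:

```python
import random
from scipy.stats import gamma

random.seed(0)
n, lam = 10, 2.0          # illustrative sample size and true rate
a = gamma.ppf(0.025, n)   # quantiles of Gamma(n, 1); depend only on n
b = gamma.ppf(0.975, n)

trials = 20000
covered = 0
for _ in range(trials):
    # Y = n * Xbar, the sum of n exponential(lam) observations
    y = sum(random.expovariate(lam) for _ in range(n))
    # the interval (a/(n*Xbar), b/(n*Xbar)) = (a/y, b/y)
    if a / y < lam < b / y:
        covered += 1

print(covered / trials)  # close to 0.95
```

The empirical coverage matches the nominal $95\%$ level because $\lambda Y$ has a $\mathrm{Gamma}(n,1)$ distribution regardless of the value of $\lambda$.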