The conclusion from this example is that sometimes we need to pay special attention when conditioning on events of zero probability. Suppose, as in the example, that $X_1$ and $X_2$ are i.i.d. exponentials. Fix $a, b > 0$ with $a < b$, and consider the question: what is ${\rm P}\big[ X_1+X_2 \in [a,b] \big| X_1=X_2 \big]$?

You may (naturally) wish to interpret this as
$$
{\rm P}\big[X_1 + X_2 \in [a,b]\big|X_1 - X_2 = 0 \big] = \lim_{h \to 0^+} {\rm P}\big[ X_1 + X_2 \in [a,b]\big|0 < X_1 - X_2 < h \big],
$$
so that in this case
$$
{\rm P}\big[ X_1+X_2 \in [a,b] \big| X_1=X_2 \big] = \lim_{h \to 0^+} \frac{{\rm P}\big[ X_1 + X_2 \in [a,b],\, 0 < X_1 - X_2 < h\big]}{{\rm P}\big[0 < X_1 - X_2 < h\big]}.
$$
On the other hand, you may (though less naturally) wish to interpret it as
$$
{\rm P}\big[ X_1 + X_2 \in [a,b] \big| X_1/X_2 = 1 \big] = \lim_{h \to 0^+} {\rm P}\big[ X_1 + X_2 \in [a,b] \big| 1 < X_1/X_2 < 1+h \big],
$$
leading to
$$
{\rm P}\big[ X_1+X_2 \in [a,b] \big| X_1=X_2 \big] = \lim_{h \to 0^+} \frac{{\rm P}\big[ X_1 + X_2 \in [a,b],\, 0 < X_1 - X_2 < X_2 h\big]}{{\rm P}\big[0 < X_1 - X_2 < X_2 h\big]}.
$$
It should not be surprising that the two interpretations lead to different probabilities: in each case we take a limit of the form $a_i(h)/b_i(h)$, $i=1,2$, with $a_i(h), b_i(h) \to 0$ as $h \to 0^+$, and clearly ${\rm P}\big[0 < X_1 - X_2 < h\big]$ and ${\rm P}\big[0 < X_1 - X_2 < X_2 h\big]$ need not have the same behavior as $h \to 0^+$. So the only problem was how to interpret conditioning on $X_1 = X_2$, and this is up to you. In general, however, there is no such problem: you just use the formula
$$
f_{Y|X}(y|x) = \frac{f_{X,Y}(x,y)}{f_X(x)}
$$
to find the conditional density function of $Y$ given $X=x$ (where $f_{X,Y}$ is the joint density function of $X$ and $Y$). The conditional distribution function of $Y$ given $X=x$ is then obtained by integrating the conditional density.
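The divergence of the two interpretations is easy to see empirically. Below is a minimal Monte Carlo sketch: for a small fixed $h$ it estimates both conditional probabilities, one conditioning on $\{0 < X_1 - X_2 < h\}$ and one on $\{1 < X_1/X_2 < 1+h\}$ (equivalently $\{0 < X_1 - X_2 < X_2 h\}$). The parameter choices $\lambda = 1$, $[a,b] = [1,2]$, $h = 0.01$ are mine, purely for illustration.

```python
import math
import random

# Illustrative parameters (my choice, not from the text): lambda = 1, [a, b] = [1, 2].
random.seed(0)
lam, a, b, h, n = 1.0, 1.0, 2.0, 0.01, 1_000_000

num1 = den1 = num2 = den2 = 0
for _ in range(n):
    x1 = random.expovariate(lam)
    x2 = random.expovariate(lam)
    s, d = x1 + x2, x1 - x2
    if 0 < d < h:            # conditioning event for the X1 - X2 interpretation
        den1 += 1
        num1 += a <= s <= b
    if 0 < d < x2 * h:       # conditioning event for the X1 / X2 interpretation
        den2 += 1
        num2 += a <= s <= b

# The two estimates converge (as h -> 0+) to different limits:
# roughly e^-1 - e^-2 = 0.23 for the first, 2e^-1 - 3e^-2 = 0.33 for the second.
print("difference interpretation:", num1 / den1)
print("ratio interpretation:     ", num2 / den2)
```

Shrinking $h$ (and increasing $n$ to keep the conditioning events populated) drives the two estimates toward their respective, distinct limits.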
EDIT: The conditional density function of $X_1+X_2$ given $X_1-X_2=0$ (where $X_1$ and $X_2$ are independent exponential($\lambda$) rv's), when the conditioning is interpreted with respect to the random variable $X_1-X_2$, is, by Eq. (11) in the book, the exponential$(\lambda)$ density function, $\lambda e^{-\lambda y}$, $y \geq 0$. Hence ${\rm P}\big[X_1 + X_2 \in [a,b]\big|X_1 - X_2 = 0 \big]$ is given by
$$
\lim_{h \to 0^+} {\rm P}\big[ X_1 + X_2 \in [a,b]\big|0 < X_1 - X_2 < h \big] = \int_a^b \lambda e^{-\lambda y}\,{\rm d}y = e^{-\lambda a} - e^{-\lambda b}.
$$
On the other hand, the conditional density function of $X_1+X_2$ given $X_1/X_2=1$, when the conditioning is interpreted with respect to the random variable $X_1/X_2$, is, by Eq. (10) in the book, the ${\rm Gamma}(2,\lambda)$ density function, $\lambda^2 y e^{-\lambda y}$, $y \geq 0$ (that is, the unconditional density function of $X_1+X_2$; this is because $X_1+X_2$ and $X_1/X_2$ are independent). Hence ${\rm P}\big[ X_1 + X_2 \in [a,b] \big|X_1/X_2 = 1 \big]$ is given by
$$
\lim_{h \to 0^+} {\rm P}\big[ X_1 + X_2 \in [a,b] \big|1 < X_1/X_2 < 1+h \big] = \int_a^b \lambda^2 y e^{-\lambda y}\,{\rm d}y = (\lambda a + 1)e^{-\lambda a} - (\lambda b + 1)e^{-\lambda b}.
$$
Both results agree with numerical simulations (approximating the probabilities for small values of $h$).
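The two closed-form answers can also be checked directly by numerically integrating the respective conditional densities over $[a,b]$; a small sketch with a basic trapezoidal rule (again assuming the illustrative values $\lambda = 1$, $[a,b] = [1,2]$):

```python
import math

# Illustrative parameters (my choice): lambda = 1, [a, b] = [1, 2].
lam, a, b, n = 1.0, 1.0, 2.0, 100_000

def trapezoid(f, lo, hi, n):
    """Composite trapezoidal rule for f on [lo, hi] with n subintervals."""
    step = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * step) for i in range(1, n))
    return total * step

# Exponential(lam) density: conditioning interpreted via X1 - X2
p_diff = trapezoid(lambda y: lam * math.exp(-lam * y), a, b, n)
# Gamma(2, lam) density: conditioning interpreted via X1 / X2
p_ratio = trapezoid(lambda y: lam**2 * y * math.exp(-lam * y), a, b, n)

print(p_diff, math.exp(-lam * a) - math.exp(-lam * b))
print(p_ratio,
      (lam * a + 1) * math.exp(-lam * a) - (lam * b + 1) * math.exp(-lam * b))
```

Each quadrature value matches its closed-form counterpart to high precision, confirming the two (different) limiting probabilities.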