If $\max\{-X_{(1)}, X_{(n)}\}$ is sufficient, then $(X_{(1)},X_{(n)})$ is automatically sufficient, since the former is a function of the latter. But then the pair cannot be a minimal sufficient statistic: a minimal sufficient statistic must be a function of every sufficient statistic, and the pair is not a function of the maximum, because knowing the maximum is not enough to recover the pair.
Being sufficient does not mean the statistic gives enough information to describe the data; rather, it means the statistic carries all the information in the data that is relevant to inference about $\theta,$ provided the proposed model is correct. Here the model is that the observations come from a uniform distribution on an interval symmetric about $0.$ The data may also contain information calling that model into question, and the sufficient statistic does not carry that information.
By definition, that the maximum is sufficient means that the conditional distribution of the data given the maximum does not depend on $\theta.$
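That conditional-distribution claim can be probed with a quick Monte Carlo sketch (the function names and sample sizes here are just illustrative). Writing $M = \max\{-X_{(1)}, X_{(n)}\},$ if $M$ is sufficient then the data rescaled by $M$ should have the same distribution no matter what $\theta$ generated them:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scaled(theta, n=5, reps=100_000):
    # Draw `reps` samples of size n from Uniform(-theta, theta),
    # compute M = max{-min(x), max(x)} for each sample, and return
    # every observation divided by its sample's M.  If M is
    # sufficient, the distribution of this rescaled data should
    # not depend on theta.
    x = rng.uniform(-theta, theta, size=(reps, n))
    m = np.maximum(-x.min(axis=1), x.max(axis=1))
    return (x / m[:, None]).ravel()

a = sample_scaled(theta=1.0)
b = sample_scaled(theta=7.0)

# Crude check: the empirical moments of the rescaled data should
# match across the two (very different) values of theta.
print(a.mean(), b.mean())  # both near 0
print(a.std(), b.std())    # nearly equal
```

The match is no accident: $X_i/\theta$ is Uniform$(-1,1)$ and $X_i/M = (X_i/\theta)\big/(M/\theta)$ is scale-free, so $\theta$ drops out entirely.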
You are trying to show that $\dfrac{\mathbb{1}_{[\max\{-X_{(1)},X_{(n)}\}<\theta]}}{\mathbb{1}_{[\max\{-Y_{(1)},Y_{(n)}\}<\theta]}} \vphantom{\dfrac 1 {\displaystyle\sum}}$ does not depend on $\theta$ when the two maxima are equal. I think in cases like this, where $0/0$ can appear, one should phrase the result as saying that if the two maxima are equal then there is some number $c\ne0$ such that
$$
\mathbb{1}_{[\max\{-X_{(1)},X_{(n)}\}<\theta]} = c \mathbb{1}_{[\max\{-Y_{(1)},Y_{(n)}\}<\theta]}
$$
and that this equality continues to hold as $\theta$ changes within the parameter space $(0,\infty).$
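The point about phrasing can be made concrete with two small hypothetical samples chosen to share the same maximum (the particular numbers below are made up for illustration): the two indicators agree at every $\theta,$ so the displayed equality holds with $c = 1$ across the whole parameter space, and no $0/0$ ever needs to be evaluated.

```python
import numpy as np

def M(x):
    # M(x) = max{-min(x), max(x)}: the proposed sufficient statistic
    return max(-min(x), max(x))

# Two hypothetical samples with the same value of M (here M = 3.0)
x = [-3.0, 0.5, 2.0]
y = [-1.0, 2.2, 3.0]
assert M(x) == M(y)

# The indicator 1[M < theta] agrees for the two samples at every
# theta on a grid over the parameter space (0, infinity), so the
# equality of indicators holds with c = 1 for all theta.
for theta in np.linspace(0.1, 10, 200):
    assert (M(x) < theta) == (M(y) < theta)
print("indicators agree for every theta on the grid")
```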