Approach 1: I would start with the joint distribution of $Y_{(1)}$ and $Y_{(n)}$. (Since you know how to find the individual order statistic distributions you can use a similar argument to get the joint distribution.) This is $f_{Y_{(1)},Y_{(n)}}(y_1, y_n) = \frac{n(n-1)}{\theta^n}(y_n-y_1)^{n-2}, \:\:\:\: 0 < y_1 < y_n < \theta.$
Then do a bivariate transformation with $U = Y_{(1)}/Y_{(n)}$ and $V = Y_{(n)}$ to obtain $f_{U,Y_{(n)}}(u,y_n)$. The inverse transformation is $y_1 = u y_n$, so the Jacobian is just $y_n$, and you get
$f_{U,Y_{(n)}}(u,y_n) = \frac{n(n-1)}{\theta^n}(y_n - uy_n)^{n-2} y_n = \underbrace{(n-1)(1-u)^{n-2}}_{g(u)} \cdot \underbrace{\frac{n y_n^{n-1}}{\theta^n}}_{h(y_n)}, \:\:\: 0 < u < 1, \: 0 < y_n < \theta.$ Since $f_{U,Y_{(n)}}(u,y_n)$ factors into a function of $u$ alone times a function of $y_n$ alone, $U$ and $Y_{(n)}$ must be independent.
You can fill in the details, but this is the basic argument for this approach.
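As a quick sanity check on the factorization argument (not a substitute for the derivation), here is a small Monte Carlo sketch in Python using NumPy. It simulates many Uniform$(0,\theta)$ samples, forms $U = Y_{(1)}/Y_{(n)}$, and checks two consequences of independence: near-zero correlation between $U$ and $Y_{(n)}$, and that the distribution of $U$ looks the same whether $Y_{(n)}$ is small or large. The sample size `n = 5` and `theta = 2.0` are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta, reps = 5, 2.0, 200_000

# reps independent samples of size n from Uniform(0, theta)
y = rng.uniform(0, theta, size=(reps, n))
y_max = y.max(axis=1)            # Y_(n)
u = y.min(axis=1) / y_max        # U = Y_(1) / Y_(n)

# Independence implies zero correlation (a necessary condition)
print("corr(U, Y_(n)) ~", np.corrcoef(u, y_max)[0, 1])

# Stronger check: P(U < 1/2) should be the same on the
# low and high halves of Y_(n)
med = np.median(y_max)
u_lo, u_hi = u[y_max < med], u[y_max >= med]
print("P(U < 1/2) on halves:", np.mean(u_lo < 0.5), np.mean(u_hi < 0.5))
```

Both printed quantities should agree up to Monte Carlo noise; conditioning on larger or smaller values of $Y_{(n)}$ does not change the distribution of $U$.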
Approach 2: Because I can't stop myself, let me also give the argument described by did. :)
Obtain the conditional distribution $f_{Y_{(1)}|Y_{(n)}}(y_1|y_n)$ by dividing $f_{Y_{(1)},Y_{(n)}}(y_1,y_n)$ by the marginal density $f_{Y_{(n)}}(y_n) = \frac{n y_n^{n-1}}{\theta^n}$. This yields $f_{Y_{(1)}|Y_{(n)}}(y_1|y_n) = \frac{(n-1)(y_n-y_1)^{n-2}}{y_n^{n-1}}, \:\:\: 0 < y_1 < y_n.$
Then calculate $P(U < u | Y_{(n)})$ from $f_{Y_{(1)}|Y_{(n)}}(y_1|y_n)$. This is $\begin{align} P(U < u | Y_{(n)}) &= P(Y_{(1)} < u Y_{(n)} | Y_{(n)} = y_n) \\ &= \int_0^{u y_n} \frac{(n-1)(y_n-y_1)^{n-2}}{y_n^{n-1}} \, dy_1 \\ &= \frac{-1}{y_n^{n-1}} \left[(y_n - y_1)^{n-1} \right]_0^{u y_n} \\ &= \frac{-1}{y_n^{n-1}} \left[(y_n - uy_n)^{n-1} - y_n^{n-1} \right] \\ &= \frac{-1}{y_n^{n-1}} y_n^{n-1} \left[(1 - u)^{n-1} - 1\right]\\ &= 1 - (1-u)^{n-1}. \end{align}$ Since $P(U < u | Y_{(n)})$ does not depend on $Y_{(n)}$, $U$ and $Y_{(n)}$ are independent.
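The closed-form conditional CDF above can also be checked numerically. The sketch below (a quick simulation, not part of the proof; `n = 4` and `theta = 3.0` are arbitrary) compares the empirical CDF of $U$ against $1 - (1-u)^{n-1}$ at a few points; since the conditional CDF does not depend on $Y_{(n)}$, it must also be the unconditional CDF of $U$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta, reps = 4, 3.0, 200_000

# reps independent samples of size n from Uniform(0, theta)
y = rng.uniform(0, theta, size=(reps, n))
u = y.min(axis=1) / y.max(axis=1)    # U = Y_(1) / Y_(n)

# Empirical CDF of U vs. the derived formula 1 - (1-u)^(n-1)
for pt in (0.25, 0.5, 0.75):
    emp = np.mean(u < pt)
    theo = 1 - (1 - pt) ** (n - 1)
    print(f"u = {pt}: empirical {emp:.3f}, formula {theo:.3f}")
```

The two columns should match to within a couple of decimal places for this many replications.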