For statistical inference on a model to be meaningful, the model must be identifiable. Formally, let $\mathcal{P} = \{P_{\theta}: \theta \in \Theta \}$ be a statistical model. The model is identifiable if $P_{\theta_1} = P_{\theta_2}$ implies $\theta_1 = \theta_2$ for all $\theta_1, \theta_2 \in \Theta$, that is, if the map $\theta \mapsto P_\theta$ is injective.
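As a standard illustration (the particular family is chosen only for concreteness), consider the normal location model $\mathcal{P} = \{N(\theta_1 + \theta_2, 1) : (\theta_1, \theta_2) \in \mathbb{R}^2\}$. It is not identifiable, since the distribution depends on the parameters only through their sum:
\[
P_{(\theta_1, \theta_2)} = P_{(\theta_1', \theta_2')} \quad \text{whenever} \quad \theta_1 + \theta_2 = \theta_1' + \theta_2',
\]
so $\theta \mapsto P_\theta$ is not injective. Reparametrizing by $\mu = \theta_1 + \theta_2$ restores identifiability.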
So given design points $\theta_1, \dots, \theta_n$ at which we have made observations $y(1), \dots, y(n)$, how do we identify the polynomials $P_{\theta_1}, \dots, P_{\theta_n}$? Can different sets of polynomials be consistent with the given design points (that is, can different models fit the same design)? Note that the polynomial $d(x) = (x-\theta_1) \cdots (x-\theta_n)$ has exactly the design points as its zeros.
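A small example (with design points chosen only for concreteness) shows why different polynomials can be indistinguishable on a design. Take $n = 2$ design points $\theta_1 = 0$ and $\theta_2 = 1$, so that
\[
d(x) = x(x - 1) = x^2 - x.
\]
The monomials $x$ and $x^2$ then agree at both design points, since their difference $x^2 - x = d(x)$ vanishes there; observations at this design cannot distinguish the models $y = \beta x$ and $y = \beta x^2$. More generally, two polynomials take the same values on the design exactly when their difference is a multiple of $d(x)$.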
Suppose $k$ is a field of constants and $\mathcal{K}$ a field of functions $\phi: \Theta \to k$, where $\Theta$ is the set of parameters (each $\phi$ a linear functional on $\Theta$). Let $x = (x_1, \dots, x_d)$ be the control factors, $y = (y_1, \dots, y_p)$ the response variables, and $t = (t_1, \dots, t_h)$ dummy variables. A model is a finite list of polynomials $f_1, \dots, f_q$ and $h_1, \dots, h_r$ such that $f_i \in \mathcal{K}[x,y,t]$ and $h_j \in k[x,t]$.
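Purely to instantiate the definition (the specific polynomials here are illustrative, not part of the general setup), a simple linear regression with one control factor and one response can be written with $q = 1$, no dummy variables, and
\[
f_1 = y - \phi_0 - \phi_1 x \in \mathcal{K}[x, y],
\]
where $\phi_0, \phi_1 \in \mathcal{K}$ are functions of the parameters. Adjoining the constraint $h_1 = x(x-1) \in k[x]$ restricts the control factor to the design points $\{0, 1\}$, tying this definition back to the design polynomial $d(x)$ above.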