
In order to do statistical inference on a model, it must be identifiable. In other words, let $\mathcal{P} = \{P_{\theta}: \theta \in \Theta \}$ be a statistical model. Then the model is identifiable if $P_{\theta_1} = P_{\theta_2}$ implies $\theta_1 = \theta_2$ for all $\theta_1, \theta_2 \in \Theta$.
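As a standard illustration of what can go wrong (my own addition, not part of the original question): take $Y \sim N(\theta_1 + \theta_2, 1)$ with parameter $\theta = (\theta_1, \theta_2) \in \mathbb{R}^2$. Then
$$P_{(1,0)} = P_{(0,1)} = N(1,1),$$
so distinct parameters give the same distribution and the model is not identifiable; only the sum $\theta_1 + \theta_2$ can be recovered from data.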

So given design points $\theta_1, \dots, \theta_n$ at which we have made observations $y(1), \dots, y(n)$, how do we identify the polynomials $P_{\theta_1}, \dots, P_{\theta_n}$? Can different sets of polynomials be identified with the given design points (i.e. can different models be consistent with the same design points)? Note that if $d(x) = (x-\theta_1) \cdots (x-\theta_n)$, then the zeros of $d$ are exactly the design points.
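A small computational sketch of the aliasing issue behind this question (my own illustration, using sympy and made-up design points, not anything from the original post): two polynomials that differ by a multiple of $d(x)$ take the same values at every design point, so observations at the design alone cannot distinguish them; only the class of a model modulo the ideal generated by $d(x)$ is identified.

```python
import sympy as sp

x = sp.symbols('x')

# hypothetical design points theta_1, ..., theta_n (chosen only for illustration)
design = [0, 1, 2]

# d(x) = (x - theta_1) * ... * (x - theta_n); its zeros are exactly the design points
d = sp.Integer(1)
for theta in design:
    d *= (x - theta)

# two different candidate polynomial models that differ by a multiple of d(x)
f = 1 + 2*x
g = sp.expand(f + (x - 3) * d)   # g != f as polynomials, but g = f modulo d(x)

# at the design points the two models are indistinguishable
print([f.subs(x, theta) for theta in design])   # [1, 3, 5]
print([g.subs(x, theta) for theta in design])   # [1, 3, 5]
```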

Added: Suppose $k$ is a field of constants and $\mathcal{K}$ a field of functions $\phi: \Theta \to k$, with $\Theta$ the set of parameters (a linear functional). Let $x = (x_1, \dots, x_d)$ be the control factors, $y = (y_1, \dots, y_p)$ the response variables, and $t = (t_1, \dots, t_h)$ dummy variables. A model is a finite list of polynomials $f_1, \dots, f_q$, $h_1, \dots, h_f$ such that $f_i \in \mathcal{K}[x,y,t]$ and $h_i \in k[x,t]$.
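As a toy reading of this definition (my own addition, and only a guess at the intended setup): a single-response linear model in one control factor could be written with $f_1 = y_1 - (\phi_0 + \phi_1 x_1) \in \mathcal{K}[x,y,t]$, where $\phi_0, \phi_1 \in \mathcal{K}$ are functions of the parameters, together with $h_1 = (x_1 - \theta_1)\cdots(x_1 - \theta_n) \in k[x,t]$ whose zero set encodes the design points.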

  • I know very little about algebraic statistics, but since no expert seems to have turned up here, I can try. I have only seen the question the other way around: given polynomials $P_\theta$ depending on parameters $\theta$ and observations $y(1), \dots, y(n)$, find the design points $\theta$ that are most likely (*not* the polynomials themselves). Is that what your question is about? It would help very much if you told us from which context your question arises. Maybe an introduction to algebraic statistics would help you; how about Chapter 2 of this one: http://tinyurl.com/62rezl6 ? – 2011-06-20

0 Answers