
I'm working with this model:

  • A fixed and finite set of documents: $D = \lbrace d_1, d_2, \dots, d_{|D|}\rbrace$
  • A fixed and finite set of terms that can appear in the documents: $V = \lbrace v_1, v_2, \dots, v_{|V|}\rbrace$
  • A fixed and finite set of latent classes: $T = \lbrace t_1, t_2, \dots, t_{|T|}\rbrace$
  • A probability distribution over the latent classes: $P(t)$
  • For each latent class, a distribution over terms: $P(v|t)$
  • For each latent class, a distribution over documents: $P(d|t)$

Given a corpus of documents and having chosen the probability distributions, the likelihood of the model is $L = \prod_d \prod_v [P(d,v)]^{n(d,v)}$, where $n(d,v)$ is the number of occurrences of the term $v$ in the document $d$, and $P(d,v) = \sum_t P(t)P(v|t)P(d|t)$.
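For concreteness, here is a minimal NumPy sketch that evaluates the log-likelihood $\log L = \sum_d \sum_v n(d,v)\log P(d,v)$ for one choice of the parameters; the toy sizes, random initialisation, and counts are all made up for illustration, and this is exactly the objective whose optima I am asking about.

```python
# Minimal sketch of the PLSA aspect-model likelihood (toy, hypothetical sizes).
import numpy as np

rng = np.random.default_rng(0)

n_docs, n_terms, n_topics = 5, 8, 3   # |D|, |V|, |T| -- toy values

# Randomly initialised parameters; each row is a proper probability distribution.
P_t   = rng.dirichlet(np.ones(n_topics))                 # P(t), shape (T,)
P_v_t = rng.dirichlet(np.ones(n_terms), size=n_topics)   # P(v|t), shape (T, V)
P_d_t = rng.dirichlet(np.ones(n_docs),  size=n_topics)   # P(d|t), shape (T, D)

# Toy term-document counts n(d, v), shape (D, V).
n_dv = rng.integers(0, 5, size=(n_docs, n_terms))

# P(d, v) = sum_t P(t) P(d|t) P(v|t), shape (D, V).
P_dv = np.einsum('t,td,tv->dv', P_t, P_d_t, P_v_t)

# log L = sum_{d,v} n(d, v) * log P(d, v).
log_L = np.sum(n_dv * np.log(P_dv))
print(log_L)
```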

This is known as the aspect model of Probabilistic Latent Semantic Analysis (PLSA).

I would like to know whether it is possible to estimate the number of (local) optima of this likelihood function, or whether anything else can be said about it.

Thank you.
