
I have a multinomial multivariate normal distribution of the form:

$$\exp\left[-\frac{1}{2\sigma^2}({\boldsymbol \beta}-\mu)^T\Sigma^{-1}({\boldsymbol\beta}-\mu)\right]$$

I wish to integrate with respect to $\boldsymbol \beta$.

I have found a form of the Gaussian integral on Wikipedia, stated as follows:

$$\int\limits_{-\infty}^\infty\exp\left[-\frac{1}{2}\sum\limits_{i,j=1}^{n}{\bf A}_{ij}x_ix_j\right] d^nx=\sqrt{\frac{(2\pi)^n}{\det A}} $$

I do not know how to work out this integral or apply this 'rule', but I have come up with:

$$\int\limits_{-\infty}^\infty\exp\left[-\frac{1}{2\sigma^2}({\boldsymbol \beta}-\mu)^T\Sigma^{-1}({\boldsymbol\beta}-\mu)\right] d^n\beta = \sqrt{\frac{(2\pi)^n}{\det \Sigma^{-1}}} $$

This is probably not right. How do I do the integral, and how is the working out done?
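As a numerical sanity check of the quoted rule (not part of the original post; numpy/scipy and the example matrix $A$ are assumptions for illustration), one can compare the two-dimensional integral against $\sqrt{(2\pi)^n/\det A}$:

```python
# Sketch: verify  ∫ exp(-x^T A x / 2) d^n x = sqrt((2π)^n / det A)
# for an arbitrary symmetric positive-definite 2x2 matrix A.
import numpy as np
from scipy import integrate

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])  # example choice, symmetric positive definite

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v)

# dblquad expects func(y, x); infinite limits are supported
numeric, _ = integrate.dblquad(integrand, -np.inf, np.inf,
                               lambda x: -np.inf, lambda x: np.inf)
closed_form = np.sqrt((2 * np.pi) ** 2 / np.linalg.det(A))

print(numeric, closed_form)  # the two values agree to high precision
```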

  • 1
    In light of the inherent contradictions in the question, could you provide more context? I'm referring specifically to the fact that the first equation does not describe a distribution (presumably for $\beta$) *per se* because it is not normalized and, if it were normalized, then *a fortiori* the integral over $\beta$ would be $1$. (2012-08-07)
  • 0
    Isn't the integral simply the inverse of the normalizing constant that would be necessary to form a distribution? (2012-08-07)
  • 0
    Yes @gung, this question is really about how to do the integral, having done the statistical part of getting it into that form, so could you migrate it? (2012-08-07)
  • 0
    I do have a left-over term @Max when putting the distribution into the multivariate normal distribution form, which is $y^TB\Sigma B^Ty$, so would I use this constant as the answer then, and put it in brackets raised to the power $-1$? (2012-08-07)
  • 0
    @Max Yes, that is what whuber said. Without normalization it would equal the normalization constant, and with normalization it would equal 1. I think the question is whether or not we are dealing with an integral that has a closed form. The OP is looking for a closed form. For the multivariate normal there is one, but it should not be automatically presumed to be the case. Functions can be integrated numerically to get a normalization constant without the constant being expressible in closed form. (2012-08-07)
  • 0
    This is NOT "multinomial"; it's _multivariate normal_. (2012-08-07)
  • 0
    Get rid of the "$\sigma^2$". The variance is the matrix $\Sigma$. (2012-08-07)
  • 0
    Also, instead of dividing by $\det \Sigma^{-1}$, simplify that so you're multiplying by $\det\Sigma$. (2012-08-07)

1 Answer


You wrote $$\exp\left[-\frac{1}{2\sigma^2}({\boldsymbol \beta}-\mu)^T\Sigma^{-1}({\boldsymbol\beta}-\mu)\right]$$

If you let the new value of $\Sigma$ be $\sigma^2\Sigma$, then you have $$\exp\left[-\frac{1}{2}({\boldsymbol \beta}-\mu)^T\Sigma^{-1}({\boldsymbol\beta}-\mu)\right].$$ There's no reason to separate out that scalar, and it's not conventionally done.

The finite-dimensional case of the spectral theorem says every real symmetric matrix can be diagonalized by an orthogonal matrix, and you have $$ \Sigma = G^T \begin{bmatrix} \lambda_1 \\ & \lambda_2 \\ & & \lambda_3 \\ & & & \ddots \end{bmatrix} G. $$ Since $\Sigma$ is a variance (a "variance-covariance matrix" if you like), all of the $\lambda$s are non-negative, and since $\Sigma$ is nonsingular, all of them are positive. So let $\Sigma^{1/2}$ denote the matrix $$ \Sigma^{1/2} = G^T \begin{bmatrix} \sqrt{\lambda_1} \\ & \sqrt{\lambda_2} \\ & & \sqrt{\lambda_3} \\ & & & \ddots \end{bmatrix} G. $$ and then $\Sigma^{1/2}$ is a positive-definite symmetric matrix, and $(\Sigma^{1/2})^2=\Sigma$, and we let $\Sigma^{-1/2}$ denote the inverse. And since $\Sigma^{1/2}$ is symmetric, we have $(\Sigma^{1/2})^T\Sigma^{1/2}=\Sigma$.
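The construction above can be checked numerically (a sketch; numpy and the example matrix $\Sigma$ are assumptions): build $\Sigma^{1/2}$ from the spectral decomposition $\Sigma = G^T \Lambda G$ and verify its claimed properties.

```python
# Build Σ^{1/2} from the eigendecomposition of Σ and check that it is
# symmetric and squares back to Σ.
import numpy as np

Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])  # example positive-definite "variance"

lam, V = np.linalg.eigh(Sigma)  # Sigma = V @ diag(lam) @ V.T
G = V.T                         # so Sigma = G.T @ diag(lam) @ G
Sigma_half = G.T @ np.diag(np.sqrt(lam)) @ G

print(np.allclose(Sigma_half @ Sigma_half, Sigma))  # True
print(np.allclose(Sigma_half, Sigma_half.T))        # True
```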

Then we have $$ ({\boldsymbol \beta}-\mu)^T\Sigma^{-1}({\boldsymbol\beta}-\mu) = \Big( \Sigma^{-1/2}({\boldsymbol\beta}-\mu) \Big)^T \Big( \Sigma^{-1/2}({\boldsymbol\beta}-\mu) \Big) = \gamma^T\gamma, $$ where $\gamma=\Sigma^{-1/2}({\boldsymbol\beta}-\mu)$.
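The substitution can also be verified numerically (illustrative only; the vectors $\mu$, $\boldsymbol\beta$ and the matrix are assumed example values): with $\gamma = \Sigma^{-1/2}({\boldsymbol\beta}-\mu)$, the quadratic form collapses to $\gamma^T\gamma$.

```python
# Check that (β-μ)^T Σ^{-1} (β-μ) equals γ^T γ for γ = Σ^{-1/2}(β-μ).
import numpy as np

Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
mu = np.array([1.0, -2.0])
beta = np.array([0.5, 0.7])

lam, V = np.linalg.eigh(Sigma)
Sigma_neg_half = V @ np.diag(lam ** -0.5) @ V.T  # symmetric inverse square root

gamma = Sigma_neg_half @ (beta - mu)
lhs = (beta - mu) @ np.linalg.inv(Sigma) @ (beta - mu)
print(np.isclose(lhs, gamma @ gamma))  # True
```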

Then $$ \begin{align} \int_{\mathbb{R}^n} \cdots\cdots d\beta = \int_{\mathbb{R}^n} \cdots\cdots |\det\Sigma^{1/2}| \, d\gamma & = |\det\Sigma^{1/2}|\int_{\mathbb{R}^n} \cdots \cdots \\[10pt] & = |\det\Sigma^{1/2}|\int_{\mathbb{R}^n} \exp\left[ \frac{-1}{2} \gamma^T\gamma \right]\,d\gamma. \end{align} $$

This integral becomes $$ \int_{\mathbb{R}^n} \exp\left(\frac{-1}{2} \gamma_1^2 \right)\cdots\exp\left(\frac{-1}{2} \gamma_n^2 \right) \, d\gamma_1\cdots d\gamma_n. $$

Then it becomes the $n$th power of $$ \int_\mathbb{R} \exp\left(\frac{-1}{2}\gamma^2\right)\,d\gamma. $$ (And it's not hard to show that $\det(\Sigma^{1/2}) = \left(\det\Sigma\right)^{1/2}$.)
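Putting the pieces together numerically (a sketch; numpy/scipy and the example $\Sigma$ are assumptions): the one-dimensional integral is $\sqrt{2\pi}$, so the $n$-dimensional answer is $|\det\Sigma^{1/2}|\,(2\pi)^{n/2} = \sqrt{(2\pi)^n\det\Sigma}$.

```python
# Assemble the final answer: (1-D Gaussian integral)^n times the Jacobian
# factor |det Σ^{1/2}|, compared against sqrt((2π)^n det Σ).
import numpy as np
from scipy import integrate

one_dim, _ = integrate.quad(lambda g: np.exp(-0.5 * g**2), -np.inf, np.inf)
print(np.isclose(one_dim, np.sqrt(2 * np.pi)))  # True

Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
n = Sigma.shape[0]
lam, V = np.linalg.eigh(Sigma)
Sigma_half = V @ np.diag(np.sqrt(lam)) @ V.T

full = abs(np.linalg.det(Sigma_half)) * one_dim ** n
closed = np.sqrt((2 * np.pi) ** n * np.linalg.det(Sigma))
print(np.isclose(full, closed))  # True
```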

  • 0
    Here's something I haven't seen made explicit in a textbook: $\displaystyle\int_\mathbb{R}\int_\mathbb{R} f(\gamma_1) g(\gamma_2) \,d\gamma_1\;d\gamma_2$ $\displaystyle = \int_{\mathbb{R}} \left(\int_\mathbb{R} f(\gamma_1)\underbrace{{}\ g(\gamma_2)\ {}}\,d\gamma_1\right)\;d\gamma_2$. _No_ "$\gamma_1$" appears over the underbrace, and that is why we can pull it out: $\displaystyle\int_\mathbb{R}\left( g(\gamma_2) \int_\mathbb{R} f(\gamma_1)\,d\gamma_1 \right)\,d\gamma_2$. Now _no_ "$\gamma_2$" appears in the _inside_ integral, and that is why we can pull that out. (2012-08-07)
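The factorization described in the comment can be illustrated numerically (the particular $f$ and $g$ here are assumed example choices): a double integral of a product $f(\gamma_1)g(\gamma_2)$ equals the product of the two one-dimensional integrals.

```python
# Check that ∫∫ f(γ1) g(γ2) dγ1 dγ2 = (∫ f)(∫ g) for two example integrands.
import numpy as np
from scipy import integrate

f = lambda x: np.exp(-0.5 * x**2)   # example choice
g = lambda y: 1.0 / (1.0 + y**2)    # example choice

double, _ = integrate.dblquad(lambda y, x: f(x) * g(y),
                              -np.inf, np.inf,
                              lambda x: -np.inf, lambda x: np.inf)
prod = (integrate.quad(f, -np.inf, np.inf)[0]
        * integrate.quad(g, -np.inf, np.inf)[0])
print(np.isclose(double, prod))  # True
```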