Consider a random column vector $\mathbf{x}$ of dimension $m$; that is, a random vector composed of $m$ random variables. The PDF of the random vector $\mathbf{x}$ is thus the joint PDF of its $m$ component random variables.
Suppose one wishes to whiten the random variables of this vector; that is, to de-correlate them and then scale their variances to unity. One way to do this is to compute an $m \times m$ whitening matrix $\mathbf{V}$, and one way to compute the whitening matrix is:
$ \mathbf{V} = \mathbf{E} \mathbf{D}^{-\frac{1}{2}} \mathbf{E}^{T} $
where $\mathbf{E}$ is the matrix whose columns are eigenvectors and $\mathbf{D}$ is the corresponding diagonal eigenvalue matrix, both coming from the eigendecomposition of $\mathbf{x}$'s covariance matrix, $\mathbf{R}_{xx} = \mathbf{E}\mathbf{D}\mathbf{E}^{T}$. Here $\mathbf{D}^{-\frac{1}{2}}$ denotes the inverse matrix square root. Many other types of whitening matrices can be constructed as well.
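For concreteness, the construction above (often called ZCA whitening) can be sketched in a few lines of NumPy; the data matrix `X` and its dimensions here are illustrative assumptions, not part of the question:

```python
import numpy as np

# Illustrative data: 1000 samples of a correlated 5-dimensional random vector
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 5))

Xc = X - X.mean(axis=0)            # center the data
R = np.cov(Xc, rowvar=False)       # sample covariance R_xx
w, E = np.linalg.eigh(R)           # eigenvalues (diag of D) and eigenvectors E
V = E @ np.diag(w ** -0.5) @ E.T   # V = E D^{-1/2} E^T
Z = Xc @ V.T                       # whitened samples

# The sample covariance of Z is the identity, up to floating-point error
print(np.allclose(np.cov(Z, rowvar=False), np.eye(5)))
```

Note that the "inverse" here only acts on the diagonal eigenvalue matrix, so it is elementwise (`w ** -0.5`); the dominant cost is the eigendecomposition itself.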
My question: to my knowledge, any construction of a whitening matrix involves an inverse operation somewhere. This is fine when the dimensionality is small, but I hesitate to use this method for larger values of $m$.
What other methods of computing a whitening matrix might exist that do not involve the computation of inverses?
Much obliged.