The Kullback-Leibler Divergence is defined as $$K(f:g) = \int \left(\log \frac{f(x)}{g(x)} \right) \ dF(x)$$
It is often described as measuring the "distance" between two distributions $f$ and $g$, although it is not a true metric: it is not symmetric and does not satisfy the triangle inequality. Why would it be preferable to the Euclidean distance in some situations?
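For concreteness, here is a minimal numerical sketch contrasting the two quantities on discrete distributions; the vectors `p` and `q` are made-up illustrative values, not taken from any particular problem.

```python
import numpy as np

# Two discrete distributions over the same support (illustrative values).
p = np.array([0.36, 0.48, 0.16])
q = np.array([0.30, 0.50, 0.20])

# Kullback-Leibler divergence: K(p:q) = sum_x p(x) * log(p(x) / q(x)).
kl_pq = np.sum(p * np.log(p / q))
kl_qp = np.sum(q * np.log(q / p))

# Euclidean distance between the same probability vectors, for comparison.
euclid = np.linalg.norm(p - q)

print(f"K(p:q)      = {kl_pq:.4f}")  # differs from K(q:p): KL is asymmetric
print(f"K(q:p)      = {kl_qp:.4f}")
print(f"||p - q||_2 = {euclid:.4f}")
```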