
In my research in calculus of variations, I wish to take a derivative of the following seemingly complicated expression:

$$\mathcal{L}(q(y|z)) = \sum_{z} \Big( \sum_y \big|p(y|z) - q(y|z)\big| \Big)^2$$

where $q(y|z)$ is the variable function and $p(y|z)$ is a constant (fixed) function.

Note: $p(y|z)$ and $q(y|z)$ are conditional probability distributions.

I wish to take the variational derivative with respect to the probability distribution $q(y|z)$ appearing in the sum above, i.e. $\frac{\delta\mathcal{L}}{\delta q(y|z)} = ?$

What is giving me trouble is the square, so I need help differentiating $\mathcal{L}$ with respect to $q(y|z)$. I would appreciate any help with this.
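For what it is worth, here is a sketch of the chain-rule computation. Since the sums are discrete, the variational derivative reduces to an ordinary partial derivative in each entry $q(y|z)$; the sketch is valid only where $p(y|z) \neq q(y|z)$, since $|\cdot|$ is not differentiable at $0$. Only the $z$-term containing the chosen entry survives:

$$\frac{\delta \mathcal{L}}{\delta q(y|z)} = 2\Big(\sum_{y'} \big|p(y'|z) - q(y'|z)\big|\Big) \cdot \frac{\partial}{\partial q(y|z)} \big|p(y|z) - q(y|z)\big| = -2\,\operatorname{sign}\!\big(p(y|z) - q(y|z)\big) \sum_{y'} \big|p(y'|z) - q(y'|z)\big|.$$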

  • Aren't $p$ and $q$ probability mass functions? Hence, why calculus of variations? (2017-02-24)
  • @RodrigodeAzevedo: variational optimization for a probabilistic setting; what is so strange? (2017-02-24)
  • The domains of $p$ and $q$ are countable. (2017-02-24)
  • @RodrigodeAzevedo: would that be a problem? (2017-02-24)
  • Are the domains finite? (2017-02-24)
  • @RodrigodeAzevedo: Not necessarily... (2017-02-24)
  • You have two matrices $\rm P, Q$ and you want to extremize the sum of the squared $1$-norms of the columns of $\rm P - Q$, where $\rm Q$ is the optimization variable. These matrices may be infinite. It would be much easier to work with the squared $2$-norms of the columns, i.e., with the squared Frobenius norm of $\rm P - Q$. (2017-02-24)
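Following the matrix view in the last comment, the objective and its candidate (sub)gradient can be sanity-checked numerically in the finite case. This is a sketch, not part of the original question: it treats $p, q$ as finite column-stochastic matrices `P`, `Q` (dimensions chosen arbitrarily) and compares the formula $-2\,\mathrm{sign}(P-Q)$ times the column $1$-norms against central finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nz = 4, 3
P = rng.random((ny, nz)); P /= P.sum(axis=0)   # p(y|z): columns sum to 1
Q = rng.random((ny, nz)); Q /= Q.sum(axis=0)   # q(y|z): the variable

def loss(Q):
    # L = sum_z ( sum_y |p(y|z) - q(y|z)| )^2
    return np.sum(np.abs(P - Q).sum(axis=0) ** 2)

def grad(Q):
    # candidate: dL/dq(y|z) = -2 * sign(p(y|z)-q(y|z)) * sum_y' |p(y'|z)-q(y'|z)|
    s = np.abs(P - Q).sum(axis=0)              # column 1-norms of P - Q
    return -2.0 * s[np.newaxis, :] * np.sign(P - Q)

# central finite-difference check (valid since no entry of P - Q is exactly 0 here)
eps = 1e-6
G = grad(Q)
G_fd = np.zeros_like(Q)
for i in range(ny):
    for j in range(nz):
        E = np.zeros_like(Q); E[i, j] = eps
        G_fd[i, j] = (loss(Q + E) - loss(Q - E)) / (2 * eps)

print(np.max(np.abs(G - G_fd)))  # small: the two gradients agree
```

The check only confirms the formula away from the kinks of $|\cdot|$; where some $p(y|z) = q(y|z)$ the objective is nondifferentiable and only a subgradient exists.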

0 Answers