I am asking about the so-called approximate gradient descent algorithm for solving the convex problem $$ \min_x f(x),$$ whose $(k+1)$th iterate has the form: $$ x_{k+1} = x_k - \gamma \tilde{\nabla}f(x_k),$$ where $\tilde{\nabla}f(x_k)$ is an approximation of $\nabla f(x_k)$ and $\gamma$ is some sufficiently small, positive step size parameter.
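For concreteness, here is a toy instance of the kind of deterministic approximation I have in mind: a convex quadratic with a perturbed gradient whose error is bounded relative to the true gradient (the perturbation direction and the relative error bound `eta` are arbitrary choices for illustration, not part of the question):

```python
import numpy as np

# Convex quadratic f(x) = 0.5 * x^T A x with A symmetric positive definite.
A = np.array([[3.0, 1.0], [1.0, 2.0]])

def f(x):
    return 0.5 * x @ A @ x

def grad(x):
    return A @ x

def approx_grad(x, eta=0.1):
    # Deterministic perturbation with ||eps(x)|| <= eta * ||grad f(x)||.
    g = grad(x)
    eps = eta * np.linalg.norm(g) * np.array([1.0, -1.0]) / np.sqrt(2.0)
    return g + eps

gamma = 0.1          # small fixed step size
x = np.array([1.0, 1.0])
values = [f(x)]
for k in range(50):
    x = x - gamma * approx_grad(x)
    values.append(f(x))

# Empirically, the objective decreases monotonically in this example:
assert all(values[k + 1] <= values[k] for k in range(50))
```

Empirically the iteration behaves like exact gradient descent here, but I am looking for a proof rather than numerical evidence.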
Is it possible to prove (formally) that $$f(x_{k+1}) \leq f(x_k),$$ or, more generally, that approximate gradient descent "works"? If so, how? Note that I am not using a stochastic approximation of the gradient, so probabilistic arguments are not useful in this case.
Any pointers, whether to relevant papers or to thoughts on how one could proceed with a proof (e.g., conditions on $\gamma$ or on the error $\epsilon(x_k) = \tilde\nabla f(x_k) - \nabla f(x_k)$), would be helpful.