$$\min_x \; f_0(x) \quad \text{s.t. } f_i(x) \le y_i,\quad i = 1,\ldots,m,$$
where each $f_i$ is convex and $x$ is the optimization variable.
Let $g(y)$ denote the optimal value of this problem as a function of the right-hand side $y$, and let $\lambda^*$ be the optimal dual variable.
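For reference, writing out the Lagrangian of this perturbed problem and its dual function (the symbol $h$ for the dual function is my own notation):
$$L(x,\lambda) = f_0(x) + \sum_{i=1}^{m} \lambda_i \bigl(f_i(x) - y_i\bigr), \qquad h(\lambda) = \inf_x L(x,\lambda),$$
so that, when strong duality holds, $g(y) = h(\lambda^*)$.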
Then it is claimed that
$$g(z) \ge g(y) - \sum_{i=1}^m \lambda^*_i (z_i - y_i). \tag{1}$$
Hence $-\lambda^*$ is a subgradient of $g$ at $y$: inequality (1) is exactly the subgradient inequality $g(z) \ge g(y) + (-\lambda^*)^T (z - y)$ for all $z$.
Though the material I am reading (Basic Rules for Subgradient Calculus, slide at 51:40) claims the proof of (1) is straightforward, I still can't figure out how to derive it. Can anybody help?
My Approach:
Assuming $z_i = f_i(x)$, I get the Lagrangian as $g(z) = f_0(x) + \sum_{i=1}^{m}\lambda_i (z_i - y_i)$. Since $g(y)$ is the optimal value of the problem and $\lambda^*$ is the optimal dual variable, I can write
$g(y) = f_0(x) + \sum_{i=1}^{m}\lambda^*_i (z_i - y_i)$, or maybe $g(y) = f_0(x)$, since then $z_i = y_i$. But I can't figure out how I should use $\lambda^*$ to arrive at eq. (1), which is to be derived.
The unconstrained problem is $g(y) = \inf_z g(z)$. From the problem definition, it is also true that $g(y) \le g(z)$. But how can (1) be derived from these relations? Or am I making some wrong assumption here?
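To at least convince myself that (1) holds, I checked it numerically on a toy instance (a sketch using cvxpy; the example problem $\min_x x^2$ s.t. $-x \le y$ and all the numbers below are my own choices, not from the slides):

```python
import cvxpy as cp

def solve_perturbed(y):
    """Solve min x^2 s.t. -x <= y; return optimal value and optimal dual variable."""
    x = cp.Variable()
    constraint = -x <= y
    prob = cp.Problem(cp.Minimize(cp.square(x)), [constraint])
    prob.solve()
    return prob.value, constraint.dual_value

y = -1.0                              # base problem: x >= 1, so g(y) = 1, lambda* = 2
g_y, lam_star = solve_perturbed(y)

# Verify inequality (1) at a few other perturbations z
for z in [-2.0, -0.5, 0.0, 1.0]:
    g_z, _ = solve_perturbed(z)
    bound = g_y - lam_star * (z - y)  # right-hand side of (1)
    print(f"z = {z:5.2f}:  g(z) = {g_z:7.4f} >= {bound:7.4f}", g_z >= bound - 1e-6)
```

In every case $g(z)$ stays above the affine lower bound, which is consistent with $-\lambda^*$ being a subgradient of $g$ at $y$, but I would still like to see the actual derivation.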