I am working on an optimization problem, and I'm getting a little confused about gradients and Jacobians - I haven't taken vector calc in a long time, and I think I'm a little rusty. Hopefully someone can explain this confusion to me. So, earlier in this problem, I was looking at a Taylor series expansion of $f_k(x_k)$ about some other point, $z_k$. The Taylor series expansion (up to first order terms) looks like so:
$$f_k(x_k) \approx f_k(z_k) + F_k(z_k)[x_k-z_k],$$
where $F_k$ is the Jacobian of $f_k$ with respect to $x_k$. So, I believe that we have
\begin{equation*} F_k(z_k) = \begin{bmatrix} \frac{\partial f_{k,1}}{\partial x_{k,1}}(z_k) & \frac{\partial f_{k,1}}{\partial x_{k,2}}(z_k) & \cdots & \frac{\partial f_{k,1}}{\partial x_{k,n}}(z_k) \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_{k,n}}{\partial x_{k,1}}(z_k) & \frac{\partial f_{k,n}}{\partial x_{k,2}}(z_k) & \cdots & \frac{\partial f_{k,n}}{\partial x_{k,n}}(z_k) \end{bmatrix}, \end{equation*}
and then $F_k(z_k)[x_k-z_k]$ is just the above matrix times the vector $(x_k-z_k)$. In other words, $F_k$ is a matrix-valued function: we evaluate it at $z_k$, and then multiply the resulting matrix by $(x_k-z_k)$.
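To convince myself this interpretation is right, I wrote a small numerical sanity check (the function $f$ here is just a made-up two-dimensional example, not from my actual problem): evaluate the Jacobian at $z$, and verify that $f(z) + F(z)(x-z)$ approximates $f(x)$ for $x$ near $z$.

```python
import numpy as np

# Made-up example f: R^2 -> R^2, f(x) = (x0^2 + x1, sin(x0) * x1)
def f(x):
    return np.array([x[0]**2 + x[1], np.sin(x[0]) * x[1]])

# Its Jacobian F(x): row i holds the partial derivatives of f_i
def F(x):
    return np.array([
        [2 * x[0],             1.0         ],
        [np.cos(x[0]) * x[1],  np.sin(x[0])],
    ])

z = np.array([1.0, 2.0])
x = z + 1e-3 * np.array([0.7, -0.4])   # a point close to z

exact  = f(x)
linear = f(z) + F(z) @ (x - z)          # first-order Taylor approximation
err = np.max(np.abs(exact - linear))    # should be O(||x - z||^2), i.e. tiny
print(err)
```

The error shrinks quadratically as $x \to z$, which is exactly what a first-order Taylor expansion should do.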
But, let's say that I wanted to find the gradient of the following function:
$$J(x_k) = \frac{1}{2}(h_k(x_k)-y_k)^T(h_k(x_k)-y_k)$$
with respect to $x_k$. Now, it should look something like:
$$\nabla_{x_k}J = H_k^T(h_k(x_k)-y_k).$$
where $H_k$ is the Jacobian of $h_k$ with respect to $x_k$. So, here's my question: using the notation above, where we had $F_k(z_k)[x_k-z_k]$, should the term $H_k^T(h_k(x_k)-y_k)$ really be written as $H_k(x_k)^T[h_k(x_k)-y_k]$? That is, do we need that extra $x_k$ in there to emphasize that we are evaluating the Jacobian at $x_k$, and then multiplying the whole thing by $h_k(x_k)-y_k$?
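For concreteness, here's the kind of numerical check I had in mind for the gradient formula, with a made-up $h$ and $y$ (again, not from my actual problem): compare $H(x)^T(h(x)-y)$ against a central finite-difference gradient of $J$.

```python
import numpy as np

# Made-up example h: R^2 -> R^2
def h(x):
    return np.array([x[0] * x[1], x[0] + x[1]**2])

def H(x):  # Jacobian of h
    return np.array([
        [x[1], x[0]    ],
        [1.0,  2 * x[1]],
    ])

y = np.array([0.5, -1.0])

def J(x):
    r = h(x) - y
    return 0.5 * r @ r   # scalar: 0.5 * (h(x)-y)^T (h(x)-y)

x = np.array([1.2, -0.3])
grad_analytic = H(x).T @ (h(x) - y)   # the claimed gradient formula

# Central finite differences for comparison
eps = 1e-6
grad_fd = np.array([
    (J(x + eps * e) - J(x - eps * e)) / (2 * eps)
    for e in np.eye(2)
])
gap = np.max(np.abs(grad_analytic - grad_fd))  # should be very small
print(gap)
```

The two gradients agree to finite-difference accuracy, which at least confirms that the Jacobian in the formula is evaluated at the same $x_k$ that appears in $h_k(x_k)-y_k$.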
Sorry if this question is confusing, I can try to clear it up if you don't understand what I'm asking. Thanks!