I have a functional $$ J(\theta) = \frac{1}{2}\int_0^t \left[\left(k(\tau)-\theta^{T}(t)l(\tau)\right)^2 + (\theta(t)-\theta(\tau))^T W(\theta(t)-\theta(\tau))\right] d\tau $$ which I would like to minimize with respect to $\theta(t)$. I see that it is convex if $W$ is positive semidefinite, which is assumed to be true.
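To make the functional concrete, here is a minimal numerical sketch of a discretized $J$. The regressor $l(\tau)=(1,\tau)^T$, the past trajectory $\theta(\tau)$, the data $k(\tau)$, and the weight $W$ are all made-up illustrative choices, and $\theta(t)$ is treated as the free variable while $\theta(\tau)$ for $\tau < t$ is held fixed:

```python
import numpy as np

# Hypothetical discretization of J on a grid tau in [0, t].
# l(tau), k(tau), theta_past(tau), and W are illustrative assumptions.
t = 1.0
taus = np.linspace(0.0, t, 200)
dtau = taus[1] - taus[0]

l = lambda tau: np.array([1.0, tau])                        # regressor l(tau)
theta_past = lambda tau: np.array([2.0, -1.0 + 0.5 * tau])  # assumed theta(tau)
k = lambda tau: theta_past(tau) @ l(tau)                    # noiseless data k(tau)
W = np.diag([0.1, 0.1])                                     # PSD weight

def J(theta_t):
    """Riemann-sum approximation of the functional at fixed t."""
    total = 0.0
    for tau in taus:
        r = k(tau) - theta_t @ l(tau)          # prediction residual
        d = theta_t - theta_past(tau)          # deviation from past estimates
        total += 0.5 * (r**2 + d @ W @ d) * dtau
    return total

# J is a convex quadratic in theta(t); compare two candidate values.
print(J(np.array([2.0, -1.0])), J(np.array([0.0, 0.0])))
```

With $\theta(\tau)$ fixed this is an ordinary convex quadratic in the vector $\theta(t)$, which is why a finite-dimensional stationarity condition (an integral over $\tau$) would be expected.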
Let $\Theta(\epsilon,t)$ be a family of test functions with variation parameter $\epsilon$, such that $\Theta(\epsilon_0,t)=\theta(t)$. The first variation is then $$ \delta J(\theta) = \int_0^t -l(\tau)\left(k(\tau)-l^T(\tau)\Theta(\epsilon,t)\right) \delta \theta(t) + W\left(\Theta(\epsilon,t)-\Theta(\epsilon,\tau)\right) \left(\delta\theta(t)-\delta\theta(\tau)\right) d\tau \;. $$ Using the condition for a local stationary point of $J$, $$\delta J |_{\epsilon = \epsilon_0} = 0 \,, \qquad \forall \delta \theta \,,$$ I end up with the two Euler-Lagrange equations $$ -l(\tau)\left(k(\tau)-l^T(\tau)\theta(t)\right) + W(\theta(t)-\theta(\tau)) = 0 \,, \\ - W(\theta(t)-\theta(\tau)) = 0 \;. $$
Comparing this to the classical least-squares condition $$ \int_0^t -l(\tau)\left(k(\tau)-l^T(\tau)\theta\right) d\tau = 0 \,, $$ I cannot see why the integral disappears in the variational formulation, or what exactly those Euler-Lagrange equations (ELE), as functions of $\theta$ and $\tau$, mean (assuming the above is correct). Can anyone explain the connection between them and LS? (I know that there $\theta$ is constant by assumption.) In other words: why can I not recover the LS solution with the variational formulation, and what does the second ELE mean?
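For reference, the classical LS condition above can be checked numerically: discretizing the integral turns it into the normal equations $\left(\int_0^t l\,l^T d\tau\right)\theta = \int_0^t l\,k\, d\tau$. A minimal sketch, with a made-up regressor $l(\tau)=(1,\tau)^T$ and noiseless data:

```python
import numpy as np

# Discretized classical LS: int_0^t l(tau)(k(tau) - l^T(tau) theta) dtau = 0.
# l(tau) and theta_true are illustrative assumptions.
t = 1.0
taus = np.linspace(0.0, t, 500)
dtau = taus[1] - taus[0]

theta_true = np.array([2.0, -1.0])
L = np.stack([np.array([1.0, tau]) for tau in taus])  # rows are l^T(tau)
kvec = L @ theta_true                                 # noiseless k(tau)

# Normal equations: (int l l^T dtau) theta = int l k dtau
A = dtau * L.T @ L
b = dtau * L.T @ kvec
theta_ls = np.linalg.solve(A, b)

# The stationarity condition holds at the solution (up to rounding).
residual = dtau * L.T @ (kvec - L @ theta_ls)
print(theta_ls, residual)
```

Here the constant vector $\theta$ factors out of the integral, which is exactly the structure that the pointwise Euler-Lagrange equations above do not have.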