Here I propose an iterative approach.
[Update 2]: Unfortunately I took the regression in the wrong direction at first, but the general idea is unaffected by this. The numerical results with the regression taken in the other direction are at the end.
First, note that the regression slope can be computed from the deviations from the mean alone (the formula is recalled below the setup). In the setup of the problem we have two groups of data:
A = 4 (x,y)-measures where both x and y are known, and
B = 4 (x,y)-measures where the x-values are not known but sum up to 280. While their mean is therefore known, their deviations from that mean are arbitrary except that they must sum to zero. Thus we can define the x-deviations to be equal to the known y-deviations, scaled by a constant factor 1/b; the B-points then lie exactly on a line of slope b in the scatterplot, and b can be chosen arbitrarily.
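For reference, the least-squares slope of y on x is $\small b = { \sum_i (x_i - \bar x)(y_i - \bar y) \over \sum_i (x_i - \bar x)^2 } $, which involves the data only through their deviations from the means; this is why fixing the deviations in B is all that matters for the slope.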
Next we run a regression on the data in A only. This gives the regression equation $\small \hat{y}_A = 83.4 + 1.35\, x_A $ (so the slope is b ≈ 1.35).
Since we can choose the x-deviations in B arbitrarily, as long as they sum to zero, we can take the deviations of the y-values and rescale them by the factor $\small {1 \over 1.35} $. This gives the following table for the B-data:
$\small \text{ B =} \begin{array} {rr} x & y \\ \hline 63.350& 160\\ 78.127& 180\\ 52.268& 145\\ 86.255& 191 \end{array} $
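A minimal sketch of this rescaling step in Python, assuming the rounded A-slope b = 1.35; the table above was apparently computed with the unrounded slope, so the code reproduces it only approximately:

```python
import numpy as np

y_B = np.array([160.0, 180.0, 145.0, 191.0])   # known y-values of group B
x_B_mean = 280.0 / 4                           # the missing x-values sum to 280
b = 1.35                                       # slope from the A-regression (rounded)

# x-deviations := y-deviations / b, so the x-values still sum to 280
# and the imputed points lie exactly on a line of slope b
x_B = x_B_mean + (y_B - y_B.mean()) / b
print(np.column_stack([x_B, y_B]))
```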
If we insert that into the original table, we get a sum of squared residuals of about 291.01 (which was also the minimum I could reach by experimenting).
This might still be incomplete (and thus suboptimal), because the mean of the x-values in the complete data set is slightly different from the means of the x-values in A (70.25) and in B (70), and the common optimum must be determined over the complete data set; so possibly this must be extended to a recursive procedure. If the above is not completely wrong or misleading but useful so far, that recursive procedure might be added later.
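A sketch of what such a recursive procedure could look like: refit the regression on the combined data, re-impute the B-deviations with the new slope, and repeat until the slope stabilizes. The A-values below are hypothetical placeholders (only their mean 70.25 is taken from the post); substitute the actual four known pairs:

```python
import numpy as np

# Hypothetical A-data for illustration only -- substitute the real pairs
x_A = np.array([60.0, 68.0, 74.0, 79.0])       # placeholder, mean 70.25
y_A = np.array([165.0, 172.0, 183.0, 193.0])   # placeholder
y_B = np.array([160.0, 180.0, 145.0, 191.0])   # known y-values of group B
x_B_mean = 280.0 / 4                           # missing x-values sum to 280

def ols(x, y):
    """Least-squares slope and intercept of y on x via mean deviations."""
    b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    return b, y.mean() - b * x.mean()

b, a = ols(x_A, y_A)              # start from the regression on A alone
for _ in range(100):
    # re-impute the B x-values: y-deviations rescaled by 1/b around the mean
    x_B = x_B_mean + (y_B - y_B.mean()) / b
    x = np.concatenate([x_A, x_B])
    y = np.concatenate([y_A, y_B])
    b_new, a = ols(x, y)
    if abs(b_new - b) < 1e-9:     # slope stable: the recursion has converged
        b = b_new
        break
    b = b_new

print(b, a, np.sum((y - (a + b * x)) ** 2))   # slope, intercept, residual SSQ
```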
[Update 1]: I applied the recursion to adapt the solution to the problem of the different means of the x-data in A and B, and got a small improvement.
B now becomes
$\small \text{ B =} \begin{array} {rr} x & y \\ \hline 63.50640& 160\\ 77.93662& 180\\ 52.68374& 145\\ 85.87324& 191 \end{array} $, the regression equation becomes $\small \hat{y} = 76.55811894 + 1.385980479\, x $, and the sum of squared residuals becomes 290.887311, an improvement of about 0.26. After this the recursion is stable in the leading six decimals.
[Update 2,3]: Oops, I had taken the wrong direction of the regression. Taking it the other way I get
$\small \text{ B =} \begin{array} {rr} x & y \\ \hline 66.8704363308& 160\\ 73.8250222623& 180\\ 61.6544968822& 145\\ 77.6500445247& 191 \end{array} $; the regression equation becomes $\small \hat{x} = 9.707035 + 0.34773\, y $ and the sum of squared residuals becomes 72.980855 after a couple of recursion steps.
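A quick check of these numbers: the B x-deviations are now the y-deviations multiplied by the slope c ≈ 0.34773 of the x-on-y regression, keeping the mean at 70.

```python
import numpy as np

y_B = np.array([160.0, 180.0, 145.0, 191.0])   # mean is 169
c = 0.34773                    # slope of the regression of x on y (rounded)

# reversed direction: x-deviations are the y-deviations times c
x_B = 70.0 + c * (y_B - y_B.mean())
print(x_B)   # -> [66.87043 73.82503 61.65448 77.65006], matching the table
```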
[Update 4]
The Excel-generated image shows the regression lines for the complete data (black), for the incomplete/estimated data (red), and for the complete data (blue). Joriki's solution might be explained by the effect that the four-fold imputation of the mean into the incomplete set adds the same "weight" of errors to the complete model as the imputation found by the iterative method, because the slope for the incomplete data can be set arbitrarily.