A linear inverse problem is given by:
$\mathbf{d}=\mathbf{A}\mathbf{m}+\mathbf{e}$
where $\mathbf{d}$ is the observed data, $\mathbf{A}$ the theory operator, $\mathbf{m}$ the unknown model, and $\mathbf{e}$ the noise.
To minimize the effect of the noise, a least-squares error (LSE) model estimate is commonly used:
$\mathbf{\tilde{m}}=(\mathbf{A}^\top\mathbf{A})^{-1}\mathbf{A}^\top\mathbf{d}$
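For a small dense system this estimate is straightforward to compute in Matlab. Here is a minimal sketch with made-up sizes (the backslash and lsqr variants are generally preferred numerically over forming the normal equations explicitly):

m_true = randn(50, 1);               % hypothetical model
A = randn(200, 50);                  % hypothetical theory operator
d = A*m_true + 0.1*randn(200, 1);    % noisy data
m1 = (A'*A) \ (A'*d);                % normal equations, as in the formula above
m2 = A \ d;                          % QR-based least squares (better conditioned)
m3 = lsqr(A, d, 1e-6, 100);          % iterative solver, useful for large sparse A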
As an example problem I am considering the model:
$T(x,z) = 0.5\sin(2\pi x) - z$
My observed data are the noisy partial derivatives of $T$:
$P(x,z)=T_x+e$
$Q(x,z)=T_z+e$
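For this particular model the noise-free derivatives are $T_x(x,z) = \pi\cos(2\pi x)$ and $T_z(x,z) = -1$, which is what the code below adds noise to.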
By introducing matrix finite-difference operators I can formulate these two PDEs as linear equations:
$\mathbf{P}=\mathbf{D}_x\mathbf{T}$
$\mathbf{Q}=\mathbf{D}_z\mathbf{T}$
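Stacking the two equations gives one overdetermined system for the vectorized model $\mathbf{t}=\mathbf{T}(:)$:
$\begin{bmatrix}\mathbf{D}_x\\ \mathbf{D}_z\end{bmatrix}\mathbf{t}=\begin{bmatrix}\mathbf{p}\\ \mathbf{q}\end{bmatrix}$
where $\mathbf{p}$ and $\mathbf{q}$ are the vectorized data $\mathbf{P}(:)$ and $\mathbf{Q}(:)$; this is the $\mathbf{A}$ and $\mathbf{b}$ assembled in the code below.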
Below is the LSE solution of these equations:
There is one big problem with this solution: in the model, the line where $T=0$ runs between $z=-0.5$ and $z=0.5$, whereas in the estimate it runs between $z=-1$ and $z=1$. Some "scaling" of the $z$ coordinate seems to be taking place.
Why is this happening?
Other than this, the solution is quite good except for the blocky pattern. This pattern seems to be related to the sampling: the finer the sampling, the finer the blocky pattern.
What is the explanation for this pattern?
In order to get a smoother solution I tried a smoothing regularization constraint: $a\,\mathbf{L}\mathbf{T}=\mathbf{0}$
where $a$ is a scaling parameter and $\mathbf{L}$ is the finite-difference Laplacian matrix.
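Appending the constraint as extra rows turns this into the augmented least-squares problem
$\min_{\mathbf{t}}\left\|\begin{bmatrix}\mathbf{A}\\ a\mathbf{L}\end{bmatrix}\mathbf{t}-\begin{bmatrix}\mathbf{b}\\ \mathbf{0}\end{bmatrix}\right\|_2^2$
which is what the code below solves with lsqr.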
I noticed a trade-off between smoothness and correctness depending on the scaling parameter: for large scaling parameters, the sine-wave pattern of the model turns into a square-wave pattern:
How can I determine a suitable $a$?
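One diagnostic I could try is an L-curve: sweep $a$, plot the data misfit against the roughness on log-log axes, and look for the corner. A sketch, assuming the A, b and L assembled in the code below:

as = logspace(-6, -1, 20);
res = zeros(size(as)); rough = zeros(size(as));
for k = 1:numel(as)
    t = lsqr([A; as(k)*L], [b; zeros(size(L,1),1)], 1e-3, 1e5);
    res(k)   = norm(A*t - b);    % data misfit
    rough(k) = norm(L*t);        % roughness penalty
end
loglog(res, rough, '-o');        % the corner suggests a good a
xlabel('||A t - b||'); ylabel('||L t||');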
Inverse problems are often troubled by instabilities, which make it necessary to introduce a damping regularization term. That does not seem to be the case for this problem. However, since $\mathbf{A}$ is not square, I could not find its condition number using cond() in Matlab.
Is there some way in Matlab to find out whether an LSE problem with a non-square matrix is unstable?
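Since the $\mathbf{A}$ below is sparse and rectangular, one option I know of is to estimate the 2-norm condition number from the extreme singular values with svds (a sketch; the 'smallest' option needs a reasonably recent Matlab release):

smax = svds(A, 1);               % largest singular value
smin = svds(A, 1, 'smallest');   % smallest singular value
kappa = smax / smin              % 2-norm condition number estimate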
BTW, here is the Matlab code I used:
% number of samples in x and z direction:
n = 101;
% amount of noise added to partial derivatives:
e = 0.5;
x = linspace(-1,1,n);
z = linspace(-1,1,n);
dx = x(2) - x(1);
dz = z(2) - z(1);
[X,Z] = meshgrid(x,z);

% Hidden model:
subplot(1,3,1);
T = 0.5*sin(2*pi*X) - Z;
imagesc(x,z,T); xlabel('x'); ylabel('z');
title('T = 0.5*sin(2*pi*x) - z'); colorbar;

% Observed noisy data:
P = pi*cos(2*pi*X) + e*randn(n,n);   % Tx
Q = -ones(n,n) + e*randn(n,n);       % Tz
subplot(1,3,2); imagesc(x,z,P); xlabel('x'); ylabel('z'); title('P = Tx + e'); colorbar;
subplot(1,3,3); imagesc(x,z,Q); xlabel('x'); ylabel('z'); title('Q = Tz + e'); colorbar;

% Central difference matrices:
E = sparse(2:n, 1:n-1, 1, n, n);
D = (E' - E) / 2;
I = speye(n);
Dx = kron(D,I)/dx;
Dz = kron(I,D)/dz;

% Stack the two gradient equations into one system:
A = [Dx; Dz];
b = [P(:); Q(:)];

% Least squares solution:
figure;
subplot(1,2,1); imagesc(x,z,T); xlabel('x'); ylabel('z'); title('T'); colorbar;
tlse = lsqr(A, b, 1e-3, 1000*100);
Tlse = reshape(tlse, [n n]);
subplot(1,2,2); imagesc(x,z,Tlse); xlabel('x'); ylabel('z'); title('Tlse'); colorbar;

% Least squares solution regularized by smoothing Laplace operator:
figure;
D2 = E + E' + sparse(1:n, 1:n, -2, n, n);
Dxx = kron(D2,I)/(dx^2);
Dzz = kron(I,D2)/(dz^2);
L = Dxx + Dzz;
ns = 3;
for si = 1:ns
    subplot(1,ns,si);
    % regularization weight:
    a = (si - 1)*5e-4;
    Ar = [A; a*L];
    br = [b; zeros(n^2,1)];
    tlse = lsqr(Ar, br, 1e-3, 1000*100);
    Tlse = reshape(tlse, [n n]);
    imagesc(x,z,Tlse);
    title(sprintf('Tlse a=%g', a));
end
EDIT: My central difference matrix D had a problem at the edges. As an example, for size n = 5 it was:
D =
         0    0.5000         0         0         0
   -0.5000         0    0.5000         0         0
         0   -0.5000         0    0.5000         0
         0         0   -0.5000         0    0.5000
         0         0         0   -0.5000         0
I rewrote this to:
D =
   -1.0000    1.0000         0         0         0
   -0.5000         0    0.5000         0         0
         0   -0.5000         0    0.5000         0
         0         0   -0.5000         0    0.5000
         0         0         0   -1.0000    1.0000
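One way to build this corrected operator, reusing the E from the code above:

E = sparse(2:n, 1:n-1, 1, n, n);
D = (E' - E) / 2;          % centered differences in the interior
D(1, 1:2)   = [-1 1];      % one-sided (forward) difference at the first row
D(n, n-1:n) = [-1 1];      % one-sided (backward) difference at the last row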
With the corrected difference matrices I get much nicer results:
And this is without any regularization. Sorry for the sloppiness. I learned a lot though :-). Thanks for your answers!