One approach is to consider this as a probabilistic inverse problem. It may be overkill for purely theoretical purposes, where there is a geometric formula for the exact solution, but this is how I would do it in a real-world situation where there is uncertainty. The following framework is probably close to how it's actually done in real GPS devices.
Probabilistic/Bayesian theory:
Define things as follows:
- Call the unknown point location $p=(x,y)$,
- the measurement locations $m_1,m_2,m_3$ (where $m_1=(x_{m1},y_{m1})$, etc.),
- the measured distances $d=(d_1,d_2,d_3)$,
- and the measurement noise $\eta = (\eta_1,\eta_2,\eta_3)$.
Then the situation can be encapsulated via the following equation: $d=g(p)+\eta,$
where $g(p):=(||m_1-p||,||m_2-p||,||m_3-p||)$. Based on measurements of $d$, we want to estimate $p$. The continuous density version of Bayes' theorem tells us that $f_P(p|d)=\frac{f_D(d|p)f_P(p)}{f_D(d)}.$ Let's break down what this means piece by piece:
- $f_P(p|d)$ is the posterior probability density for your receiver to be at location $p$, taking into account that you measured the distances to be $d$.
- Our goal is to find either the whole distribution $f_P(p|d)$, or perhaps just certain properties of it such as the expected value, the maximum a posteriori (MAP) point, the variance, etc.
- The expected value and MAP point of the posterior $f_P(p|d)$ would be good "best guesses" for where the receiver is, and the variance would say how certain you are of that guess.
- $f_D(d|p)$ is the likelihood of measuring the set of distances $d$ if the true receiver location were $p$, taking into account the noise.
- Considering a slight rearrangement of the model equation, $\eta=d-g(p)$, we can express the likelihood as $f_D(d|p)=\rho_{\eta}(d-g(p))$, where $\rho_{\eta}$ is the probability distribution of the noise $\eta$.
- The noise distribution $\rho_\eta$ has to be modeled based on your knowledge of the measurement device - more on this later.
- $f_P(p)$ is the prior probability distribution on the receiver location: how likely you think it is to be in a given location before you even measure the distances.
- For example, if any location is as likely as any other, $f_P(p)$ would be a uniform distribution on the box.
- $f_D(d)$ is the prior probability distribution on the distance measurements $d$.
- Since $f_D(d)$ doesn't depend on $p$, it's basically just a normalization constant.
- Properties of $f_D(d)$ are important in theory for showing that the Bayesian framework is legitimate, but in practice you can just ignore it. It won't affect the mean, MAP point, variance, etc.
Now let's address the modeling issues mentioned above. It might be reasonable to say that the noise components are mean-zero normally distributed random variables with standard deviations $\sigma_1, \sigma_2, \sigma_3$ for the distance measurements $d_1, d_2$, and $d_3$ respectively, and that the prior distribution is uniform on the box. Then the likelihood is, up to a normalization constant, $f_D(d|p)\propto e^{-\frac{1}{2}\left[\sigma_1^{-2}(d_1-||m_1-p||)^2+\sigma_2^{-2}(d_2-||m_2-p||)^2+\sigma_3^{-2}(d_3-||m_3-p||)^2\right]},$ so the posterior is, $f_P(p|d)=N\begin{cases}e^{-\frac{1}{2}\left[\sigma_1^{-2}(d_1-||m_1-p||)^2+\sigma_2^{-2}(d_2-||m_2-p||)^2+\sigma_3^{-2}(d_3-||m_3-p||)^2\right]}, & p \text{ in the box,} \\ 0, & p \text{ not in the box,}\end{cases}$ where $N$ is a fixed normalization constant.
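To make this concrete, here is a minimal sketch (not part of the original answer) that evaluates the unnormalized posterior on a grid; the box, measurement locations, and 20% noise level match the code further below, while the grid resolution and function names are illustrative choices:

```python
import numpy as np

# Measurement locations (three corners of the 100x100 box) and measured distances
ms = np.array([[100., 0.], [0., 100.], [100., 100.]])
d = np.array([53., 51., 72.])
sigma = 0.2 * d  # assumed 20% measurement uncertainty

def g(p):
    """Forward map g(p) = (||m_1 - p||, ||m_2 - p||, ||m_3 - p||)."""
    return np.linalg.norm(ms - p, axis=1)

def log_posterior_unnormalized(p):
    """Log of the unnormalized posterior; -inf outside the box."""
    if np.any(p < 0) or np.any(p > 100):
        return -np.inf
    r = d - g(p)  # residuals, i.e. the noise eta = d - g(p)
    return -0.5 * np.sum((r / sigma) ** 2)

# Evaluate on a grid; the normalization constant N is never needed.
xs = np.linspace(0, 100, 201)
logpost = np.array([[log_posterior_unnormalized(np.array([x, y]))
                     for x in xs] for y in xs])
```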
Extracting information from the posterior:
Given the posterior distribution, there are a few things you might want to do.
Drawing samples:
First, you might want to draw samples from it. Markov chain Monte Carlo (MCMC) is a good way to do this, since it works without you knowing the normalization constant. MCMC also has advantages for high-dimensional problems; that doesn't matter here since this problem is small, but if you have thousands of measurement devices and thousands of points, such considerations will become important.
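As an illustration, here is a bare-bones random-walk Metropolis sampler, a minimal sketch that reuses the hypothetical `log_posterior_unnormalized` function from the snippet above; the step size and sample count are arbitrary choices:

```python
import numpy as np

def metropolis(logpost, p0, n_samples=5000, step=5.0, seed=0):
    """Random-walk Metropolis: only needs the unnormalized log posterior."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float)
    lp = logpost(p)
    samples = []
    for _ in range(n_samples):
        q = p + step * rng.standard_normal(p.shape)  # Gaussian proposal
        lq = logpost(q)
        # Accept with probability min(1, posterior(q)/posterior(p));
        # the unknown normalization constant cancels in this ratio.
        if np.log(rng.uniform()) < lq - lp:
            p, lp = q, lq
        samples.append(p.copy())
    return np.array(samples)

# Example: samples = metropolis(log_posterior_unnormalized, np.array([50., 50.]))
```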
Best guess for receiver location:
Second, you might want to find the maximum a posteriori (MAP) point, in other words the point $p$ where $f_P(p|d)$ is maximized. To do that, solve the constrained nonlinear least-squares problem, \begin{align}\min_p & ~ \sigma_1^{-2}(d_1-||m_1-p||)^2+\sigma_2^{-2}(d_2-||m_2-p||)^2+\sigma_3^{-2}(d_3-||m_3-p||)^2 \\ \text{such that } & p \text{ is in the box,}\end{align} where the overall factor of $\frac{1}{2}$ has been dropped since it does not change the minimizer.
Uncertainty quantification:
Third, you might want to quantify the uncertainty. A quick and dirty way to do this is to draw a few, say 5, samples from the posterior and then take their variance. A more sophisticated way is to take the trace of the inverse of the Hessian of the negative log posterior (one half of the objective above), evaluated at the MAP point. It turns out that this inverse Hessian is the covariance of a Gaussian (Laplace) approximation to the posterior, though a proof of this is somewhat involved.
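Here is a rough sketch of that Hessian route, using a simple finite-difference Hessian (my own quick-and-dirty choice, not part of the original answer); it assumes you already have the MAP point `p_map` and an objective function like the `J` in the code below:

```python
import numpy as np

def numerical_hessian(f, p, h=1e-3):
    """Central finite-difference Hessian of a scalar function f at the point p."""
    n = len(p)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p.copy(); pp[i] += h; pp[j] += h
            pm = p.copy(); pm[i] += h; pm[j] -= h
            mp = p.copy(); mp[i] -= h; mp[j] += h
            mm = p.copy(); mm[i] -= h; mm[j] -= h
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * h * h)
    return H

# The negative log posterior is J/2 (the code below omits the factor 1/2),
# so the Laplace covariance is inv(Hessian(J/2)) = 2 * inv(Hessian(J)):
# H = numerical_hessian(J, p_map)
# cov = 2.0 * np.linalg.inv(H)
# total_variance = np.trace(cov)
```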
For an explanation of Bayesian inversion in a similar but easier example, you might read the excellent expository paper, "Estimating parameters in physical models through Bayesian inversion: a complete example" by Allmaras et al.
Summary: The situation can be framed as a probabilistic inverse problem, where noise-corrupted measured distances are known and the receiver's position is unknown. Under this framework, Bayes' theorem provides a formula for the posterior probability density of the receiver position, based on (1) the prior probability density, (2) the noise distribution, and (3) the measured data. If the prior is modeled as a uniform distribution and the noise is Gaussian, then the posterior takes a particularly simple form. Samples from the posterior can be drawn with Markov chain Monte Carlo, and the posterior MAP point can be found by solving a small, computationally easy nonlinear least-squares problem.
Edit: I did a quick code-up of the above MAP point calculation in Python using an off-the-shelf optimization routine from scipy, and it seems to work well.
Code (released to the public domain if you want to use it; it does not use gradient information):
```python
import numpy as np
from scipy.optimize import minimize
from math import sqrt, pow
import matplotlib.pyplot as plt

def recieverMAPpoint(d1, d2, d3):
    # measurement locations (three corners of the box)
    m1x = 100.; m1y = 0.
    m2x = 0.;   m2y = 100.
    m3x = 100.; m3y = 100.

    # measurement uncertainty = 20%
    s1 = .2*d1; s2 = .2*d2; s3 = .2*d3

    # optimization functional J
    def J(p):
        x = p[0]; y = p[1]
        r1 = d1 - sqrt(pow(m1x - x, 2) + pow(m1y - y, 2))
        r2 = d2 - sqrt(pow(m2x - x, 2) + pow(m2y - y, 2))
        r3 = d3 - sqrt(pow(m3x - x, 2) + pow(m3y - y, 2))
        return pow(r1/s1, 2) + pow(r2/s2, 2) + pow(r3/s3, 2)

    # constraints box B
    B = ({'type': 'ineq', 'fun': lambda p: np.array([p[0], p[1]])},
         {'type': 'ineq', 'fun': lambda p: np.array([100 - p[0], 100 - p[1]])})

    # minimize J over B
    res = minimize(J, np.array([50, 50]), constraints=B,
                   options={'disp': True}, method='SLSQP')

    # plot constraints box
    plt.plot([0, 100, 100, 0, 0], [0, 0, 100, 100, 0], color='black')

    # plot distance circles
    c1 = plt.Circle((m1x, m1y), radius=d1, ec='blue', fc='none')
    c2 = plt.Circle((m2x, m2y), radius=d2, ec='blue', fc='none')
    c3 = plt.Circle((m3x, m3y), radius=d3, ec='blue', fc='none')
    plt.gca().add_patch(c1); plt.gca().add_patch(c2); plt.gca().add_patch(c3)

    # plot optimal point
    plt.plot(res.x[0], res.x[1], 'ro')
    plt.axis([-20, 120, -20, 120])
    plt.show()

    return res.x
```
Test Cases:
Example from original post:
```
>>> recieverMAPpoint(53,51,72)
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 6.47506375609
            Iterations: 9
            Function evaluations: 36
            Gradient evaluations: 9
array([ 48.3354663 , 50.77404997])
```

Measurement error overestimates distance:
```
>>> recieverMAPpoint(70,100,40)
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 0.241068422167
            Iterations: 12
            Function evaluations: 48
            Gradient evaluations: 12
```

Measurement error underestimates distance:
```
>>> recieverMAPpoint(30,100,70)
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 0.472436673945
            Iterations: 13
            Function evaluations: 52
            Gradient evaluations: 13
array([ 85.47617804, 27.93369774])
```

Distance measurements exact:
```
>>> recieverMAPpoint(80.6,67.1,92.2)
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 1.20656973089e-07
            Iterations: 11
            Function evaluations: 44
            Gradient evaluations: 11
array([ 30.012965 , 39.98444119])
```

Optimal point on edge of box (constraint active):
```
>>> recieverMAPpoint(60,120,130)
Optimization terminated successfully.    (Exit mode 0)
            Current function value: 0.526683755534
            Iterations: 13
            Function evaluations: 52
            Gradient evaluations: 13
array([ 3.96901773e+01, -1.76150977e-13])
```
