I think neither method will lead to the most accurate solution: if one first estimates the planes and then finds their intersection point, one does not directly enforce the constraint that all planes intersect exactly in a single point.
If I understand correctly, this is what we know:
- There is a set of $M$ planes (three or more) which intersect in a single point $\mathbf a$. (This is our parametric model.)
- The planes are represented by a set of $K$ points $\mathbf x^{(k)}$ affected by zero-mean Gaussian noise (our data).
- We assume that we know which point belongs to which plane. Thus we have a mapping $I: \{1,...,K\} \rightarrow \{1,...,M\}$ which maps a point index $k$ to a plane index $I(k)$. (If we don't know the mapping $I$, we can find it using RANSAC or a similar method.)
Maximizing the likelihood of the model parameters is equivalent to minimizing the geometric error between the data and the model.
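To make this equivalence explicit (assuming the noise is isotropic with standard deviation $\sigma$, and writing $\bar{\mathbf x}^{(k)}$ for the unknown true point on plane $I(k)$), the negative log-likelihood is, up to constants,

$ -\log L = \mathrm{const} + \frac{1}{2\sigma^2}\sum_{k=1}^K \|\mathbf x^{(k)} - \bar{\mathbf x}^{(k)}\|^2, $

and since the best choice for each nuisance point $\bar{\mathbf x}^{(k)}$ is the orthogonal projection of $\mathbf x^{(k)}$ onto its plane, what remains to be minimized is exactly the sum of squared point-to-plane distances defined below.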
Our model might look as follows:
All planes intersect in a single point $\mathbf a = (a_1,a_2,a_3)^\top$. Each plane has a different orientation, specified by a normal vector $\mathbf n^{(m)}$. Our $M$ planes are then represented by $3+3M$ parameters: $\mathbf p = (a_1,a_2,a_3,n_1^{(1)},n_2^{(1)},n_3^{(1)} ,...,n_1^{(M)},n_2^{(M)},n_3^{(M)})^\top$. (However, the problem has only $3+2M$ degrees of freedom, since the lengths of the normals are irrelevant. Thus we have a gauge freedom of dimension $M$. If necessary, this gauge freedom can be removed by using a minimal, two-parameter parametrisation of each normal. A good possibility is to restrict the normals to lie on the unit sphere, as described in Hartley, Zisserman: "Multiple View Geometry", second edition, Appendix A6.9.3.)
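As a rough sketch of how this parametrisation could look in code (Python/NumPy; the helper names `pack_params`, `unpack_params` and `normal_from_angles` are my own, not from any library):

```python
import numpy as np

def pack_params(a, normals):
    # Stack the intersection point a (shape (3,)) and the M normals
    # (shape (M, 3)) into one parameter vector p of length 3 + 3M.
    return np.concatenate([a, normals.ravel()])

def unpack_params(p):
    # Inverse of pack_params: split p into a (3,) and normals (M, 3).
    return p[:3], p[3:].reshape(-1, 3)

def normal_from_angles(theta, phi):
    # One simple (though not singularity-free) minimal 2-parameter
    # representation of a unit normal; this removes the gauge freedom,
    # giving 3 + 2M instead of 3 + 3M parameters.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```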
Now the geometric error we wish to minimize is:
$ S = \sum_{k=1}^K [d(\mathbf x^{(k)}, \mathbf a,\mathbf n^{(I(k))})]^2 $
Here, $d(\mathbf x, \mathbf a,\mathbf n)$ is the orthogonal distance between a point $\mathbf x$ and the plane through $\mathbf a$ with normal $\mathbf n$, i.e. $d(\mathbf x, \mathbf a,\mathbf n) = \frac{|\mathbf n^\top(\mathbf x - \mathbf a)|}{\|\mathbf n\|}$.
We can find the optimal parameters $\mathbf p$ by jointly minimizing $S$ with respect to $\mathbf p$, e.g. with a standard nonlinear least-squares solver such as Levenberg-Marquardt.
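A minimal end-to-end sketch of this joint minimization using `scipy.optimize.least_squares` (the synthetic data, the per-plane PCA initialisation and all variable names are my own illustration; the normals are kept as full 3-vectors, which is harmless here because the residuals are divided by $\|\mathbf n\|$):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, points, plane_index):
    # Signed point-to-plane distances d(x^(k), a, n^(I(k))) for all K points,
    # with p = (a_1, a_2, a_3, n^(1), ..., n^(M)) as described above.
    a = p[:3]
    normals = p[3:].reshape(-1, 3)
    n = normals[plane_index]                          # (K, 3): normal of each point's plane
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    return np.einsum('ij,ij->i', n, points - a)       # n . (x - a) / ||n||

# --- synthetic test data (only to make the sketch runnable) ---
rng = np.random.default_rng(0)
M, points_per_plane, noise = 3, 50, 0.01
a_true = np.array([1.0, -2.0, 0.5])
plane_index = np.repeat(np.arange(M), points_per_plane)
points = []
for n in rng.normal(size=(M, 3)):
    n /= np.linalg.norm(n)
    u, v = np.linalg.svd(n.reshape(1, 3))[2][1:]      # basis of the plane through a_true
    coeffs = rng.uniform(-1.0, 1.0, size=(points_per_plane, 2))
    points.append(a_true + coeffs @ np.vstack([u, v]))
points = np.vstack(points) + noise * rng.normal(size=(M * points_per_plane, 3))

# --- initial guess: naive per-plane PCA fits (the "two-step" approach) ---
normals0 = np.array([np.linalg.svd(points[plane_index == m]
                                   - points[plane_index == m].mean(axis=0))[2][-1]
                     for m in range(M)])
p0 = np.concatenate([points.mean(axis=0), normals0.ravel()])

# --- joint minimization of S over p ---
# The M redundant length parameters are harmless here; a minimal
# parametrisation (e.g. normal_from_angles above) would remove them.
result = least_squares(residuals, p0, args=(points, plane_index))
a_hat, normals_hat = result.x[:3], result.x[3:].reshape(-1, 3)
print("estimated intersection point:", a_hat)
```

Note that the per-plane fits only serve as an initial guess here; the final estimate of $\mathbf a$ comes from the joint minimization, so the single-intersection constraint is enforced exactly.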