I'm not an expert on this sort of thing; I'm just answering as best I can because you don't seem to be getting any answer otherwise.
I think it would probably be pretty complicated to get an exact covariance matrix for your line parameters. It should be possible to calculate the covariance matrix in linear approximation using propagation of uncertainty as described below. However, note that such linear estimates are usually biased. You can use this to get an idea of how uncertain your line parameters are, but keep in mind that it's only an approximation to the exact covariance matrix.
To avoid misunderstandings: I'll be talking a lot about the covariance matrix of the plane parameters. This is not the covariance matrix you display in the question, which is the covariance matrix of the point coordinates.
I don't know whether the code you're using already gives you a covariance matrix for the plane parameters. I'll say something below about how you might obtain one if it doesn't, but first let's assume you have two plane fits, given by parameter vectors $(d_i,n_{xi},n_{yi},n_{zi})$ with $i=1,2$, where $(n_{xi},n_{yi},n_{zi})$ is the normal for plane $i$ and $d_i$ is the distance of plane $i$ from the origin, together with associated $4\times4$ covariance matrices $M_i$ describing the uncertainty in these parameters. (Since the planes are derived from separate sets of data points, I'm assuming they can be treated as independent.)
Now your line parameters are $(d_1,d_2,(n_1\times n_2)_x,(n_1\times n_2)_y,(n_1\times n_2)_z)$. In a linear approximation, the errors due to errors in $n_1$ and $n_2$ simply add up,
$(n_1+\Delta n_1)\times (n_2+\Delta n_2)\approx n_1\times n_2+\Delta n_1\times n_2+n_1\times\Delta n_2\;,$
and the Jacobian matrix for the errors is given by
$ \begin{pmatrix} \Delta d_1 \\ \Delta d_2 \\ \Delta (n_1\times n_2)_x \\ \Delta (n_1\times n_2)_y \\ \Delta (n_1\times n_2)_z \end{pmatrix} = \begin{pmatrix} 1 \\ &&&&1 \\ &0&n_{z2}&-n_{y2}&&0&-n_{z1}&n_{y1}\\ &-n_{z2}&0&n_{x2}&&n_{z1}&0&-n_{x1}\\ &n_{y2}&-n_{x2}&0&&-n_{y1}&n_{x1}&0\\ \end{pmatrix} \begin{pmatrix} \Delta d_1 \\ \Delta n_{x1} \\ \Delta n_{y1} \\ \Delta n_{z1} \\ \Delta d_2 \\ \Delta n_{x2} \\ \Delta n_{y2} \\ \Delta n_{z2} \end{pmatrix} \;. $
Let's denote this by $\Delta l=J\Delta p$. Then you can compute the covariance matrix $M_l$ of the line parameters from those of the plane parameters using propagation of uncertainty:
$M_l=J\begin{pmatrix}M_1\\&M_2\end{pmatrix}J^{\text T}\;.$
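Assuming you already have the $M_i$, this propagation step is a few lines of NumPy. Here's a sketch; the normals, distances and covariance matrices below are invented placeholders, and the parameter ordering $(d,n_x,n_y,n_z)$ matches the Jacobian above:

```python
import numpy as np

# Hypothetical plane fits: unit normals n1, n2, distances d1, d2, and
# 4x4 covariance matrices M1, M2 (parameter order: d, nx, ny, nz).
n1 = np.array([0.0, 0.0, 1.0])
n2 = np.array([0.0, 1.0, 0.0])
d1, d2 = 1.0, 2.0
M1 = np.diag([1e-4, 1e-6, 1e-6, 1e-6])
M2 = np.diag([1e-4, 1e-6, 1e-6, 1e-6])

def skew(v):
    """Matrix S with S @ w = v x w (cross product as a matrix)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Jacobian of (d1, d2, n1 x n2) with respect to (d1, n1, d2, n2):
# d(n1 x n2)/dn1 = -[n2]_x  and  d(n1 x n2)/dn2 = [n1]_x .
J = np.zeros((5, 8))
J[0, 0] = 1.0
J[1, 4] = 1.0
J[2:5, 1:4] = -skew(n2)
J[2:5, 5:8] = skew(n1)

# Block-diagonal covariance of the stacked plane parameters,
# assuming the two fits are independent.
Mp = np.zeros((8, 8))
Mp[:4, :4] = M1
Mp[4:, 4:] = M2

Ml = J @ Mp @ J.T   # 5x5 covariance of the line parameters
```

(The two `skew` blocks are exactly the two $3\times3$ blocks of the matrix displayed above, just written compactly.)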
This is relatively straightforward because the line parameters are given as explicit functions of the plane parameters. Approximating the $M_i$ if your code doesn't already provide them is a bit more complicated, since the plane parameters are only given implicitly as solutions of an eigenvalue problem.
The following is somewhat speculative; I haven't tried it out, but this is how I'd approach the problem if I had to solve it. I'll only sketch the solution since I don't know whether you'll be using it.
Determining the change in an eigenvector to linear order in a change in the matrix is a well-known problem in perturbation theory. I'll only treat the non-degenerate case where the lowest eigenvalue of the covariance matrix of the point coordinates is distinct; if it isn't, your plane parameters are useless anyway. I won't use indices $1,2$ for the planes in this part, since this calculation is completely separate for each plane. I'll denote the covariance matrix of the point coordinates by $P$.
You've obtained the eigenvector corresponding to the lowest eigenvalue for the eigenvalue problem
$Pn=\lambda n\;.$
To find the change $\Delta n$ in $n$ corresponding to a change $\Delta P$ in $P$, write
$(P+\Delta P)(n+\Delta n)=(\lambda+\Delta\lambda)(n+\Delta n)\;.$
To zeroth order, this just yields the eigenvalue problem again. To linear order, we get
$P\Delta n+\Delta P n=\lambda\Delta n+\Delta\lambda n\;.\tag1$
We can choose the eigenvectors to be orthonormal. Then multiplying $(1)$ by $n^{\text T}$ yields
$n^{\text T}P\Delta n+n^{\text T}\Delta P n=\lambda n^{\text T}\Delta n+\Delta\lambda n^{\text T}n\;.$
In the first term on the left-hand side, $n^{\text T}P=n^{\text T}\lambda$ (since $P$ is symmetric), so the first terms cancel. With $n^{\text T}n=1$, that leaves
$\Delta\lambda=n^{\text T}\Delta P n\;.$
The result for the change in the eigenvector is unfortunately slightly more complicated. We can multiply $(1)$ by the other two eigenvectors $e_1$ and $e_2$ to obtain
$e_i^{\text T}P\Delta n+e_i^{\text T}\Delta P n=\lambda e_i^{\text T}\Delta n+\Delta\lambda e_i^{\text T}n\;.$
The second term on the right-hand side vanishes because the eigenvectors are orthogonal. In the first term on the left-hand side, $e_i^{\text T}P=e_i^{\text T}\lambda_i$, where $\lambda_i$ is the eigenvalue corresponding to $e_i$. Thus we get
$e_i^{\text T}\Delta n=\frac{e_i^{\text T}\Delta P n}{\lambda-\lambda_i}\;.$
Since the eigenvectors are orthonormal, this is the coefficient of $e_i$ in an expansion of $\Delta n$ in the eigenvectors (the coefficient of $n$ being zero since the change in a unit vector is orthogonal to that vector). So we have
$\Delta n=\sum_{i=1,2}\frac{e_i^{\text T}\Delta P n}{\lambda-\lambda_i}e_i\;.$
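As a sanity check on these first-order formulas, here's a small NumPy sketch (the symmetric matrix $P$ and the perturbation $\Delta P$ are invented for illustration) comparing the predicted $\Delta\lambda$ and $\Delta n$ against an exact re-diagonalisation:

```python
import numpy as np

# An invented symmetric "point covariance" with distinct eigenvalues.
P = np.array([[2.0, 0.3, 0.1],
              [0.3, 3.0, 0.2],
              [0.1, 0.2, 5.0]])
lam, vecs = np.linalg.eigh(P)          # eigenvalues in ascending order
n = vecs[:, 0]                          # eigenvector of the lowest eigenvalue
e1, e2 = vecs[:, 1], vecs[:, 2]

# A small symmetric perturbation of P (also invented).
B = 1e-6 * np.array([[0.5, -0.2, 0.1],
                     [0.3,  0.4, -0.6],
                     [0.2,  0.1, 0.3]])
dP = B + B.T

# First-order predictions from the formulas above.
dlam = n @ dP @ n
dn = (e1 @ dP @ n) / (lam[0] - lam[1]) * e1 \
   + (e2 @ dP @ n) / (lam[0] - lam[2]) * e2

# Exact re-diagonalisation for comparison.
lam2, vecs2 = np.linalg.eigh(P + dP)
n2 = vecs2[:, 0] * np.sign(vecs2[:, 0] @ n)   # fix the sign ambiguity
err_lam = abs(lam2[0] - lam[0] - dlam)        # second order in dP
err_n = np.linalg.norm(n2 - n - dn)           # second order in dP
```

The residuals `err_lam` and `err_n` should be of second order in $\Delta P$, i.e. tiny compared to `dlam` and `dn` themselves.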
Thus, if you solve the entire eigenvalue problem for the covariance matrix of the point coordinates (rather than just finding the eigenvector corresponding to the lowest eigenvalue), you have all the ingredients to put together the Jacobian that propagates the uncertainty from the errors $\Delta P$ to the errors $\Delta n$.
The distance $d$ of the plane from the origin is just the scalar product of the centre of mass $c$ of the points with the normal vector $n$,
$d=c^{\text T}n\;,$
so the error for that is
$\Delta d=\Delta c^{\text T}n+c^{\text T}\Delta n\;,$
where $\Delta c$ is just the average of the errors in the point coordinate vectors. The errors $\Delta P$ are also readily obtained to linear order in terms of the errors in the point coordinates. That gives you Jacobians all the way from the point coordinates to $\Delta P$ and $\Delta c$, from those to $\Delta d$ and $\Delta n$ (which together form $\Delta p$), and finally to $\Delta l$; applying them one after the other to the covariance matrix of the point coordinates yields a linear approximation to the covariance matrix of the line parameters.
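To make the chaining concrete, here's a numerical sketch (everything below is synthetic, invented for illustration): fit a plane to some points, perturb the point coordinates, and check that the linearised chain $\Delta c,\Delta P\to\Delta n\to\Delta d$ predicts the change seen in a full refit:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_plane(points):
    """Centroid c, point covariance P, and its eigen-decomposition."""
    c = points.mean(axis=0)
    P = np.cov(points.T)
    lam, vecs = np.linalg.eigh(P)      # eigenvalues in ascending order
    return c, P, lam, vecs

# Synthetic points scattered about the plane z = 1.
pts = np.column_stack([rng.standard_normal(50),
                       rng.standard_normal(50),
                       1.0 + 0.01 * rng.standard_normal(50)])
c, P, lam, vecs = fit_plane(pts)
n = vecs[:, 0]                          # normal from the lowest eigenvalue
d = c @ n

# Perturb the point coordinates and refit exactly.
dpts = 1e-6 * rng.standard_normal(pts.shape)
c2, P2, _, vecs2 = fit_plane(pts + dpts)
n2 = vecs2[:, 0] * np.sign(vecs2[:, 0] @ n)   # fix the sign ambiguity
d2 = c2 @ n2

# Linear predictions: dc and dP feed the eigenvector perturbation,
# which in turn feeds  dd = dc.n + c.dn .
dc, dP = c2 - c, P2 - P
dn = sum((vecs[:, i] @ dP @ n) / (lam[0] - lam[i]) * vecs[:, i]
         for i in (1, 2))
dd = dc @ n + c @ dn
```

Here I've taken $\Delta c$ and $\Delta P$ from the refit directly rather than writing out their own Jacobians, but those are straightforward since $c$ is linear and $P$ quadratic in the point coordinates.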
Note that the result diverges as the lowest eigenvalue of $P$ becomes degenerate; that makes sense since in that case you don't really have any idea about the normal direction of the plane.