I am creating a calibration system for an eye tracking device. This calibration involves having the user look at five points on a screen. The eye tracker then reports where it believes the user was looking. The result is a map of five co-ordinates that are likely to be stretched, twisted and translated with respect to the actual co-ordinates. Something like this:
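For instance (these numbers are invented purely for illustration), a target drawn at (100, 100) might be reported at roughly (118, 92), one at (700, 100) at (731, 120), the centre target at (400, 300) at (421, 311), and so on for the remaining two points.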
So, I now know where the eye tracker thinks the user is looking for each of those five points. From this, it should be possible to calculate where the user is really looking for any co-ordinates, so long as they lie within the calibrated zone.
The way I do this at present is to treat the X and Y axes separately: I plot the real vs. measured X co-ordinates on a scatterplot, find the linear regression equation, and do the same for the Y co-ordinates.
Thus, I end up with a 'y = mx + c' equation for each of the horizontal and vertical axes (i.e. a 'scale' and an 'intercept' value per axis). To then work out where the user is actually looking for any measured co-ordinates, I simply transform the measured X and Y values separately using these scale and intercept values.
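To make sure I'm describing this clearly, here is a minimal sketch of the current approach in Python with NumPy (chosen only for brevity; the numbers are the same invented ones as above, not real measurements):

```python
import numpy as np

# Invented calibration data, for illustration only.
# Each row is (x, y): the five on-screen targets and where the tracker reported the gaze.
actual   = np.array([(100, 100), (700, 100), (400, 300), (100, 500), (700, 500)], dtype=float)
measured = np.array([(118,  92), (731, 120), (421, 311), (130, 522), (748, 540)], dtype=float)

# Fit 'actual = scale * measured + intercept' separately for each axis.
scale_x, intercept_x = np.polyfit(measured[:, 0], actual[:, 0], 1)
scale_y, intercept_y = np.polyfit(measured[:, 1], actual[:, 1], 1)

def correct(point):
    """Map a measured gaze point to estimated screen co-ordinates."""
    x, y = point
    return (scale_x * x + intercept_x, scale_y * y + intercept_y)

print(correct((421, 311)))  # should come out near the centre target, (400, 300)
```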
However, I am not a mathematician. I have recently come across the concept of 'eigenvectors' and wonder whether this (or another approach) could provide a more robust way of applying my calibration correctly.
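For concreteness, here is a rough sketch (again purely illustrative, using the same invented data) of the kind of alternative I have in mind: a single 2D affine transform fitted to all five points at once, in which the X and Y axes are allowed to mix. I don't know whether this is the right tool, which is part of what I'm asking:

```python
import numpy as np

# Same invented calibration data as above.
actual   = np.array([(100, 100), (700, 100), (400, 300), (100, 500), (700, 500)], dtype=float)
measured = np.array([(118,  92), (731, 120), (421, 311), (130, 522), (748, 540)], dtype=float)

# Model: actual_x = a*x + b*y + c   and   actual_y = d*x + e*y + f,
# i.e. a full affine transform with six parameters, fitted by least squares.
design = np.column_stack([measured, np.ones(len(measured))])
params_x, *_ = np.linalg.lstsq(design, actual[:, 0], rcond=None)
params_y, *_ = np.linalg.lstsq(design, actual[:, 1], rcond=None)

def correct(point):
    """Map a measured gaze point to estimated screen co-ordinates."""
    x, y = point
    return (params_x @ [x, y, 1.0], params_y @ [x, y, 1.0])

print(correct((421, 311)))  # should come out near the centre target, (400, 300)
```

The difference from my current method is the b*y and d*x cross-terms, which (as I understand it) are where a rotation or shear ('twist') would show up.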
In other words, I think I'm doing this correctly, but I really ought to run it by someone who will know for sure whether it can work given that there can be stretch, twist and translation. Any wisdom would be gratefully received.