This is my first post, so hopefully this topic is OK here.
Background: In class we used a laser (mounted on a planar robot) to measure various profiles of a sample underneath. The system was already calibrated when I used it, but I started wondering how that calibration would actually be performed (i.e., the math behind it). Here is the simplified setup I came up with:
Setup: Assume the calibration gauge is a flat plate (blue), tilted at angle $\alpha$ from the x-axis and at angle $\beta$ from the y-axis (ideally it would be parallel to the x-y plane). Then there are: an initial height offset at the robot's origin ($\vec{Z_0}$, unknown), the position of the laser attached to the planar robot ($\vec{P}$, known), the laser's measurement vector ($\vec{D_m}$, with $|\vec{D_m}|$ known), the "true" height directly below the laser/robot ($\vec{D_t}$, unknown), and the error between the two ($\vec{E}$, unknown).
Ideally, the laser would be mounted so that the output beam is parallel to the z-axis, but I'm sure there are some mounting angle errors (angle $\phi$ from the z-axis in the x-z plane and angle $\psi$ from the z-axis in the y-z plane).
$i = (i_x, i_y, i_z)$ is the point where the laser beam hits the plate.
The "known" variables are |$\vec{D_m}$|, P_x, and P_y.
The unknown variables I need in order to "calibrate" the system (so I can calculate $\vec{D_t}$) are $\vec{Z_0}$ and the angles $\alpha$, $\beta$, $\phi$, and $\psi$.
My Attempt to Figure It Out: First, I found where the laser would intersect the plate/plane based on the robot's $(x, y)$ position: $i = \left( P_x + \Phi t,\ P_y + \Psi t,\ t \right)$, where $t = \frac{AP_x + BP_y + Z_0}{1 - A\Phi - B\Psi}$.
Here $A = \tan\alpha$ is the slope of the plane along the x-direction, $B = \tan\beta$ is its slope along the y-direction, $\Phi = \tan\phi$ is the slope of $\vec{D_m}$'s x-component along the z-direction, and $\Psi = \tan\psi$ is the slope of its y-component along the z-direction. Using these slopes instead of the angles made the equations cleaner.
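In case it's useful, here is how I got there (taking the robot's plane as $z = 0$, so the plate is the plane $z = Ax + By + Z_0$ and a point on the beam is $(P_x + \Phi t,\ P_y + \Psi t,\ t)$): substituting the beam into the plane equation gives $t = A(P_x + \Phi t) + B(P_y + \Psi t) + Z_0$, and solving for $t$ gives the fraction above.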
The measurement vector is then $\vec{D_m} = (\Phi t,\ \Psi t,\ t)$, with the same $t$ as above.
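Taking the magnitude (and assuming the geometry is such that $t > 0$, so I can drop the absolute values): $|\vec{D_m}| = \sqrt{\Phi^2 + \Psi^2 + 1}\, t = \frac{\sqrt{\Phi^2 + \Psi^2 + 1}}{1 - A\Phi - B\Psi} (AP_x + BP_y + Z_0)$. This is affine in $P_x$ and $P_y$, so the slopes in equations 1 and 2 below come out to $\frac{\partial|\vec{D_m}|}{\partial P_x} = \frac{A\sqrt{\Phi^2 + \Psi^2 + 1}}{1 - A\Phi - B\Psi}$ and $\frac{\partial|\vec{D_m}|}{\partial P_y} = \frac{B\sqrt{\Phi^2 + \Psi^2 + 1}}{1 - A\Phi - B\Psi}$.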
The Calibration Equations (need 5)
1. Take measurements along the x-axis, regress to find the slope, and set it equal to $\frac{\partial|\vec{D_m}|}{\partial P_x}$.
2. Take measurements along the y-axis, regress to find the slope, and set it equal to $\frac{\partial|\vec{D_m}|}{\partial P_y}$.
3. Move to the origin ($P_x = P_y = 0$, which eliminates variables), take a measurement, and set it equal to $|\vec{D_m}|$ there.
4. Have two points on the plate ($p_1$, $p_2$) with a known distance $\Delta$ between them. Move the robot so the laser dot hits the first point and save ($P_{x1}$, $P_{y1}$); move it to the second point and save that position too. Then use them in the equation $\Delta = |i(P_{x1}, P_{y1}) - i(P_{x2}, P_{y2})|$.
5. ...? I'm stuck here; the numerical check below is how I've been poking at candidates.
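As a sanity check on all of this, here is a small Python/NumPy sketch (the parameter values and the two robot positions are made up, and I'm assuming the $z = 0$ and $t > 0$ conventions from above). It treats equations 1-4 as functions of the five unknowns $(A, B, Z_0, \Phi, \Psi)$, takes a numerical Jacobian, and reports its rank. With only four equations the rank can be at most 4; the point is that a candidate fifth equation can be appended to `constraints` to test whether it is actually independent (i.e., pushes the rank to 5):

```python
import numpy as np

# Made-up 'true' parameters (A, B, Z0, Phi, Psi), just a point to linearize at.
p0 = np.array([0.02, -0.015, 5.0, 0.01, -0.005])

def t_of(params, Px, Py):
    # z-coordinate of the laser dot, from the intersection formula above
    A, B, Z0, Phi, Psi = params
    return (A * Px + B * Py + Z0) / (1 - A * Phi - B * Psi)

def i_of(params, Px, Py):
    # laser-dot position i on the plate for robot position (Px, Py)
    Phi, Psi = params[3], params[4]
    t = t_of(params, Px, Py)
    return np.array([Px + Phi * t, Py + Psi * t, t])

def constraints(params):
    # Equations 1-4 as functions of the five parameters: the slope of |D_m|
    # along x, the slope along y, |D_m| at the origin, and the dot-to-dot
    # distance for two made-up robot positions.
    A, B, Z0, Phi, Psi = params
    k = np.sqrt(Phi**2 + Psi**2 + 1) / (1 - A * Phi - B * Psi)
    return np.array([
        k * A,                                      # eq 1
        k * B,                                      # eq 2
        k * Z0,                                     # eq 3
        np.linalg.norm(i_of(params, 10.0, 0.0)
                       - i_of(params, 0.0, 10.0)),  # eq 4
    ])

# Central-difference Jacobian of the constraints w.r.t. the parameters.
eps = 1e-6
J = np.empty((4, 5))
for j in range(5):
    dp = np.zeros(5)
    dp[j] = eps
    J[:, j] = (constraints(p0 + dp) - constraints(p0 - dp)) / (2 * eps)

# Four equations can pin down at most four directions in parameter space,
# so this prints at most 4; a candidate fifth equation would need to
# raise the rank to 5 to make the calibration solvable.
print(np.linalg.matrix_rank(J))
```

Trying, say, a second known-distance pair along a different direction as a fifth row and re-running the rank check seems like a reasonable way to hunt for an independent equation without grinding through the algebra by hand.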
Final Thoughts: I thought this was pretty interesting, and not being able to solve it has been bugging me. :) I have no idea how the system was actually calibrated, but this is how I set it up. If there is a far easier way, or if someone knows how these types of systems are actually calibrated, please let me know (it would be an interesting read), but I would also like to figure out that fifth equation using my approach. I'm not a math major, so I figured this would be the place to ask!
Thanks!