I've been tasked with working out how much some incorrectly entered calibration coefficients have affected some measurements we've taken. I have the algorithm used, which I can use to work backwards and get some error ranges, but I'm a bit stuck on notation.
The algorithm reads, in part:
$x\ln^2(1000/y)$
but I'm a bit flummoxed on how to translate this into, say, MATLAB syntax - especially the $\ln^2$ part.
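For what it's worth, my current guess - assuming $\ln^2(z)$ means $(\ln z)^2$, the square of the natural log, rather than an iterated log $\ln(\ln z)$ - would be something like:

```matlab
% Tentative translation, assuming ln^2(1000/y) = (ln(1000/y))^2.
% Note: MATLAB's log() is the natural logarithm (log10/log2 are the other bases).
result = x * log(1000/y)^2;
```

but I'd appreciate confirmation that the squaring applies to the value of the logarithm and not to its argument or to the log itself.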
Sorry for the stupid question, but this is out of my field a fair bit, and high school maths class was a long time ago...