Hmm, I played a bit with my MatMate calculator, which allows matrix expressions for statistical evaluation, to give a concrete example for one possible interpretation of your problem.
I generate correlated random data in the vectors rpm, veloc, pow; by construction, the veloc data are composed of a veloc-specific component and of rpm via a quadratic polynomial (including a constant term!), and the pow data analogously but via a cubic polynomial. It is likely important that after this composition pow and veloc are not re-centered, because having a model involving quadratics means having ratio scales.
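To make that construction concrete, here is a minimal sketch in Python/NumPy of how such data could be generated; the coefficient values, the range of rpm and the noise scales are hypothetical placeholders, since I don't state the construction in full:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    rpm = rng.uniform(1.0, 6.0, n)    # roughly uniform random rpm data

    # veloc: quadratic polynomial in rpm (with constant term!) plus a veloc-specific part
    veloc = 0.8 + 0.5*rpm + 0.2*rpm**2 + 0.3*rng.standard_normal(n)

    # pow: cubic polynomial in rpm plus a pow-specific part; no re-centering afterwards
    pow_ = 1.2 + 0.4*rpm + 0.3*rpm**2 + 0.1*rpm**3 + 0.3*rng.standard_normal(n)
    # (named pow_ because "pow" is a Python built-in)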
First I show some code for how the polynomial regression is expressed in MatMate (hopefully this is explanatory enough to see how it can be done generally, e.g. in Matlab); then I show two alternative solutions for how to express pow by veloc in that same framework, which, staying within regression only, avoids your intended inverting and composing of polynomials.
1.)
The following MatMate code reproduced the composition parameters for veloc and pow very well. The data vectors are arranged as row vectors, one per variable; I chose a cubic regression model for rpm -> veloc, including the constant term.
    // comments: m *'  means: multiply m with its transpose
    //           an operator followed by #  means: do the operator elementwise
    //           1..4 in indexes means the range 1 to 4,  *  means the full range
    //           the ´ means concatenation of ranges
    // original data of rpm, veloc and pow were made with n=200 cases
    // data vectors are organized rowwise

    // make the data matrix for the model of veloc
    modelv = {const, rpm, rpm ^# 2, rpm ^# 3, veloc}
    covv   = modelv *' /n
    ladv   = cholesky(covv)
    betav  = ladv[*,1..4] * inv(ladv[1..4,1..4])  // compute regression coefficients
                                                  // based on the first four components
                                                  // (const, rpm, rpm^2, rpm^3)
    betav  = betav || ladv[*,5]                   // append the veloc-specific coeff

    // do the same with the data of pow
    modelp = {const, rpm, rpm ^# 2, rpm ^# 3, pow}
    covp   = modelp *' /n
    ladp   = cholesky(covp)
    betap  = ladp[*,1..4] * inv(ladp[1..4,1..4])
    betap  = betap || ladp[*,5]
These regressions reproduce the coefficients with which the data were created very closely (using n=200 and basically uniformly distributed random data for rpm and for the item-specific random values). We find them in the last row of betav resp. betap. I think it is not too difficult to translate this into Matlab code.
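For readers without MatMate, here is a sketch of the same Cholesky-based regression in Python/NumPy (instead of Matlab), reusing the arrays rpm, veloc, pow_ and n from the generation sketch above; np.linalg.cholesky returns the lower-triangular factor, which plays the role of ladv:

    const = np.ones(n)

    # variables as rows, cases as columns, as in the MatMate code
    modelv = np.vstack([const, rpm, rpm**2, rpm**3, veloc])
    covv = modelv @ modelv.T / n                # raw (uncentered) moment matrix
    ladv = np.linalg.cholesky(covv)             # lower-triangular "loadings"
    betav = ladv[:, :4] @ np.linalg.inv(ladv[:4, :4])
    betav = np.hstack([betav, ladv[:, 4:5]])    # append the veloc-specific column
    print(betav[-1])   # coefficients of veloc on const, rpm, rpm^2, rpm^3, plus specific loading

    # the same for pow
    modelp = np.vstack([const, rpm, rpm**2, rpm**3, pow_])
    ladp = np.linalg.cholesky(modelp @ modelp.T / n)
    betap = np.hstack([ladp[:, :4] @ np.linalg.inv(ladp[:4, :4]), ladp[:, 4:5]])
    print(betap[-1])

The trick behind this: if the (uncentered) moment matrix is $\small C = LL^T$ with lower-triangular $L$, then $\small L_{k,1..k-1}\,(L_{1..k-1,1..k-1})^{-1}$ gives exactly the least-squares coefficients of variable $k$ regressed on the first $k-1$ variables.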
2.)
Now you want the explanation of pow by veloc (or vice versa). Here we have to make two decisions:
- Do we want to explain one by the other while partialling out rpm?
- Do we want to explain by another polynomial model of order > 1?
I've made examples for both options. The first one regresses pow on veloc and rpm, where I propose a quadratic and cubic influence also of veloc:
    modelvp = {const, rpm, rpm^#2, rpm^#3, veloc, veloc^#2, veloc^#3, pow}
    covvp   = modelvp *' / n
    ladvp   = cholesky(covvp)
    betavp  = ladvp[*,1..7] * inv(ladvp[1..7,1..7])
    betavp  = betavp || ladvp[*,8]
Here we get 7 coefficients, with the one for the constant component included.
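A sketch of this joint model in the same Python/NumPy style, again reusing the arrays from above (a sketch under the same assumptions, not verified against MatMate's output):

    modelvp = np.vstack([const, rpm, rpm**2, rpm**3, veloc, veloc**2, veloc**3, pow_])
    covvp = modelvp @ modelvp.T / n
    ladvp = np.linalg.cholesky(covvp)
    betavp = ladvp[:, :7] @ np.linalg.inv(ladvp[:7, :7])
    betavp = np.hstack([betavp, ladvp[:, 7:8]])
    print(betavp[-1])   # the 7 coefficients (incl. constant) plus the pow-specific loading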
Next is a model where the influence of rpm is partialled out of both pow and veloc. The partialling assumes a cubic polynomial model for each of them. The final model is again cubic; we search for the coefficients of $\small pow.r = \beta_0 + \beta_1\, veloc.r + \beta_2\, veloc.r^2 + \beta_3\, veloc.r^3 + pow\_specific.r $, where the .r indicates that rpm was partialled out.
The additional MatMate code is:
    ladvp_r = ladvp[1´5..8, 1´5..8]   // this removes rows and columns 2..4 from ladvp,
                                      // which means to remove the influence of
                                      // rpm, rpm^2 and rpm^3
    betavp_r = ladvp_r[*,1..4] * inv(ladvp_r[1..4,1..4])  // do the regression only on the
                                                          // remaining variance
    betavp_r = betavp_r || ladvp_r[*,5]
Again, in the last row of betavp_r we get some coefficients for the cubic model (however, they are meaningless here because the data were created with a simpler model).
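In Python/NumPy the row/column removal can be sketched with np.ix_, reusing ladvp from above. Deleting the same set of rows and columns keeps the factor lower-triangular, so the regression step carries over unchanged:

    keep = [0, 4, 5, 6, 7]                  # const, veloc, veloc^2, veloc^3, pow (0-based)
    ladvp_r = ladvp[np.ix_(keep, keep)]     # drop the rpm, rpm^2, rpm^3 rows and columns
    betavp_r = ladvp_r[:, :4] @ np.linalg.inv(ladvp_r[:4, :4])
    betavp_r = np.hstack([betavp_r, ladvp_r[:, 4:5]])
    print(betavp_r[-1])   # cubic-model coefficients with rpm partialled out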
I hope this is helpful, although it has nothing to do with composition of polynomials or their inversion. If, for some theoretical reason, you need an algebraic solution for handling your polynomials anyway, we'd have to attack this differently.