I want to approximate a function $f(x)$ on $[a,b]$. The relative condition number for evaluating a function $f$ at a point $x$ is defined by
$\kappa(x)=\left|\frac{x f^{\prime}(x)}{f(x)}\right|$
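For illustration (a toy function, not my actual $f$): with $f(x)=x-1$ we get
$\kappa(x)=\left|\frac{x\cdot 1}{x-1}\right|,$
which is unbounded as $x\to 1$, i.e. evaluation is ill-conditioned near a zero of $f$, where small relative perturbations in $x$ produce large relative changes in $f(x)$.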
For my function $f$ this grows without bound as $x \rightarrow b$. So I make a change of variable $y=h(x)$, and as it happens the function $g(y)=f(h^{-1}(y))$ is a lot simpler to approximate on $[h(a),h(b)]$. In particular, the condition number of $g$ as $y\rightarrow h(b)$ is small, so I can easily construct an approximation for $g$ on $[h(a),h(b)]$. I can then approximate $f$ using my approximation $g_A$ to $g$ as follows:
$ f(x)\approx g_A(h(x))$
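To make the idea concrete, here is a minimal sketch with a toy stand-in for my $f$ (my choice of $f(x)=\ln x$ on $[0.5,1]$ is an assumption purely for illustration, not the actual function):

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Toy stand-in for f (an assumption for illustration only):
# f(x) = log(x) on [a, b] = [0.5, 1], whose relative condition number
# kappa_f(x) = |x f'(x)/f(x)| = 1/|log(x)| is unbounded as x -> b = 1.
a, b = 0.5, 1.0
f = np.log

# Change of variable y = h(x) = log(x): then g(y) = f(h^{-1}(y)) = y, and
# kappa_g(y) = |y g'(y)/g(y)| = 1 everywhere, so g is trivial to approximate.

# Direct route: least-squares Chebyshev fit of f on [a, b].
xs = np.linspace(a, b, 200)
f_direct = Chebyshev.fit(xs, f(xs), 10)

def f_via_g(x):
    # g is the identity here, so g_A = g is exact and f(x) = g_A(log(x)).
    return np.log(x)

def rel_err(approx, x):
    return abs(approx(x) - f(x)) / abs(f(x))

# The relative error of the direct fit blows up near b (where f -> 0),
# while the change-of-variable route is unaffected there.
x_mid, x_near_b = 0.75, 1.0 - 1e-6
```

In this toy case $g_A$ happens to be exact, so the comparison isolates the conditioning effect: any error in a direct approximant to $f$ is magnified in the relative sense near $b$, where $f$ vanishes.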
I have some related questions I would like to clear up in my head.
1) Is this a common approach in numerical analysis, i.e. if the condition number is large, to look for a change of variable that reduces it? If so, can you point to any references, particularly in the context of function approximation?
2) My problem isn't exactly evaluating a function $f$ at $x$. More precisely, it is to build some type of approximation $f_A$ to $f$ and then approximate $f$ by $f_A$. So have I used the above definition of condition number out of context, or am I right to observe that, with this definition of $\kappa$, the problem is ill-conditioned? It seemed to make sense, since the library routines I'm using to build the approximation try to evaluate $f$ at hundreds of points, and they fail to build the approximant (I guess because evaluating $f$ is an ill-conditioned problem).
3) A key to success in numerical analysis is to apply stable algorithms to well-conditioned problems. Does this mean that a stable algorithm can become unstable when applied to an ill-conditioned problem?