I think the question is basically asking 'What does big-O mean?'.
See the Wikipedia article on [Big-O notation](https://en.wikipedia.org/wiki/Big_O_notation).
So the statement $f(n)=O(g(n))$ means that $f(n)\leq A\,g(n)$ for some constant $A$ once $n$ is sufficiently large.
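As a concrete (made-up) illustration, suppose the running time happened to be $f(n)=3n^2+5n$. For $n\ge 5$ we have $5n\le n^2$, so

$$f(n)=3n^2+5n\le 3n^2+n^2=4n^2 \quad\text{for all } n\ge 5,$$

and taking $A=4$, $g(n)=n^2$ gives $f(n)=O(n^2)$.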
Now if an algorithm takes $f(n)$ time to run on a problem of size $n$, then $g(n)$ can be used to predict the worst-case run time, up to the constant $A$. That is, $g(n)$ is of the right order (or higher) for the complexity of the algorithm in question.
The reason for doing this is that $g(n)$ can usually be evaluated easily, whereas $f(n)$ can typically only be determined exactly by running the algorithm.
For example, for an implementation of the simple bubble sort algorithm, $f(n)$ can only be evaluated for particular data sets of size $n$, and determining the worst case directly would be hard. However, it can be shown that $g(n)=n^2$ is of the correct order and can be evaluated easily.
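Here is a minimal Python sketch of that idea (the function name and the comparison counter are my own, purely for illustration): it counts the comparisons a straightforward bubble sort makes, and for a reverse-sorted (worst-case) input the count matches $n(n-1)/2$, which is $O(n^2)$.

```python
def bubble_sort_comparisons(data):
    """Bubble sort that also counts comparisons, so the 'f(n)' for a
    particular input of size n can be measured. Illustrative sketch only."""
    a = list(data)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

# Reverse-sorted input is a worst case for bubble sort; the measured count
# equals n(n-1)/2, i.e. it grows like n^2.
for n in (10, 100, 1000):
    _, count = bubble_sort_comparisons(range(n, 0, -1))
    print(n, count, n * (n - 1) // 2)
```

Running it prints matching pairs (e.g. `1000 499500 499500`), which is the empirical counterpart of the $g(n)=n^2$ bound.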
For an interview question, maybe you could show off by talking about related concepts such as $\Theta$ notation!