My question is about the proof of Theorem 2.17 (page 36) in Apostol's book Introduction to Analytic Number Theory:
Theorem 2.17. Let $f$ be multiplicative. Then $f$ is completely multiplicative if, and only if,
$$f^{-1}(n)=\mu(n)f(n)\quad\text{for all }n\geq 1.$$
Proof. Let $g(n)=\mu(n)f(n)$. If $f$ is completely multiplicative we have
$$(g\ast f)(n)=\sum_{d\mid n}\mu(d)f(d)f\!\left(\frac{n}{d}\right)=f(n)\sum_{d\mid n}\mu(d)=f(n)I(n)=I(n)$$
since $f(1)=1$ and $I(n)=0$ for $n>1$. Hence $g=f^{-1}$.
Conversely...
Why is it that $(g\ast f)(n) = \cdots = f(n)I(n) = I(n)$?
I think it should be $(g\ast f)(n) = \cdots = f(n)I(n) = f(n)$ instead.
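For concreteness, here is a small numerical sketch in Python of the convolution in question, using the completely multiplicative example $f(n)=n$ (chosen purely for illustration; it is not the book's example). It tabulates $(g\ast f)(n)$ next to $f(n)$ and $I(n)$ for small $n$:

```python
def mobius(n):
    # Moebius function mu(n): 0 if n has a squared prime factor,
    # otherwise (-1)^(number of distinct prime factors)
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor
                return 0
            result = -result
        p += 1
    if n > 1:                   # one prime factor left over
        result = -result
    return result

# Completely multiplicative example chosen purely for illustration: f(n) = n
f = lambda n: n
g = lambda n: mobius(n) * f(n)          # g(n) = mu(n) f(n), as in the proof
I = lambda n: 1 if n == 1 else 0        # identity for Dirichlet convolution

def dirichlet(a, b, n):
    # Dirichlet convolution: (a*b)(n) = sum over d | n of a(d) b(n/d)
    return sum(a(d) * b(n // d) for d in range(1, n + 1) if n % d == 0)

print(" n  (g*f)(n)  f(n)  I(n)")
for n in range(1, 13):
    print(f"{n:2d}  {dirichlet(g, f, n):8d}  {n:4d}  {I(n):4d}")
```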
Thanks in advance!