The integral that you have is the Kullback-Leibler divergence between distributions $P$ and $Q$, $D_{KL}(P \parallel Q)$. This divergence is roughly a kind of "distance" between the two distributions. The reason "distance" is in quotes is that the divergence is not symmetric and hence is not a metric. However, a useful way to think about $D_{KL}(P \parallel Q)$ is as the penalty paid for mistaking distribution $P$ for distribution $Q$. This statement can be made precise using information theory. If $D_{KL}(P \parallel Q)$ is infinite, it is because the two distributions are so unlike each other that you incur an infinite penalty for mistaking $P$ for $Q$. This can happen, for instance, when distribution $P$ can produce values that distribution $Q$ can never produce; in that case, mistaking $P$ for $Q$ is indeed a grievous error. I will leave it to you to interpret this in terms of the integral above to derive the condition under which the divergence is infinite.
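To make the support condition concrete, here is a minimal numerical sketch (the discrete distributions are made up for illustration and are not from the question): when $Q$ assigns zero probability to an outcome that $P$ can produce, the corresponding term of the sum blows up and the divergence is infinite.

```python
import numpy as np

# Hypothetical discrete distributions, chosen only to illustrate the point:
# P puts mass 0.1 on a third outcome to which Q assigns zero probability.
p = np.array([0.5, 0.4, 0.1])
q = np.array([0.6, 0.4, 0.0])

# D_KL(P || Q) = sum_x p(x) * log(p(x) / q(x)); by convention terms with
# p(x) = 0 contribute 0, but a term with p(x) > 0 and q(x) = 0 is +infinity.
with np.errstate(divide="ignore"):
    terms = np.where(p > 0, p * np.log(p / q), 0.0)

print(terms.sum())  # inf: the penalty for mistaking P for Q is unbounded
```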
Update: Adding more information in response to Marco's comment below; it got too unwieldy to be left as a comment. Given any $M > 0$ and any distribution $P$, we can find a $Q$ such that $D_{KL}(P \parallel Q) > M$. But note that as $M$ grows, we need to adaptively change $Q$ to make sure the divergence exceeds $M$. This is different from saying $D_{KL}(P \parallel Q) = \infty$ for a given pair $P$, $Q$, which is what the question is asking about. I think this can happen only if the support of $P$ includes points not in the support of $Q$.
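As a concrete illustration of that distinction (a Bernoulli example chosen for this answer, not taken from the question), let $P$ be Bernoulli$(1/2)$ and $Q_\epsilon$ be Bernoulli$(\epsilon)$ with $0 < \epsilon < 1$. Then
$$D_{KL}(P \parallel Q_\epsilon) = \tfrac{1}{2}\log\frac{1/2}{\epsilon} + \tfrac{1}{2}\log\frac{1/2}{1-\epsilon},$$
which can be made larger than any given $M$ by taking $\epsilon$ small enough, yet is finite for every fixed $\epsilon$. Only when $\epsilon = 0$, so that $Q_0$ assigns zero probability to an outcome $P$ produces with probability $1/2$, does the divergence actually equal $\infty$.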