There's no single general method, only a wealth of techniques. Here are some examples:
- You prove that $f(n)$ is the minimum time required just to read the input. For example, the shortest path problem cannot be solved in less than $O(V + E)$ time, because in the worst case the whole graph has to be examined.
- You prove that an algorithm doing less work cannot gather enough information about the input to output the correct solution in every case. Thus, one can change the input in a way that changes the result without the algorithm "noticing" the difference. An example of this would be the proof that comparison-based sorting cannot be done in less than $O(n \log n)$ comparisons (a worked version of this bound is sketched after the list).
- You prove that an algorithm running in less time would imply the existence of an algorithm that solves some other problem faster than that problem's known lower bound (e.g. sorting in less than $O(n \log n)$), which is a contradiction. This is achieved via a reduction (see the sketch below the list).
- You prove that the size of the output is, in the worst case, as big as $f(n)$; therefore, one cannot take less time than that to write it down. For example, outputting the $n^{th}$ Fibonacci number cannot be done in less than $O(2^{n})$, since the $n^{th}$ Fibonacci number is roughly $\phi^{n}$, where $\phi \approx 1.6$ is the golden ratio.
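To make the second technique concrete: any comparison sort can be viewed as a binary decision tree whose leaves correspond to the $n!$ possible orderings of the input, so a tree of height $h$ (i.e. $h$ comparisons in the worst case) must satisfy (a standard sketch of the information-theoretic argument, not tied to any particular algorithm):

$$
2^{h} \ \ge\ n! \quad\Longrightarrow\quad h \ \ge\ \log_2(n!) \ \ge\ \log_2\!\left(\left(\tfrac{n}{2}\right)^{n/2}\right) \ =\ \tfrac{n}{2}\log_2\tfrac{n}{2} \ =\ \Omega(n \log n).
$$

With fewer comparisons the tree has fewer than $n!$ leaves, so two different input orderings end at the same leaf and the algorithm cannot tell them apart.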
There are more techniques, but those are some examples off the top of my head.
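To illustrate the reduction technique, here is a rough Python sketch (the lifting onto $y = x^2$ and all names here are my own illustrative choices): if the convex hull of $n$ points could be computed in less than $O(n \log n)$ in a comparable model of computation, the routine below would sort $n$ distinct numbers faster than the sorting bound allows, so convex hull inherits the $\Omega(n \log n)$ bound.

```python
def convex_hull(points):
    """Andrew's monotone-chain hull, vertices returned in counter-clockwise order.

    Stand-in implementation: it sorts internally, so the demo below does not
    beat any bound; it only checks that the reduction itself is correct.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build the lower chain, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper chain, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # concatenate, dropping duplicate endpoints


def sort_via_hull(xs, hull=convex_hull):
    """Reduce sorting to convex hull: a hull algorithm faster than O(n log n)
    would sort faster than the comparison lower bound allows."""
    pts = [(x, x * x) for x in xs]                     # lift each number onto y = x^2
    h = hull(pts)                                      # every lifted point is a hull vertex
    start = min(range(len(h)), key=lambda i: h[i][0])  # locate the leftmost vertex
    ordered = h[start:] + h[:start]                    # rotate so it comes first
    return [p[0] for p in ordered]                     # the CCW walk is ascending in x


print(sort_via_hull([3.0, -1.0, 2.5, 0.0]))            # [-1.0, 0.0, 2.5, 3.0]
```

The stand-in `convex_hull` sorts internally, so the demo only verifies that the reduction is correct; the point is that any asymptotically faster hull algorithm would immediately yield an asymptotically faster sort.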
EDIT: As pointed out in the comments, the size of the $n^{th}$ Fibonacci number is $\Theta(n)$ bits, i.e. exponential in the length of $n$ (read: $\log n$) but not in $n$ itself; therefore it cannot be computed in less than $O(n)$, not $O(2^{n})$.
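For concreteness, the arithmetic behind the corrected bound (a sketch using the standard closed form, nothing specific to this answer):

$$
F_n \approx \frac{\phi^{n}}{\sqrt{5}} \quad\Longrightarrow\quad \text{bits}(F_n) \approx \log_2 F_n \approx n \log_2 \phi \approx 0.694\,n = \Theta(n),
$$

so simply writing the output down already takes $\Omega(n)$ time, which is exponential in the $\Theta(\log n)$ bits needed to write $n$ itself.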