Your question has two answers (although one is considerably more likely).
Firstly, the author could mean that the running time must be at least as slow as some $C\log{n}$ (as pointed out by anon in the comments), i.e.:
$f(n)\ge C\log{n}\implies f(n)\in\mathcal{\Omega}{(\log{n})}$
This means that, given an input of size $n$, the algorithm will complete no faster than $C\log{n}$ for some constant $C\in\mathbb{R}^{+}$ (once $n$ is sufficiently large).
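To make the lower bound precise, here is the standard formal definition, followed by a worked example with a made-up running time chosen purely for illustration:
$$f(n)\in\mathcal{\Omega}(\log{n})\iff\exists\,C>0,\ n_{0}\in\mathbb{N}\ \text{such that}\ f(n)\ge C\log{n}\ \text{for all}\ n\ge n_{0}$$
For example, if an algorithm happened to take $f(n)=3\log{n}+7$ steps, then $f(n)\ge 3\log{n}$ for every $n\ge 1$, so choosing $C=3$ and $n_{0}=1$ shows that $f(n)\in\mathcal{\Omega}(\log{n})$.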
However, since you have stated that this is from a piece of CS literature, it is more likely that the author intended that the algorithm completes at least as fast as some $C\log{n}$, i.e.:
$f(n)\le C\log{n} \implies f(n)\in\mathcal{O}{(\log{n})}$
This means that the algorithm, given an input of size $n$ as before, will complete no slower than $C\log{n}$ for some constant $C\in\mathbb{R}^{+}$ (again for sufficiently large $n$).
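The upper bound has an analogous formal definition; again, the concrete $f$ below is only an illustrative example, not anything taken from your text:
$$f(n)\in\mathcal{O}(\log{n})\iff\exists\,C>0,\ n_{0}\in\mathbb{N}\ \text{such that}\ f(n)\le C\log{n}\ \text{for all}\ n\ge n_{0}$$
For instance, if an algorithm took $f(n)=2\log_{2}{n}+5$ steps, then for all $n\ge 2$ we have $\log_{2}{n}\ge 1$, hence $f(n)\le 2\log_{2}{n}+5\log_{2}{n}=7\log_{2}{n}$, so $C=7$ and $n_{0}=2$ witness $f(n)\in\mathcal{O}(\log{n})$.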
Bear in mind that in computer science we often use asymptotic notation (e.g. $f(n)\in\mathcal{\Omega}(\log{n})$, $f(n)\in\mathcal{O}(n^{2})$) to describe the growth rate of an algorithm's running time rather than its exact value. Note also that $f(n)\in\mathcal{O}(n)$ can equivalently be written as $f(n)=\mathcal{O}(n)$ (indeed, this notation is more common in CS literature).
Hope this helps clear things up!