As a physicist, I'm quite positive they don't refer to the usual formal definition of $O(1)$. As you'll see many more times in your studies, one often doesn't care about numerical constants such as $e^2, \sqrt{2}, \pi^2/6, 1/3, \ldots$. For many reasons, authors may choose to summarize such a constant as $O(1)$. There is no rigorous definition of what counts as $O(1)$; personally, I wouldn't classify $0.0000781232$ or $33233432$ as $O(1)$, but there are no solid conventions.
As an example, the exact Heisenberg inequality is $\Delta x \,\Delta p \geq \hbar/2$, or, with $k := p/\hbar$, $\Delta x \,\Delta k \geq 1/2$. To make you think about the physics behind this inequality, and not just about the factor $1/2$, the authors write $O(1)$.
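To see why $1/2$ is the natural reference value, here is a quick worked check (my own addition, not from the textbook): a Gaussian wave packet saturates the bound. With $\psi(x) \propto e^{-x^2/(4\sigma^2)}$, the Fourier transform is again a Gaussian, and one finds

$$\Delta x = \sigma, \qquad \Delta k = \frac{1}{2\sigma}, \qquad \Delta x\,\Delta k = \frac{1}{2},$$

so the inequality becomes an equality in this best case; any other state gives a larger, but typically still $O(1)$, product.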
[Edit for clarification.] Generically, the product of the standard deviations depends on some parameters of the problem and some quantum numbers: for a wave packet you have a wave number $k$, for discrete states you have $n \in \{0,1,2,\ldots\}$, etc. Therefore a mathematical definition of $O(1)$ in this context might be: $\Delta x \,\Delta k \geq f(k,n,\ldots) \geq 0$ for some function $f$, and there exist a number $R$ and a constant $C$ such that $f(k,n,\ldots) < C$ for all $\sqrt{k^2 + n^2 + \ldots} > R$. [1]
That is (likely) not what is meant; most of the time, the right-hand side actually diverges as a function of these parameters. Physically: if you look at large wave numbers, or at states with large quantum numbers, you'll find a large $\Delta x \,\Delta k$. One uses $O(f)$ to specify what is known about the behaviour of $f$ near infinity, but here we only know that the global minimum of $f$ is no smaller than $1/2$. The factor of $1/2$ is thus a 'best case scenario' that you seldom encounter; more often you find a number that is somewhat larger, but still 'of the order of $1$', or, as your textbook writes, $O(1)$.
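To make this concrete, here is a small numerical sketch (my own example; the choice of well, grid, and function names are not from the textbook). For the infinite square well of width $L=1$ (units with $\hbar=1$), the exact result for the $n$-th eigenstate is $\Delta x\,\Delta k = \sqrt{n^2\pi^2/12 - 1/2}$: about $0.57$ for $n=1$, just above the bound $1/2$, and growing linearly with $n$.

```python
import numpy as np

# Infinite square well of width L = 1, units with hbar = 1.
# Eigenstates: psi_n(x) = sqrt(2) sin(n pi x) on [0, 1].
# We compute Delta x * Delta k numerically and compare with the
# lower bound 1/2.  (Illustrative sketch, not from the textbook.)

x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]

def integrate(f):
    """Composite trapezoidal rule on the uniform grid."""
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dx

def uncertainty_product(n):
    """Return Delta x * Delta k for the n-th eigenstate."""
    psi = np.sqrt(2.0) * np.sin(n * np.pi * x)
    mean_x = integrate(psi**2 * x)
    mean_x2 = integrate(psi**2 * x**2)
    var_x = mean_x2 - mean_x**2
    # <k> = 0 for an eigenstate; <k^2> = int |psi'|^2 dx = (n pi)^2
    dpsi = np.gradient(psi, dx)
    mean_k2 = integrate(dpsi**2)
    return np.sqrt(var_x * mean_k2)

for n in (1, 2, 5, 10):
    print(n, uncertainty_product(n))  # exact: sqrt(n^2 pi^2 / 12 - 1/2)
```

The $n=1$ value sits just above $1/2$, while for large $n$ the product grows like $n\pi/\sqrt{12}$: exactly the diverging right-hand side described above, with the bound $1/2$ only a best case.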
[1] This doesn't even make a lot of sense, since $x$ and $k$ have different dimensions.