For any two algorithms with different order (big-$\mathcal{O}$) complexity, we can calculate the actual ratio of their running times as a function of $n$. Generalizing your case a bit by using symbolic constants, and assuming there are no other constant multiplicative or additive factors, we get
$R=$ "exponential"/linear $= n^b/(n\cdot 10^a) = n^{b-1}/10^a$, with $a=100$ and $b=1.001$ for your example.
Calculate the "crossover point" by setting $R=1$ and solving, giving
$n=10^{a/{(b-1)}}$.
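(To see where that comes from: $R=1$ means $n^{b-1}=10^a$, and taking $\log_{10}$ of both sides gives $(b-1)\log_{10}n=a$.)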
Plug in your values and you get that these two algorithms "cross" at
$n=10^{100/0.001}=10^{100000}$, or a 1 followed by 100,000 zeros.
Indeed, that's an astronomically large number; bigger than astronomical, really, the kind of quantity that invites silly comparisons like "more than the number of electrons you could fit if all of known space were packed tightly with them." So, yes, anything even remotely practical should be done with the "exponential" algorithm in this case.
But if $a = 0.5$ and $b = 1.1$, you'd get the crossover at $n=10^5=$ 100,000. So if $n>$ 100,000 you'd prefer the linear algorithm, and at $n=$ 1,000,000, which is not at all unreasonable, the linear algorithm is faster by a factor of $R = 10^{0.1} \approx 1.26$, and that ratio keeps growing as $n^{0.1}$ for larger $n$.
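If you want to play with these crossovers yourself, here's a minimal Python sketch that reproduces both examples (the names `crossover_exponent` and `ratio` are just mine for illustration):

```python
from fractions import Fraction

def crossover_exponent(a, b):
    """Exponent e such that the two running times cross at n = 10**e, i.e. e = a/(b-1).
    a and b are passed as strings so Fraction keeps the arithmetic exact."""
    return Fraction(a) / (Fraction(b) - 1)

def ratio(n, a, b):
    """R = n**b / (n * 10**a) = n**(b-1) / 10**a, the slowdown of the n**b
    algorithm relative to the linear one with the huge constant."""
    return n ** (b - 1) / 10 ** a

print(crossover_exponent("100", "1.001"))  # 100000 -> crossover at n = 10**100000
print(crossover_exponent("0.5", "1.1"))    # 5      -> crossover at n = 10**5
print(ratio(1_000_000, 0.5, 1.1))          # ~1.26  -> linear's edge at n = 10**6
```

Passing the constants as strings to `Fraction` keeps the exponent exact; with plain floats, $1.001 - 1$ isn't exactly $0.001$, so $a/(b-1)$ would pick up a tiny rounding error.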
So, yes, $\mathcal{O}$ analysis is quite reliable, and a very useful tool for evaluating algorithms.