
In the question, $O()$ denotes big-O notation (https://en.wikipedia.org/wiki/Big_O_notation).

My reasoning: since $\frac{d}{m}>1$ always holds, we have $\frac{d^2}{m}<\frac{d^3}{m^2}$, and therefore $O(\frac{d^3}{m^2}+\frac{d^2}{m})= O(\frac{d^3}{m^2})$.
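As a quick numerical sanity check of the claim above (a minimal sketch; the grid of test values for $d$ and $m$ is my own choice), one can verify that $\frac{d^2}{m} < \frac{d^3}{m^2}$ whenever $1 < m < d$:

```python
# Check that d^2/m < d^3/m^2 for all 1 < m < d on a sample grid,
# i.e. that the d^3/m^2 term dominates the sum under this constraint.
for d in [10, 100, 1000]:
    for m in range(2, d):          # all integer m with 1 < m < d
        smaller = d**2 / m
        larger = d**3 / m**2       # equals (d/m) * d^2/m, and d/m > 1
        assert smaller < larger, (d, m)
print("d^2/m < d^3/m^2 holds on the whole tested grid")
```

This only confirms the pointwise inequality; as the comments below discuss, it does not by itself justify the big-O simplification when $m$ is bounded above by $d$.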

Then, I have another confusion. Assume an estimation has an error $=O(\frac{d^3}{m^2})$ with $1\lt m\lt d$.

Why is my reasoning wrong, and what is correct? Thanks.

BTW, how about for the constraint $1\ll m\ll d$?

  • If $d$ is a constant, then $1\lt m\lt d$ means you cannot increase $m$ indefinitely; it can't increase above $d.$ Hence the asymptotic behavior as $m$ increases is meaningless. – 2017-02-22
  • Once you talk about fixing $d$, you can't meaningfully use the $O()$ notation, because that implies a limiting behavior as $m\to\infty$ – but $m\to\infty$ will (eventually) violate your constraint that $m\lt d$. – 2017-02-22
  • By the way, one thing to be careful about with big-O notation is to be clear whether you are talking about the order of magnitude as the input variables get very large, or as they get very small. I'm guessing you are using a "get very large" big-O. – 2017-02-22
  • In fact, this example shows the issues with using the big-O notation in a multi-variable situation; you'll note that the Wikipedia article only speaks about the definition of $O(f(x))$ for a function $f()$ of one variable. There's good reason for that: multivariate limits are innately multidimensional, and many subtleties crop up. – 2017-02-22
  • Thanks. BTW, can I use $O(\frac{d^2}{m})\leq \epsilon$ to get $m\geq d^2/\epsilon$ for $\epsilon\geq \Omega(d)$, because $\frac{d^2}{m}\geq d$? – 2017-02-23
  • @David Yes, it gets very large. I use this big-O notation just because my error bound is $\frac{d^3}{m^2}+2\frac{d^2}{m}$ while the other compared methods get error bounds like $0.9\frac{d^3}{m^2}+1.5\frac{d^2}{m}+\frac{d}{m}+\frac{1}{m}$. It is not easy to compare them, but I am interested in the case $1\lt m\lt d$, studying $m$ and $d$ jointly. I think we can study $m$ with $d$ fixed, but we should not forget the constraint that $1\lt m\lt d$. – 2017-02-23
  • @Steven Stadnicki could you offer more help, please? – 2017-02-23
  • @olivia You should probably go to chat for extended discussion, but the point is that you _cannot_ have all three of these things at once: 'asymptotic behavior' (i.e., big-O notation) as a function of $m$, fixing $d$, and the constraint that $m\lt d$ (or $m\ll d$). They're mutually incompatible. If you care about _precise_ error bounds (i.e., without the big-O constant in front of them) then that's a different matter; note that your error bound and the other methods you're talking about are *exactly equivalent* from the perspective of asymptotic notation. – 2017-02-23
  • Briefly, it sounds like asymptotic notation just isn't the tool for this job, but without understanding exactly what you're trying to do – and that's presumably a question that's not suitable for a Q-and-A format – it's hard to tell you exactly what tools _are_ appropriate. – 2017-02-23
  • @Steven Stadnicki I use this notation just for these considerations: 1) I have the constraint $1\lt m\lt d$. – 2017-02-24
  • @Steven Stadnicki If I don't need to compare the convergence rate in $m$ with $d$ fixed, then does such big-O notation make sense? Without big-O notation, it is really very hard for me to compare the precise error bounds of different algorithms. – 2017-02-24
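The observation in the comments that the two precise bounds are "exactly equivalent" asymptotically can be illustrated numerically (a sketch; the sample grid of $d$, $m$ values and the constants $0.25$ and $5$ are my own choices):

```python
# The two "precise" error bounds quoted in the comments:
#   a = d^3/m^2 + 2*d^2/m
#   b = 0.9*d^3/m^2 + 1.5*d^2/m + d/m + 1/m
# Check that their ratio b/a stays between fixed constants over the
# whole region 1 < m < d, i.e. each bound is O() of the other.
ratios = []
for d in [10, 100, 1000]:
    for m in range(2, d):
        a = d**3 / m**2 + 2 * d**2 / m
        b = 0.9 * d**3 / m**2 + 1.5 * d**2 / m + d / m + 1 / m
        ratios.append(b / a)
lo, hi = min(ratios), max(ratios)
assert 0.25 < lo and hi < 5   # bounded below and above by constants
print(f"b/a stays in [{lo:.3f}, {hi:.3f}] on the tested grid")
```

Because the ratio is pinned between constants, big-O notation cannot distinguish the two bounds; only the precise constants (which big-O deliberately discards) separate the methods.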

0 Answers