I have some questions about the examples I came across in a book that is helping me explore the world of programming.
I understand what Big-O notation is, and I am interested in the following:
1. To prove that $g(n) \in O(f)$, all we need is to find a single pair ($c$, $n_0$) such that $g(n) \leq c\cdot f(n)$ for all $n > n_0$. So, for example, if I found one pair that satisfies this inequality and ten that don't, $g(n) \in O(f)$ anyway. In other words, is it sufficient to find only ONE pair ($c$, $n_0$) satisfying the inequality to prove the claim?
2. To disprove it, we need to find an $N$ that violates the inequality after $c$ and $n_0$ have been chosen. If I can't find such an $N$ for the chosen $c$ and $n_0$, does that mean I can't disprove it?
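For context, here is the kind of numeric sanity check I have been doing for candidate pairs (just sampling values of $n$, so I know it can't replace a real proof; the helper name `holds` is my own):

```python
# Sanity check by sampling (NOT a proof): does g(n) <= c * f(n)
# hold for every sampled n with n > n0?
def holds(g, f, c, n0, n_max=10_000):
    return all(g(n) <= c * f(n) for n in range(n0 + 1, n_max))

# n is in O(n^2): the single pair c = 1, n0 = 1 already works.
print(holds(lambda n: n, lambda n: n * n, c=1, n0=1))  # True
```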
Here is the example I am perplexed about:
We know that $n$ is in $O(n^2)$, but $n^2$ is not in $O(n)$.
To prove the first statement we have to show that $n \leq c\cdot n^2$, which is clear enough (e.g., $c = 1$ and $n_0 = 1$ work) that it needs no further explanation.
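To illustrate why the first statement seems clear to me: with $c = 1$ and $n_0 = 1$, the inequality holds for every value of $n$ I sample (again, a spot check for intuition, not a proof):

```python
# Spot check: n <= 1 * n**2 for all sampled n > n0 = 1 (c = 1).
# Sampling is illustration rather than proof, of course.
ok = all(n <= 1 * n**2 for n in range(2, 100_000))
print(ok)  # True
```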
But if we reverse it and try to prove that $n^2$ is in $O(n)$, which is obviously not true, there is some mystery I can't resolve. We pick $c = n$, and the inequality becomes $n^2 \leq n\cdot n$; the two sides are equal, and from that it seems that $n^2$ IS in $O(n)$, because we can't find any value of $n$ for which the inequality fails.
Thanks in advance, and sorry if the question is very stupid.