The point is to estimate how long an algorithm will take and how it scales with the size of the input. In your example of multiplying an $m \times n$ matrix $A$ by a vector $b$, there are $mn$ entries in $A$. Each one gets multiplied by an entry of $b$, so there are $mn$ multiplications. Then you need $m(n-1)$ additions to sum each row's products into an entry of the result. We can ignore the $-1$ and say the effort expended is about $2mn$ floating point operations. If you can do a billion operations in a reasonable amount of time, you can afford to have $m$ and $n$ be tens of thousands, but not millions.
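To make the count concrete, here is a small sketch (names are my own) that performs the matrix-vector product with plain loops and tallies the multiplications and additions as it goes:

```python
def matvec_with_count(A, b):
    """Multiply an m x n matrix A (list of rows) by a length-n
    vector b, counting floating point operations along the way."""
    n = len(b)
    mults = adds = 0
    result = []
    for row in A:
        # One multiply for the first term, then a multiply and
        # an add for each of the remaining n - 1 terms.
        total = row[0] * b[0]
        mults += 1
        for j in range(1, n):
            total += row[j] * b[j]
            mults += 1
            adds += 1
        result.append(total)
    return result, mults, adds

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]        # m = 2, n = 3
b = [1.0, 1.0, 1.0]
y, mults, adds = matvec_with_count(A, b)
# mults is m*n = 6, adds is m*(n-1) = 4, so roughly 2mn in total
```

Running it on the tiny example above confirms the bookkeeping: $mn$ multiplies and $m(n-1)$ adds, i.e. about $2mn$ operations.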
Some operations have alternative algorithms with different timing. Sorting is a prime example. Simple-minded sorting algorithms, like bubble sort, scale as the square of the number of items sorted, so again you could afford to sort a few tens of thousands of things but not millions. Other algorithms, like merge sort, run in $n \log n$ time. Now you can afford to sort tens of millions. That is an enormous increase.