I am studying conditional probability, and I came across Bayes' theorem. Though I am no statistician, I kind of get the idea of the theorem: probabilities can be updated using newly given information.
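To check my own understanding of "updating", here is a toy calculation I worked through (all numbers made up for illustration):

```python
# Toy Bayes update: a made-up disease-test example.
# Prior: 1% of people have the disease.
p_disease = 0.01
# Sensitivity: P(positive | disease)
p_pos_given_disease = 0.90
# False-positive rate: P(positive | no disease)
p_pos_given_healthy = 0.05

# Law of total probability: P(positive)
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Bayes' theorem: P(disease | positive)
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 4))  # about 0.1538
```

So even with a positive test, the updated probability is only about 15%, because the prior was so low. As far as I can tell, this arithmetic itself is uncontroversial.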
What I am curious about is this: I heard that Bayesian statistics was a "new" approach to statistics, and was therefore met with many objections and criticisms. But I learned that Bayes' theorem $$P(B|A)=\frac{P(A|B)P(B)}{P(A)}$$ can be derived directly from the definition of conditional probability, so I can't understand why it was met with criticism. I heard the objections were due to differing views about the meaning of probability and inference (e.g. frequentists and propensity theorists), but I don't really see any difference between these views.
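For concreteness, the derivation I have in mind is just (assuming $P(A)>0$ and $P(B)>0$):

$$P(A\cap B)=P(A|B)\,P(B)=P(B|A)\,P(A)\ \Longrightarrow\ P(B|A)=\frac{P(A|B)\,P(B)}{P(A)}.$$

This seems like a trivial algebraic consequence of the definition $P(A|B)=\frac{P(A\cap B)}{P(B)}$, which is why the controversy puzzles me.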
Would someone please explain how Bayesians were at odds with other schools of statistics, and why the Bayesian approach provoked so many arguments and so much confusion (with some examples, if possible)?
My question might sound a bit vague, but I have almost no background in advanced statistics, so this is the best I can do. Thank you in advance!