From what I have found on the internet, I need to know these topics to understand Artificial Intelligence:
Matrix algebra: most machine learning models are represented as matrices and vectors. Concepts like eigenvectors and singular value decomposition appear all over the place.
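To make that concrete, here is the kind of notation you will keep running into: an eigenvector $v$ of a square matrix $A$ is a vector whose direction $A$ leaves unchanged (it is only scaled by the eigenvalue $\lambda$), and the SVD factors any matrix into three simpler pieces:

$$A v = \lambda v, \qquad A = U \Sigma V^{\top}.$$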
Bayesian statistics: probability, Bayes' rule, common distributions (e.g., beta, Dirichlet, Gaussian), etc.
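For instance, Bayes' rule ties this whole topic together: the probability of a hypothesis $\theta$ after seeing data $x$ is

$$P(\theta \mid x) = \frac{P(x \mid \theta)\, P(\theta)}{P(x)},$$

where $P(\theta)$ is the prior and $P(x \mid \theta)$ is the likelihood.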
Multivariable calculus: most learning techniques use gradients and Hessians at their core to fit parameters. (If you want to get fancier, study numerical optimization.)
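As a taste of the notation (the symbols $L$ for a loss function and $\eta$ for a step size below are just illustrative): the gradient is the vector of partial derivatives, and gradient descent fits parameters $\theta$ by repeatedly stepping against it:

$$\nabla_\theta L = \left( \frac{\partial L}{\partial \theta_1}, \ldots, \frac{\partial L}{\partial \theta_n} \right), \qquad \theta_{t+1} = \theta_t - \eta\, \nabla_\theta L(\theta_t).$$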
Information theory: entropy, KL divergence, etc. Just the basics here.
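For example, for a discrete distribution $p$, the entropy (the average "surprise" of a sample) and the KL divergence from a second distribution $q$ are

$$H(p) = -\sum_x p(x) \log p(x), \qquad D_{\mathrm{KL}}(p \,\|\, q) = \sum_x p(x) \log \frac{p(x)}{q(x)}.$$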
In limited cases, higher-level math can be useful. For example, to understand manifold learning, you'll want to know some basic notions from geometry and topology. Occasionally abstract algebra is used (e.g., see "expectation semirings" for learning on hypergraphs). I would learn these as needed, but if you have a chance to learn them early, it can't hurt.
Now, whenever I try to learn these, I get confused by the symbols, functions, vectors, sets, subsets, etc. Given that I know only basic math, how can I learn these topics? I am also unsure which things I should learn first and which second.