In the mathematical optimization literature it is common to distinguish problems according to whether or not they are convex. The reason seems to be that every local minimum of a convex problem is also a global minimum, so local methods such as gradient (steepest) descent are guaranteed to find globally optimal solutions, but I am not convinced.
For example, the function $|x|^{0.001}$ is not convex (see the yellow-shaded area in the picture below), but it has a single global minimizer at $x = 0$.
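To make this concrete, here is a minimal numerical check (my own sketch, not part of the original claim) that $|x|^{0.001}$ violates the midpoint convexity inequality $f\big(\frac{x+y}{2}\big) \le \frac{f(x)+f(y)}{2}$ while still attaining its unique minimum at $x = 0$:

```python
# Minimal sketch: f(x) = |x| ** 0.001 is not convex, yet its unique
# global minimizer is x = 0.

def f(x):
    return abs(x) ** 0.001

# Convexity would require f((x + y) / 2) <= (f(x) + f(y)) / 2 for all x, y.
x, y = 0.0, 1.0
mid = f((x + y) / 2)     # f(0.5) = 0.5 ** 0.001, roughly 0.9993
avg = (f(x) + f(y)) / 2  # (0 + 1) / 2 = 0.5
print(mid <= avg)        # False: the convexity inequality fails here

# Yet f(0) = 0 while f(x) is close to 1 for any x != 0, so x = 0
# is the single global minimizer mentioned above.
print(f(0.0), f(1e-6), f(1.0))  # 0.0, ~0.986, 1.0
```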
Update:
From the comments and answers I learned (or recalled) that the function above is quasiconvex, i.e. it satisfies $f(\lambda x + (1 - \lambda) y) \le \max\{f(x), f(y)\}$ for all $x, y$ and $\lambda \in [0, 1]$. If my understanding is correct, and quasiconvex functions such as the one above have a single global minimizer:
- Why is convexity more important than quasiconvexity in optimization? Why do most optimization texts focus on convexity rather than quasiconvexity? (A one-dimensional sketch of what quasiconvexity alone already buys you follows below.)
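To illustrate the last point (my own sketch, assuming the relevant property is strict quasiconvexity, i.e. unimodality): in one dimension such a function can be minimized by ternary search, which only compares function values and never uses gradients or convexity:

```python
# Sketch: ternary search minimizes a strictly quasiconvex (unimodal)
# 1-D function using only function-value comparisons.

def ternary_search(f, lo, hi, tol=1e-9):
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2  # by unimodality, no minimizer lies in (m2, hi]
        else:
            lo = m1  # by unimodality, no minimizer lies in [lo, m1)
    return (lo + hi) / 2

f = lambda x: abs(x) ** 0.001
print(ternary_search(f, -1.0, 2.0))  # ~0, the unique global minimizer
```

Ternary search exploits the unimodal structure of the one-dimensional problem; I do not see an equally simple comparison-based analogue in higher dimensions, which may be part of the answer.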