If $a$, $b$, and $c$ are real numbers for which $a < 0$, then $x^* = \dfrac{-b}{2a}$ is a maximizer of $f(x) = ax^2 + bx + c$.
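(As a quick sanity check of the claim itself, here is a short numerical sketch with some coefficients I picked arbitrarily, with $a < 0$; it confirms that $f(x^*) \ge f(x)$ on a dense grid around $x^*$.)

```python
def f(x, a, b, c):
    """Quadratic f(x) = a*x^2 + b*x + c."""
    return a * x * x + b * x + c

# arbitrary sample coefficients with a < 0
a, b, c = -2.0, 3.0, 1.0
x_star = -b / (2 * a)  # claimed maximizer, x* = -b/(2a)

# check f(x*) >= f(x) on a dense grid covering x* +/- 10
grid = [x_star + (i - 5000) * 0.002 for i in range(10001)]
assert all(f(x_star, a, b, c) >= f(x, a, b, c) for x in grid)
print(x_star, f(x_star, a, b, c))  # prints 0.75 2.125
```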
The author gives the following proof:
Let $x$ be a real number. If $x^* \ge x$, then $x^* - x \ge 0$ and $a(x^* + x) + b \ge 0$. So, $(x^* - x)[a(x^* + x) + b] \ge 0$. On multiplying the term $x^* - x$ through, rearranging terms, and adding $c$ to both sides, one obtains that $a(x^*)^2 + bx^* + c \ge ax^2 + bx + c$. A similar argument applies when $x^* < x$.
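For what it is worth, I can follow the multiplication step that the author compresses. Writing it out (my own expansion, under the same hypotheses):
$$(x^* - x)\bigl[a(x^* + x) + b\bigr] = a\bigl((x^*)^2 - x^2\bigr) + b(x^* - x) = \bigl(a(x^*)^2 + bx^*\bigr) - \bigl(ax^2 + bx\bigr) \ge 0,$$
and adding $c$ to both sides of $a(x^*)^2 + bx^* \ge ax^2 + bx$ gives the stated conclusion. So my difficulty is only with the inequality itself, not with the algebra that follows it.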
I am confused as to where the inequality $a(x^* + x) + b \ge 0$ comes from. I have analysed the proposition, but I cannot see how this inequality follows from the hypotheses $x^* \ge x$ and $a < 0$.
I would greatly appreciate it if the knowledgeable members of MSE could please take the time to clarify this.
Thank you.