It seems that you already understand the part about the inequality.
Without loss of generality, we assume that $\|x\|_{\infty} = 1$.
This simply says that you can rescale the eigenvector.
All parts of the proof before this sentence work for any eigenvector $\mathbf x$ corresponding to the eigenvalue $\lambda$. If $\mathbf x$ is an eigenvector, then so is $c\mathbf x$ for any scalar $c\ne0$. So we may simply take the eigenvector
$$\mathbf y=\frac{\mathbf x}{\|\mathbf x\|_\infty}.$$
The vector $\mathbf y$ is an eigenvector corresponding to the same eigenvalue $\lambda$. Moreover, since $\|c\mathbf x\|_\infty=|c|\,\|\mathbf x\|_\infty$, this vector has the property $\|\mathbf y\|_\infty=\|\mathbf x\|_\infty/\|\mathbf x\|_\infty=1$. So we can work with this eigenvector in the rest of the proof.
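As a quick numerical sanity check (with a hypothetical $2\times2$ matrix and eigenpair chosen just for illustration), one can verify that dividing an eigenvector by its infinity norm preserves the eigenvector property and yields max-norm $1$:

```python
# Illustration: rescaling an eigenvector by its infinity norm keeps it an
# eigenvector for the same eigenvalue and gives it infinity norm 1.
# The matrix A and eigenpair below are hypothetical examples.

def matvec(A, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def inf_norm(x):
    """Infinity norm: the largest absolute entry."""
    return max(abs(xi) for xi in x)

# A has eigenvalue lam = 3 with (unnormalized) eigenvector x = (2, 2).
A = [[2.0, 1.0], [1.0, 2.0]]
lam = 3.0
x = [2.0, 2.0]

# y = x / ||x||_inf is still an eigenvector for lam, now with ||y||_inf = 1.
y = [xi / inf_norm(x) for xi in x]

print(inf_norm(y))   # 1.0
print(matvec(A, y))  # [3.0, 3.0], i.e. lam * y
```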
Basically, the whole sentence "Without loss of generality, we assume that $\|x\|_{\infty} = 1$" says what I wrote in the paragraph above, and additionally that in the rest of the proof we denote this eigenvector by $\mathbf x$ (rather than $\mathbf y$) in order to simplify notation.
Just in case you are asking not what is meant by this sentence, but rather why it is useful in the proof and how it helps: the answer is that it does not change that much. If you really want, you can try to carry out the same proof with an arbitrary non-zero eigenvector $\mathbf x$ (i.e., without the assumption $\|\mathbf x\|_\infty=1$). Basically, the changes in the proof are that you choose $i$ as the index with maximal value of $|x_i|$ (instead of an index with $|x_i|=1$), and you use the inequality $|x_j|\le|x_i|$ (instead of $|x_j|\le1$). You should be able to arrive at almost the same inequality, but with each term multiplied by $|x_i|$:
$$|\lambda| \cdot |a_{ii}| \cdot |x_i| \le \sum\limits_{j=i+1}^n |a_{ij}| \cdot |x_i| + |\lambda| \sum_{j=1}^{i-1} |a_{ij}|\cdot |x_i|$$
Since $\mathbf x$ is non-zero, we have $|x_i|\ne0$, so you can divide both sides of the inequality by $|x_i|$ and then continue precisely as before.
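To make this variant of the argument concrete (again with a hypothetical eigenvector, not the one from your proof), the following sketch picks the index of the largest entry, checks the bound $|x_j|\le|x_i|$, and shows that dividing by $|x_i|$ reduces to the normalized case:

```python
# Sketch of the modified argument for an arbitrary (unnormalized)
# eigenvector. The vector x is a hypothetical example.

def inf_norm(x):
    """Infinity norm: the largest absolute entry."""
    return max(abs(xi) for xi in x)

# Unnormalized eigenvector of A = [[2, 1], [1, 2]] for eigenvalue 3.
x = [5.0, 5.0]

# Choose i as an index where |x_i| is maximal (instead of |x_i| = 1).
i = max(range(len(x)), key=lambda k: abs(x[k]))

# The bound |x_j| <= |x_i| now replaces |x_j| <= 1 ...
assert all(abs(xj) <= abs(x[i]) for xj in x)

# ... and since x != 0 forces |x_i| != 0, dividing by |x_i| recovers
# the normalized setting, with infinity norm equal to 1.
y = [xj / abs(x[i]) for xj in x]
print(inf_norm(y))  # 1.0
```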