This is simply the extension of a given vector norm to matrices (the induced, or operator, norm). It is perhaps better written as $$\sup \left\{\frac{\|Ax\|}{\|x\|}: x \in \mathbb{R}^n, x\neq 0\right\} = \sup \left\{\|Ax\|:x \in \mathbb{R}^n, \|x\|=1\right\}.$$
That is, run through all normalized vectors $x$ and see where $\|Ax\|$ is maximized.
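As a rough numerical sketch of this definition (the sampling scheme and the $4\times 4$ matrix below are my own illustration), one can approximate the supremum by sampling many random unit vectors and taking the largest $\|Ax\|$. For the Euclidean norm the exact value is the largest singular value of $A$, which NumPy exposes as `np.linalg.norm(A, 2)`:

```python
import numpy as np

# Monte Carlo sketch: approximate sup{ ||Ax|| : ||x|| = 1 } by sampling
# random unit vectors. Here the Euclidean norm is used, so the exact value
# is the largest singular value of A, available as np.linalg.norm(A, 2).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))          # an arbitrary example matrix

def estimate_operator_norm(A, num_samples=100_000):
    xs = rng.standard_normal((num_samples, A.shape[1]))
    xs /= np.linalg.norm(xs, axis=1, keepdims=True)   # normalize: ||x|| = 1
    return np.linalg.norm(xs @ A.T, axis=1).max()     # largest ||Ax|| seen

estimate = estimate_operator_norm(A)
exact = np.linalg.norm(A, 2)             # spectral norm (largest singular value)
# Sampling can only ever give a lower bound on the supremum
print(estimate, exact)
```

Note that sampling gives only a lower bound on the supremum, though in low dimensions it comes quite close.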
-----Edit-----
As an example, let us consider the $L^1$-norm. For a matrix $A \in \mathbb{R}^{n\times n}$, by definition we have that $$\|A\|_1 = \sup \left\{\|Ax\|_1:x \in \mathbb{R}^n, \|x\|_1=1\right\}.$$
Now, for a given vector $x \in \mathbb{R}^n$, we have that $$ \displaystyle\|Ax\|_1 = \left\|\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}\cdot \begin{pmatrix}
x_1\\
x_2\\
\vdots\\
x_n
\end{pmatrix}\right\|_1 = \left\|\begin{pmatrix}
\sum_{j=1}^na_{1j}x_j\\
\sum_{j=1}^na_{2j}x_j\\
\vdots\\
\sum_{j=1}^na_{nj}x_j
\end{pmatrix}\right\|_1 = \sum_{i=1}^n\left|\sum_{j=1}^na_{ij}x_j\right|.$$
Thus, we can write the $L^1$-norm of $A$ as $$\displaystyle \|A\|_1 = \sup_{\|x\|_1=1}\left\{\sum_{i=1}^n\left|\sum_{j=1}^na_{ij}x_j\right|\right\}.$$

By the triangle inequality, we can bound this as $$\|A\|_1 \leq \sup_{\|x\|_1=1}\left\{\sum_{i=1}^n\sum_{j=1}^n\left|a_{ij}x_j\right|\right\} = \sup_{\|x\|_1=1}\left\{\sum_{j=1}^n\left|x_j\right|\sum_{i=1}^n\left|a_{ij}\right|\right\}.$$

Now, observe that $$\displaystyle \sum_{j=1}^n\left|x_j\right|\sum_{i=1}^n\left|a_{ij}\right| \leq \sum_{j=1}^n\left|x_j\right|\max_{1\leq k \leq n}\left\{\sum_{i=1}^n\left|a_{ik}\right|\right\} = \|x\|_1\cdot \max_{1\leq k \leq n}\left\{\sum_{i=1}^n\left|a_{ik}\right|\right\}.$$

Since our matrix norm is concerned only with normalized vectors ($\|x\|_1 = 1$), it follows that $$\displaystyle \|A\|_1 \leq \sup_{\|x\|_1=1}\left\{\|x\|_1\cdot \max_{1\leq k \leq n}\left\{\sum_{i=1}^n\left|a_{ik}\right|\right\}\right\} = \max_{1\leq k \leq n}\left\{\sum_{i=1}^n\left|a_{ik}\right|\right\}.$$

This gives an upper bound, and it is attained: simply take $j^*$ to be the column achieving the maximum above, and let $x_{j^*} = 1$ and $x_i = 0$ for $i \neq j^*$. Thus, the $L^1$-norm of a matrix is simply the maximum of the absolute column sums.

So, as you see, the definition may be readily understood, but determining the value of the norm may be less so. In many cases a closed form of the norm doesn't exist, and you need to numerically solve the optimization (maximization) problem for each given $A$.
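The closed form above is easy to check numerically. A small sketch (the $3\times 3$ matrix is a made-up example): compute the absolute column sums, compare the maximum against NumPy's induced 1-norm, and verify that the bound is attained at the standard basis vector $e_{j^*}$.

```python
import numpy as np

# Made-up example matrix; its absolute column sums are 7, 8 and 6,
# so the induced L^1 norm should be 8, attained at column j* = 1.
A = np.array([[ 1.0, -2.0,  3.0],
              [ 4.0,  1.0, -1.0],
              [-2.0,  5.0,  2.0]])

col_sums = np.abs(A).sum(axis=0)        # column-wise sums of |a_ij|
j_star = int(np.argmax(col_sums))       # index of the maximizing column
closed_form = col_sums[j_star]          # max absolute column sum

numpy_norm = np.linalg.norm(A, 1)       # NumPy's induced 1-norm

# The bound is attained at x = e_{j*} (the standard basis vector)
e = np.zeros(A.shape[1])
e[j_star] = 1.0
attained = np.linalg.norm(A @ e, 1)

print(closed_form, numpy_norm, attained)   # all three agree
```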