Any $N \times N$ triangular system can be solved in $\mathcal{O}(N^2)$ operations.
For instance, if it is an upper triangular system, start from the last equation (the $N^{\text{th}}$ equation), which requires only one division to get the $N^{\text{th}}$ unknown. Once you have this, go to the previous equation (the $(N-1)^{\text{st}}$ equation), which requires only one multiplication, one subtraction and one division to get the $(N-1)^{\text{st}}$ unknown. The $(N-2)^{\text{nd}}$ equation requires $2$ multiplications, $2$ subtractions and $1$ division. In general, the $(N-k)^{\text{th}}$ equation requires $k$ multiplications, $k$ subtractions and $1$ division. Hence, the total cost is
$$1+2+3+\cdots+(N-1) = \frac{N(N-1)}{2} \text{ multiplications},$$
$$1+2+3+\cdots+(N-1) = \frac{N(N-1)}{2} \text{ subtractions},$$
$$N \text{ divisions}.$$
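Here is a minimal back-substitution sketch of the procedure above, assuming numpy (the function name `back_substitution` and the test matrix are my own illustrative choices):

```python
import numpy as np

def back_substitution(U, b):
    """Solve Ux = b for upper triangular U in O(N^2) operations."""
    N = len(b)
    x = np.zeros(N)
    for i in range(N - 1, -1, -1):
        # The dot product does the k multiplications and subtractions
        # of this row; then there is a single division per row.
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.triu(np.random.rand(4, 4)) + np.eye(4)  # well-conditioned upper triangular
b = np.random.rand(4)
print(np.allclose(U @ back_substitution(U, b), b))  # True
```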
The same idea works for lower triangular systems as well (in which case you start from the first equation and proceed all the way down).
In fact, the idea behind all matrix decomposition algorithms is to make the solve phase cheap: once you have the decomposition of the matrix, you can solve the linear system relatively inexpensively even if the right-hand side changes. The two main decomposition algorithms in use both satisfy this requirement.
- $A=LU$. Factoring $A$ into a lower triangular matrix times an upper triangular matrix. The factorization cost is $\mathcal{O}(N^3)$, but once this is done, solving $Ax = b$ requires only solving $Ly = b$ and then $Ux = y$, both costing $\mathcal{O}(N^2)$ since both are triangular systems.
- $A = QR$. Factoring $A$ into an orthogonal matrix (one with orthonormal columns) times an upper triangular matrix. The factorization cost is $\mathcal{O}(N^3)$, but once this is done, solving $Ax = b$ requires solving $Qy = b$ and then $Rx = y$. The nice thing about orthogonal matrices is that the inverse is nothing but the transpose. Hence, $y = Q^T b$, which is just a matrix-vector product costing $\mathcal{O}(N^2)$. Solving $Rx = y$ costs $\mathcal{O}(N^2)$ since it is a triangular system. Both patterns are illustrated in the sketch after this list.
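A hedged sketch of this "factor once, solve many times" pattern; scipy/numpy are my choice of tools here, and all variable names are illustrative:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, solve_triangular

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b1, b2 = rng.standard_normal(100), rng.standard_normal(100)

# LU: pay O(N^3) once, then each new right-hand side costs only O(N^2).
lu, piv = lu_factor(A)
x1 = lu_solve((lu, piv), b1)
x2 = lu_solve((lu, piv), b2)

# QR: same pattern. y = Q^T b is an O(N^2) matrix-vector product,
# and R x = y is an O(N^2) triangular solve.
Q, R = np.linalg.qr(A)
x1_qr = solve_triangular(R, Q.T @ b1)

print(np.allclose(x1, x1_qr))  # True: both routes solve A x = b1
```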
Other decompositions, like the SVD $A = U \Sigma V^T$, where $U$ and $V$ are orthogonal and $\Sigma$ is diagonal, also satisfy the requirement. Once we have $A = U \Sigma V^T$, solving $Ax = b$ amounts to solving $Uy = b$, whose solution is $y = U^T b$ and costs $\mathcal{O}(N^2)$; then $\Sigma z = y$, which is easily inverted since $\Sigma$ is just a diagonal matrix and hence costs $\mathcal{O}(N)$; and finally $V^T x = z$, whose solution is $x = Vz$ and costs $\mathcal{O}(N^2)$.
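A sketch of these three steps in numpy (variable names are mine; note that `np.linalg.svd` returns the diagonal of $\Sigma$ as a vector `s`):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
b = rng.standard_normal(50)

U, s, Vt = np.linalg.svd(A)  # O(N^3) factorization, paid once
y = U.T @ b                  # solve Uy = b: O(N^2) matrix-vector product
z = y / s                    # solve Sigma z = y: O(N) elementwise divisions
x = Vt.T @ z                 # x = Vz: O(N^2) matrix-vector product

print(np.allclose(A @ x, b))  # True
```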
EDIT In case you want the inverse of the unit lower triangular factor, you proceed as follows. (In numerical linear algebra, forming the inverse explicitly is one of the cardinal sins; in any application you will never need the inverse explicitly.)
$$L = \begin{pmatrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ l_{21} & 1 & 0 & 0 & \cdots & 0 \\ l_{31} & l_{32} & 1 & 0 & \cdots & 0 \\ l_{41} & l_{42} & l_{43} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & l_{n3} & l_{n4} & \cdots & 1 \end{pmatrix}$$
We can write $L$ as $L = L_1 L_2 L_3 \cdots L_{n-1}$, where $L_k$ is the identity except for the multipliers in column $k$:
$$L_k = \begin{pmatrix} 1 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & \cdots & 1 & 0 & \cdots & 0 \\ 0 & \cdots & l_{k+1,k} & 1 & \cdots & 0 \\ \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & l_{n,k} & 0 & \cdots & 1 \end{pmatrix}$$
Then $L^{-1} = L_{n-1}^{-1} L_{n-2}^{-1} L_{n-3}^{-1} \cdots L_1^{-1}$, where each $L_k^{-1}$ is just $L_k$ with the signs of its multipliers flipped: writing $L_k = I + m_k e_k^T$ with $m_k$ nonzero only below position $k$, we have $e_k^T m_k = 0$, so $(I + m_k e_k^T)(I - m_k e_k^T) = I$. Hence
$$L_k^{-1} = \begin{pmatrix} 1 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & \cdots & 1 & 0 & \cdots & 0 \\ 0 & \cdots & -l_{k+1,k} & 1 & \cdots & 0 \\ \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & -l_{n,k} & 0 & \cdots & 1 \end{pmatrix}$$
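A small numpy sketch (my own construction, not from the original answer) that verifies the sign-flip rule for each $L_k^{-1}$ and assembles $L^{-1}$ from the factors:

```python
import numpy as np

def elementary_factor(col, n, rng):
    """L_k: identity plus random multipliers below the diagonal in column `col`."""
    Lk = np.eye(n)
    Lk[col + 1:, col] = rng.standard_normal(n - col - 1)
    return Lk

rng = np.random.default_rng(0)
n = 5
factors = [elementary_factor(k, n, rng) for k in range(n - 1)]

# L = L_1 L_2 ... L_{n-1}: multiplying in this order just stacks the columns.
L = np.linalg.multi_dot(factors)

# Invert each factor by negating its multipliers, then multiply in reverse order.
inv_factors = []
for Lk in factors:
    Lk_inv = 2 * np.eye(n) - Lk  # negates the below-diagonal entries, keeps the unit diagonal
    assert np.allclose(Lk @ Lk_inv, np.eye(n))
    inv_factors.append(Lk_inv)

L_inv = np.linalg.multi_dot(inv_factors[::-1])
print(np.allclose(L @ L_inv, np.eye(n)))  # True
```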