I was just answering a very similar question over on mathematica.SE . That answer uses a probabilistic algorithm and assumes that $K$ is $\mathbb{C}$, that the entries of your matrices lie in a subfield of $\mathbb{C}$ where we can do exact arithmetic, and that $n$ is small enough that it is reasonable to perform linear algebra computations on matrices of size $n^2$. It would help us limit the size of our answers to know what $K$ you are working in; whether the entries of your matrices are exact or numeric; how large $n$ is; and whether you want a practical answer or a complexity analysis.
Here is a sketch of the general picture:
$\bullet$ Let $R$ be the sub-$K$-algebra of the $n \times n$ matrices generated by the $A_i$. All the methods I know think in terms of $R$; I don't know of any way that the problem gets easier if we assume $R$ is generated by two elements.
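For concreteness, here is one way to compute a basis of $R$ numerically: close the span of $\{\mathrm{Id}\}$ under left-multiplication by the generators. The function name `algebra_basis` and the rank tolerance are my own choices for this sketch, which assumes floating-point input.

```python
import numpy as np

def algebra_basis(gens, tol=1e-9):
    """Basis of the unital subalgebra R of Mat_n(K) generated by `gens`,
    found by closing the span of {Id} under left-multiplication by the
    generators.  Numerical rank decides linear independence."""
    n = gens[0].shape[0]
    basis = [np.eye(n)]
    frontier = [np.eye(n)]
    while frontier:
        new_frontier = []
        for M in frontier:
            for G in gens:
                P = G @ M
                stacked = np.array([X.ravel() for X in basis] + [P.ravel()])
                if np.linalg.matrix_rank(stacked, tol=tol) > len(basis):
                    basis.append(P)
                    new_frontier.append(P)
        frontier = new_frontier
    return basis

# Two generic 3x3 matrices almost surely generate all of Mat_3 (dim 9);
# a single diagonalizable matrix only generates its polynomials (dim 3 here).
rng = np.random.default_rng(0)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
print(len(algebra_basis([A, B])))                    # 9, with probability 1
print(len(algebra_basis([np.diag([1., 2., 3.])])))   # 3
```

At termination every product (generator)·(basis element) lies in the span, and the span contains $\mathrm{Id}$, so it is exactly $R$.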
$\bullet$ If your $A$ and $B$ are chosen at random, there is a very high probability that the ring $R$ is all of $\mathrm{Mat}_n(K)$ (and thus there are no invariant subspaces). It is worth testing for this before getting too involved in the problem. If $n$ is small enough that it is reasonable to compute a basis of $R$, this is straightforward. If not, a good probabilistic test is to take some elements $A_i$ in $R$ and compute the polynomial $\det(x_1 A_1 + \cdots + x_k A_k)$. If this multivariate polynomial is irreducible, then $R$ must be all of $\mathrm{Mat}_n(K)$. If the $A_i$'s are chosen randomly enough in $R$, then the converse holds. However, it is possible for $A$ and $B$ to generate $\mathrm{Mat}_n(K)$ while $\det(x \, \mathrm{Id} + y A + z B)$ nonetheless factors.
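The determinant test can be run in exact arithmetic. A SymPy sketch (`det_form_factors` is a name of my choosing), keeping in mind the caveat above that a factorization is only evidence, not proof, of reducibility:

```python
import sympy as sp

def det_form_factors(mats):
    """Factor det(x_1 A_1 + ... + x_k A_k) over Q.  A single factor of
    multiplicity one certifies that the mats generate all of Mat_n(K)."""
    xs = sp.symbols(f'x1:{len(mats) + 1}')
    L = sum((x * M for x, M in zip(xs, mats)), sp.zeros(*mats[0].shape))
    _, factors = sp.factor_list(sp.expand(L.det()))
    return factors

# Id, sigma_z, sigma_x generate Mat_2: det = x1^2 - x2^2 - x3^2, irreducible.
full = [sp.eye(2), sp.Matrix([[1, 0], [0, -1]]), sp.Matrix([[0, 1], [1, 0]])]
print(det_form_factors(full))
# Id and a nilpotent generate the upper triangular algebra: det = x1^2.
tri = [sp.eye(2), sp.Matrix([[0, 1], [0, 0]])]
print(det_form_factors(tri))
```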
$\bullet$ It is not a reasonable problem to list all invariant subspaces. As an example of the issues that come up, consider $2d \times 2d$ matrices, organized into $d \times d$ blocks, of the form $\left( \begin{smallmatrix} \mathrm{Id} & 0 \\ 0 & 2 \mathrm{Id} \end{smallmatrix} \right)$, $\left( \begin{smallmatrix} \mathrm{Id} & A \\ 0 & \mathrm{Id} \end{smallmatrix} \right)$, $\left( \begin{smallmatrix} \mathrm{Id} & B \\ 0 & \mathrm{Id} \end{smallmatrix} \right)$, $\left( \begin{smallmatrix} \mathrm{Id} & C \\ 0 & \mathrm{Id} \end{smallmatrix} \right)$. Inside the Grassmannian $G(3,2d)$, we can ask which $3$-planes are preserved by all of these matrices. There is the obvious $G(3,d)$, consisting of subspaces which are contained in the first $d$ coordinates. But there is also another component, birational to the plane curve $\det(x A + y B + z C)=0$, which would be quite a pain to write down in coordinates. In general, any projective variety can occur as a component of the variety of invariant subspaces for some set of matrices acting on a finite dimensional vector space.
What is reasonable is to compute a Jordan-Hölder filtration. The key to this is the ability to find one invariant subspace $V \subset K^n$; one then inductively finds invariant subspaces in $V$ and in $K^n/V$ until the entire filtration is found.
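The recursion itself is mechanical once a one-subspace routine is available. A NumPy sketch, where `find_invariant` stands for a hypothetical oracle returning a basis of a proper invariant subspace (here stubbed by `toy_find`, which only recognizes visibly block-triangular families):

```python
import numpy as np

def restrict(mats, V):
    """Action of the mats on the invariant column span of V (n x k)."""
    return [np.linalg.lstsq(V, M @ V, rcond=None)[0] for M in mats]

def quotient(mats, V):
    """Action induced on K^n / span(V), in coordinates on a complement."""
    n, k = V.shape
    Q, _ = np.linalg.qr(np.hstack([V, np.eye(n)]))
    W = Q[:, k:]                                  # basis of a complement
    P = np.linalg.inv(np.hstack([V, W]))[k:, :]   # coordinates mod span(V)
    return [P @ M @ W for M in mats]

def composition_factors(mats, find_invariant):
    """Dimensions of the Jordan-Hoelder factors.  `find_invariant` returns
    an n x k basis of a proper nonzero invariant subspace, or None when
    the module is simple; finding it is the hard step discussed below."""
    V = find_invariant(mats)
    if V is None:
        return [mats[0].shape[0]]
    return (composition_factors(restrict(mats, V), find_invariant)
            + composition_factors(quotient(mats, V), find_invariant))

def toy_find(mats):
    """Stand-in oracle: spots a common invariant coordinate subspace
    e_1, ..., e_k when all mats are block upper triangular."""
    n = mats[0].shape[0]
    for k in range(1, n):
        if all(np.allclose(M[k:, :k], 0) for M in mats):
            return np.eye(n)[:, :k]
    return None

print(composition_factors([np.array([[1., 1.], [0., 2.]])], toy_find))   # [1, 1]
print(composition_factors([np.array([[0., 1.], [-1., 0.]])], toy_find))  # [2]
```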
$\bullet$ You should therefore be thinking in terms of the general theory of subalgebras of $\mathrm{Mat}_n(K)$. This tells you that we can write $R = S + \mathrm{rad}(R)$, where $S$ is a direct sum of matrix algebras over division rings and $\mathrm{rad}(R)$ is a nilpotent two-sided ideal. Inside $\mathrm{Mat}_n(K)$, we can arrange that the elements of $S$ are block diagonal and the elements of $\mathrm{rad}(R)$ are block upper triangular (with the same blocks). Caution: It may be the case that $S$ as an abstract ring is, for example, $\mathrm{Mat}_{n/2}(K)$ and it is embedded in $\mathrm{Mat}_n(K)$ as $A \mapsto \left( \begin{smallmatrix} A & 0 \\ 0 & A \end{smallmatrix} \right)$. In other words, the number of summands of $S$ may be less than the number of blocks because some blocks may be equal.
By the way, I'd love a recommendation for an intro to finite dimensional algebras that actually talks about how to interpret all of these things in terms of matrix operations; I pointed to Pete Clark's notes in my answer on mathematica.SE, but they aren't as explicit as what I wrote above.
$\bullet$ If two blocks of $S$ are unequal, and $\theta$ is a random element of $R$, then it is highly probable that some eigenspace of $\theta$ is invariant for $R$. If $R$ has a nontrivial direct sum decomposition, then let $C(R)$ be $\{ z \in \mathrm{Mat}_n(K) : rz=zr \ \mbox{for all}\ r \in R \}$. Let $\alpha$ be a random element of $C(R)$. The eigenspaces of $\alpha$ will be invariant subspaces.
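Computing $C(R)$ is plain linear algebra: it is the common null space of the maps $z \mapsto rz - zr$ as $r$ runs over the generators. A NumPy sketch (function name mine):

```python
import numpy as np

def centralizer_basis(mats, tol=1e-9):
    """Basis of C(R) = {z : Mz = zM for all generators M}, computed as the
    null space of the stacked commutator maps, using the row-major
    identity vec(AXB) = (A kron B^T) vec(X)."""
    n = mats[0].shape[0]
    S = np.vstack([np.kron(M, np.eye(n)) - np.kron(np.eye(n), M.T)
                   for M in mats])
    _, s, Vt = np.linalg.svd(S)
    null = Vt[np.count_nonzero(s > tol):]     # right null vectors
    return [v.reshape(n, n) for v in null]

# diag(1,1,2) acts reducibly: its centralizer is the (2+1)-block-diagonal
# matrices, dimension 5.  A random combination of this basis supplies the
# alpha of the text, whose eigenspaces are invariant subspaces.
print(len(centralizer_basis([np.diag([1., 1., 2.])])))   # 5
```

When the generators already generate all of $\mathrm{Mat}_n(K)$, this returns only the scalars, so the dimension of $C(R)$ doubles as another cheap reducibility check.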
If neither case occurs, and $R \neq \mathrm{Mat}_n(K)$, then all of the blocks of $S$ are equal, and $\mathrm{rad}(R)$ is nontrivial. In this case, the goal is to construct an element of $\mathrm{rad}(R)$. The way I proposed to do this was to choose a random $A$ in $R$ and decompose $A$ as $A_{ss} + A_{nil}$, where $A_{ss}$ is diagonalizable (over $K^{\mathrm{alg}}$) and $A_{nil}$ is nilpotent. Both matrices lie in $R$ and can be computed with exact arithmetic. At least if $K$ is infinite, it is highly likely that $A_{nil}$ is a nontrivial element of the radical. The standard paper on this subject uses a more complicated method to get an element of the radical; I'm not sure what the advantages of that approach are. (ADDED An alternate idea: Let $A$ be a random element of $R$ and let $p$ be the square-free polynomial whose roots are the eigenvalues of $A$. Then $p(A)$ is likely to land in the radical, and is probably a bit easier to compute than $A_{nil}$.)
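The added idea is easy to carry out exactly: $p$ is the square-free part of the characteristic polynomial. A SymPy sketch (`radical_candidate` is my label, not standard terminology); per the text, for random $A$ in $R$ the output is likely, not guaranteed, to lie in $\mathrm{rad}(R)$:

```python
import sympy as sp

def radical_candidate(A):
    """p(A), where p is the square-free part of the characteristic
    polynomial of A; exact arithmetic throughout."""
    x = sp.Symbol('x')
    chi = A.charpoly(x).as_expr()
    p = sp.quo(chi, sp.gcd(chi, sp.diff(chi, x)), x)  # square-free part
    out = sp.zeros(A.rows, A.rows)
    for c in sp.Poly(p, x).all_coeffs():              # Horner evaluation at A
        out = out * A + c * sp.eye(A.rows)
    return out

# Jordan block: chi = (x-2)^2, p = x-2, and p(A) is nilpotent.
print(radical_candidate(sp.Matrix([[2, 1], [0, 2]])))
# Semisimple matrix: p = chi, so p(A) = 0 by Cayley-Hamilton.
print(radical_candidate(sp.Matrix([[1, 0], [0, 2]])))
```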
Once you have a nonzero element $B$ of $\mathrm{rad}(R)$, its image is contained in the radical of the module $K^n$, which is a proper submodule. So compute the $R$-submodule of $K^n$ generated by $\mathrm{Im}(B)$, and that will be a nontrivial invariant subspace.
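Computing that submodule is a straightforward closure of the column span under the generators. A NumPy sketch (function name mine):

```python
import numpy as np

def generated_submodule(mats, B, tol=1e-9):
    """Basis of the smallest subspace containing the columns of B and
    invariant under every matrix in `mats`: the R-submodule of K^n
    generated by Im(B)."""
    n = B.shape[0]
    basis = np.zeros((n, 0))
    frontier = [B[:, j] for j in range(B.shape[1])]
    while frontier:
        v = frontier.pop()
        if np.linalg.matrix_rank(np.column_stack([basis, v]), tol=tol) > basis.shape[1]:
            basis = np.column_stack([basis, v])
            frontier.extend(M @ v for M in mats)   # spin the new vector
    return basis

# For this upper triangular algebra, e1 generates a 1-dimensional
# invariant subspace, while e2 generates everything.
T = [np.array([[1., 1.], [0., 1.]])]
print(generated_submodule(T, np.array([[1.], [0.]])).shape[1])   # 1
print(generated_submodule(T, np.array([[0.], [1.]])).shape[1])   # 2
```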
$\bullet$ If you want to search for more information, a useful thing to look for is "meataxe algorithm".