When you multiply a matrix by a column vector, the result is a linear combination of the columns of the matrix. So, the equation $A\vec x=\vec b$ has a solution if and only if $\vec b$ is in the column space of $A$, that is, in the span of the columns of $A$. If this span is a proper subspace of the codomain, then there will be some vectors $\vec b$ for which the equation has no solution. On the other hand, if the columns span the entire codomain, then there will be a solution for every $\vec b$.
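Equivalently, $A\vec x=\vec b$ is solvable exactly when appending $\vec b$ as an extra column does not increase the rank of $A$. A minimal sketch in NumPy, using a hypothetical $3\times 2$ matrix (the matrices from the exercise aren't reproduced here) whose columns span only a plane in $\mathbb{R}^3$:

```python
import numpy as np

# Hypothetical 3x2 matrix whose columns span only the xy-plane in R^3
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

def solvable(A, b):
    # A x = b has a solution iff b lies in the column space of A,
    # i.e. appending b as a column does not increase the rank
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(solvable(A, np.array([2.0, 3.0, 0.0])))  # True: b is in the span
print(solvable(A, np.array([0.0, 0.0, 1.0])))  # False: b sticks out of the plane
```

Any $\vec b$ with a nonzero third component lies outside the column space, so the equation has no solution for it.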
This lets us answer the question for two of the matrices without much work. These matrices have three rows each, so their codomains are three-dimensional. All of the columns of the last matrix are identical, so its column space is one-dimensional. Three of the columns of the second matrix are identical, so its columns include at most two distinct vectors and its column space is at most two-dimensional. In both cases the columns cannot span the three-dimensional codomain, so neither equation has a solution for every $\vec b$.
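The effect of repeated columns on the rank is easy to see numerically. The exercise's matrices aren't shown here, so the two below are hypothetical examples with the same column patterns (all columns identical; three of four columns identical):

```python
import numpy as np

# All three columns identical: the column space is a line, so rank 1
C = np.array([[2.0, 2.0, 2.0],
              [1.0, 1.0, 1.0],
              [4.0, 4.0, 4.0]])
print(np.linalg.matrix_rank(C))  # 1

# Three of four columns identical: at most two distinct columns, so rank <= 2
B = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 2.0, 2.0, 1.0],
              [3.0, 3.0, 3.0, 5.0]])
print(np.linalg.matrix_rank(B))  # 2
```

Either way the rank falls short of 3, so the columns cannot span $\mathbb{R}^3$.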
The first matrix requires a bit more work. You’ll have to find the dimension of its column space (also known as the rank of the matrix), which you can do by row-reducing it. The rank is the number of non-zero rows in the row-reduced result. If any rows reduce to all zeros, then the matrix doesn’t have full rank, and so there will be some vectors $\vec b$ for which $A\vec x=\vec b$ has no solution.
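The row-reduction procedure can be sketched as a short Gaussian elimination that counts the non-zero rows left at the end. The matrix `M` below is a hypothetical example (its second row is twice its first), not the matrix from the exercise:

```python
import numpy as np

def rank_by_row_reduction(M, tol=1e-12):
    """Row-reduce a copy of M and count the non-zero (pivot) rows."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0  # next pivot row
    for c in range(cols):
        if r == rows:
            break
        # pick the largest entry in column c at or below row r as the pivot
        pivot = r + np.argmax(np.abs(A[r:, c]))
        if abs(A[pivot, c]) < tol:
            continue  # no pivot in this column
        A[[r, pivot]] = A[[pivot, r]]  # swap the pivot row into place
        A[r] = A[r] / A[r, c]          # normalize the pivot row
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]  # clear column c in the other rows
        r += 1
    return r  # number of non-zero rows = rank

# Hypothetical 3x3 matrix: row 2 is twice row 1, so the rank is 2
M = np.array([[1, 2, 3],
              [2, 4, 6],
              [1, 0, 1]])
print(rank_by_row_reduction(M))  # 2
```

Since the rank (2) is less than the number of rows (3), one row reduces to zeros and the matrix does not have full rank, so some right-hand sides $\vec b$ are unreachable.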
We could of course have row-reduced the other two matrices to find the answer for them, too, but if you develop an eye for dependencies among columns of a matrix, you can often save yourself a lot of work. It will also help you solidify the underlying concepts in your mind instead of blindly following a mechanical process.