Suppose you are given $k$ vectors $v_1, \dots, v_k$ in $\mathbb{R}^m$ and $k$ vectors $w_1, w_2, \dots, w_k$ in $\mathbb{R}^n$. Here are two problems:
1. Determine whether or not there is a linear map $T: \mathbb{R}^m \to \mathbb{R}^n$ satisfying $T(v_j) = w_j$ for all $1 \leq j \leq k$. (Note: the answer to this is simply yes or no, not a linear transformation.)
2. When the answer to the first question is yes, determine all linear transformations $T$ with this property, or at least find one of them.
Both can be answered via a routine matrix calculation. There are lots of ways of doing this, actually, but here is one way to organize the work.
Let $M$ denote the $k \times (m + n)$ matrix whose $j$th row consists of the entries of the vector $v_j$ followed by the entries of the vector $w_j$.
Using elementary row operations, turn $M$ into a matrix $R$ in row echelon form.
The matrix $R$ is what you use to answer the above two questions.
If there is a row of $R$ whose first $m$ entries are zero, but which has a nonzero entry later on, then the answer to the first question above is "no". Otherwise the answer to the first question is "yes".
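If it helps to see this yes/no test as code, here is a minimal sketch using SymPy's exact arithmetic (the function name `extension_exists` is my own choice, and this is just one way to phrase the test):

```python
import sympy as sp

def extension_exists(vs, ws):
    """Return True iff some linear T: R^m -> R^n has T(v_j) = w_j for all j."""
    m, n = len(vs[0]), len(ws[0])
    # Row j of M is the entries of v_j followed by the entries of w_j.
    M = sp.Matrix([list(v) + list(w) for v, w in zip(vs, ws)])
    R, _ = M.rref()  # reduced row echelon form (any row echelon form works too)
    # "No" exactly when some row of R is zero in its first m entries
    # but nonzero somewhere in its last n entries.
    return not any(
        all(R[i, j] == 0 for j in range(m))
        and any(R[i, m + j] != 0 for j in range(n))
        for i in range(R.rows)
    )
```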
When the answer is "yes", do the following:
- Let $A$ denote the $k \times m$ block consisting of the first $m$ columns of $R$, and for $1 \leq j \leq n$ let $b_j$ denote the $(m+j)$th column of $R$, thought of as a column vector.
- Use the standard algorithm you hopefully know to find the general solution to $Ax = b_j$, with $x$ unknown, and write this general solution (free variables and all; use different free variables for each $j$) as a row vector $r_j$. Make these row vectors the rows of a matrix $C$, as sketched in the code below. (If you aren't interested in the most general $T$ that does the job, but just want one of them, you could, for example, assign the value $0$ to any free variable that pops up in the calculation.)
The matrix $C$ you've just made is the matrix of the most general linear transformation $T$ from $\mathbb{R}^m$ to $\mathbb{R}^n$ satisfying $T(v_j) = w_j$ for $1 \leq j \leq k$. (If $C$ has no free variables in it, then there is a unique $T$; otherwise, you get a different $T$ for each assignment of values to all those free variables.)
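To make the second stage concrete, here is a sketch of the whole procedure in the same vein (again assuming SymPy; the name `all_extensions` and the free-variable renaming scheme are mine, and SymPy's `gauss_jordan_solve` is what supplies the general solution to $Ax = b_j$ with parameters):

```python
import sympy as sp

def all_extensions(vs, ws):
    """Matrix of the most general T with T(v_j) = w_j, assuming one exists;
    free variables appear as symbols t_{ji} when T is not unique."""
    m, n = len(vs[0]), len(ws[0])
    M = sp.Matrix([list(v) + list(w) for v, w in zip(vs, ws)])
    R, _ = M.rref()
    A = R[:, :m]  # the first m columns of R
    rows = []
    for j in range(n):
        b = R[:, m + j]  # the (m+j)th column of R
        x, params = A.gauss_jordan_solve(b)  # general solution, with parameters
        # Rename the parameters so that different j's get different free variables.
        x = x.subs({p: sp.Symbol(f"t_{j}{i}") for i, p in enumerate(params)})
        rows.append(list(x.T))  # the solution, written as a row
    return sp.Matrix(rows)
```

If you only want one $T$, substitute $0$ for every `t_{ji}` symbol in the result. Running this on the example worked out below reproduces the same two-parameter family of matrices found by hand.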
You should hopefully be able to see at a glance that the first step of the algorithm (producing the answer "yes" or "no") is an algorithmic version of what Henning was talking about: when you are given too many vectors to form a basis, you must check that any linear relations satisfied by the $v_j$'s are also satisfied by the $w_j$'s. The algorithm produces "no" precisely when the $v_j$'s satisfy some nontrivial linear relation that is not satisfied by the $w_j$'s.
If you want to understand more formally why the algorithm works, I will leave the details to you (it is very helpful to think these things over for oneself, if you want to understand matrix calculation). But here is a series of steps toward understanding it: note that finding matrix solutions $C$ to the system of vector equations $C v_j = w_j$, $1 \leq j \leq k$, is the same thing as finding matrix solutions $C$ to the single matrix equation $C (v_1 | \dots | v_k) = (w_1 | \dots | w_k)$. Transposing both sides, this is the same as finding matrix solutions $C$ to $(v_1 | \dots | v_k)^T C^T = (w_1 | \dots | w_k)^T$, and this in turn is equivalent to finding the most general vector solution $x$ to each of the equations $(v_1 | \dots | v_k)^T x = \big(\text{the $j$th column of } (w_1 | \dots | w_k)^T\big)$ for $1 \leq j \leq n$ (having done that for each $j$, put the solutions side by side to make the columns of the matrix $C^T$). If you unravel all of that in your mind, and feed it into your understanding of how you solve matrix equations $Ax = b$, it should make sense.
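If the transposition step feels slippery, here is a tiny symbolic check (again assuming SymPy) that $CV = W$ and $V^T C^T = W^T$ impose exactly the same scalar equations on the entries of $C$:

```python
import sympy as sp

# Generic 2x3 C, 3x2 V (columns v_1, v_2), 2x2 W (columns w_1, w_2).
C = sp.Matrix(2, 3, lambda i, j: sp.Symbol(f"c{i}{j}"))
V = sp.Matrix(3, 2, lambda i, j: sp.Symbol(f"v{i}{j}"))
W = sp.Matrix(2, 2, lambda i, j: sp.Symbol(f"w{i}{j}"))

# The entries of C*V - W and of V.T*C.T - W.T are the same expressions,
# just arranged differently, so the two systems have the same solutions C.
assert set(C * V - W) == set(V.T * C.T - W.T)
```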
Here is the method in action with $k = 2$, $v_1 = (1,-1,1)$, $v_2 = (1,1,1)$, and $w_1 = (1,0)$ and $w_2 = (0,1)$. We have $ M = \begin{pmatrix} 1 & -1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 & 1 \end{pmatrix}. $ Putting $M$ in reduced row echelon form, I get $ R = \begin{pmatrix} 1 & 0 & 1 & \frac{1}{2} & \frac{1}{2} \\ 0 & 1 & 0 & -\frac{1}{2} & \frac{1}{2} \end{pmatrix}. $ Using the standard algorithm to solve $\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} \frac{1}{2} \\ -\frac{1}{2} \end{pmatrix}$ for $x$, I find that the general solution has the form $(\frac{1}{2} - t_1, -\frac{1}{2}, t_1)$, with $t_1$ a free variable. (Note that, as per the algorithm, I am thinking of this solution as a row vector.)
Using the standard algorithm to solve $\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} \frac{1}{2} \\ \frac{1}{2} \end{pmatrix}$ for $x$, I find that the general solution has the form $(\frac{1}{2} - t_2, \frac{1}{2}, t_2)$, with $t_2$ a free variable (again written as a row vector).
So the matrix of the most general linear transformation $T$ that does what we want is $ \begin{pmatrix} \frac{1}{2} - t_1 & -\frac{1}{2} & t_1 \\ \frac{1}{2} - t_2 & \frac{1}{2} & t_2 \end{pmatrix}, $ with $t_1$ and $t_2$ arbitrary.
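As a sanity check (assuming SymPy), this family of matrices really does send $v_1 \mapsto w_1$ and $v_2 \mapsto w_2$ for every choice of $t_1$ and $t_2$:

```python
import sympy as sp

t1, t2 = sp.symbols("t_1 t_2")
half = sp.Rational(1, 2)
C = sp.Matrix([[half - t1, -half, t1],
               [half - t2,  half, t2]])
v1, v2 = sp.Matrix([1, -1, 1]), sp.Matrix([1, 1, 1])
# The t's cancel, so these hold identically.
assert C * v1 == sp.Matrix([1, 0])
assert C * v2 == sp.Matrix([0, 1])
```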
In action with your second example, with $k = 3$ and $v_1 = (1, -1)$, $v_2 = (2, -1)$, and $v_3 = (-3, 2)$, and $w_1 = (1,0)$, $w_2 = (0,1)$, and $w_3 = (1,1)$, you have $ M = \begin{pmatrix} 1 & -1 & 1 & 0 \\ 2 & -1 & 0 & 1 \\ -3 & 2 & 1 & 1 \end{pmatrix}, $ which can be row reduced to $ R = \begin{pmatrix} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & 1 \end{pmatrix}, $ and we can see from the third row (its first $m = 2$ entries are zero, but it has a nonzero entry after them) that the answer in this case is "no".
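And the same row reduction in SymPy reproduces the "no" for this example:

```python
import sympy as sp

M = sp.Matrix([[ 1, -1, 1, 0],
               [ 2, -1, 0, 1],
               [-3,  2, 1, 1]])
R, _ = M.rref()
print(R)  # Matrix([[1, 0, 0, 2], [0, 1, 0, 3], [0, 0, 1, 1]])
# The third row is zero in its first m = 2 entries but not afterwards,
# so no linear T with T(v_j) = w_j exists.
```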