I think you meant to say that $B$ was obtained by performing elementary row operations on $A$ (which happens if and only if $A$ was obtained by performing elementary row operations on $B$ of course).
If you identify the columns of $B$ that contain a leading entry (the columns that do not correspond to the free variables), then the corresponding columns of $A$, columns 1 and 2 here, form a basis of the column space of $A$. Note then that the other columns of $A$ are linear combinations of the first two columns, so $S=\text{span}\{c_1,c_2\}$.
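(If you want to check this kind of computation mechanically, here is a small SymPy sketch. The matrix `A` in it is just a made-up example that row-reduces to the $B$ shown below, since the original $A$ isn't reproduced in this answer; substitute your own matrix.)

```python
from sympy import Matrix

# Made-up example whose reduced row echelon form is the B in this answer;
# replace it with your own A.
A = Matrix([
    [1, 0, 2, 0],
    [2, 1, 5, 1],
    [0, 1, 1, 1],
    [1, 1, 3, 1],
])

B, pivot_cols = A.rref()                    # B = reduced row echelon form, pivot_cols = (0, 1)
col_basis = [A.col(j) for j in pivot_cols]  # columns 1 and 2 of A: a basis of the column space

print(pivot_cols)   # (0, 1)  -- 0-indexed, i.e. columns 1 and 2
print(col_basis)
```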
Now, we find a basis of the null space of $A$:
First, the "columns of $A$" approach:
As mentioned above, any vector in the column space of $A$ is a linear combination of the first two columns, $c_1$ and $c_2$, of $A$. The dimension of the column space of $A$ is 2; thus, the dimension of the null space of $A$ is also 2 (the rank of $A$ equals the dimension of the column space of $A$, and the rank of $A$ plus the nullity of $A$ equals the number of columns of $A$, which is 4).
Now a vector in the null space of $A$ is a vector ${\bf x}$ satisfying $A{\bf x}={\bf 0}$. But multiplying $A$ by ${\bf x}$ amounts to taking a linear combination of columns of $A$ (the coordinates of $\bf x$ being the coefficients of the linear combination).
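Written out, with ${\bf x}=(x_1,x_2,x_3,x_4)^T$, we have $A{\bf x}=x_1c_1+x_2c_2+x_3c_3+x_4c_4$; so $A{\bf x}={\bf 0}$ says precisely that the coordinates $x_i$ are the coefficients of a linear combination of the columns of $A$ that equals ${\bf 0}$.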
So, to find a basis for the null space of $A$, we need two linearly independent vectors in the null space. By the previous paragraph, this can be done by considering linear combinations of the columns of $A$: we wish to find two linear combinations of the columns of $A$ that produce the zero vector, whose coefficient vectors are linearly independent.
But, we know that columns three and four of $A$ are each linear combinations of columns one and two. So, we can solve the equations: $ \alpha_1 c_1+\alpha_2 c_2+ c_3 ={\bf 0} $ and $ \beta_1 c_1+\beta_2 c_2+ c_4={\bf 0}. $ And then a basis for the null space of $A$ would be given by $\left\{ \begin{bmatrix}\alpha_1\\ \alpha_2\\ 1\\ 0\end{bmatrix}, \begin{bmatrix}\beta_1\\ \beta_2\\ 0\\ 1\end{bmatrix} \right\}$ (note that these will be independent).
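In this particular problem you don't even have to solve those systems from scratch: elementary row operations preserve linear relations among the columns, so the relations can be read off from $B$ below, namely $c_3=2c_1+c_2$ and $c_4=c_2$. That gives $\alpha_1=-2$, $\alpha_2=-1$, $\beta_1=0$, $\beta_2=-1$, i.e. exactly the two basis vectors produced by the standard method below.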
But I think a better and faster way to find a basis for the null space of $A$ is the usual method:
A basis for the null space of $A$ is given by a basis for the null space of $B$: if two matrices are row equivalent, i.e. one can be obtained from the other by performing elementary row operations, then they have the same null space. So, we will find a basis for the null space of $B$.
To find a basis for the null space of $B$, you may, after ensuring that it is in reduced row echelon form, identify the columns of $B$ that do not contain a leading entry: $ B = \begin{bmatrix} 1&0&\color{maroon}2&\color{maroon}0 \\ 0&1&\color{maroon}1&\color{maroon}1 \\ 0&0&\color{maroon}0&\color{maroon}0 \\ 0&0&\color{maroon}0&\color{maroon}0 \end{bmatrix}$
These columns correspond to the free variables. Using $x_1,x_2,x_3,x_4$ as the variable names (which correspond to the columns of $B$ in the obvious manner), $x_3$ and $x_4$ are free.
So, give the free variables arbitrary values: $\begin{aligned} x_3&=a\\ x_4&=b\end{aligned};$ and solve for the others:
$\ \ \ $ row 1 $\Rightarrow x_1 =-2x_3=-2a$
$\ \ \ $ row 2 $\Rightarrow x_2 =-x_3-x_4=-a-b$
Now form the general solution to $B{\bf x}={\bf 0}$: $ {\bf x}=\begin{bmatrix}x_1\\ x_2\\ x_3\\ x_4\end{bmatrix} =\begin{bmatrix}-2a\\ -a-b\\ a\\ b\end{bmatrix}; $ and "split it up" into the "$a$ part" and the "$b$ part": $ {\bf x}=a\begin{bmatrix}-2\\ -1\\ 1\\ 0\end{bmatrix} +b\begin{bmatrix}0\\ -1\\ 0\\ 1\end{bmatrix}. $ The vectors $\begin{bmatrix}-2\\-1\\1\\0\end{bmatrix}$ and $\begin{bmatrix}0\\-1\\0\\1\end{bmatrix}$ form a basis of the null space of $B$ and, hence, of the null space of $A$.
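As a quick sanity check, SymPy reproduces the same basis directly from $B$ (and applying `nullspace()` to any matrix that row-reduces to $B$, such as the sample `A` above, gives the same two vectors):

```python
from sympy import Matrix

B = Matrix([
    [1, 0, 2, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
])

for v in B.nullspace():
    print(v.T)   # Matrix([[-2, -1, 1, 0]]) and Matrix([[0, -1, 0, 1]])
```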