If you write down the linear independence equation for the new vectors, you get $a(\mathbf{v_1} - \mathbf{v_2}) + 2b(\mathbf{v_2} - \mathbf{v_3}) + 3c(\mathbf{v_3} - \mathbf{v_4}) = \mathbf{0}$ for scalars $a, b, c$. This can be rearranged to $a\mathbf{v_1} + (2b - a)\mathbf{v_2} + (3c - 2b)\mathbf{v_3} - 3c\mathbf{v_4} = \mathbf{0}.$ What can you say about the coefficients of this equation?
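(To see where the second form comes from, just distribute the scalars and collect the terms attached to each vector: $a\mathbf{v_1} - a\mathbf{v_2} + 2b\mathbf{v_2} - 2b\mathbf{v_3} + 3c\mathbf{v_3} - 3c\mathbf{v_4} = \mathbf{0},$ and grouping the multiples of each $\mathbf{v_i}$ gives exactly the coefficients above.)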
Edit: I feel I need to add a bit more to this.
First, recall the definition of linear independence. A set of vectors $\{\mathbf{v_1}, \cdots, \mathbf{v_n}\}$ is linearly independent if the only solution to the equation $a_1\mathbf{v_1} + \cdots + a_n\mathbf{v_n}=\mathbf{0}$ is the one where all the scalars $a_1,\ \cdots,\ a_n$ are zero. Note we are not merely asking whether the vectors can sum to zero, but rather how they can sum to zero. If there is a non-trivial solution, i.e. a linear combination with not all coefficients zero that gives the zero vector, then the vectors are said to be linearly dependent. There is a dependence amongst the vectors in the sense that some of the vectors in the set can be written as a linear combination of the others.
In your case, we are interested in a set of four vectors. So we care about how these four vectors $\{\mathbf{v_1}, \mathbf{v_2}, \mathbf{v_3}, \mathbf{v_4}\}$ sum to zero. I could use $a_1, a_2, a_3$ as the scalar coefficients, or $b_1, b_2, b_3$, but I chose $a, b, c$ for convenience (no subscripts). Note that how we choose to name the scalars (or the vectors) has no real effect on the question; it doesn't matter what we call the coefficients, since they are just labels.
Therefore, we are interested in which values of $a$, $b$ and $c$ make this equation hold: $a(\mathbf{v_1} - \mathbf{v_2}) + 2b(\mathbf{v_2} - \mathbf{v_3}) + 3c(\mathbf{v_3} - \mathbf{v_4}) = \mathbf{0}$

We know nothing about these new vectors directly, but we do know about their constituents. So we expand and collect the terms on each of the original vectors, and we get $a\mathbf{v_1} + (2b - a)\mathbf{v_2} + (3c - 2b)\mathbf{v_3} - 3c\mathbf{v_4} = \mathbf{0}$

This is an equation we recognize. We know that the set $\{\mathbf{v_1}, \mathbf{v_2}, \mathbf{v_3}, \mathbf{v_4}\}$ is linearly independent, so every coefficient in the above equation must be 0. Namely $a=0, \qquad 2b - a = 0, \qquad 3c - 2b = 0, \qquad -3c = 0$

From this simple system, we can see that the only solution for $a, b, c$ is $a=b=c=0$. That means the new vectors are indeed linearly independent: it is impossible to find a non-zero solution to their linear independence equation.
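If you like to sanity-check this kind of bookkeeping numerically, here is a small sketch (my own illustration, using NumPy; it is not part of the algebra above). The coefficients of $\mathbf{v_1}, \cdots, \mathbf{v_4}$ depend linearly on $(a, b, c)$, and the system above has only the trivial solution exactly when the matrix of that linear map has full column rank.

```python
import numpy as np

# Rows are the coefficients of v1, v2, v3, v4 in the regrouped equation
#   a*v1 + (2b - a)*v2 + (3c - 2b)*v3 + (-3c)*v4 = 0,
# written as linear functions of (a, b, c), which label the columns.
M = np.array([
    [ 1,  0,  0],   # coefficient of v1:  a
    [-1,  2,  0],   # coefficient of v2:  2b - a
    [ 0, -2,  3],   # coefficient of v3:  3c - 2b
    [ 0,  0, -3],   # coefficient of v4: -3c
])

# Full column rank (3) means the only (a, b, c) making all four
# coefficients zero is (0, 0, 0), i.e. the difference vectors are independent.
print(np.linalg.matrix_rank(M))  # 3
```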
The way to prove that a set of vectors is linearly independent is to show that they cannot "make" the zero vector in a non-trivial way. What I mean is the following. Say we work in $\mathbb{R}^3$. Then the equation $a_1\begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix} + a_2\begin{pmatrix}0 \\ 1 \\ 0\end{pmatrix} + a_3\begin{pmatrix}0 \\ 0 \\ 1\end{pmatrix} = \begin{pmatrix}0 \\ 0 \\ 0\end{pmatrix}$ is the equation which determines whether the three vectors above are linearly independent.

We don't know what $a_1$, $a_2$ or $a_3$ is, but we'd like to find out. Certainly a solution exists, namely $a_1 = a_2 = a_3 = 0$ solves the equation. But this solution is too "obvious" and rather uninteresting, so we call it the trivial solution. Now we can ask: are there other solutions to the equation, non-trivial solutions where the coefficients are not all zero? For this particular set of vectors the answer is no, and it is easy to see why. $a_1$ must be zero or the first component will be non-zero; likewise $a_2$ must be zero or the second component will be non-zero, and the same goes for $a_3$. So we've shown that the above equation has only one solution, the trivial one. Linearly independent sets are, by definition, the sets for which only the trivial solution exists. In that sense, we have proven that the above set is linearly independent.

If there exist non-trivial solutions, then the set is called linearly dependent. One example is the following: $a_1\begin{pmatrix}1 \\ 1 \\ 0\end{pmatrix} + a_2\begin{pmatrix}1 \\ 2 \\ 1\end{pmatrix} + a_3\begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix} + a_4\begin{pmatrix}0 \\ 0 \\ 1\end{pmatrix} = \begin{pmatrix}0 \\ 0 \\ 0\end{pmatrix}$

Again, we want to find out which values of $a_1, \cdots, a_4$ solve the equation. We still have the trivial solution; it is always there. But more importantly, you can verify that $a_1 = -2, a_2 = 1, a_3 = 1, a_4 = -1$ also solves the equation. In fact, there exist infinitely many choices of coefficients which solve it. Since non-trivial solutions exist, we have proven that the above set is linearly dependent.
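As an aside (again a small numerical illustration of my own, assuming you have NumPy available), you can check both examples by stacking the vectors as columns and computing the rank:

```python
import numpy as np

# First set: the standard basis of R^3, stacked as columns.
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])
print(np.linalg.matrix_rank(A))  # 3 columns, rank 3 -> linearly independent

# Second set: four vectors in R^3, stacked as columns.
B = np.array([[1, 1, 1, 0],
              [1, 2, 0, 0],
              [0, 1, 0, 1]])
print(np.linalg.matrix_rank(B))  # 4 columns, rank 3 -> linearly dependent

# The non-trivial solution mentioned above really does give the zero vector.
coeffs = np.array([-2, 1, 1, -1])
print(B @ coeffs)  # [0 0 0]
```

A rank equal to the number of columns corresponds exactly to "only the trivial solution exists".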
This is how the method works in general; perhaps not as easily and clearly as in the above examples, but the same principles carry over. To prove that a set is linearly independent, you must prove that its vectors cannot add up to the zero vector in a non-trivial way. If you have any more confusion after this, please ask me.