(I am turning this into Community wiki, since the original version made an obvious mistake.)
The result follows, for example, from the Stone-Weierstrass theorem, once one justifies that the limit of certain integrals is the integral of the limit. This can be done (overkill) using Lebesgue's dominated convergence theorem, or (more easily) using simple estimates based on the fact that $f$, being continuous on a compact interval, is bounded.
Below I give full details, which you should probably not read until after your homework is due, since this also solves your homework.
Spoilers:
There is a sequence of polynomials $p_n(x)$ that converges uniformly to $f(x)$ on $[0,1]$. We have $\int_0^1xf(x)p_n(x)dx=0$ for all $n$, by assumption, since $xp_n(x)$ is a linear combination of monomials $x^k$ with $k\ge1$, and the integral of each of these against $f$ is $0$. Now take the limit as $n\to\infty$ (note that $xp_n(x)\to xf(x)$ uniformly) to conclude that $\int_0^1x(f(x))^2dx=0$.
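In display form, the limit step is the interchange
$$0=\lim_{n\to\infty}\int_0^1 xf(x)p_n(x)\,dx=\int_0^1 xf(x)\Bigl(\lim_{n\to\infty}p_n(x)\Bigr)\,dx=\int_0^1 x(f(x))^2\,dx,$$
which is legitimate because the convergence is uniform and $xf$ is bounded; this is exactly the point justified in detail below.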
This gives us that $f=0$: if $f$ is not identically zero, then by continuity there is some $x_0>0$ with $f(x_0)\ne 0$, and continuity further gives an $\epsilon>0$ and an interval $(a,b)\ni x_0$ with $a>0$ such that $|f(x)|\ge\epsilon$ for all $x\in(a,b)$. But then $\int_0^1xf(x)^2dx\ge la\epsilon^2>0$, where $l=b-a$ is the length of the interval, contradicting what we just proved.
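In more detail, the estimate is just
$$\int_0^1xf(x)^2\,dx\ \ge\ \int_a^b xf(x)^2\,dx\ \ge\ \int_a^b a\epsilon^2\,dx\ =\ a\epsilon^2(b-a)\ =\ la\epsilon^2,$$
using that $x\ge a$ and $|f(x)|\ge\epsilon$ on $(a,b)$.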
To see that the limit of the integrals is $0$ without using dominated convergence, let $M>0$ be such that $|f(x)|\le M$ for all $x\in[0,1]$ (such an $M$ exists because $f$ is continuous on a compact interval). Then, for any $\delta>0$, if $n$ is large enough that $|p_n(x)-f(x)|\le\delta/M$ on $[0,1]$, we have
$$\int_0^1f(x)xp_n(x)\,dx=\int_0^1 xf\times\bigl((p_n-f)+f\bigr)\,dx=\int_0^1xf(x)^2\,dx+\int_0^1 xf\times(p_n-f)\,dx,$$
and the second integral is bounded in absolute value by $\int_0^1 x|f||p_n-f|\,dx\le M(\delta/M)=\delta$, since $0\le x\le1$. As the left-hand side is $0$ for every such $n$ and $\delta>0$ was arbitrary, $\int_0^1xf(x)^2dx=0$.
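If you want to watch this convergence happen numerically, here is a small Python sketch (an illustration only, not part of the proof). It uses Bernstein polynomials as one concrete choice of the approximating $p_n$, and the particular $f$ below is an arbitrary continuous example; it does not assume the vanishing-moments hypothesis, it only illustrates that $\int_0^1 xf(x)p_n(x)dx\to\int_0^1 xf(x)^2dx$.

```python
import numpy as np
from math import comb

def f(x):
    # An arbitrary continuous function on [0, 1], just for illustration.
    return np.cos(3 * x) - 0.4

def bernstein(g, n, x):
    """Evaluate the n-th Bernstein polynomial of g at the points x."""
    k = np.arange(n + 1)
    coeffs = np.array([comb(n, j) for j in k], dtype=float)
    xx = np.asarray(x)[..., None]
    # B_n g(x) = sum_k g(k/n) C(n,k) x^k (1-x)^(n-k), which -> g uniformly on [0,1].
    return np.sum(g(k / n) * coeffs * xx**k * (1.0 - xx)**(n - k), axis=-1)

def trapezoid(y, x):
    """Simple trapezoidal rule for the integral of y over the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(0.0, 1.0, 2001)
target = trapezoid(x * f(x)**2, x)  # int_0^1 x f(x)^2 dx
for n in (5, 20, 80):
    approx = trapezoid(x * f(x) * bernstein(f, n, x), x)
    print(f"n = {n:3d}: {approx:.6f}  (difference from target: {abs(approx - target):.2e})")
```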
In fact, even this approach is overkill. (For example, Müntz's theorem gives a more general result, as already mentioned in another answer.)
(Apologies for the original mistake.)