The main task here is to guess the correct hypotheses for the statement to be proven... Let us assume that the random variables $X_k$ are (1) identically distributed, (2) almost surely nonnegative, and (3) not necessarily independent.
Introduce the event $A=[X_1+\cdots+X_6\leqslant3]$ and the random variable $N=\sum\limits_{k=1}^6\mathbf 1_{X_k\leqslant1}$.
Then $\mathrm E(N)=6\,\mathrm P(X_1\leqslant1)$ and $A\subseteq[N\geqslant3]$: since each $X_k\geqslant0$, one has $X_k\geqslant\mathbf 1_{X_k\gt1}=1-\mathbf 1_{X_k\leqslant1}$, hence $X_1+X_2+\cdots+X_6\geqslant6-N$ pointwise, and on $A$ this yields $6-N\leqslant3$, that is, $N\geqslant3$. Thus, $\mathrm P(A)\leqslant\mathrm P(N\geqslant3)\leqslant\frac13\mathrm E(N)=2\,\mathrm P(X_1\leqslant1)$, where the step $\mathrm P(N\geqslant3)\leqslant\frac13\mathrm E(N)$ is Markov's inequality.
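As a quick sanity check (not part of the proof), here is a minimal Monte Carlo sketch in Python. The particular distribution — exponential variables rescaled by a common uniform factor, chosen only so that the $X_k$ are identically distributed, nonnegative and dependent — and the function name `estimate` are illustrative assumptions, not part of the original problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate(n_samples=1_000_000):
    """Monte Carlo estimates of P(X_1+...+X_6 <= 3) and of 2*P(X_1 <= 1).

    Illustrative model: X_k = Z * E_k with E_k i.i.d. Exponential(1) and Z a
    common Uniform(0.5, 5) factor, so the X_k are identically distributed,
    nonnegative, and dependent (but not independent).
    """
    z = rng.uniform(0.5, 5.0, size=(n_samples, 1))   # common random scale
    e = rng.exponential(1.0, size=(n_samples, 6))    # i.i.d. exponential part
    x = z * e                                        # nonnegative, identically distributed, dependent
    p_a = np.mean(x.sum(axis=1) <= 3)                # P(X_1+...+X_6 <= 3)
    p_tail = np.mean(x[:, 0] <= 1)                   # P(X_1 <= 1)
    return p_a, 2 * p_tail

p_a, bound = estimate()
print(f"P(A) ~ {p_a:.4f}  <=  2*P(X_1<=1) ~ {bound:.4f}")
```

Any other nonnegative, identically distributed (possibly dependent) family would do; up to Monte Carlo error, the estimate of $\mathrm P(A)$ should never exceed the estimate of $2\,\mathrm P(X_1\leqslant1)$.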
This proves the desired inequality. The same approach proves more generally that, for every $0\leqslant k\lt n$, $ \mathrm P(X_1+\cdots+X_n\leqslant k)\leqslant\frac{n}{n-k}\,\mathrm P(X_1\leqslant1). $ One sees that neither independence nor finite moments are required. On the other hand, almost sure nonnegativity is crucial. To see this, assume for instance that $\mathrm P(X_k=-7)=p=1-\mathrm P(X_k=2)$ for every $k$ and that the random variables $X_k$ are independent.
Then $[N\geqslant1]\subseteq A$ (if some $X_k=-7$, the sum is at most $-7+5\cdot2=3$), hence $\mathrm P(A)\geqslant1-\mathrm P(N=0)$. Furthermore, $\mathrm P(X_1\leqslant1)=p$ and $\mathrm P(N=0)=(1-p)^6=1-6p+o(p)$ when $p\to0^+$, hence $\mathrm P(A)\geqslant1-(1-p)^6=6p+o(p)\gt2p=2\,\mathrm P(X_1\leqslant1)$ for every $p$ small enough, so the inequality fails without nonnegativity.
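A minimal numerical sketch of this counterexample follows; the value $p=0.01$ and the function name `counterexample` are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def counterexample(p=0.01, n_samples=1_000_000):
    """Estimate P(X_1+...+X_6 <= 3) when P(X_k = -7) = p = 1 - P(X_k = 2),
    with the X_k independent, and compare it with 2*P(X_1 <= 1) = 2p."""
    x = np.where(rng.random((n_samples, 6)) < p, -7.0, 2.0)
    p_a = np.mean(x.sum(axis=1) <= 3)   # equals 1 - (1-p)^6 up to Monte Carlo error
    return p_a, 2 * p, 1 - (1 - p) ** 6

p_a, bound, exact = counterexample()
print(f"P(A) ~ {p_a:.4f}, exact 1-(1-p)^6 = {exact:.4f}, 2*P(X_1<=1) = {bound:.4f}")
```

For $p=0.01$ the exact value $1-(1-p)^6\approx0.0585$ already exceeds $2p=0.02$ by roughly a factor of three, in line with the $6p+o(p)$ expansion above.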