For a binomial random variable $X = \sum_{k=1}^n X_k$, we use the bilinearity of covariance together with the fact that the $X_k$ are indicator random variables for success in each of the $n$ independent Bernoulli trials, all with the same success probability $p$.
$$\begin{align}\mathsf {Var}(X) &= \sum_{k=1}^n\sum_{h=1}^n \mathsf {Cov}(X_k, X_h) \\ &= \sum_{k=1}^n\mathsf{Var}(X_k) +2\sum_{k=1}^{n-1}\sum_{h=k+1}^n \mathsf{Cov}(X_k,X_h) \\ &= n(\mathsf E(X_1^2)-\mathsf E(X_1)^2)+0\end{align}$$
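The double sum of cross covariances vanishes because, for $k\neq h$, the trials are independent:

$$\mathsf{Cov}(X_k, X_h) = \mathsf E(X_kX_h) - \mathsf E(X_k)\mathsf E(X_h) = p\cdot p - p\cdot p = 0 \qquad (k\neq h)$$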
Alternatively, the same result follows directly from the definition of variance.
$$\begin{align}\mathsf {Var}(X) &= \mathsf E(X^2)-\mathsf E(X)^2
\\ &= \mathsf E((\sum_{k=1}^n X_k)(\sum_{h=1}^n X_h))-(\mathsf E(\sum_{k=1}^n X_k))^2\\ & =\sum_{k=1}^n\mathsf E(X_k^2)+2\sum_{k=1}^{n-1}\sum_{h=k+1}^n\mathsf E(X_kX_h)-\sum_{k=1}^n\mathsf E(X_k)^2-2\sum_{k=1}^{n-1}\sum_{h=k+1}^n\mathsf E(X_k)\mathsf E(X_h)\\ &= n\mathsf E(X_1^2)-n\mathsf E(X_1)^2\end{align}$$
Here the cross terms cancel because, for $k\neq h$, independence gives $\mathsf E(X_kX_h)=\mathsf E(X_k)\mathsf E(X_h)$.
Now from the definition of expectation:
$$\begin{align}\mathsf E(X_1) &= 1\cdot\mathsf P(X_1=1)+0\cdot\mathsf P(X_1=0) \\[1ex] &= p \\[2ex]\mathsf E(X_1^2) &= 1^2\cdot\mathsf P(X_1=1)+0^2\cdot\mathsf P(X_1=0) \\[1ex] &= p\end{align}$$
Putting these together:
$$\mathsf{Var}(X) = n(\mathsf E(X_1^2)-\mathsf E(X_1)^2) = n(p - p^2) = np(1-p)$$
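As a quick numerical sanity check (a minimal sketch using only Python's standard library; the parameter values below are arbitrary choices, not from the derivation), the sample mean and variance of simulated binomial draws should land close to $np$ and $np(1-p)$:

```python
import random
import statistics

random.seed(0)
n, p = 20, 0.3            # arbitrary example parameters
trials = 100_000

# Each binomial draw is built as the sum of n independent Bernoulli(p) indicators,
# mirroring the decomposition X = X_1 + ... + X_n used in the derivation.
samples = [sum(random.random() < p for _ in range(n)) for _ in range(trials)]

sample_mean = statistics.mean(samples)
sample_var = statistics.pvariance(samples, sample_mean)

print(sample_mean)  # expect a value near n*p = 6
print(sample_var)   # expect a value near n*p*(1-p) = 4.2
```

With this many trials, the empirical variance typically agrees with $np(1-p)$ to within a few hundredths.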