Let $(\mathfrak{g},[\,,\,])$ be a finite-dimensional simple complex Lie algebra. Suppose that $\mathfrak{g}$ admits a Cartan subalgebra $\mathfrak{h}$, which (for me) means an abelian, self-centralizing, ad-semisimple Lie subalgebra. Let $R$ denote the set of roots of the pair $(\mathfrak{g},\mathfrak{h})$, that is, the set of all nonzero $\alpha \in \mathfrak{h}^{\ast}$ for which the space $\mathfrak{g}^{\alpha}= \{X \in \mathfrak{g}\, |\, \forall H \in \mathfrak{h}\,: [H,X] = \alpha(H)X\}$ is nonzero. These assumptions imply that $\mathfrak{g} = \mathfrak{h} \oplus \bigoplus_{\alpha \in R}{\mathfrak{g}^{\alpha}}$.
Suppose in addition that there exists a non-degenerate symmetric bilinear form $(\,,\,)$ on $\mathfrak{g}$ which satisfies $([H,X],Y) + (X,[H,Y]) = 0$ for all $X,Y,H \in \mathfrak{g}$.
According to my notes, these conditions should suffice to show that $\alpha \in R$ if and only if $-\alpha \in R$. I do not understand why this is true. I know that for all $\lambda_1, \lambda_2 \in \mathfrak{h}^{\ast}$ we have $(\mathfrak{g}^{\lambda_1},\mathfrak{g}^{\lambda_2}) = 0$ if $\lambda_1 + \lambda_2 \neq 0$, but this does not seem to help. Or does it?
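For reference, here is the short computation behind the orthogonality fact I just mentioned, in case it matters for an answer (it uses only the invariance of the form). For $X \in \mathfrak{g}^{\lambda_1}$, $Y \in \mathfrak{g}^{\lambda_2}$ and any $H \in \mathfrak{h}$:

$$0 = ([H,X],Y) + (X,[H,Y]) = \lambda_1(H)(X,Y) + \lambda_2(H)(X,Y) = (\lambda_1 + \lambda_2)(H)\,(X,Y).$$

So if $\lambda_1 + \lambda_2 \neq 0$, one can pick $H \in \mathfrak{h}$ with $(\lambda_1 + \lambda_2)(H) \neq 0$, which forces $(X,Y) = 0$.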