Mathcast had it; in fact, in practical work, one uses the Cholesky decomposition $\mathbf G\mathbf G^T$ to test efficiently whether a symmetric matrix is positive definite. The only change you need to make to turn your decomposition program into a check for positive definiteness is to verify, before taking each required square root, that the quantity to be rooted is positive. If it is zero, you have a positive semidefinite matrix; if it is neither zero nor positive, then your symmetric matrix isn't positive (semi)definite. (Programming-wise, it should be easy to throw an exception within a loop! If your language has no way to break out of a loop, however, you have my pity.)
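As a sketch of the idea: here is a bare-bones Cholesky routine (the `is_positive_definite` helper and its tolerance handling are my own illustration, not part of the original answer) that returns `False` the moment a pivot fails to be strictly positive, instead of finishing the factorization.

```python
import numpy as np

def is_positive_definite(a):
    """Attempt the Cholesky factorization A = G G^T of a symmetric matrix.

    Bail out as soon as the quantity to be square-rooted is not strictly
    positive: zero would mean merely semidefinite, negative means indefinite.
    """
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    g = np.zeros_like(a)
    for j in range(n):
        # The quantity to be rooted: a_jj minus the squares accumulated so far.
        d = a[j, j] - np.dot(g[j, :j], g[j, :j])
        if d <= 0.0:
            return False  # early exit; the rest of G is never computed
        g[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            g[i, j] = (a[i, j] - np.dot(g[i, :j], g[j, :j])) / g[j, j]
    return True
```

In floating point one would compare against a small tolerance rather than exact zero; I've omitted that to keep the logic transparent. (`numpy.linalg.cholesky` raises `LinAlgError` on failure, so in practice a `try`/`except` around it accomplishes the same test.)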
Alternatively, one uses the $\mathbf L\mathbf D\mathbf L^T$ decomposition here (an equivalent approach, in the sense that $\mathbf G=\mathbf L\sqrt{\mathbf D}$); if any nonpositive entries show up in $\mathbf D$, then your matrix is not positive definite. Note that one could set things up so that the loop for computing the decomposition is broken as soon as a negative element of $\mathbf D$ is encountered, before the decomposition is finished!
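The $\mathbf L\mathbf D\mathbf L^T$ variant can be sketched the same way; this illustrative `ldlt_is_positive_definite` helper (my naming, not from the original) avoids square roots entirely and breaks the loop at the first nonpositive pivot $d_j$:

```python
import numpy as np

def ldlt_is_positive_definite(a):
    """Attempt the factorization A = L D L^T (L unit lower triangular, D diagonal).

    Return False the moment a pivot d_j <= 0 appears, before the
    decomposition is finished; no square roots are needed.
    """
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = a[j, j] - np.dot(L[j, :j] ** 2, d[:j])
        if d[j] <= 0.0:
            return False  # early exit on a nonpositive pivot
        for i in range(j + 1, n):
            L[i, j] = (a[i, j] - np.dot(L[i, :j] * L[j, :j], d[:j])) / d[j]
    return True
```

(SciPy ships this factorization as `scipy.linalg.ldl`, with pivoting, should you prefer a library routine; it does not exit early, however.)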
In any event, I don't understand why people shy away from using Cholesky here; the relevant theorem is "a matrix is positive definite if and only if it possesses a Cholesky decomposition". It's a biconditional; exploit it! It is vastly cheaper than successively checking leading minors or computing an eigendecomposition, FWIW.