Others have answered your question well, so this answer will focus more on the cryptographic security of pseudorandom number generators (PRNGs).
The only totally secure encryption scheme is the one-time pad, which requires pre-sharing a truly random key as long as the message. In practice this is rarely done, because distributing such large keys in advance is usually impractical, if not impossible. However, if both parties can use a cryptographically secure pseudorandom number generator (CSPRNG) that somehow starts at the same point, then they can build a stream cipher that is a fairly good approximation to the one-time pad. Note that it is the deterministic nature of the PRNG that makes this possible: if it were nondeterministic, the receiver would be unable to recreate the keystream, and thus unable to decrypt and recover the message.
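As a toy illustration of that idea (not a real cipher: Python's `random` module is a Mersenne Twister, which is *not* cryptographically secure, and the function names here are just illustrative), here is a sketch of two parties deriving the same keystream from a shared seed and XORing it with the message:

```python
import random

def keystream_bytes(seed, length):
    # Deterministic keystream: anyone who knows the seed can reproduce it.
    # NOTE: random.Random is a Mersenne Twister, NOT a CSPRNG; toy example only.
    prng = random.Random(seed)
    return bytes(prng.getrandbits(8) for _ in range(length))

def xor_with_keystream(data, seed):
    # XOR is its own inverse, so the same function encrypts and decrypts.
    ks = keystream_bytes(seed, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

shared_seed = 0xC0FFEE                                     # secret both parties pre-share
plaintext = b"attack at dawn"
ciphertext = xor_with_keystream(plaintext, shared_seed)    # sender encrypts
recovered  = xor_with_keystream(ciphertext, shared_seed)   # receiver decrypts
assert recovered == plaintext
```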
How do we make sure that a PRNG is "sufficiently random" to be secure? It is not enough for the outcomes to be equiprobable. We also need unpredictability: someone who has read an arbitrarily long stretch of the output should have no advantage in predicting the next output. The most common way of measuring this uses Claude Shannon's definition of entropy. The basic (first order) entropy is
$$J = - \sum_{i=1}^{k} p(x_i) \log_2 p(x_i),$$
where the $x_i$ are the potential outcomes and $p(x_i)$ is the probability of that outcome occurring. If the PRNG is uniformly distributed, then $J = \log_2 k$. But we want to test more than just the randomness of each individual output, so we look at the $n$th order entropy
$$J_n = -\frac{1}{n} \sum_{i=1}^{k^n} p(x_i) \log_2 p(x_i),$$
where the $x_i$ now range over all sequences of $n$ outputs concatenated together. With a PRNG, many of these sequences will never occur, so many of the probabilities are $0$ and those terms contribute nothing. Finally, we can define the absolute minimum entropy
$$E = \min_{1 \leq n \leq \infty} J_n.$$
Note that for a PRNG, which is periodic, the absolute minimum entropy must be $0$ (why?). But by looking at approximations to it using large values of $n$, we can get a better idea of the probabilistic nature of the PRNG. This is something you could actually compute for an RNG of small period.
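For instance, here is a small sketch of such a computation (the tiny LCG and its parameters are just an illustrative choice) that estimates $J_n$ empirically for a generator with period $16$; as $n$ grows past the period, the per-symbol entropy falls toward $0$:

```python
from collections import Counter
from math import log2

def lcg(seed, a=5, c=3, m=16):
    # Tiny linear congruential generator; these parameters give full period m = 16.
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def nth_order_entropy(samples, n):
    # Empirical J_n: Shannon entropy of overlapping n-grams, divided by n.
    grams = [tuple(samples[i:i + n]) for i in range(len(samples) - n + 1)]
    counts = Counter(grams)
    total = len(grams)
    h = -sum((c / total) * log2(c / total) for c in counts.values())
    return h / n

gen = lcg(seed=7)
stream = [next(gen) for _ in range(10_000)]
for n in (1, 2, 4, 8, 16, 32):
    print(f"J_{n} ~ {nth_order_entropy(stream, n):.3f} bits/symbol")
```

For $n = 1$ this reports the full $\log_2 16 = 4$ bits per symbol, but by $n = 32$ only $16$ distinct $32$-grams ever occur, so the estimate drops to about $4/32 = 0.125$ bits per symbol.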
In practice, a large battery of tests is run on a potential CSPRNG, including Knuth's algorithms in The Art of Computer Programming Vol. 2, the DIEHARD tests (mentioned by J.M.), the Crypt-XS package, the standards from the Handbook of Applied Cryptography, and the powerful NIST Statistical Test Suite. The latter contains several of the tests from each of the others, in addition to many of its own.

To elaborate on some of the subtler points mentioned throughout the answer: the process of getting a CSPRNG to "start at the same place" is done by seeding. A seed is a relatively short input to a PRNG that, in a loose sense, gives it a starting position. The seed should ideally be truly random, which in practice is best approximated by hardware random number generators, and because of its small length it can be shared between the two parties via some encryption scheme.

Finally, randomness can be deterministically improved from weak sources of entropy. While the built-in random functions in most programming languages are not sufficiently random on their own, several of their outputs can be concatenated together and hashed with a cryptographic hash function such as one of the SHA family. This increases the first order entropy, and if concatenations of different (random) lengths are used, then the higher order entropies increase as well. This is the basic idea behind randomness extractors.
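A very rough sketch of that last idea, assuming SHA-256 as the hash and using Mersenne Twister output plus a timestamp as stand-ins for whatever weak entropy sources are available (the function name is hypothetical, and a real extractor would use vetted constructions and careful entropy estimates):

```python
import hashlib
import random
import time

def extract_random_bytes(n_samples=32):
    # Concatenate many outputs from weak sources, then hash the whole thing
    # down to 32 bytes. The hash mixes whatever entropy the inputs contained.
    pieces = []
    for _ in range(n_samples):
        pieces.append(random.getrandbits(32).to_bytes(4, "big"))  # weak PRNG output
        pieces.append(time.time_ns().to_bytes(8, "big"))          # timing jitter
    return hashlib.sha256(b"".join(pieces)).digest()

print(extract_random_bytes().hex())
```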