I've poked around in some other questions, but I'm not sure how to deal with my problem, and my knowledge of statistics has atrophied. In particular, I'm trying to choose a sample size for a population whose size I don't know (potentially infinite, but it could be in the 10,000s or 100,000s or more). How do I choose a sample size that will give me a meaningful answer?
Is it reasonable just to plug in a very large number, and see what comes out - does it approach a limit?
My real world problem is this:
I have two computer systems (Able and Baker). My user community believes Able is faster than Baker. I can run a simple test on both and see how long it takes on each. However, there are inconsistencies in performance (probably due to the network, which has spikes in activity and which I unfortunately can't remove from the test).
Baker will be running for years into the future, so I have no idea how many transactions will run in it over its lifetime.
Assuming the performance issues caused by the network are random, how many tests do I have to run on each of Able and Baker to be 90% confident that Able is faster than Baker?
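To make the question concrete, is something like the following the right way to think about it? This is only a sketch: I'm assuming Python with statsmodels, and the effect size of 0.5 is a number I made up; presumably I'd need a pilot run to estimate it from real timings.

```python
# Sketch of a power analysis for a one-sided two-sample t-test.
# The effect_size of 0.5 is a placeholder (Cohen's d) that I would
# replace with an estimate from a pilot batch of timings.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_system = analysis.solve_power(
    effect_size=0.5,       # assumed standardized difference in mean run time
    alpha=0.10,            # 10% false-positive rate (~90% confidence)
    power=0.80,            # 80% chance of detecting the difference if it exists
    alternative='larger',  # one-sided: Baker is slower than Able
)
print(f"Tests needed per system: {n_per_system:.0f}")
```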
Perhaps I'm asking the wrong question? Should I just take the average of 100 tests on Able and 100 tests on Baker and compare them? Can I make that number 100 smaller (say, 20)?
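If comparing the averages directly is the right idea, I imagine something like a two-sample (Welch's) t-test, roughly like the sketch below. The timing numbers here are random placeholders, not real measurements; I'd substitute my actual recorded run times.

```python
import numpy as np
from scipy import stats

# Placeholder data purely for illustration: I'd replace these arrays with
# my actual measured run times (in seconds) from Able and Baker.
rng = np.random.default_rng(0)
able_times = rng.normal(loc=2.0, scale=0.5, size=100)
baker_times = rng.normal(loc=2.3, scale=0.5, size=100)

# One-sided Welch's t-test (no equal-variance assumption):
# is Able's mean run time less than Baker's?
t_stat, p_value = stats.ttest_ind(able_times, baker_times,
                                   equal_var=False, alternative='less')
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# p < 0.10 would let me claim, at the 90% level, that Able is faster.
```

Is that the right kind of test given the network noise, and does the sample size I computed above carry over to it?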