
Imagine that we have a system with a natural possibility of failure (an incredibly small probability). Let's put it this way: we input random numbers into this system, and it always outputs 0. It should always output 0. It's not hardcoded; 0 is always the result of complex calculations. But the system has a possibility of failure: for some ridiculously small percentage of inputs it outputs 1.

The question is: if once in a while we manually input a value which should give 1 on the output, would this somehow change the natural possibility of system failure? Would it decrease it? Or would it at least give us some assurance that a failure won't occur in the near future?

Background of the problem: a friend of mine has finished her book, and we are trying to remove all possible technical inaccuracies from it.

  • I am not sure which problem you are talking about, but there is a similar problem called rare-event identification, which is sometimes solved via importance sampling. I've just recently heard about this from the author of [this paper](http://dl.acm.org/citation.cfm?id=2185665), where he discusses some methods to solve it. (2012-04-23)

1 Answer


Forgive me if I'm wrong, but it sounds like you're reasoning something like:

Since the probability of failure in any run is small (say, one in a million), the probability of two failures close to each other (say, within the same thousand runs) will be even smaller (say, one in a billion). So after one failure the system will be less likely than usual to fail again soon. If we install a special button that makes the system pretend it has failed, will pressing that button also make the system less likely to (actually) fail soon?

Unfortunately the premise of that reasoning is completely bogus. A failure does not become less likely after you have already seen one, so faking a failure will not influence future probabilities either.

(That is, of course, unless you have deliberately designed the system to remember previous failures and somehow change what it does such that new failures become less likely -- say, by enabling more careful error checking when there has been a recent failure. Security systems that involve people often implicitly work that way, to some extent. But the standard assumption in probability is that subsequent runs of a system are completely independent, unless something else is explicitly specified).
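For concreteness, here's a rough sketch of what such a deliberately designed system could look like (the class, the failure probability, and the majority-vote check are just illustrative choices of mine, not part of the question's system):

```python
import random

class FailureAwareSystem:
    """Illustrative only: remembers whether the previous run failed and,
    if so, cross-checks the next result more carefully."""

    P_FAIL = 1e-6  # made-up per-run failure probability

    def __init__(self):
        self.recent_failure = False

    def _compute(self, x):
        # stand-in for the "complex calculations": should return 0,
        # but returns 1 (a failure) with a tiny probability
        return 1 if random.random() < self.P_FAIL else 0

    def run(self, x):
        result = self._compute(x)
        if self.recent_failure:
            # stricter mode after a recent failure: majority vote of
            # three independent computations, so a single glitch is caught
            votes = [result, self._compute(x), self._compute(x)]
            result = max(set(votes), key=votes.count)
        self.recent_failure = (result != 0)  # remember for the next run
        return result
```

Note that the reduced failure rate here comes entirely from the explicit memory and the extra checking, not from the earlier failure somehow "using up" the bad luck.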

It is easy and only slightly tedious to test this oneself: Roll a die a few hundred times and write down all the results. Arbitrarily declare rolling a 1 to be "failure". You will find that failures occur with a probability of about $1/6$ and two failures in a row are much rarer. However, among all of the rolls that follow directly after a failure, the experimental probability of failure is still $1/6$.
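If rolling a die a few hundred times by hand is too tedious, the same check can be done in a few lines (a quick sketch; the seed and the number of rolls are arbitrary):

```python
import random

random.seed(0)        # fixed seed so the run is reproducible
N = 600_000           # number of simulated die rolls
rolls = [random.randint(1, 6) for _ in range(N)]

# Overall failure rate: how often we roll a 1.
overall = sum(r == 1 for r in rolls) / N

# Failure rate among only those rolls that directly follow a failure.
after_failure = [rolls[i + 1] for i in range(N - 1) if rolls[i] == 1]
conditional = sum(r == 1 for r in after_failure) / len(after_failure)

print("overall failure rate:        ", overall)      # about 1/6
print("failure rate right after one:", conditional)  # also about 1/6
```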

  • That subsequent tries do not influence each other is not really a matter of proof. It is an _assumption_ which can be true or false about any given real-world system -- and whether it holds is ultimately a matter of experience/experiment. (2012-04-23)