When the health department tested private wells in a county for two impurities commonly found in drinking water, it found that 20% of the wells had neither impurity, 40% had impurity A, and 50% had impurity B. (Obviously, some had both impurities.) If a well is randomly chosen from those in the county, find the probability distribution for Y , the number of impurities found in the well.
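To fix notation (taking $A$ and $B$ to be the events that a randomly chosen well contains impurity A and impurity B, respectively), the given information is
$$P(A) = 0.4, \qquad P(B) = 0.5, \qquad P(\text{neither impurity}) = 0.2.$$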
The answers:
P($Y=0$) = 0.2
P($Y=1$) = P($A$) + P($B$) - 2P($A\cap B$), which is 0.7
P($Y=2$) = P($A\cap B$), which is 0.1
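As a quick sanity check, the three quoted values do sum to 1, so they form a legitimate probability distribution:
$$P(Y=0) + P(Y=1) + P(Y=2) = 0.2 + 0.7 + 0.1 = 1.$$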
However, my friends and I disagreed about how to find P($A\cap B$). If you're given P($A$) = 0.4 and P($B$) = 0.5, then in order to find P($A\cup B$), don't you just compute $0.4 + 0.5 - (0.4)(0.5)$ according to the theorem?
But multiplying the two gives $0.4 \times 0.5 = 0.2$, which is NOT the 0.1 given.
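Written out, what I'm doing is using the addition rule with $P(A\cap B)$ replaced by the product $P(A)P(B)$:
$$P(A\cup B) = P(A) + P(B) - P(A)P(B) = 0.4 + 0.5 - (0.4)(0.5) = 0.9 - 0.2 = 0.7.$$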
My friend, on the other hand, subtracted 0.1 from 0.4 and from 0.5, and then added 0.3 and 0.4 together to get 0.7. But if you're using the theorem, you can't subtract from both. What are we doing wrong? Did the 0.2 have something to do with this?
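To spell out her arithmetic:
$$(0.4 - 0.1) + (0.5 - 0.1) = 0.3 + 0.4 = 0.7.$$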