My friend and I were arguing for way too long the other night about how much it would cost you to buy every single thing in a grocery store. Our first go at it went something like this:
Assume there are $N_{\text{items}}$ items per row in the grocery store, and let $p_{\text{avg}}$ be the average price for each item. Then say that there are $N_{\text{rows}}$ rows. Multiplying this out we get a total price $P_{\text{total}}$ as
$$ P_{\text{total}} = N_{\text{items}}p_{\text{avg}}N_{\text{rows}}$$
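Just to make the formula concrete, here's what that naive product looks like with some completely made-up numbers (all three values below are pure guesses, only there for illustration):

```python
# Back-of-envelope point estimate; every number here is an assumption
n_items = 500    # items per row (guess)
p_avg = 4.00     # average price per item, in dollars (guess)
n_rows = 15      # rows in the store (guess)

p_total = n_items * p_avg * n_rows
print(f"${p_total:,.0f}")  # $30,000
```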
The only issue is that prices vary enormously from item to item, and the number of items per row varies just as much, depending on which row you're in. For instance, the spice aisle has a ton of items at very low cost, while the coffee aisle has a lot of items at fairly high cost; the meat aisle has a relatively average number of items at a much higher cost, and the same goes for the kitchen-utensils/kitchenware aisle, etc.
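One way to fold that aisle-to-aisle variation into the formula above is to give each row its own item count and average price, and then sum over rows (this is just a refinement of the product above, not a new model):

$$ P_{\text{total}} = \sum_{r=1}^{N_{\text{rows}}} N_{\text{items},r}\,p_{\text{avg},r} $$

Here $N_{\text{items},r}$ and $p_{\text{avg},r}$ are the item count and average price in row $r$; if every row were identical this collapses back to the single product.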
This got me thinking that there must be a better way to get an accurate estimate for a problem like this. Perhaps come up with some sort of intelligent distribution for prices (I was thinking maybe a log-normal distribution with its maximum around some arbitrary "most-probable" price, based on observation), and possibly do the same thing with the number of items per row. Estimating $N_{\text{rows}}$ is relatively straightforward, since most grocery stores have somewhere between ten and twenty rows, so letting $N_{\text{rows}}$ be a Gaussian centered at ten should take care of that, if we even want to get that fancy with that variable.
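For what it's worth, here's a minimal Monte Carlo sketch of that idea: log-normal prices with a mode around \$3, log-normal item counts per row with a mode around 300, and a Gaussian row count clipped to the ten-to-twenty range. Every parameter is a guess, only there to make the sketch runnable, not a claim about real stores:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sims = 2_000  # number of simulated "stores"

# A log-normal's mode is exp(mu - sigma^2), so solve for mu
# to place the mode near $3 (assumed)
price_sigma = 0.8
price_mu = np.log(3.0) + price_sigma**2

# Same trick for items per row, mode near 300 (assumed)
items_sigma = 0.5
items_mu = np.log(300.0) + items_sigma**2

# Gaussian row count, clipped to the ten-to-twenty range (assumed)
n_rows = rng.normal(15, 2, n_sims).round().clip(10, 20).astype(int)

totals = np.empty(n_sims)
for i in range(n_sims):
    # Draw an item count for each row, then a price for every item
    items_per_row = rng.lognormal(items_mu, items_sigma, n_rows[i]).round().astype(int)
    prices = rng.lognormal(price_mu, price_sigma, items_per_row.sum())
    totals[i] = prices.sum()

print(f"mean  ${totals.mean():,.0f}")
print(f"5-95% ${np.percentile(totals, 5):,.0f} - ${np.percentile(totals, 95):,.0f}")
```

The nice thing about doing it this way is that you get a spread, not just a point estimate, and with heavy-tailed price distributions the upper percentiles can sit well above the naive product.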
Anyway, I'm not that savvy with probability/statistics in the first place, so I thought I would ask you brilliant people: how would you most intelligently take a stab at this estimate?