Hungry dogs and decision-making under uncertainty


My wife rescued a dog a couple of years ago from a rural North Carolina rest stop. We named her "DOTi" in honor of the Department of Transportation. It took a while for us to get into the swing of being responsible owners; sometimes the first to leave for work would forget to leave a note indicating that the dog had been fed. Of course, in the age of cell phones, a single text clears up whether the dog has eaten. But I'd like to run a thought experiment comparing two methods of handling the uncertainty, because it illustrates a strategic challenge faced by organizations, not just dog owners.

My strategy, as a dyed-in-the-wool probabilist, was simple. If there was any uncertainty about whether DOTi had eaten, I would flip a fair coin. Heads? Feed her the usual ration of two scoops. Tails? Don't feed her. People who have never owned a dog might think "Why not just ask her if she was hungry?" People who have owned dogs will understand that the typical canine answer to that question has never been anything other than wide-eyed, tail-wagging enthusiasm, signaling an unequivocal need to feed. If DOTi had already eaten, she stood a 50% chance of eating again; and if not, she had a 50% chance of going hungry. Neither outcome is optimal, and decision theorists will recognize them as a "False Positive" and a "False Negative." However, I had worked out that if the probability of DOTi having already eaten was p, then the expected value of extra scoops of food was 2p-1. And, unless I messed up my algebra, the variance of that count of extra scoops was just [1+4p(1-p)]. It had the satisfying finality of a problem with a solution as simple as flipping a coin.
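The algebra behind those two formulas can be checked by enumerating the coin-flip strategy's outcomes exactly. Here is a minimal sketch (not from the original post); the value of p at the bottom is purely illustrative, and the identities hold for any p in [0, 1]:

```python
from fractions import Fraction

def coin_strategy_stats(p):
    """Exact mean and variance of extra scoops under the coin-flip strategy.

    With probability p DOTi has already eaten, so heads (prob 1/2) means
    +2 extra scoops; with probability 1-p she hasn't, so tails means she
    comes up 2 scoops short (-2). In every other case the coin happens
    to make exactly the right call (0 extra scoops).
    """
    half = Fraction(1, 2)
    outcomes = [(+2, p * half),        # already fed, coin says feed again
                (-2, (1 - p) * half),  # not fed, coin says don't feed
                (0, half)]             # coin matches what actually happened
    mean = sum(x * q for x, q in outcomes)
    var = sum((x - mean) ** 2 * q for x, q in outcomes)
    return mean, var

p = Fraction(1, 3)  # illustrative probability that DOTi already ate
mean, var = coin_strategy_stats(p)
assert mean == 2 * p - 1           # expected extra scoops
assert var == 1 + 4 * p * (1 - p)  # variance of extra scoops
```

Using `Fraction` keeps the arithmetic exact, so the check confirms the formulas symbolically rather than up to floating-point error.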

My wife, as usual, was both wiser and more attuned to statistical optimality than I was. Her strategy, which didn't even require a coin toss, was to feed the dog one scoop whenever there was any uncertainty at all. So her decisions were always wrong: the dog either ate one scoop too many or one scoop too few. I sat down to do a little arithmetic and show her the error of her ways (you can very likely see where this is going). My strategy at least fed the dog exactly the right amount 50% of the time, but the other 50% of the time I was off by either +2 or -2 scoops. My wife never got the amount exactly right, but she was never off by more than one scoop in either direction. While I was busy optimizing the number of correct decisions, she had devised a strategy that yielded the same expected value of extra scoops (2p-1), but with a variance of only [4p(1-p)] compared to my [1+4p(1-p)]. So we had the same expected error, but she had a smaller variance.
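The two strategies can also be compared head to head by simulation. This is a minimal sketch of my own, not code from the post; the probability p = 0.7 and the trial count are arbitrary illustrative choices:

```python
import random

def simulate(p, trials=200_000, seed=1):
    """Simulate both feeding strategies and return (mean, variance) of the
    extra-scoop error for each. p is the probability DOTi already ate."""
    rng = random.Random(seed)
    coin_errors, one_scoop_errors = [], []
    for _ in range(trials):
        needed = 0 if rng.random() < p else 2      # scoops she still needs
        coin_fed = 2 if rng.random() < 0.5 else 0  # coin-flip strategy
        coin_errors.append(coin_fed - needed)
        one_scoop_errors.append(1 - needed)        # one-scoop strategy

    def stats(xs):
        m = sum(xs) / len(xs)
        return m, sum((x - m) ** 2 for x in xs) / len(xs)

    return stats(coin_errors), stats(one_scoop_errors)

(coin_mean, coin_var), (one_mean, one_var) = simulate(p=0.7)
# Theory: both means are 2p-1 = 0.4; the variances are 1+4p(1-p) = 1.84
# for the coin flip and 4p(1-p) = 0.84 for the single scoop.
```

Whatever p you plug in, the simulated means agree while the coin-flip variance sits exactly one squared scoop higher, matching the algebra above.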

If you need to make decisions under uncertainty, then you need a way to objectively measure the effectiveness of those decisions. In this toy example, we have to choose between two scoops depending on a coin toss or one scoop every time; even so, there are arguments to be made for either strategy. If the situation calls for getting the decision exactly right at least some of the time (like showing an ad to the right customer or auditing a fraudulent tax return), then a strategy should be evaluated on that criterion. The profit matrix in SAS® Enterprise Miner™ gives you an easy way to weight those outcomes relative to their importance. However, if your goal is an accurate estimate of an amount (like how much food the dog needs to eat, or the dollar amount associated with the ad or audit decision above), then a measure of the size of the error might be more appropriate. Either way, the objective of your decision making should drive the objective evaluation of competing decision-making strategies.


About Author

Dan Kelly
