tag:blogger.com,1999:blog-27007922949971279312019-12-25T22:37:35.893-08:00MathemathinkingUnknownnoreply@blogger.comBlogger18125tag:blogger.com,1999:blog-2700792294997127931.post-9328296103688332292014-11-16T12:56:00.005-08:002017-01-02T09:46:56.990-08:00New locationMost of these posts will move to:<br /><div><a href="http://corysimon.github.io/">corysimon.github.io</a>,</div><div>and I will post there instead of here.</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-56613310902500494432014-10-05T14:54:00.003-07:002015-04-21T21:58:02.544-07:00Berkson's paradoxBerkson's paradox is a counter-intuitive result in probability and statistics. Imagine that we have two independent events A and B. By definition of independence, the conditional probability of event A given B is the same as the probability of event A:<br />P(A | B) = P(A).<br />i.e., knowing that event B occurred gives us no information about the probability that A occurred.<br /><br />Berkson's paradox is that, if we restrict ourselves to the cases where events A <i>or</i> B occur-- where at least one of the events A or B occurs-- knowledge that B has occurred makes it less likely that A has occurred:<br />P(A | B, A or B) < P(A | A or B) *.<br />The reason that this result is counter-intuitive is that A and B are independent events, but they become negatively dependent on each other when we restrict ourselves to the cases that A or B occurs.
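The inequality is easy to check by simulation. Here is a minimal Monte Carlo sketch in Python; the event probabilities P(A) = P(B) = 0.5 are purely illustrative choices, not values from the post:

```python
import random

# Monte Carlo check of Berkson's paradox for two independent events.
# P(A) = P(B) = 0.5 are illustrative, assumed values.
random.seed(0)
p_a, p_b = 0.5, 0.5
n = 100_000

n_union = n_b_union = n_a_union = n_ab_union = 0
for _ in range(n):
    a = random.random() < p_a
    b = random.random() < p_b
    if a or b:               # restrict to the cases where A or B occurs
        n_union += 1
        n_a_union += a
        n_b_union += b
        n_ab_union += a and b

# P(A | A or B): fraction of "A or B" trials in which A occurred.
p_a_given_union = n_a_union / n_union
# P(A | B, A or B): given B, "A or B" is automatic, so this is just P(A).
p_a_given_b_union = n_ab_union / n_b_union

print(p_a_given_b_union, "<", p_a_given_union)   # roughly 0.50 < 0.67
```

The exact values here are P(A | B, A or B) = 0.5 and P(A | A or B) = 0.5/0.75 = 2/3, so knowing B occurred indeed lowers the conditional probability of A.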
We will see that Berkson's paradox is a form of selection bias; in restricting ourselves to A or B, we ignore the cases where neither A nor B occurs.<br /><br />Berkson's paradox can be used to explain the stereotype that the most handsome men are jerks and that the nicest men are ugly-- an explanation proposed by Jordan Ellenberg in his book <i>How Not to Be Wrong</i>.<br /><br />Let's assume for the moment that looks and niceness are independent variables in the population of males so that men are randomly distributed on the looks-niceness plane:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-V_tbCmJ_Dtw/VDHmEjIu0aI/AAAAAAAAMuc/8J3YSnat8RE/s1600/2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-V_tbCmJ_Dtw/VDHmEjIu0aI/AAAAAAAAMuc/8J3YSnat8RE/s1600/2.png" height="300" width="400" /></a></div>So each guy is a point in this plane. Every girl wants to date a guy in the top right corner of this plot**: a man that is both handsome and nice. However, if a guy is a jerk sometimes, she might still date him if he is super good-looking. Also, if a guy is extremely nice, she might still date him even if he is lacking in the looks category. 
Thus, the guys that she is willing to date are probably where:<br />niceness + looks > some constant value,<br />the green points in the upper-right corner:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-uTWlC2Qh_qc/VDHmQV9bsAI/AAAAAAAAMuk/YI6inPvC-J4/s1600/1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-uTWlC2Qh_qc/VDHmQV9bsAI/AAAAAAAAMuk/YI6inPvC-J4/s1600/1.png" height="300" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>From this natural compromising behavior in her dating criterion, many of the best-looking guys that this girl dates are not so nice; many of the nicest guys she dates are not as good-looking. By restricting herself to this set of guys, she sees a negative correlation between looks and niceness, <i>despite</i> these two variables being independent in the population! This is Berkson's paradox, and now you can see that this induced correlation stems from selection bias.<br /><br />We can go one step further: maybe the guys in the very top-right corner (red points) are so nice and handsome that they will not consider dating the girl we are considering, who is just decently nice and good-looking.<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-PWB1EMbqbSg/VDHmVKLD7-I/AAAAAAAAMus/4uJqortqDAA/s1600/0.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-PWB1EMbqbSg/VDHmVKLD7-I/AAAAAAAAMus/4uJqortqDAA/s1600/0.png" height="300" width="400" /></a></div><br />Now her dating pool is even more restricted due to selection bias, and the negative correlation between good looks and niceness is even more severe.<br /><br />The lesson here is that we can see spurious correlations between variables as a result 
of selection bias. We must think carefully about whether our experiences and data collection strategies adequately sample the population in question before drawing conclusions from our observations.<br /><br />The paradox is named after Joseph Berkson, who pointed out a selection bias in case-control studies to identify causal risk factors for a disease. If the control group is taken from <i>within</i> the hospital, a negative correlation could arise between the disease and the risk factor because of different hospitalization rates between the control and case samples.<br /><br />Another example comes from the book <i>Causality: Models, Reasoning, and Inference</i> by Judea Pearl regarding university admissions criteria. The admissions office at a university may consider both GPA and SAT scores for admissions. Of course, the university wants students with both a high GPA and high SAT. However, a school may still admit a student with a poor GPA if he or she has a very high SAT score and admit a student with a poor SAT score if he or she compensates with a high GPA. Even if the GPA and SAT scores were independent variables, this selection bias could induce a negative correlation between GPA and SAT scores among the student body.<br /><br />[1] Berkson, Joseph (June 1946). "Limitations of the Application of Fourfold Table Analysis to Hospital Data". <a href="http://en.wikipedia.org/wiki/Biometrics_(journal)">Biometrics Bulletin</a> 2 (3): 47–53.<br />* We need 0 < P(A), P(B) < 1 for this. That is, neither A nor B may be certain or impossible.<br />** Of course, niceness and handsomeness are [hopefully] not the only variables that a girl will consider in the guys that she dates. Please excuse this simplification.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-20605009262487434812014-07-31T15:10:00.002-07:002014-08-01T12:22:04.603-07:00Frequentist vs. 
Bayesian StatisticsTwo approaches to problems in the world of statistics and machine learning are those of frequentist and Bayesian statistics. This comic from XKCD illustrates a difference between the two viewpoints.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://imgs.xkcd.com/comics/frequentists_vs_bayesians.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://imgs.xkcd.com/comics/frequentists_vs_bayesians.png" height="640" width="420" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>We have a neutrino detector that measures whether the sun has gone nova. The detector is not perfect; 1/36 of the time, the detector incorrectly indicates whether or not the sun has gone nova. [The probability of rolling two fair dice as both 6s is 1/6 * 1/6 = 1/36.]<br /><br />Next, we push the button on the detector to try it, and it indicates that the sun has gone nova. Has the sun indeed exploded? [The sun is shining on the other side of the earth, and there is a time lag for the light from the sun to reach us.]<br /><br />The frequentist might make the following point. The probability that the detector would lie to us is 1/36. That's pretty small...<br /><br />The Bayesian statistician cannot argue with this point. However, and this is the distinction between the two mindsets, the Bayesian statistician would take his or her <i>prior knowledge</i> into account that the probability of the sun exploding today is <i>very</i> small, so most likely the detector is lying, in spite of the fact that the detector is unlikely to lie.<br /><br />Let's put this into some formal mathematics. Define two events:<br />E: the sun has exploded<br />D: the neutrino detector has detected an explosion.<br /><br />We know that the probability that the detector has detected an explosion given that there was not actually an explosion is P(D | ~E) = 1/36. 
The quantity that we're interested in for making this bet is the probability that there was actually an explosion given the data that the detector has detected an explosion, <span style="color: #38761d;">P(E | D)</span>. A fancy name for this probability is the <a href="http://en.wikipedia.org/wiki/Posterior_probability">posterior probability</a>.<br /><br />To get the posterior <span style="color: #38761d;">P(E | D)</span>, we need to use Bayes' theorem:<br /><br /><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/xVTogEEVmBA?feature=player_embedded' frameborder='0' /></div><span style="color: #38761d;">P(E | D)</span> =<span style="color: blue;"> P(D | E) </span><span style="color: red;">P(E)</span> / P(D).<br /><br />The term <span style="color: blue;">P(D | E) </span>is called the <i>likelihood</i>; this is the probability of observing the data given the event. Frequentists focus on the likelihood when determining whether or not an event has occurred. In this case,<br /><span style="color: blue;">P(D | E)</span> = 1 - P(~D | E) = 1 - 1/36 = 35/36.<br />The likelihood is quite high, so the frequentist might conclude that the sun indeed exploded.<br /><br />However, of course, Bayes' theorem tells us that there is another contribution to the story: the term <span style="color: red;">P(E)</span>, which is called the <i>prior</i>. This is simply the probability that the sun has exploded. 
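Plugging numbers in makes the Bayesian's point concrete. A quick Python sketch of Bayes' theorem for this problem; the tiny prior value 1e-9 is an assumed, illustrative number, not something from the comic:

```python
# Bayes' theorem for the XKCD neutrino detector.
# The prior below is an assumed, illustrative value.
likelihood = 35 / 36   # P(D | E): detector fires, given the sun exploded
false_pos = 1 / 36     # P(D | ~E): detector fires, given it did not
prior = 1e-9           # P(E): assumed tiny prior that the sun exploded

# Law of total probability for the evidence P(D),
# then Bayes' theorem for the posterior P(E | D).
evidence = likelihood * prior + false_pos * (1 - prior)
posterior = likelihood * prior / evidence

print(posterior)   # about 3.5e-8, i.e. roughly 35 * P(E): surely a lie
```

Even with a likelihood of 35/36, the posterior is astronomically small: the tiny prior dominates, exactly as the Bayesian in the comic argues.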
We know that this is indeed very small, and the Bayesian statistician would inject this "prior knowledge" into the problem, hence taking the prior probability <span style="color: red;">P(E)</span> into account in determining <span style="color: #38761d;">P(E | D)</span>.<br /><br />The probability of the detector detecting an explosion P(D) is not an independent quantity, since the law of total probability lets us write it in terms of the others:<br />P(D) = <span style="color: blue;">P(D | E) </span><span style="color: red;">P(E) </span>+ P(D | ~E) [1 - <span style="color: red;">P(E)</span>],<br />and since the detector errs at the same rate either way, P(D | ~E) = 1 - <span style="color: blue;">P(D | E)</span> = 1/36, so<br /> ... = 34/36 P(E) + 1/36.<br />Note that this term goes to 1/36 as <span style="color: red;">P(E) </span>gets small and is 35/36 if <span style="color: red;">P(E)</span> = 1 (i.e., if the sun exploded with certainty).<br /><br />Thus, we should write the probability that the sun actually exploded, given our evidence that the neutrino detector went off,<br /><span style="color: #38761d;">P(E | D)</span> = <span style="color: blue;">35/36</span> <span style="color: red;">P(E)</span> / ( 34/36 <span style="color: red;">P(E)</span> + 1/36).<br />At small prior probabilities, this looks like<br /><span style="color: #38761d;">P(E | D)</span> ~ 35 <span style="color: red;">P(E)</span>.<br />The Bayesian statistician knows that the astronomically small <span style="color: red;">prior P(E)</span> overwhelms the high <span style="color: blue;">likelihood </span><span style="color: blue;">P(D | E)</span>.<br /><br />In this problem, we clearly have a reason to inject our belief/prior knowledge that <span style="color: red;">P(E)</span> is very small, so it is very easy to agree with the Bayesian statistician. However, in many cases, it is difficult or unjustified to assume prior knowledge <span style="color: red;">P(E)</span>.<br /><br />This bet is quite moot anyway. Consider the "loss function": if the Bayesian statistician is correct, you lose $50. 
If the Bayesian statistician is incorrect, the world will end and you probably won't have a chance to spend the $50.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-73493168186401243762014-07-20T16:03:00.000-07:002014-07-20T16:03:09.620-07:00Solution to the card-deck challengeHello again for part 2 of my guest appearance.<br /><br />You guys are fantastic! Cory and I were very impressed by your suggestions and creativity at solving the <a href="http://mathemathinking.blogspot.ca/2014/07/a-card-deck-challenge.html" target="_blank">card-deck challenge</a>:<br /><br /><span style="background-color: white; color: #333333; font-family: 'Droid Sans'; font-size: 16px; line-height: 22.176000595092773px;"><i>Five random cards are drawn from a deck of 52 cards (four suits and 13 cards per suit). You see all five cards and can decide which four of those cards you want to reveal to your partner, and in which order. Is there an ordering scheme that you and your partner can agree on beforehand that allows your partner to determine the fifth (hidden) card from the choice and order of the four cards that you reveal?</i></span><br /><br />The three correct solutions that we received were from Douglas, James, and Vince. Congratulations to all three of you!<br /><br />If you want to solve it yourself, this is your last chance. I will reveal the solution below this cute <a href="http://en.wikipedia.org/wiki/Lolcat" target="_blank">lolcat</a>:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://i.chzbgr.com/maxW500/4912229120/h87AF72FC/" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="265" src="https://i.chzbgr.com/maxW500/4912229120/h87AF72FC/" width="400" /></a></div><br /><br />Okay, here we go. Surprisingly, such a scheme can be found, and it's actually easy enough to memorize so that you can perform this at your next dinner party. 
Below I explain <i>one possible scheme</i>, and there are certainly many other conventions that would work just as well.<br /><br />As a reminder, all we have to choose is which card we hide and want our partner to guess (let's call this card <i>the target</i>), and in what order we want to reveal the other four cards.<br /><br /><br />We need to signal the suit and the value of the target card. The former is easy: Since we are given five cards from a deck with four suits, we can employ one of my favourite <a href="http://www.quora.com/Mathematics/What-is-the-most-you-dont-say-theorem-in-mathematics" target="_blank">"you don't say" math theorems</a>, the <a href="http://en.wikipedia.org/wiki/Pigeonhole_principle" target="_blank">Pigeonhole principle</a>, to realize that there will always be a pair of cards with the same suit. Hence:<br /><br /><br /><b>Rule 1: The first card that you reveal is the same suit as the target (the card that will be hidden).</b><br /><br />So far so good: by revealing one card we reduce the number of possible targets from 48 (52 - 4 revealed cards) to 12 (13 - 1 revealed card of the same suit).<br /><br />To signal the rank is harder, because in the worst case we don't have any influence over which three cards Rule 1 leaves us with. So we have to assume these three cards are entirely random. <a href="http://www.oxforddictionaries.com/words/all-right-or-alright" target="_blank">All right</a>, how many ways do we have to order three random cards? That's 6, because there are three choices for which card to show first, each of those gives us two choices of which card to show second, and the last card is determined by that: 3*2 = 6. This means that we can signal any number 1, 2, 3, 4, 5, or 6. 
A simple scheme for exactly how to signal each number is given below; all we need to know at this point is that six different patterns are all the three middle cards can give us.<br /><br /><br /><b>Rule 2: Use the three middle cards to signal the distance in rank between the first card and the target.</b><br /><br />That's almost the entire trick, but there is one more idea required to reduce the number of possible targets from 12 to 6. To begin with, we realize that Rule 1 just says that one of the two cards of the same suit needs to be revealed first, but it does not state which one. For example, we <i>could</i> agree to always show the lower of the two cards, and then use the three middle cards to signal how many ranks higher the target is. This works well in half of the cases: Whenever the higher card is at most six ranks higher than the lower card. For example, given these five cards<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-BU6LXlloLIQ/U8tdgk3uWGI/AAAAAAAABAE/GDTvlrAhCh0/s1600/IMG_1490.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-BU6LXlloLIQ/U8tdgk3uWGI/AAAAAAAABAE/GDTvlrAhCh0/s1600/IMG_1490.JPG" height="240" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Following Rule 1, we have to choose one of the clubs as the target. 
Choosing <span style="font-size: small; text-align: start;">♣K </span>as the target, we can use the non-clubs to signal that the target is five ranks above the <span style="font-size: small; text-align: start;">♣8</span>, which is possible by Rule 2.</td></tr></tbody></table><br />we would choose the target to be ♣K, the first card to be ♣8, and the three remaining cards to signal a five, since the King is five ranks higher than the Eight.<br /><br />And what do we do when the two cards of the same suit are more than six apart? The beautiful answer is: <i>They are not!</i> At least not modulo 13:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-VngOEWjKNWk/U8tdqU3I3NI/AAAAAAAABAM/RLAG7sxtXNg/s1600/IMG_1492.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-VngOEWjKNWk/U8tdqU3I3NI/AAAAAAAABAM/RLAG7sxtXNg/s1600/IMG_1492.JPG" height="240" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">No two cards can be more than six ranks apart (modulo 13).</td></tr></tbody></table><br />It turns out that the distance between any two of the 13 cards of the same suit is at most 6 (=floor(13/2)), if you continue counting from the beginning once you hit the Ace. For example, the distance between Q and 3 is four, and the distance between J and 4 is six.<br /><br /><br /><b>Rule 3: Choose which of the two possible first cards to show (Rule 1) such that the target's rank is at most 6 </b><b>(modulo 13) ranks above the first card.</b><br /><br />And that's all there is to it. We are now able to choose four cards in such a way that our partner will be able to calculate the remaining fifth card. 
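The three rules can be turned into working code. Below is a minimal Python sketch; the card encoding (ranks 1..13 with Ace = 1, suits "C", "S", "H", "D") and the tie-breaking total order are my own conventions for illustration, matching the scheme described in this post:

```python
from itertools import combinations

# Suit order for breaking rank ties: C > S > H > D.
SUIT_VAL = {"C": 4, "S": 3, "H": 2, "D": 1}

def key(card):
    """Total order on cards, used to sort the three middle cards."""
    rank, suit = card
    return (rank, SUIT_VAL[suit])

# The six orders of (low, mid, high) encode the numbers 1..6.
PERMS = {(0, 1, 2): 1, (0, 2, 1): 2, (1, 0, 2): 3,
         (1, 2, 0): 4, (2, 0, 1): 5, (2, 1, 0): 6}

def encode(hand):
    """Choose the hidden target and the reveal order for the other four."""
    for a, b in combinations(hand, 2):
        if a[1] != b[1]:
            continue                  # Rule 1 needs a same-suit pair
        # Rule 3: pick first/target so the cyclic distance is 1..6.
        d = (b[0] - a[0]) % 13
        first, target = (a, b) if 1 <= d <= 6 else (b, a)
        d = (target[0] - first[0]) % 13
        rest = [c for c in hand if c not in (first, target)]
        low_mid_high = sorted(rest, key=key)
        # Rule 2: order the three middle cards to signal the distance d.
        order = next(p for p, n in PERMS.items() if n == d)
        return [first] + [low_mid_high[i] for i in order], target

def decode(revealed):
    """Partner's side: recover the hidden card from the four shown."""
    first, middle = revealed[0], revealed[1:]
    ranked = sorted(middle, key=key)
    d = PERMS[tuple(ranked.index(c) for c in middle)]
    return ((first[0] - 1 + d) % 13 + 1, first[1])

# Concluding example 1 from this post: ♣8, ♣K, ♠Q, ♡6, ♢Q.
hand = [(8, "C"), (13, "C"), (12, "S"), (6, "H"), (12, "D")]
revealed, target = encode(hand)
print(revealed, "hides", target)
```

For the example hand, this reproduces the reveal order from the post: ♣8 first, then ♠Q, ♡6, ♢Q to signal five, with the ♣K hidden.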
For example:<br /><br />Given ♣2, ♣A, ♠Q, ♡3, ♡Q, we can either play for clubs and show ♣A first and signal <i>one</i>, or play for hearts, show ♡Q first and signal <i>four</i>.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-xFuOe1W4brs/U8tdvC3bwjI/AAAAAAAABAU/VYl5kiz5wy4/s1600/IMG_1494.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://1.bp.blogspot.com/-xFuOe1W4brs/U8tdvC3bwjI/AAAAAAAABAU/VYl5kiz5wy4/s1600/IMG_1494.JPG" height="240" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">We can choose whether we want to let <span style="font-size: small; text-align: start;">♣2</span> or <span style="font-size: small; text-align: start;">♡3</span> be the target.</td></tr></tbody></table><br />As a final step, here is <i>a</i> scheme of how to use the three middle cards to signal the number 1, 2, 3, 4, 5 or 6: Start by choosing an order for all 52 cards. For example: The higher the rank, the higher the card, and if the rank is the same, then use the suit order ♣ > ♠ > ♡ > ♢. Once we agree on the order of the three cards, low, mid, high, it is easy to translate that into the numbers from 1 to 6:<br /><br />low, mid, high -> 1<br />low, high, mid -> 2<br />mid, low, high -> 3<br />mid, high, low -> 4<br />high, low, mid -> 5<br />high, mid, low -> 6<br /><br /><br /><b>Concluding examples</b><br /><br />1) Given these five cards, ♣8, ♣K, ♠Q, ♡6, ♢Q, which cards should we reveal and in which order?<br /><br />According to Rule 1, the first card and the target will both be clubs. The ♣K is 5 ranks higher than the ♣8, while the ♣8 is 8 ranks higher than the ♣K. Hence, according to Rule 3, we first reveal ♣8 and let ♣K be the target. 
With the remaining cards ♠Q, ♡6, ♢Q we have to signal 5, and hence reveal the highest card second, ♠Q, then the lowest, ♡6, and as fourth card the middle, ♢Q.<br /><br />2) Given the cards in the initial screenshot, the target card must be ♢A, because that's the diamond which is one rank (the three middle cards are sorted in low, mid, high) above the ♢K.<br /><br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-ovEen2zvE7w/U7o6OS0bKaI/AAAAAAAAA_Y/ipDa9TdHQ-I/s1600/IMG_1434.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-ovEen2zvE7w/U7o6OS0bKaI/AAAAAAAAA_Y/ipDa9TdHQ-I/s1600/IMG_1434.JPG" height="240" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><div style="text-align: start;">The hidden target card is <span style="font-size: small; text-align: start;">♢A, </span>because that's the diamond one rank higher than the first card <span style="font-size: small;">♢K.</span></div></td></tr></tbody></table><br /><b>Addendum:</b><br /><br />A similar, yet distinct, solution is the following: Ignoring Rule 1 and the suits gives us 4*3*2 = 24 ways to reveal four well-ordered cards. We reveal 4 of the 52 cards, leaving us with 48 possible targets. Since two cards can be at most 48/2 = 24 ranks apart (use a larger version of the circle and calculate modulo 48) we can agree that the target card is 1, 2, ..., or 24 ranks higher than the highest revealed card.<br /><br /><br /><br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-34701082873768525292014-07-07T11:12:00.004-07:002014-07-07T11:12:42.659-07:00A card-deck challengeHello Mathemathinking readers,<br /><br />Today's post is a bit special in two ways: First, I'm not Cory. 
My name is <a href="https://twitter.com/BernhardKonrad" target="_blank">Bernhard</a>, and I was Cory's math office mate at UBC in Vancouver.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://2.bp.blogspot.com/-qZ5sP3yWxiY/U7rf6R4c8dI/AAAAAAAAA_k/oyBtXbo4hJQ/s1600/179945_10100671384287692_1066016664_n.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://2.bp.blogspot.com/-qZ5sP3yWxiY/U7rf6R4c8dI/AAAAAAAAA_k/oyBtXbo4hJQ/s1600/179945_10100671384287692_1066016664_n.jpg" height="200" width="200" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Janelle, Cory and me on a hike last summer</td></tr></tbody></table><br />Several times Cory and I chatted about writing a blog together, and I'm happy to finally contribute to his page. In fact, it will be a double header: I want to challenge you to a fun quiz, and will give you a week's time to find a solution before I reveal my answer. Email full solutions to <a href="mailto:bernhard.konrad@gmail.com" target="_blank">me</a>, or discuss the problem in the comments below (hints are welcome, but please don't post full solutions in the comments). Ready? Here we go:<br /><br />You need a partner, and a standard 52-card deck with four suits and 13 cards per suit (no jokers). Five random cards are drawn from the deck. While you see none of the cards, your partner is allowed to view all five. Your partner then decides which four of those cards she wants to reveal to you and in which order. 
Is there an ordering scheme that you and your partner can agree on beforehand that allows you to determine the fifth (hidden) card from the choice and order of the four cards that your partner shows?<br /><br />If you think that's always possible, explain your scheme.<br />Otherwise, explain why such a general algorithm cannot exist.<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="http://4.bp.blogspot.com/-ovEen2zvE7w/U7o6OS0bKaI/AAAAAAAAA_U/YLOhPD7kwZY/s1600/IMG_1434.JPG" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" src="http://4.bp.blogspot.com/-ovEen2zvE7w/U7o6OS0bKaI/AAAAAAAAA_U/YLOhPD7kwZY/s1600/IMG_1434.JPG" height="240" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Can you find a scheme that uniquely determines the fifth hidden card?</td></tr></tbody></table>To clarify: Since your partner can choose which four cards to show, she also chooses the card you have to guess. Also, all the information you get is the rank and the suit of the four cards, and the order in which they are revealed. 
There is no additional information in the way the cards are revealed (e.g., your partner cannot flip some of the cards to signal additional information).<br /><br />Good luck, I look forward to <a href="mailto:bernhard.konrad@gmail.com" target="_blank">your answers</a>!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-79265895571312694632014-06-09T22:10:00.001-07:002014-06-10T22:21:16.880-07:00Voronoi cookies and the Post Office Problem<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js" type="text/javascript"> MathJax.Hub.Config({ HTML: ["input/TeX","output/HTML-CSS"], TeX: { extensions: ["AMSmath.js","AMSsymbols.js"], equationNumbers: { autoNumber: "AMS" } }, extensions: ["tex2jax.js"], jax: ["input/TeX","output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], processEscapes: true }, "HTML-CSS": { availableFonts: ["TeX"], linebreaks: { automatic: true } } }); </script>In an attempt to bake cookies, I failed to place the balls of dough far enough apart and, instead, a cookie cake formed. The result resembles a very useful diagram in mathematics called a Voronoi tessellation.<br /><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-0drKktI3row/U5aFevutpyI/AAAAAAAALTw/77aDIPc5bNI/s1600/20140608_221839.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-0drKktI3row/U5aFevutpyI/AAAAAAAALTw/77aDIPc5bNI/s1600/20140608_221839.jpg" height="240" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">What I was excited about when I pulled this out of the oven was the boundary that formed between the regions intended to be separate cookies. The regions of space circumscribed around these boundaries (the "intended cookies") are called Voronoi cells. 
I drew the lines that approximately partition the area of the pan into Voronoi cells. You can also imagine the centers of the balls of cookie dough that I placed in the pan before I put it in the oven, marked as red points that I will call <i>sites </i>$p_k$.</div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-9znt2PjYGts/U5aE1nKWUcI/AAAAAAAALTs/Tg0B4M4uUp8/s1600/voronoicookies.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-9znt2PjYGts/U5aE1nKWUcI/AAAAAAAALTs/Tg0B4M4uUp8/s1600/voronoicookies.jpg" height="240" width="320" /></a></div><div class="separator" style="clear: both; text-align: left;">In a Voronoi diagram, all points on these lines are equidistant from the two sites of the adjoining Voronoi cells. All points inside the regions circumscribed by these lines are thus closer to the site of that Voronoi cell than to the site of any other Voronoi cell.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">This leads to the mathematical definition of a Voronoi cell or region $R_k$ of a site $p_k$ (the cookie centers) as all points $x$ that are closer to site $p_k$ than any other site:</div><div class="separator" style="clear: both; text-align: left;">$R_k:=\{x : d(x,p_k) < d(x,p_j)$ for all $j \neq k\}$.</div><div class="separator" style="clear: both; text-align: left;">Here, $d(x,p_k)$ is notation for the distance between point $x$ and site $p_k$.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">How are Voronoi cells useful? Consider a classic optimization problem called the Post Office Problem. A new city has just built 10 post offices. 
With the aim of reducing the cost of delivering mail, how can we optimally assign each residence a post office from which to receive mail?</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Assuming that the density of residences is spatially uniform, we seek to assign each house to the nearest post office*. Thinking of the location of the post offices as sites $p_k$, the solution to the optimization problem is to assign all houses in the Voronoi cell $R_k$ to post office $k$. This is because each point in the Voronoi cell $k$ is by definition closer to post office $k$ (located at $p_k$) than to any other post office.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Another amusing story is that of the physician John Snow, who, in London in 1854, used a Voronoi diagram to suggest that cholera was spread by drinking water rather than through the air, as was thought at the time. He plotted the cholera cases on a map of London and imagined the water pumps as sites $p_k$, drawing the Voronoi diagram of these sites. He noticed that the victims appeared within the Voronoi cell of a certain [infected] water pump. The assumption here is that people get their water primarily from the closest water pump. This suggested that cholera is instead water-borne and identified the problematic water source [1].</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">In my research group at Berkeley, we use Voronoi diagrams to model the pore space of nanoporous materials [2]. 
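The post-office assignment above amounts to a nearest-site query: a house at point $x$ belongs to the Voronoi cell $R_k$ of the closest site $p_k$. A tiny Python sketch, with office and house coordinates made up purely for illustration:

```python
import math

# Sites p_k (post offices) and query points (houses).
# All coordinates are made-up, illustrative values.
offices = [(2.0, 3.0), (8.0, 1.0), (5.0, 9.0)]
houses = [(1.0, 1.0), (6.0, 2.0), (5.0, 7.0)]

def nearest_site(x, sites):
    """Index k of the site p_k whose Voronoi cell R_k contains x."""
    return min(range(len(sites)), key=lambda k: math.dist(x, sites[k]))

for h in houses:
    print(h, "-> post office", nearest_site(h, offices))
```

Each house is assigned to the post office whose Voronoi cell contains it, without ever constructing the cell boundaries explicitly.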
This geometrical representation of a material helps us search for materials that are suitable for applications such as storing natural gas for vehicular fuel tanks and separating carbon dioxide from the flue gas of coal-fired power plants.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">In machine learning, the branch of artificial intelligence that enables self-driving cars and the recommendation algorithms of Netflix, one supervised classification algorithm, called nearest-neighbor classification, uses the Voronoi tessellation. Let's say Netflix has a vector of data about you, including your age, gender, and what movies you've watched/how you've rated them. This is some representation of you in a higher-dimensional space. One strategy for recommending your next movie might be to simply find the person that is most like you in their database, see what movies that person likes, and recommend these movies to you. This corresponds to finding the site $p_k$ (representing the person most like you) in high dimensional space such that you lie in the Voronoi cell $R_k$.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">My colleague at Berkeley, upon reading this post, mentioned that she uses Voronoi tessellations to characterize how polymers interact with water, aiding in the identification of polymers for drug delivery and lubricants.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Finally, why did the cookie cake form a Voronoi diagram? I think that one can prove that baking cookies will form a Voronoi diagram if you assume that each cookie expands from its center at a constant, uniform rate and stops expanding when it meets the surface of another cookie. 
The points where the boundaries of the intended cookies meet will form the lines in the Voronoi diagram. My cookie cake is not a perfect Voronoi diagram because a) each cookie was not initially the same shape and b) the boundary of the pan does not allow for a radially symmetric cookie expansion.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">* The problem is realistically more complicated because mailmen must take roads, and these are not straight from the post office to the residences. Still, this may be a reasonable approximation. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">[1] http://plus.maths.org/content/uncovering-cause-cholera</div><div class="separator" style="clear: both; text-align: left;">[2] <span style="white-space: pre-wrap;">Marielle Pinheiro, Richard L. Martin, Chris H. Rycroft, Andrew Jones, Enrique Iglesia, Maciej Haranczyk, Characterization and comparison of pore landscapes in crystalline porous materials, Journal of Molecular Graphics and Modelling, Volume 44, July 2013, Pages 208-219,</span></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-29471911751342205942014-05-22T00:06:00.000-07:002014-06-20T00:09:12.623-07:00Feel like your lovers had more lovers than you?<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js" type="text/javascript"> MathJax.Hub.Config({ HTML: ["input/TeX","output/HTML-CSS"], TeX: { extensions: ["AMSmath.js","AMSsymbols.js"], equationNumbers: { autoNumber: "AMS" } }, extensions: ["tex2jax.js"], jax: ["input/TeX","output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], processEscapes: true }, "HTML-CSS": { availableFonts: ["TeX"], linebreaks: { automatic: true } } }); </script> <br /><div>Most people have had fewer lovers than their 
lovers have, on average.</div><div><br /></div><div>The fact that sex is symmetric makes this seem paradoxical. If person X slept with person Y, of course person Y slept with person X. So, how can we possibly expect that a given person has had fewer lovers than their lovers?</div><div><br /></div><div>The answer is that you are more likely to have sex with someone who has had sex with a lot of people. Let's say person X has slept with 25 people, whereas person Y has slept with one. Then, [if you choose your partners at random] you are 25 times as likely to have slept with person X as with person Y.</div><div><br /></div><div>This is a form of sampling bias. As individuals, we are more likely to sample [sleep with] those who are more sexually active than others in the population, and this causes us to observe an average higher than the true network average.</div><div><br /></div><div>In the extreme case, if person Z is a virgin, he or she cannot possibly count towards anyone's sample to determine the average number of lovers!</div><div><br /></div><div>This idea of sampling bias is behind the reason "<span style="background-color: white; line-height: 23px;">why people experience airplanes, restaurants, parks and beaches to be more crowded than the averages would suggest. When they’re empty, nobody’s there to notice." [Steven Strogatz, Ref. 1]</span></div><div><span style="font-family: inherit;"><br /></span></div><div><span style="font-family: inherit;">The paradoxical phenomenon that most people have had fewer lovers than their lovers have also holds for our friendships. Your friends likely have more friends than you -- this is "the friendship paradox" discovered by Scott Feld [2]. A study analyzing the social network of friends on Facebook (Ref. 3) showed that 93% of people on Facebook have fewer friends than the average of their friends' number of friends (Ref.
1)!</span></div><div><br /></div><div>Similarly, if you are in academia, your coauthors likely have more publications and more citations than you do (Ref. 4); you are more likely to have published with someone who has many publications than someone who has only one.</div><div><br /></div><div>Not only is the friendship paradox interesting, it is useful. During the H1N1 outbreak in 2009, researchers monitored the health of two disparate sets of students at Harvard University (Ref. 5). The first group consisted of randomly selected individuals. The second group consisted of a group of friends of the individuals in the first group. They found that the progression of the flu epidemic in the second group occurred two weeks earlier than in the first! This is because those in the second group are more likely to have more friends than those in the first group, by the friendship paradox, and hence to be in contact with more individuals and get the flu earlier. This is a clever sampling method to sense a disease outbreak earlier than observing randomly selected individuals: instead observe <i>the friends</i> of randomly selected individuals (Ref. 5).<br /><br />----------<br /><br />Finally, what you've all been waiting for: a proof of the friendship paradox. A network of friends can be represented as an undirected graph $G(V,E)$. This graph $G$ consists of a set $V$ of vertices, representing people, and a set $E$ of edges, representing friendships. An edge connects two vertices.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-iJrZ7s5TCc0/U32yNxZ6M8I/AAAAAAAALBs/cvO223FwUk0/s1600/Presentation1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-iJrZ7s5TCc0/U32yNxZ6M8I/AAAAAAAALBs/cvO223FwUk0/s1600/Presentation1.jpg" height="200" width="171" /></a></div><br />The only notation we need is:<br />$d(v)$: the degree of vertex $v$.
(The number of friends that the person represented by the vertex $v$ has)<br />$|V|$: the total number of vertices (The number of people in our network)<br />$|E|$: the total number of edges in our network (The number of friendships).<br /><br />What is the expected number of friends a person has in our network? This corresponds to the expected value of $d(v)$ of a randomly chosen person in the network.<br />$E(d)=\sum_{v \in V} \frac{1}{|V|} d(v)$.<br />So we sum over all people and count their friends, giving each count a weight $\frac{1}{|V|}$ since this is the probability that the given person was the randomly chosen person (the same for everyone in the network).<br />This can be simplified by the handshaking lemma, which notes that if we loop over all people in the network and add up their count of friends to a total, we will double-count the number of edges in the graph. Thus<br />$E(d)= \frac{2 |E|}{|V|}$.<br /><br />Now, I will write the expected value of some characteristic $x$ of a neighbor of a randomly chosen person in the network as $\langle x \rangle_n$. This is computed as:<br />$\langle x \rangle_n = \dfrac{ \sum_{v \in V} d(v) x(v) } { \sum_{v \in V} d(v) }$.<br />The degree $d(v)$ appears in the top sum because vertex $v$ appears as a neighbor $d(v)$ times. Mathematically, this weight is where the idea of a sampling bias comes into play. The denominator is a normalization. Now it becomes clear that the expected number of friends of a neighbor is:<br />$\langle d \rangle_n = \dfrac{ \sum_{v \in V} d(v)^2 } { \sum_{v \in V} d(v) }$.<br /><br />Using the relation between variance of a random variable $X$, $E([X - E(X)]^2)$, the expected value of $X$, and the expected value of $X^2$:<br /> $E([X - E(X)]^2) = E(X^2) - E(X)^2$,<br />we can relate $\langle d \rangle_n$ to $E(d)$:<br />$\langle d \rangle_n = E(d) + E([d-E(d)]^2) / E(d)$<br /><br />This ends the proof by noting the following. 
The expected number of friends of a neighbor, $\langle d \rangle_n$, is larger than the expected number of friends of a randomly chosen person, $E(d)$, by the amount $E([d-E(d)]^2) / E(d)$, which is nonnegative -- and strictly positive whenever the number of friends varies across the network. Thus, our friends likely have more friends than we do!<br /><br /></div><div>References</div><div>[1] http://opinionator.blogs.nytimes.com/2012/09/17/friends-you-can-count-on/?_php=true&_type=blogs&_r=0</div><div>[2] Feld, S. L. (1991). Why Your Friends Have More Friends than You Do. <i>AJS</i>, <i>96</i>(6), 1464-77.</div><div>[3] Ugander, J., Karrer, B., Backstrom, L., & Marlow, C. (2011). The anatomy of the Facebook social graph. <i>arXiv preprint arXiv:1111.4503</i>.</div><div>[4] Eom, Y. H., & Jo, H. H. (2014). Generalized friendship paradox in complex networks: The case of scientific collaboration. <i>Scientific Reports</i>, <i>4</i>.</div><div>[5] Christakis, N. A., & Fowler, J. H. (2010). Social network sensors for early detection of contagious outbreaks. <i>PLoS ONE</i>, <i>5</i>(9), e12948.</div><div>[6] http://en.wikipedia.org/wiki/Friendship_paradox</div><div>[7] http://www.psychologytoday.com/blog/the-scientific-fundamentalist/200911/why-your-friends-have-more-friends-you-do</div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-72581330114443600992013-07-05T17:13:00.000-07:002013-07-06T01:00:01.672-07:00Oceanic algae bloomsCheck out <a href="http://abcnews.go.com/Travel/slideshow/china-beach-swamped-algae-turns-sea-green-19584557">these
photos</a> of the overgrowth of algae on a beach in China. There are two main probable causes.<br /><br /><a href="http://2.bp.blogspot.com/-F5Mjks9W_lw/UddZZJCsn1I/AAAAAAAAFHc/XMMLaxTKTSk/s1600/Screen+shot+2013-07-05+at+4.39.38+PM.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="400" src="http://2.bp.blogspot.com/-F5Mjks9W_lw/UddZZJCsn1I/AAAAAAAAFHc/XMMLaxTKTSk/s400/Screen+shot+2013-07-05+at+4.39.38+PM.png" width="355" /></a><i><b>Pollution</b></i>. Algae are autotrophs, which means that they synthesize their own carbohydrates, fats, and proteins from carbon dioxide and more basic chemical substances in their environment. In general, an algae population will keep growing until its resources become limited or a predator keeps it in check. Under normal conditions, the ocean water does not have enough nutrients, such as phosphorus, to sustain a large growth rate of algae. The resources are thus limited.<br /><br />The fertilizers that farmers use for crops, which are high in phosphorus content, can "run off" into a river or directly into the ocean. Similarly, a chemical plant may pollute a river or the ocean with a substance high in phosphorus. In both cases, the pollution serves as a source of the limiting nutrient that the algae need to grow, and an algae bloom unfolds.<br /><br /><i><b>Higher water temperatures.</b></i> There are optimal temperatures at which algae grow. In colder waters, the rate of algal growth is limited. At warmer water temperatures, however, the conditions are favorable for higher algae growth rates.<br /><br /><br /><a name='more'></a><br /><br />Besides the aesthetic costs, horrible smell, and clean-up required to make the beach a nice place again, the algae overgrowth induced by human tampering has detrimental consequences for the local ecosystem. Algae do not live too long, and when they die, they start to decay.
In the process of decaying, dissolved oxygen in the ocean water is depleted so that the carbon in the dead algae can be converted to carbon dioxide again (see <a href="http://mathemathinking.blogspot.com/2013/06/trees-come-from-air-and-environmental.html">Trees come from air</a>). As the oxygen is depleted from the water, the fish begin to die. Some species of algae release toxins as they grow, a further detriment to the local ecosystem.<br /><div class="separator" style="clear: both; text-align: left;"><br /></div><br />Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-84648971757856727942013-06-20T21:54:00.002-07:002013-06-23T13:26:37.579-07:00Trees come from air and environmental implications<blockquote class="tr_bq">"People look at a tree and think it comes out of the ground," but "trees come from air."<br>-Richard Feynman</blockquote>A tree is around 50% carbon atoms by mass, and these carbon atoms come from carbon dioxide (CO<sub>2</sub>) in the air. Carbohydrates, which form most of the substance of the tree, are formed by breaking a carbon-oxygen bond in CO<sub>2</sub> and combining it with water, which condensed from clouds in the air to form rainfall. <br><br>Since, energetically, carbon atoms would prefer to stay as CO<sub>2</sub> instead of residing in carbohydrates, it takes energy to rip apart a C-O bond. Trees get the energy to do this synthesis of carbohydrates from the sunlight (photons)-- hence photosynthesis. In the process, oxygen (O<sub>2</sub>) is released.<br><br>When we put a log in the fireplace and provide heat to kickstart the reverse reaction, the oxygen from the air grabs the carbon atoms back from the log to make CO<sub>2</sub> and water again (combustion). Because carbon loves to be in CO<sub>2</sub>, the fire spontaneously carries on. The extra energy it took to break the C-O bonds in the first place is released as light and heat.
In a sense, the sunlight is being emitted back out to complete the balanced cycle.<br><br>Famous physicist Richard Feynman explains this in such a riveting way:<span style="background-color: white; color: #222222; font-family: Calibri, sans-serif; font-size: 14px;"></span><br><br><span style="background-color: white; color: #222222; font-family: Calibri, sans-serif; font-size: 14px;"></span> <br><div class="separator" style="clear: both; text-align: center;"><span style="font-size: x-large;"><iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='640' height='532' src='https://www.youtube.com/embed/ifk6iuLQk28?feature=player_embedded' frameborder='0' /></span></div><br>What does this mean for the environment? Around 45% of our CO<sub>2</sub> emissions are from burning fossil fuels. But a sizeable portion, 17%, is from deforestation. [1] When we clear land for agriculture or for buildings and burn the trees, CO<sub>2</sub> that was once stored in the trees gets released back into the atmosphere to cause global warming. 
One action we can take is minimize deforestation to prevent further release of CO<sub>2</sub> from the incumbent trees.<br><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="clear: right; float: right; margin-bottom: 1em; text-align: right;"><tbody><tr><td style="text-align: center;"><a href="http://1.bp.blogspot.com/-4K0vDVu2BB4/UcPeD-o4-XI/AAAAAAAAFA4/iDPdg43AOYw/s1600/20130609_155727.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="240" src="http://1.bp.blogspot.com/-4K0vDVu2BB4/UcPeD-o4-XI/AAAAAAAAFA4/iDPdg43AOYw/s320/20130609_155727.jpg" width="320"></a></td></tr><tr><td class="tr-caption" style="text-align: center;">A rotting tree releases CO2 back into the atmosphere.</td></tr></tbody></table><br><span style="font-family: inherit;"></span>So carbon dioxide is food for a growing tree, and, as a tree grows, it acts as a carbon sink since it takes carbon dioxide out of the air and stores it in its trunk. Planting a new tree can thus offset some of our CO<sub>2</sub> emissions. But, given that we plant a new forest on a piece of land, how much of an impact can we make? A square meter of tree cover can sequester 0.306 kg of carbon per year. [2] The average passenger vehicle in the US consumes roughly 1300 kg of carbon per year. [3] This means that, to offset the CO<sub>2</sub> emissions from one vehicle, one would need to maintain 4,250 m<sup>2</sup> of growing trees. For comparison, an American football field is 5,300 m<sup>2</sup>.<br><br>I added the word 'growing' in front of 'trees' in my discussion above. Actually, a mature forest does not absorb much CO<sub>2.</sub> When trees die, fall over, and rot-- a natural process in a mature forest-- micro-organisms decompose the rotting tree, releasing the CO<sub>2</sub> once stored in the tree trunk back into the atmosphere. 
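The offset arithmetic quoted above is a single division, checked here in Python (the figures are the ones from the post, Refs. [2] and [3]):

```python
# Figures quoted in the post: carbon sequestered per square meter of tree
# cover per year [2], and carbon consumed per average US passenger car
# per year [3].
SEQUESTRATION_KG_PER_M2_YR = 0.306
CAR_CARBON_KG_PER_YR = 1300.0

# Tree-covered area needed to offset one car's annual carbon consumption.
area_m2 = CAR_CARBON_KG_PER_YR / SEQUESTRATION_KG_PER_M2_YR
football_fields = area_m2 / 5300.0  # an American football field is ~5300 m^2

print(round(area_m2), round(football_fields, 2))  # ~4250 m^2, ~0.8 fields
```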
A mature forest is in a kind of equilibrium, where new trees can grow and sequester CO<sub>2</sub> only to take the place of an older, fallen tree which is emitting CO<sub>2</sub>.<br><br>Therefore, to make a substantial impact on offsetting anthropogenic CO<sub>2</sub> emissions, we must plant new forests <i>while maintaining the ones we have</i>. That is, we can't count on the mature forest land we have today to keep working hard to eventually absorb all of the CO<sub>2</sub> that we emit. One way to retain the structure of a tree that has been cut down is by turning it into lumber. This helps perpetuate it as a carbon sink. However, keep in mind that a tree must be transported and processed to be turned into lumber. This takes energy and releases more carbon. Only if this carbon is less than that stored in the lumber is this a net negative CO<sub>2</sub>-emitting process.<br><span style="font-family: inherit;"><br></span><span style="font-family: inherit;">[1] </span><a href="http://www.epa.gov/climatechange/ghgemissions/global.html">http://www.epa.gov/climatechange/ghgemissions/global.html</a><br>[2] Nowak et al. Carbon storage and sequestration by trees in urban and community<br>areas of the United States.
2013.<br>[3] <a href="http://www.epa.gov/cleanenergy/energy-resources/refs.html">http://www.epa.gov/cleanenergy/energy-resources/refs.html</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-78491804157912997812013-05-20T00:30:00.000-07:002013-06-02T18:03:57.169-07:00How do airlines choose by how many customers to overbook flights?<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js" type="text/javascript"> MathJax.Hub.Config({ HTML: ["input/TeX","output/HTML-CSS"], TeX: { extensions: ["AMSmath.js","AMSsymbols.js"], equationNumbers: { autoNumber: "AMS" } }, extensions: ["tex2jax.js"], jax: ["input/TeX","output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], processEscapes: true }, "HTML-CSS": { availableFonts: ["TeX"], linebreaks: { automatic: true } } }); </script>A few years ago, I was returning home from a trip to the Florida Keys, which required two layovers. After my first flight, the airline announced that the next flight was overbooked. A \$500 voucher would be awarded to the customer who relinquishes his or her seat. Since this was the beginning of my lazy summer before I started graduate school, I jumped at the opportunity and took the \$500 voucher and free hotel room for the night.<br /><br /><blockquote class="tr_bq"><i>Overselling or overbooking is the sale of a volatile good or service in excess of actual capacity.</i> -Wikipedia.</blockquote><br /><a href="http://3.bp.blogspot.com/-iDMzAdOWmvo/UZnSpCYyVHI/AAAAAAAABiU/6ciNhcYjOJc/s1600/jj.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="240" src="http://3.bp.blogspot.com/-iDMzAdOWmvo/UZnSpCYyVHI/AAAAAAAABiU/6ciNhcYjOJc/s320/jj.jpg" width="320" /></a>For the next year, the voucher rotted in my inbox until it expired, as I didn't take the opportunity to fly with that airline again.
While airlines likely count on a fraction of the vouchers to expire, overbooking can maximize profits even when customers are paid off with these pricey vouchers and hotel rooms.<br /><br />Consider that a fraction of flyers do not show up in time for their flights due to a delay in their preceding connection flight or to personal circumstances. In anticipation of this, airlines overbook the plane (sell more tickets than capacity) and hope that just the right number of customers show up to get a full plane.<br /><br />Let's assume that an airline gives full refunds for flights missed due to personal circumstances, or equivalently for the math, that all missed flights are due to delays in preceding connection flights. Of course, airlines do not charge twice when a customer misses a connection because of a preceding delay and takes the next flight out. With this, an airline receives revenue from a passenger equal to the ticket price only when he or she actually boards the flight. Here, each empty seat is clearly lost revenue: if the seat is empty, the airline does not receive the revenue from the ticket sale.<br /><br />Overbooking makes it likely that a flight is full of passengers so the airline receives the most income (seat capacity * ticket price). But, if the airline overbooks too much, it must fork out costly vouchers and hotel rooms to the passengers that get bumped from the flight and give them a seat on another plane, potentially perpetuating the cycle and, most importantly, <i>decreasing</i> the revenue. Obviously, if the airline overbooks by too much, it is just giving out vouchers. Somewhere in between is the sweet spot that maximizes revenue.<br /><br />Let's put ourselves in the place of the airline and say the cost (airline voucher + hotel room + ticket for next available flight + lost customer loyalty) of bumping a passenger is \$800, and we have a 100-seat plane that flies from SFO --> ORD at a ticket price of \$250**.
By how many seats should we overbook the plane on this route?<br /><br /><i>Data-driven decisions. </i>Out of the thousands of SFO --> ORD flights over the past ten years, our airline company knows:<br />the total number of airline seat tickets sold: A<br />the number of these A customers that actually showed up on time for the flight: B<br />Given a random customer, the probability that he or she will show up for their flight is thus p=B/A. We will use $p=0.9$, close to <a href="http://www.nytimes.com/2007/05/30/business/30bump.html?pagewanted=print&_r=0">this source</a> that reports 7-8% of customers are no-shows.<br /><br />We can treat the event that a customer boards the flight as being independent of the other passengers boarding*** and occurring with probability $p$. Our goal is to find the number of tickets beyond capacity that we should sell, which we call $x$. The number of customers $N$ that show up for their flight on the 100-seat plane is thus a <a href="http://en.wikipedia.org/wiki/Binomial_distribution">binomial random variable</a> with $100+x$ trials and probability of success $p$:<br />$P(N=n)=\binom{100+x}{n} p^{n}(1-p)^{100+x-n}$.<br />The term $p^{n}(1-p)^{100+x-n}$ is the probability of a specific sequence of $n$ out of $100+x$ customers boarding their flight, whereas the term $\binom{100+x}{n}$ gives the number of combinations of such sequences (we don't care which of the customers show up-- just whether they do or not!).<br /><br />One approach might be to choose $x$ such that the expected value of $N$ is equal to the number of seats so that just the right amount of customers show up in the long run:<br />$E(N)=(100+x)p=100$.<br /><br />This approach is short-sighted since it does not take into account the cost of the airline ticket or the voucher award. 
For example, if the airline gives out \$1 million vouchers to overbooked customers, the airline wouldn't overbook at all.<br /><br />A better approach is to find a formula for the expected value of the revenue of this flight with our policy of overbooking by $x$ customers and plot the expected revenue as a function of $x$ to see which $x$ maximizes revenue. The revenue $r=r(n)$ depends on the number of passengers $n$ out of $100+x$ ticket purchasers that actually show up. We get income from each person that boards the plane and lose income from each person we bump off of the plane in the case that we are over capacity ($n>100$):<br />$r(n) = 250n$ if $n \le 100$ [if at most 100 show, we get \$250 for each passenger that shows, and we don't lose any revenue since no customers were bumped.]<br />$r(n) = (250)(100) - 800(n-100)$ if $n > 100$ [if more than 100 show, we get \$250 only for the first 100 passengers, and we lose \$800 for each of the $(n-100)$ customers that were bumped.]<br /><br />Now, the revenue that we expect to make, given an overbooking policy:<br />$E(\text{revenue} \, | x)=\displaystyle \sum_{n=0}^{100+x} P(N=n | x) \, r(n)$.<br />The $P(N=n)$ is given by the binomial$(100+x,p)$ distribution given a few lines above. Since we are more likely to get a full plane with increasing overbooking $x$, we become more and more likely to get the maximum possible income \$(250)(100) from the flight as $x$ increases. On the other hand, we are more and more likely to go over a full plane as $x$ increases, and the \$800 cost of bumping passengers starts to erode our revenue stream.<br /><br />Using the normal approximation to the binomial distribution (with a continuity correction), I plot the expected revenue as a function of overbooking $x$ in the graph below. There are a number of remarks from this plot that aid our intuition.<br /><ul><li>During a full flight, the revenue would be \$250(100 seats)=\$25000, the upper y-limit on this graph.
Note that, in the long-run, we cannot expect to fill every airplane seat-- even if we choose a good $x$.</li><li>Selling 100 tickets for 100 seats ($x=0$) does not maximize the revenue. The maximum expected revenue occurs when we sell 109 tickets! That is, revenue is maximized when we oversell the flight 9 seats beyond capacity. [$x=9$ maximizes revenue, and is therefore the best choice.]</li><li>Beyond 109 seats, the revenue decreases because the cost of bumping customers (vouchers, getting the next flight, this customer will fly on a different airline in the future) outweighs the higher certainty of getting a full plane and getting income from 100 full seats. Eventually, when we overbook the plane by 46, the airline is expected to pay more for bumping passengers than it receives in ticket sales!</li></ul><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-f4mr5R9lGpw/UaEq2tckI3I/AAAAAAAABjc/uxz3u4WwXq0/s1600/dude.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="482" src="http://3.bp.blogspot.com/-f4mr5R9lGpw/UaEq2tckI3I/AAAAAAAABjc/uxz3u4WwXq0/s640/dude.png" width="640" /></a></div><br /><br />It should be clear why and how airlines choose to overbook flights to maximize their profits. Each empty seat is lost money, but the airline must weigh this against the risk of paying for vouchers and hotels for customers that couldn't fit on the full flight-- and the lost customer loyalty that ensues*.<br /><br />This analysis considers only the revenue of the airline. However, there is an externality associated with bumping passengers. 
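As a sanity check on the plot above, the expected-revenue curve can be recomputed with the exact binomial distribution (a sketch under the post's assumptions; since the plot used a normal approximation, the optimum could in principle shift by a seat, and the variable names here are mine):

```python
from math import comb

def expected_revenue(x, seats=100, p=0.9, price=250, bump_cost=800):
    """Exact expected revenue when selling seats + x tickets.

    Each ticket holder shows up independently with probability p, and
    r(n) pays `price` per boarded passenger and charges `bump_cost` per
    bumped passenger, as in the post.
    """
    total = seats + x
    ev = 0.0
    for n in range(total + 1):
        # P(N = n) for N ~ Binomial(total, p)
        prob = comb(total, n) * p**n * (1 - p)**(total - n)
        r = price * n if n <= seats else price * seats - bump_cost * (n - seats)
        ev += prob * r
    return ev

# The naive expectation-matching policy solves (100 + x) p = 100:
naive_x = 100 / 0.9 - 100  # about 11.1 tickets over capacity

# The revenue-maximizing overbooking level (close to the post's x = 9):
best_x = max(range(30), key=expected_revenue)
print(naive_x, best_x)
```

Note the contrast: matching the expected number of show-ups to capacity suggests overselling by about 11, while maximizing expected revenue, which penalizes bumping at \$800 a head, stops near the post's 9.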
Think about how this passenger may lose out on one day of pay, how his or her employer loses out on one day of valuable work, and how the local ice cream shop loses out on one customer who would have otherwise taken his or her family out for ice cream that day.<br /><br />* Lost customer loyalty was theoretically included in the "cost of bumping a customer" and the analysis holds.<br /><br />** Ticket price changes with season! We can see how complicated this gets.<br /><br />*** Realistically, airlines will have models that take into account customer demographics. Perhaps even customer-specific data: one with a history of missing flights can be assumed to be more likely to miss a flight again. Further, tickets sold in a group may be treated differently: e.g., a whole family buying a set of tickets vs. a single businessman. See <a href="http://www.nytimes.com/2007/05/30/business/30bump.html?pagewanted=all&_r=0">this article</a> for how complicated airline models realistically may be. An interesting factor is the airport from which one is flying. Think about it: leaving Las Vegas vs.
Cleveland-- who is more likely to miss their flight?<br /><br /><br />Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-2700792294997127931.post-61387897895079009922013-01-03T15:26:00.000-08:002013-01-03T15:28:28.382-08:00Basemap: Toolkit in Python for plotting data on maps<script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shCore.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushCpp.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushCSharp.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushCss.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushDelphi.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushJava.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushJScript.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushPhp.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushPython.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushRuby.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushSql.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushVb.js" type="text/javascript"></script><script src="http://syntaxhighlighter.googlecode.com/svn/trunk/Scripts/shBrushXml.js" type="text/javascript"></script> <script language="javascript">dp.SyntaxHighlighter.BloggerMode(); dp.SyntaxHighlighter.HighlightAll('code'); </script> I found a cool package in Python for plotting data on 
maps. It's called <a href="http://matplotlib.org/basemap/">Basemap</a>, a toolkit under Matplotlib. The <a href="http://matplotlib.org/basemap/users/examples.html">example gallery</a> shows off some of its capabilities for meteorology/climatology data and the images you see on the small screen on the back of the seat in front of you on an airplane.<br /><br />As another example, I plotted the location of the 15 most populous cities in the United States and made the size of the solid circle proportional to the population. Interestingly, more than half of the 15 most populous cities are in California and Texas. The output:<br /><br /><a href="http://4.bp.blogspot.com/-Gli4XeFi01Q/UOYSbPT1hjI/AAAAAAAABFk/mBLlXsbWqeo/s1600/image.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://4.bp.blogspot.com/-Gli4XeFi01Q/UOYSbPT1hjI/AAAAAAAABFk/mBLlXsbWqeo/s1600/image.png" height="475" width="640" /></a>The code is below. I put the data (population, latitude, longitude) for each of the cities in a dictionary. Using some basemap commands, I plotted a map of the US. 
The scatter object then plots each city on the map with a red, solid circle whose size is proportional to the population of the city.<br /><br /><br /><pre>
import pylab as plt
from mpl_toolkits.basemap import Basemap

plt.close('all')

# Population, latitude (deg N), and longitude (deg W) of each city
pop = {'New York': 8244910, 'Los Angeles': 3819702, 'Chicago': 2707120,
       'Houston': 2145146, 'Philadelphia': 1536471, 'Phoenix': 1469471,
       'San Antonio': 1359758, 'San Diego': 1326179, 'Dallas': 1223229,
       'San Jose': 967487, 'Jacksonville': 827908, 'Indianapolis': 827908,
       'Austin': 820611, 'San Francisco': 812826, 'Columbus': 797434}

lat = {'New York': 40.6643, 'Los Angeles': 34.0194, 'Chicago': 41.8376,
       'Houston': 29.7805, 'Philadelphia': 40.0094, 'Phoenix': 33.5722,
       'San Antonio': 29.4724, 'San Diego': 32.8153, 'Dallas': 32.7942,
       'San Jose': 37.2969, 'Jacksonville': 30.3370, 'Indianapolis': 39.7767,
       'Austin': 30.3072, 'San Francisco': 37.7750, 'Columbus': 39.9848}

lon = {'New York': 73.9385, 'Los Angeles': 118.4108, 'Chicago': 87.6818,
       'Houston': 95.3863, 'Philadelphia': 75.1333, 'Phoenix': 112.0880,
       'San Antonio': 98.5251, 'San Diego': 117.1350, 'Dallas': 96.7655,
       'San Jose': 121.8193, 'Jacksonville': 81.6613, 'Indianapolis': 86.1459,
       'Austin': 97.7560, 'San Francisco': 122.4183, 'Columbus': 82.9850}

# Lambert conformal map of the continental United States
m = Basemap(llcrnrlon=-119, llcrnrlat=22, urcrnrlon=-64, urcrnrlat=49,
            projection='lcc', lat_1=33, lat_2=45, lon_0=-95, resolution='c')
m.drawcoastlines()
m.drawstates()
m.drawcountries()

max_size = 80  # marker size of the most populous city (New York)
for city in lon.keys():
    x, y = m(-lon[city], lat[city])  # convert (lon, lat) to map coordinates
    m.scatter(x, y, max_size * pop[city] / float(pop['New York']),
              marker='o', color='r')
plt.show()
</pre> Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-8774448202808367282012-12-24T10:58:00.000-08:002012-12-26T18:40:48.984-08:00The Principle of Maximum Entropy<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js" type="text/javascript"> MathJax.Hub.Config({ HTML: ["input/TeX","output/HTML-CSS"], TeX: { extensions: ["AMSmath.js","AMSsymbols.js"], equationNumbers: { 
autoNumber: "AMS" } }, extensions: ["tex2jax.js"], jax: ["input/TeX","output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], processEscapes: true }, "HTML-CSS": { availableFonts: ["TeX"], linebreaks: { automatic: true } } }); </script> The principle of maximum entropy is invoked when we have some piece(s) of information about a probability distribution, but not enough to characterize it completely-- likely because we do not have the means or resources to do so. As an example, if all we know about a distribution is its average, we can imagine infinitely many shapes that yield that particular average. The principle of maximum entropy says that we should humbly choose the distribution that maximizes the amount of unpredictability contained in the distribution, under the constraint that the distribution matches the average that we measured. Taking the idea to the extreme, it wouldn't be scientific to choose a distribution that simply yields the average value 100% of the time. Below, we define entropy and show how it can be interpreted as unpredictability or uninformativeness.<br /><br />Take a sample space of $n$ events, where event $i$ occurs with probability $p_i$. The surprisal of event $i$ is defined as $-\log{p_i}$. As $p_i$ runs over $[0,1]$, the surprisal decreases monotonically from infinity to zero. This is intuitive because, if an event $i$ will occur with certainty ($p_i=1$), we will not be surprised at all when it occurs; if an event $i$ cannot possibly occur ($p_i=0$), we will be infinitely surprised. Why choose the logarithm among the many functions that decrease monotonically from infinity to zero over $[0,1]$? If we have two independent events $i$ and $j$, the probability of, after two observations, seeing event $i$ and then event $j$ is $p_i p_j$.
Via the famous property of the logarithm, the surprisal to see this occur is $-\log{(p_i p_j)}=-\log{p_i}-\log{p_j}$, making the surprisals additive.<br /><br />Entropy is a characteristic not of a single event, but of the entire probability distribution. Entropy is defined as the average surprisal over the entire distribution, $\langle -\log{p_i} \rangle=-\displaystyle \sum _{i=1}^n p_i \log{p_i}$. The entropy is a measure of how uninformative a given probability distribution is-- a high entropy translates to high unpredictability. Thus, maximizing entropy is consistent with maximizing unpredictability, given the little information we may know about a distribution. The most informative distribution we can imagine is one where we know that a particular event will occur 100% of the time, giving an entropy of zero. The least informative distribution we can imagine is a uniform distribution, where each event in the sample space has an equal chance of occurring, giving an entropy of $\log{n}$. The uniform distribution is the least informative because it treats each event in the sample space equally and gives no information about one event being more likely to occur than another. Next, we show mathematically that, when we know nothing about a probability distribution, the distribution that maximizes the entropy is the uniform distribution. This is the principle of equal a priori probabilities: "in the absence of any reason to expect one event rather than another, all the possible events should be assigned the same probability" [1].<br /><br />Something we always know about a probability distribution is that it must be normalized so that $\displaystyle \sum _{i=1}^n p_i =1$. So, we maximize the entropy $-\displaystyle \sum _{i=1}^n p_i \log{p_i}$ under the normalization constraint.
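Before carrying out the maximization, the limiting cases above are easy to verify numerically. A minimal sketch (not part of the original derivation):

```python
import numpy as np

def entropy(p):
    """Entropy -sum p_i log p_i, with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nonzero = p > 0
    return -np.sum(p[nonzero] * np.log(p[nonzero]))

n = 4
certain = [1.0, 0.0, 0.0, 0.0]  # most informative: entropy 0
uniform = [1.0 / n] * n         # least informative: entropy log(n)
skewed = [0.7, 0.1, 0.1, 0.1]   # somewhere in between

print(entropy(certain))  # 0.0
print(entropy(uniform))  # log(4), about 1.386
print(entropy(skewed))   # between 0 and log(4)
```

The certain distribution attains the minimum entropy of zero, the uniform distribution attains the maximum $\log{n}$, and everything else falls in between.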
Using a Lagrange multiplier, we recast this problem as:<br /><br />$\max \left( \displaystyle -\sum _{i=1}^n p_i \log{p_i} +\lambda ( \sum _{i=1}^n p_i-1) \right).$<br /><br />Taking the derivative with respect to $p_i$ and setting it equal to zero for a maximum gives $-\log{p_i}-1+\lambda=0$, so that $p_i=e^{\lambda -1}$: the same for every event $i$. By the normalization constraint, this gives us a uniform distribution $p_i=\frac{1}{n}$ and an entropy of $\log{n}$. So the principle of equal a priori probabilities is really a subcase of the principle of maximum entropy!<br /><br />[1] http://www.thefreedictionary.com/Principle+of+equal+a-priori+probability.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-26581236165820521272012-08-24T16:26:00.001-07:002012-12-21T11:23:44.471-08:00The arithmetic mean is not always appropriate<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js" type="text/javascript"> MathJax.Hub.Config({ HTML: ["input/TeX","output/HTML-CSS"], TeX: { extensions: ["AMSmath.js","AMSsymbols.js"], equationNumbers: { autoNumber: "AMS" } }, extensions: ["tex2jax.js"], jax: ["input/TeX","output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], processEscapes: true }, "HTML-CSS": { availableFonts: ["TeX"], linebreaks: { automatic: true } } }); </script> Although many think of statistics as a very objective field, the application of statistics to a data set requires care; otherwise, the conclusions that result may be misleading. Here we provide examples of how important it is to choose a proper descriptive statistic when measuring the central tendency of a variable.<br /><div><br /></div><div>Restricting ourselves to a scalar variable $x$ (e.g., salaries, the speed of cars on a highway, body weight), let's assume that we have collected data for every member in the population in question. With our data set, the first question one usually asks is 'What is the typical value of our variable?'.
Usually, one would think of the <b>arithmetic mean</b>, where we add up all of the numbers and divide by how many numbers are present in the data set ($\frac{1}{N}\displaystyle \sum _{i=1}^N x_i$, where $x_i$ is the value of the $i$th observation or measurement of $x$). While in many cases the arithmetic mean gives a perfectly reasonable measure of the typical value in a data set, the following examples serve to break the stereotype that the arithmetic mean is always an appropriate measure of central tendency.<br /><div><br /></div><div><b><span class="Apple-style-span" style="color: blue;">Case 1: What is the typical salary of an employee at a small company X?</span></b></div><div>Here, we have the observations $x_i$ of the salary of each person in company X. The arithmetic mean aggregates all of the money that everyone makes in one year into a large pool, and then divides the money equally among the employees to determine the salary of each employee. Given the salary data in the table below, the arithmetic mean of the annual salary is around \$108,000. Is this really a reasonable measure of the central tendency for what the typical employee makes at company X? Only one out of the 14 employees makes more than <i>half</i> of \$108,000. The arithmetic mean is clearly not an appropriate descriptive statistic for the typical value of the annual salary at company X.<br /><br /></div><div><br /></div><div><table border="2" bordercolor="#0033FF" cellpadding="3" cellspacing="3" style="background-color: #99ffff; width: 100%;"><tbody><tr><th>Employee</th><th>Annual Salary</th></tr><tr><td>CEO</td><td>$1,000,000</td></tr><tr><td>Computer Scientists (10 of them)</td><td>$45,000 each</td></tr><tr><td>Accountant</td><td>$30,000</td></tr><tr><td>Janitor</td><td>$20,000</td></tr><tr><td>Intern</td><td>$10,000</td></tr></tbody></table></div></div><br />Instead, the descriptive statistic almost always used to report the central tendency of salaries is the <b>median</b>.
The arithmetic mean is very sensitive to outliers, as this example illustrates. The median salary is the one that divides the employees into two equally sized groups: those with lower salaries and those with higher salaries. Sorting the list of salaries and choosing the value in the middle (with an even number of employees, the average of the middle two values), we get a median salary of \$45,000, which seems a much more reasonable 'average' salary at company X.<br /><br /><b><span class="Apple-style-span" style="color: blue;">Case 2: What is the typical speed of a set of cars cruising on a highway?</span></b><br />Speed is defined as the distance traveled in a given time unit, and any reasonable average speed should reflect the aggregated distance traveled by the cars divided by the aggregated time spent traveling among the cars. Depending on how the data is collected, the arithmetic mean may be inappropriate. Assume that each driver is cruising at a constant speed.<br /><br />One way to collect data is to observe the distance $d_i$ traveled by each car on the highway after one hour of traveling. Then, the mean speed is the total distance traveled over the total time traveled among all of the cars:<br />$\dfrac{\displaystyle \sum_{i=1}^N d_i \mbox{ km}}{N \mbox{ traveling hours}} = \frac{1}{N} \displaystyle \sum_{i=1}^N v_i \mbox{ km/hr}$,<br />which is the same as the arithmetic mean of the velocity $v_i$ of each car on the highway during the hour-long journey.<br /><br />Another way is to measure the time $t_i$ taken by each car to travel a distance of 1 km. Then the mean speed is again the total distance traveled over the total time traveled among all of the cars, but we arrive at a different formula:<br />$\dfrac{N \mbox{ km}}{\displaystyle \sum_{i=1}^N t_i \mbox{ traveling hours}} = \dfrac{N}{\displaystyle \sum_{i=1}^N \frac{1}{v_i} } \mbox{ km/hr}$.<br />The latter formula is called the <b>harmonic mean</b> of the velocity.
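Both cases can be checked in a few lines of Python -- a sketch using the salary table above and some made-up cruising speeds:

```python
import numpy as np

# Case 1: the salary table above (14 employees)
salaries = [1000000] + [45000] * 10 + [30000, 20000, 10000]
mean_salary = np.mean(salaries)      # ~107,857: skewed upward by the CEO
median_salary = np.median(salaries)  # 45,000: a far better 'typical' salary

# Case 2: hypothetical cruising speeds (km/hr) of four cars
v = np.array([60.0, 80.0, 100.0, 120.0])
arithmetic = np.mean(v)              # sampling distances traveled in 1 hr
harmonic = len(v) / np.sum(1.0 / v)  # sampling times to cover 1 km

print(mean_salary, median_salary)
print(arithmetic, harmonic)
```

For these speeds the arithmetic mean is 90 km/hr while the harmonic mean is about 84 km/hr; the harmonic mean never exceeds the arithmetic mean.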
It turns out that the harmonic mean is the better measure of the typical rate of a process when the data collected are the times taken to complete a fixed amount of the process.<br /><br /><b><span class="Apple-style-span" style="color: blue;">Case 3: The Human Development Index (HDI)</span></b><br /><span class="Apple-style-span" style="font-family: inherit;">The Human Development Index (HDI) is "<span class="Apple-style-span" style="line-height: 18px;">a single statistic which was to serve as a frame of reference for both social and economic development" [1] that ranks the development level of countries around the world. The index incorporates three factors: life expectancy at birth, years of education, and gross national income per capita. [Norway is #1 and the US is #4, look it up.] The old HDI was computed with an arithmetic mean to combine all of the data. However, it recently changed to use the geometric mean in combining the life expectancy, years of education, and income per capita to arrive at the amalgamated HDI.</span></span><br /><br />The <b>geometric mean</b> is defined as:<br /> $GM(x_1,x_2,...,x_N)=(x_1 \cdot x_2 \cdots x_N)^{\frac{1}{N}}$.<br />It is called the "geometric" mean because, in two dimensions, the geometric mean of two numbers $a$ and $b$ is the length of the side of a square whose area is the same as that of a rectangle with sides of length $a$ and $b$.<br /><br />The reason the HDI is now computed with the geometric mean stems from a useful property of the geometric mean that does not hold for any other mean: the geometric mean is invariant to normalizations.
Think about the drastic change in scale between the amount of money someone makes (e.g., 60,000) and the life expectancy (e.g., 60). The arithmetic mean of the life expectancy and income would place a greater emphasis on the differences in income between countries since a 1% change in income would be large (e.g., 600) in comparison to a 1% change in life expectancy (e.g., 0.6). The geometric mean is somewhat magical in that it "ensures that a 1% decline in index of say life expectancy at birth has the same impact on the HDI as a 1% decline in education or income index" [1] by virtue of its mathematical property: <br /><br />$GM \left(\dfrac{X_i}{Y_i} \right)=\dfrac{GM(X_i)}{GM(Y_i)}$.<br /><br />A person reasonably good with numbers would attempt to normalize the data on the life expectancy, years of education, and income, and then compute the arithmetic mean. But, the normalization reference chosen is somewhat arbitrary here, and it can be shown that the ranking using the arithmetic mean changes depending on the reference value chosen for the normalization, while the ranking under a geometric mean is invariant to the normalization reference. 
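This invariance is easy to verify numerically. A minimal sketch with made-up two-component "country" data (income, life expectancy) -- illustrative numbers, not real HDI inputs:

```python
import numpy as np

def am(x):
    return np.mean(x)  # arithmetic mean

def gm(x):
    return np.prod(x) ** (1.0 / len(x))  # geometric mean

# Hypothetical (income in $, life expectancy in yr) for two countries
A = np.array([60000.0, 60.0])
B = np.array([50000.0, 75.0])
Y = np.array([1000.0, 1.0])  # normalization: income in thousands of dollars

# The multiplicative property GM(X/Y) = GM(X)/GM(Y)
print(np.isclose(gm(A / Y), gm(A) / gm(Y)))  # True

# The ranking under the arithmetic mean flips with the normalization...
print(am(A) > am(B), am(A / Y) > am(B / Y))  # True False

# ...while the ranking under the geometric mean is unchanged
print(gm(A) > gm(B), gm(A / Y) > gm(B / Y))  # False False
```

With the raw data, the arithmetic mean is dominated by income and ranks country A first; rescaling income into thousands flips that ranking, while the geometric-mean ranking is the same before and after.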
See [2] for an example.<br /><br /> [1] http://hdr.undp.org/en/statistics/hdi/ [2] http://en.wikipedia.org/wiki/Geometric_meanUnknownnoreply@blogger.com2tag:blogger.com,1999:blog-2700792294997127931.post-26479542297231483152012-08-18T15:42:00.003-07:002012-08-19T11:18:41.013-07:00How a statistically inept jury led to a wrongful conviction<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js" type="text/javascript"> MathJax.Hub.Config({ HTML: ["input/TeX","output/HTML-CSS"], TeX: { extensions: ["AMSmath.js","AMSsymbols.js"], equationNumbers: { autoNumber: "AMS" } }, extensions: ["tex2jax.js"], jax: ["input/TeX","output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], processEscapes: true }, "HTML-CSS": { availableFonts: ["TeX"], linebreaks: { automatic: true } } }); </script> <br />In 1999, Sally Clark of Britain was wrongly convicted of murdering her two infant sons, who actually died of sudden infant death syndrome (SIDS).<span style="font-family: inherit;"> "<span style="background-color: white; line-height: 17px; text-align: left;">SIDS is the unexpected, sudden death of a child under age 1 in which an autopsy does not show an explainable cause of death. [1]" </span></span><span style="background-color: white;">The case was overturned a little more than three years later and Sally was released. In 2007, Sally was tragically found dead in her home due to alcohol over-intoxication.</span><br /><br />The pediatrician Roy Meadow served as an "expert" witness for the case, and he is thought to have played a major role in convincing the jury of Sally Clark's guilt by making two major statistical errors in assessing the probability of Sally Clark's innocence.
[In my opinion, using arguments of probability instead of hard evidence for convicting criminals is not so just, but let us proceed with this premise regardless.]<br /><br /><span style="color: blue;"><b>The assumption of independence.</b></span><br /><b>Roy Meadow's Claim 1:</b> <i>Data indicate that 1/8,543 infants born from mothers in class A* die of SIDS. Sally belongs to class A. Thus, the probability of Sally's first <b>and</b> second child dying of SIDS is (1/8,543)(1/8,543) = 1 in 73 million.</i><br /><br />The data say that, given a randomly selected infant born from a mother in class A, the probability that this newborn will die of SIDS is 1/8,543. The probability that Sally's first child would die of SIDS is then 1/8,543. However, the probability that her second infant would die of SIDS is <b>not</b> 1/8,543 (which is what Roy Meadow assumed to obtain the 1 in 73 million) because we now know something more about Sally: her first child had died of SIDS. In fact, since SIDS is likely due to genetic factors, the probability of Sally's second child dying of SIDS is almost certainly <i>much</i> higher than 1/8,543 because we now know Sally might carry a genetic element that predisposes her children to SIDS. Instead, Roy Meadow made the assumption that Sally's second infant dying of SIDS is completely independent of the event of her first infant's death by SIDS and declared the probability of her second son dying as 1/8,543 in calculating the 1 in 73 million probability.<br /><br /><b><span style="color: blue;">The prosecutor's fallacy.</span></b><br /><b>Roy Meadow's Claim 2:</b> <i>The probability that two infants of the same mother both naturally die of SIDS (not by murder) is very small. 
Thus, the probability that Sally Clark is innocent is comparably small.</i><br /><br />For simplicity, let us neglect all other possible explanations for the deaths of Sally Clark's two infants and consider that exactly one of two things happened: (1) Sally Clark murdered both of her infants. (2) Both infants died of SIDS. <span style="background-color: white;">Let us denote two events as:</span><br />$I$: Sally Clark is innocent.<br />$E$: the evidence that two of Sally's infants died is observed.<br /><br />$P(E | I)$ is the probability that, given Sally did not murder her two infants, the evidence would be observed. Since we are neglecting any other explanations of the deaths of Sally's two infants, this corresponds to hypothesis (2). Even if Roy Meadow's underestimate of $P(E|I)$ from Claim 1 were correct, we could still refute Roy Meadow's Claim 2.<br /><br />$P(I | E)$ is the probability that, given the evidence, Sally Clark is innocent. $P(I | E)$ is the most important quantity that the jury would like to know as a basis for its decision, and it is the quantity that Roy Meadow wrongly assumed to be equal to $P(E | I)$. This is precisely the prosecutor's fallacy: assuming that $P(I | E) = P(E | I)$. In words, the fallacy is that, because the likelihood of Sally's two children both dying without her murdering them is very small, the probability of Sally's innocence given the observed evidence is comparably small. <br /><br />We show that they are not equal using Bayes' Theorem, derived in <a href="http://mathemathinking.blogspot.jp/2012/06/why-cocaine-addicts-should-learn-bayes.html">Why cocaine users should learn Bayes' Theorem</a>.<span style="font-family: inherit;"> "<span style="background-color: white; line-height: 18px;">Bayes Theorem allows you to separate how likely alternative explanations of an event are, from how likely it was that the event should have happened in the first place.
[2]" Relating the conditional probabilities $P(E | I)$ and $P(I | E)$, we immediately see that they are not equal, but we carry on to investigate how exactly they differ:</span></span><br /><br />$P(I | E) = \dfrac{P(E | I) P(I) }{ P(E)}$.<br /><br />Next, rewriting the probability that the evidence is observed by considering the only two ways one may observe the evidence, namely by Sally being innocent or not innocent, $P(E) = P(E | I) P(I) + P(E | \mbox{~} I)P(\mbox{~} I)$, where $\mbox{~}$ denotes a negation so that $P(\mbox{~} I) = 1 - P(I)$ is the probability Sally is not innocent. Substitute this expression for $P(E)$ into Bayes' Theorem above to arrive at: <br /><br />$P(I | E)=\dfrac{P(E | I)P(I)}{P(E | I) P(I) + P(E | \mbox{~} I) P(\mbox{~} I)}$.<br /><br />Well, $P(E | \mbox{~}I)$ is the probability that the evidence would be observed given that Sally murdered her two children-- this is one. Dividing the numerator and denominator of the right-hand side by $P(\mbox{~}I)$ brings in the ratio of the probability of an event happening to that of it not happening-- the odds, $\mathrm{odds}(I) = P(I)/P(\mbox{~}I)$:<br /><br />$P(I | E) = \dfrac{P(E | I) \, \mathrm{odds}(I) }{ P(E | I) \, \mathrm{odds}(I) + 1}$,<br /><br />and, according to the prosecutor's claim, $P(E | I)$ is very small, so as long as the product $P(E | I) \, \mathrm{odds}(I)$ remains small, we can say that:<br /><br />$P(I | E) \approx P(E | I) \, \mathrm{odds}(I).$<br /><br />$\mathrm{odds}(I)$ is not conditioned on any evidence. Although we don't know its value, the odds that a random mother from class A (Sally is essentially this randomly chosen mother, since we don't have any other evidence on her) will not murder her two children are much larger than one. The large $\mathrm{odds}(I)$ in this case makes $P(I | E) \gg P(E | I)$-- invalidating Roy Meadow's Claim 2.
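To put numbers on this argument, here is a sketch with purely illustrative figures -- the assumed probability of double infanticide below is hypothetical, chosen only to demonstrate the formula, and is not a statistic from the actual case:

```python
# Hypothetical numbers for illustration only -- not statistics from the case
p_E_given_I = 1.0 / 73e6  # Meadow's (flawed) figure for double SIDS
p_guilty = 1.0 / 140e6    # assumed prior probability of double infanticide
odds_I = (1.0 - p_guilty) / p_guilty  # prior odds of innocence, ~1.4e8

# P(I|E) = P(E|I) odds(I) / (P(E|I) odds(I) + 1), using P(E|~I) = 1
x = p_E_given_I * odds_I
p_I_given_E = x / (x + 1.0)

print(p_I_given_E)  # ~0.66: nowhere near 1 in 73 million
```

Even granting Meadow his tiny $P(E|I)$, the huge prior odds of innocence push $P(I|E)$ up to roughly two-thirds under these assumed numbers.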
<b>Just because the probability of observing the evidence if the defendant were innocent is very small, it does not follow that the probability of the defendant being innocent given the evidence, which may be very valuable to a jury, is comparably small. </b>The analysis in [2] estimates $P(I | E)$ to be 2/3 in Sally's case-- a far cry from the 1 in 73 million figure obtained by making two serious statistical errors that ruined someone's life.<br /><br /><span style="background-color: white;">[1] </span><a href="http://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0002533/" style="background-color: white;">http://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0002533/</a><br />[2] <a href="http://plus.maths.org/content/beyond-reasonable-doubt" style="background-color: white;">http://plus.maths.org/content/beyond-reasonable-doubt</a> A much better (but longer) account than mine here.<br />[3] <a href="http://en.wikipedia.org/wiki/Prosecutor's_fallacy">http://en.wikipedia.org/wiki/Prosecutor's_fallacy</a><br />*Class A: infants born to non-smoking, affluent families with a mother over 26 [2].<br /><br /><br /><br />Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-2700792294997127931.post-16885889322251923712012-08-10T05:08:00.003-07:002012-11-07T21:19:59.218-08:00The Kelly Betting Strategy<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js" type="text/javascript"> MathJax.Hub.Config({ HTML: ["input/TeX","output/HTML-CSS"], TeX: { extensions: ["AMSmath.js","AMSsymbols.js"], equationNumbers: { autoNumber: "AMS" } }, extensions: ["tex2jax.js"], jax: ["input/TeX","output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], processEscapes: true }, "HTML-CSS": { availableFonts: ["TeX"], linebreaks: { automatic: true } } }); </script> <span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">The Kelly betting strategy is for optimizing the expected growth rate of an investment when making a series of bets in which 
one has an advantage. The strategy is well-known in economics (as well as in gambling) and plays a huge role in real-life investing. </span><br /><blockquote class="tr_bq"><span style="color: #222222; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif; line-height: 23.399999618530273px;">"the Kelly criterion is integral to the way we manage money." -</span><span style="color: #222222; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif; line-height: 23.399999618530273px;">Legg Mason Capital Management CEO Bill Miller [1].</span></blockquote><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">As an example, consider a game where a single die is rolled. An investor is willing to take the other side of a series of bets with us on each roll: our $k$th bet is that a 6 will be rolled, and the investor bets that it will not. If we put down a bet for a particular roll and win, the investor will give us back $x$ times the amount we bet (this includes the money we initially put down). If we lose, the investor will keep our money. Let $p$ be the probability that we will win the bet-- in this case $\frac{1}{6}$.</span><br /><br /><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"></span><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">Before we think of a betting strategy, we first need to confirm that the expected amount of money we win in one bet is positive. Otherwise, it would be improvident for us to make any bets against this investor.
Taking a basis of a one dollar bet and considering that we gain \$($x-1$) with probability $p$ (a win) and lose \$1 with probability $1-p$:</span><br /><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">E(\$ you earn in a single bet of \$1) = $p$[\$$(x-1)$] - $(1-p)$[\$1] $= px-1$.</span><br /><br /><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">We need $px>1$ for the law of large numbers to dictate that we will have a net gain after placing many bets. Thinking of $p$ as fixed, if $x$ is too small and drives $px<1$-- the investor is not willing to give us a good payoff* if we win-- then we are expected to lose money because the investor has an advantage by risking little of his money in the bet but getting a relatively large reward from our bet if he wins. Thinking of $x$ as fixed, we need the probability of winning a single bet, $p$, to be large enough to drive $px>1$-- if we are very unlikely to win the bet, it would be shortsighted for us to bet at all. </span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"><br /></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">Let’s assume that the investor offers us a payoff such that $px>1$. You have a bank account with $V_0$ dollars that you intend to invest. The dilemma is this: you definitely want to make bets that a 6 is rolled, as $px>1$ guarantees you will earn a positive return after a large number of bets. If you win a bet, you should bet some of the extra money you just won in the next bet to increase the magnitude of your return; $px>1$ after all (akin to compounding interest).
However, you don’t want to bet all of your investment pool each time or you will eventually lose your initial $V_0$ investment as well as any money you won up to that point when a number other than 6 is rolled (which is quite likely to happen).<o:p></o:p></span><br /><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"><br /></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">At the two extremes, (1) you bet nothing for every bet $k$ and lose the opportunity to gain money; (2) you bet everything for every bet $k$ and risk losing your entire savings (you certainly will eventually), after which you cannot place more bets. Under the Kelly betting system, for every bet $k$, you bet a fixed fraction $\alpha$ of your current investment pool**. Clearly, there is an optimum fraction $\alpha$ of your bank account that you should bet each round in order to maximize your expected return. Let’s find it, following the derivation in the excellent book [2].</span><br /><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"><o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"><br /></span><br /><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">The random variable in this process is $R_k$, which we define by:</span></div><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">$R_k = \begin{cases} 1, & \mbox{a 6 is rolled} \\ 0, & \mbox{a 6 is not rolled.} \end{cases}$</span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"><o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"></span><br /><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"><br /></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue,
Arial, Helvetica, sans-serif;">Let $V_k$ be the size of our investment pool after the $k$th bet ($V_0$ fits in this notation). After the first bet, we have an investment pool of $V_1= V_0 ( 1-\alpha) + V_0 \alpha x R_1$ since we keep the $V_0(1-\alpha)$ of our bank account that we did not bet and get back $V_0 \alpha x$ only if we win ($R_1=1$ in this case). That is, $V_1=V_0(1-\alpha +\alpha x R_1)$. After the second bet, $V_2=V_1(1-\alpha) +V_1 \alpha x R_2$ since we keep the $V_1(1-\alpha)$ that we did not bet and get back $V_1 \alpha x$ only if we win. We trace back to $V_0$ by our expression for $V_1$:<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">$V_2=V_1 (1-\alpha+ \alpha x R_2)$<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">$V_2=V_0 (1-\alpha+ \alpha x R_1) (1-\alpha+ \alpha x R_2)$ and if we continue:<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">$V_k=V_0 (1-\alpha+ \alpha x R_1) (1-\alpha+ \alpha x R_2) \cdots (1-\alpha+ \alpha x R_k)$.<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"><br /></span><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">This is where a trick comes in (sorry). Let us define a growth rate $G_k$ such that we can write an exponential formula for the growth of our investment pool:</span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">$V_k=V_0 e^{kG_k}$.<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">Taking the natural logarithm of both sides, we find that $G_k=\frac{1}{k} \log \left(\dfrac{V_k}{V_0} \right)$. We know exactly what $\dfrac{V_k}{V_0}$ is from two expressions above!
Plugging this in and using a property of logarithms, that the log of a product of terms is the sum of the log of each term, we elucidate why we defined our growth factor this way:<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">$G_k = \frac{1}{k} \log \left((1-\alpha+ \alpha x R_1) (1-\alpha+ \alpha x R_2) \cdots (1-\alpha+ \alpha x R_k) \right)$<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"> $= \frac{1}{k} \displaystyle \sum_{n=1}^{k} \log (1-\alpha +\alpha x R_n)$<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">The above is just the average value that $\log (1-\alpha +\alpha x R_n)$ takes on after $k$ trials. By the law of large numbers, as $k \rightarrow \infty$ this average converges to the expected value of $\log (1-\alpha +\alpha x R)$. We calculate this by knowing that $R=1$ with probability $p$ and $R=0$ with probability $1-p$:<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">$G_k=E(\log (1-\alpha +\alpha x R))= p \log (1-\alpha +\alpha x)+ (1-p) \log (1-\alpha)$<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">Noting that $G_k=G_k(\alpha)$ is a function of the betting fraction $\alpha$, we differentiate the growth rate $G_k$ with respect to $\alpha$ and set the derivative to zero to solve for the $\alpha$ that maximizes $G_k$ (get out your paper).
<b>We finally get the optimum betting fraction in terms of the payoff for the bet and the probability of winning each bet:</b><o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">$\boxed{\alpha = \dfrac{px-1}{x-1}}$.<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">The numerator $px-1>0$ is the expected return on a bet of one dollar; the larger this edge, the larger the optimum betting fraction, which is intuitive. Of course, $x>1$ or we would be giving out money. As the investor is willing to pay off more, $\alpha$ approaches $p$ (take the limit as $x \rightarrow \infty$).<o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"><br /></span><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">* the payoff odds are defined to be $x-1$, since this is really the money that you gain in the case of a win.</span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;">**investment pool is $V_0$ + whatever cash you won from all previous bets minus whatever cash you lost from all previous bets. It is akin to buying stock and reinvesting dividends. <o:p></o:p></span></div><div class="MsoNormal"><span style="font-family: Helvetica Neue, Arial, Helvetica, sans-serif;"><br /></span><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">[1] </span><a href="http://www.businessweek.com/stories/2005-09-25/get-rich-heres-the-math" style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">http://www.businessweek.com/stories/2005-09-25/get-rich-heres-the-math</a><br /><span style="background-color: white; color: #222222; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px;">[2] Understanding Probability by Henk Tijms.
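The boxed formula is easy to sanity-check numerically. Here is a short Python sketch (my addition, not from [2]; the payoff $x=7$ is an arbitrary illustrative choice) that computes the Kelly fraction for the die game and confirms by brute force that it maximizes the expected log-growth $G$:

```python
import math

def kelly_fraction(p, x):
    """Optimal fraction of the bankroll to bet per round when a win
    pays back x times the stake with probability p (requires p*x > 1)."""
    return (p * x - 1) / (x - 1)

def expected_log_growth(alpha, p, x):
    """E[log(1 - alpha + alpha*x*R)]: the growth rate G from the derivation."""
    return p * math.log(1 - alpha + alpha * x) + (1 - p) * math.log(1 - alpha)

# Die game: we win if a 6 is rolled, so p = 1/6; assume a payoff of x = 7
# (chosen so that p*x = 7/6 > 1 and betting is worthwhile).
p, x = 1 / 6, 7.0
alpha_star = kelly_fraction(p, x)
print(alpha_star)  # 1/36: bet about 2.8% of the pool each round

# Brute-force check over a grid of fractions: nothing beats alpha_star.
best_alpha = max(range(999), key=lambda a: expected_log_growth(a / 1000, p, x)) / 1000
print(best_alpha)
```

Betting more than the Kelly fraction actually lowers the long-run growth rate, which the grid search makes visible if you plot the curve.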
</span></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-2700792294997127931.post-13282645739024266862012-07-26T05:19:00.000-07:002012-07-29T01:08:58.567-07:00The Lost Boarding Pass<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js" type="text/javascript"> MathJax.Hub.Config({ HTML: ["input/TeX","output/HTML-CSS"], TeX: { extensions: ["AMSmath.js","AMSsymbols.js"], equationNumbers: { autoNumber: "AMS" } }, extensions: ["tex2jax.js"], jax: ["input/TeX","output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], processEscapes: true }, "HTML-CSS": { availableFonts: ["TeX"], linebreaks: { automatic: true } } }); </script> <br /><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><b><span style="background-color: white;">One hundred passengers are lined up to board a full flight. The first passenger lost his boarding pass and decides to choose a seat randomly. Each subsequent passenger (responsible enough to not lose their boarding pass) will sit in his or her assigned seat if it is free and, otherwise, randomly choose a seat from those remaining. What is the probability that the last passenger will get to sit in his assigned seat?</span></b><o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br /></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><i><b>A systematic solution by induction [1,2]</b></i><o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><span style="background-color: white;">To gain insight that hopefully leads to solving the full problem, mathematicians usually study a reduced scenario that is easier to solve. 
Taking this approach, we consider two passengers boarding a two-passenger plane. In this case, the only way for the second (last) passenger to sit in his assigned seat is if the first passenger happens to choose his own seat. Since the first passenger chooses one of the two seats randomly, the probability that the second passenger gets his own seat is 1/2.</span><o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br /></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">Let $p(n)$ be the probability that the $n$th passenger gets his or her assigned seat on an $n$-passenger plane. We determined that $p(2)=\frac{1}{2}$.<o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br /></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">Next, consider when there are three people boarding a three-passenger plane.<span class="apple-converted-space"> </span><span style="color: blue;">If the first passenger takes his own seat</span>, the second passenger's seat will be free for him or her to sit in, and the third (last) passenger will get his own seat. However,<span class="apple-converted-space"><span style="color: #38761d;"> </span></span><span style="color: #38761d;">if the first passenger takes the second passenger's seat</span>, the second passenger must randomly choose between the first passenger's and the third passenger's seat.
The third passenger then gets his assigned seat only<span class="apple-converted-space"><span style="color: #cc0000;"> </span></span><span style="color: #cc0000;">if the second passenger happens to choose the first passenger's seat</span>, which has a probability of 1/2 since there are only two options and the choice is random. If the first passenger chooses the third passenger's seat, there is no hope that the third passenger will get his or her seat, of course. We have exhausted all ways of the last (3rd) passenger getting his or her seat.</span><br /><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">$p(3)=$ <span style="color: blue;">1/3</span> + (<span style="color: #38761d;">1/3</span>) (<span style="color: #cc0000;">1/2</span>)</span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">Notice this fact that screams induction: when the first passenger chose the 2nd passenger's seat, the second passenger now plays the role of the first passenger in the $p(2)$ problem-- the 2nd passenger has two seats to randomly choose from, and one of them is the seat of the third, and last, passenger.
So, we could write:<o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">$p(3) = \frac{1}{3} + \frac{1}{3} p(2)$.<o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br /></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">Now, for four passengers, consider all ways that the last passenger can occupy his own seat.<o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">$p(4) =$<span class="apple-converted-space"> </span><span style="color: blue;">1/4</span><span class="apple-converted-space"> </span>+<span class="apple-converted-space"> </span><span style="color: #38761d;">1/4</span><span class="apple-converted-space"> </span>p(3) +<span class="apple-converted-space"> </span><span style="color: #cc0000;">1/4</span><span class="apple-converted-space"> </span>p(2) = 1/4 + (1/4) (1/2) + (1/4) (1/2) = 1/2.<o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><span style="color: blue;">The first passenger takes his own seat.</span><o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><span style="color: #38761d;">The first passenger takes passenger #2's seat,<span class="apple-converted-space"> </span></span>and passenger #2 has three seats to randomly choose from-- one of which is passenger #4's seat. 
This is the $p(3)$ problem.<o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><span style="color: red;">The first passenger takes passenger #3's seat,</span><span class="apple-converted-space" style="color: #741b47;"> </span>and passenger #3 has two seats to randomly choose from-- one of which is passenger #4's seat. This is the $p(2)$ problem.<o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">Again, we get 1/2. We reduced the problem into the previous cases for $p(2)$ and $p(3)$, which we already know.<o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br /></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">For finding $p(n)$ ($n>1$), the probability that the first passenger takes his own seat is $\frac{1}{n}$. If the first passenger takes the $K$th passenger's seat ($K>1$), passengers $2,3,4,...,K-1$ will get their seats, but passenger $K$ will be faced with the reduced problem of having $n - (K-1)$ seats to randomly choose from, one of which is the last passenger's seat-- this is the $p(n-K+1)$ problem! 
So, as above, we consider all possible seats that the first passenger can possibly choose (with a $\frac{1}{n}$ chance) and bring the $p(n-K+1)$ problems into play:<o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;">$p(n) = \frac{1}{n} + \displaystyle \sum_{K=2}^{n-1} \frac{1}{n} p(n-K+1) = \frac{1}{n} \left(1 + \displaystyle \sum_{K=2}^{n-1} p(n-K+1) \right).$</div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br /></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">By the above recursion starting with $p(2)=\frac{1}{2}$, we can show that </span><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">$p(n)=\frac{1}{n} \left(1+(n-2)\frac{1}{2} \right)=\frac{1}{2}$<span style="background-color: white;">.</span><span style="background-color: white;"> F</span></span><span style="background-color: white; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">or every $n$-- and this includes $n=100$ for this problem-- we have that the probability that the last passenger sits in his assigned seat is 1/2.</span><br /><span style="background-color: white; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"></span><br /><span style="background-color: white; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">I expected it to be much less.</span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br /></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><i><b>A more insightful solution: paraphrased from [3]</b></i></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">The trick is to consider just 
before the last passenger boards the plane, when exactly one seat is left. <span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br /></span></span><br /><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">Claim: The last free seat is either the first or the last passenger's assigned seat.</span><br /><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><i>proof by contradiction</i>. Assume that the last free seat belongs to a passenger #$x$ who is neither the first nor the last passenger. Passenger $x$ boarded the plane before the last passenger, and since passenger $x$'s assigned seat was still free at that point, passenger $x$ would have sat in it-- a contradiction.</span><br /><span style="background-color: white; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br />One seat is now left for the last passenger. </span><i style="background-color: white; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">The event that the last person's seat was taken is equivalent to the event that the first passenger's seat was taken before the last person's seat.</i><span style="background-color: white; font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"> This is because of the above claim: if the last passenger's seat is taken, then the last passenger must sit in the assigned seat of the first passenger. If the first passenger's seat is taken, then the last passenger sits in his own assigned seat. Each time a passenger chooses a seat that is not assigned to them, the choice is random, so there is no bias for a passenger to choose the first passenger's seat over the last. Thus, the probability of the last passenger sitting in his assigned seat is 1/2.
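The 1/2 answer invites a sanity check. The Python sketch below (my addition, not part of the solutions in [1-3]) simulates the boarding process directly; the estimated probability hovers around 0.5 for any $n$:

```python
import random

def last_passenger_gets_seat(n=100):
    """Simulate the boarding process once; return True if passenger n
    ends up in seat n (seat i is assigned to passenger i)."""
    free = set(range(1, n + 1))
    free.remove(random.randint(1, n))                # passenger 1 picks at random
    for k in range(2, n):                            # passengers 2 .. n-1
        if k in free:
            free.remove(k)                           # own seat is free: take it
        else:
            free.remove(random.choice(tuple(free)))  # otherwise pick at random
    return n in free                                 # the one remaining seat

trials = 20_000
p_hat = sum(last_passenger_gets_seat() for _ in range(trials)) / trials
print(p_hat)  # close to 0.5, matching p(100) = 1/2
```

Changing `n` leaves the estimate near 0.5, matching the result that $p(n)=\frac{1}{2}$ for every $n>1$.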
</span></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><br /></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;"><span style="background-color: white;">[1] </span><a href="http://www.mscs.mu.edu/~paulb/Puzzle/boardingpasssolution.html"><span style="background-color: white;">http://www.mscs.mu.edu/~paulb/Puzzle/boardingpasssolution.html</span></a><o:p></o:p></span></div><div style="margin-bottom: .0001pt; margin: 0in;"><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">[2] Understanding Probability by Henk Tijms. This book is written in a colloquial style and has very interesting examples-- highly recommended.<o:p></o:p></span><br /><span style="font-family: 'Helvetica Neue', Arial, Helvetica, sans-serif;">[3] </span><a href="http://www.nd.edu/~dgalvin1/Probpuz/probpuz3.html">http://www.nd.edu/~dgalvin1/Probpuz/probpuz3.html</a></div><div class="MsoNormal"><br /></div>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-2700792294997127931.post-35011858413845870672012-06-19T23:16:00.001-07:002012-07-28T09:02:22.980-07:00Why cocaine users should learn Bayes' Theorem<script src="http://cdn.mathjax.org/mathjax/latest/MathJax.js" type="text/javascript"> MathJax.Hub.Config({ HTML: ["input/TeX","output/HTML-CSS"], TeX: { extensions: ["AMSmath.js","AMSsymbols.js"], equationNumbers: { autoNumber: "AMS" } }, extensions: ["tex2jax.js"], jax: ["input/TeX","output/HTML-CSS"], tex2jax: { inlineMath: [ ['$','$'], ["\\(","\\)"] ], displayMath: [ ['$$','$$'], ["\\[","\\]"] ], processEscapes: true }, "HTML-CSS": { availableFonts: ["TeX"], linebreaks: { automatic: true } } }); </script> Diagnostic tests for diseases and drugs are not perfect. Two common measures of test efficacy are sensitivity and specificity. 
Precisely, sensitivity is the probability that, given a drug user, the test will correctly identify the person as positive. Specificity is the probability that a drug-free patient will indeed test negative. Even if the sensitivity and specificity of a drug test are remarkably high, the false positives can be more abundant than the true positives when drug use in the population is low.<br /><br />As an illustrative example, consider a test for cocaine that has a 99% specificity and 99% sensitivity. Given a population of 0.5% cocaine users, what is the probability that a person who tested positive for cocaine is actually a cocaine user? The answer: 33%. <b>In this scenario with reasonably high sensitivity and specificity, two thirds of the people that test positive for cocaine are not cocaine users.</b><br /><br />To calculate this counter-intuitive result, we need Bayes' Theorem. A geometric derivation uses a Venn Diagram representing the event that a person is a drug user and the event that a person tests positive as two circles, each of area equal to the probability of the particular event occurring when one person is tested: $P(\mbox{user})$ and $P(+)$, respectively. Since these events can both happen when a person is tested, the circles overlap, and the area of the overlapping region is the probability that the events both occur [$P(\mbox{user and }+)$].<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-UTRHaJWZtTo/T-FdqNbqJGI/AAAAAAAAA4U/5EXzSPkwbPM/s1600/Bayes.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="184" src="http://1.bp.blogspot.com/-UTRHaJWZtTo/T-FdqNbqJGI/AAAAAAAAA4U/5EXzSPkwbPM/s320/Bayes.png" width="320" /></a></div><br /><br />We write a formula for the quantity that we are interested in, the probability that a person who tests positive is indeed a drug user, $P(\mbox{user} | +)$, (Read the bar as "given that". 
This is a 'conditional probability'.) by acknowledging that we are now only in the world of the positive test circle. The +'s that are actually drug users can be written as the fraction of the '+ test' circle that is overlapped by the 'drug user' circle:<br />$P(\mbox{user} | +) = \dfrac{P(\mbox{user and } +)}{ P(+)}$.<br /><br />We bring the sensitivity into the picture by considering the fraction of the drug users circle that is occupied by positive test results:<br />$P(+ | \mbox{user}) = \dfrac{P(\mbox{user and }+)}{P(\mbox{user})}$.<br /><br />Equating the two different ways of writing the joint probability $P(\mbox{user and }+)$, we derive <span style="color: blue;"><b>Bayes' Theorem:</b></span><br /><b> $P(\mbox{user} | +) = \dfrac{P(+ | \mbox{user}) P(\mbox{user})}{P(+)}$</b>.<br /><br />We already see that, in a population with low drug use, the sensitivity first gets multiplied by a small number. Since we do not directly know $P(+)$, we write it differently by considering two exhaustive ways people can test positive, namely by being a drug user and by not being a drug user. We weigh the two conditional events by the probability of these two different ways:<br />$P(+) = P(+ | \mbox{user}) P(\mbox{user}) + P(+ | \mbox{non-user}) P(\mbox{non-user})$<br /> $= P(+ | \mbox{user}) P(\mbox{user}) + [1 - P(- | \mbox{non-user})] [1-P(\mbox{user})]$<br />The specificity comes into the picture and $P(+)$ can be computed by the known values as $P(+)=0.0149$. Finally, using Bayes' Theorem, we calculate the probability that a person that tests positive is actually a drug user:<br />$P(\mbox{user} | +) = \dfrac{(99\%) (0.5\%) }{ (1.49\%) }= 33\%$.<br /><br />The reason for this surprising result is that <i>most </i>(99.5%) people that are tested are not actually drug users, so the small probability that the test will incorrectly identify a non-user as positive results in a reasonable number of false positives. 
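The calculation above is short enough to script. A small Python sketch (the function name `p_user_given_positive` is mine, not from [1]) reproduces the 33% figure and makes it easy to see how the answer changes with prevalence:

```python
def p_user_given_positive(prevalence, sensitivity, specificity):
    """Bayes' theorem: P(user | +) from the test characteristics."""
    true_pos = sensitivity * prevalence               # P(+ | user) P(user)
    false_pos = (1 - specificity) * (1 - prevalence)  # P(+ | non-user) P(non-user)
    return true_pos / (true_pos + false_pos)

# The post's example: 0.5% cocaine users, 99% sensitivity, 99% specificity.
print(round(p_user_given_positive(0.005, 0.99, 0.99), 3))  # → 0.332

# With 50% prevalence, the same test is far more trustworthy.
print(round(p_user_given_positive(0.5, 0.99, 0.99), 3))  # → 0.99
```

Sweeping the prevalence from 0 to 1 shows the positive predictive value collapsing as the tested population becomes mostly drug-free.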
While the test is good at correctly identifying the cocaine users, this group is so small in the population that the total number of positives from cocaine users ends up being smaller than the number of positives from non-drug users. There are important implications of this result when zero tolerance drug policies based on drug tests are implemented in the workforce.<br /><br />The same idea holds for diagnostic tests for rare diseases: the number of false positives could be greater than the number of positives for people that actually have the disease.<br /><br />[1] <a href="http://en.wikipedia.org/wiki/Bayes'_theorem">http://en.wikipedia.org/wiki/Bayes'_theorem</a> See 'drug testing'. This is where I obtained the example.Unknownnoreply@blogger.com9tag:blogger.com,1999:blog-2700792294997127931.post-9107257560435139112012-06-12T23:45:00.002-07:002013-05-20T23:45:00.488-07:00Simpson's ParadoxSimpson's paradox is a non-intuitive phenomenon in which a correlation that is present in several groups is the opposite of what is found when the groups are amalgamated together. Simpson's paradox elucidates the need to be skeptical of reported statistics that may be drastically dependent upon how the data are aggregated [1] and to be aware of lurking variables that may negate a conclusion about what <i>causes </i>the correlation in the data.<br /><br />The most interesting example comes from a case in 1973 where UC Berkeley was sued for discrimination against women in graduate school admissions. The data on percent acceptance indisputably show that, if a male applies, it is more likely for him to be admitted than if a female applies (44% vs. 35%).
At first glance, one may propose the causal conclusion that Berkeley is biased against females.<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-WRCbwx6UrBA/T9gsgu7U3iI/AAAAAAAAA34/-tvgXjH6Cu8/s1600/amalg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="480" src="http://2.bp.blogspot.com/-WRCbwx6UrBA/T9gsgu7U3iI/AAAAAAAAA34/-tvgXjH6Cu8/s640/amalg.png" width="640" /></a></div><br />However, if we partition the data by department to investigate the most discriminatory department, we find that, in 4/6 of the departments, a female applicant is more likely to be accepted than a male applicant. In the remaining two departments, the disparity between men and women is not nearly as drastic as in the amalgamated data above. These data refute the causal conclusion that Berkeley has a significant bias against women.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-406Lnmq3Iz0/T9gqi0vliDI/AAAAAAAAA3o/cSwuWsbdWZ0/s1600/department.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="481" src="http://3.bp.blogspot.com/-406Lnmq3Iz0/T9gqi0vliDI/AAAAAAAAA3o/cSwuWsbdWZ0/s640/department.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>The reason the correlation reverses when the aggregated data set is partitioned [Simpson's paradox] is a lurking variable that had not been considered when the lawsuit was filed, namely the department to which one applies. Let us look at the number of males and females that apply to each particular department.
We see that the least competitive departments A and B are heavily dominated by male applicants, while the most competitive departments E and F are dominated by female applicants.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-8MDLi_H2O4Q/T9guEcAZ-bI/AAAAAAAAA4A/hv1mHmMskjs/s1600/pops.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="481" src="http://3.bp.blogspot.com/-8MDLi_H2O4Q/T9guEcAZ-bI/AAAAAAAAA4A/hv1mHmMskjs/s640/pops.png" width="640" /></a></div>The reason that, in the amalgamated data, a significantly higher percentage of male applicants than female applicants are accepted is that females applied to more competitive departments than males did. Thus, as a whole, it <i>was</i> more likely that a male applicant would be accepted to Berkeley. But this is because, according to the data, a woman was more likely to apply to a department that has a lower average acceptance rate.<br /><br />Several other real-life examples of Simpson's paradox, such as batting averages, kidney stone treatments, and birth weights, can be found on the Wikipedia page [2], from which these data were taken.<br /><br />[1] P. J. Bickel, E. A. Hammel, J. W. O'Connell. Sex Bias in Graduate Admissions: Data from Berkeley. Science 187 (4175), 1975, pp. 398-404.<br />[2] <a href="http://en.wikipedia.org/wiki/Simpson's_paradox">http://en.wikipedia.org/wiki/Simpson's_paradox</a>Unknownnoreply@blogger.com2
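A tiny numerical sketch makes the reversal concrete. The Python below uses hypothetical admission counts (two departments, not the actual Berkeley numbers): women are admitted at the higher rate in each department, yet men have the higher pooled rate.

```python
# Hypothetical (applicants, admitted) counts per department, split by sex.
# Department A is easy to get into; department F is hard. Men mostly apply
# to A, women mostly to F -- the lurking variable discussed in the post.
men   = {"A": (800, 480), "F": (200, 20)}
women = {"A": (100, 80),  "F": (900, 180)}

def rate(group):
    """Overall (pooled) acceptance rate for a group."""
    applied = sum(a for a, _ in group.values())
    admitted = sum(x for _, x in group.values())
    return admitted / applied

# Within EACH department, women are admitted at the higher rate...
for d in ("A", "F"):
    assert women[d][1] / women[d][0] > men[d][1] / men[d][0]

# ...yet pooling the departments reverses the comparison: Simpson's paradox.
print(rate(men), rate(women))  # → 0.5 0.26
```

Here men's per-department rates are 60% and 10% against women's 80% and 20%, but the pooled rates are 50% for men versus 26% for women, purely because of where each group applied.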