Video Transcript: Fallacies of Probability
Hi, I'm David Feddes. This talk is about fallacies of probability. I'm going to be leaning on two books for many of the examples in this talk: Daniel Kahneman's Thinking, Fast and Slow, and Rolf Dobelli's The Art of Thinking Clearly. Let's begin with a question: what is more likely? A: Chicago airports are closed; flights are canceled. Or B: Chicago airports are closed due to bad weather; flights are canceled. Here's another question. Vincent studies the Bible a lot. He prays for others. He's kind, he has leadership gifts, he's a talented speaker. What's more likely? A: Vincent is an accountant. Or B: Vincent is an accountant and a bi-vocational pastor. What do you think? A is more likely. No matter how many things about Vincent match your idea of a pastor, it is impossible for B to be more likely than A. It's impossible for Vincent to be an accountant and bi-vocational pastor without being an accountant; everyone who is an accountant and pastor is an accountant. So it can't be more probable that he's both than that he's just one of the two. As for the airports being closed, A is more likely: it's more likely that Chicago airports are closed, period, than that they're closed for bad weather. No matter what you think of Chicago weather, it's impossible for B to be more likely than A. Every closing for weather is a closing, so a weather closing can't be more probable than a closing, period. There are going to be some closings for other reasons, maybe a bomb threat, or a pandemic, or a workers' strike. Maybe weather closings are the biggest share of closings, but the total number of closings is bigger than just the weather closings, and therefore the probability of merely closing is greater than the probability of closing for weather in particular. What's going on here is the conjunction fallacy: judging the conjunction of two things as more likely than just one of the things. The statement A and B is never more likely than A alone.
And it's never more likely than B alone. A conjunction can never be more probable than either of its conjuncts. That's the rule, and so it's a fallacy to judge two things together as more likely than just one of the things being true. Now, why is this fallacy attractive? Well, Daniel Kahneman says it's because of representativeness. If you give extra details that match a mental stereotype, it feels more plausible. But each added detail actually decreases the probability of the overall picture being true. Plausibility blinds us to probability: the more details there are, the more plausible it feels, but the less probable it actually is that somebody fits all those details. Now, if you commit the conjunction fallacy, you're not the only one. Daniel Kahneman says he was at an international conference of business experts, and he asked the question: what's more likely? A: oil consumption will decrease by 30% next year. Or B: a dramatic rise in oil prices will lead to a 30% reduction in oil consumption next year. The majority of the participants chose B. But that's the conjunction fallacy, because A just says consumption will decrease by 30%. It can't be more probable that consumption will decrease by 30% and that the decrease will be caused by a rise in oil prices. The decrease could be caused by changes in the economy, say a recession; it could be caused by a pandemic; there might be other reasons. All those reasons are captured in the plain statement that consumption decreases by 30%. Any explanation you add gives more detail and makes it sound more plausible, but it makes it less likely that the particular detail and the decrease will both be true at the same time. So even the experts commit the conjunction fallacy. And as you learn critical thinking, you're trying to remove more and more fallacies from your own thinking.
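The conjunction rule can be checked with a short simulation. The numbers here are made up purely for illustration (the talk gives no closure rates): assume airports close 5% of days, and 70% of those closures are weather-related.

```python
import random

# Monte Carlo sketch of the conjunction rule, with hypothetical numbers:
# airports close with probability 0.05, and 70% of closures are due to
# weather. Both figures are invented for illustration only.
random.seed(0)

trials = 100_000
closed = 0
closed_for_weather = 0
for _ in range(trials):
    if random.random() < 0.05:          # airport closed today?
        closed += 1
        if random.random() < 0.70:      # closed because of weather?
            closed_for_weather += 1

p_closed = closed / trials
p_closed_and_weather = closed_for_weather / trials

# The conjunction can never beat its conjunct: P(closed AND weather) <= P(closed).
assert p_closed_and_weather <= p_closed
print(p_closed, p_closed_and_weather)
```

However you set the hypothetical rates, the estimate of "closed and for weather" can never exceed the estimate of "closed," because every weather closing is counted among the closings.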
Now that we're done with the conjunction fallacy, let's ask a few more questions. James has a gorgeous wife. He owns a big house. He drinks fine wines, and he wears expensive clothes. What car is James more likely to drive: a Ferrari or a Toyota? Got it? Next question. Susan wears glasses. She has a degree from a university. She reads poetry and novels, and she loves classical music. What is Susan more likely to be? A: an English professor. Or B: a stay-at-home mom. What do you think? The correct answer is a stay-at-home mom. There are literally millions of stay-at-home moms, and there are only thousands of female English professors. So for every female English professor, there are huge numbers of stay-at-home moms. Even though the description fits what you think a professor would be like, there just aren't that many professors out there compared to the total number of stay-at-home moms, who may also happen to have glasses and a university degree and like to read and listen to classical music. What's being ignored is the base rate: how many stay-at-home moms there actually are compared to how many female English professors there actually are. The same holds true for James, his gorgeous wife, and the kind of car he's going to drive. He's probably driving a Toyota rather than a Ferrari, because a thousand Toyotas are sold for every Ferrari. In the most recent year, 10,000 Ferraris were sold in the world, and 10 million Toyotas were sold. You may think James sounds like the kind of guy who would drive a Ferrari, and maybe he does, but it's much more likely that he drives a Toyota because the base rate of Toyotas is so much higher. The base rate fallacy comes when you focus on information about a specific case but ignore the general base rate in the population at large. There's an old saying: when you hear hoofbeats behind you, don't expect a zebra, because it's more likely to be a horse.
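A little arithmetic makes the Ferrari point concrete. The 10,000 vs. 10 million sales figures come from the talk; the "fits the rich-guy description" rates below are invented purely for illustration, and the point holds for almost any rates you pick.

```python
# Base-rate sketch for the Ferrari/Toyota question. Sales figures are from
# the talk; the description-fit rates are hypothetical illustrations.
ferraris = 10_000
toyotas = 10_000_000

fit_rate_ferrari = 0.50   # hypothetical: half of Ferrari owners fit the description
fit_rate_toyota = 0.01    # hypothetical: only 1% of Toyota owners fit it

ferrari_fits = ferraris * fit_rate_ferrari   # 5,000 people
toyota_fits = toyotas * fit_rate_toyota      # 100,000 people

# Among everyone who fits the description, what share drive a Ferrari?
p_ferrari_given_fit = ferrari_fits / (ferrari_fits + toyota_fits)
print(round(p_ferrari_given_fit, 3))  # 0.048 -- still under 5%
```

Even when Ferrari owners are fifty times more likely to fit the stereotype, the sheer number of Toyotas means a description-fitting man is still overwhelmingly likely to drive a Toyota.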
Base rate errors can come from at least two different sources. One is just your intuition jumping to conclusions: you hear a description, and the picture of a Ferrari-driving guy comes to mind, or the picture of a professor comes to mind, and you ignore how few professors there are compared to all the women out there, or how few Ferrari drivers there are compared to how many Toyota drivers there are. So intuition just jumps to conclusions. Another kind of base rate error can come from simply not being able to figure out the math involved in base rate calculations. Let's look first at how our intuition can commit this fallacy. Your intuition ignores the overall probability and jumps to conclusions. Why does it make that jump? Well, again, one of the culprits, according to Daniel Kahneman, is representativeness: details that match your mental stereotype feel more plausible, despite the lower base rate. You hear all these details, and they seem to fit your picture, but you don't ask yourself how many of those pictures are actually out there. It fits the picture of the Ferrari driver, but what about those 10 million Toyota drivers? So there's representativeness, where you go with what's plausible rather than what's probable. And then there's also substitution, where the probability question gets changed into a resemblance question: does this resemble your stereotype of a Ferrari driver, or your stereotype of a professor? If it fits, you've swapped the probability question for a mere resemblance question. Now, base rates are important in a lot of different areas of life. One is medicine, and doctors have to pay attention to base rates. A doctor may have a patient come in complaining of abdominal pain. Abdominal pain can mean a lot of different things. It could mean cancer, it could mean an ulcer, it could mean the patient has a flu bug, or it might just be a case of indigestion. So what's a doctor to do?
Well, doctors, even specialists, are trained to check for the most common problems before rare conditions. So if someone came in with abdominal pain, even if you were an oncologist, a cancer specialist, you would not immediately start checking for cancer. You would first ask about indigestion, or whether they've had the flu, or something like that, because your specialty does not mean that the most common cause of abdominal pain is the thing you specialize in. So doctors will go down a diagnostic list, starting with what they know from their studies to be the causes with the highest base rates, and then moving on to the lower base rates. And sometimes, even when a test suggests a diagnosis, maybe they say, "I've run the test, and it looks like colon cancer," even then they may not tell the patient right away. Because sometimes the tests make mistakes more often than the disease actually occurs: there are certain rare diseases where the test's error rate is greater than the frequency of the disease itself. So doctors will want to double-check, maybe even triple-check, before they tell a patient that they have a very serious but rare illness. Base rates matter in medicine, and they also matter in counseling and in pastoral care. For instance, there are a variety of causes of very bizarre behavior, or of seeing things or hearing voices. There might be a biochemical imbalance, there might be drug abuse, there might be relational wounds from the past that have erupted into some mental illness. There might be demon possession. Now, a pastor should look for causes with the higher base rate before looking for demons. Demon possession, being totally controlled by a demon, is possible, but it's not all that frequent.
It's more frequent for there to be a medical biochemical imbalance, or for the person to have been using drugs, or to need to work through some relational wounds from the past. A wise pastor, a wise counselor, will not immediately jump to the most striking conclusion and say, "Boy, this sure sounds like a case of demon possession in the Bible." Well, maybe it does. But you still have to pay attention to base rates, not just resemblance, because a case may click with a resemblance to what you've studied in the Bible, and that does not mean it's automatically that diagnosis. It could be, but other causes are more probable, and you need to be aware of that. That's an important, practical application of not committing this fallacy of probability, the base rate fallacy. Base rate errors, as I've mentioned, can come from your intuition jumping to conclusions because a case matches a description. Or they may come simply from the inability to figure out some math. If you want to calculate the probability of an event, you have to include the base rate data, and then you have to do the math correctly. So you need to know: what is the overall base rate involved here, and how do I do the math? I'll give a couple of examples. Let's say you're testing for a virus, and you know from research that this virus has infected 5% of people in the population. Now Amy comes in. She feels fine, but she gets tested for this virus just to be safe. Amy's test comes back positive. You also know that the test is correct in 90% of cases; it has 90% accuracy. So now you've got Amy with a positive test. What's the probability that she's infected? Think about it. What's the probability that Amy actually has the virus, if she's tested positive and the virus test is correct in 90% of cases? The most common answer is 90%. The correct answer is approximately 32%. And here's how you work it out.
Picture 1,000 people. For every 1,000 people in the population, there would be 45 true positives and 95 false positives. The way you get the probability is to take the true positives and divide by the sum of the true positives plus the false positives, because you can't ignore the false positives. So you work out the math. In the numerator: 5% of the population has the virus, and the test has 90% accuracy, so .05 × .90 = .045, or 45 people out of 1,000. Now the denominator, and do not forget the denominator: it's the true positives plus the false positives. To that .045 you have to add the fact that 95% of the people in the population don't have this virus, and yet the test, with its 90% accuracy rating, has a 10% false positive rate. So the 95% of the population who don't have the virus has to be multiplied by the 10% error rate, which gives you .095. Add .045 + .095 and you get .14. Divide .045 by that, and you get about 32%. You can tap it all into your calculator if you're doubting my math. But the point isn't so much getting the arithmetic straight as getting the formula correct: when you're trying to figure out the likelihood that a positive test actually identifies a person who has the condition, you take the true positives, but then you have to divide by the true positives plus the false positives. Let's take a similar problem. A city has two cab companies, the Green Cab Company and the Blue Cab Company. 85% of the cabs in the city are green; 15% are blue. A cab was in a hit-and-run at night, and a witness says the cab was blue. Witnesses are 80% correct at discerning between blue and green cabs at night; that's what the analysis has determined.
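The virus calculation is just Bayes' rule, and it can be written as a few lines of code. The function name and structure here are mine, not from the talk; like the talk, it assumes the test's accuracy is the same for infected and healthy people.

```python
# A sketch of the base-rate calculation as code. Assumes a single
# "accuracy" figure that applies both to detecting true cases and to
# clearing healthy people, as in the talk's example.

def posterior(base_rate, accuracy):
    """P(condition | positive result) by Bayes' rule."""
    true_pos = base_rate * accuracy               # e.g. .05 * .90 = .045
    false_pos = (1 - base_rate) * (1 - accuracy)  # e.g. .95 * .10 = .095
    return true_pos / (true_pos + false_pos)

# Amy's virus test: 5% prevalence, 90% accuracy.
print(round(posterior(0.05, 0.90), 2))  # 0.32, not 0.90
```

The same function applies directly to the upcoming cab problem by calling `posterior(0.15, 0.80)`.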
So, given an 80% rate of correctly seeing the difference between blue and green and identifying the color, what's the probability that the cab was blue? Think about it. The common answer is 80%. The correct answer to this one is approximately 41%. For every 100 cabs, about 12 claims to see blue will be true, but 17 will be mistaken. How do we get that? Again, you take the true positives for a witness seeing blue, but you have to divide by the true positives plus the false positives. In the numerator, .15 is the proportion of blue cabs, and with an 80% accuracy rate for the eyewitness, that gets you .15 × .80 = .12. Then in the denominator, you add that number to the false positives. What are those going to be? Well, 85% of the cabs are green, and if the witness is 80% right, he's 20% wrong. So .85 × .20 = .17. Add .12 to .17 and you get .29. Divide .12 by .29 and you get about 41%. That's the likelihood that it actually was a blue cab in the accident. It doesn't seem like that's right, does it? But that's what happens when you actually account for base rates. It's true when you're testing for viruses, it's true in witness cases, and it's true in a lot of other situations: you have to think about the base rate. Don't assume that just because you hear hoofbeats behind you, it's a zebra. Well, let's move on to another kind of question. A study of kidney cancer in the 3,141 counties of the United States found that the lowest rates of kidney cancer were in counties which are mostly rural, sparsely populated, and in traditionally Republican states. Why would that be? Why would the lowest rates of kidney cancer be in rural, sparsely populated areas in traditionally Republican states?
Well, unless you're a really, really zealous Republican, do you think that Republican politics would prevent kidney cancer? Probably not. How about this explanation: the rural lifestyle (no air pollution, no water pollution, clean living, access to fresh food without additives) lowers the cancer rates. Does that explanation make sense to you? Well, the same study of kidney cancer in the 3,141 counties of the United States found that the highest rates of kidney cancer were in counties which are mostly rural, sparsely populated, and in traditionally Republican states. Why? How about this explanation: the rural lifestyle (poverty, less access to good medical care, high-fat diet, too much alcohol, too much tobacco) increases cancer rates, and that explains why the lowest-population counties have the highest cancer rates. What do you think of that explanation? Well, the rural lifestyle can't be the cause of both extreme lows and extreme highs. It can't cause both. But the fact of the matter is that if you looked for the highest rates of cancer, they came in the lower-population counties, and if you looked for the lowest rates of cancer, they also came in lower-population counties, though not the same counties. What's the real cause here? The real cause is greater variation when you're dealing with smaller sample sizes. Extreme outcomes, both low and high, are more likely to be found in small samples than in large samples. The bigger the sample, the closer it's going to be to the real overall rates. If you get extreme lows or extreme highs, that often occurs in small samples, and when it does, don't go looking for other explanations. This can happen to the best of us, and sometimes to the richest of us. It was noted that the most successful schools tend to be among the smaller schools.
And so the Bill and Melinda Gates Foundation spent $1.7 billion on studies and experiments to learn why small schools are best. But here's an inconvenient fact: the least successful schools also tend to be among the smaller schools. The cause in this whole deal is the small numbers fallacy. When you have smaller schools, you'll get more extreme scores toward the high end, but also more extreme scores toward the bottom, because the bigger the sample, the less likely you are to have extremes. According to the law of small numbers, you get your extremes in the small samples, both the high extremes and the low extremes. And you won't want to launch a big study to find out why rural living makes you less likely to get cancer if rural living also makes you more likely to get cancer. The fact of the matter is, rural living didn't cause cancer, period; the difference was the sample sizes. In the counties with huge populations, you didn't get the extreme percentages of cancer rates, and the same thing is true of the schools. Now, you could have learned that more cheaply just by taking a little course on inductive logic and fallacies of probability, rather than spending $1.7 billion. When you're one of the great geniuses of the world, and you're worth tens of billions of dollars, and you're spending $1.7 billion, you probably should take into account the small numbers fallacy. We seek causes, we seek certainty, we seek stories that make sense. That's just how we're wired. And so we look for explanations right away; we pay more attention to the content of messages than to information about their reliability. Whether the probability is low is not something we even ask; we're more likely to ask whether it's a gripping, interesting, persuasive message.
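The small-samples effect is easy to reproduce. Here's a quick simulation (the 10% "disease" rate and the county sizes of 50 and 5,000 are invented for illustration): the same underlying rate produces both extreme highs and extreme lows, but only in the small counties.

```python
import random

# Simulate a condition striking 10% of people everywhere, then compare
# observed rates across many small counties vs. many large ones.
random.seed(1)

def county_rates(population, n_counties=1000, true_rate=0.10):
    rates = []
    for _ in range(n_counties):
        cases = sum(random.random() < true_rate for _ in range(population))
        rates.append(cases / population)
    return rates

small = county_rates(50)     # 1,000 counties of 50 people each
large = county_rates(5000)   # 1,000 counties of 5,000 people each

# Small counties produce both the extreme lows and the extreme highs;
# large counties cluster tightly around the true 10% rate.
print(min(small), max(small))
print(min(large), max(large))
assert max(small) - min(small) > max(large) - min(large)
```

Nothing about the counties differs except their size, yet if you ranked them, the "healthiest" and the "sickest" counties would both be small ones, just as in the kidney cancer study.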
But if you're going to try to find a causal explanation for some statistics, you're going to go wrong if it turns out that the cause was just a small sample size. If you're trying to explain extremes but you don't have much of a sample size, you're probably just falling for a false explanation. That's the small numbers fallacy. Well, let's move on. Daniel Kahneman tells about someone he was consulting with, a military trainer, who said: "If I praise superb performance, the person usually does worse the next time. If I scold poor performance, the person usually does better. Don't tell me that reward works better than punishment. Clearly, scolding gets better results than praise. When I scold, the people who didn't do well get better on the next try; when I praise, the people who did great seem to do more poorly the next time." Kahneman says he was dealing with this officer and a bunch of other officers, and he did not feel they would be very open to a lesson in algebra and probability right then. So he just asked them to mark a target on the floor. Then he gave each of the officers two marked coins and asked them to turn their backs on the target and throw the coins over their shoulders, first one coin and then the second. Then they charted where the coins landed. And it was noticed that those whose coins landed closest to the target on the first throw weren't quite as close the second time around, whereas those who missed most badly on their first throw turned out to be not quite so bad on their second; they were closer to the target than on that first really bad throw. The outstanding first throw tended to be less outstanding on the second, and the really stinky first throw tended to be a lot less stinky and closer to the target on the second.
And Kahneman says: now I've shown you that extremes tend to get closer to the average. So you shouldn't assume that extremely good performance is likely to be followed by extremely good performance, no matter what you did, whether you praised them or not. And if you scolded them, don't think their improvement was due to your excellent job of scolding. It's called regression to the mean. If they had an extremely poor performance the first time, they were likely to do better the second time even if you hadn't said a word. Below-average performance tends to improve toward the average the next time; above-average performance drops closer to average the next time. Such regression to the mean is normal, and it doesn't prove anything about the effectiveness of praise or scolding. The entire cause is just regression to the mean. Here's a question. A: very smart women tend to marry men less smart than they are. Isn't that an interesting claim? Or B: the correlation between the intelligence scores of spouses is not perfect. Well, A is provocative, and we wonder how to explain it: why do the super smart women marry men who aren't as smart? B is the kind of boring statement that makes you yawn and tell the math professor to go away. But boring B means that provocative A is statistically unavoidable. It's an absolute certainty that if B is true, then A is true, and trying to look for any other reason is a fallacy. You're committing the regression to the mean fallacy if you say, "Hmm, I wonder why very smart women tend to marry men who aren't as smart." It's because the correlation between the intelligence scores of spouses is not perfect. Not very exciting, is it? You'd like an explanation that says, you know, despite their intelligence, they like to feel smarter, they want to be the better one, and that's why they marry men beneath them. The real reason is that they're outliers. They're extremely smart.
And whoever they marry is likely not to be as smart as they are; their spouse is likely to be closer to the average. The same is true of men: extremely intelligent men are going to marry women who, on average, aren't quite as intelligent as they are, simply because the men are farther from the average than most of the population. For super smart men to marry only women equal to or greater than themselves in intelligence, you'd have to have a perfect correlation between the intelligence scores of spouses. So that boring statistical fact, that you don't have perfect correlation between intelligence scores, gives you the really exciting fact that super smart women tend to marry men less smart than they are. But it's just regression to the mean, and it's a fallacy to come up with any other explanation for why smart women tend to marry men who aren't quite as intelligent as they are. Here's another example of regression to the mean: depressed children taking pills improve over a three-month period. Well, maybe so, but depressed children hugging cats are also likely to improve over a three-month period, and depressed children drinking tea are likely to improve over a three-month period. Why is that? Well, no matter what happens with an extremely depressed child, they're likely not to be at the lowest point of how they feel a few months from now. Extremes tend to regress to the mean over time, regardless of other factors. Now, that does not mean that no medication ever does any good. I'm not saying that. What it means is that if you're going to test the value of a medication for depressed people, the treated group has to do more than improve, because they're going to improve by regression to the mean most of the time; they have to improve more than the control group. So you'd have two groups of extremely depressed people, and if you're testing the antidepressant, you have to prove that the treated group improved more than the control group.
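The regression effect can be reproduced with a short simulation. All the numbers here are invented for illustration: each performance is a fixed "skill" plus independent luck, and we simply watch what happens to the top performers on a second trial.

```python
import random
import statistics

# Regression to the mean: score = fixed skill + independent luck.
# Skill mean 100 (sd 10) and luck sd 10 are arbitrary illustrative choices.
random.seed(2)

n = 10_000
skill = [random.gauss(100, 10) for _ in range(n)]
trial1 = [s + random.gauss(0, 10) for s in skill]
trial2 = [s + random.gauss(0, 10) for s in skill]

# Pick the top 5% of performers on trial 1 -- no praise, no scolding.
cutoff = sorted(trial1)[int(0.95 * n)]
top = [i for i in range(n) if trial1[i] >= cutoff]

mean_t1 = statistics.mean(trial1[i] for i in top)
mean_t2 = statistics.mean(trial2[i] for i in top)
print(round(mean_t1, 1), round(mean_t2, 1))  # trial-2 average falls back
assert mean_t2 < mean_t1
```

The stars of trial 1 "decline" on trial 2 with no cause whatsoever; part of their first score was luck, and luck doesn't repeat. This is exactly why drug trials need control groups: both arms regress, so only the difference between arms is informative.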
That's a basic principle of all scientific and medical testing, because researchers know that regression to the mean happens and the extremes are likely to change somewhat over time. So they have to determine whether the change was caused just by regression, or sometimes by a placebo effect, by taking something the patient thought was good for them when it was just a nothing pill. Overall, they need a control group, not just for placebo purposes, but also because of regression to the mean. Both groups are likely to improve somewhat, and the drug has to improve people more than the group that didn't take it. Regression to the mean comes through in sports, too. Sportscasters spend a great deal of their time explaining things that are nothing more than regression to the mean. "He was fantastic in game one, but not in game two; he must have crumbled under the pressure." Well, maybe not. Maybe he was just getting lucky and having a fantastic game far beyond his usual performance in game one, and then he regressed to the mean, to his average level of performance. He didn't crumble under the pressure; he was just more himself in game two. This even happens in the spiritual life: "I felt so close to God last month, or last year, but now I don't feel as spiritual; I must be doing something wrong." Maybe not. Maybe it's regression to the mean. People don't always live at a super extreme high all the time, and they shouldn't beat themselves up if they don't feel as spiritual as they did a month ago. Extremes tend to return to the norm, to the mean. Here's another fallacy of probability: the gambler's fallacy, assuming that unrelated events influence each other. Here's an example: my coin flip came up heads twice in a row, so it's due to come up tails. No, it's not.
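That claim is easy to check empirically. Here's a quick simulation (purely illustrative): flip a fair coin a million times and look at what follows every run of two heads.

```python
import random

# Gambler's fallacy check: after two heads in a row, is tails "due"?
random.seed(3)

flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Collect the flip that follows every occurrence of heads-heads.
after_hh = [flips[i + 2] for i in range(len(flips) - 2)
            if flips[i] and flips[i + 1]]
p_heads_after_hh = sum(after_hh) / len(after_hh)
print(round(p_heads_after_hh, 3))  # hovers around 0.5 -- the coin has no memory
```

The proportion of heads after a heads-heads run stays right at one half; the coin has no memory of its past flips.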
If it came up heads twice in a row, there's still a 50/50 chance on the next flip that it will come up heads or tails. No matter how many times you flip it and it comes up heads, tails doesn't become any more likely. Unless, of course, you flip it many times and it always comes up heads; then you know it's a loaded coin, but that's a whole different thing. Here's another example, maybe from my own life: we've had three girls in a row, so if we have another baby, it's almost sure to be a boy. Well, there's a 50% chance it'll be a boy, no matter how many girls you've had in a row. The chances for the next one are 50/50 either way, and anything else you think is the gambler's fallacy. We had three girls in a row, and then we had girl number four, and then we had three boys in a row. In all of those cases, if you're estimating probabilities in advance, it's going to be 50/50. The gambler's fallacy is thinking that the past history of unpredictable events is going to influence future random or uncontrolled events. A lot of people think that way, but it's just a fallacy of probability. Here's another one: plain neglect of probability. Lottery players focus on the size of a jackpot, not on the likelihood of winning. They see Mega Millions, and they don't think to themselves, "There's only about one chance in a gazillion that I'm ever going to win that thing." What goes on in their heads is the size of the jackpot and the story of the winner they saw on TV; the actual math of probability doesn't loom large for them. Lotteries are a tax on people who aren't very good at math. After news of a plane crash, travelers cancel flights. Well, the crash of that one flight in many, many thousands doesn't change the probabilities.
For the remaining flights, the probability remains minuscule that your particular flight is going to crash. But many people will cancel flights just because of the one crash that got their attention. There's another area in which probability is neglected. Amateur investors are always thinking about how much a fund is going to yield, what percent it gained, how much a fund could make, but they don't look at each fund's level of risk. There are many different kinds of investments, some extremely high risk, and those had better return a lot, because you're taking on a high level of risk. You need to know levels of risk. But when you neglect probability, you're only looking at the potential for growth, not at the volatility or the level of risk. Well, when we talk about probability and inductive reasoning, one danger is that you don't know much about probability, so you commit a lot of fallacies, ignore inductive reasoning, and don't let past experience inform your thoughts about the likelihood of something happening again. But there is also an inductive error you can make by being overconfident in induction: thinking and acting as though past events are a guaranteed predictor, a safe guide to the future, because there are some situations in which that's not true. David Hume, centuries ago, in writing about this, gave the example of a goose. Every day the farmer feeds the goose. At first, the goose might be a little nervous about why this farmer is feeding it. But after a while, the goose reaches a conclusion: every day the farmer is going to feed me; every day I eat that food; every day I feel good after I eat it; this farmer has my best interests at heart. And by inductive reasoning, that is exactly right.
Yesterday, and the day before, and the day before that, the farmer fed the goose, and tomorrow the farmer is probably going to feed the goose. But there will come a day when the goose will feed the farmer, when the farmer eats the goose. Only then does the goose find out that the farmer did not have the goose's best interests at heart. Inductive reasoning does not give final truth. Rolf Dobelli tells of a friend of his, a thrill seeker, a base jumper who jumped from buildings and cliffs and all sorts of strange places. He said, "I've done 1,000 crazy jumps, and I've never been hurt." A few months later, he was dead. You can do 1,000 jumps successfully, but jump 1,001 may happen to be the one that isn't successful. And it's not just that you succeeded 1,000 times and failed only once; it means you're dead if you fail on attempt 1,001. So there are times when, in spite of a long track record of things going a certain way, there is no guarantee that they won't go a different way, and no guarantee that the one time things go wrong won't be absolutely catastrophic. Induction and probability are helpful, but only as long as current trends continue. There are black swan events, named after the fact that a black swan actually showed up after many, many years of people thinking the only color swans could ever be is white. A black swan event is something improbable and unforeseen, an event that nobody sees coming, but it changes everything. When you're trying to anticipate what's going to happen in the near future or the longer-term future, you can't anticipate what's going to be invented or what the innovations are going to be. They're going to change things so much that current induction, current probabilistic reasoning, just isn't going to tell you very much when those inventions actually come along. Or there may be wars that suddenly erupt that nobody could have anticipated.
There may be pandemics that take off that nobody knew about in advance and nobody was prepared for. There are various kinds of disasters. In those cases, induction and probability don't turn out to be an excellent guide. Most of the time they're helpful, as long as current trends just keep on going. But when you get a black swan, when you get a strange event, then your overconfidence in induction is going to fail you. And this is true in the ultimate sense as well. Induction and probability address short-term unknowns, but they don't help you with final certainty. Inductively, today your death is unlikely. Tomorrow, your death is very unlikely; it's improbable. But your death is certain, unless Jesus comes again. On any given day, it's improbable that you're going to die. And so you could use induction to say: because it's improbable that I'm going to die on this particular day, and because I've lived a lot of days and haven't died, therefore it's probably not going to happen that I'm going to die. But of course, you will, unless Jesus returns first. And some people have reasoned that way even in relation to Jesus' return. They say, hey, year after year after year has gone by and Jesus hasn't come back yet; it's not going to happen. Inductive reasoning would tell us that if you go this many years without him coming back, then it's not going to happen. Now, let's think about the probability. The probability is that this year, it's unlikely that Jesus will return; he might, but it's unlikely. But his return is certain. So induction overconfidence means that you take the past as a guide for the future. And even though it's a rough-and-ready, sort of helpful guide for today, one day at a time, it's not at all helpful in analyzing the likelihood of an event happening finally, in your life or in your death, as the case may be.
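Dobelli's base jumper shows how small per-attempt risks compound over many attempts. Here is a minimal sketch; the 99.9% per-jump survival rate is an assumed, illustrative figure, not a figure from the talk:

```python
# Sketch of compounding risk: even a tiny per-attempt danger adds up.
# The 99.9% per-jump survival probability is an assumed, illustrative number.

def survival_probability(p_success: float, attempts: int) -> float:
    """Chance of surviving every one of `attempts` independent tries."""
    return p_success ** attempts

per_jump = 0.999                      # one jump looks extremely safe
after_1000 = survival_probability(per_jump, 1000)

print(f"P(survive 1 jump)     = {per_jump:.3f}")      # 0.999
print(f"P(survive 1000 jumps) = {after_1000:.3f}")    # about 0.368
```

On these assumptions, surviving all 1,000 jumps is only about a 37% proposition, even though no single jump ever looks dangerous on its own.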
So as we think about inductive logic, don't get too confident in inductive logic, and don't get too confident in applying probabilities where probabilities just don't apply. The existence of the universe is improbable; it's here. Your death on any given day is improbable, but it will happen. Jesus told the story of that rich man who saved up for many years, and he was in good health and his finances just kept getting better and better. And he wanted to tear down his storehouses and build bigger storehouses and take early retirement and live it up. But then God said, "Tonight your soul will be required of you. Then who's going to get all that?" He had induction overconfidence. His money had been doing well, his body had been doing well, and so he assumed that tomorrow was going to be like today. And it wasn't. Those are some fallacies of probability: the conjunction fallacy, where you think that two things together are more likely than either one of those things alone, and that's impossible, because if one thing is probable, it's more probable on its own than when you add in another uncertainty; the base rate fallacy, where you ignore how often something happens overall and instead focus in on specific cases; the small number fallacy, where you try to explain the extremes in small test samples, either extremely high or extremely low, without allowing for the fact that it might just be random variation in a sample that isn't big enough; the regression to the mean fallacy, where you ignore the fact that extreme performance tends to go back to more average, that extreme behaviors tend to go back to more average, that even extreme emotions tend to go back to what's a little more average; the gambler's fallacy, where if you flip the coin and it comes up the same way a few times in a row, you think it's going to come up the other way the next time, but no, the previous flips have not changed the probability.
Then there's neglect of probability, where you play the lottery because you're thinking about all those dollars, not about the low, low, low probability of winning any of those dollars, and the absolute certainty that the lottery is making hundreds of millions of dollars off of people who are very bad at math. And then induction overconfidence: thinking about probability and saying, wow, I'm quite a mathematical person, then taking past experience as a guide to the future and putting so much confidence in it that you don't take into account surprises, and you don't take into account the ultimate events such as death and the end of the world and the creation of a new world. So think critically, think clearly, be aware of fallacies of probability, and learn to do better.
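Two of the fallacies recapped above, the conjunction fallacy and the gambler's fallacy, can be checked with a short simulation. The numbers below (a 5% chance an airport is closed on a given day, with 70% of closings due to weather) are illustrative assumptions, not figures from the talk:

```python
import random

random.seed(0)  # make the run reproducible

trials = 100_000

# Conjunction fallacy: P(closed AND weather) can never exceed P(closed).
closed = 0
closed_for_weather = 0
for _ in range(trials):
    if random.random() < 0.05:        # assumed: airport closed 5% of days
        closed += 1
        if random.random() < 0.70:    # assumed: 70% of closings are weather
            closed_for_weather += 1

# Every weather closing is also a closing, so this always holds:
assert closed_for_weather <= closed
print(f"P(closed)             = {closed / trials:.3f}")
print(f"P(closed for weather) = {closed_for_weather / trials:.3f}")

# Gambler's fallacy: after five tails in a row, heads is not "due".
streaks = 10_000
heads_after_streak = 0
for _ in range(streaks):
    run = 0
    while run < 5:                    # flip until five tails in a row...
        run = run + 1 if random.random() < 0.5 else 0
    if random.random() < 0.5:         # ...then look at the very next flip
        heads_after_streak += 1

print(f"P(heads after 5 tails) = {heads_after_streak / streaks:.3f}")  # still about 0.5
```

The simulated frequency of "closed for weather" always comes out at or below the frequency of "closed", and the flip after a five-tails streak still lands heads about half the time.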