More Good Stuff on Mercier and Sperber’s Argumentative Reasoning Theory

Wednesday, May 4th, 2011

You folks in the comments should stop arguing for a minute and check this out. Jonah Lehrer’s latest Wired Science column has more reaction to that very cool recent study by Mercier and Sperber on how confirmation bias can be explained as an evolutionary adaptation to the particular needs of reaching good decisions in a group context: The reason we reason. (With professional basketball content, too!)

It includes a link to an even cooler interview with Hugo Mercier: The argumentative theory: A conversation with Hugo Mercier.

Psychologists have shown that people have a very, very strong, robust confirmation bias. What this means is that when they have an idea, and they start to reason about that idea, they are going to mostly find arguments for their own idea. They’re going to come up with reasons why they’re right, they’re going to come up with justifications for their decisions. They’re not going to challenge themselves.

And the problem with the confirmation bias is that it leads people to make very bad decisions and to arrive at crazy beliefs. And it’s weird, when you think of it, that humans should be endowed with a confirmation bias. If the goal of reasoning were to help us arrive at better beliefs and make better decisions, then there should be no bias. The confirmation bias should really not exist at all. We have a very strong conflict here between the observations of empirical psychologists on the one hand and our assumption about reasoning on the other.

But if you take the point of view of the argumentative theory, having a confirmation bias makes complete sense. When you’re trying to convince someone, you don’t want to find arguments for the other side, you want to find arguments for your side. And that’s what the confirmation bias helps you do.

The idea here is that the confirmation bias is not a flaw of reasoning, it’s actually a feature. It is something that is built into reasoning; not because reasoning is flawed or because people are stupid, but because actually people are very good at reasoning — but they’re very good at reasoning for arguing. Not only does the argumentative theory explain the bias, it can also give us ideas about how to escape the bad consequences of the confirmation bias.

People mostly have a problem with the confirmation bias when they reason on their own, when no one is there to argue against their point of view. What has been observed is that oftentimes, when people reason on their own, they’re unable to arrive at a good solution, at a good belief, or to make a good decision because they will only confirm their initial intuition.

On the other hand, when people are able to discuss their ideas with other people who disagree with them, then the confirmation biases of the different participants will balance each other out, and the group will be able to focus on the best solution. Thus, reasoning works much better in groups. When people reason on their own, it’s very likely that they are going to go down a wrong path. But when they’re actually able to reason together, they are much more likely to reach a correct solution.

See? I knew there was a reason for continuing to engage with shcb. Fulfilling the evolutionary imperative for argumentation since 1996.

More Mooney on Scientists on Reasoning

Monday, April 25th, 2011

Chris Mooney comments on a new study by Hugo Mercier and Dan Sperber in this item: Is Reasoning Built for Winning Arguments, Rather Than Finding Truth? (Short answer: Quite possibly yes.)

Here’s the abstract of Mercier and Sperber’s paper (see Why Do Humans Reason? Arguments for an Argumentative Theory):

Reasoning is generally seen as a means to improve knowledge and make better decisions. However, much evidence shows that reasoning often leads to epistemic distortions and poor decisions. This suggests that the function of reasoning should be rethought. Our hypothesis is that the function of reasoning is argumentative. It is to devise and evaluate arguments intended to persuade. Reasoning so conceived is adaptive given the exceptional dependence of humans on communication and their vulnerability to misinformation. A wide range of evidence in the psychology of reasoning and decision making can be reinterpreted and better explained in the light of this hypothesis. Poor performance in standard reasoning tasks is explained by the lack of argumentative context. When the same problems are placed in a proper argumentative setting, people turn out to be skilled arguers. Skilled arguers, however, are not after the truth but after arguments supporting their views. This explains the notorious confirmation bias. This bias is apparent not only when people are actually arguing but also when they are reasoning proactively from the perspective of having to defend their opinions. Reasoning so motivated can distort evaluations and attitudes and allow erroneous beliefs to persist. Proactively used reasoning also favors decisions that are easy to justify but not necessarily better. In all these instances traditionally described as failures or flaws, reasoning does exactly what can be expected of an argumentative device: Look for arguments that support a given conclusion, and, ceteris paribus, favor conclusions for which arguments can be found.

Mercier explained it to Mooney this way:

If reasoning evolved so we can argue with others, then we should be biased in our search for arguments. In a discussion, I have little use for arguments that support your point of view or that rebut mine. Accordingly, reasoning should display a confirmation bias: it should be more likely to find arguments that support our point of view or rebut those that we oppose. Short (but emphatic) answer: it does, and very much so. The confirmation bias is one of the most robust and prevalent biases in reasoning. This is a very puzzling trait of reasoning if reasoning had a classical, Cartesian function of bettering our beliefs – especially as the confirmation bias is responsible for all sorts of mischief… Interestingly, the confirmation bias need not be a drag on a group’s ability to argue. To the extent that it is mostly the production, and not the evaluation of arguments that is biased – and that seems to be the case – then a group of people arguing should still be able to settle on the best answer, despite the confirmation bias… As a matter of fact, the confirmation bias can then even be considered a form of division of cognitive labor: instead of all group members having to laboriously go through the pros and cons of each option, if each member is biased towards one option, she will find the pros of that option, and the cons of the others – which is much easier – and the others will do their own bit.

Mooney observes:

I think this evolutionary perspective may explain one hell of a lot. Picture us around the campfire, arguing in a group about whether we need to move the camp before winter comes on, or stay in this location a little longer. Mercier and Sperber say we’re very good at that, and that the group will do better than a lone individual at making such a decision, thanks to the process of group reasoning, where everybody’s view gets interrogated by those with differing perspectives.

But individuals – or groups that are very like-minded – may go off the rails when using reasoning. The confirmation bias, which makes us so good at seeing evidence to support our views, also leads us to ignore contrary evidence. Motivated reasoning, which lets us quickly pull together the arguments and views that support what we already believe, makes us impervious to changing our minds. And groups where everyone agrees are known to become more extreme in their views after “deliberating” – this is the problem with much of the blogosphere.

It’s interesting to me how the legal system recreates and formalizes the roles of this (hypothetical) evolutionary collective decision-making mechanism. We have the interested parties advocating on each side. We have the impartial referee (the judge) whose job it is to make sure the advocates follow a set of rules designed to ensure fairness. And then we have the jury: A group of objective, disinterested observers who evaluate the arguments of the advocates, and, free of the distortions of their own confirmation bias, decide which side is right.

This plays out really interestingly in the age of the Internet. The Internet reduces the distance between me and confirmatory evidence to zero, so I’m able to amass what appears to me to be an unassailable edifice of fact in support of whatever position I’m arguing. It also makes it very convenient for me to assemble with others who share my views (since no matter how outré those views are, the Internet reduces the distance between me and my would-be cohorts to zero as well). With our evolutionary craving for societal connection satisfied, we crazies can hive off into our own little world untroubled by the arguments of those who disagree with us.

I also like how this applies to politics. The advocates on both sides are all convinced of their own rightness, regardless of reality. The “jury” ends up being those infamous independent voters, those who don’t really care about the big issues of the day, and wake up every few years just before election time to do a quick, fairly disengaged evaluation of the arguments of each side. It’s an interesting notion that those low-information voters actually could be by far the most important players in the process. Because of our evolutionary tendency to lean on the scales of our own judgment, we are fatally biased as decision-makers on any subject about which we care deeply. We desperately need those blithe, unconcerned voters in the middle to help us reach good collective decisions.

We need civility. We need reasonable, rational middlemen (and -women). We need venues where advocates from both sides can engage with each other and lay out their arguments, but where relatively disinterested observers are also present to evaluate those arguments. We need communities that we are connected to, and feel a part of, but that nevertheless span ideological boundaries.

That used to be the way all communities worked, because they were by definition local, defined by the limits of transportation and communication to consist of the people who lived and worked in a given location. They don’t necessarily work that way anymore, which I’ve tended to think of as a good thing. But in light of the implications of Mercier and Sperber’s research, I may need to reconsider.