Say you ask a group of people to take a position on a hot topic, like whether Bitcoin is a scam. Some will say yes and some will say no. What would it take for them to change their minds?

Perhaps you show them the reasoning of someone with the opposing (reasonable) viewpoint. Let’s call this the “counterargument”. Will exposure to the counterargument:

  • A: Increase the likelihood they change their minds
  • B: Decrease the likelihood they change their minds
  • C: Have no effect

By and large, the answer is B. Exposure to the opposing viewpoint will inspire even more confidence in their original position, making them less likely than before to change their minds.

This result isn’t particularly surprising. After all, each participant has already independently formed their point of view on the topic, and we like to find evidence for our views, not against them. Something compelling to one person could be a turn-off for another. Such is our pluralistic world.

But what if the counterargument they are shown is actually their own original argument? Same experiment as before, except when you go back in and say, here’s the argument from somebody with the opposing viewpoint, you trick them and show them their own original position verbatim.

That person should recognize that it’s their position and agree with it, right?

Turns out, that only happens for about half of them. The other half start forming arguments against it, just as if it were actually a counterargument to their original position. The mere (false) perception of an argument against their own position inspires them to form a contradictory stance on the spot.

These behaviors were documented by the French cognitive scientists Hugo Mercier and Dan Sperber. In their paper, they found that reasoning only sometimes leads to rational beliefs and decisions:

Some of the evidence reviewed here shows not only that reasoning falls short of delivering rational beliefs and rational decisions reliably, but also that, in a variety of cases, it may even be detrimental to rationality. Reasoning can lead to poor outcomes not because humans are bad at it but because they systematically look for arguments to justify their beliefs or their actions.

These shortcomings of reason are the foundation of confirmation bias, a concept popularized by growing interest in the “filter bubbles” produced by social networks. Confirmation bias says that people interpret facts with the subconscious intent to support their existing beliefs. Two people with opposing viewpoints can each look at the same set of facts and conclude it supports their own view.

Confirmation bias is often discussed as a flaw in human reasoning, but the authors argue that it’s actually the opposite: when reasoning serves its intended purpose, confirmation bias naturally occurs. Reasoning, they argue, is for arguing.

The main function of reasoning is argumentative: Reasoning has evolved and persisted mainly because it makes human communication more effective and advantageous.

They call this The Argumentative Theory of Reasoning. In short, they contend that, evolutionarily, the returns on being perceived as right outweigh the returns on being actually right. As this New Yorker piece covering the topic says so plainly, “for any individual, freeloading is always the best course of action.” If I can convince Bob to go hunt while I stay back and, umm, plan for the next hunt, then I get to eat delicious meat without taking on any of the risk.

In sum, facts don’t necessarily change minds, reasoning is for arguing, and being perceived as right is better than being actually right.


While crypto communities have wilted and withered a bit in this darker season, you’ll still find a lot of arguing in them. Why?

The very cynical viewpoint is that everybody is just seeking personal profit, and cryptonetworks tend to yield the greatest returns to the earliest adopters. This can be stated in more noble terms, like aligning incentives amongst stakeholders and stuff, but at its very core there’s this property: early adopters are incentivized to encourage others to join because it makes the network more valuable.

As patio11 stated when he steel-manned the bull case for Bitcoin:

By using a distributed base of tech-savvy evangelists who personally profit from advocating for cryptocurrency everywhere, we flipped the standard tech innovation script from “Nobody wants to be first mover” to “Every bank wants to be the first on the new network.”

The less cynical viewpoint is that these are True Believer missionaries building mass movements for what they believe will substantially improve society.

For the purposes of this post, the difference between the simple profit-seeker and the True Believer is one of degree, not kind: they both want to convince others to join their gang.

Through the lens of The Argumentative Theory of Reasoning, it’s less important that their arguments are factual and more important that they are convincing. Remember, being perceived as right is more important than being actually right. So what we’re left with is a bunch of opposing gangs lobbing often-spurious arguments back and forth, hoping some bystander will see one of the acerbic tweets and think, hmm, that guy is really on to something.

I’ll leave you with two thoughts.

First: be wary of all the noise. And if you want to be actually right, you need to audit your incentives. Because where there are incentives, there is the incentive to construct arguments supporting whatever position serves them.

And second: if you’re trying to attract users (real, mainstream, salt-of-the-earth users), the arguments you and your friends find convincing might not appeal to them. Your arguments likely confirm your current position, which these ethos-agnostic users do not hold.
