Saturday, July 12, 2014

Fw: Farnam Street: The Uses Of Being Wrong

The Uses Of Being Wrong

Posted: 11 Jul 2014 05:35 AM PDT

Confessions of wrongness are the exception, not the rule.

Daniel Drezner, a professor of international politics at the Fletcher School of Law and Diplomacy at Tufts University, pointing to the difference between being wrong in a prediction and making an error, writes:

Error, even if committed unknowingly, suggests sloppiness. That carries a more serious stigma than making a prediction that fails to come true.

The social sciences, unlike the physical and natural sciences, suffer from a shortage of high-quality data on which to base predictions.

How Does Science Advance?

A theory may be scientific even if there is not a shred of evidence in its favour, and it may be pseudoscientific even if all the available evidence is in its favour. That is, the scientific or non-scientific character of a theory can be determined independently of the facts. A theory is ‘scientific’ if one is prepared to specify in advance a crucial experiment (or observation) which can falsify it, and it is pseudoscientific if one refuses to specify such a ‘potential falsifier’. But if so, we do not demarcate scientific theories from pseudoscientific ones, but rather scientific method from non-scientific method.

Karl Popper viewed the progression of science as falsification: science advances by eliminating what does not hold up. But Popper’s falsifiability criterion ignores the tenacity of scientific theories, even in the face of disconfirming evidence. Scientists, like the rest of us, do not abandon a theory merely because the evidence contradicts it.

The wake of science is littered with discussions of anomalies, not refutations.

Another theory of scientific advancement, proposed by Thomas Kuhn, a distinguished American philosopher of science, holds that science proceeds through a series of revolutions, each resembling an almost religious conversion.

Imre Lakatos, a Hungarian philosopher of mathematics and science, wrote:

(The) history of science, of course, is full of accounts of how crucial experiments allegedly killed theories. But all such accounts are fabricated long after the theory has been abandoned.

Lakatos bridged the gap between Popper and Kuhn by addressing what each failed to solve:

The hallmark of empirical progress is not trivial verifications: Popper is right that there are millions of them. It is no success for Newtonian theory that stones, when dropped, fall towards the earth, no matter how often this is repeated. But, so-called ‘refutations’ are not the hallmark of empirical failure, as Popper has preached, since all programmes grow in a permanent ocean of anomalies. What really counts are dramatic, unexpected, stunning predictions: a few of them are enough to tilt the balance; where theory lags behind the facts, we are dealing with miserable degenerating research programmes.

Now, how do scientific revolutions come about? If we have two rival research programmes, and one is progressing while the other is degenerating, scientists tend to join the progressive programme. This is the rationale of scientific revolutions. But while it is a matter of intellectual honesty to keep the record public, it is not dishonest to stick to a degenerating programme and try to turn it into a progressive one.

As opposed to Popper the methodology of scientific research programmes does not offer instant rationality. One must treat budding programmes leniently: programmes may take decades before they get off the ground and become empirically progressive. Criticism is not a Popperian quick kill, by refutation. Important criticism is always constructive: there is no refutation without a better theory. Kuhn is wrong in thinking that scientific revolutions are sudden, irrational changes in vision. [The history of science refutes both Popper and Kuhn: ] On close inspection both Popperian crucial experiments and Kuhnian revolutions turn out to be myths: what normally happens is that progressive research programmes replace degenerating ones.

***

Much of the falsification effort is devoted to proving others wrong, not ourselves. “It's rare for academics,” Drezner writes, “to publicly disavow their own theories and hypotheses.”

Indeed, a common lament in the social sciences is that negative findings—i.e., empirical tests that fail to support an author's initial hypothesis—are never published.

Why is it so hard for us to see when we are wrong?

It is not necessarily concern for one's reputation. Even predictions that turn out to be wrong can be intellectually profitable—all social scientists love a good straw-man argument to pummel in a literature review. Bold theories get cited a lot, regardless of whether they are right.

Part of the reason is simple psychology: we all like being right much more than being wrong.

As Kathryn Schulz observes in Being Wrong, “the thrill of being right is undeniable, universal, and (perhaps most oddly) almost entirely undiscriminating…. It's more important to bet on the right foreign policy than the right racehorse, but we are perfectly capable of gloating over either one.”

As we create arguments and gather supporting evidence (while discarding evidence that does not fit), we increasingly persuade ourselves that we are right. We gain confidence and try to sway the opinions of others.

There are benefits to being wrong.

Schulz argues in Being Wrong that “the capacity to err is crucial to human cognition. Far from being a moral flaw, it is inextricable from some of our most humane and honorable qualities: empathy, optimism, imagination, conviction, and courage. And far from being a mark of indifference or intolerance, wrongness is a vital part of how we learn and change.”

Drezner argues that some of the tools of the information age give us hope that we might become increasingly likely to admit being wrong.

Blogging and tweeting encourages the airing of contingent and tentative arguments as events play out in real time. As a result, far less stigma attaches to admitting that one got it wrong in a blog post than in peer-reviewed research. Indeed, there appears to be almost no professional penalty for being wrong in the realm of political punditry. Regardless of how often pundits make mistakes in their predictions, they are invited back again to pontificate more.

As someone who has blogged for more than a decade, I've been wrong an awful lot, and I've grown somewhat more comfortable with the feeling. I don't want to make mistakes, of course. But if I tweet or blog my half-formed supposition, and it then turns out to be wrong, I get more intrigued about why I was wrong. That kind of empirical and theoretical investigation seems more interesting than doubling down on my initial opinion. Younger scholars, weaned on the Internet, more comfortable with the push and pull of debate on social media, may well feel similarly.

Still curious? Daniel W. Drezner is the author of The System Worked: How the World Stopped Another Great Depression.

