
“When Underdogs Win, How Much Do Forecasters Lose?” – Annie’s Newsletter, March 23rd, 2018

WHEN UNDERDOGS WIN, HOW MUCH DO FORECASTERS LOSE?
A view from Nate Silver, and why predictions are treated differently in sports than in politics

We seem to have a fundamentally different – and more reasonable – view of the probabilistic nature of forecasts in sporting events than in politics. @NateSilver538, who is in the business of gathering data on forecasts for both, noted this on Monday on Twitter, after the first two rounds of the NCAA Men’s Basketball Tournament:

Later in Silver’s thread, he conjectured that this might have to do with “repeated exposure.” Everyone fills out their own bracket and realizes that favorites win most of the time but that upsets do happen.

I think that is likely part of it. Poker players get pretty comfortable with uncertainty through repeated exposure. When you play poker, you see underdogs win all the time, and that repetition is really helpful for understanding how often 10% or 20% or 30% happens.
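One way to get that repeated exposure without playing thousands of hands is to simulate it. Here’s a rough sketch in Python, with made-up numbers (a 32-game slate where every favorite is assumed to win 80% of the time, so every underdog is a 20% shot), just to show how routinely those 20% shots come through somewhere:

import random

# Illustrative assumptions only, not tournament data: a 32-game slate where
# every favorite is assumed to win 80% of the time.
def count_upsets(num_games=32, favorite_win_prob=0.80):
    """Count how many games the underdog wins in one simulated slate."""
    return sum(random.random() > favorite_win_prob for _ in range(num_games))

trials = 100_000
results = [count_upsets() for _ in range(trials)]
at_least_one = sum(1 for upsets in results if upsets >= 1)

print(f"Average upsets per slate: {sum(results) / trials:.1f}")        # about 6.4
print(f"Slates with at least one upset: {at_least_one / trials:.1%}")  # about 99.9%

Even when every single favorite is a big favorite, a handful of upsets per slate is the norm, which is exactly the intuition repeated exposure builds.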

I wonder if another contributing factor could be that the skill in basketball is easy to see. We can make judgments about which side is more skilled and reconcile that with the other side occasionally winning. In basketball, we don’t feel like much is hidden from view: we see it on the court.

In politics, however, it’s much murkier. In fact, we’re looking to forecasters to tell us the skill differential. We’re not as forgiving when the underdog wins: we don’t have the experience or understanding of what goes into such predictions, and we rely on what the forecasters have told us. That’s when we start resulting.

When I suggested this on Twitter, @BrentWheeler replied:


That’s an interesting observation, something I hadn’t thought of. Maybe people think political forecasters are predicting something that is determined completely by skill. (If you read through a bunch of replies to Silver’s tweet, you can see that there are definitely people who think there is no element of chance in the outcome of an election.)

I like that hypothesis.


CRITICAL FORECASTING MOMENT FOR “THE NEEDLE” AT @NYTIMES
“The needle” is safe and “the jitter” is back – for now

On March 13, the day of the Pennsylvania special election, @Nate_Cohn and @JshKatz of @NYTimes explained how the paper’s election-night forecasting device – “The Needle” – works, and made the prediction, with near certainty, that the needle would be “wrong” at some point (if by wrong you mean the candidate the needle favors eventually loses).

Similar to Nate Silver’s Twitter thread, the piece about the needle highlights the difficulties media organizations have communicating probabilistic forecasts to an audience that craves certainty.

This is particularly a problem in political forecasts because the skill-vs.-luck element isn’t as readily transparent as it is in sports, so the “right” answer is much more opaque.

The article explained the literal and figurative ups-and-downs of the needle’s short history:

• Reflecting expected turnout and patterns in votes not yet counted, the needle moved from Clinton to Trump on election night in November 2016. Although the needle accurately tracked when the election shifted, many saw the move as an equivocation, a recognition that the previous forecasts had been “wrong.”

• “All hail the needle!” – In the December AL Senate election, the @NYTimes needle had the eventual winner, Jones, at 62% to win (based on Democratic-heavy precincts yet to report) even while Moore was ahead by 8 points with 2/3 of precincts counted. (A rough sketch of how an estimate like that can arise appears after this list.)

• For Pennsylvania, the Times brought back one of the needle’s earlier features: “the jitter,” which quivers to reflect uncertainty. Because some readers disliked the jitter, the Times omitted it from the 2017 special elections; it’s back now, with readers given the option to turn it off. And this warning:

  

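For readers curious how a number like Jones’s 62% can coexist with an 8-point deficit, here is a rough, hypothetical sketch in Python of how projecting the uncounted vote can favor the trailing candidate. It is not the Times’ model; the vote counts, the 62% lean of the outstanding precincts, and the uncertainty figure are all invented for illustration:

import random

def simulate_final_margin(counted_dem, counted_rep, outstanding_votes,
                          expected_dem_share, uncertainty=0.05):
    """Project one plausible final margin (Dem minus Rep) by guessing, with
    noise, how the precincts not yet reported will break."""
    dem_share = min(max(random.gauss(expected_dem_share, uncertainty), 0.0), 1.0)
    dem_total = counted_dem + outstanding_votes * dem_share
    rep_total = counted_rep + outstanding_votes * (1 - dem_share)
    return dem_total - rep_total

# Hypothetical snapshot: the Republican leads by 8 points with 2/3 counted,
# but the uncounted third is expected to break roughly 62/38 for the Democrat.
counted_dem, counted_rep = 460_000, 540_000   # 8-point deficit in counted votes
outstanding = 500_000                         # roughly 1/3 of the vote outstanding
trials = 100_000
dem_wins = sum(simulate_final_margin(counted_dem, counted_rep, outstanding,
                                     expected_dem_share=0.62) > 0
               for _ in range(trials))
print(f"Trailing candidate wins in {dem_wins / trials:.0%} of simulations")

The exact percentage isn’t the point; the point is that a live forecast is a statement about the votes still to come, not just the votes already counted, which is why it can look “wrong” to someone reading the raw totals.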

SELF-DECEPTION, MADOFF, AND AN EXAMPLE OF DUNNING-KRUGER
An important cognitive bias, compounded by the trouble with working backward

The Dunning-Kruger effect, identified by social psychologists David Dunning and Justin Kruger, explains that one feature of expertise is that experts are more aware of what they don’t know than those with less competence:

@Qz published a great piece by @OliviaGoldhill titled “The person who’s best at lying to you is you” about self-deception and Dunning’s work. The article provides a useful background on Dunning-Kruger and the dangers of overestimating one’s abilities. It includes some information directly from Dunning, based on a presentation he is due to make in Hungary in July.

The presentation, as described in the article, includes an anecdote which – even if it’s a valid demonstration of self-deception – shows the difficulties in working backward from results to figure out whether a decision was a mistake, especially after an extreme outcome has occurred.

In 2008, a psychologist, Stephen Greenspan, published a summary of decades of research on how to avoid being gullible.

Two days later, he lost a third of his retirement savings. Why? He had invested with Bernie Madoff.

For sure there’s some serious irony in that.

This is a funny example of the gullibility expert getting conned. But is it clear that the disastrous result was because he made a disastrous decision? Or is this an example of resulting?

Greenspan bought into a hedge fund that was a feeder fund for Madoff (whose name he probably never heard, nor would it have mattered if he had). The person offering him the opportunity was himself an investor in the fund. The fund was a subsidiary of Mass Mutual Life. Lots of people invested money this way and, by every metric, they were investing in a professionally managed investment operation with an excellent track record.

What better due diligence could he have done than the thousands of other investors (including many professionals) who invested with Madoff? Should we expect Greenspan to have been savvier in his vetting of Madoff than the SEC, which missed the Ponzi scheme for decades?

What happened with Greenspan is a charming anecdote, but it’s unclear whether it is a good example. It’s easy to look at the situation after the fact and say Madoff investors should have looked deeper.

Especially an investor who is an expert on gullibility.

But that might just be resulting.


IS THERE A FREE SPEECH CRISIS ON COLLEGE CAMPUSES?
The debate continues, and where you can find it

Last week, I included an item about the debate over the current state of free speech on college campuses. It started as a lengthy Twitter thread by @JeffreyASachs and a preview of a coming response from @JonHaidt.

To follow up, here is where you can find the additional discussion by Sachs and Haidt, as well as @MattYglesias (who wrote an article expanding on the Sachs position), @SeanTStevens (who co-wrote the response with Haidt), and @RobbySoave (an editor at @Reason who responded to Yglesias in a series of tweets and an article on Reason.com):

• @Voxdotcom – Yglesias article, March 12 – “Everything we think about the political correctness debate is wrong.”

• @WashingtonPost @MonkeyCageBlog – Sachs article, March 16 – “The ‘campus free speech crisis’ is a myth. Here are the facts.”

• @HdxAcademy – Stevens & Haidt article, March 19 – “The Skeptics are Wrong: Attitudes About Free Speech On Campus are Changing.”

• @Reason – Soave article, March 19 – “Pundits Say There’s No Campus Free Speech ‘Crisis.’ Here’s Why They’re Wrong.” Here is Soave’s Twitter thread on the article.

 


THIS WEEK’S VISUAL ILLUSION
Rings appear to rotate

This week’s illusion, from @AkiyoshiKitaoka: