
“A Pair of Recent Articles on Poker’s Lessons for Decision Making Amid Uncertainty” & Much More

TALKS AT GOOGLE
My recent conversation in Mountain View

I had an amazing conversation with Jordan Thibodeau as part of “Talks at Google.” It was a deep dive on Thinking in Bets. I’m sharing the video of the conversation here, along with Jordan’s notes.


A PAIR OF RECENT ARTICLES ON POKER’S LESSONS FOR DECISION MAKING AMID UNCERTAINTY

Poker can teach us a lot about the decisions we make throughout our lives. Its challenges – uncertainty deriving from incomplete information, and luck that creates noisy outcomes – are a good model for the challenges we face in making most of our decisions.

Three recent examinations of decision-making in poker, primarily focused on the similarities between poker and investing, show how the lessons poker players learn about uncertainty can serve us throughout our decision-making lives.

Last week, @TheEconomist ran a short piece on the usefulness of a rules-based approach in domains where exact probabilities aren’t knowable at the time decisions must be made: poker and investing.

This fits with my article from last week’s newsletter about Daniel Kahneman’s growing interest in “noise” and how it creates wicked learning environments, where outcome quality and decision quality are not tightly tied together.

The looser the relationship between how things turn out and the decisions you make along the way, the more difficult it is to cull the appropriate lessons from experience.

Expertise becomes harder to develop.
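To make that concrete, here is a minimal simulation sketch (mine, in Python, with made-up numbers) of a wicked learning environment: the better decision wins only slightly more often, so short stretches of experience routinely reward the worse one.

```python
import random

random.seed(42)

def wins(p_win, n):
    """Count wins over n noisy trials; p_win is the true
    (hidden) quality of the decision."""
    return sum(random.random() < p_win for _ in range(n))

# Hypothetical numbers: the good decision wins 55% of the time,
# the bad one 45% -- a real edge, but a noisy one.
GOOD, BAD, TRIALS, SESSIONS = 0.55, 0.45, 20, 10_000

# How often does the bad decision look at least as good over a
# short sample of 20 outcomes?
bad_looks_fine = sum(
    wins(BAD, TRIALS) >= wins(GOOD, TRIALS) for _ in range(SESSIONS)
)
print(f"{bad_looks_fine / SESSIONS:.0%} of short sessions reward the worse decision")
```

With this much noise, roughly a third of short sessions make the worse decision look at least as good as the better one, which is exactly why experience alone teaches so poorly here.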

In poker, you don’t know the probabilities in advance because of incomplete information. And you usually don’t even get that information afterward.

The Economist article pointed out how, in both investing and poker, it’s easy to be tricked into believing that, because you can run theoretical calculations, you can perfectly calculate the probabilities in advance.
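As a concrete illustration (my sketch, using standard card-counting arithmetic, not anything from the Economist piece): you can compute the odds of completing a flush draw precisely, but only under the assumption that every unseen card is equally likely – which is exactly the complete information poker denies you.

```python
from fractions import Fraction

# Flush draw on the flop: we see 5 cards (2 in hand, 3 on the
# board), so 9 cards of our suit remain among the 47 unseen.
outs, unseen = 9, 47

# Exact probability of missing on both the turn and the river,
# treating all unseen cards as equally likely.
miss = Fraction(unseen - outs, unseen) * \
       Fraction(unseen - 1 - outs, unseen - 1)
print(f"P(flush by the river) = {float(1 - miss):.1%}")  # ~35.0%
```

The arithmetic is exact, but the premise isn’t: the moment an opponent bets, the unseen cards are no longer equally likely, and the tidy 35% quietly stops being the true probability.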

@MarketWatch also posted an item, written by @Kari_Paul, connecting decisions in poker and investing.

The first part of the article cites a recent paper by Seth Frey, Dominic Albino, and Paul Williams analyzing the differences in how successful and unsuccessful poker players deal with uncertainty and incomplete information.

(Disclosure: the latter part of the article quotes me on the importance of recognizing and embracing uncertainty.)

You should read the MarketWatch article and the Frey-Albino-Williams paper for yourself, but here are my key takeaways:

  • People are pretty good at knowing their own cards, but they’re not great at recognizing what they don’t know about other people’s cards. Expert poker players are simply better at this: they understand the interplay between their own holdings, what they suspect their opponents hold, and how they expect opponents to act on those holdings.
  • Uncertainty affects everybody in poker. The best players don’t somehow create certainty in the situation; they take advantage of being more comfortable with uncertainty than their opponents are.
  • Expert poker players are better at taking the perspective of their opponent, understanding how others at the table might view them. This means poker players are better at incorporating different viewpoints into their mental models.

COUNTERFACTUALS IN EVALUATING HISTORY (AND DECISION MAKING)
Why we need to think about the decisions we didn’t make 

Fifty years ago, in June 1968, Robert Kennedy was assassinated the night he won the California Democratic Primary.

Jeff Greenfield (@Greenfield64), at the time a young speechwriter for RFK who went on to report on American politics and media for fifty years, recently shared his recollections of that night and its political ramifications in an interview with @CNN and an article for @TheDailyBeast.

Greenfield’s interview and article are excellent expressions of counterfactual thinking, the consideration of a past (or multiple pasts) that didn’t happen.

The way history has unfolded is probabilistic and far from inevitable, yet the popular view of history – that is, the lessons we take from history – tends to focus on the one way things actually turned out, plotting the moves in a chess-like sequence as if history marches toward an inevitable conclusion.

RFK’s assassination offers a good example of this thinking. It goes something like this:

  • RFK was assassinated at the height of his campaign’s success.
  • Because of his death, a chaotic, deeply divided Democratic Party chose Hubert Humphrey as its nominee.
  • Because of that, the fractures in the party didn’t get repaired and Humphrey lost to Nixon by a narrow margin.
  • Because of that, a damaged opposition party made possible Nixon’s reelection and abuses of power. Even after Nixon’s resignation, the Democrats failed to hold things together, and after one term in the White House (Carter), the Republicans recaptured the presidency for twelve more years.

(The same kind of reasoning pinned World War I – and, due to its resolution, World War II – on the June 1914 assassination of Austro-Hungarian Archduke Franz Ferdinand.)

Greenfield has the perspective to question a narrative of twenty years of American politics by pointing to all the obstacles RFK would have faced had he not been murdered.

There were numerous, specific reasons why he might not have gotten the Democratic nomination, might not have run a more successful campaign than Humphrey did, or might not have healed the deep conflicts within the Democratic Party.

History can offer us rich lessons in how to use counterfactuals as a tool to improve decision making.

The decisions we don’t make and the things that don’t happen – the paths we chose not to take – can teach us just as much about decision making as the choices we actually make, if not more.

But it is very difficult for us to examine that negative space.

If potential decisions (and the futures that might result from those decisions) are branches on a tree, once we make a choice we systematically cut off those other branches, those things we didn’t choose to do.

We cognitively close out those positions and, once closed, we rarely go back to explore and poke at the other branches to examine what would have happened had we taken a different path.

An exception, of course, is when there is no way to avoid examining the unchosen branch of the tree.

When Monty Hall (see last week’s newsletter if the reference is unfamiliar) shows us what’s behind the door that we didn’t choose, we are forced to look at that branch of the tree.

And it’s painful to see the car behind the door we didn’t choose.
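If the Monty Hall result still feels wrong, here’s a quick simulation sketch (mine, in Python) you can run yourself: being shown the unchosen branch is precisely what makes switching win two times out of three.

```python
import random

random.seed(0)

def play(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither our pick nor the car.
    opened = random.choice([d for d in doors if d not in (pick, car)])
    if switch:
        # Move to the one remaining closed door.
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

N = 100_000
print(f"stay wins:   {sum(play(False) for _ in range(N)) / N:.1%}")  # ~33.3%
print(f"switch wins: {sum(play(True) for _ in range(N)) / N:.1%}")   # ~66.7%
```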

When we decide not to buy shares in Google’s IPO because we don’t think it’s a good investment, we see Google’s success in our face every single day and it’s painful to realize that we missed out on making a fortune.

This gives us a clue as to why we often avoid exploration of the unchosen path—the regret we will feel is so painful that any upside we might get from improving our decision making is dwarfed by the pain we experience when we feel like we made the wrong choice.

What we don’t know can’t hurt us.

Except that it really can because, without exploring the routes we didn’t take, our decision making will suffer and we will extract poor lessons from our experience.

There is so much to be learned by tracking the decisions that we don’t make. Don’t be afraid of the exploration.

Yes, sometimes you will find out that you missed out on a Google. But the payoff in the long run will be worth the regret you feel in the moment.

Greenfield was speaking and writing from a historical perspective. If you want to hear a great historian discussing counterfactuals, I highly recommend Niall Ferguson’s (@NFergus) interview with @SamHarrisOrg on Harris’s podcast from February 2018, “Networks, Power, and Chaos.” Ferguson explains the importance of counterfactuals in decision analysis.


NEW EVIDENCE INVALIDATING THE FAMOUS STANFORD PRISON EXPERIMENT
How did the findings survive a failure to replicate and long-standing criticism of unscientific methods?
How do we dislodge such a powerfully durable belief, even with a retraction?

In 1971, the Stanford Prison Experiment became one of the most famous psychological studies ever conducted, a chilling demonstration of how easily the average person could commit horrific atrocities.

Professor Philip Zimbardo recruited students as paid volunteers for a two-week experiment, assigning them roles as “prisoners” and “guards” in a simulated prison in the basement of the school’s psychology building.

After just six days, Zimbardo shut the study down.

In multiple instances, he documented student-guards abusing their authority, even causing emotional breakdowns in the student-prisoners.

The study instantly became a sensation in popular culture and in psychology classrooms.

In the last forty-seven years, it has become a permanent part of the discussion about what Hannah Arendt termed “the banality of evil”: an explanation of prison riots, police abuses, the Holocaust, the My Lai Massacre, Abu Ghraib, and numerous war crimes and atrocities committed by people in authority.

The key takeaway: ordinary people can commit extraordinary atrocities if put in the right circumstances.

It turns out that the study was a lie, a lie that has persisted, in part, because the narrative is so appealing.

As @BenZBlum explained in his astonishing @Medium article, “The Lifespan of a Lie,” the study’s influence weathered immediate and serious criticism from the likes of Leon Festinger and Erich Fromm, along with recurring factual questions about the authenticity of the behavior.

Blum documents this history as background to his recent interviews with the study’s subjects, and the piece is well worth the read.

In 2001, psychologist @AlexanderHaslam and colleagues attempted to replicate Zimbardo’s results in the BBC Prison Study.

The findings were significantly at odds with Zimbardo’s. Yet despite Haslam’s efforts in the seventeen years since, the Stanford Prison Experiment has remained a go-to in psychology classes and in explanations of the conditions that make horrific, abusive behavior possible.

The Stanford Prison Experiment became so entrenched in the cultural zeitgeist that, despite continuing questions about the results, it was made into a film of the same title in 2015.

At that year’s Sundance Film Festival, its director won the Alfred P. Sloan Feature Film Prize, and its screenwriter won the Waldo Salt Screenwriting Award. (Quoted in IMDb: “A screenplay that brings to life a true story that is as powerful and unfortunately relevant even almost moreso today than it was 44 years ago.”)

It turns out there is good reason the study doesn’t replicate. The results were a lie.

Blum’s new evidence includes recent interviews with subjects about how the chief result of the experiment – how easily people can be turned by circumstances into monsters – was manipulated or manufactured.

Among Blum’s examples:

@JayVanBavel, a professor at NYU, got a chance to listen to an audio recording of Jaffe (the “warden”) coaching the guards. (The recording was part of a talk by Haslam.)

Van Bavel then posted a 12-tweet thread about the recording and the experiment that is well worth the read.



How do you retract something believed by millions? How long does it take – if ever – to dislodge such a belief?

Why did the study achieve such a lofty status despite serious, recurring questions about its authenticity and then a failure to replicate the results?

A partial answer to all this is that everybody wanted to believe.

The original study is so sexy. Our friends and neighbors – and we ourselves – are just a nudge and a flimsy pretense of authority away from committing atrocities.

It fits a cool narrative that’s at once terrifying and strangely reassuring.

And that’s a recipe for bias landing a big blow on science.


CELEBRATING THE LIFE OF JOHN NASH

Earlier this month, June 13, marked the 90th anniversary of the birth of John Nash. Nash, a brilliant mathematician, was a pioneer in game theory, creator of the “Nash Equilibrium,” and a Nobel Prize winner. He is also known as the subject of A Beautiful Mind, a bestselling biography and Academy-Award-winning film.
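For readers meeting the concept for the first time, here’s a tiny illustration (mine, using the classic prisoner’s dilemma payoffs, in Python): a Nash Equilibrium is a combination of strategies where no player can improve their own payoff by deviating alone.

```python
from itertools import product

C, D = "cooperate", "defect"

# Classic prisoner's dilemma payoffs (higher is better):
# (row player's payoff, column player's payoff).
payoffs = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(p1, p2):
    """True if neither player gains by unilaterally switching."""
    p1_best = all(payoffs[(p1, p2)][0] >= payoffs[(alt, p2)][0] for alt in (C, D))
    p2_best = all(payoffs[(p1, p2)][1] >= payoffs[(p1, alt)][1] for alt in (C, D))
    return p1_best and p2_best

for p1, p2 in product((C, D), repeat=2):
    if is_nash(p1, p2):
        print("Nash equilibrium:", p1, p2)  # prints only: defect defect
```

Mutual defection is the only equilibrium here even though mutual cooperation pays both players more, which is the kind of insight that made Nash’s idea so consequential.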

Following Nash’s death, OpenCulture.com posted an item noting that his dissertation (which introduced what became known as the Nash Equilibrium) was just 26 pages long and cited just two sources in the bibliography – one of them a prior article by Nash himself.


I thought that neatly summed up the magnitude of his genius and contributions.


THIS WEEK’S ILLUSION
Lavender Field Illusion, thanks to @AkiyoshiKitaoka

Even though it doesn’t look that way, all the boxes are square and all the rows and columns are perfectly straight.