#I can prove the likelihood of that scenario with JUST AS MUCH narrative information as we currently have
triflesandparsnips · 2 years ago
Note
I always saw Izzy's "still it's a nice room, lots of possibilities" bit as Izzy thinking about what he'd do with the space once Stede is dead (now that he's gotten Ed to recommit to the plan) and trying to be a dramatic, menacing villain about it but failing because he's not very good at being a dramatic, menacing villain.
Don't get me wrong-- there are lots of things it could be, and that's definitely one of them.
But he could also be:
- off his stride because he didn't intend to be in there before Stede
- off his stride because he did intend to be in there before Stede but didn't think Stede would, like, not even notice someone else was in the room already
- off his stride because he didn't consider any kind of opening line to actually catch Stede's attention, what the fuck Bonnet
- off his stride because maybe he did consider an opening line and it was a fucking neutral inquiry about the day's activity like a normal person and he's in the cabin because Stede is weird and lets all sorts of people in his cabin pretty regularly so one of them was rude first and it wasn't Izzy
- or maybe this was entirely intentional on his part and this is him trying to transition naturally toward getting Stede to go through with the fuckery and apparently his idea of "natural" conversation is to compliment a room's general size and scope
- or failing all that maybe it's not a natural transition for him but he's determined to be an upper-class version of polite to actively encourage Stede to do the fuckery (which we all know to be villainous but Stede doesn't)-- and it's fascinating to see that it's not until Stede tells Izzy to piss off that Izzy's mannerisms break down and he drops the politeness in favor of calling Stede a little shit
It's the thing where there are SO MANY possibilities. And until we know what's really up (assuming we ever know), they could all be true! SCHRODINGER'S CHARACTERIZATIONS. Delicious.
14 notes
theconservativebrief · 6 years ago
Link
I’ve been a hypochondriac for much of my life.
When I was 13, I read an article about a girl my age who had recently lost her hair to alopecia. For the next six months, my teenage self obsessively counted every hair that collected in my hairbrush.
A few years later, as a freshman at university, a three-day headache led me to call home in tears, convinced I had a brain tumor. (I did not.)
In 2008, my 24 years of neuroticism reached their dizzying peak. I had gone wakeboarding on a warm lake during a trip to Las Vegas, and I woke up a few days later feeling a little under the weather. One three-hour Google spiral later, I was in a full-blown panic.
You see, there is an extremely rare but nevertheless horrifying amoeba called Naegleria fowleri that occasionally appears in warm freshwater lakes in the southern states and, if said lake water gets into your sinuses through a mistimed splash, the amoeba can climb up your olfactory nerve, reproduce, and quite literally eat your brain. Even though I understood the meaning of the words “extremely rare,” the narrative was just too perfect — neurotic hypochondriac who always worried needlessly about rare terrible diseases succumbs to rare terrible disease.
Of course, I was wrong again. The only thing eating my brain was my own irrational anxiety, and after a few sleepless nights, I felt sheepishly well enough to rejoin the Vegas revelry.
Fast-forward to today, and I’m pleased to say that my hypochondria — and my reasoning skills in general — have significantly improved. A large part of that was my choice of profession; I began playing professional poker shortly after the amoeba episode, and 10 years later, the game has trained my mind to better handle uncertainty.
But the most powerful antidote to my irrationality came from a surprising source: an 18th-century English priest named Reverend Thomas Bayes. His pioneering work in statistics uncovered an immensely powerful mental tool that, if properly used, can drastically improve the way we reason about the world.
Our modern world is notoriously unpredictable and complex. Should I buy bitcoin? Is that news headline reliable? Is my crush actually into me, or just stringing me along?
Whether it’s our finances or our careers or our love lives, we have to tackle tricky decisions on a daily basis. Additionally, our smartphones bombard us around the clock with a never-ending stream of news and information. Some of that information is reliable, some is noise, and some is intentionally created to mislead. So how do we decide what to believe?
Reverend Bayes made enormous strides toward solving this age-old problem. A statistician by training, he did work on the nature of probability and chance that laid the groundwork for what is now known as Bayes’s theorem. While its formal definition appears as a rather intimidating mathematical equation, it essentially boils down to this:
[Illustration of Bayes’s theorem — Javier Zarracina/Vox]
In other words, whenever we receive a new piece of evidence, how much should it affect what we currently believe to be true? Does the information support that belief, dispute it, or not affect it at all?
This line of questioning is known as Bayesian reasoning, and chances are, you have been using this method of belief-building all your life without realizing it has a formal name.
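For readers who want to see the equation itself, here is a minimal statement of Bayes’s theorem and its equivalent odds form — the odds form is the one that maps onto the “prior odds” and “posterior odds” language used below. (The notation H for the belief and E for the evidence is shorthand added here for illustration, not the article’s own.)

```latex
% Bayes's theorem, probability form
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)}

% Equivalent odds form -- the one that matches the "prior odds" /
% "posterior odds" language used in the examples below
\underbrace{\frac{P(H \mid E)}{P(\neg H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{\text{likelihood ratio}}
\times
\underbrace{\frac{P(H)}{P(\neg H)}}_{\text{prior odds}}
```

In words: take how likely you thought the belief H was before, scale it by how much better H explains the new evidence E than its negation does, and the result is your updated level of belief.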
For example, imagine a co-worker comes to you with a shocking piece of news: He suspects that your boss has been siphoning money from the company. You’ve always respected your boss, and if you had been asked to estimate the likelihood of him being a thief prior to hearing any gossip (the “prior odds”), you would think it extremely unlikely. Meanwhile, your colleague has been known to exaggerate and dramatize situations, especially about people in managerial positions. As such, their word alone carries little evidential weight — and you don’t take their accusation too seriously. Statistically speaking, your “posterior odds” stay pretty much the same.
Now, take the same scenario but instead of verbal information, your colleague produces a paper trail of company money going into a bank account in your boss’s name. In this case, the weight of evidence against him is much stronger, and so the likelihood of “boss = thief” should increase proportionally. The stronger the evidence, the stronger your level of belief. And if the evidence is compelling enough, it should make you change your mind about him entirely.
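To make the arithmetic concrete, here is a small Python sketch of that odds update. The prior odds and likelihood ratios are invented placeholders chosen to mirror the story, not figures from the article:

```python
# Illustrative only: the prior odds and likelihood ratios below are made-up
# numbers chosen to mirror the boss-embezzlement story, not real figures.

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes's rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_probability(odds: float) -> float:
    return odds / (1 + odds)

prior_odds = 1 / 1000          # you start out thinking "boss is a thief" is very unlikely

# Scenario 1: gossip from a colleague known to exaggerate.
# The gossip is barely more likely if the boss is guilty than if he isn't.
gossip_lr = 1.5
posterior_gossip = update_odds(prior_odds, gossip_lr)

# Scenario 2: a paper trail of company money flowing into the boss's account.
# That evidence is far more likely if he really is a thief.
paper_trail_lr = 500
posterior_paper = update_odds(prior_odds, paper_trail_lr)

print(f"After gossip:      P(thief) ≈ {odds_to_probability(posterior_gossip):.4f}")
print(f"After paper trail: P(thief) ≈ {odds_to_probability(posterior_paper):.3f}")
```

With the weak gossip the posterior barely moves off the tiny prior; with the paper trail it jumps to roughly a one-in-three chance — the same rule leaves you unmoved in one case and forces a rethink in the other.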
If this feels obvious and intuitive, it should. The human brain is, to some extent, a natural Bayesian reasoning machine, thanks to a process known as predictive processing. The trouble is, almost all our intuitions evolved in simpler times, for savannah-type survival situations. The complexity of modern-day decisions can sometimes cause our Bayesian reasoning to malfunction, especially when something we really care about is on the line.
What if, instead of respecting your boss, you’re annoyed at him because you feel he was unfairly promoted to his current position instead of you? Objectively speaking, your “prior” odds that he is an actual account-skimming thief should be almost as low as in the previous example.
However, because you dislike him for another reason, you now have extra motivation to believe the gossip from your co-worker. This can result in you excessively shifting your “posterior” likelihood despite the lack of hard evidence … and perhaps even doing or saying something unwise.
The phenomenon of being swayed from accurate belief-building by our personal desires or emotions is known as motivated reasoning, and it affects every one of us, no matter how rational we think we are. I’ve lost count of how many times I’ve made an objectively stupid play at the poker table thanks to an excessive emotional attachment to a particular outcome — from chasing lost chips with reckless bluffs after an unlucky run of cards, to foolhardy heroics against opponents who’ve gotten under my skin.
When we identify too strongly with a deeply held belief, idea, or outcome, a plethora of cognitive biases can rear their ugly heads. Take confirmation bias, for example. This is our inclination to eagerly accept any information that confirms our opinion, and undervalue anything that contradicts it. It’s remarkably easy to spot in other people (especially those you don’t agree with politically), but extremely hard to spot in ourselves because the biasing happens unconsciously. But it’s always there.
And this kind of Bayesian error can have very real and tragic consequences: Criminal cases where jurors unconsciously ignore exonerating evidence and send an innocent person to jail because of a bad experience with someone of the defendant’s demographic. The growing inability to hear alternative arguments in good faith from other parts of the political spectrum. Conspiracy theorists swallowing any unconventional belief they can get their hands on until they think the Earth is flat, or movie stars are lizards, or that a random pizza shop is the base for a sex slavery ring because of a comment thread they read on the internet.
So how do we overcome this deeply ingrained part of human nature? How can we become better Bayesians?
For motivated reasoning, the solution is somewhat obvious: self-awareness.
While confirmation bias is usually invisible to us in the moment, its physiological triggers are more detectable. Is there someone who makes your jaw clench and blood boil the moment they’re mentioned? A societal or religious belief you hold so dear that you think anyone is ridiculous to even want to discuss it?
We all have some deeply held belief that immediately puts us on the defensive. Defensiveness doesn’t mean that belief is actually incorrect. But it does mean we’re vulnerable to bad reasoning around it. And if you can learn to identify the emotional warning signs in yourself, you stand a better chance of evaluating the other side’s evidence or arguments more objectively.
With some Bayesian errors, however, the best remedy is hard data. This was certainly the case with my battle against hypochondria. Examining the numerical probabilities of the ailments I feared meant I could digest the risks the same way I would approach a poker game.
Sick of my neuroticism, a friend looked up the approximate odds that someone of my age, sex, and medical history would have contracted the deadly bug after swimming in that particular lake. “Liv, it’s significantly less likely than you making a royal flush twice in a row,” he said. “You’ve played thousands of hands and that has never happened to you, or anyone you know. Stop worrying about the fucking amoeba.”
If I wanted to go one step further, I could have plugged those prior odds into Bayes’s formula and multiplied it by the evidential strength of my headache-y symptoms. To do this mathematically, I’d consider the counter case: How likely are my symptoms without having the amoeba? (Answer: very likely!) As headaches happen to people all the time, they provide very weak evidence of an amoebic infection, and so the resulting posterior odds remain virtually unchanged.
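A back-of-the-envelope version of that calculation, in the same Python style, might look like this; every number is an assumed order-of-magnitude placeholder, not a real epidemiological figure:

```python
# A rough sketch of the calculation described above; every number here is an
# assumed, order-of-magnitude placeholder, not real epidemiological data.

prior_odds = 1 / 10_000_000     # assumed: infection after one swim is roughly one-in-millions

p_headache_if_infected = 0.99   # infection almost always produces symptoms
p_headache_if_not = 0.30        # assumed: plenty of healthy people get a headache that week

likelihood_ratio = p_headache_if_infected / p_headache_if_not   # ≈ 3.3

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"Posterior probability of infection ≈ {posterior_prob:.8f}")   # still ~3 in 10 million
```

Even though a headache is a few times more likely with an infection than without one, that small likelihood ratio is nowhere near enough to overcome an astronomically small prior, so the posterior stays vanishingly small.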
And this is a crucial lesson. When dealing with statistics, it is so easy to focus on fear-mongering headlines, like “thousands of people died from terrorism last year,” and forget about the other equally relevant part of the equation: the number of people last year who didn’t die from it.
Occasionally, “red-pill” or conspiracy enthusiasts fall into a similar statistical trap. On its face, questioning mainstream belief is a good scientific practice — it can uncover injustice and prevent systemic mistakes from repeating in society. But for some, proving the mainstream wrong becomes an all-consuming mission. And this is especially dangerous in the internet era, where a Google search will always spit out something that fits a chosen narrative. Bayes’s rule teaches you that extraordinary claims require extraordinary evidence.
And yet for some people, the less likely an explanation, the more likely they are to believe it. Take flat-Earth believers. Their claim rests on the idea that all the pilots, astronomers, geologists, physicists, and GPS engineers in the world are intentionally coordinating to mislead the public about the shape of the planet. From a prior odds perspective, the likelihood of a plot so enormous and intricate coming together out of all other conceivable possibilities is vanishingly small. But bizarrely, any demonstration of counterevidence, no matter how strong, just seems to cement their worldview further.
If there is one thing Bayes can teach us to be certain of, however, it is that there is no such thing as absolute certainty of belief. Like a spaceship trying to reach the speed of light, a posterior likelihood can only ever approach 100 percent (or 0 percent). It can never exactly reach it.
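Here is a toy illustration of that asymptote (the prior odds, the likelihood ratio, and the number of updates are arbitrary choices; exact fractions are used so that floating-point rounding doesn’t fake certainty):

```python
from fractions import Fraction

# As long as the prior isn't already 0 or 1 and each piece of evidence has a
# finite likelihood ratio, the posterior probability gets ever closer to 1
# but never equals it.

odds = Fraction(1, 100)             # assumed prior odds of 1:100 against the belief
likelihood_ratio = Fraction(10, 1)  # each new piece of evidence is 10x likelier if the belief is true

for step in range(1, 6):
    odds *= likelihood_ratio
    probability = odds / (1 + odds)
    print(f"after evidence #{step}: P = {float(probability):.10f}  (exactly {probability})")

# The printed fractions never simplify to 1/1 -- certainty is an asymptote, not a destination.
```

No matter how many pieces of strong-but-finite evidence pile up, the fraction never reaches exactly 1.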
And so, anytime we say or think, “I’m absolutely 100 percent certain!” — even for something as probable as our globe-shaped Earth — we’re not only being foolish, we’re being factually wrong. By that statement, we’re effectively saying there is no further evidence in the world, no matter how strong, that could change our minds. And that is as ridiculous as claiming, “I know everything about everything that could ever possibly happen in the universe, ever,” because there are always some unknown unknowns we cannot conceive of, no matter how knowledgeable and wise we think we are.
Which is why science never officially “proves” anything — it just seeks evidence to strengthen or weaken current theories until they approach 0 percent or 100 percent. This should serve as a reminder that we should always remain open to the possibility of changing our minds if strong enough evidence emerges. And most importantly, we must remember to see our deepest beliefs for what they ultimately are: just another prior probability, floating in a sea of uncertainty.
Liv Boeree is a science communicator and TV host specializing in astrophysics, rationality, and poker.
Original Source -> How an 18th-century priest gave us the tools to make better decisions
via The Conservative Brief
0 notes