#William Macaskill
Text
MacAskill insists that endless economic growth—fueled by having more kids, or perhaps even just creating ‘digital’ worker-people who could take over the economy—is desirable. It’s extraordinary to us that, in a world where the pronatalist norm of bringing new people into the world remains widely unquestioned as a central aim in life, the ‘longtermists’ present themselves as being edgy by favoring ever-more births.
In the ‘longtermist’ view, the more humans there are with lives that aren’t completely miserable, the better. MacAskill says he believes that only ‘technologically capable species’ are valuable. This is why he writes that ‘if Homo sapiens went extinct,’ the badness of this outcome may depend on whether ‘some other technologically capable species would evolve and take our place.’ Without such a species evolving to ‘take our place,’ the biosphere—with all its wonders and beauty—would be worthless. We find this to be a very shallow perspective. 
Perhaps it isn’t surprising, then, that MacAskill doesn’t see much of a place for non-human animals in the future. True, some ‘longtermists’ have spent a lot of time recently worrying about the suffering of wild animals. You might find it touching that they fret about such innocent creatures, including shrimp. But look out: their concern has a sinister side. Some ‘longtermists’ have suggested that wild animals’ lives are not worth living, so full (allegedly) of suffering are they. MacAskill tends toward envisaging those animals’ more or less complete replacement: by (you guessed it) many more humans. Here’s what MacAskill says: ‘if we assess the lives of wild animals as being worse than nothing, which I think is plausible … then we arrive at the dizzying conclusion that from the perspective of the wild animals themselves, the enormous growth and expansion of Homo sapiens has been a good thing’—a growth that MacAskill wants to endlessly inflate.
MacAskill excuses the proposed near-elimination of the wild by saying that most wild animals (by neuron count) are fish, and by offering reasons for thinking that fish typically have peculiarly bad lives compared to land animals. But there is only a preponderance of fish because we have extirpated an even higher percentage of wild land animals. He doesn’t even consider the possibility that we ought to reverse that situation and rewild much of the Earth.
44 notes · View notes
Text
A Long Term Birthday Problem
18 notes · View notes
maaarine · 6 months
Text
Sam Harris:
"There are details of how he [Sam Bankman-Fried] behaved with people, that struck me as arrogant to the point of insanity.
In these investors' calls, he's describing his business and soliciting hundreds of millions of dollars from firms, and he is simultaneously playing video games.
This is celebrated as this delightful affectation.
But clearly he's someone who thinks he need not give 100% of his attention because he's got so much bandwidth he can just play video games while having these important conversations.
And there are things in Michael Lewis' book that revealed that he was quite a strange person.
He claimed that he didn't know what people meant when they said they experienced the feeling of love.
So he's neuro-atypical at a minimum.
Shouldn't there have been more red flags earlier on in terms of his integrity or capacity for ethical integrity?
If someone tells me that they have no idea what one means when they say they love other people, that is an enormous red flag.
Collaborating with this person, putting trust in them, it's an enormous red flag."
William MacAskill:
"On his inability to feel love, that's not something that was striking or notable to me.
After the Michael Lewis book and lots of things came out, it seemed like he had emotional flatness across the board.
Whether this was a result of depression or ADHD or autism, it's not clear to me.
But that wasn't something that seemed obvious at the time.
I guess I interact with people who are relatively emotionally flat quite a lot, [chuckles]."
1 note · View note
linusjf · 1 year
Text
William MacAskill: We are the ancients
0 notes
maxksx · 2 years
Text
MacAskill’s approach is based on what is known as subjective Bayesianism. This means that the probabilities it deploys are not intended to be objective measures of the frequency of events, but rather degrees of belief that they will occur, or credences, which are iteratively updated as we acquire new information. This explains why, when crunching the numbers that rank AGI as a more pressing risk than pandemics, world war, or climate change, MacAskill relies on estimates distilled from the credences of surveyed experts, rather than conceptual arguments about the underlying causes (91). I don’t mean to suggest that this sort of approach is entirely unjustified. The problem with dealing with uncertainty about the future is that we often lack anything resembling either objective statistics or a solid conceptual framework, and yet we still have to make decisions. Bayesianism is a framework that lets us start somewhere (even just a hunch) and improve the decisions we make over time (depending on how those hunches shake out). But in practice it tends to assign non-zero probabilities in every case, simply lowering them with repeated updates. This is usually not an issue, but when those seemingly tiny probabilities are multiplied by the astronomical quantities of positive or negative value posited by longtermists, their expected value suddenly overrides that of outcomes whose likelihood we have a much better statistical and/or conceptual grasp on. This means that no matter how good a conceptual argument is that the sudden emergence of effectively godlike AGI is impossible, it might never lower the probability beneath a threshold where it must be taken more seriously than much more tangible dangers. Paradoxically, this way of updating their beliefs might actually insulate longtermists against having to modify their priorities.
https://www.thephilosopher1923.org/post/the-weight-of-forever
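To make the dynamic concrete, here is a minimal sketch in Python with purely illustrative numbers (they are assumptions for this example, not figures from MacAskill's book or from the review): a subjective credence that only ever shrinks multiplicatively never reaches zero, and even a tiny residual credence, multiplied by an astronomical payoff, can dominate an expected-value comparison with a much better-grounded risk.

```python
# Purely illustrative numbers -- assumptions for this sketch, not anyone's real estimates.

# A subjective credence in "godlike AGI emerges suddenly", updated downward
# ten times as counterarguments arrive. Multiplicative updates shrink the
# credence but never drive it to zero.
credence_agi = 0.05
for _ in range(10):
    credence_agi *= 0.5              # each round of counterevidence halves the credence
print(f"credence after updates: {credence_agi:.6f}")   # ~0.000049, still > 0

# Expected-value comparison against a mundane, statistically well-understood risk.
value_agi_catastrophe = 1e15         # astronomical stakes posited by longtermists
prob_mundane = 0.1                   # probability we have decent statistics for
value_mundane_harm = 1e6             # tangible, well-measured harm

ev_agi = credence_agi * value_agi_catastrophe      # ~4.9e10
ev_mundane = prob_mundane * value_mundane_harm     # 1.0e5

print(f"EV(AGI risk)     = {ev_agi:.2e}")
print(f"EV(mundane risk) = {ev_mundane:.2e}")
# The never-zero credence, multiplied by an astronomical value, still swamps
# the better-grounded risk -- the updating dynamic the excerpt describes.
```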
1 note · View note
carolinemillerbooks · 2 years
Text
New Post has been published on https://www.booksbycarolinemiller.com/musings/the-futures-emerald-city/
The Future's Emerald City
A strategist for the Democratic Party, Lisa Smith, sums up her political view of the world: "It's radical that being reasonable is radical and being normal is abnormal." ("Radical Reason" by Joe Hagan, Vanity Fair, October 2022, pg. 43.) Profound as well as surprising, the comment comes from a magazine with a picture of an actress exposing skin on the cover. Incongruity is a specialty of human beings. Whether it works as a successful survival tactic remains to be seen.

The average span for a mammalian species is 1 million years, so ours is young by 700,000 years. ("The Beginning of History," by William MacAskill, Foreign Affairs, Sept./Oct. 2022, pg. 13.) Having proved we are clever enough to plunder the earth to meet our desires, we appear to have brought ourselves to the brink of extinction. Either we will go up in a cloud of nuclear smoke because we can't get along with each other, or we will compete until we fish the oceans dry. The world we have created is one we can no longer manage with ease. As Smith suggests, humans must soon reach a consensus about what is normal and abnormal. Most of us agree we aren't in Kansas* anymore.

Bill Clinton, our 42nd President, believes we can find our way to the Emerald City if we bypass a divided political system and place our faith in philanthropy. The basis for his sentiment is insubstantial hope: "I think there is a longing for people to get together and meet with an end in mind." I want to ask what people and what end Clinton promotes. The guiding principle at the moment seems to be the will to power. Without a driving incentive for cohesion, cooperation has as much chance to survive as a dead lily plunged into a water glass.

William MacAskill, a philosopher who teaches at Oxford University, approaches human survival differently than Clinton. He begins with a question: can humanity manage "the danger of its own genius"? ("The Beginning of History," Foreign Affairs, Sept./Oct. 2022, pg. 12.) He believes we'd increase our longevity if we shifted our attention from the present to the future. Do we want to preserve the planet for our descendants? Regardless of differences in religion, culture, or politics, a majority would opt to save the planet for our unborn generations. The question is, how do we achieve it?

MacAskill isn't alone in his concerns for the future. Writers Dani Rodrik and Stephen M. Walt make a startling proposal in "How to Build a Better Order." (Foreign Affairs, Sept./Oct. 2022, pgs. 142-155.) Past attempts to rein in repugnant governments with sanctions do little to effect change or liberate the oppressed. Generally, the pain falls hardest upon the powerless. A future world, they say, will need to accommodate non-Western powers and tolerate greater diversity in national arrangements and practices. (Ibid, pg. 145.) That means the political stage must make room for dictatorships like North Korea, the Taliban, and the military dictatorship of Myanmar. MacAskill agrees and points out the benefits of bringing incorrigibles into the fold. With their existence no longer threatened, military spending in those countries would decline, and, presuming corruption doesn't explode, some money might be left over to feed the hungry. The arrangement would reduce the need for cyber war, or nuclear and bioweapons. ("The Beginning of History," by William MacAskill, Foreign Affairs, Sept./Oct. 2022, pg. 19.) History shows us cooperation between adversaries is possible when mutual destruction is at stake.
After World War I, nations turned their backs on gas warfare. A few leaders have flouted the Geneva Protocol, but they are deemed to be war criminals and imprisoned if caught. Nuclear agreements, though old and out of date, remain in force for the same reason. Cloning is another example of universal cooperation. That scientific breakthrough may be appropriate for farm animals, but no nation has broken with the consensus that human cloning is abhorrent.

Using the future to guide the present provides a greater incentive for cooperation than Clinton's call for collaboration. His appeal to our better angels ignores nature's pecking order, the instrument for natural selection. Mutual survival is a more compelling incentive, especially when, today, people of all philosophies recognize that human life is at an inflection point. Those who imagine we can save the future by returning to the past are already dead, fossils of a bygone era. Humans have no choice but to observe the dictates of time. We must go forward. If the future threatens our extinction, then we must change the rules of engagement. To quote Eric Hoffer, "In times of change, learners inherit the earth, while the learned find themselves beautifully equipped to deal with a world that no longer exists."

*Reference to The Wizard of Oz.
0 notes
justalittlesolarpunk · 4 months
Text
I’ve teased it. You’ve waited. I’ve procrastinated. You’ve probably forgotten all about it.
But now, finally, I’m here with my solarpunk resources masterpost!
YouTube Channels:
Andrewism
The Solarpunk Scene
Solarpunk Life
Solarpunk Station
Our Changing Climate
Podcasts:
The Joy Report
How To Save A Planet
Demand Utopia
Solarpunk Presents
Outrage and Optimism
From What If To What Next
Solarpunk Now
Idealistically
The Extinction Rebellion Podcast
The Landworkers' Radio
Wilder
What Could Possibly Go Right?
Frontiers of Commoning
The War on Cars
The Rewild Podcast
Solacene
Imagining Tomorrow
Books (Fiction):
Ursula K. Le Guin: The Left Hand of Darkness, The Dispossessed, The Word for World is Forest
Becky Chambers: A Psalm for the Wild-Built, A Prayer for the Crown-Shy
Phoebe Wagner: When We Hold Each Other Up
Phoebe Wagner, Bronte Christopher Wieland: Sunvault: Stories of Solarpunk and Eco-Speculation
Brenda J. Pierson: Wings of Renewal: A Solarpunk Dragon Anthology
Gerson Lodi-Ribeiro: Solarpunk: Ecological and Fantastical Stories in a Sustainable World
Justine Norton-Kertson: Bioluminescent: A Lunarpunk Anthology
Sim Kern: The Free People’s Village
Ruthanna Emrys: A Half-Built Garden
Sarina Ulibarri: Glass & Gardens
Books (Non-fiction):
Murray Bookchin: The Ecology of Freedom
George Monbiot: Feral
Miles Olson: Unlearn, Rewild
Mark Shepard: Restoration Agriculture
Kristin Ohlson: The Soil Will Save Us
Rowan Hooper: How To Spend A Trillion Dollars
Anna Lowenhaupt Tsing: The Mushroom At The End of The World
Kimberly Nicholas: Under The Sky We Make
Robin Wall Kimmerer: Braiding Sweetgrass
David Miller: Solved
Ayana Johnson, Katharine Wilkinson: All We Can Save
Jonathan Safran Foer: We Are The Weather
Colin Tudge: Six Steps Back To The Land
Edward Wilson: Half-Earth
Natalie Fee: How To Save The World For Free
Kaden Hogan: Humans of Climate Change
Rebecca Huntley: How To Talk About Climate Change In A Way That Makes A Difference
Christiana Figueres, Tom Rivett-Carnac: The Future We Choose
Jonathon Porritt: Hope In Hell
Paul Hawken: Regeneration
Mark Maslin: How To Save Our Planet
Katherine Hayhoe: Saving Us
Jimmy Dunson: Building Power While The Lights Are Out
Paul Raekstad, Sofa Saio Gradin: Prefigurative Politics
Andreas Malm: How To Blow Up A Pipeline
Phoebe Wagner, Bronte Christopher Wieland: Almanac For The Anthropocene
Chris Turner: How To Be A Climate Optimist
William MacAskill: What We Owe To The Future
Mikaela Loach: It's Not That Radical
Miles Richardson: Reconnection
David Harvey: Spaces of Hope, Rebel Cities
Eric Holthaus: The Future Earth
Zahra Biabani: Climate Optimism
David Ehrenfeld: Becoming Good Ancestors
Stephen Gliessman: Agroecology
Chris Carlsson: Nowtopia
Jon Alexander: Citizens
Leah Thomas: The Intersectional Environmentalist
Greta Thunberg: The Climate Book
Jem Bendell, Rupert Read: Deep Adaptation
Seth Godin: The Carbon Almanac
Jane Goodall: The Book of Hope
Vandana Shiva: Agroecology and Regenerative Agriculture
Amitav Ghosh: The Great Derangement
Minouche Shafik: What We Owe To Each Other
Dieter Helm: Net Zero
Chris Goodall: What We Need To Do Now
Aldo Leopold: A Sand County Almanac
Jeffrey Jerome Cohen, Stephanie Foote: The Cambridge Companion To The Environmental Humanities
Bella Lack: The Children of The Anthropocene
Hannah Ritchie: Not The End of The World
Kim Stanley Robinson: Ministry For The Future
Fiona Mathews, Tim Kendall: Black Ops & Beaver Bombing
Jeff Goodell: The Water Will Come
Lynne Jones: Sorry For The Inconvenience But This Is An Emergency
Eileen Crist: Abundant Earth
Sam Bentley: Good News, Planet Earth!
Timothy Beal: When Time Is Short
Andrew Boyd: I Want A Better Catastrophe
Kristen R. Ghodsee: Everyday Utopia
Elizabeth Cripps: What Climate Justice Means & Why We Should Care
Kylie Flanagan: Climate Resilience
Chris Johnstone, Joanna Macy: Active Hope
Mark Engler: This is an Uprising
Anne Therese Gennari: The Climate Optimist Handbook
Magazines:
Solarpunk Magazine
Positive News
Resurgence & Ecologist
Ethical Consumer
Films (Fiction):
How To Blow Up A Pipeline
The End We Start From
Woman At War
Black Panther
Star Trek
Tomorrowland
Films (Documentary):
2040: How We Can Save The Planet
The People vs Big Oil
Wild Isles
The Boy Who Harnessed The Wind
Generation Green New Deal
Planet Earth III
Video Games:
Terra Nil
Animal Crossing
Gilded Shadows
Anno 2070
Stardew Valley
RPGs:
Solarpunk Futures
Perfect Storm
Advocacy Groups:
A22 Network
Extinction Rebellion
Greenpeace
Friends of The Earth
Green New Deal Rising
Apps:
Ethy
Sojo
BackMarket
Depop
Vinted
Olio
Buy Nothing
Too Good To Go
Websites:
European Co-housing
UK Co-housing
US Co-housing
Brought By Bike (connects you with zero-carbon delivery goods)
ClimateBase (find a sustainable career)
Environmentjob (ditto)
Businesses (🤢):
Ethical Superstore
Hodmedods
Fairtransport/Sail Cargo Alliance
Let me know if you think there’s anything I’ve missed!
1K notes · View notes
the-unforgotten · 8 months
Text
2024 reading list
My list of 50-something books I plan to read this year: a mix of random fiction, some series, as well as classic fiction, philosophy, and some political stuff.
Little Women by Louisa May Alcott
Meditations by Marcus Aurelius
Pride and Prejudice by Jane Austen
Flowerheart by Catherine Bakewell
Bookshops & Bonedust by Travis Baldree
Blood Debts by Terry J. Benton-Walker
A Broken Blade by Melissa Blair
Utopia for Realists by Rutger Bregman
Break the Cycle by Dr. Mariel Buqué
Small Pleasures by Clare Chambers
The Ballad of Songbirds and Snakes by Suzanne Collins
Don Quixote by Miguel de Cervantes, Edith Grossman
Evicted by Matthew Desmond
Ripe by Sarah Rose Etter
Polysecure by Jessica Fern
The Wicked + The Divine (2014), Volume 1 by Kieron Gillen
Fear of Black Consciousness by Lewis R. Gordon
The Seven Principles for Making Marriage Work by John Gottman, PhD, Nan Silver
A Wizard of Earthsea by Ursula K. Le Guin
Seraphina by Rachel Hartman
Royal Assassin by Robin Hobb
Ain't I a Woman by Bell Hooks
Five Survive by Holly Jackson
The Queen of the Tearling by Erika Johansen
Time Squared by Lesley Krueger
Yellowface by R. F. Kuang
Jade City by Fonda Lee
Six Crimson Cranes by Elizabeth Lim
What We Owe the Future by William MacAskill
Earth Logic by Laurie J. Marks
The Communist Manifesto by Karl Marx, Friedrich Engels
One of Us Is Lying by Karen M. McManus
Killing Commendatore by Haruki Murakami
How High We Go in the Dark by Sequoia Nagamatsu
Hello Beautiful by Ann Napolitano
Beyond Good and Evil by Friedrich Wilhelm Nietzsche
Murder in an Irish Village by Carlene O'Connor
1984 by George Orwell
Boy, Snow, Bird by Helen Oyeyemi
Children of Chicago by Cynthia Pelayo
Murder on Black Swan Lane by Andrea Penrose
The Republic by Plato
Mort by Terry Pratchett
Everything's Fine by Cecilia Rabess
Aristotle and Dante Discover the Secrets of the Universe by Benjamin Alire Sáenz
A Gathering of Shadows by V. E. Schwab
Vicious by V. E. Schwab
The Taming of the Shrew by William Shakespeare
Frankenstein by Mary Shelley
They Both Die at the End by Adam Silvera
How Fascism Works by Jason Stanley
Dracula by Bram Stoker
She Is a Haunting by Trang Thanh Tran
Womb City by Tlotlo Tsamaase
The 7 1/2 Deaths of Evelyn Hardcastle by Stuart Turton
The Picture of Dorian Gray by Oscar Wilde
-----
In the two weeks since creating this list, I've already added more:
Mutual Aid: Building Solidarity in This Crisis by Dean Spade
The Complete Maus by Art Spiegelman
2 notes · View notes
Link
In truth, the EA community has always lacked integrity, honesty and common sense. This was, in fact, a core component of its brand. It’s what made EA unique and attractive to “moral weirdos,” a term that MacAskill uses endearingly. The about-face in the fallout of FTX’s implosion, with EAs now claiming to care about things like integrity, is just more evidence that the community is a relentless grift. It has billions in the bank, a palatial estate in Oxfordshire, and links to some of the richest people on the planet. Hundreds of millions of dollars are poured back into the community for “movement building,” and leading EAs — while presenting themselves as modest and self-sacrificing — flew in private jets, bought hundreds of millions in Bahamian real estate, and were offered literally millions, from tech billionaires, to boost book sales.
If you want to do good in the world — and you should — steer clear of EA.
45 notes · View notes
female-malice · 2 years
Text
Over the past two decades, a small group of theorists mostly based in Oxford have been busy working out the details of a new moral worldview called longtermism, which emphasizes how our actions affect the very long-term future of the universe – thousands, millions, billions, and even trillions of years from now. This has roots in the work of Nick Bostrom, who founded the grandiosely named Future of Humanity Institute (FHI) in 2005, and Nick Beckstead, a research associate at FHI and a programme officer at Open Philanthropy. It has been defended most publicly by the FHI philosopher Toby Ord, author of The Precipice: Existential Risk and the Future of Humanity (2020). Longtermism is the primary research focus of both the Global Priorities Institute (GPI), an FHI-linked organisation directed by Hilary Greaves, and the Forethought Foundation, run by William MacAskill, who also holds positions at FHI and GPI. Adding to the tangle of titles, names, institutes and acronyms, longtermism is one of the main ‘cause areas’ of the so-called effective altruism (EA) movement, which was introduced by Ord in around 2011 and now boasts of having a mind-boggling $46 billion in committed funding.
It is difficult to overstate how influential longtermism has become. Karl Marx in 1845 declared that the point of philosophy isn’t merely to interpret the world but change it, and this is exactly what longtermists have been doing, with extraordinary success. Consider that Elon Musk, who has cited and endorsed Bostrom’s work, has donated $1.5 million dollars to FHI through its sister organisation, the even more grandiosely named Future of Life Institute (FLI). This was cofounded by the multimillionaire tech entrepreneur Jaan Tallinn, who, as I recently noted, doesn’t believe that climate change poses an ‘existential risk’ to humanity because of his adherence to the longtermist ideology.
Meanwhile, the billionaire libertarian and Donald Trump supporter Peter Thiel, who once gave the keynote address at an EA conference, has donated large sums of money to the Machine Intelligence Research Institute, whose mission to save humanity from superintelligent machines is deeply intertwined with longtermist values. Other organisations such as GPI and the Forethought Foundation are funding essay contests and scholarships in an effort to draw young people into the community, while it’s an open secret that the Washington, DC-based Center for Security and Emerging Technologies (CSET) aims to place longtermists within high-level US government positions to shape national policy. In fact, CSET was established by Jason Matheny, a former research assistant at FHI who’s now the deputy assistant to US President Joe Biden for technology and national security. Ord himself has, astonishingly for a philosopher, ‘advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science’, and he recently contributed to a report from the Secretary-General of the United Nations that specifically mentions ‘long-termism’.
The point is that longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about. I believe this needs to change because, as a former longtermist who published an entire book four years ago in defence of the general idea, I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today. But to understand the nature of the beast, we need to first dissect it, examining its anatomical features and physiological functions.
(continue reading)
Rather than solve climate problems now, governments are focused on putting humanity in sci-fi virtual reality pods in 500,000 years.
We've seen this over and over again throughout history. Crazed emperors use up their nation's resources hunting for immortality. Hunting for the philosopher's stone.
Longtermism is our modern day philosopher's stone.
#cc
34 notes · View notes
belacqui-pro-quo · 1 year
Link
4 notes · View notes
x-authorship-x · 1 year
Text
Tag Game!
Thanks to @agardenofideas for tagging me!
Rules: Answer all the questions, then tag 9 people you want to get to know better!
Three ships: ...I'm a very relaxed multi shipper so this'll be...interesting 😅 I'll do most recently read
Ardeth Bay/Jonathan Carnahan (The Mummy)
Luke Alvez/Spencer Reid (Criminal Minds)
Uchiha Madara/Senju Tobirama (Naruto)
First ship: Uh. 👁️👄👁️ That's like dark ages, idk probably Harry/Hermione?? Can't remember tbh
Last song I listened to: My type by brb.
Last movie I watched: Dungeons & Dragons: Honour Among Thieves
Currently reading: I'm between books rn, I'm gaming more (BotW) but next on my list is What We Owe The Future by William MacAskill
Currently watching: Criminal Minds
Currently consuming: NOMO (free from) caramel chocolate bar, genuinely the nicest caramel bar I've ever had
Currently craving: Shin ramyun gourmet spicy (:'))
Tagging (only if you want to!)
@katlou303 @stereden @ellorypurebloodculture @tsarinatorment @thekatthatbarks @iamnotakitty @mortysanchez @theraynealchemist @eruditeempress and anyone else who wants to play! Feel free to tag me if you want to join the chain, this is off the top of my head so don't feel excluded if I didn't tag you directly 😅
3 notes · View notes
bengrossbg · 2 years
Text
FTX and the Problem of Justificatory Ethics
א. Preface
This is my first chance to get to write straightforwardly on this blog on the topic of philosophy, and while I plan to write a full introduction to what I want this to be in the future, I figure I’ll offer a couple of disclaimers now:
I am probably retreading covered ground. This is a space for me to work out my thoughts, not publish brilliant and original philosophic work (gotta leave that for the journals!)
I am gonna get stuff wrong. Tell me if you think I do, and we can fight about it
I think many of the EA people are wonderful, smart, and good. Many of them are my friends, and I hope we can all engage in this argument in good faith.
With that out of the way, let’s begin
ב. The Scandal
In the past week or so, a scandal has emerged surrounding the crypto billionaire, political donor, effective altruist, and philanthropist Sam Bankman-Fried (SBF). It seems now like he was engaged in a Ponzi-scheme or Ponzi-adjacent scheme, moving money from FTX, his crypto-exchange, to a hedge fund he founded and remained affiliated with: Alameda Research. The fraud is massive, and billions of dollars are involved. The scheme, most observers agree, constitutes an ethical violation on a massive scale.
SBF and his co-conspirators are particularly attractive as a media spectacle because of Bankman-Fried’s political connections (he was a top campaign contributor to Democrats in the 2022 election) and lots of condescending and gossipy reporting on a polyamorous relationship among those involved at the top of FTX and Alameda Research. None of that is of particular interest to us here. What is of particular interest is SBF’s involvement in the Effective Altruist (EA) movement.
ג. The Problem
The concern here is clear: did SBF engage in fraud knowingly and with the belief that he was morally justified?
The fact of the matter is that’s probably an unanswerable question: no one except Sam knows why he did what he did, and his motivations are likely muddled and unclear even to him. But here’s what I’d like to posit: the very fact that an effective altruist justification of the scheme is available presents a problem for Effective Altruism as a philosophical project. If Effective Altruism is uniquely positioned to produce results that even its proponents detest, it (as a theory) is, to put it gently, in deep sh*t.
I’m not the only one that thinks so either. William MacAskill, an ethical philosopher who is extremely influential in EA circles, thought it was so dire a threat to EA that he wrote a paragraphs-long Twitter thread on the subject. The rest of this post will try to untangle that thread and see whether its contentions hold up.
ד. Contra MacAskill
After some preliminary remarks expressing his frustration and anger at SBF, as well as an explanation and renunciation of his ties with SBF, MacAskill dives into the philosophical meat:
I want to make it utterly clear: if those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.
This is an interesting start, and it makes what MacAskill is trying to do here clear. For EA to maintain its credibility, it cannot be associated with the FTX scheme. MacAskill goes on:
For years, the EA community has emphasised the importance of integrity, honesty, and the respect of common-sense moral constraints. If customer funds were misused, then Sam did not listen; he must have thought he was above such considerations. A clear-thinking EA should strongly oppose “ends justify the means” reasoning. I hope to write more soon about this. In the meantime, here are some links to writings produced over the years.
At this point, anyone familiar with EA should be scratching their head. EA opposed to ends-justify-the-means thinking? Isn’t that, like, the whole point of consequentialism? The answer: it is, and MacAskill knows that. What becomes clear as you read the rest of the thread (and its accompanying citations) is that he’s trying to thread a very important needle. Philosophically, EA believes that the ends justify the means, but practically it must avoid engaging in that sort of moral reasoning. As I’ll get to later, this is untenable, but let’s take it at face value for the time being.
What MacAskill does next is cite some existing EA literature that suggests this kind of argument, ostensibly to show that EA has always incorporated these ideas into their broader philosophy.
[Two screenshots of passages from MacAskill’s book What We Owe The Future]
These are some pretty interesting selections, and I think they get to the real heart of the matter. I’ll start with the end, where he discusses “either endors[ing] nonconsequentialism or tak[ing] moral uncertainty seriously.” Frankly, I think the nonconsequentialism thing is basically just an ass-covering maneuver. EA is consequentialist and near-dogmatically so; that’s just what it is. The moral uncertainty piece is very interesting, and maybe represents EA’s ticket out of this dilemma, but only in a way that dramatically undercuts the movement. A truly morally uncertain person just should not be a longtermist effective altruist.1
However, the first two arguments are interesting. They can essentially be summed up as:
The Hedging Argument: Ethical decisions are made in conditions of uncertainty, and standard ethical principles represent a good method for what is essentially hedging, or minimizing risk
The Extraordinary Circumstances Argument: While there are valid circumstances in which violating standard ethical principles is justified, they are so infrequent that they aren’t worthy of concern.
Here is, as I see it, the trouble for MacAskill: neither of these arguments would have a chance of convincing SBF. The reasons for this are simple: dramatic ethical decisions are always risky, they are always extraordinary, and they are exactly what EA tells you is the right thing to do. Let’s take these in reverse order:
EA is, at its core, a maximizing argument. You have to maximize utility, globally, and failure to do so is a moral failing. This means that to be the best person possible, you have to accrue as much money and/or power as possible, and then direct it towards the most marginally efficient production of utility. One way to do that is to run a Ponzi scheme and give the money away to philanthropy.
When you are in control of vast resources, as EA incentivizes you to be and as SBF assuredly was, all decisions you choose to do with that are, by nature, extraordinary. Sure, you shouldn’t rob a baby to save a baby in Africa, but if you can rob BILLIONS to save BILLIONS, then you find yourself in the same kind of “baby Hitler scenario” MacAskill wants to pretend doesn’t actually arise. But it does.
When you are in control of vast resources, by the nature of opportunity cost, all decisions you decide to make with it (including indecisions) are extremely risky. Not defrauding your crypto customers could mean not lobbying the elected representatives you could bribe into preventing the next pandemic. Isn’t the next pandemic also a massive risk? When you reach the scale of billions of dollars under your control, hedging becomes a useless tactic. Isn’t doing the quote-unquote wrong thing the less risky proposition in the face of global annihilation?
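To see why the hedging defence loses its grip at this scale, here is a deliberately crude sketch in Python (invented numbers, a caricature of naive expected-value maximization rather than anyone's actual reasoning or real-world estimates) of the comparison the post worries a maximizer might run:

```python
# Invented, illustrative numbers -- a caricature of naive EV maximization,
# not anyone's actual figures or reasoning.

# Option A: don't touch customer funds; donate ordinary profits.
p_good_done_a = 1.0            # near-certain, well-understood outcome
value_a = 1e8                  # modest philanthropic impact

# Option B: divert customer funds toward causes framed as averting catastrophe.
p_caught = 0.5                 # even odds the scheme collapses
value_if_works = 1e13          # "astronomical" impact claimed if it works
value_if_caught = -1e10        # harm to customers and to the movement

ev_a = p_good_done_a * value_a
ev_b = (1 - p_caught) * value_if_works + p_caught * value_if_caught

print(f"EV(play it straight) = {ev_a:.2e}")   # 1.00e+08
print(f"EV(defraud & donate) = {ev_b:.2e}")   # ~5.0e+12
# Once the upside is allowed to be astronomical, the 'safe' option no longer
# wins on expected value -- which is the failure mode the post describes.
```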
ה. Conclusion
So where does that leave EA, and what the hell do I mean by “justificatory ethics?” Well, I think EA has a fundamental flaw that allows its adherents to engage in moral reasoning that upends the whole project.
This is because, by its maximizing nature, it incentivizes the conditions in which the moral principles which are supposed to prevent it from self-defeat are stripped away. It self-defeats its own self-defeat protection! Now, if William MacAskill and other EAs were simply willing to accept this result, perhaps they could survive as a niche, insulated community, although (as is becoming clear from the overwhelmingly negative media coverage of this entire affair) it probably couldn’t achieve the popular success it desires. But even the members of EA resent what SBF did, they find it morally abhorrent. And yet, they lack the very terms to define why it’s wrong.
ו. Post Script
Well, that’s my first post done! I hope everyone liked it, I sure had fun reading it, and I hope I didn’t make anyone too mad! If you’re reading this on Tumblr please follow me on here, and if you’re reading this through my substack, please subscribe, it’s free!
Moral uncertainty (and its cousin Moral Pluralism) are super interesting ideas and I hope to talk about them more in the future. ↩︎
9 notes · View notes
linusjf · 1 year
Text
William MacAskill: Key moral priority
0 notes
vefanyar · 2 years
Note
end of year book ask: 5, 9, 31, 39
5. The longest book you read this year
That would be Kim Stanley Robinson's The Ministry for the Future.
9. A book that was better than you expected it to be
How High We Go in the Dark by Sequoia Nagamatsu. I did not particularly enjoy the revelation at the end, but oh my god, what a book. I don't remember whether or not it made me cry (as per a question in the previous ask), but it was so so good for such a dark topic - a pandemic virus discovered in melting permafrost sweeps the world, "forcing humanity to devise a myriad of moving and inventive ways to embrace possibility in the face of tragedy" (as per the blurb). And that's really what this is about. It's quite dark, but it's also got an enormous amount of hope and kindness in the face of something so unthinkable. I didn't quite know what to expect, but it certainly wasn't connected vignettes that were so profoundly moving.
31. Did you read any translations?
The thing here is that I'm German, and I prefer to read books in the original if I can, so mostly English originals if I can get my hands on them. (These days at least they're available, even if I have to order them from my favourite bookshop; in-store they have a two-shelf selection of English bestsellers for the most part, and that's about it.) I read four this year, two are Emmi Itäranta's The Weaver - my Finnish isn't good enough to read a book with yet, unfortunately - as well as The Memory of Water which the author translated from Finnish herself, which gave reading this a very different experience. The third is Amitav Ghosh's The Great Derangement / Die Große Verblendung, a nonfiction longform essay about (what else) the climate crisis and how it's (not) treated in popular media, especially fiction, and the influence of that on collective thinking and action. It's in English originally, which wasn't available at the point I wanted it, re: COVID, so I read it in the German translation instead.
39. Five books you absolutely want to read next year?
Just five? LMAO, no. I'll do ten, five nonfiction (work-related) and five novels, in no particular order.
Nonfiction:
Thor Hanson: Hurricane Lizards and Plastic Squid: The Fraught and Fascinating Biology of Climate Change
Greta Thunberg: The Climate Book: The Facts and the Solutions
Adam Trexler: Anthropocene Fictions: The Novel in a Time of Climate Change
William MacAskill: What We Owe the Future
Elizabeth Kolbert: Field Notes from a Catastrophe: Man, Nature, and Climate Change
Novels:
Vaishnavi Patel: Kaikeyi
Samantha Allen: Patricia Wants to Cuddle
Danielle Daniel: Daughters of the Deer
Ruthanna Emrys: A Half-Built Garden
Tasha Suri's next installment of the Burning Kingdoms series, which apparently is expected to be published next year.
(I am absolutely not looking at my bookshelf and all its unread books, none of which apart from Kaikeyi is on there...)
6 notes · View notes