#AI and environment
Text
The Future of AI: A Symphony of Progress for Humanity
AI transforms healthcare, education, and the environment. Discover how humans and technology collaborate for a smarter, more sustainable future.
What’s On My Mind Today? As the sun rises on a new era of technological advancement, AI is no longer just an idea from science fiction; it’s shaping our world in ways that were unimaginable just a few decades ago. From medicine to education, environmental conservation to creative expression, the possibilities seem endless. But this isn’t just about…
#AI and environment#AI collaboration#AI in healthcare#AI technology trends#AI-driven solutions#creative AI#Ethical AI#future of AI#human-AI partnership#personalized education#sustainable AI
0 notes
Text
Google's Greenhouse Gas Dilemma: Balancing AI Expansion and Climate Goals #GoogleEmissions #ClimateChange #AI #Sustainability #GreenTech
Google’s Greenhouse Gas Dilemma: Balancing AI Expansion and Climate Goals In recent years, Google has been at the forefront of technological advancements, driving innovations that have reshaped industries and everyday life. However, its latest environmental report reveals a troubling trend: a significant rise in greenhouse gas emissions. This surge poses a serious challenge to Google’s ambitious…
#AI and Environment#carbon footprint#Climate Goals#Data Center Energy#Google Emissions#Green Technology#Renewable Energy#Sustainability in Tech
0 notes
Note
one 100 word email written with ai costs roughly one bottle of water to produce. the discussion of whether or not using ai for work is lazy becomes a non issue when you understand there is no ethical way to use it regardless of your intentions or your personal capabilities for the task at hand
with all due respect, this isn't true. *training* generative ai takes a ton of power, but actually using it takes about as much energy as a google search (with image generation being slightly more expensive). we can talk about resource costs when averaged over the amount of work that any model does, but it's unhelpful to put a smokescreen over that fact. when you approach it as an issue of scale (i.e. "training ai is bad for the environment, we should think better about where we deploy it/boycott it/otherwise organize about this") it has power as a movement. but otherwise it becomes a personal choice, moralizing "you personally are harming the environment by using chatgpt," which is not really effective messaging. and that in turn drives the sort of "you are stupid/evil for using ai" rhetoric that i hate. my point is not whether or not using ai is immoral (i mean, i don't think it is, but beyond that). it's that the most common arguments against it from ostensible progressives end up just being reactionary
i like this quote a little more- it's perfectly fine to have reservations about the current state of gen ai, but it's not just going to go away.
#i also generally agree with the genie in the bottle metaphor. like ai is here#ai HAS been here but now it is llm gen ai and more accessible to the average user#we should respond to that rather than trying to. what. stop development of generative ai? forever?#im also not sure that the ai industry is particularly worse for the environment than other resource intense industries#like the paper industry makes up about 2% of the industrial sector's power consumption#which is about 40% of global totals (making it about 1% of world total energy consumption)#current estimates put ai at 0.5% of total energy consumption by 2027#every data center in the world (meaning also everything the internet runs on) accounts for about 2% of total energy consumption#again you can say ai is an unnecessary use of resources but you cannot say it is uniquely more destructive
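for anyone who wants to check that tag math, here's a rough sketch in python (every number is the loose estimate quoted in the tags above, not a measured figure):

```python
# back-of-envelope check of the energy shares cited in the tags
industrial_share_of_world = 0.40   # industrial sector ~40% of global energy use
paper_share_of_industrial = 0.02   # paper industry ~2% of industrial consumption

paper_share_of_world = industrial_share_of_world * paper_share_of_industrial
print(f"paper industry:      ~{paper_share_of_world:.1%} of world energy")  # ~0.8%, i.e. "about 1%"

ai_share_2027 = 0.005              # projected ai share of world energy by 2027
datacenter_share = 0.02            # all data centers (i.e. the whole internet) today
print(f"projected ai (2027): ~{ai_share_2027:.1%} of world energy")
print(f"all data centers:    ~{datacenter_share:.1%} of world energy")
```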
1K notes
·
View notes
Text
Epic Systems, a lethal health record monopolist
Epic Systems makes the dominant electronic health record (EHR) system in America; if you're a doctor, chances are you are required to use it, and for every hour a doctor spends with a patient, they have to spend two hours doing clinically useless bureaucratic data-entry on an Epic EHR.
How could a product so manifestly unfit for purpose be the absolute market leader? Simple: as Robert Kuttner describes in an excellent feature in The American Prospect, Epic may be a clinical disaster, but it's a profit-generating miracle:
https://prospect.org/health/2024-10-01-epic-dystopia/
At the core of Epic's value proposition is "upcoding," a form of billing fraud that is beloved of hospital administrators, including the "nonprofit" hospitals that generate vast fortunes that are somehow not characterized as profits. Here's a particularly egregious form of upcoding: back in 2020, the Poudre Valley Hospital in Ft Collins, CO locked all its doors except the ER entrance. Every patient entering the hospital, including those receiving absolutely routine care, was therefore processed as an "emergency."
In April 2020, Caitlin Wells Salerno – a pregnant biologist – drove to Poudre Valley with normal labor pains. She walked herself up to obstetrics, declining the offer of a wheelchair, stopping only to snap a cheeky selfie. Nevertheless, the hospital recorded her normal, uncomplicated birth as a Level 5 emergency – comparable to a major heart-attack – and whacked her with a $2755 bill for emergency care:
https://pluralistic.net/2021/10/27/crossing-a-line/#zero-fucks-given
Upcoding has its origins in the Reagan revolution, when the market-worshipping cultists he'd put in charge of health care created the "Prospective Payment System," which paid a lump sum for care. The idea was to incentivize hospitals to provide efficient care, since they could keep the difference between whatever they spent getting you better and the set PPS amount that Medicare would reimburse them. Hospitals responded by inventing upcoding: a patient with controlled, long-term coronary disease who showed up with a broken leg would get coded for the coronary condition and the cast, and the hospital would pocket both lump sums:
https://pluralistic.net/2024/06/13/a-punch-in-the-guts/#hayek-pilled
The reason hospital administrators love Epic, and pay gigantic sums for systemwide software licenses, is directly connected to the two hours that doctors spent filling in Epic forms for every hour they spend treating patients. Epic collects all that extra information in order to identify potential sources of plausible upcodes, which allows hospitals to bill patients, insurers, and Medicare through the nose for routine care. Epic can automatically recode "diabetes with no complications" from a Hierarchical Condition Category code 19 (worth $894.40) as "diabetes with kidney failure," code 18 and 136, which gooses the reimbursement to $1273.60.
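To make the arithmetic concrete, here's a minimal sketch – the codes and dollar figures are the ones cited above, but the lookup table is a toy stand-in, not Epic's actual data model:

```python
# Toy illustration of why upcoding pays, using the HCC figures cited above.
hcc_reimbursement = {
    "diabetes, no complications (HCC 19)": 894.40,
    "diabetes with kidney failure (HCC 18 + 136)": 1273.60,
}

original = hcc_reimbursement["diabetes, no complications (HCC 19)"]
upcoded = hcc_reimbursement["diabetes with kidney failure (HCC 18 + 136)"]
print(f"Extra revenue from a single recode: ${upcoded - original:.2f}")  # $379.20
```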
Epic snitches on doctors to their bosses, giving them a dashboard to track doctors' compliance with upcoding suggestions. One of Kuttner's doctor sources says her supervisor contacts her with questions like, "That appointment was a 2. Don’t you think it might be a 3?"
Robert Kuttner is the perfect journalist to unravel the Epic scam. As a journalist who wrote for The New England Journal of Medicine, he's got an insider's knowledge of the health industry, and plenty of sources among health professionals. As he tells it, Epic is a cultlike, insular company that employs 12,500 people in its hometown of Verona, WI.
The EHR industry's origins start with a GW Bush-era law called the HITECH Act, which was later folded into Obama's Recovery Act in 2009. Obama provided $27b to hospitals that installed EHR systems. These systems had to do more than track patient outcomes – they also provided the data for pay-for-performance incentives. EHRs were already trying to do something very complicated – track health outcomes – but now they were also meant to underpin a cockamamie "incentives" program that was supposed to provide a carrot to the health industry so it would stop killing people and ripping off Medicare. EHRs devolved into obscenely complex spaghetti systems that doctors and nurses loathed on sight.
But there was one group that loved EHRs: hospital administrators and the private companies offering Medicare Advantage plans (which also benefited from upcoding patients in order to soak Uncle Sucker):
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8649706/
The spread of EHRs neatly tracks with a spike in upcharging: "from 2014 through 2019, the number of hospital stays billed at the highest severity level increased almost 20 percent…the number of stays billed at each of the other severity levels decreased":
https://oig.hhs.gov/oei/reports/OEI-02-18-00380.pdf
The purpose of a system is what it does. Epic's industry-dominating EHR is great at price-gouging, but it sucks as a clinical tool – it takes 18 keystrokes just to enter a prescription:
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2729481
Doctors need to see patients, but their bosses demand that they satisfy Epic's endless red tape. Doctors now routinely stay late after work and show up hours early, just to do paperwork. It's not enough. According to another one of Kuttner's sources, doctors routinely copy-and-paste earlier entries into the current one, a practice that generates rampant errors. Some just make up random numbers to fulfill Epic's nonsensical requirements: the same source told Kuttner that when prompted to enter a pain score for his TB patients, he just enters "zero."
Don't worry, Epic has a solution: AI. They've rolled out an "ambient listening" tool that attempts to transcribe everything the doctor and patient say during an exam and then bash it into a visit report. Not only is this prone to the customary mistakes that make AI unsuited to high-stakes, error-sensitive applications, it also represents a profound misunderstanding of the purpose of clinical notes.
The very exercise of organizing your thoughts and reflections about an event – such as a medical exam – into a coherent report makes you apply rigor and perspective to events that otherwise arrive as a series of fleeting impressions and reactions. That's why blogging is such an effective practice:
https://pluralistic.net/2021/05/09/the-memex-method/
The answer to doctors not having time to reflect and organize good notes is to give them more time – not more AI. As another doctor told Kuttner: "Ambient listening is a solution to a self-created problem of requiring too much data entry by clinicians."
EHRs are one of those especially hellish public-private partnerships. Health care doctrine from Reagan to Obama insisted that the system just needed to be exposed to market forces and incentives. EHRs are designed to allow hospitals to win as many of these incentives as possible. Epic's clinical care modules do this by bombarding doctors with low-quality diagnostic suggestions with "little to do with a patient’s actual condition and risks," leading to "alert fatigue," so doctors miss the important alerts in the storm of nonsense elbow-jostling:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5058605/
Clinicians who actually want to improve the quality of care in their facilities end up recording data manually and keying it into spreadsheets, because they can't get Epic to give them the data they need. Meanwhile, an army of high-priced consultants stands ready to give clinicians advice on getting Epic to do what they need, but can't seem to deliver.
Ironically, one of the benefits that Epic touts is its interoperability: hospitals that buy Epic systems can interconnect those with other Epic systems, and there's a large ecosystem of aftermarket add-ons that work with Epic. But Epic is a product, not a protocol, so its much-touted interop exists entirely on its terms, and at its sufferance. If Epic chooses, a doctor using its products can send files to a doctor using a rival product. But Epic can also veto that activity – and its veto extends to deciding whether a hospital can export their patient records to a competing service and get off Epic altogether.
One major selling point for Epic is its capacity to export "anonymized" data for medical research. Very large patient data-sets like Epic's are reasonably believed to contain many potential medical insights, so medical researchers are very excited at the prospect of interrogating that data.
But Epic's approach – anonymizing files containing the most sensitive information imaginable, about millions of people, and then releasing them to third parties – is a nightmare. "De-identified" data-sets are notoriously vulnerable to "re-identification," and the threat of re-identification only increases every time there's another release or breach that can be used to reveal the identities of people in anonymized records. For example, say you have a database of all the prescribing at a given hospital – a numeric identifier representing the patient, and the time and date when they saw a doctor and got a scrip. At any time in the future, a big location-data breach – say, from Uber or a transit system – can show you which people went back and forth to the hospital at the times that line up with those doctor's appointments, unmasking the person who got abortion meds, cancer meds, psychiatric meds or other sensitive prescriptions.
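Here's a minimal sketch of that linkage attack, with invented toy records – join the "anonymized" prescription log to a leaked location trace on matching timestamps:

```python
from datetime import datetime, timedelta

# "Anonymized" hospital records: numeric patient ID plus visit time (toy data).
prescriptions = [
    {"patient_id": 4821, "drug": "mifepristone", "seen_at": datetime(2024, 3, 4, 10, 30)},
]

# Breached location data from a ride-hail app or transit system (toy data).
location_breach = [
    {"name": "Jane Roe", "dropoff": "hospital", "time": datetime(2024, 3, 4, 10, 12)},
]

# Re-identify: anyone dropped at the hospital shortly before an appointment.
window = timedelta(hours=1)
for rx in prescriptions:
    for trip in location_breach:
        if trip["dropoff"] == "hospital" and abs(rx["seen_at"] - trip["time"]) <= window:
            print(f"{trip['name']} is likely patient {rx['patient_id']} ({rx['drug']})")
```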
The fact that anonymized data can – will! – be re-identified doesn't mean we have to give up on the prospect of gleaning insight from medical records. In the UK, the eminent doctor Ben Goldacre and colleagues built an incredibly effective, privacy-preserving "trusted research environment" (TRE) to operate on millions of NHS records across a decentralized system of hospitals and trusts without ever moving the data off their own servers:
https://pluralistic.net/2024/03/08/the-fire-of-orodruin/#are-we-the-baddies
The TRE is an open source, transparent server that accepts complex research questions in the form of database queries. These queries are posted to a public server for peer-review and revision, and when they're ready, the TRE sends them to each of the databases where the records are held. Those databases transmit responses to the TRE, which then publishes them. This has been unimaginably successful: the prototype of the TRE launched during the lockdown generated sixty papers in Nature in a matter of months.
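In outline, one query cycle looks something like this – a hedged sketch with hypothetical names, leaving out the peer review, audit logging, and output-checking that make the real TRE trustworthy:

```python
# Sketch of a trusted-research-environment query cycle: raw records never
# leave each site; only aggregate answers travel back to be published.

def run_query_locally(records, predicate):
    """Each hospital or trust evaluates the vetted query on its own database."""
    return sum(1 for r in records if predicate(r))

hospital_databases = {
    "trust_a": [{"age": 71, "outcome": "recovered"}, {"age": 64, "outcome": "died"}],
    "trust_b": [{"age": 58, "outcome": "recovered"}],
}

# A peer-reviewed query, published before it runs.
query = lambda r: r["age"] > 60 and r["outcome"] == "recovered"

# The TRE dispatches the query and publishes only the combined aggregate.
per_site = {site: run_query_locally(db, query) for site, db in hospital_databases.items()}
print("Published result:", sum(per_site.values()), "matching patients across", len(per_site), "sites")
```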
Monopolies are inefficient, and Epic's outmoded and dangerous approach to research, along with the roadblocks it puts in the way of clinical excellence, epitomizes the problems with monopoly. America's health care industry is a dumpster fire from top to bottom – from Medicare Advantage to hospital cartels – and allowing Epic to dominate the EHR market has somehow, incredibly, made that system even worse.
Naturally, Kuttner finishes out his article with some antitrust analysis, sketching out how the Sherman Act could be brought to bear on Epic. Something has to be done. Epic's software is one of the many reasons that MDs are leaving the medical profession in droves.
Epic epitomizes the long-standing class war between doctors who want to take care of their patients and hospital executives who want to make a buck off of those patients.
Tor Books has just published two new, free LITTLE BROTHER stories: VIGILANT, about creepy surveillance in distance education; and SPILL, about oil pipelines and indigenous landback.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/10/02/upcoded-to-death/#thanks-obama
Image: Flying Logos (modified) https://commons.wikimedia.org/wiki/File:Over_$1,000,000_dollars_in_USD_$100_bill_stacks.png
CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en
#pluralistic#ehrs#robert kuttner#tres#trusted research environments#ben goldacre#epic#epic systems#interoperability#privacy#reidentification#deidentification#thanks obama#upcoding#Hierarchical Condition Category#medicare#medicaid#ai#American Recovery and Reinvestment Act#HITECH act#medicare advantage#ambient listening#alert fatigue#monopoly#antitrust
819 notes
·
View notes
Text
#democracy#vote democrat#election 2024#vote blue#voting#progressive#pro choice#diversity#equality#never trump#human rights#environment#Harris/Walz#ai generated
636 notes
·
View notes
Text
I don’t have a posted DNI for a few reasons but in this case I’ll be crystal clear:
I do not want people who use AI in their whump writing (generating scenarios, generating story text, etc.) to follow me or interact with my posts. I also do not consent to any of my writing, posts, or reblogs being used as inputs or data for AI.
#not whump#whump community#ai writing#beans speaks#blog stuff#:/ stop using generative text machines that scrape data from writers to ‘make your dream scenarios’#go download some LANDSAT data and develop an AI to determine land use. use LiDAR to determine tree crown health by near infrared values.#thats a good use of AI (algorithms) that I know and respect.#using plagiarized predictive text machines is in poor taste and also damaging to the environment. be better.
254 notes
·
View notes
Text
Mouthful of diamonds
inspired by this scene:
#dcmk#miyano shiho#detco#haibara ai#ai haibara#shiho miyano#detective conan#i feel like shiho/haibara's feelings towards her identity is a bit more complicated than just being stuck in a child's body#something something about being molded by the environment you grew up in#something something you'll never truly be normal#im not really proud of this but whatever
464 notes
·
View notes
Text
By hakoniwa
#nestedneons#cyberpunk#cyberpunk art#cyberpunk aesthetic#art#cyberpunk artist#cyberwave#megacity#futuristic city#scifi#urbex#urban decay#urban jungle#game design#game assets#environment art#environment design#dystopic future#dystopic#dystopia#ai art#aiartcommunity#thisisaiart
722 notes
·
View notes
Text
"The world's coral reefs are close to 25 percent larger than we thought. By using satellite images, machine learning and on-ground knowledge from a global network of people living and working on coral reefs, we found an extra 64,000 square kilometers (24,700 square miles) of coral reefs – an area the size of Ireland.
That brings the total size of the planet's shallow reefs (meaning 0-20 meters deep) to 348,000 square kilometers – the size of Germany. This figure represents whole coral reef ecosystems, ranging from sandy-bottomed lagoons with a little coral, to coral rubble flats, to living walls of coral.
Within this 348,000 km² of coral is 80,000 km² where there's a hard bottom – rocks rather than sand. These areas are likely to be home to significant amounts of coral – the places snorkelers and scuba divers most like to visit.
You might wonder why we're finding this out now. Didn't we already know where the world's reefs are?
Previously, we've had to pull data from many different sources, which made it harder to pin down the extent of coral reefs with certainty. But now we have high resolution satellite data covering the entire world – and are able to see reefs as deep as 30 meters down.
Pictured: Geomorphic mapping (left) compared to new reef extent (red shading, right image) in the northern Great Barrier Reef.
[AKA: All the stuff in red on that map is coral reef we did not realize existed!! Coral reefs cover so much more territory than we thought! And that's just one example. (From northern Queensland)]
We coupled this with direct observations and records of coral reefs from over 400 individuals and organizations in countries with coral reefs from all regions, such as the Maldives, Cuba, and Australia.
To produce the maps, we used machine learning techniques to chew through 100 trillion pixels from the Sentinel-2 and Planet Dove CubeSat satellites to make accurate predictions about where coral is – and is not. The team worked with almost 500 researchers and collaborators to make the maps.
The result: the world's first comprehensive map of coral reefs extent, and their composition, produced through the Allen Coral Atlas. [You can see the interactive maps yourself at the link!]
The maps are already proving their worth. Reef management agencies around the world are using them to plan and assess conservation work and threats to reefs."
-via ScienceDirect, February 15, 2024
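[For the curious: the per-pixel classification idea works roughly like the toy sketch below. The band values are invented and this is nothing like the actual Allen Coral Atlas pipeline – just the general shape of "label some pixels with ground truth, train a classifier, predict the rest":]

```python
# Toy per-pixel reef classifier -- invented reflectance values, not real data.
from sklearn.ensemble import RandomForestClassifier

# Training pixels: satellite band reflectances labeled by on-the-ground observers.
# Columns stand in for blue/green/red/near-infrared reflectance.
X_train = [
    [0.12, 0.18, 0.10, 0.02],  # coral reef
    [0.30, 0.35, 0.33, 0.04],  # sand
    [0.05, 0.07, 0.06, 0.01],  # deep water
]
y_train = ["reef", "sand", "deep_water"]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Classify a new pixel; at atlas scale this runs over ~100 trillion of them.
print(model.predict([[0.13, 0.17, 0.11, 0.02]]))  # expect ['reef'] on this toy data
```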
#oceanography#marine biology#marine life#marine science#coral#coral reefs#environment#geography#maps#interactive maps#ai#ai positive#machine learning#conservation news#coral reef#conservation#tidalpunk#good news#hope#full disclosure this is the same topic I published a few days ago#but with a different article/much better headline that makes it clear that this is “throughout the world there are more reefs”#rather than “we just found an absolutely massive reef”#also included one of the maps this time around#bc this is a really big deal and huge sign of hope actually!!!#we were massively underestimating how many coral reefs the world has left!#and now that we know where they are we can do a much better job of protecting them
440 notes
·
View notes
Text
anyway folks if you ever see any AI art or AI fic or AI voices of fandom content remember to block those people because none of that shit is ethically generated and will always be theft in some way
195 notes
·
View notes
Text
This blog will not RP with 'writers' who use AI.
This goes for anything about your blog by the way. Not just writing, but icons, graphics, anything having to do with your blog. Even if you're not transparent about it and think that the AI is 'so good nobody will notice', people will find out, we always do.
This is a creative hobby. Most rational people will understand that not everyone can have fancy graphics & icons, but that doesn't mean you should have a machine pull from 100,000+ stolen & scraped pieces of art & other images from multiple unknowing creatives to cobble something together to supposedly make your blog look 'nicer'.
Either make your own icons / graphics, and/or commission someone to make them for you. There are also plenty of free templates for you to check out that take less effort to find than feeding a prompt to a machine that's currently responsible for making life harder for creatives such as us. End of.
#rp meme#basic manners#not a meme#I'll try to find more genuinely deactivated blogs but ew#ew ew ew saw someone in passing use icons of their character clearly made in AI#I get having a niche muse that doesn't have icon material but like#go iconless dude no non-elitist person cares#don't come in my askbox trying to justify yourself btw you're gonna get IP blocked :|#it's also bad for the environment the same reason een eff tees were
371 notes
·
View notes
Text
Man...I'm getting really sick of seeing AI in places it doesn't belong. Go detect cancer cells and leave my damn art and literature alone. I need an AI fly swatter at this point.
#It's everywhere I look and on every damn app#I hate it#I hate that a concept with so much potential is being used in exactly the wrong way#anti ai#I hate AI#also learned recently it's horrible for the environment. All that to make shitty version of uncanny valley ass art#commissioning art is way more fun.#late night rants
102 notes
·
View notes
Text
"i can't come up with a fantasy name for my world so i HAVE to use chatgpt to get the gears flowing" have you all forgotten what fantasynamegenerators.com has done for you
#we literally already have the resources people claim ai has introduced#'you're discarding a very helpful tool' we already have that tool in a thousand different varieties#and with the added bonus of not plagiarizing/lying/being utterly horrible for the environment#there are tons of prompt and name generators made for this exact purpose!#there are worldbuilding resources and lists all over the place!#need some music to listen to for inspiration? look up ambience playlists and you'll find tons with Real Songs!#there are whole composers on youtube who make fantasy specific music using talent and brains and it sounds better than ai cobbled noise!#people in creative communities have already used more functional forms of ai + actual brainpower to make you these resources!#if you would just spend 2-3 minutes googling you would find them!
59 notes
·
View notes
Text
im going to go insane on september 6th 2024 send tweet
#my art#drdt#drdt fanart#drdt spoilers#(does that one count??? idk)#drdt david#david chiem#danganronpa despair time#no genuinely im actually so proud of this piece i NEVERRRRR use cool colours or do perspective w hands#anyways don’t do ai art kids it’s bad for the environment and there’s so many drawing tutorials out there#quality sucks doodoo on tumblr js click 4 higher res#man david isnt even close to being one of my fav characters i just love drawing ppl losing their marbles#when i started the lineart for this i just started rewatching drdt and now im on ep 7 so. figure out how long this took from there#comparing my drdt art all the way to that teruko doodle three years back….. i never got rid of those eyelash spikes huh#proud of myself tho :3 yippee
116 notes
·
View notes
Text
The real AI fight
Tonight (November 27), I'm appearing at the Toronto Metro Reference Library with Facebook whistleblower Frances Haugen.
On November 29, I'm at NYC's Strand Books with my novel The Lost Cause, a solarpunk tale of hope and danger that Rebecca Solnit called "completely delightful."
Last week's spectacular OpenAI soap-opera hijacked the attention of millions of normal, productive people and nonconsensually crammed them full of the fine details of the debate between "Effective Altruism" (doomers) and "Effective Accelerationism" (AKA e/acc), a genuinely absurd debate that was allegedly at the center of the drama.
Very broadly speaking: the Effective Altruists are doomers, who believe that Large Language Models (AKA "spicy autocomplete") will someday become so advanced that they could wake up and annihilate or enslave the human race. To prevent this, we need to employ "AI Safety" – measures that will turn superintelligence into a servant or a partner, not an adversary.
Contrast this with the Effective Accelerationists, who also believe that LLMs will someday become superintelligences with the potential to annihilate or enslave humanity – but they nevertheless advocate for faster AI development, with fewer "safety" measures, in order to produce an "upward spiral" in the "techno-capital machine."
Once-and-future OpenAI CEO Altman is said to be an accelerationist who was forced out of the company by the Altruists, who were subsequently bested, ousted, and replaced by Larry fucking Summers. This, we're told, is the ideological battle over AI: should we cautiously progress our LLMs into superintelligences with safety in mind, or go full speed ahead and trust to market forces to tame and harness the superintelligences to come?
This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive:
https://locusmag.com/2020/07/cory-doctorow-full-employment/
As Molly White writes, this isn't much of a debate. The "two sides" of this debate are as similar as Tweedledee and Tweedledum. Yes, they're arrayed against each other in battle, so furious with each other that they're tearing their hair out. But for people who don't take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief. The fact that they've split into warring factions on its particulars is less important than their unified belief in the certain coming of the paperclip-maximizing apocalypse:
https://newsletter.mollywhite.net/p/effective-obfuscation
White points out that there's another, much more distinct side in this AI debate – as different and distant from Dee and Dum as a Beamish Boy and a Jabberwock. This is the side of AI Ethics – the side that worries about "today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others." As White says, shifting the debate to existential risk from a future, hypothetical superintelligence "is incredibly convenient for the powerful individuals and companies who stand to profit from AI."
After all, both sides plan to make money selling AI tools to corporations, whose track record in deploying algorithmic "decision support" systems and other AI-based automation is pretty poor – like the claims-evaluation engine that Cigna uses to deny insurance claims:
https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims
On a graph that plots the various positions on AI, the two groups of weirdos who disagree about how to create the inevitable superintelligence are effectively standing on the same spot, and the people who worry about the actual way that AI harms actual people right now are about a million miles away from that spot.
There's that old programmer joke, "There are 10 kinds of people, those who understand binary and those who don't." But of course, that joke could just as well be, "There are 10 kinds of people, those who understand ternary, those who understand binary, and those who don't understand either":
https://pluralistic.net/2021/12/11/the-ten-types-of-people/
What's more, the joke could be, "there are 10 kinds of people, those who understand hexadecenary, those who understand pentadecenary, those who understand tetradecenary [und so weiter] those who understand ternary, those who understand binary, and those who don't." That is to say, a "polarized" debate often has people who hold positions so far from the ones everyone is talking about that those belligerents' concerns are basically indistinguishable from one another.
The act of identifying these distant positions is a radical opening up of possibilities. Take the indigenous philosopher chief Red Jacket's response to the Christian missionaries who sought permission to proselytize to Red Jacket's people:
https://historymatters.gmu.edu/d/5790/
Red Jacket's whole rebuttal is a superb dunk, but it gets especially interesting where he points to the sectarian differences among Christians as evidence against the missionary's claim to having a single true faith, and in favor of the idea that his own people's traditional faith could be co-equal among Christian doctrines.
The split that White identifies isn't a split about whether AI tools can be useful. Plenty of us AI skeptics are happy to stipulate that there are good uses for AI. For example, I'm 100% in favor of the Human Rights Data Analysis Group using an LLM to classify and extract information from the Innocence Project New Orleans' wrongful conviction case files:
https://hrdag.org/tech-notes/large-language-models-IPNO.html
Automating "extracting officer information from documents – specifically, the officer's name and the role the officer played in the wrongful conviction" was a key step to freeing innocent people from prison, and an LLM allowed HRDAG – a tiny, cash-strapped, excellent nonprofit – to make a giant leap forward in a vital project. I'm a donor to HRDAG and you should donate to them too:
https://hrdag.networkforgood.com/
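The extraction task itself is simple to sketch – something like the following, where the prompt wording and `call_llm` are hypothetical stand-ins, not HRDAG's actual code:

```python
# Hedged sketch of LLM-based officer extraction, in the spirit of HRDAG's
# write-up. `call_llm` is whatever model interface you have; it's a stand-in.

PROMPT_HEADER = (
    "From the case file text below, list every police officer mentioned, "
    "with the role they played in the wrongful conviction.\n"
    'Answer as a JSON list of {"name": ..., "role": ...} objects.\n\n'
    "Case file:\n"
)

def extract_officers(document: str, call_llm) -> str:
    """Build the prompt and hand it to the model; humans validate downstream."""
    return call_llm(PROMPT_HEADER + document)
```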
Good data-analysis is key to addressing many of our thorniest, most pressing problems. As Ben Goldacre recounts in his inaugural Oxford lecture, it is both possible and desirable to build ethical, privacy-preserving systems for analyzing the most sensitive personal data (NHS patient records) that yield scores of solid, ground-breaking medical and scientific insights:
https://www.youtube.com/watch?v=_-eaV8SWdjQ
The difference between this kind of work – HRDAG's exoneration work and Goldacre's medical research – and the approach that OpenAI and its competitors take boils down to how they treat humans. The former treats all humans as worthy of respect and consideration. The latter treats humans as instruments – for profit in the short term, and for creating a hypothetical superintelligence in the (very) long term.
As Terry Pratchett's Granny Weatherwax reminds us, this is the root of all sin: "sin is when you treat people like things":
https://brer-powerofbabel.blogspot.com/2009/02/granny-weatherwax-on-sin-favorite.html
So much of the criticism of AI misses this distinction – instead, this criticism starts by accepting the self-serving marketing claim of the "AI safety" crowd – that their software is on the verge of becoming self-aware, and is thus valuable, a good investment, and a good product to purchase. This is Lee Vinsel's "Criti-Hype": "taking press releases from startups and covering them with hellscapes":
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
Criti-hype and AI were made for each other. Emily M Bender is a tireless cataloger of criti-hypeists, like the newspaper reporters who breathlessly repeat "completely unsubstantiated claims (marketing)…sourced to Altman":
https://dair-community.social/@emilymbender/111464030855880383
Bender, like White, is at pains to point out that the real debate isn't doomers vs accelerationists. That's just "billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading – and philosophers and others feeling important by dressing these same silly ideas up in fancy words":
https://dair-community.social/@emilymbender/111464024432217299
All of this is just a distraction from real and important scientific questions about how (and whether) to make automation tools that steer clear of Granny Weatherwax's sin of "treating people like things." Bender – a computational linguist – isn't a reactionary who hates automation for its own sake. On Mystery AI Hype Theater 3000 – the excellent podcast she co-hosts with Alex Hanna – there is a machine-generated transcript:
https://www.buzzsprout.com/2126417
There is a serious, meaty debate to be had about the costs and possibilities of different forms of automation. But the superintelligence true-believers and their criti-hyping critics keep dragging us away from these important questions and into fanciful and pointless discussions of whether and how to appease the godlike computers we will create when we disassemble the solar system and turn it into computronium.
The question of machine intelligence isn't intrinsically unserious. As a materialist, I believe that whatever makes me "me" is the result of the physics and chemistry of processes inside and around my body. My disbelief in the existence of a soul means that I'm prepared to think that it might be possible for something made by humans to replicate something like whatever process makes me "me."
Ironically, the AI doomers and accelerationists claim that they, too, are materialists – and that's why they're so consumed with the idea of machine superintelligence. But it's precisely because I'm a materialist that I understand these hypotheticals about self-aware software are less important and less urgent than the material lives of people today.
It's because I'm a materialist that my primary concerns about AI are things like the climate impact of AI data-centers and the human impact of biased, opaque, incompetent and unfit algorithmic systems – not science fiction-inspired, self-induced panics over the human race being enslaved by our robot overlords.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#criti-hype#ai doomers#doomers#eacc#effective acceleration#effective altruism#materialism#ai#10 types of people#data science#llms#large language models#patrick ball#ben goldacre#trusted research environments#science#hrdag#human rights data analysis group#red jacket#religion#emily bender#emily m bender#molly white
289 notes
·
View notes