Arvind Narayanan, a computer science professor at Princeton University, is best known for calling out the hype surrounding artificial intelligence in his Substack, AI Snake Oil, written with PhD candidate Sayash Kapoor. The two authors recently released a book based on their popular newsletter about AI’s shortcomings.
But don’t get it twisted—they aren’t against using new technology. “It's easy to misconstrue our message as saying that all of AI is harmful or dubious,” Narayanan says. He makes clear, during a conversation with WIRED, that his rebuke is not aimed at the software per se, but rather at the culprits who continue to spread misleading claims about artificial intelligence.
In AI Snake Oil, those guilty of perpetuating the current hype cycle are divided into three core groups: the companies selling AI, researchers studying AI, and journalists covering AI.
Hype Super-Spreaders
Companies claiming to predict the future using algorithms are positioned as potentially the most fraudulent. “When predictive AI systems are deployed, the first people they harm are often minorities and those already in poverty,” Narayanan and Kapoor write in the book. For example, an algorithm previously used in the Netherlands by a local government to predict who may commit welfare fraud wrongly targeted women and immigrants who didn’t speak Dutch.
The authors turn a skeptical eye as well toward companies mainly focused on existential risks, like artificial general intelligence, the concept of a superpowerful algorithm better than humans at performing labor. They don’t scoff at the idea of AGI, though. “When I decided to become a computer scientist, the ability to contribute to AGI was a big part of my own identity and motivation,” says Narayanan. The misalignment comes from companies prioritizing long-term risk factors above the impact AI tools have on people right now, a common refrain I’ve heard from researchers.
Much of the hype and misunderstandings can also be blamed on shoddy, non-reproducible research, the authors claim. “We found that in a large number of fields, the issue of data leakage leads to overoptimistic claims about how well AI works,” says Kapoor. Data leakage is essentially when AI is tested using part of the model’s training data—similar to handing out the answers to students before conducting an exam.
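The exam analogy can be made concrete with a toy sketch (mine, not from the book): a dataset with no real pattern at all, and a "model" that simply memorizes its training examples. Scored on data it has already seen, it looks perfect; scored on held-out data, it does no better than a coin flip.

```python
import random

# Hypothetical illustration of data leakage: a model that memorizes its
# training examples looks flawless when "tested" on data it has already
# seen, and useless on genuinely unseen data.

random.seed(0)
# Toy dataset: random inputs, random 0/1 labels -- nothing to learn.
data = [(random.random(), random.randint(0, 1)) for _ in range(1000)]

train, test = data[:800], data[800:]
memory = {x: y for x, y in train}  # "training" = pure memorization

def accuracy(examples):
    hits = sum(1 for x, y in examples if memory.get(x, 0) == y)
    return hits / len(examples)

print(f"accuracy on training data: {accuracy(train):.2f}")  # perfect -- leakage
print(f"accuracy on unseen data:   {accuracy(test):.2f}")   # roughly chance
```

A published result evaluated this way would report near-perfect performance for a model that has learned nothing, which is why leaked test data produces the overoptimistic claims Kapoor describes.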
While academics are portrayed in AI Snake Oil as making “textbook errors,” journalists are more maliciously motivated and knowingly in the wrong, according to the Princeton researchers: “Many articles are just reworded press releases laundered as news.” Reporters who sidestep honest reporting in favor of maintaining their relationships with big tech companies and protecting their access to the companies’ executives are noted as especially toxic.
I think the criticisms about access journalism are fair. In retrospect, I could have asked tougher or more savvy questions during some interviews with the stakeholders at the most important companies in AI. But the authors might be oversimplifying the matter here. The fact that big AI companies let me in the door doesn’t prevent me from writing skeptical articles about their technology, or working on investigative pieces I know will piss them off. (Yes, even if they make business deals, like OpenAI did, with the parent company of WIRED.)
And sensational news stories can be misleading about AI’s true capabilities. Narayanan and Kapoor highlight New York Times columnist Kevin Roose’s 2023 transcript of his conversation with Microsoft’s Bing chatbot, headlined “Bing’s A.I. Chat: ‘I Want to Be Alive. 😈’” as an example of journalists sowing public confusion about sentient algorithms. “Roose was one of the people who wrote these articles,” says Kapoor. “But I think when you see headline after headline that's talking about chatbots wanting to come to life, it can be pretty impactful on the public psyche.” Kapoor mentions the ELIZA chatbot from the 1960s, whose users quickly anthropomorphized a crude AI tool, as a prime example of the lasting urge to project human qualities onto mere algorithms.
Roose declined to comment when reached via email and instead pointed me to a passage from his related column, published separately from the extensive chatbot transcript, where he explicitly states that he knows the AI is not sentient. Still, the introduction to his chatbot transcript focuses on “its secret desire to be human” as well as “thoughts about its creators,” and the comment section is strewn with readers anxious about the chatbot’s power.
Images accompanying news articles are also called into question in AI Snake Oil. Publications often use clichéd visual metaphors, like photos of robots, at the top of a story to represent artificial intelligence features. Another common trope, an illustration of an altered human brain brimming with computer circuitry used to represent the AI’s neural network, irritates the authors. “We're not huge fans of circuit brain,” says Narayanan. “I think that metaphor is so problematic. It just comes out of this idea that intelligence is all about computation.” He suggests images of AI chips or graphics processing units should be used to visually represent reported pieces about artificial intelligence.
Education Is All You Need
The adamant admonishment of the AI hype cycle comes from the authors’ belief that large language models will actually continue to have a significant influence on society and should be discussed with more accuracy. “It's hard to overstate the impact LLMs might have in the next few decades,” says Kapoor. Even if an AI bubble does eventually pop, I agree that aspects of generative tools will be sticky enough to stay around in some form. And the proliferation of generative AI tools, which developers are pushing out to the public through smartphone apps and even building hardware devices around, only heightens the need for better education on what AI actually is and what its limitations are.
The first step to understanding AI better is coming to terms with the vagueness of the term, which flattens an array of tools and areas of research, like natural language processing, into a tidy, marketable package. AI Snake Oil divides artificial intelligence into two subcategories: predictive AI, which uses data to assess future outcomes; and generative AI, which crafts probable answers to prompts based on past data.
It’s worth it for anyone who encounters AI tools, willingly or not, to spend at least a little time trying to better grasp key concepts, like machine learning and neural networks, to further demystify the technology and inoculate themselves from the bombardment of AI hype.
During my time covering AI for the past two years, I’ve learned that even if readers grasp a few of the limitations of generative tools, like inaccurate outputs or biased answers, many people are still hazy about all of their weaknesses. For example, in the upcoming season of AI Unlocked, my newsletter designed to help readers experiment with AI and understand it better, we included a whole lesson dedicated to examining whether ChatGPT can be trusted to dispense medical advice based on questions submitted by readers. (And whether it will keep your prompts about that weird toenail fungus private.)
A user may approach the AI’s outputs with more skepticism when they have a better understanding of where the model’s training data came from—often the depths of the internet or Reddit threads—and it may hamper their misplaced trust in the software.
Narayanan believes so strongly in the importance of quality education that he began teaching his children about the benefits and downsides of AI at a very young age. “I think it should start from elementary school,” he says. “As a parent, but also based on my understanding of the research, my approach to this is very tech-forward.”
Generative AI may now be able to write half-decent emails and sometimes help you communicate, but only well-informed humans have the power to correct breakdowns in understanding around this technology and craft a more accurate narrative moving forward.
You know, for years now it's been annoying me why marginalized people have to argue with Western English speakers about slurs we experience, or even just what is or isn't a slur, especially in languages the English folks don't even know shit about.
I realized, you guys have absolutely no idea how to confront slurs and your own internalized bullshit.
I've noticed that primarily English speakers have this weird gold standard for what a slur is, and you seem to have a checklist to determine if you're gonna bother respecting that slurs from other places exist, and that you probably shouldn't so casually keep repeating them when made aware. As if people from the Anglosphere are the final judge and jury about what is or isn't a slur, even when you can't even pronounce the name of the language the slur is from, and you've never met a person affected by that stigma and slur.
Someone not from the Anglosphere mentions a horrible experience with slurs and bigotry. English speakers can't keep their mouth shut and just not try to relativize that person's experience, because they just have to know everything.
The gold standard is the N-word, btw. Yes, it is a horrible slur, especially in the English language, paired with its history.
But I see more people upset that random languages have words with a vaguely similar sound—naga, meaning snake person, or nega, a Korean word—than people keeping up the same energy to avoid using other languages' slurs, or even acknowledging that other languages and cultures have their own specific slurs that have been used against many. I've seen people flip out and call those words controversial and insensitive, when they have nothing to do with the slur. Rather than actually caring that they themselves keep repeating slurs or perpetuating the harmful mindset that only English slurs need to be avoided or taught about, and everything else isn't their problem... yeah, no.
Meanwhile, on the other side, people who are victims of slurs, especially non-English ones, constantly have to explain why something is a slur, and English speakers still try to explain why everyone else is wrong. Even within English, you people get offended when someone asks you not to use a literal slur from your own language, and you still keep arguing. The A/B/O situation, btw, and G**psy as well. You love your slurs, but you also love your moral superiority.
--
I'm a little confused by this being phrased in terms of the Anglosphere.
Yes, I do sometimes see people being idiots about "You can't say that vaguely n-word sounding thing while not speaking English!", but even most monolingual English speakers understand why that's silly.
It's equally silly to think people should avoid innocuous-in-English words while speaking English.
But your two concrete examples are things used in the West in the Anglosphere. It's just that Americans sometimes have poor judgments about the level of offensiveness of slurs in English that aren't common here.
uh oh twilight of the gods got it wrong yet again so here's an old norse homophobia grammar psa
the adjective is argr, not ergi. if you call someone queer(/etc), you call them argr: argr man, argr scum, argr desires, argr behaviour, how dare you call me argr, urgh that's so gay argr...
ergi is a noun. BUT. it's not a name-calling noun. it's not equivalent to faggot. it's what you call the trait itself: queerness, faggotry. you wouldn't say someone "is" ergi; you would accuse them of ergi and spread rumours about their outrageous ergi.
if you do end up saying someone "is" ergi, what you're saying is extremely hyperbolic. you're effectively calling them Queerness Itself. which is probably not what you actually mean. i suppose it could work in some situations, especially if you're talking about a god, but it'll work better if you know you're saying something unusual and you're doing it on purpose.
What's the most evil lab equipment in your opinion 😂
In my very subjective opinion? Ion chromatographs are on my shit list. It's not really their fault - the one I have to work with is old, hasn't been maintained properly, and no one else knows how to use it so I have to figure out everything myself, which is NOT FUN! It's finicky, frustrating, and it requires working with sulfuric acid, which I do not enjoy. It's very very sensitive and if you accidentally contaminate it with ions, you will be troubleshooting for weeks trying to figure out what happened. Ions are everywhere, in everything. In dust, in tap water (and nearly all filtered water, we have a special machine that makes Ultra Pure water with no ions or anything in it), on your skin, on virtually every surface that hasn't been specially cleaned. So if you have extra ions that shouldn't be there, it's a guessing game - are they from the sample? The eluent? Sample vials? Glassware? Is the water filter malfunctioning? Are the ions even there at all or is the detector messed up? You just have to keep trying stuff until it sorts itself out. The one I work with has NOT sorted itself out yet and I've been at this for over two weeks. I'm at my wit's end here.
And bonus answer for non-instrument equipment: drying tins. They're these little aluminum trays to put soil or whatever in when you stick them in the drying oven but they make Bad Noises when they scrape together. Also they bend easily, so if you stack them and then they get knocked around, sometimes it becomes very very difficult to get them apart.
In contrast, the best instrument is the Gas Chromatograph and the best other equipment is micropipettes. The GC is straightforward, easy to use, really hard to contaminate, and rarely has technical problems (plus when it does, they're not my problem - I am not the designated GC expert). Micropipettes are just fun because it's satisfying to click the button to release the pipette tip and launch it into the trash can.