#wanting to make their current giant black box toy language models look older and more researched than they are
chronologicalimplosion · 5 months ago
[[Screenshot ID: A twitter thread by sjklapecwriting that says:
The fact that "AI = what makes NPCs in video games do things" and "AI = complex scientific models that have been in use for years" and "AI = non-generative tools that automate tedious processes" and "AI = generative tools" are all called AI feels like deliberate obfuscation.
I want good AI in video games but I also don't want AI in video games at all and I think AI is useful in the sciences but don't trust AI at all to be used in science or medicine or law and AI to colour-correct a video or remove greenscreen is cool but AI generating movies sucks.
We talk about things that threaten art and creativity and steal vast quantities of work from artists and burn the amazon and drain the seas to do it all with the same language as something that can track a part of a video so special effects are easier to make and that sucks.
And this obfuscation feels so deliberate to me, because now people freak out if someone talks about wanting to use "AI that learns from the player" or "an AI solution to help me edit video" or "an AI that can be used to generate theoretical materials to test" and they look silly.
Because now "AI" is so firmly wrapped up with the concept of "generative models" that people are, rightfully, on guard against any mention of AI whatsoever to the point where entirely distinct technologies get uselessly criticized under the same umbrella.
Like imagine if we had no other language but "tank" to describe motor vehicles, so if I said we needed "public transit vehicles" everyone thought I wanted M1A1 Abrams for civilian transit. That's what it's like talking about AI.
End ID]]
#I really like the closing analogy and the points being made here and I'd take the criticism of the lack of specificity even farther
#Like the generative nature is not the problem you can throw a bunch of texts into a small model you can train on your local computer and
#make weird robot poetry that you execute your human curation skills on in order to find stuff worth sharing
#that can be a worthwhile artistic endeavor that's generative use of computational models and even doing the same sort of
#mathematical recombining
#but you can do it in a way that's intentional and transformative and doesn't burn through any more power than routine computer tasks
#in the year of our lord 2024
#if you use a small enough set of texts and you're familiar with them you can spot plagiarism pretty easily
#this was like a really common toy exercise for artsy or lit-loving folks in CS for years to dick around with the works of an author or two
#anyways as someone who's had their finger on this pulse since before the chatgpt explosion
#I still think that the problem has to do with the ease of interacting with an overpowered imperfect world-burning computer program
#that will produce good-SEEMING results with absolutely no training from the operator
#and i wish we had a name for the bad ones that focused on that
#generative AI is too kind of a name for things like chatgpt
#it lumps things that are actually useful (and old) in with things that are problematic
#it's not calling out the problem and that's why the proponents of chatgpt are still okay with it
#They're live tanks that look like fisher price cars
#they're unregulated cartoon vapes
#they're a brain surgery for dummies book
#they're an unmarked button in your car that fires a cannon out of the top
#fwiw I think the obfuscation is also coming more from a place of the AI bros wanting to steal legitimacy from scientific fields
#wanting to make their current giant black box toy language models look older and more researched than they are
#which I suppose is more or less what OP is saying
#but phrased in a way that makes AI bros sound less like chessmasters
#which i think is a useful exercise
#long post
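The local, low-stakes "toy exercise" described in the tags above has a classic instance: a word-level Markov chain built from one or two texts you know well, run on your own machine, with the human doing the curation afterward. Below is a minimal sketch of that kind of generator, assuming Python; the corpus file name, the order-2 prefix length, and the output lengths are illustrative choices, not anything specified in the original post.

```python
# A minimal sketch of the kind of "toy exercise" the tags describe:
# a word-level Markov chain trained on a small local corpus, sampled
# a few times, with the human curating the output afterward.
# The file name "dickinson.txt" and the order/length values are
# illustrative assumptions.

import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words observed right after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=40, seed=None):
    """Walk the chain from a random prefix, emitting roughly `length` words."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain.keys()))
    output = list(prefix)
    for _ in range(length - len(prefix)):
        followers = chain.get(tuple(output[-len(prefix):]))
        if not followers:  # dead end: re-seed from another random prefix
            followers = chain[rng.choice(list(chain.keys()))]
        output.append(rng.choice(followers))
    return " ".join(output)

if __name__ == "__main__":
    # Hypothetical corpus: a small set of texts the operator actually knows,
    # so verbatim runs lifted from the source are easy to spot by eye.
    with open("dickinson.txt") as f:
        corpus = f.read()
    chain = build_chain(corpus, order=2)
    for _ in range(5):
        print(generate(chain, length=40), "\n")
```

On a corpus the size of a book or two this builds and samples in well under a second on an ordinary laptop, which is the tags' point about power use, and since the chain can only recombine words it has actually seen, plagiarized stretches of a familiar source stand out immediately.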