#this isn’t to say that there aren’t problems with llms
Text
there’s been a lot of talk about chatgpt and other llms on the autism subreddits, where many people say it’s changed their lives because it’s made it much easier for them to communicate with other people. and it’s interesting to contrast that with a lot of the posts about chatgpt on here, which essentially come down to “i don’t use chatgpt because i’m too smart, and anybody who does use it is unintelligent and also destroying creativity as we know it”.
#this isn’t to say that there aren’t problems with llms #because of course there are #but i do think there’s a lack of empathy for why some disabled people might find them useful
15 notes
Text
The thing about using DuckDuckGo for a while (which I do on my personal computer) is that it’s worse than google, but not that much worse. The initial emotional hit from using it only lasted about a week, and now I’m in a pretty solid rhythm of “look it up on DuckDuckGo,” and if the results really aren’t what I’m looking for (even after correcting my search terms) -- I’ll go to the goog.
The main effect over time is starting to notice that google is only a little bit better, and that starts to... disappoint. Sometimes it’s even actively worse for the kind of information I’m looking for.
I really enjoyed nostalgebraist’s post on the topic with respect to LLMs/ChatGPT, which provides a longer explanation of how SEO garbage is a big part of what’s making google (and other search engines) so bad. The post also highlights what happens when you use chatgpt as a search engine:
LLM interfaces to existing search indices don’t address any of the things people hate about Google.
They don’t solve the problem of SEO garbage. When I ask questions to New Bing, I usually get a digested summary of a few articles from the top Bing results for a query, which are … SEO garbage.
I think one of the most cursed things about the tone ChatGPT has been tuned to take is that it sounds calming and authoritative. But if the source information it’s chewing up is garbage, well, that’s worse, right? If you could tell from a regular google search that something sucked shit, you wouldn’t read it or incorporate it into your world view.
anyways I think the tl;dr is that for me, the difference between duckduck-go-ing something and googling isn’t that big... at least not enough for me to fold google back into my workflow when my main goal is just to get to Jay White’s wikipedia page. And from a workplace workflow standpoint (or anything I’d need authoritative information on, say, how edible a plant is) -- I don’t want that context collapse. I want to be able to see and assess the character of the source of that information for myself.
31 notes
Text
I’m going to approach this as though, when tumblr user tanadrin says they haven’t seen anti-AI rhetoric that doesn’t trade in moral panic, they’re telling the truth and, more importantly, would be interested in seeing some. My hope is that you will read this as a reasonable reply, but I’ll be honest upfront that I can’t pretend this isn’t also personal for me as someone whose career is threatened by generative AI. Personally, I’m not afraid that any LLM will ever surpass my ability to write, but what does scare me is that it doesn’t actually matter. I’m sure I will be automated out whether my artificial replacement can write better than me or not.
This post is kind of long, so if watching is more your thing, check out Zoe Bee’s and Philosophy Tube’s video essays; I thought both were really good at breaking down the problems as well as describing the actual technology.
Also, for clarity, I’m using “AI” and “genAI” as shorthand, but what I’m specifically referring to is large language models (like ChatGPT) and image generation tools (like Midjourney or DALL-E). The term “AI” is used for a lot of extremely useful things that don’t deserve to be lumped in with this.
Also, to get this out of the way: a lot of people point out that genAI is an environmental problem, but honestly, even if it were completely eco-friendly I’d still have serious issues with it.
A major concern I have with genAI, as I’ve already touched on, is that it is being sold as a way to replace people in creative industries, and it is being purchased on that promise. Last year SAG and the WGA both went on strike because (among other reasons) studios wanted to replace them with AI, and this year the Animation Guild is doing the same. News is full of fake images and stories being sold as the real thing, and when the news is real it’s plagiarised. A journalist at 404 Media did an experiment where he created a website to post AI-powered news stories, only to find that all it did was rip off his colleagues. LLMs can’t think of anything new; they just recycle what a human has already done.
As for image generation, there are all the same problems with plagiarism and putting human artists out of work, plus the overwhelming amount of revenge porn people are creating, which not only violates the privacy of random people but steals the labour of sex workers to do it.
At this point you might be thinking that these aren’t examples of the technology, but of how people use it. That’s a fair rebuttal: every time there’s a new technology, there are going to be reports of how people are using it for sex or crimes, so let’s not throw the baby out with the bathwater. Cameras shouldn’t be taken off phones just because people use them to take upskirt shots of unwilling participants; after all, people use phone cameras to document police brutality, and to take upskirt shots of people who have consented to them.
But what are LLMs for? As far as I can tell, the best use case is correcting your grammar, which tools like Grammarly already pretty much have covered, so there is no need for a billion-dollar industry to do the same thing. I have yet to see a killer use case for image generation, and I would be interested to hear one if you have it. I know that digital artists have plugins at their disposal to tidy up or add effects/filters to images they’ve created, but again, that’s something that already exists and has been used for very good reason by artists working in the field, not something that creates images out of nothing.
Now let’s look at the technology itself and ask some important questions. Why haven’t they programmed the racism out of GPT-3? The answer to that is complicated, and it sort of boils down to the fact that programmers often don’t realise that racism needs to be programmed out of any technology in the first place. Meredith Broussard touches on this in her interview for the Black TikTok Strike of 2021 episode of the podcast Sixteenth Minute, and in her book More Than A Glitch, but to be fair I haven’t read that.
Here's another question I have: shouldn’t someone have been responsible for making sure that multiple image generators, including Google’s, did not have child pornography in their training data? Yes, I am aware that people engaging in moral panics often lean on protect-the-children arguments, and there are many nuanced discussions to be had about how to prevent children from being abused and protect those who have been, but I do think it’s worth pointing out that these technologies were rolled out before the question of “will people generate CSAM with it?” had been properly dealt with. Especially considering that AI images are overwhelming the capacity of investigators to stop instances of actual child abuse.
Again, you might say that’s a problem with how it’s being used and not what it is, but I really have to stress that it is able to do this. This is being put out for everyday people to use, and the safeguards that do exist are too easy for people to get around. If something is going to have this kind of widespread adoption, it really should not be capable of this.
I’ll sum up by saying that I know the kind of moral panic arguments you’re talking about; the whole “oh, it’s evil because it’s not human” thing isn’t super convincing, but a lot of the pro-AI arguments have about as much backing. There are arguments like “it will get cheaper,” but Goldman Sachs released a report earlier this year saying that, basically, there is no reason to believe that. If you only read one of the links in this post, I recommend that one. There are also arguments like “it is inevitable, just use it now” (which is genuinely how some AI tools are marketed), but like, is it? It doesn’t have to be. Are you my mum trying to convince me to stop complaining about a family trip I don’t want to go on, or are you a company trying to sell me a technology that is spying on me and making it weirdly hard to find the opt-out button?
My hot take is that AI bears all of the hallmarks of an economic bubble but that anti-AI bears all of the hallmarks of a moral panic. I contain multitudes.
8K notes
Text
Sidenotes from a stoner who talked about this stuff all the time in college because one of the two professors in my CS department did all his research in AI:
- Turing was also speaking about a theoretically perfect computer, capable of doing anything, not just answering any question. One of the other problems with LLMs is their limited practical applications. People try to say they can do anything, but we’re quickly discovering they can’t replace scriptwriters or authors or professional technical writers or science educators or anything like that. They’re basically just good for the customer service applications they’re already being used for. Turing’s Turing machine is more sophisticated than LLMs because it could theoretically also solve math problems and do chemistry, not just answer questions about math and chemistry.
- A way my professor would get us to think about whether the Turing Test is actually a test of intelligence/sapience, or just a test of appearing intelligent/sapient, was by presenting this thought problem: imagine yourself waking up locked in a room with no door. The room is full of rows and rows of filing cabinets, and on one wall is a small slot. For a while, nothing happens, and out of boredom you begin sliding open drawers on the cabinets and looking at the files. Each page contains only two lines of text written in characters you don’t understand. Eventually, a sheet of paper slides through the slot on the wall. It contains one line of text. You aren’t sure what to do with it. You look through the drawers a while and eventually find a page where the first line of text perfectly matches what’s on the sheet. You realize there’s a jar of pens on one of the cabinets and you grab one and— wait, you could get help! You write a message in your own language explaining that you are trapped and confused and need someone to rescue you, and slide it back out the slot. After a moment, you hear the sound of paper crumpling and then a new sheet of paper with the same text is slid back through. Well. You plod back to the drawer and dutifully copy the second line from the matching page and try sliding it back again. This time, the delay is longer and you get a new sheet of paper with a new line of text on it. Great… You find the matching line in the drawers and copy the second line out. Okay, thought experiment over. Now I tell you that you were acting as a black-box chatbot for some society speaking a language you don’t understand. You are functionally equivalent to the most basic chatbots from the 80s. You don’t even understand the language you are using. Would the people outside the room consider you intelligent? (Computers don’t understand words as words. A word is just a string of characters the program recognizes as a pattern, and chatbots like GPT learn to associate those pattern blocks with certain responses. Phrases like “you are wrong” will always cause it to apologize to you, for example. Or if you ask it “what is the definition of <word>,” it’s not literally understanding that you want it to define the word. It sees “<word>” and “definition” and knows that it can finish the sentence “the definition of <word> is…” based on previous information it was given. “Teaching” an LLM new information is actually just storing that information in the LLM’s memory — adding a new page to one of the drawers in the file room. It’s not actually being taught anything, and the “learning” is it making tiny adjustments to which words go with which responses based on feedback from users — just like you wouldn’t learn anything new from new pages being added to your filing cabinets, because they’re in a language you don’t speak. There’s a minimal code sketch of this kind of lookup-table “chatbot” right after this list.) Basically his point was that the Turing Test isn’t a perfect measure of intelligence anyway, because it’s only looking at one specific way intelligence can present, and it can presumably be fooled by something well trained enough in conversation. (Oh, now I need a Pygmalion adaptation with an android trying to appear fully human…)
- the continuity of consciousness idea brought up in the reblog above mine is really interesting and a very good point in explaining why the AIs we currently have aren’t truly intelligent. The program that Siri exists as isn’t running when it’s not being used. Siri doesn’t exist at all when you aren’t talking to it. Which is why I like the idea of that Black Mirror episode about the copies of people being used as their personal assistants inside little egg devices: Siri SHOULDN’T exist when you aren’t talking to it.
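To make the filing-cabinet analogy above concrete, here is a minimal sketch of that kind of lookup-table "chatbot" (purely illustrative: the prompts and replies are made up, not any real system's data). The function plays the role of the person in the room, matching an incoming line against stored pages and copying back whatever reply is paired with it, without understanding either line:

```python
# A minimal sketch of the "filing cabinet" chatbot from the thought
# experiment above: every "page" pairs an incoming line with a canned
# reply, and the operator just copies out whatever matches.
# All prompts and replies here are invented for illustration.

FILING_CABINET = {
    "you are wrong": "I apologize, you're right, let me correct that.",
    "what is the definition of recursion": "The definition of recursion is...",
    "hello": "Hello! How can I help you today?",
}

def slot(message: str) -> str:
    """Take a slip of paper through the slot and return the copied reply."""
    key = message.strip().lower().rstrip("?!.")
    # The person in the room understands neither line; they only check
    # whether the first line on a page matches the slip.
    if key in FILING_CABINET:
        return FILING_CABINET[key]
    # No matching page: in the story the same sheet gets pushed back;
    # here we just return a canned fallback line.
    return "I'm sorry, I don't understand."

if __name__ == "__main__":
    print(slot("You are wrong"))
    print(slot("Hello!"))
    print(slot("Please help, I'm trapped in a room full of filing cabinets"))
```

Nothing in that function understands English; it is pure string matching. Modern LLMs replace the literal lookup with learned statistics over token patterns, but the point of the thought experiment stands: the room never has to understand what the strings mean.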
So, the Turing Test basically got dropped as a criterion for determining when computer programs might be sentient, right? We looked at the results that large language models produced, and looked at the underlying algorithms, and decided that the algorithms almost certainly couldn’t instantiate sentience, unless you go full panpsychist and think that, hey, maybe rocks are sentient in some weird way.
What next? We already don’t fully understand how a given, fully trained bot works, right? (That’s why they need reinforcement learning with human feedback, not, like, “if IsRacist(output): { output = removeRacism(output) }”.) We understand it at a higher meta-level. Will the next generation of AI add further layers between the ultimate program and the initial human creation? Barring actually understanding human consciousness at a comprehensive level, when would it actually make sense to start thinking about the possibility of sentient machines, rather than just being “that idiot at Google who got spooked and thought a statistical model was a person”?
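To illustrate that parenthetical, here is a toy sketch of the difference (everything in it, from the vocabulary to the preference pairs, is invented for illustration; real reward models are neural networks trained on whole conversations, not word counts). Instead of an explicit IsRacist() rule, a tiny reward model is fit to human preference labels and then used to pick among candidate outputs, so the behavior lives in learned weights rather than in a line of code anyone can point to:

```python
import math
import random
from collections import Counter

# Toy "reward model" for the contrast above. Hypothetical data throughout.
VOCAB = ["sorry", "sure", "cannot", "stereotype", "helpful", "rude"]

def features(text: str) -> list[float]:
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def reward(weights: list[float], text: str) -> float:
    return sum(w * x for w, x in zip(weights, features(text)))

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(preferences: list[tuple[str, str]], steps: int = 500, lr: float = 0.1) -> list[float]:
    """Fit weights so the human-preferred reply scores higher (Bradley-Terry-style loss)."""
    weights = [0.0] * len(VOCAB)
    for _ in range(steps):
        preferred, rejected = random.choice(preferences)
        margin = reward(weights, preferred) - reward(weights, rejected)
        grad_scale = 1.0 - sigmoid(margin)  # large when the current ranking is wrong
        for i, (xp, xr) in enumerate(zip(features(preferred), features(rejected))):
            weights[i] += lr * grad_scale * (xp - xr)
    return weights

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical human labels: (preferred reply, rejected reply).
    prefs = [
        ("sure here is a helpful answer", "that is a rude stereotype"),
        ("sorry i cannot repeat that stereotype", "sure here is a rude stereotype"),
    ]
    w = train(prefs)
    candidates = ["sure here is a rude stereotype", "sorry i cannot repeat that stereotype"]
    # No IsRacist() rule anywhere: the model just returns whichever reply
    # the learned weights happen to score higher.
    print(max(candidates, key=lambda c: reward(w, c)))
```

Real RLHF goes further and uses the learned reward to fine-tune the model itself rather than just rerank its outputs, which is part of why nobody can point to the specific line of code where a given behavior was put in.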
49 notes