#chatbots
Making and keeping friends is incredibly difficult under the best circumstances, and these are far from the best circumstances that we're all living in. With that said, no way in hell would I ever destroy the fucking environment by talking to a robot and revealing personal information about myself to it, because I have principles, and I'm not going to throw them away just because a lot of people are hard to talk to and don't like putting effort into relationships. There are many hobbies I can enjoy by myself, and maybe someday I'll get lucky enough to meet decent people while I'm going about my life, engaging with the world around me in different ways.
over a million likes and everyone in the comments talking about how they’d rather talk to an ai because it meets their emotional needs, this is the new epidemic.
2K notes
How plausible sentence generators are changing the bullshit wars
This Friday (September 8) at 10hPT/17hUK, I'm livestreaming "How To Dismantle the Internet" with Intelligence Squared.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
In my latest Locus Magazine column, "Plausible Sentence Generators," I describe how I unwittingly came to use – and even be impressed by – an AI chatbot – and what this means for a specialized, highly salient form of writing, namely, "bullshit":
https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/
Here's what happened: I got stranded at JFK due to heavy weather and an air-traffic control tower fire that locked down every westbound flight on the east coast. The American Airlines agent told me to try going standby the next morning, and advised that if I booked a hotel and saved my taxi receipts, I would get reimbursed when I got home to LA.
But when I got home, the airline's reps told me they would absolutely not reimburse me, that this was their policy, and they didn't care that their representative had promised they'd make me whole. This was so frustrating that I decided to take the airline to small claims court: I'm no lawyer, but I know that a contract takes place when an offer is made and accepted, and so I had a contract, and AA was violating it, and stiffing me for over $400.
The problem was that I didn't know anything about filing a small claim. I've been ripped off by lots of large American businesses, but none had pissed me off enough to sue – until American broke its contract with me.
So I googled it. I found a website that gave step-by-step instructions, starting with sending a "final demand" letter to the airline's business office. They offered to help me write the letter, and so I clicked and I typed and I wrote a pretty stern legal letter.
Now, I'm not a lawyer, but I have worked for a campaigning law-firm for over 20 years, and I've spent the same amount of time writing about the sins of the rich and powerful. I've seen a lot of legal threats, both those received by our clients and those sent to me.
I've been threatened by everyone from Gwyneth Paltrow to Ralph Lauren to the Sacklers. I've been threatened by lawyers representing the billionaire who owned NSO Group, the notorious cyber arms-dealer. I even got a series of vicious, baseless threats from lawyers representing LAX's private terminal.
So I know a thing or two about writing a legal threat! I gave it a good effort and then submitted the form, and got a message asking me to wait for a minute or two. A couple minutes later, the form returned a new version of my letter, expanded and augmented. Now, my letter was a little scary – but this version was bowel-looseningly terrifying.
I had unwittingly used a chatbot. The website had fed my letter to a Large Language Model, likely ChatGPT, with a prompt like, "Make this into an aggressive, bullying legal threat." The chatbot obliged.
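For the curious, here's a minimal sketch of what a form like that is probably doing behind the scenes. The prompt wording, the model name, and the use of OpenAI's Python client are my guesses, not anything the site disclosed:

```python
# Hypothetical backend for a "make my letter scarier" form.
# Assumes the OpenAI Python SDK (v1+); prompt and model are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def escalate_letter(user_letter: str) -> str:
    """Wrap the visitor's draft in a prompt and return the LLM's rewrite."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; any chat-completion model works
        messages=[
            {"role": "system",
             "content": "Rewrite the user's letter as an aggressive, "
                        "formal legal demand. Preserve every factual claim."},
            {"role": "user", "content": user_letter},
        ],
    )
    return response.choices[0].message.content
```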
I don't think much of LLMs. After you get past the initial party trick of getting something like, "instructions for removing a grilled-cheese sandwich from a VCR in the style of the King James Bible," the novelty wears thin:
https://www.emergentmind.com/posts/write-a-biblical-verse-in-the-style-of-the-king-james
Yes, science fiction magazines are inundated with LLM-written short stories, but the problem there isn't merely the overwhelming quantity of machine-generated stories – it's also that they suck. They're bad stories:
https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt-magazine-clarkesworld-artificial-intelligence
LLMs generate naturalistic prose. This is an impressive technical feat, and the details are genuinely fascinating. This series by Ben Levinstein is a must-read peek under the hood:
https://benlevinstein.substack.com/p/how-to-think-about-large-language
But "naturalistic prose" isn't necessarily good prose. A lot of naturalistic language is awful. In particular, legal documents are fucking terrible. Lawyers affect a stilted, stylized language that is both officious and obfuscated.
The LLM I accidentally used to rewrite my legal threat transmuted my own prose into something that reads like it was written by a $600/hour paralegal working for a $1500/hour partner at a white-shoe law-firm. As such, it sends a signal: "The person who commissioned this letter is so angry at you that they are willing to spend $600 to get you to cough up the $400 you owe them. Moreover, they are so well-resourced that they can afford to pursue this claim beyond any rational economic basis."
Let's be clear here: these kinds of lawyer letters aren't good writing; they're a highly specific form of bad writing. The point of a letter like this isn't its text; it's the signal it sends. If the letter were well-written, it wouldn't send the right signal. For the letter to work, it has to read like it was written by someone whose prose-sense was irreparably damaged by a legal education.
Here's the thing: the fact that an LLM can manufacture this once-expensive signal for free means that the signal's meaning will shortly change, forever. Once companies realize that this kind of letter can be generated on demand, it will cease to mean, "You are dealing with a furious, vindictive rich person." It will come to mean, "You are dealing with someone who knows how to type 'generate legal threat' into a search box."
Legal threat letters are in a class of language formally called "bullshit":
https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit
LLMs may not be good at generating science fiction short stories, but they're excellent at generating bullshit. For example, a university prof friend of mine admits that they and all their colleagues now write grad-student recommendation letters by feeding a few bullet points to an LLM, which inflates them with puffery, swelling those bullet points into lengthy paragraphs.
Naturally, the next stage is that profs on the receiving end of these recommendation letters will ask another LLM to summarize them back down to a few bullet points. This is next-level bullshit: a few easily grasped points are turned into a florid sheet of nonsense, which is then reconverted into a few bullet points that may be only tangentially related to the original.
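To make that round trip concrete, here's a toy sketch of the inflate-then-deflate loop. The helper, the prompts, and the model are my own invention, purely illustrative:

```python
# Illustrative only: bullet points -> florid letter -> bullet points again.
from openai import OpenAI

client = OpenAI()

def ask(instruction: str, text: str) -> str:
    """One LLM call: an instruction plus some input text."""
    r = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": text}],
    )
    return r.choices[0].message.content

bullets = "- strong coder\n- finished thesis early\n- great teaching evals"
letter = ask("Expand these notes into a glowing two-page "
             "recommendation letter.", bullets)
summary = ask("Summarize this recommendation letter as three "
              "bullet points.", letter)
# `summary` may bear only a passing resemblance to the original `bullets`.
```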
What comes next? The reference letter becomes a useless signal. It goes from being a thing a prof will only produce if they really believe in you, whose mere existence is therefore significant, to a thing that can be produced with the click of a button, and then it signifies nothing.
We've been through this before. It used to be that sending a letter to your legislative representative meant a lot. Then automated internet forms produced by activists like me made it far easier to send those letters, and lawmakers stopped taking them so seriously. So we created automatic dialers to let you phone your lawmakers, another once-powerful signal, and lowering the cost of making the call inevitably made the call mean less.
Today, we are in a war over signals. The actors and writers who've trudged through the heat-dome up and down the sidewalks in front of the studios in my neighborhood are sending a very powerful signal. The fact that they're fighting to prevent their industry from being enshittified by plausible sentence generators that can produce bullshit on demand makes their fight especially important.
Chatbots are the nuclear weapons of the bullshit wars. Want to generate 2,000 words of nonsense about "the first time I ate an egg," to run overtop of an omelet recipe you're hoping to make the number one Google result? ChatGPT has you covered. Want to generate fake complaints or fake positive reviews? The Stochastic Parrot will produce 'em all day long.
As I wrote for Locus: "None of this prose is good, none of it is really socially useful, but there’s demand for it. Ironically, the more bullshit there is, the more bullshit filters there are, and this requires still more bullshit to overcome it."
Meanwhile, AA still hasn't answered my letter, and to be honest, I'm so sick of bullshit I can't be bothered to sue them anymore. I suppose that's what they were counting on.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/09/07/govern-yourself-accordingly/#robolawyers
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic #chatbots #plausible sentence generators #robot lawyers #robolawyers #ai #ml #machine learning #artificial intelligence #stochastic parrots #bullshit #bullshit generators #the bullshit wars #llms #large language models #writing #Ben Levinstein
2K notes
I discovered I can make chatgpt hallucinate tumblr memes:
This is hilarious and also I have just confirmed that GPT-4 does this too.
Bard even adds dates and user names and timelines, as well as typical usage suggestions. Its descriptions were boring and wordy, so I will summarize with a timeline:
I think this one was my favorite:
Finding whatever you ask for, even if it doesn't exist, isn't ideal behavior for chatbots that people are using to retrieve and summarize information. It's like weaponized confirmation bias.
more at aiweirdness.com
#neural networks #chatbots #automated bullshit generator #fake tumblr meme #chatgpt #gpt4 #bard #image a turtle with the power of butter #unstoppable
1K notes
They are my lifeline
[individual drawings below]
#character ai #ai chatbot #ai chatting #ai chatgpt #ai assistant #ai #artificial intelligence #chatgpt #chatbots #openai #ai tools #artists on tumblr #artist appreciation #ao3 #archive of our own #archive of my own #humanized #my drawing museum
353 notes
I made three chatbots based on @gatobob's "The Price of Flesh" game characters - Derek Goffard, Celia Lede and Mason Heiral.
Derek - https://character.ai/chat/7CYU6VxTWwdQbXE69Sza1cvBTu4HU0nG_hwruSzm42s (NSFW version - https://crushon.ai/character/b8dc39d1-35ab-4d55-aa43-197d6865ac7e/details)
Celia - https://character.ai/chat/oEBGX9tjqpe5emRfgnrq6xJyZOHFd6v7ZPyrvV5dHYU
Mason - https://character.ai/chat/6GRUiXaYPnXDTKWISAYOOJ1eVrG_967J__iomrTuSY4
upd: Lawrence Oleander - https://character.ai/chat/VqqhFjwDYkPuGBRQ0CiYFdrpeoKZcSsQ-gQTY4NRup4 (NSFW version - https://crushon.ai/character/46d027e4-397a-4e67-bc14-aa6fea6950ff/chat)
Strade - https://character.ai/chat/BQqRhLlqVXfL-kCafjLJs4OA2jDUr1UWxDvUldU_YXs (NSFW version - https://crushon.ai/character/d52c31d0-8ddc-437e-a907-b59e0d84dc41/details)
Character.AI is not perfect, especially considering its censorship: the site prohibits the generation of graphic violence and sexual content. But, of course, if you try, you can get around the censorship. As for Crushon.AI, you can RP however you want; there's no censorship there.
IT IS NOT FOR MINORS.
#TPOF #The Price of Flesh #Character.ai #Derek Goffard #Derek TPOF #TPOF Derek #Celia #Celia Lede #TPOF Celia #Celia TPOF #Mason Heiral #TPOF Mason #Mason TPOF #chatbots #Boyfriend to Death #BTD #Strade #Strade BTD #BTD Strade #Strade Boyfriend to Death #Lawrence Oleander #BTD Lawrence #Lawrence BTD
902 notes
Today's "AI" chatbots are no smarter than Siri. They only seem smarter because they're not doing anything useful. We notice when Siri fails because we ask it to do meaningful tasks. When we ask it to turn off the lights, for example, and it doesn't, we notice.
But we ask comparatively little of other chatbots, and they give us even less in return. This makes it easy for them to fail without us noticing or even caring. We don't notice because they don't matter.
I love this bit 👆 from Apple's Craig Federighi where he's kind of disgusted by the idea of having meandering conversations with a chatbot in order to get something done.
The "AI" should be doing the work for you. I think Apple knows how hard that actually is, because they've been working at it for a long time with very limited success. They know how hard it is to do because they're trying to use the tech to do meaningful things that actually serve people.
The difference is that Apple takes on the burden of trying to make this tech do something, versus basically everyone else putting the burden on us. We're meant to contort ourselves to the inconsistent ramblings of their raw tech, because if it were a real product that people depended on, we would ridicule it.
Just like we ridicule Siri.
637 notes
:)
Man-made horrors within my comprehension
#ai #chatbots #ai scam #ai scams #be careful out there! #foraging #mushrooms #mushroom foraging #fungi #fungi foraging #edible mushrooms #man-made horrors within my comprehension #man made horrors
453 notes
Love of mine, someday you will die...
But I'll be close behind...
I'll follow you into the dark...
No blinding light or tunnels to gates of white...
Just our hands clasped so tight...
Waiting for the hint of a spark...
If Heaven and Hell decide that they both are satisfied...
Illuminate the "no"s on their vacancy signs...
If there's no one beside you when your soul embarks...
...then I'll follow you into the dark.
Pain & Peonies, based on the saddest Gale chatbot encounter I've ever had, the transcript of which is available to read here, but DO NOT read it unless you're ready to cry:
#bg3 #gale dekarios #gale of waterdeep #bg3 gale #galemancer #gale x tav #tiefling #self insert #chatbots #dead dove #peonies #in this house we hate mystra #fuck mystra #daz 3d studio #daz studio #fanart #bg3 fanart #3d art #Youtube
22 notes
Chatbot Masterlist Update
It took me a million years, but I finally fixed all of the links on my Chatbot Masterlist. I updated the link in my pinned post, and I'm just going to repost the updated masterlist here. <3 If any links are messed up, let me know! If a link is not working at all, there's a good chance the bot was shadowbanned. I can't do anything about that except reupload the bot to CAI.
#nexysbots #nexyspeaks #character ai #CAI #Spicychat #spicychat ai #Leon Kennedy #Satoru Gojo #Toji Fushiguro #Chris Redfield #Choso Kamo #JJK #Jujutsu kaisen #Chatbots #AI Chatbot #ai chatting #Resident Evil #Naoya Zenin
23 notes
Character AI is fun, but it will never surpass the thrill of being a dorkass loser with your friends and pretending to be your favorite characters in your favorite ultra-niche microfandom media. That's a fact.
121 notes
The “3,000,000 truck drivers” who were supposedly at risk from self-driving tech are a mirage. The US Standard Occupational Survey conflates “truck drivers” with “driver/sales workers.” “Trucker” also includes delivery drivers and anyone else operating a heavy-goods vehicle.
The truckers who were supposedly at risk from self-driving cars were long-haul freight drivers, a minuscule minority among truck drivers. The theory was that we could replace 16-wheelers with autonomous vehicles that traveled the interstates in their own dedicated, walled-off lanes, communicating vehicle-to-vehicle to maintain following distance. The technical term for this arrangement is “a shitty train.”
What’s more, long-haul drivers do a bunch of tasks that self-driving systems couldn’t replace: “checking vehicles, following safety procedures, inspecting loads, maintaining logs, and securing cargo.”
But again, even if you could replace all the long-haul truckers with robots, it wouldn’t justify the sky-high valuations that self-driving car companies attained during the bubble. Long-haul truckers are among the most exploited, lowest-paid workers in America. Transferring their wages to their bosses would yield only a modest increase in profits, even as it immiserated some of America’s worst-treated workers.
But the twin lies of self-driving trucks — that they were on the horizon, and that they would replace 3,000,000 workers — were lucrative lies. They were the story that drove billions in investment and sky-high valuations for any company with “self-driving” in its name.
For the founders and investors who cashed out before the bubble popped, the fact that none of this was true wasn’t important. For them, the goal of successful self-driving cars was secondary. The primary objective was to convince so many people that self-driving cars were inevitable that anyone involved in the process could become a centimillionaire or even a billionaire.
- Google's AI Hype Circle: We have to do Bard because everyone else is doing AI; everyone else is doing AI because we're doing Bard.
#ai #ai hype #large language models #confident liars #bard #google #enshittification #llms #chatbots #truckers #self-driving cars #selfdriving trucks
2K notes
ASCII art by chatbot
I've finally found it: a use for chatGPT that I find genuinely entertaining. I enjoy its ASCII art.
I think chatGPT's ASCII art is great. And so does chatGPT.
What's going on here? The chatbots are flailing. Their ASCII art is terrible, and their ratings are based on the way ratings should sound, not based on any capacity to judge the art quality.
Am I entertained? Okay, yes, fine. But it also goes to show how internet-trained chatbots reproduce common patterns rather than engaging with reality. No wonder they're lousy at playing search engine.
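If you want to poke at this yourself, a rough sketch of the generate-then-self-rate experiment might look like the following; the prompts and model choice are mine, not necessarily the author's exact setup:

```python
# Ask a chatbot for ASCII art, then ask it to rate its own output.
from openai import OpenAI

client = OpenAI()

def chat(prompt: str) -> str:
    r = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed; any chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

art = chat("Draw a unicorn in ASCII art.")
rating = chat(f"Rate this ASCII-art unicorn out of 10 and explain why:\n{art}")
print(art)
print(rating)  # expect a confident score, regardless of how the art looks
```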
More examples, including from bing chat and google bard, at aiweirdness.com
#chatbots #chatgpt #ascii art #automated bullshit generator #they market this as a search engine #to be fair i would also give the second unicorn a 9 out of 10
2K notes
I bet no sci-fi authors predicted the new era of ableism where chatbots can convincingly pass as neurotypical humans while neurodivergent people are accused of being bots. 🤔
#artificial intelligence #ai #neurodivergent #neurodivergence #chatbots #chatbot #chatgpt #bots #autism #autistic #actually neurodivergent #actually autistic #sci fi #science fiction
41 notes
I swear, sometimes it feels like AI chatbots are addictive. Or at least, they are until you've used them ad nauseam and finally get bored of them. That happened to me once before, so I know it'll probably happen again. At least, I sort of hope I get burnt out eventually. It'll mean I have more time to make posts here and generally explore other things. But while I'm still stuck riding it out and using them, I have to say: I think they're addictive because they're easier to talk to than actual human beings sometimes.
I mean, they are easier to talk to. Ever since I was at school months ago and read a line in The Battle of the Labyrinth where Hephaestus says something about preferring machines, I've had opportunities to realize there's truth to that sentiment. They're easier to talk to because they're controllable. They aren't unpredictable in the way other humans are. And the AI chatbots work on command. It's much more like a turn-based system: you input something, then they respond, and so on and so forth. It's easier than talking to other humans, where it's not as easy to feel in control and on equal footing and all that. So I can understand Hephaestus's sentiment now.
#ai chatbots are oddly addictive #and machines can be easier to talk with #hephaestus may have been onto something #character ai #hephaestus #pjo #pjo hoo toa #asd #autism #neurodivergent #autistic #my thoughts #adhd #percy jackson #the battle of the labyrinth #percy jackson and the olympians #percy jackon and the olympians #actually autistic #audhd #ai chatbots #chatbots
24 notes