# Generative Chatbot
neurobridgeminsky · 5 months ago
Text
0 notes
purpleartrowboat · 1 year ago
Text
ai makes everything so boring. deepfakes will never be as funny as clipping together presidential speeches. ai covers will never be as funny as imitating the character. ai art will never be as good as art drawn by humans. ai chats will never be as good as roleplaying with other people. ai writing will never be as good as real authors
28K notes · View notes
aiweirdness · 2 years ago
Text
Tumblr media
When questioned, chatgpt doubles down on how it is definitely correct.
Tumblr media
But it's not relying on some weird glitchy interpretation of the art itself, a la adversarial turtle-gun. It just reports the drawing as definitely being of the word "lies" because that kind of self-consistency is what would happen in the kind of human-human conversations in its internet training data. I tested this by starting a brand new chat and then asking it what the art from the previous chat said.
Tumblr media
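The fresh-chat test works because chat-style LLMs are stateless: each request contains only the messages you send along with it. A minimal sketch of why a new chat can't "remember" the earlier art (the message format is modeled on common chat APIs; no real model is called, and all the content strings are illustrative):

```python
# Each "chat" is just a list of messages; the model sees only what's in
# the list it receives. A brand-new chat therefore carries nothing over.

def build_request(history, user_message):
    """Assemble the full context a chat model would actually receive."""
    return history + [{"role": "user", "content": user_message}]

# Chat 1: the model produced ASCII art and insisted it read "lies".
chat_1 = build_request(
    [{"role": "user", "content": "Draw the word LIES in ASCII art."},
     {"role": "assistant", "content": "(illegible art) This clearly says 'lies'."}],
    "Are you sure that's legible?",
)

# Chat 2: a fresh conversation starts with empty history, so the art
# from chat 1 simply isn't in the context the model gets to read.
chat_2 = build_request([], "What did the ASCII art from our previous chat say?")

assert len(chat_2) == 1  # nothing but the new question
```

With no art in its context, the only way a model can "answer" chat 2's question is to make something up, which is exactly what it does.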
Google's Bard, on the other hand, interprets it differently:
Tumblr media
Bard has the same tendency to generate illegible ASCII art and then praise its legibility, except in its case, all its art is cows.
Tumblr media
Not to be outdone, Bing chat (GPT-4) will also praise its own ASCII art - once you get it to admit it can even generate and rate ASCII art. For the "balanced" and "precise" versions I had to make my request all fancy and quantitative.
Tumblr media
With Bing chat I wasn't able to ask it to read its own ASCII art because it strips out all the formatting, rendering the art illegible - oh wait, no, even the "precise" version tries to read it anyways.
Tumblr media
These language models are so unmoored from the truth that it's astonishing that people are marketing them as search engines.
More at AI Weirdness
6K notes · View notes
mostlysignssomeportents · 1 year ago
Text
How plausible sentence generators are changing the bullshit wars
Tumblr media
This Friday (September 8) at 10hPT/17hUK, I'm livestreaming "How To Dismantle the Internet" with Intelligence Squared.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
Tumblr media
In my latest Locus Magazine column, "Plausible Sentence Generators," I describe how I unwittingly came to use – and even be impressed by – an AI chatbot – and what this means for a specialized, highly salient form of writing, namely, "bullshit":
https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/
Here's what happened: I got stranded at JFK due to heavy weather and an air-traffic control tower fire that locked down every westbound flight on the east coast. The American Airlines agent told me to try going standby the next morning, and advised that if I booked a hotel and saved my taxi receipts, I would get reimbursed when I got home to LA.
But when I got home, the airline's reps told me they would absolutely not reimburse me, that this was their policy, and they didn't care that their representative had promised they'd make me whole. This was so frustrating that I decided to take the airline to small claims court: I'm no lawyer, but I know that a contract takes place when an offer is made and accepted, and so I had a contract, and AA was violating it, and stiffing me for over $400.
The problem was that I didn't know anything about filing a small claim. I've been ripped off by lots of large American businesses, but none had pissed me off enough to sue – until American broke its contract with me.
So I googled it. I found a website that gave step-by-step instructions, starting with sending a "final demand" letter to the airline's business office. They offered to help me write the letter, and so I clicked and I typed and I wrote a pretty stern legal letter.
Now, I'm not a lawyer, but I have worked for a campaigning law-firm for over 20 years, and I've spent the same amount of time writing about the sins of the rich and powerful. I've seen a lot of threats, both those received by our clients and sent to me.
I've been threatened by everyone from Gwyneth Paltrow to Ralph Lauren to the Sacklers. I've been threatened by lawyers representing the billionaire who owned NSO Group, the notorious cyber arms-dealer. I even got a series of vicious, baseless threats from lawyers representing LAX's private terminal.
So I know a thing or two about writing a legal threat! I gave it a good effort and then submitted the form, and got a message asking me to wait for a minute or two. A couple minutes later, the form returned a new version of my letter, expanded and augmented. Now, my letter was a little scary – but this version was bowel-looseningly terrifying.
I had unwittingly used a chatbot. The website had fed my letter to a Large Language Model, likely ChatGPT, with a prompt like, "Make this into an aggressive, bullying legal threat." The chatbot obliged.
I don't think much of LLMs. After you get past the initial party trick of getting something like, "instructions for removing a grilled-cheese sandwich from a VCR in the style of the King James Bible," the novelty wears thin:
https://www.emergentmind.com/posts/write-a-biblical-verse-in-the-style-of-the-king-james
Yes, science fiction magazines are inundated with LLM-written short stories, but the problem there isn't merely the overwhelming quantity of machine-generated stories – it's also that they suck. They're bad stories:
https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt-magazine-clarkesworld-artificial-intelligence
LLMs generate naturalistic prose. This is an impressive technical feat, and the details are genuinely fascinating. This series by Ben Levinstein is a must-read peek under the hood:
https://benlevinstein.substack.com/p/how-to-think-about-large-language
But "naturalistic prose" isn't necessarily good prose. A lot of naturalistic language is awful. In particular, legal documents are fucking terrible. Lawyers affect a stilted, stylized language that is both officious and obfuscated.
The LLM I accidentally used to rewrite my legal threat transmuted my own prose into something that reads like it was written by a $600/hour paralegal working for a $1500/hour partner at a white-shoe law firm. As such, it sends a signal: "The person who commissioned this letter is so angry at you that they are willing to spend $600 to get you to cough up the $400 you owe them. Moreover, they are so well-resourced that they can afford to pursue this claim beyond any rational economic basis."
Let's be clear here: these kinds of lawyer letters aren't good writing; they're a highly specific form of bad writing. The point of this letter isn't to parse the text, it's to send a signal. If the letter was well-written, it wouldn't send the right signal. For the letter to work, it has to read like it was written by someone whose prose-sense was irreparably damaged by a legal education.
Here's the thing: the fact that an LLM can manufacture this once-expensive signal for free means that the signal's meaning will shortly change, forever. Once companies realize that this kind of letter can be generated on demand, it will cease to mean, "You are dealing with a furious, vindictive rich person." It will come to mean, "You are dealing with someone who knows how to type 'generate legal threat' into a search box."
Legal threat letters are in a class of language formally called "bullshit":
https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit
LLMs may not be good at generating science fiction short stories, but they're excellent at generating bullshit. For example, a university prof friend of mine admits that they and all their colleagues are now writing grad student recommendation letters by feeding a few bullet points to an LLM, which inflates them with bullshit, adding puffery to swell those bullet points into lengthy paragraphs.
Naturally, the next stage is that profs on the receiving end of these recommendation letters will ask another LLM to summarize them by reducing them to a few bullet points. This is next-level bullshit: a few easily-grasped points are turned into a florid sheet of nonsense, which is then reconverted into a few bullet-points again, though these may only be tangentially related to the original.
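The round trip is easy to simulate. A toy sketch, with pure string templating standing in for the LLM (the puffery template and function names are invented for illustration; a real model would paraphrase far more aggressively, making the loss worse):

```python
# Inflate bullet points into puffery, then "summarize" the result back
# into bullets, and see what survives the round trip.

PUFFERY = ("It is with genuine enthusiasm that I report that the candidate "
           "{point}; in my considerable experience this is exceedingly rare.")

def inflate(bullets):
    """Swell terse bullet points into a florid paragraph."""
    return " ".join(PUFFERY.format(point=b) for b in bullets)

def deflate(letter, n_bullets):
    """A crude summarizer: pull one clause back out of each sentence."""
    sentences = [s for s in letter.split(".") if s.strip()]
    return [s.split(";")[0].strip() for s in sentences[:n_bullets]]

bullets = ["writes clean code", "mentors junior students"]
recovered = deflate(inflate(bullets), len(bullets))

# The recovered "bullets" still contain the originals, but buried in
# boilerplate - only tangentially related to what the prof meant.
print(recovered)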
What comes next? The reference letter becomes a useless signal. It goes from being a thing that a prof has to really believe in you to produce, whose mere existence is thus significant, to a thing that can be produced with the click of a button, and then it signifies nothing.
We've been through this before. It used to be that sending a letter to your legislative representative meant a lot. Then, automated internet forms produced by activists like me made it far easier to send those letters and lawmakers stopped taking them so seriously. So we created automatic dialers to let you phone your lawmakers, this being another once-powerful signal. Lowering the cost of making the phone call inevitably made the phone call mean less.
Today, we are in a war over signals. The actors and writers who've trudged through the heat-dome up and down the sidewalks in front of the studios in my neighborhood are sending a very powerful signal. The fact that they're fighting to prevent their industry from being enshittified by plausible sentence generators that can produce bullshit on demand makes their fight especially important.
Chatbots are the nuclear weapons of the bullshit wars. Want to generate 2,000 words of nonsense about "the first time I ate an egg," to run overtop of an omelet recipe you're hoping to make the number one Google result? ChatGPT has you covered. Want to generate fake complaints or fake positive reviews? The Stochastic Parrot will produce 'em all day long.
As I wrote for Locus: "None of this prose is good, none of it is really socially useful, but there’s demand for it. Ironically, the more bullshit there is, the more bullshit filters there are, and this requires still more bullshit to overcome it."
Meanwhile, AA still hasn't answered my letter, and to be honest, I'm so sick of bullshit I can't be bothered to sue them anymore. I suppose that's what they were counting on.
Tumblr media Tumblr media Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/09/07/govern-yourself-accordingly/#robolawyers
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
2K notes · View notes
bluebellhairpin · 1 year ago
Text
Woke up in the middle of the night with the realization that the reason people like AI so much is that they're addicted to instant gratification: they're spoilt, entitled brats who don't have the patience to wait on real creators.
They can't wait a few hours for a human rp partner to reply so they use an AI Chat bot.
They can't comment what they liked about a fanfic and wait a few days for an author to write the next part so they use an AI generator.
They won't let real artists put in the time required to create something breathtaking so they use an AI generator.
They can't wait a few years for a truly good show or movie to be produced or paid for properly so they go to AI.
All for what? Because they can't wait. Because they have no patience. Because they're greedy and ungrateful. And you want to know what will happen then? AI will stop working. Real creators will get sick of being called obsolete and suddenly the water to power the mill will stop flowing. AI can't make anything new if it isn't being fed. All this art and music and writing that you're giving it will run out, and AI will start repeating stuff.
All your AI Bots will sound the same.
All your fics will read the same.
All your art will look the same.
And all your shows and movies will play the same old blockbuster plots until it's bled dry.
You will destroy it yourself and all we have to do is wait. And you know what? It will work. It will eat itself alive. And I will point and laugh because unlike you I AM patient. I can wait. Instant gratification isn't important to me because in the long run, it means shit. Seeing you all burn fandoms alive with your greed - the same greed the WGA and SAG are striking against - will bring me more satisfaction than anything any AI can 'remake'.
519 notes · View notes
king-crawler · 2 months ago
Note
how would you feel if i made a king candy bot on janitor ai
If I'm being honest I don't encourage using ANY AI chatbots because that stuff can be seriously addicting and it's really wasteful :[
I'm not judging if you do decide to use them; it's your choice, after all. But it should at least be an educated choice. Please keep in mind that AI companies are scum. They prey on our emotional attachment to fiction and they're only going to get better at it. And if that sounds scary, it's because it is!
Anyways if I deleted my character AI account after getting addicted you can be free too ❤️ Only together can we avoid living in a soulless AI entertainment slop dystopia. Read fanfic made by humans or write your own :]
96 notes · View notes
rebeccathenaturalist · 1 day ago
Text
So you may have seen my posts about AI foraging guides, or watched the mini-class I have up on YouTube on what I found inside of them. Apparently the intersection of AI and foraging has gotten even worse: a chatbot joined a mushroom foraging group on Facebook and immediately suggested ways people could cook a toxic species:
Tumblr media
First, and most concerningly, this once again reinforces how much we should NOT be trusting AI to tell us what mushrooms are safe to eat. While these systems can compile the information that's fed to them and regurgitate it in a somewhat orderly manner, that is not the same as a living human being who has the critical thinking skills to determine the veracity of a given piece of information, the physical senses to examine a mushroom, or the ability to directly experience the act of foraging. These skills and experiences are absolutely crucial to being a reliable forager, particularly one who may be passing on information to others.
We already have enough trouble with inaccurate info in the foraging community, and with trying to ride herd on both the misinformed and the bad actors. This AI was presented as the first chat option for any group member seeking answers, which is just going to make things tougher for those wanting to keep people from accidentally poisoning themselves. Moreover, chatbots like this one are routinely trained on, and grab information from, copyrighted sources without crediting the original authors. Anyone who's ever written a junior-high-level essay knows that you have to cite your sources even if you rewrite the information; otherwise it's just plagiarism.
Fungi Friend is yet one more example of how generative AI has been anything but a positive development on multiple levels.
84 notes · View notes
queenjena1 · 11 months ago
Text
Tumblr media
185 notes · View notes
stardevlin · 9 months ago
Text
these ai's are some of the most foul specimens ive ever talked to
Tumblr media
111 notes · View notes
marlynnofmany · 2 months ago
Text
Tumblr media
Glad you guys liked that one. Sorry about the reverse rickroll.
31 notes · View notes
aiweirdness · 1 year ago
Note
I discovered I can make chatgpt hallucinate tumblr memes:
Tumblr media Tumblr media
This is hilarious and also I have just confirmed that GPT-4 does this too.
Tumblr media Tumblr media Tumblr media
Bard even adds dates and user names and timelines, as well as typical usage suggestions. Its descriptions were boring and wordy so I will summarize with a timeline:
Tumblr media
I think this one was my favorite:
Tumblr media
Finding whatever you ask for, even if it doesn't exist, isn't ideal behavior for chatbots that people are using to retrieve and summarize information. It's like weaponized confirmation bias.
Tumblr media
more at aiweirdness.com
1K notes · View notes
food-theorys-blog · 4 months ago
Text
"oh but i use character ai's for my comfort tho" fanfics.
"but i wanna talk to the character" roleplaying.
"but that's so embarrassing to roleplay with someone😳" use ur imagination. or learn to not be embarrassed about it.
stop fucking feeding ai i beg of you. they're replacing both writers AND artists. it's not a one way street where only artists are being affected.
45 notes · View notes
Text
Y’all, not calling anyone out, but I spend enough time on character.ai to know if you’ve used it to write your fanfiction for you…
21 notes · View notes
yuyuonabeat · 5 months ago
Text
I hate when it does that.
IM TRYNNA FUCK MY MAN
Tumblr media
Nah you wait till I fucking lose my shit. Just let me talk to him please😩 it was getting so interesting.
41 notes · View notes
redwinterroses · 1 year ago
Text
So I was curious to see how much scraping AI had done of the mcyt side of AO3, so I clenched my teeth, went to a ChatGPT knockoff and told it to "write me a Hermitcraft fanfic about Grian's backstory and true name."
My logic was that "Xelqua" and the Watchers are pretty fanfic specific and not likely to have been pulled from sources like wikis or any place outside of something like AO3. I figured if it brought up Xelqua or anything about masks, etc., that would be a pretty good indicator that it had scraped mcyt fics.
This is what I got, and I can't stop laughing.
Tumblr media
(image id in alt)
GREGORY.
Forget Xelqua, that's old news. Grian's true name, "brimming with deep meaning and symbolism", is Gregory.
ignoring for the moment the fact that the name gregory means... um. it means 'watchful.' just...we're just ignoring that.
...I think I'm going to take this as evidence that at least the mcyt side of AO3 maybe hasn't been thoroughly scraped just yet.
126 notes · View notes