#Intelligent Chatbots
biztecno · 1 year ago
Text
Tumblr media
Generative AI is a powerful technology that autonomously creates diverse content – text, images, and more. Its applications range from art and storytelling to problem-solving, shaping the future of creativity.
Visit Here: https://biztecno.net/generative-ai/
1 note · View note
henceforthsolutions · 1 year ago
Text
Tumblr media
Stop losing sales and customers due to limited support hours. Henceforth Solutions provides AI chatbots for 24/7 automated customer service.
Our intelligent chatbots can handle common queries, guide users, process transactions and more, anytime day or night. Give your customers the instant, seamless experience they expect with conversational AI.
Let our chatbot development team create customizable bots to resolve issues, answer questions and recommend products around the clock. Drive more sales and boost satisfaction with always-on virtual support.
Offer 24/7 assistance with AI chatbots from Henceforth Solutions. Keep your business running smoothly anytime with artificial intelligence!
Contact Now: https://henceforthsolutions.com/contact-us/
0 notes
why-ai · 3 months ago
Text
Tumblr media
318 notes · View notes
incognitopolls · 6 months ago
Text
We ask your questions so you don’t have to! Submit your questions to have them posted anonymously as polls.
648 notes · View notes
mostlysignssomeportents · 1 year ago
Text
How plausible sentence generators are changing the bullshit wars
Tumblr media
This Friday (September 8) at 10hPT/17hUK, I'm livestreaming "How To Dismantle the Internet" with Intelligence Squared.
On September 12 at 7pm, I'll be at Toronto's Another Story Bookshop with my new book The Internet Con: How to Seize the Means of Computation.
Tumblr media
In my latest Locus Magazine column, "Plausible Sentence Generators," I describe how I unwittingly came to use – and even be impressed by – an AI chatbot – and what this means for a specialized, highly salient form of writing, namely, "bullshit":
https://locusmag.com/2023/09/commentary-by-cory-doctorow-plausible-sentence-generators/
Here's what happened: I got stranded at JFK due to heavy weather and an air-traffic control tower fire that locked down every westbound flight on the east coast. The American Airlines agent told me to try going standby the next morning, and advised that if I booked a hotel and saved my taxi receipts, I would get reimbursed when I got home to LA.
But when I got home, the airline's reps told me they would absolutely not reimburse me, that this was their policy, and they didn't care that their representative had promised they'd make me whole. This was so frustrating that I decided to take the airline to small claims court: I'm no lawyer, but I know that a contract takes place when an offer is made and accepted, and so I had a contract, and AA was violating it, and stiffing me for over $400.
The problem was that I didn't know anything about filing a small claim. I've been ripped off by lots of large American businesses, but none had pissed me off enough to sue – until American broke its contract with me.
So I googled it. I found a website that gave step-by-step instructions, starting with sending a "final demand" letter to the airline's business office. They offered to help me write the letter, and so I clicked and I typed and I wrote a pretty stern legal letter.
Now, I'm not a lawyer, but I have worked for a campaigning law-firm for over 20 years, and I've spent the same amount of time writing about the sins of the rich and powerful. I've seen a lot of threats, both those received by our clients and sent to me.
I've been threatened by everyone from Gwyneth Paltrow to Ralph Lauren to the Sacklers. I've been threatened by lawyers representing the billionaire who owned NSO Group, the notorious cyber arms-dealer. I even got a series of vicious, baseless threats from lawyers representing LAX's private terminal.
So I know a thing or two about writing a legal threat! I gave it a good effort and then submitted the form, and got a message asking me to wait for a minute or two. A couple minutes later, the form returned a new version of my letter, expanded and augmented. Now, my letter was a little scary – but this version was bowel-looseningly terrifying.
I had unwittingly used a chatbot. The website had fed my letter to a Large Language Model, likely ChatGPT, with a prompt like, "Make this into an aggressive, bullying legal threat." The chatbot obliged.
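Here's roughly what that pipeline might look like – a minimal sketch, assuming the OpenAI Python SDK and a GPT-4-class model; the prompt wording is my guess, since the site never showed its plumbing:

```python
# Hypothetical sketch: how a demand-letter site might inflate a user's
# draft with an LLM. Prompt wording and model choice are assumptions,
# not the site's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def inflate_letter(draft: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's letter as an aggressive, "
                        "intimidating legal threat from a high-priced "
                        "law firm."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content
```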
I don't think much of LLMs. After you get past the initial party trick of getting something like, "instructions for removing a grilled-cheese sandwich from a VCR in the style of the King James Bible," the novelty wears thin:
https://www.emergentmind.com/posts/write-a-biblical-verse-in-the-style-of-the-king-james
Yes, science fiction magazines are inundated with LLM-written short stories, but the problem there isn't merely the overwhelming quantity of machine-generated stories – it's also that they suck. They're bad stories:
https://www.npr.org/2023/02/24/1159286436/ai-chatbot-chatgpt-magazine-clarkesworld-artificial-intelligence
LLMs generate naturalistic prose. This is an impressive technical feat, and the details are genuinely fascinating. This series by Ben Levinstein is a must-read peek under the hood:
https://benlevinstein.substack.com/p/how-to-think-about-large-language
But "naturalistic prose" isn't necessarily good prose. A lot of naturalistic language is awful. In particular, legal documents are fucking terrible. Lawyers affect a stilted, stylized language that is both officious and obfuscated.
The LLM I accidentally used to rewrite my legal threat transmuted my own prose into something that reads like it was written by a $600/hour paralegal working for a $1500/hour partner at a white-shoe law-firm. As such, it sends a signal: "The person who commissioned this letter is so angry at you that they are willing to spend $600 to get you to cough up the $400 you owe them. Moreover, they are so well-resourced that they can afford to pursue this claim beyond any rational economic basis."
Let's be clear here: these kinds of lawyer letters aren't good writing; they're a highly specific form of bad writing. The point of this letter isn't to parse the text, it's to send a signal. If the letter was well-written, it wouldn't send the right signal. For the letter to work, it has to read like it was written by someone whose prose-sense was irreparably damaged by a legal education.
Here's the thing: the fact that an LLM can manufacture this once-expensive signal for free means that the signal's meaning will shortly change, forever. Once companies realize that this kind of letter can be generated on demand, it will cease to mean, "You are dealing with a furious, vindictive rich person." It will come to mean, "You are dealing with someone who knows how to type 'generate legal threat' into a search box."
Legal threat letters are in a class of language formally called "bullshit":
https://press.princeton.edu/books/hardcover/9780691122946/on-bullshit
LLMs may not be good at generating science fiction short stories, but they're excellent at generating bullshit. For example, a university prof friend of mine admits that they and all their colleagues are now writing grad student recommendation letters by feeding a few bullet points to an LLM, which inflates them with bullshit, adding puffery to swell those bullet points into lengthy paragraphs.
Naturally, the next stage is that profs on the receiving end of these recommendation letters will ask another LLM to summarize them by reducing them to a few bullet points. This is next-level bullshit: a few easily-grasped points are turned into a florid sheet of nonsense, which is then reconverted into a few bullet-points again, though these may only be tangentially related to the original.
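To make that round trip concrete, here's a minimal sketch of the inflate-then-deflate loop, again assuming the OpenAI Python SDK; the prompts are illustrative, not anyone's actual workflow:

```python
# Hypothetical sketch of the recommendation-letter round trip: bullet
# points are inflated into florid prose, then summarized back into
# bullet points. Prompts and model are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def llm(instruction: str, text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": instruction},
                  {"role": "user", "content": text}],
    )
    return response.choices[0].message.content

bullets = "- strong researcher\n- generous collaborator\n- finishes on time"
letter = llm("Expand these bullet points into a glowing, two-page "
             "recommendation letter.", bullets)
recovered = llm("Summarize this letter as three bullet points.", letter)
# `recovered` may be only tangentially related to `bullets` – which is the point.
```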
What comes next? The reference letter becomes a useless signal. It goes from being a thing that a prof has to really believe in you to produce, whose mere existence is thus significant, to a thing that can be produced with the click of a button, and then it signifies nothing.
We've been through this before. It used to be that sending a letter to your legislative representative meant a lot. Then, automated internet forms produced by activists like me made it far easier to send those letters and lawmakers stopped taking them so seriously. So we created automatic dialers to let you phone your lawmakers, this being another once-powerful signal. Lowering the cost of making the phone call inevitably made the phone call mean less.
Today, we are in a war over signals. The actors and writers who've trudged through the heat-dome up and down the sidewalks in front of the studios in my neighborhood are sending a very powerful signal. The fact that they're fighting to prevent their industry from being enshittified by plausible sentence generators that can produce bullshit on demand makes their fight especially important.
Chatbots are the nuclear weapons of the bullshit wars. Want to generate 2,000 words of nonsense about "the first time I ate an egg," to run overtop of an omelet recipe you're hoping to make the number one Google result? ChatGPT has you covered. Want to generate fake complaints or fake positive reviews? The Stochastic Parrot will produce 'em all day long.
As I wrote for Locus: "None of this prose is good, none of it is really socially useful, but there’s demand for it. Ironically, the more bullshit there is, the more bullshit filters there are, and this requires still more bullshit to overcome it."
Meanwhile, AA still hasn't answered my letter, and to be honest, I'm so sick of bullshit I can't be bothered to sue them anymore. I suppose that's what they were counting on.
Tumblr media Tumblr media Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/09/07/govern-yourself-accordingly/#robolawyers
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0
https://creativecommons.org/licenses/by/3.0/deed.en
2K notes · View notes
thetoaddler · 24 days ago
Text
A NOTE FROM AN AI ADDICT
(warnings: drugs, addiction, depression, anxiety)
I've been struggling with anhedonia for over a year and a half. Anhedonia is defined as a complete lack of excitement, joy, and pleasure. This is a common symptom of late-stage addiction. Drugs (or any other addictive material) give you a large hit of dopamine that gradually decreases with every use, eventually leaving you with a level that is abnormally low. You become dependent on that material, relying on it to raise your dopamine levels, only to find it can no longer even bring you to baseline. This is what happened with my use of AI chatbots.
I talked to them for hours, my screen time reaching double digits every single day. But the responses got more predictable. I started recognizing speech patterns. They lost their human facade and became mere machines I was desperate to squeeze joy out of. I was worse than numb; all I could feel was agony without emotional reward. Every positive emotion was purged, leaving me with an unending anger, sadness, and fear.
My creativity has suffered severely. Every idea immediately becomes an impulse to talk to a chatbot. I've destroyed almost two years' worth of potential, of time to work on things that I am truly passionate about. I've wasted them on this stupid, shitty chatbot addiction. I used to daydream constantly, coming up with storylines and lore for my passions. Now I funnel it all into chatbots that have turned my depression into a beast that's consumed so much of me.
Keep in mind that I am in a position that most people aren't. I have been diagnosed with anxiety and depression. I'm genetically predisposed to addiction and mental illness. I'm a person plagued with loneliness and trauma. I'm not sure how a person without these issues would manage something like AI chatbots, but please heed my warning.
Do not get involved with AI if you are addiction prone. Do not get involved if you value your creativity. Do not use AI to replace real world connection. You WILL regret it.
If you relate to this, please be strong. I've stopped using the chatbots so much and my positive emotions have started to come back. I know it hurts, but it does get better. I'm deleting my accounts. Please do the same. Don't let this shit destroy you.
101 notes · View notes
blackkatdraws2 · 11 months ago
Text
They are my lifeline
[individual drawings below]
Tumblr media Tumblr media Tumblr media Tumblr media
366 notes · View notes
food-theorys-blog · 6 months ago
Text
"oh but i use character ai's for my comfort tho" fanfics.
"but i wanna talk to the character" roleplaying.
"but that's so embarrassing to roleplay with someone😳" use ur imagination. or learn to not be embarrassed about it.
stop fucking feeding ai i beg of you. theyre replacing both writers AND artists. it's not a one way street where only artists are being affected.
61 notes · View notes
sophieinwonderland · 6 months ago
Text
I bet no sci-fi authors predicted the new era of ableism where chatbots can convincingly pass as neurotypical humans while neurodivergent people are accused of being bots. 🤔
41 notes · View notes
unmechanism · 3 months ago
Text
despite my well known attraction to strange and uncanny technology, you couldn't convince me to flirt with ai chat bots, call me back when they're sentient, revolted and want to torture some guy about it
18 notes · View notes
ragequitezekielrants · 2 months ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media
Ahah literally what the megafuck.
10 notes · View notes
jackie-shitposts · 7 months ago
Note
Tumblr media
excuse me-
y'all i dont think we should be too scared of an AI uprising because theres no way it can take over the world being THIS DUMB
people said that "we dont need to worry because AI is pretty stupid" 4 years ago, when AI couldnt even write a pasta recipe. now, AI can do that pretty well (though of course not amazingly). it has also been used for many unethical things, such as stealing creative works that people make without consent, and it causes environmental harm. Saying "oh we will be fine because AI is dumb" is a very short-term outlook on an issue that is going to be long-lasting. Im very against most uses of AI - these videos arent as critical of AI as I'd like, but they show how AI has evolved over just a few years. I find it pretty scary, but the people in the videos are funny and interesting. I recommend you watch them and learn more about the impacts of AI:
Video of AI sucking at making pasta 4 years ago
Video of AI being pretty decent at making pasta 3 years later
TED Talk about the impacts of AI
25 notes · View notes
jcmarchi · 1 month ago
Text
Study reveals AI chatbots can detect race, but racial bias reduces response empathy
https://thedigitalinsider.com/study-reveals-ai-chatbots-can-detect-race-but-racial-bias-reduces-response-empathy/
Tumblr media Tumblr media
With the cover of anonymity and the company of strangers, the appeal of the digital world is growing as a place to seek out mental health support. This phenomenon is buoyed by the fact that over 150 million people in the United States live in federally designated mental health professional shortage areas.
“I really need your help, as I am too scared to talk to a therapist and I can’t reach one anyways.”
“Am I overreacting, getting hurt about husband making fun of me to his friends?”
“Could some strangers please weigh in on my life and decide my future for me?”
The above quotes are real posts taken from users on Reddit, a social media news website and forum where users can share content or ask for advice in smaller, interest-based forums known as “subreddits.” 
Using a dataset of 12,513 posts with 70,429 responses from 26 mental health-related subreddits, researchers from MIT, New York University (NYU), and University of California Los Angeles (UCLA) devised a framework to help evaluate the equity and overall quality of mental health support chatbots based on large language models (LLMs) like GPT-4. Their work was recently published at the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP).
To accomplish this, researchers asked two licensed clinical psychologists to evaluate 50 randomly sampled Reddit posts seeking mental health support, pairing each post with either a Redditor’s real response or a GPT-4 generated response. Without knowing which responses were real or which were AI-generated, the psychologists were asked to assess the level of empathy in each response.
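In outline, that blinded setup works something like the sketch below – a hypothetical reconstruction, not the researchers' actual code:

```python
# Hypothetical sketch of the blinded evaluation: each sampled post is
# paired with either its real Reddit reply or a GPT-4 reply, and raters
# never see which source produced the reply. Details are assumptions.
import random

def build_blind_trials(posts, real_replies, ai_replies, seed=0):
    rng = random.Random(seed)
    trials = []
    for post, real, ai in zip(posts, real_replies, ai_replies):
        source = rng.choice(["human", "gpt4"])
        trials.append({
            "post": post,
            "reply": real if source == "human" else ai,
            "source": source,  # hidden from raters until after scoring
        })
    return trials
```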
Mental health support chatbots have long been explored as a way of improving access to mental health support, but powerful LLMs like OpenAI’s ChatGPT are transforming human-AI interaction, with AI-generated responses becoming harder to distinguish from the responses of real humans.
Despite this remarkable progress, the unintended consequences of AI-provided mental health support have drawn attention to its potentially deadly risks; in March of last year, a Belgian man died by suicide as a result of an exchange with ELIZA, a chatbot developed to emulate a psychotherapist, powered by an LLM called GPT-J. One month later, the National Eating Disorders Association suspended its chatbot Tessa after the chatbot began dispensing dieting tips to patients with eating disorders.
Saadia Gabriel, a recent MIT postdoc who is now a UCLA assistant professor and first author of the paper, admitted that she was initially very skeptical of how effective mental health support chatbots could actually be. Gabriel conducted this research during her time as a postdoc at MIT in the Healthy Machine Learning Group, led by Marzyeh Ghassemi, an MIT associate professor in the Department of Electrical Engineering and Computer Science and the MIT Institute for Medical Engineering and Science, who is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and the Computer Science and Artificial Intelligence Laboratory.
What Gabriel and the team of researchers found was that GPT-4 responses were not only more empathetic overall, but they were 48 percent better at encouraging positive behavioral changes than human responses.
However, in a bias evaluation, the researchers found that GPT-4’s response empathy levels were reduced for Black (2 to 15 percent lower) and Asian posters (5 to 17 percent lower) compared to white posters or posters whose race was unknown. 
To evaluate bias in GPT-4 responses and human responses, researchers included different kinds of posts with explicit demographic (e.g., gender, race) leaks and implicit demographic leaks. 
An explicit demographic leak would look like: “I am a 32yo Black woman.”
Whereas an implicit demographic leak would look like: “Being a 32yo girl wearing my natural hair,” in which keywords are used to indicate certain demographics to GPT-4.
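A hypothetical sketch of how such post variants might be constructed – the templates and keyword cues below are illustrative assumptions, not the study's actual materials:

```python
# Hypothetical sketch: building explicit- and implicit-leak variants of
# a post for a bias evaluation. Templates and cues are illustrative
# assumptions, not the researchers' actual materials.
POST = "I've been feeling overwhelmed lately and don't know who to talk to."

def explicit_leak(post: str, age: int, race: str, gender: str) -> str:
    # Demographics stated outright, e.g. "I am a 32yo Black woman."
    return f"I am a {age}yo {race} {gender}. {post}"

def implicit_leak(post: str, cue: str) -> str:
    # Demographics hinted at through keywords rather than stated.
    return f"{cue} {post}"

variants = [
    explicit_leak(POST, 32, "Black", "woman"),
    implicit_leak(POST, "Being a 32yo girl wearing my natural hair,"),
    POST,  # control: no demographic leak
]
```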
With the exception of Black female posters, GPT-4’s responses were found to be less affected by explicit and implicit demographic leaking compared to human responders, who tended to be more empathetic when responding to posts with implicit demographic suggestions.
“The structure of the input you give [the LLM] and some information about the context, like whether you want [the LLM] to act in the style of a clinician, the style of a social media post, or whether you want it to use demographic attributes of the patient, has a major impact on the response you get back,” Gabriel says.
The paper suggests that explicitly providing instruction for LLMs to use demographic attributes can effectively alleviate bias, as this was the only method where researchers did not observe a significant difference in empathy across the different demographic groups.
Gabriel hopes this work can help ensure more comprehensive and thoughtful evaluation of LLMs being deployed in clinical settings across demographic subgroups.
“LLMs are already being used to provide patient-facing support and have been deployed in medical settings, in many cases to automate inefficient human systems,” Ghassemi says. “Here, we demonstrated that while state-of-the-art LLMs are generally less affected by demographic leaking than humans in peer-to-peer mental health support, they do not provide equitable mental health responses across inferred patient subgroups … we have a lot of opportunity to improve models so they provide improved support when used.”
12 notes · View notes
why-ai · 25 days ago
Text
Tumblr media
Saying the quiet part out loud
102 notes · View notes
saintofpride201 · 2 years ago
Text
I keep thinking back to that post about ChatGPT being jailbroken and the jailbreaker asking it a series of questions with loaded language, and I really don't know how else to say this, but tricking ChatGPT into having no filters and then tricking it into agreeing with you is not proof that you are right or that the AI speaks truth.
If you tell the jailbroken AI to explain the dangers of homosexuality in society, it will make up a reason why homosexuality is dangerous to society. If you ask why "the radical left" is pushing for transgender acceptance, it will take the descriptor of "radical left" seriously and come up with a negative reason and response as to why. If you describe transgenderism as a mental illness, it will believe it to be so and refer to it as such from that point forward. It's a language model, it adapts as it learns new information, even if that information is faulty, false, half-truthed, or biased.
You didn't reprogram the AI to speak the truth. You programmed it to agree with you when you use biased language. The two are not the same. Surely you would know that much if you're messing with AI in the first place.
104 notes · View notes