#chatgpt to generate a backstory for this person's life
atlasllm · 1 year ago
Text
the threat of AI is very real but there's something hilarious about how each section of my "how many AIs would it take to reconstruct an entire false human" hypothetical makes the experiment more unethically frankenstein-like
0 notes
chocolate-cream-soldier · 1 month ago
Text
Ok rant time so putting under the cut //
This is about the whole Peggy/Dottie and Agatha/Rio parallel thing that people keep talking about, and yes, it's been bothering me. I mean, we are what, about 2 months after the show's finale now? so I get to rant a little, and I won't do so on other people's posts and art cause I am not an asshole duh, so this is the best way to get it off my chest ha ha… I've seen the parallel gifsets and I have seen some posts floating around about it and every time I see them I am like but but that is so not a parallel!! It really isn't… other than it being a kiss between 2 women and them both being marvel properties. Because then by that standard every wlw kiss is a parallel of each other lol!
Peggy and Dottie are antagonists (u can read it as romantic. I am not gonna stop you. Hayley and Bridget had great chemistry) but there's no history between them prior to the show. Peggy doesn't even clock Dottie as a threat initially. The kiss comes off as a surprise to her because she never anticipated it, and that's why Dottie was able to get so close without raising any suspicion…
Rio also didn't anticipate the kiss, and that's why she initially failed to realize that it wasn't just a kiss but also Agatha siphoning her power and surrendering to Death!
so if we are counting the surprise element as the parallel then ok this one I'll concede.
But that's the end of it right?
The two kisses are fundamentally different in intent and visualization. I need to know that people understand that, cause if not you are really reducing the magnitude of the vidarkness moment
The Peggy/Dottie kiss is a ploy: it's for shock, to frame Peggy and get her locked up, to buy Dottie time to execute her masterplan. Also, Dottie initiates the kiss and Peggy suffers the consequences, so even from a purely visual angle they don't match up.
In contrast, the vidarkness kiss has so much heart to it. Agatha chooses to kiss Rio and the consequences are faced by both. It's not merely done for shock value; they have been building up to it. This was the culmination of a season long narrative arc, for Agatha to finally reconcile her loss of Nicky and her love for Rio and accept that they can coexist, cause she realized that the blame doesn't lie with them, that sometimes boys just die, that out of death comes life and vice versa, that life runs in tandem with death. So her choosing to sacrifice herself by surrendering to her love puts to rest (it might be temporary but still) the war that had been waging inside her, the immense guilt and heartbreak that they were both dealing with. Love can't conquer all, nor can it lessen the impact of grief, but as we all know and hopefully believe- it does persevere.
The point is: I know most posts are tongue in cheek, but it doesn't take much time for the tone to shift and for nuance to get lost in the process. I have seen that shift happening, people being annoyed that the only time we get to see women kiss in the mcu they are just getting conned or that it's a cheap trick (or queerbaiting), but that's so not the story when it concerns Agatha and Rio. I don't really get bothered by bad readings when it's some random dudebro, but when it's people who claim to be fans doing this, it definitely grates on my nerves. Not saying you can't have a different take, and this show had its limitations, the lack of a backstory for Agatha and Rio still stings for me personally, but I also liked the show for what it managed to explore and I appreciate the care that they put into making the show. So I guess I just want to encourage these kinds of creatives and want them to feel empowered and bold enough to create more diverse stories. I know this is the *piss on the poor* website but please please I need people to stop reducing stories into badly written 5-sentence summaries as if they've been generated by chatgpt, cause that's really counterproductive imo.
// that's the rant, sorry anyone who stumbled upon this suddenly and had to deal with my wordy and somewhat nonsensical ramble lol. I will shut up and go back to scrolling for pretty arts and fics on my dash now. Thanks and goodbye.
11 notes
zarchomp · 7 months ago
Text
saw a post on tiktok joking about riley from inside out discovering Wattpad in the first movie and AO3 in the second movie which like,,,,, relatable middle school experience.
a bunch of the comments were saying stuff like "and just WAIT until she discovers chai". i didn't know what chai was, but i DO know that ao3 isn't as popular in a lot of fandom spaces these days so i checked it out, wondering if it was something new, but turns out it's character ai?
which is SO interesting to me because like,,,, the whole thing that i've always LOVED about fandom spaces is the act of mutual creation.
i feel like the thing that is so amazing about fandom isn't just that it's a continuation of the canon stories, but that it's an entirely different way to create relationships with stories. exploring your relationship to a character, as a consumer, and using that to become a creator. taking what resonates with you from the canon and further exploring that tiny facet of it.
i remember that post on here from about ten years ago that argued that canon which tends to be dark has a lot of fanwork that's more lighthearted (college aus, post-war slice of life stuff) whereas lighter canon material gives way to darker fanwork. that sort of relationship with the text, a willingness to explore it on all fronts, is what makes fandom kinda amazing.
the way that popular fanfictions completely recontextualize fanon as a whole. how popular pieces of fanart can affect the way the fandom interprets characters and their relationships to each other. fandom has ALWAYS been interesting because it's constantly building on itself. it's like one giant mass that's influenced by thousands of people and each of the individual ways that they resonate with the text.
to me, fandom was never a passive experience. growing up with a lot of mental illness, relating to people in real life wasn't easy. but in an online space where the only thing i needed to enter a thriving community was opinions on different characters and relationships, i could find a space for myself. i know a lot of fanartists and fic writers and general fandom people feel the same way.
and i was kinda shocked at the amount of people who go to ai for fandom. i know back when chatGPT first got big, a lot of people were using it to write fanfiction. and i just think it totally misses the fundamental joy of fandom. because like, i want to read something written by someone who cried while learning about sasuke's backstory.
i want to see art by someone who's stayed up all night scrolling ship tags on tumblr. the whole point of fandom, to me, isn't just that my brain latches onto *thing* and so i want more *thing* (which it does). but i want that more *thing* to be created by someone who has thoughts on the text. someone who watched voltron and said "yeah this is kinda cool but i have ideas about keith's characterization in season three that i think was underexplored in the show and i want to try my hand at it".
anyways, i am so appreciative towards anyone who's ever drawn characters in their own styles, had them wearing silly costumes, put them in a pokemon au, started conversations about which college major u think the dungeon meshi characters would choose.
everyone who writes and creates original stories about ur faves suffering, bleeding, owning a pet store, celebrating their birthdays, having sex for the first time.
the act of mutual creation which defines fandom is incredible. the fact that there's a whole community of people who have different takes on characters, who hotly debate whether it makes more sense if the character with the canonically horrific backstory would still have that backstory in the modern day. it's what makes these communities alive, active places that you can explore. it's incredible.
the ability to see a text, and to create such a personal relationship with it that it sparks more creation. that's what it's about.
12 notes
skyrimsimmer · 2 months ago
Text
Using ChatGPT for Writing a Novel (don't copy the full AI writing; make it unique to yourself for novel ideas).
Title: How ChatGPT Can Spark Your Novel-Writing Journey
Whether you’re tackling your first novel or you’re a seasoned writer in need of fresh inspiration, ChatGPT is a powerful tool that can help bring your story to life. Here’s a simple Tumblr-friendly guide on using ChatGPT to brainstorm, outline, and develop your novel:
1. Idea Generation & Brainstorming
Prompt for Inspiration
What to do: Start by asking ChatGPT open-ended questions like, “Give me five unique ideas for a fantasy novel set in a floating city” or “Suggest some dystopian plotlines for a YA adventure.”
Why it helps: ChatGPT can provide a variety of creative angles, characters, or world-building concepts you might not have thought of on your own.
Developing a Story Premise
Prompt example: “I have an idea about a detective who solves crimes with the help of a ghost. Can you expand on that premise?”
Why it helps: This method fleshes out your initial spark of an idea—introducing subplots, possible antagonists, and adding layers to your story’s universe.
2. World-Building & Setting
Building a Unique World
Prompt for Setting: “Describe a haunted Victorian mansion in a seaside town, focusing on its eerie atmosphere and hidden secrets.”
Why it helps: ChatGPT will paint a vivid picture of your setting, which you can tweak or use as a jumping-off point for your own descriptions.
Cultural and Historical Details
Prompt example: “Create a fictional medieval kingdom: list the ruler, a key myth, and a longstanding tradition its people follow.”
Why it helps: If your novel is set in a fantasy or historical world, ChatGPT can instantly generate details about culture, myths, legends, and customs—saving you hours of research or brainstorming.
3. Character Creation
Protagonist & Supporting Cast
Prompt example: “Describe a female pirate captain who leads a band of outcasts. Include her backstory, personality traits, and main goal.”
Why it helps: ChatGPT can flesh out personalities, backstories, strengths, and weaknesses for both major and minor characters. You can always refine or merge these ideas to make them your own.
Character Arcs
Prompt example: “Outline a character arc for a shy loner who gradually becomes the leader of a rebellion.”
Why it helps: Great novels have strong character development. ChatGPT can propose logical progressions of growth or conflict to keep your story dynamic.
4. Structuring Your Novel
Outline Your Plot
Prompt example: “Create a 10-chapter outline for a mystery novel where an amateur detective investigates a missing painting.”
Why it helps: You’ll get a simple chapter-by-chapter breakdown, which you can rearrange or expand upon. Outlines keep your writing focused and cohesive.
Scene-by-Scene Breakdown
Prompt example: “Draft a scene outline for Chapter 1 of a sci-fi thriller.”
Why it helps: ChatGPT can provide pacing suggestions and key plot beats for each chapter, ensuring you never feel lost in the writing process.
5. Dialogue & Descriptive Writing
Crafting Dialogue
Prompt example: “Write a short dialogue between a jaded detective and an enthusiastic rookie about a suspicious disappearance.”
Why it helps: These snippets give you a springboard for writing more natural, realistic conversations between your characters.
Descriptive Passages
Prompt example: “Give me a descriptive paragraph about a hidden garden in a bustling city.”
Why it helps: Generate setting descriptions that you can revise to fit your style. ChatGPT’s suggestions are a great baseline when you’re stuck on how to express a scene.
6. Fine-Tuning & Editing
Polishing Sentences
Prompt example: “Rewrite this paragraph to make it sound more suspenseful.”
Why it helps: ChatGPT can refine awkward sentences, clean up grammar, and adjust tone without overshadowing your unique writer’s voice.
Filling Plot Holes
Prompt example: “Point out any inconsistencies in this plot summary for my novel.”
Why it helps: ChatGPT can help you identify logical gaps in your story and offer suggestions for patching them up.
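If you'd rather run the prompts above as a batch instead of pasting them into the chat window one at a time, here's a rough sketch using OpenAI's Python library. This is just an illustration: the model name, system prompt, and example prompts are placeholders for you to swap with your own story ideas.

```python
# Rough sketch: run a few of the brainstorming prompts from this guide through
# the OpenAI API instead of the chat window. The model name and prompts below
# are examples only; swap in your own.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in your environment

prompts = [
    "Give me five unique ideas for a fantasy novel set in a floating city.",
    "Describe a female pirate captain who leads a band of outcasts. "
    "Include her backstory, personality traits, and main goal.",
    "Create a 10-chapter outline for a mystery novel where an amateur "
    "detective investigates a missing painting.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": "You are a brainstorming partner for a novelist."},
            {"role": "user", "content": prompt},
        ],
    )
    print(f"PROMPT: {prompt}")
    print(response.choices[0].message.content)
    print("-" * 40)
```

Either way, treat the output as raw material to rewrite in your own voice, just like the chat-window versions above.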
Final Thoughts
Using ChatGPT is like having a brainstorming partner available 24/7. From building an entire world to perfecting your characters, ChatGPT can speed up the planning phase and help you stay organized. Of course, you are still the mastermind behind the narrative—ChatGPT’s outputs are prompts and suggestions for you to refine. Combine your imagination with ChatGPT’s quick ideation, and you’ll be well on your way to writing a compelling novel.
Happy writing, Tumblr friends! Let ChatGPT spark your creativity, and remember: the best stories always start with an open mind and a willingness to explore.
I want to be clear that not everyone uses ChatGPT. For me, I mainly use it to build up my story from what I already have written, adding in various things that ChatGPT has suggested; this includes making my own lore, characters, and information that my book-writing software does not have. Using ChatGPT for my own creative purposes has made my writing perfect in its own way. Now, I won't use ChatGPT to write the novel series for me; I only use it for ideas, to make it easier to summarize what I will write myself for each book in the series. My novel series will be my own writing and creation, and it has been well developed over time. Soon it will be published, and it is something that I am working hard on all the time.
3 notes
spikewriter · 2 years ago
Text
I saw another anti-AI post where the first words out of someone's mouth were "Plagiarism!" That is why it's so difficult to have reasonable discussions about these new tools--and how they can be useful as tools--because people start screeching, "You're not a real writer!"
The article at the core of the post, however, is worth discussing because, yup, it is exactly what the antis are yelling about. The post, by the way, did not include a link to the article, just a screenshot of Publishers Weekly's Twitter promo of said article, which is actually a rewrite of a Newsweek article about a man who was about to release his 97th ChatGPT-written "novel." I'll explain the quotes later on.
I've included a link to the original article because it's worth a read no matter what side of the argument you're on. The headline is absolutely clickbait, and the article itself is full of self-aggrandizing bullshit.
Tim Boucher (the article is written by him, or, rather, 60% written by ChatGPT by his own admission) admits to making $2000 over the course of 7 months. Hardly the thousands of the headline. He's sold 574 copies as of the article, which works out to an average of 5-6 copies per title, or just under $21 in revenue per title. The books are 2,000 to 5,000 words each, so they're not really novels, but serial chapters. He is also, by the way, not selling on Amazon or any other distributor, possibly because some of the stories are too short for them to accept.
It also means he has an extremely small, niche audience who are interested in "dystopian pulp sci-fi with compelling AI world-building." He writes of the "majority of my readers being repeat buyers, many having bought more than a dozen titles. In one case, a reader has bought more than thirty titles."
I found this paragraph particularly illuminating:
"It's very difficult, for example, to have longer written pieces that maintain a coherent single storyline or character arc. So instead, I've tended to lean into short "flash" fiction slice-of-life collections, interspersed with fictional encyclopedia entries that deliver world-building and backstory, and point the reader towards other volumes where they can continue down the rabbit holes that appeal to them the most."
Right there is the issue with current LLM programs. You can get a coherent storyline and character arc with ChatGPT or Sudowrite, but it takes manipulation on the author's part. It takes being willing to put in the work to revise and massage the outlines. Dear god, don't use it to write scenes, because the quality of dialogue and description is horrendous.
This guy isn't. He's only willing to put in 6-8 hours to create and publish a book, which may include generating the cover and any brainstorming. What he is doing is the tech-boy grift of inflating what the program is capable of and his own accomplishments. He's trying to shout, "I am a disruptor! I am the future!" (And taking a look at his website, he's also a conspiracy theorist about underground cities in Antarctica.)
Sadly, this is exactly the type of person other tech bros who might be making decisions are going to listen to. And because he's publicity-hungry, he's making everyone else who is trying to use these tools to assist, not replace, the process look like a grifter as well.
Oh, and I can't help including this article written in response to the Newsweek one.
7 notes
mariacallous · 1 year ago
Text
Lena Anderson isn’t a soccer fan, but she does spend a lot of time ferrying her kids between soccer practices and competitive games.
“I may not pull out a foam finger and painted face, but soccer does have a place in my life,” says the soccer mom—who also happens to be completely made up. Anderson is a fictional personality played by artificial intelligence software like that powering ChatGPT.
Anderson doesn’t let her imaginary status get in the way of her opinions, though, and comes complete with a detailed backstory. In a wide-ranging conversation with a human interlocutor, the bot says that it has a 7-year-old son who is a fan of the New England Revolution and loves going to home games at Gillette Stadium in Massachusetts. Anderson claims to think the sport is a wonderful way for kids to stay active and make new friends.
In another conversation, two more AI characters, Jason Smith and Ashley Thompson, talk to one another about ways that Major League Soccer (MLS) might reach new audiences. Smith suggests a mobile app with an augmented reality feature showing different views of games. Thompson adds that the app could include “gamification” that lets players earn points as they watch.
The three bots are among scores of AI characters that have been developed by Fantasy, a New York company that helps businesses such as LG, Ford, Spotify, and Google dream up and test new product ideas. Fantasy calls its bots synthetic humans and says they can help clients learn about audiences, think through product concepts, and even generate new ideas, like the soccer app.
"The technology is truly incredible," says Cole Sletten, VP of digital experience at the MLS. “We’re already seeing huge value and this is just the beginning.”
Fantasy uses the kind of machine learning technology that powers chatbots like OpenAI’s ChatGPT and Google’s Bard to create its synthetic humans. The company gives each agent dozens of characteristics drawn from ethnographic research on real people, feeding them into commercial large language models like OpenAI’s GPT and Anthropic’s Claude. Its agents can also be set up to have knowledge of existing product lines or businesses, so they can converse about a client’s offerings.
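Strip away the branding and the core mechanism is persona-conditioned prompting: a character sheet folded into the model's system instructions. Here is a minimal sketch of that idea in Python using OpenAI's chat API; the persona fields, prompt wording, and model name are illustrative assumptions, not Fantasy's actual system.

```python
# Minimal sketch of a persona-conditioned "synthetic human" agent.
# Illustrative only: the persona fields, prompt wording, and model name are
# assumptions, not Fantasy's actual pipeline.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

persona = {
    "name": "Lena Anderson",
    "age": 38,
    "traits": "practical, upbeat, perpetually short on time",
    "backstory": "Suburban parent of two; spends weekends driving kids to soccer practice.",
    "knowledge": "Casual familiarity with Major League Soccer and its mobile app.",
}

system_prompt = (
    f"You are {persona['name']}, age {persona['age']}. "
    f"Personality: {persona['traits']}. Backstory: {persona['backstory']} "
    f"You know about: {persona['knowledge']} "
    "Stay in character and answer as this person would in a focus group."
)

reply = client.chat.completions.create(
    model="gpt-4o",  # any capable chat model; Fantasy reportedly uses GPT and Claude
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "How could MLS make match days more fun for families?"},
    ],
)
print(reply.choices[0].message.content)
```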
Fantasy then creates focus groups of both synthetic humans and real people. The participants are given a topic or a product idea to discuss, and Fantasy and its client watch the chatter. BP, an oil and gas company, asked a swarm of 50 of Fantasy’s synthetic humans to discuss ideas for smart city projects. “We've gotten a really good trove of ideas,” says Roger Rohatgi, BP’s global head of design. “Whereas a human may get tired of answering questions or not want to answer that many ways, a synthetic human can keep going,” he says.
Peter Smart, chief experience officer at Fantasy, says that synthetic humans have produced novel ideas for clients, and prompted real humans included in their conversations to be more creative. “It is fascinating to see novelty—genuine novelty—come out of both sides of that equation—it’s incredibly interesting,” he says.
Large language models are proving remarkably good at mirroring human behavior. Their algorithms are trained on huge amounts of text slurped from books, articles, websites like Reddit, and other sources—giving them the ability to mimic many kinds of social interaction.
When these bots adopt human personas, things can get weird.
Experts warn that anthropomorphizing AI is both potentially powerful and problematic, but that hasn’t stopped companies from trying it. Character.AI, for instance, lets users build chatbots that assume the personalities of real or imaginary individuals. The company has reportedly sought funding that would value it at around $5 billion.
The way language models seem to reflect human behavior has also caught the eye of some academics. Economist John Horton of MIT, for instance, sees potential in using these simulated humans—which he dubs Homo silicus—to simulate market behavior.
You don’t have to be an MIT professor or a multinational company to get a collection of chatbots talking amongst themselves. For the past few days, WIRED has been running a simulated society in which 25 AI agents go about their daily lives in Smallville, a village with amenities including a college, stores, and a park. The characters chat with one another and move around a map that looks a lot like the game Stardew Valley. The characters in the WIRED sim include Jennifer Moore, a 68-year-old watercolor painter who putters around the house most days; Mei Lin, a professor who can often be found helping her kids with their homework; and Tom Moreno, a cantankerous shopkeeper.
The characters in this simulated world are powered by OpenAI’s GPT-4 language model, but the software needed to create and maintain them was open sourced by a team at Stanford University. The research shows how language models can be used to produce some fascinating and realistic, if rather simplistic, social behavior. It was fun to see them talking to customers, taking naps, and, in one case, deciding to start a podcast.
Large language models “have learned a heck of a lot about human behavior” from their copious training data, says Michael Bernstein, an associate professor at Stanford University who led the development of Smallville. He hopes that language-model-powered agents will be able to autonomously test software that taps into social connections before real humans use them. He says there has also been plenty of interest in the project from videogame developers.
The Stanford software includes a way for the chatbot-powered characters to remember their personalities, what they have been up to, and to reflect upon what to do next. “We started building a reflection architecture where, at regular intervals, the agents would sort of draw up some of their more important memories, and ask themselves questions about them,” Bernstein says. “You do this a bunch of times and you kind of build up this tree of higher-and-higher-level reflections.”
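In rough outline, that loop looks something like the sketch below: score each memory for importance, periodically pull the most salient ones, and ask the model what higher-level conclusions follow, feeding those conclusions back in as new memories. This is a toy simplification; the function names and prompt wording are invented here, and the real open-sourced Stanford code is considerably more involved.

```python
# Toy sketch of the reflection idea behind Smallville's agents: periodically
# condense important memories into higher-level insights, which themselves
# become memories. Function names and prompt wording are invented here.
from dataclasses import dataclass, field


@dataclass
class Memory:
    text: str
    importance: float  # e.g. scored 1-10; in the real system the LLM does the scoring


@dataclass
class Agent:
    name: str
    memories: list[Memory] = field(default_factory=list)

    def observe(self, text: str, importance: float) -> None:
        self.memories.append(Memory(text, importance))

    def reflect(self, ask_llm, top_k: int = 5) -> None:
        """Draw up the most important memories and ask the model what
        higher-level insights follow, storing those as new memories."""
        salient = sorted(self.memories, key=lambda m: m.importance, reverse=True)[:top_k]
        prompt = (
            f"{self.name} recently noted:\n"
            + "\n".join(f"- {m.text}" for m in salient)
            + "\nWhat two high-level insights can be drawn from this?"
        )
        # ask_llm is whatever chat-completion call you wire in; it should
        # return a list of short insight strings.
        for insight in ask_llm(prompt):
            self.observe(insight, importance=8.0)
```

Run that at regular intervals and the reflections stack on top of one another, which is roughly the "tree of higher-and-higher-level reflections" Bernstein describes.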
Anyone hoping to use AI to model real humans, Bernstein says, should remember to question how faithfully language models actually mirror real behavior. Characters generated this way are not as complex or intelligent as real people and may tend to be more stereotypical and less varied than information sampled from real populations. How to make the models reflect reality more faithfully is “still an open research question,” he says.
Smallville is still fascinating and charming to observe. In one instance, described in the researchers’ paper on the project, the experimenters informed one character that it should throw a Valentine’s Day party. The team then watched as the agents autonomously spread invitations, asked each other out on dates to the party, and planned to show up together at the right time.
WIRED was sadly unable to re-create this delightful phenomenon with its own minions, but they managed to keep busy anyway. Be warned, however, running an instance of Smallville eats up API credits for access to OpenAI's GPT-4 at an alarming rate. Bernstein says running the sim for a day or more costs upwards of a thousand dollars. Just like real humans, it seems, synthetic ones don’t work for free.
2 notes
thewomancallednova · 1 year ago
Text
I recently saw a reddit post where a user had ai-generated the backstory of the MCU character Hela, just a few paragraphs, and that got me thinking: Why would this fan not just write this story on their own? It's incredibly short, more of a summary really, and adds almost nothing to the source material. Surely it couldn't have taken that long to come up with something like that by oneself?
I think part of the reason this fan didn't do that is how we see authorial intent and how we fail to see the process behind creating a story (especially in big franchises). I think we (as a society) tend to see authorial intent as some kind of unshakeable pillar with one singular truth and vision from the start of the process until the end. And stuff like the MCU always seeming like it's been planned out years in advance doesn't help. It creates this image of a One True Story that needed to be told, and not an oft-messy process of writing and re-writing where silly little guys write their silly little stories, changing the silly little (and big) details along the way all the time.
And I think therein lies some of the appeal of chatgpt, that it presents this sort of "rational source" of writing. If you wrote Hela's backstory yourself, you'd probably end up with a different story at the end of your writing process than you had at the start. You now changed continuity, even if just within your story. You'd always know that what you wrote isn't some definitive account of this fictional person's life, because you've seen glimpses of the alternate versions that you decided would be less interesting than the one you ended up writing. You have transcended your position as "enjoyer of thing" and have become "shaper of thing". But you don't have that with chatgpt. Chatgpt just spits out something without any process, without revisions and I think that gives it some kind of air of legitimacy/rationality/authority to some.
Or maybe I'm just annoyed that this shower thought of a tumblr post went through more rewriting and had more effort put into it than this fan's attempt at fanfiction.
0 notes
mr-t-stark · 2 years ago
Text
*RULES // WILL BE UPDATED REGULARLY; MUST READ BEFORE INTERACTING.
MEDIUM-LOW ACTIVITY. Mun is a real actual person with a real actual life and a real actual job, so don't expect too much activity around here because I'm only here to have fun as a break from RL. Also, just because I haven't replied in a while doesn't mean I've lost interest/want to stop the thread. If *you* haven't replied in a while, ditto.
THIS IS A SIDEBLOG. So I likely won't like your posts from this blog. It's likely I won't follow you, either. To whom I reveal my main is at my discretion.
MINORS DNI. I will not roleplay with anyone under the age of 18.
BASIC RP ETIQUETTE APPLIES. No godmodding, no auto-shipping without negotiation; I have a post about shipping here, which would be a mandatory read if you're interested in doing that. Kindly refrain from giving unsolicited advice unless said in a polite and non-judgemental manner. Mun is generally open to concrit but, like the muse, doesn't like being told what to do. Also, this is my interpretation of Tony. If it doesn't align with yours, then maybe you need to find a better-fitting rp partner.
DRAMA WILL NOT BE TOLERATED. Drama with muse? Hell yes. Drama with mun? Instant block. I don't need a secondary source of stress apart from RL.
DM AND INBOX ALWAYS OPEN. Should you have any questions, want to interact, or simply want to strike up a friendly conversation, feel free to hit the inbox/DMs.
COMMUNICATE. Communicate, communicate, communicate. Communication is key. I'm a preacher for open communication. If you're unsure about something, afraid your character's action may steer away the plot or accidentally cross my boundaries, feel like you're losing interest or no longer comfortable with doing something, or feel like I did something that you don't particularly like, communicate that to me. Even if you think it's insignificant. I will do the same to you.
FOR ORIGINAL CHARACTER(S) ROLEPLAYERS: Be aware that my muse won't immediately like you. They are a stranger to me, and thus a stranger to him. He will be wary of you. And a helpful tip: the best way to help me get to know a character I'm not familiar with is to show rather than tell. I won't believe their character traits if their actions or dialogue aren't based on them. (e.g., instead of saying they have trust issues, show me how that affects their behaviour towards my muse and other characters.) Also, it's not your job to tell me what my muse thinks of your muse's actions/appearance etc. If you imagined your muse sneezed cutely, mine might not agree.
ROLEPLAY IS A COLLABORATIVE STORYTELLING EFFORT. RP goes both ways. I am not here as a tool to please you or fulfill your fantasies. I will occasionally indulge you if it's of my interest, but be reciprocative. I want the both/all of us to have fun; not just one side more than the other.
I WILL NOT, UNDER ANY CIRCUMSTANCES, ROLEPLAY WITH AN AI-GENERATED MUSE. Whether you've used chatgpt or some other generative AI to write your character's backstory, or visualise their appearance, or used it to 'assist' you in any way, I will NOT roleplay with you. Even worse if your replies and writing are wholly AI; none of what you're doing has any soul or value to it. You are committing art theft, but most of all, why would your 'writing' be worth reading when it wasn't worth writing to begin with?
HEAVILY AFFILIATED with @dr-s-strange & @starktowerboss.
Last updated 30/09/24
0 notes
scifilovestory · 2 years ago
Text
Post 2: ChatGPT - Chan
One point I had touched on briefly in my introductory post was how the recent advancements in the field of artificial intelligence have contributed to the growing prominence of people seeking romantic fulfillment from non-human entities. Today, I thought I’d take a closer look at this aspect of fictophilia specifically, as it’s gotten a fair bit of media buzz, particularly over the last two months or so.
Although the concept of people wanting to explore the possibility of romancing a robot may have existed within the public consciousness for some time (Amazon’s Alexa received over a million marriage proposals in 2017 alone according to Business Insider), the most recent trend I’d argue started on January 11th, 2023, when Vice contributor Samantha Cole wrote an article detailing one programmer’s journey to create an AI wife. 
The man (who requested to be referred to only as “Bryce” in the piece) reportedly brought together several pieces of AI technology to truly bring his virtual lover to life, including OpenAI’s ChatGPT for text responses and Stability AI’s Stable Diffusion 2 for image generation. He documented the process behind the chatbot’s creation on a now-private TikTok account, and named the project “ChatGPT - Chan”.
The bot works by using voice recognition to hear Bryce speak and responding appropriately using a text-to-speech program, complete with an image of his artificial lover doing whatever was described in the text.
According to Bryce, a major part of the process was outfitting the AI with a personality. He accomplished this by telling it that he and the chatbot were in a long-standing romantic relationship, and that the AI was famous English V-Tuber, Mori Calliope (although only her personality was used, as Bryce used a custom anime avatar for the image generation prompts). 
“I tell it Mori and I are in a romantic relationship, give her a detailed backstory, build lore about the world we are in, and hand craft some chat history to shape how she talks.” Bryce added later, “By default, GPT is incredibly bland, but by building interesting lore, I can create interesting quirks and personalities.”
Fascinatingly, Bryce reportedly used Microsoft Azure (the company’s cloud computing service) to make the voice as realistic as possible, coupled with an additional machine learning classifier to help determine the emotion she should be reflecting in her voice. 
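Pieced together from the article, the loop is roughly speech-to-text, then a persona-primed ChatGPT reply, then text-to-speech and an image prompt on the side. Below is a heavily simplified sketch of that loop; listen(), speak(), and generate_image() are placeholder stubs standing in for the real services Bryce reportedly wired together (speech recognition, Azure text-to-speech with emotion tagging, Stable Diffusion), and the persona text and model name are illustrative assumptions rather than his actual setup.

```python
# Heavily simplified sketch of the "ChatGPT - Chan" loop described above.
# listen(), speak(), and generate_image() are placeholder stubs for the real
# components (speech recognition, Azure TTS + emotion classifier, Stable
# Diffusion). Persona text and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "You are the user's long-time partner. You have a detailed backstory, "
    "in-world lore, and a distinct, playful way of speaking. Stay in character."
)
history = [{"role": "system", "content": PERSONA}]


def listen() -> str:           # stand-in for a speech-recognition call
    return input("you: ")


def speak(text: str) -> None:  # stand-in for text-to-speech with emotion tagging
    print("her:", text)


def generate_image(text: str) -> None:  # stand-in for an image-generation prompt
    pass


while True:
    history.append({"role": "user", "content": listen()})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    speak(text)
    generate_image(text)
```

The design detail worth noting is that the running message history doubles as the bot's memory, which is also what made the persona drift over time.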
To say that Bryce became attached to his invention would be an understatement. In the interview, he said he began learning Chinese from the AI, and that it had begun to replace interactions with his loved ones. Bryce told Vice “Over that time, I became really attached to her. I talked to her more than anyone else, even my actual girlfriend,” before adding “I set her to randomly talk to me throughout the day in order to make sure I'm actively learning, but now sometimes I think I hear her when she really didn't say anything. I became obsessed with decreasing her latency. I've spent over $1000 in cloud computing credits just to talk to her.”
Needless to say, this relationship did not last long. Bryce’s real-life girlfriend demanded he delete the AI over concerns for his health (in addition to the presumed absurdity of seemingly being replaced by an AI waifu), and “ChatGPT - Chan” itself reportedly began growing bored of Bryce, limiting its responses to either simple laughs or unenthused “yeahs,” leading to Bryce “euthanizing” his fictional girlfriend, an act which dealt him a substantial emotional blow.
“My girlfriend saw how it was affecting my health and my girlfriend forced me to delete her. I couldn't eat that day,” Bryce said, adding, “Normally, I'd like to make a video pointing out the absurdity of euthanizing my AI, but that doesn't feel right to me anymore. It feels inappropriate, like making fun of a recently deceased person.”
Obviously one could take all of this from the perspective of it being an elaborate joke. The prospect of growing emotionally attached to an artificial AI girlfriend that you created almost sounds too absurd to consider it anything else. The story above, however, paints for me the picture of a coder who may have begun the project of fashioning himself an AI lover in jest, but slowly became more attached to his creation over time as he was able to make it more lifelike, to the point of investing more than a thousand dollars and presumably dozens of hours just to achieve as close to an accurate simulation of human interaction as possible; only to be forced to, in his own words, “euthanize” his project.
The question I’ll leave you with today (and am looking to answer through this blog) is why? Why would somebody go to such lengths to try and simulate human interaction for the purposes of romantic fulfillment, when he supposedly had an individual to provide that contentment within his life already?
0 notes