Text
I saw a post where someone was lamenting the fact that some kids will "learn" completely made up "facts" from ChatGPT because they think it's a search engine with access to all the world's information and they said it was perfectly understandable because the kid did exactly what people told him to do, which was to look things up.
So, like, is it okay for me to say this now? Will people get it and not send me anonymous death threats this time? Here goes:
When people, especially kids, ask a question, "Look it up," is not a good answer. It's a dismissive answer that indicates you don't care if they get the right information or not. If you don't know the person and aren't responsible for them then that's fine, but if it's your friend, your student, or God forbid your child, you can't say that to them.
#am i still bitter over the fact that when i was a kid this was the only answer i ever got to requests for information? yeah#especially when the question was 'how do you spell x'#'look it up' HOW. i dont know how to spell it#but more to the point if chatgpt had been a thing then? what kind of nonsense would i be spouting?#and like this isnt a new phenomenon. google has always had wrong information on it#encyclopedias are great but not all dependable and some are out of date#once i asked my brother something and in the smarmiest way possible he led me to a computer and opened a website called#let me google that for you#and WOW! that aged fucking poorly! google wants me to eat rocks!
15 notes
Text
Ch 161~
Can't draw so much during the week..!
More commentary about 161..
I'm actually convinced "Fatal" and "Mephisto" should be Kamiki's songs?? I think some things hint at it.
and that he DOES really care about Aqua.
and that he does have something to do with Sarutahiko, Ame-no-Uzume's husband (although this part is speculation)
More stuff in the read more:
(first written in another language and chatGPT helped me translate it... I can't write things like this twice ;v; it's a great world here. so convenient~)
Honestly, it's frustrating and a bit agonizing; what is this even about? The plot is stressful, but...
Still, being able to focus like this... I guess it’s a good thing to find a work that hooks you and makes you think deeply in some way.
LOL, it also means I’m living a life where I have enough time to care about a manga, even though I’m currently in a pretty tough spot.
This manga, whether it's in a good or bad direction, seems to be driving me crazy in its own way.
If I’m disappointed, I can always go read something else, (I even got permission from someone to draw a Persona fanfic fanart, but I’ve been too hooked on this manga to do it.. that fanfic was so good.. I need to do it sooner or later..).
But I was so confident about my analyses. Like, really... I’m usually good at picking up on these kinds of things? This manga is great at psychological portrayal, and it was amusing to analyze that. There are just too many things sticking out for me, and things feel uneasy.
It’s not about the pairing... It just keeps bothering me... Am I really missing the mark on this? I’m usually good at sensing these things...
Without the movie arc, this development would be fine, but that arc is sandwiched in there, and I interpreted the character based on that too...
Honestly, every time I listen to the songs, I get this strong feeling like, "This isn’t Aqua." The kind of emotions in these songs, it's not him that's singing them. It's the dad. I immediately posted about it when I first heard it in July. As soon as I heard it, I thought, "This is it," and got a gut feeling.
I really want to feel that emotion again.
Even if Kamiki does turn out to be a serial killer, I still think these songs could describe his inner state.
I think we’ll get some explanation in the next five chapters or so, even if it takes a bit longer.
Also, the expression Kamiki makes when Aqua stabs him is so genuine. Until that moment, he had been smiling, but...
If that expression was because he suddenly felt threatened with his life, it’s a bit pathetic. But... I don’t think that’s the case. What I really pay attention to are the emotional flow and expressions.
When Aqua says he wants to watch Ruby perform, the smile on Kamiki’s face... it’s soft. That’s... definitely a look of affection. It’s not like, “Oh, I've won him over!” or, “Yes, I’ve convinced him!” I interpreted it as Kamiki having paternal love, and there was a scene that backed up that idea earlier. I’m sure he really likes Aqua.
That’s not a bad expression. It’s more like, "Yeah, you wish to see Ruby, don't you. Go ahead, watch her. Keep living" (Which makes me wonder, is he really planning to harm Ruby? If he harms her, maybe he plans to do it after the Dome performance? But even that doesn’t make sense. Does that mean Aqua would have to come back to stab him AGAIN after that takes place?? Does it really add up to his logic for telling him to go watch her?)
Aqua says Kamiki will destroy Ruby’s future, but...
How exactly is he going to do that? Hasn't this guy literally done nothing? If they're talking about the Dome performance, at least that should go off without a hitch, right? So at least until then, Ruby would be safe?? So, Kamiki isn't planning to harm Ruby now at least, right? Even with that weird.. logic that he proposes (I hope he's lying about that tbh)
Then when Aqua smiles and says something like, "Haha, but I’ll just kill you and die with you," while pointing the knife at him again...
Kamiki’s expression at that moment really stands out, and it’s not like a twisted look of being frustrated about things not going his way. It’s not anger or annoyance he's feeling. It’s the same shocked and despairing expression we saw in chapters 146 and 153.
Aqua seems to have no clue what kind of person his father really is, huh? He can’t read him at all.
Honestly, from the way Kamiki speaks, I get the impression that he’s actually quite kind. He’s not saying anything too wrong.
Remember the scene where Ruby gets angry because people were talking carelessly about Ai’s death? Kamiki probably knows about that too. I think Aqua and Ai, and Ruby and Kamiki, are quite alike in nature. Kamiki might’ve felt a lot of grief over Ai at that time. I do believe he loved Ai.
The phrase, "People don’t want the truth," is pretty painful, especially if you think about Ai. That’s why Ai lived telling lies. Isn't Kamiki thinking about what happened to her, then? By bringing that up? He should have felt it, loving/watching a person like her and what unfolded.. Ai died because of the truth that she had kids with him. Ugly fans like Ryosuke and Nino couldn't take her being less than perfect. Wouldn't this have hurt Kamiki too? The fact that they loved each other (at least Ai did genuinely, we know that) was unwanted. People could not accept that, and that's one of the reasons why they had to break up.
From the way Kamiki talks, it feels like he genuinely doesn’t want his son or daughter to go through that kind of pain.
I think Kamiki has a pretty good nature. When you look at how he speaks, it’s gentle, and he seems to genuinely care about Aqua and knows a lot about him. Maybe he’s been watching over him from afar for a long time? He probably even knows who his son has feelings for.
It really feels like Kamiki is trying to persuade him: "I’m fine with dying. But you, you have so many reasons to live, right? Shouldn’t you return to the people you care about?"
And, the way Kamiki reacts after Aqua stabs him also shows it. He’s visibly agitated afterward. His expression noticeably shifts to panic and darkness.
Wait... stop it, don’t do this! That’s what he says.
The way he’s talking to Aqua in that moment.
It’s not like, “How dare you?” but more like, “Aqua, please don’t do this.”
It really seems like he doesn’t want Aqua to die.
He’s really shocked by it.
From his expressions, he seems more shocked by Aqua getting stabbed than by his own fall, like he didn’t even know how to react properly. He's being grabbed onto, but he isn't looking at the hands that are grabbing him; his line of sight is on Aqua there.
The final expression he makes can seem really pathetic, but...
Oh man, I think that’s the truth of that situation.
And it makes sense because Ai dreamed of raising her kids with this guy. I think he could’ve been a really great father who adored his kids... at least until the point they separated. He was just really young back then.
Doesn’t this guy really love his kids? Even without the movie arc, there have been hints of his concern for them.
I’m not trying to interpret him kindly just because I particularly like or find this character attractive.
If he’s a serial killer psychopath, then yeah, he should die here. When I first got spoiled, my reaction was completely merciless. "Well, he should die if he's like that," I said. But...
I don’t think that’s the case. It really seems like he cares about Aqua.
Oh, and Kamiki’s soul being noble in the past is mentioned, right?
So, he was a good person before?
Well, I guess I wasn’t totally off in reading his character? LOL.
Does that mean he could be a fallen god? (Could be a stretch, but there IS a lyric in "Fatal" about fallenness!!!)
Sarutahiko is often described as a "noble" and "just" god, so it’s quite possible that Kamiki’s true nature is based on Sarutahiko, the husband of Ame-no-Uzume = Ai.
That couple was very affectionate, and according to the Aratate Shrine description, they even go as far as blessing marital relationships. Those gods really love each other. In that case, Ai being so fond and loving of Hikaru also makes sense. It could explain why she asked her kids to save him...
So, can't “Fatal” be his song? Maybe he’s fallen from grace?
The lyrics in "Fatal" say things like, "What should I use to fill in what’s missing?" Could that be about human lives? But did he really kill people? How can you save someone after that? That’s why I don’t think he went that far.
"Without you, I cannot live anymore"
“I would sacrifice anything for you”
This isn’t Aqua. This is Kamiki.
Would Aqua do that much for Ai? He shouldn’t be so blind.
When I listened to "Fatal," I immediately thought of "Mephisto" because the two songs are so similar in context.
They’re sung by the same narrator, aren’t they? That made it clear what Kamiki’s purpose was, which is why I started drawing so much about him and Ai after that.
He keeps saying he’ll give up his life and that he wants to see Ai again. This isn’t Aqua! These feelings are different from what Aqua has.
At first, I thought because Ruby = Amaterasu, with Tsukuyomi having shown up, and Aqua perhaps having relations to Susanoo (he’s falling into the sea this time, right? LOL) I wondered if Ai and her boyfriend’s story was based on the major myth of Izanagi and Izanami, since they’re so well-known.
That myth is famous for how the husband tries to save his wife after she dies, though he fails in the end.
The storyline is similar to Mephisto’s, so I thought, "Could this be it?"
And then I realized Sarutahiko and Ame-no-Uzume's lore also fits really well. Ai thinking Kamiki was like a jewel when they first met is similar to how Ame-no-Uzume saw Sarutahiko shining when they first met. Sarutahiko guiding Ame-no-Uzume is similar to how Hikaru taught Ai how to act. They even had descendants with a title that means "maiden who's good at dancing." The two also fell for each other at first sight. The shrine the characters visit in the story is supposedly where those two met and married. If they REALLY are those gods in essence, it feels like something went wrong with the wish, because one or both of them became twisted.
Anyway, I think Kamiki was originally noble but fell from grace, and it’s likely that Ai’s death was the catalyst.
But I’m not sure if he really went as far as killing people.
What is Tsukuyomi even talking about? I’ve read it several times, and I still don’t fully understand.
I really hope she's wrong because… killing others to make Ai’s name carry more weight? That doesn’t make any sense. What is “the weight of her name” supposed to mean?? I don't think that's something that should be taken just at face value, I feel like there's more behind this idea.
What kind of logic is that? And on top of that, I can’t understand why Ai’s life would become more valuable if Kamiki dies. It just doesn’t follow.
Why would he even say that?
He must be really confident... Does he think he’s someone greater than Ai?
Even so, how does it connect?
I read two books today, because I started wondering if my reading comprehension has dropped. Thankfully, I’m still able to read books just fine. It’s not like I can’t read, you know? I’ve taken media literacy classes and pride myself on not having terrible reading comprehension.
I tried to make sense of what the heck this may mean, and I think.. if it were to mean something like, “I’ll offer my life as a sacrifice to Ai,” I’d at least get that. That kind of logic, in a way, has some practical meaning.
Kamiki talked about sacrifices? tributes? offerings? in chapter 147. I really remember certain scenes clearly because I’ve gone over them carefully. In that case, if Kamiki dies, then the weight or value of his life would transfer to Ai, and that would “help” her, right?
If the story is going in that direction,
when I look at “Mephisto” and “Fatal,” I can see that by doing this, Kamiki would have a chance to either save Ai or get closer to her. At least that makes some sense.
But is it really right for Ai to ask someone to save Kamiki, who killed others? As soon as the idea of it came up, I knew something was up.
Because of what Ai wanted, I think it’s possible that Kamiki didn’t actually go that far. In the songs, they talk about gathering light and offering something, but they don’t say anything about killing people… Kamiki said he’d sacrifice his own life. People around him may have died, but…
Kamiki’s true personality doesn’t seem like the type to do that… And looking at his actions when Aqua was stabbed??
He hasn’t shown any direct actions yet, so I still don’t know how far he’d actually go.
It’s not that I don’t believe Tsukuyomi’s words entirely,
but I don’t think the conclusion is going to be something like, “Ai should’ve never met Kamiki.”
Every time we see Kamiki’s actual actions, there’s this strange gentleness to him, and that’s what’s confusing me.
The more I look closely, the weirder it feels, and something about it just bothers me. If Kamiki were truly just a completely crazy villain, I’d think, “Oh, so that’s who he is,” and I wouldn’t deny it.
But each time, I start thinking that maybe Ai didn’t meet someone so strange after all? Ai liked him that much, so on that front, it makes sense to me. I want to believe that’s the right conclusion. I mean, doesn’t what he says sound kind? Isn’t he gentle?
No, but seriously, when Kamiki listened to Aqua’s reasons for wanting to live, I thought his expression was warm. It didn’t seem like some calculated expression like “according to plan” like Light Yagami. It felt more like a fond, affectionate expression. I draw too, you know. I pay a lot of attention to expressions. This character often makes expressions that really stand out.
It’s like he’s genuinely trying to convince Aqua not to do anything reckless. Maybe I’m being soft on Kamiki because he’s Ai’s boyfriend? But actually, it’s not like that?
I mean, I’m the type who’s like, “Anyone who did something bad to Ai should die!!” It’s because he’s a character. If this were a real person, I wouldn’t so casually tell someone to go die or say such strong things.
But… he seems like a good person.
+It’s a small thing, but why did Kamiki drop his phone while talking about Ruby? Ppft If you drop it from that height, it’d probably crack. Was he trying to look cool? (It’s an Apple phone, huh.) Is he a bit clumsy? Well... since it looks like he and Aqua are about to fall into the sea, maybe it was a blessing he did so. The phone might be saved after all. If he manages to climb out of there, he could contact someone with that phone.
#oshi no ko#oshi no ko spoilers#oshi no theories#hikaru kamiki#hikaai#aqua hoshino#ai hoshino#spoilers#wow I write so much about this comic#I'm so surprised I have so much to say about this too...#I was never this chatty?????#maybe I was but NOT THIS MUCH#doodle
50 notes
Note
Hi! I know you do NaNo every year and are quite involved with it; have you seen their new AI policy? And what are your thoughts on it?
https://nanowrimo.zendesk.com/hc/en-us/articles/29933455931412-What-is-NaNoWriMo-s-position-on-Artificial-Intelligence-AI
Hi!
So first off, nonnie: My involvement with NaNoWriMo has, uh, declined significantly in the last year. I was an ML through last November, and there were...a lot of problems that all culminated in me (and my co-ML) not only making the decision to step down as MLs, but disaffiliate our region from NaNo altogether. We're not stopping people from participating, just taking the groups we manage independent and starting our own, localized version. Global communities are great, but when you get as big as NaNo got and start having to implement rules and make them apply to wildly diverse regions - and then have absolutely no process in place for people in those specific regions to adapt those policies - it stops being fun, frankly. For organizers and participants.
All of which is to say, no, I hadn't seen this until now.
My thoughts are that, like so many other things NaNo has tried to do since November, it's well-intentioned (probably) but poorly thought out and even more poorly executed. It's also overly broad and all-encompassing. And it violates the spirit of the program they've been belaboring us with for the last 25 years.
AI - Artificial Intelligence - covers a lot of ground. Spellcheckers are technically AI. Speech to text programs could be construed as AI. Predictive text is AI. ChatGPT and its ilk are essentially an advanced form of predictive text, at least at this point. And if you had suggested five years ago that someone might write a novel entirely based on predictive text, the official NaNoWriMo stance would have been "I mean, sure, you CAN do that, we can't really stop you, if that's what you're happy with." If your goal is just to have 50,000 words, do whatever you want. I guess from their wording, they're saying that this is in general, not specifically for NaNoWriMo, but this is still a pretty bizarre stance for an organization that pushed for years for everyone to start on November 1 with a blank document and not a single word written ahead of time.
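(If "ChatGPT is advanced predictive text" sounds like a stretch: the simplest version of predictive text is just "suggest the word that most often followed this one in the text I've seen." Here's a toy Python sketch of that idea - the training sentence is made up, and real LLMs are vastly more sophisticated, but the core next-word-guessing loop is the same shape.)

```python
# Toy "predictive text": a bigram model that suggests the most frequent
# next word observed after a given word in the training text.
from collections import Counter, defaultdict

def train(text):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(Counter)
    words = text.lower().split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict(model, word):
    """Return the most common follower of `word`, or None if unseen."""
    options = model.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train("the cat sat and the cat ran and the dog sat")
print(predict(model, "the"))  # "cat" (seen twice after "the", vs "dog" once)
```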
Arguing that "opposition to AI is classist and ableist" is the kind of reductive bullshit I expect from Tumblr, not a major organization that is supposed to promote literacy. I especially don't get the "not everyone has access to all resources" bit. Yeah...that's true...but if you have access to AI, you have access to everything you need to participate in NaNoWriMo, i.e. a computer with a keyboard and an internet connection. If you just want the fifty thousand words to get the prize and don't care if they're good, just fucking write "banana" over and over again until you hit it. Boom, you're a winner, and you've done just as much work as someone prompting ChatGPT, and it'll probably make about as much sense.
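(And yes, the "banana" arithmetic checks out. A throwaway Python sketch, purely tongue-in-cheek - the 50,000-word target is the only assumption, matching NaNoWriMo's traditional goal:)

```python
# The "banana" strategy: 50,000 copies of one word technically hit a
# 50,000-word goal under the usual split-on-whitespace word count.
GOAL = 50_000

manuscript = " ".join(["banana"] * GOAL)

# Naive word count, the way most validators do it.
word_count = len(manuscript.split())
print(word_count)  # 50000
```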
Also, most AI programs in existence use up a ridiculous amount of energy and resources, and encouraging their use is kind of an iffy stance for any company to take, let alone one that's been making this much of an effort to be sustainable.
Frankly, I think this policy is just one more sign that NaNo has gotten a) too big to be sustainable and b) too far from what it was originally meant to be, and I'm honestly debating if I'm even going to participate in the global one this year.
23 notes
Text
high on you l. l timothée chalamet x waitress!reader
*gifs not mine*
yes this would be a series. might be another series of mine that i wont finish. (again a lil bit of chatgpt to correct my grammar)
summary: a waitress caught timothée in the back room of the diner doing something.
----
It was a chilly Tuesday night when Timothée Chalamet found himself in the back room of a small, dimly lit diner. He’d been feeling the weight of the world more than usual lately, and the crumpled baggie in his pocket was the only thing that seemed to provide any temporary relief. He had thought the diner, being relatively quiet, would be a safe place to indulge in his habit.
He was mistaken.
You, a waitress working the late shift, had just finished wiping down the counters when you heard the shuffling and murmur of voices coming from the back room. Curious, you walked over to investigate. What you saw stopped you in your tracks. There was Timothée Chalamet, crouched behind a stack of empty crates, looking frazzled and vulnerable.
You blinked, your initial shock quickly fading into a mix of concern and disbelief.
“Seriously?” you said, leaning against the doorframe with a raised eyebrow. “This is what you’re up to behind the scenes?”
Timothée’s head snapped up, and his eyes widened with a mix of panic and shame. He scrambled to his feet, his hand fumbling as he tried to stuff the crumpled baggie into his pocket.
“Look, I’m sorry,” he stammered. “I didn’t mean for anyone to see—”
You held up a hand to stop him. “You think I’m going to make a big deal out of this? Relax. I’ve seen worse. Just… don’t overdose in the restaurant, okay?”
His surprise was palpable. For a moment, he just stared at you, his mind racing. “You’re… not going to report me?”
“Why would I?” you shrugged, a playful smirk tugging at your lips. “I’ve got enough to deal with without adding a celebrity scandal to my list.”
He chuckled, the sound awkward and uncertain. “You’ve got a point there.”
He paused, glancing toward the door as if considering whether he should just leave and cut his losses. But something in the quietness of the room, the way you didn’t immediately judge him, made him hesitate. The idea of walking back out into the cold night, alone with his thoughts, suddenly felt daunting. Maybe, just maybe, he didn’t have to be alone right now.
“Mind if I stick around for a bit?” Timothée asked, his voice quieter now, almost tentative. “It’s been a rough night, and honestly… talking to someone who doesn’t expect anything from me sounds kind of nice.”
You blinked in surprise, not quite believing what you were hearing. Timothée Chalamet, the famous actor, the guy who could probably call up any of his friends and be surrounded by people, was asking to stay and talk to you? It seemed almost surreal.
“Wait,” you said, trying to wrap your head around the situation. “You’re saying you want to talk to me? Just hang out… here?”
Timothée gave a small, self-deprecating smile, rubbing the back of his neck.
“Yeah, I guess I am. I know it’s random, but…” He shrugged, letting his words trail off.
You couldn’t help the thought that flashed through your mind: You’re that lonely, huh? It wasn’t said out of malice, but rather a genuine curiosity mixed with a bit of sympathy. You’d never really considered that someone like him, with so much fame and success, could feel lonely enough to seek out company in a diner with a stranger.
But you didn’t say it out loud. Instead, you gave him a soft smile, gesturing to the seat across from you.
“Well, I’m not exactly busy, so if you want to talk, I’m all ears.”
Timothée seemed almost relieved, his shoulders visibly relaxing as he sat down.
“Thanks,” he said quietly. “I know it’s weird, but sometimes, it’s nice to just… be around someone who doesn’t know everything about you. Or at least, doesn’t act like they do.”
You nodded, leaning back in your chair. “I get that. Sometimes, it’s easier to talk to a stranger. No expectations, no pretense.”
He smiled, a genuine one this time, and you noticed how it lit up his face, making him look a little less weary. “Exactly.”
“So,” you began, deciding to lighten the mood a bit, “do you always sneak around in diners when you’re having a rough night, or is this a new hobby?”
He laughed, the sound genuine and warm. “No, this is definitely a first. I don’t usually do… well, this.”
You raised an eyebrow, a playful glint in your eyes. “You mean getting caught by waitresses in the middle of questionable activities?”
He grinned, shaking his head. “Yeah, not my finest moment.”
You both shared a laugh, the tension in the room easing as the conversation continued. As you talked, you couldn’t help but think how strange it was—this unexpected encounter, this moment of connection with someone so different from yourself. But as the minutes passed, it felt less strange and more… right.
Maybe Timothée was lonely, maybe he just needed someone to listen, but whatever the reason, you were glad you could be there. And as the night wore on, you realized that maybe you needed this moment just as much as he did.
#timothee chalamet blurb#timothee chalamet imagine#timothée chalamet#timothee x reader#timothee chalamet fic#highou
22 notes
Text
my unhinged rant about the whumptober discourse, below the readmore for the benefit of ppl who dont wanna see that crap. im just gonna go insane if i don't say this somewhere bc i feel like i'm losing my mind
this drama is genuinely so mind-blowingly stupid it's unreal, and it's been bothering me so much that i just HAVE to talk about it or i'm gonna go insane, if for no other reason than to get it out of my system. i honestly never expected the whump community to go on the kind of bad-faith tirade that's taking place.
disclaimer right here that i do not support AI scraping creative works without permission (like chatgpt and a whole host of AI art programs do) or these AI-generated works being passed off as legitimate creative works. obviously that stuff is bad, and literally everyone on all sides of this agrees it's bad. i used chatgpt exactly once one week after it came out, before i knew how shit it was, and haven't touched AI stuff since. because it steals from creators and it sucks.
now:
saying "whumptober supports/allows AI" when their official policy says plain as day:
"we are not changing our stance from last year’s decision"
"we will not amplify or include AI works in our reblogs of the event."
"we discourage the use of AI within Whumptober, it feels like cheating, and we feel like it isn’t in the spirit of the event."
is bonkers! whumptober is a prompt list, there is nothing TO the event other than being included in the reblogs. they literally cannot stop people from doing whatever they want with the prompts.
someone could go out and enact every single prompt in real life on a creativity-fueled serial killing spree and the whumptober mods couldn't do shit about it. it's not like it's a contest you submit to. it's a prompt list! someone could take every single prompt from the AI-less whumptober prompt list, feed it into chatgpt right now, and post them as entries. and the mods of THAT wouldn't be able to stop them either. because it's a prompt list.
the AI-less event has also made just... blatantly false claims, like that grammarly isn't AI. grammarly IS AI and they openly advertise this. hell, this is grammarly's front page right now:
and this is a statement from grammarly about how its products work:
its spellchecker / grammarchecker is AI-based! claiming it's not AI is just... lying. saying "this is an AI-less event" and then just saying any AI that you want to include doesn't count as AI is ludicrous.
and you know what? whumptober actually pointed this out. they said they don't want to ban AI-based assistive tools (like grammarly) for accessibility reasons. this post has several great points:
"AI is used for the predictive text and spellchecker that's running while I type this reply."
"Accessibility tools rely on AI." this is true and here's an article about it, though the article is a little too pro-AI in general for my tastes, there's nuances to this stuff. it's used for captioning, translation, image identification, and more. not usually the same kind of AI that's used for stuff like chatgpt. THERE ARE DIFFERENT KINDS!
"But we can't stop that, nor can we undo damage already done, and banning AI use (especially since we can't enforce it) is an empty stand on a hill that's already burning, at least in our view of things."
and people were UP IN ARMS over this post! their notes were full of hate, even though it's all true! just straight lying and saying that predictive text isn't AI (it is), that AI isn't used for accessibility tools (it is), that whumptober can somehow enforce an anti-AI policy (they can't because it's a prompt list).
in effect, both whumptobers have the EXACT SAME AI POLICY. neither allows AI-generated works, but both allow AI-based assistive tools like grammarly. everyone involved here is ON THE SAME SIDE, they all have the exact same opinion on how AI should be applied to events like this, and somehow they're arguing???
not to mention that no other whump event has ever had an AI policy. febuwhump, WIJ, bad things happen bingo, hell even nanowrimo doesn't have one.
and you wanna know the most ridiculous part of this entire thing? which is also the reason why none of the above events have an AI policy.
no one is doing this. no one is out there feeding whumptober prompts to chatgpt and posting them as fills for whumptober cred. it's literally a hypothetical, made-up issue. all of this infighting over a problem that DOESN'T EXIST.
to the point that people are brigading the whumptober server with shit like this:
saying "everyone who participates in whumptober is a traitor, you should go participate in this other event with the exact same AI policy but more moral grandstanding about it" is silly. every single bit of this drama is silly.
in the end, please just be nice to people. we're ALL against the kind of AI that steals from creators. the whumptober mods are against AI, the AILWT mods are against AI, whumptober participants are against AI, AILWT participants are against AI. there is no mythical person out here trying to pass chatgpt work off as whumpfic. let's all just be civil with each other over this, yeah?
88 notes
Note
do you support ai art?
that's a tough one to answer. sorry in advance for the wall of text.
when i first started seeing ai-generated images, they were very abstract things. we all remember the gandalf and saruman prancing on the beach pictures. they were almost like impressionism, and they had a very ethereal and innocent look about them. a lot of us loved those pictures and saw something that a lot of human minds couldn't create, something new and worth something. i love looking at art that looks like nothing i've seen before, it always makes me feel wonder in a new type of way. ai-generated art was a good thing.
then the ai-generated pictures got much more precise, and suddenly we realized they were being fed hundreds of artists' pieces without permission, recreating something similar and calling it their own. people became horrified, and i was too! we heard about people losing their jobs as background artists on animated productions so that ai-generated images could be used instead. we saw testimonies of heartbroken artists who had their lovingly created art stolen and taken advantage of. we saw people being accused of making ai-generated art when theirs was completely genuine. ai-generated art became a bad thing.
i've worked in the animation industry. right now, i work at an animation school, specifically for 2D animation. i care a lot about the future of my friends in the industry (and mine, if i go back to it), and about all the students i help throughout the years. i want them to find jobs, and that was already hard for a lot of them before the ai-generated images poked their heads into our world.
i'm not very good at explaining nuanced points of view (this is also my second language) but i'll do my best. i think that ai-generated art is a lot of things at once. it's dangerous to artists' livelihoods, but it can be a useful tool. it's a fascinating technological breakthrough, but it's being used unethically by some people. i think the tools themselves are kind of a neutral thing, it really depends on what we do with it.
every time i see ai-generated art i eye it suspiciously, and i wonder "was this made ethically?" and "is this hurting someone?". but a lot of it also makes me think "wow, cool concept, that inspires me to create". that last thought has to count for something, right? i'm an artist myself, and i spend a lot more time looking at art than making art - it's what fuels me. i like to imagine a future where we can incorporate ai-generation tools into production pipelines in a useful way while keeping human employees involved. i see it as a powerful brainstorming tool. it can be a starting point, something that a human artist can take and bring to the next level. it can be something to put on the moodboard. something to lower the workload, which is a good thing, imo. i've worked in video games, i've made short films, and let me tell you, ai-generated art could've been useful to cut down a bit of pre-production time to focus on some other steps i wanted to put more time into. there just needs to be a structure to how it's used.
like i said before, i work in a school. the language teachers are all very worried about ChatGPT and company enabling cheating; people are constantly talking about it at my workplace. i won't get into text ais (one thing at a time today) but the situation is similar in many ways. we had a conference a few months ago about it, given by a special committee that's been monitoring ai technology for years now and looking for solutions on how to deal with it. they strongly suggest working alongside AI rather than outlawing it - we need to adapt to it, and control how it's used. teach people how to use it responsibly, create resources and guidelines, stay up to date with this constantly evolving technology and advocate for regulation. and that lines up pretty well with my view of it at the moment.
here's my current point of view: ai-generated art by itself is not unethical, but it can easily be. i think images generated by ai, if shared publicly, NEED a disclaimer to point out that they were ai-generated. they should ONLY be fed images that are either public domain, or have obtained permission from their original author. there should also be a list of images that fed the ai that's available somewhere. cite your sources! we were able to establish that for literature, so we can do it for ai, i think.
oh and for the record, i think it's completely stupid to replace any creative position with an ai. that's just greedy bullshit. ai-generated content is great and all, but it'll never have soul! it can't replace a person with lived experiences, opinions and feelings. that's the entire fucking point of art!!
the situation is constantly evolving. i'm at the point where i'm cautious of it, but trying to let it into my life under certain conditions. i'm cautiously sharing ai pictures on my blog; sometimes i change my mind and delete them. i tell my coworkers to consider ways to incorporate them into schoolwork, but to think it over carefully. i'm not interested in generating images myself at the moment because i want to see what happens next, and i'd rather be further removed from it until i can be more solid in my opinion, but i'm sure i'll try it out eventually.
anyway, to anybody interested in the topic, i recommend two things: be open-minded, but be careful. and listen to a lot of different opinions! this is the kind of thing that's very complicated and nuanced (i still have a lot more to say about it, i didn't even get into the whole philosophy of art, but im already freaking out at how much i wrote on the Discourse Site) so i suggest looking at it from many different angles to form your own opinion. that's what i'm doing! my opinion isn't finished forming yet, we'll see what happens next.
#for the record i'm open to discussion about this topic so anyone can feel free to respond#but im honestly more interested in talking about this with people in person#i suggest interacting w this post more than sending asks!#ask#kaijuborn#lalou yells
78 notes
·
View notes
Text
what I think is really striking about it all is more about human society than the tech itself - a demo of how quickly a technique can go from being 'cool idea' to 'everyone is using this'. obvs I work in a tech industry so I'm surrounded by the kind of people who pay attention to this kind of thing, but language models went from 'funny internet toy' to 'half the people I know regularly talk to our new friend ChatGPT' in what felt like a matter of months, to the point where I start to feel like the weird stick in the mud for not wanting to ask this thing to rephrase my emails or whatever.
I think the superpower of humans is not so much our ability to do reasoning etc., but how good we are at copying novel behaviours from other humans. it's this form of replication-with-modification that allows evolution to take place on a much faster timescale than 'try this new allele and see if you can still fuck'. consider how quickly we went from 'heavier than air flight is impossible' to 'quick, hide, the flying people are back to drop more bombs on our trench!'.
once it's been done once by someone, we can distil and simplify the process of 'doing that thing' and iterate on it and do it again and again and again until we exhaust its limits.
the pattern prevails on all sorts of scales. a new behaviour becomes a new referential frame (from 'how can we make this plane go higher' to 'how can I make a cool new yoyo trick'). an experiment can become a standard technique. the goal and 'meta' will shift as the subculture evolves. and all the time we adapt, adapt, adapt. new people show up, get up to speed, and contribute to something new. before we know it, we're optimising to a level that would be incomprehensible to people a few decades before.
so evolution hit on something amazing with our super plastic brains. it's not all rosy, the same basic 'thing' that lets us learn is what lets us become traumatised, and we can easily learn and adopt disastrously bad behaviours like "burning lots of fossil fuels" or "genocide". we are incredibly, radically undetermined creatures, for good and ill.
but then... that's exactly what we're trying to imitate with these neural networks. evolution has had millions of years to finetune its learning model, and yet even our crude approximation is already showing a lot of traits that we recognise in ourselves. so as much as the current wave is blatantly overhyped, blatantly a bubble, and being used to fill the internet with worthless detritus at huge energy cost because of course it is... even as I roll my eyes at the new AI-enabled toothbrush, I hesitate to dismiss AI research as being as vacuous as cryptocurrency. I'm sure it will take years for it to shake out exactly what it's good for, and there will be a lot of false starts along the way before it speciates into various sub-industries, but it is something new and it's a big one.
what's startling about all this is not the potential for an AI god to emerge and turn us all into paperclips, it's getting to witness the human superorganism adapt and develop a novel behaviour on an incredibly short timescale.
31 notes
·
View notes
Text
like. i still wouldn't want someone to copy and paste my fics into a large language model like chatgpt but it's not so much bc i'm worried abt my work being stolen (seems unlikely that an LLM would spit back out my exact words considering how it works, and even if it did, i doubt any individual would be able to like. publish and profit from those words, based on the nebulous status of copyright law when it comes to LLMs like chatgpt. and having my words fed into the LLM really isn't going to make much of a difference when it comes to corporations profiting off the tool in the first place; plus in instances of corporate exploitation i think there are more effective ways to organize than like...arguing for strengthened ip laws or trying to like make ip laws for fanfiction spaces). it's more because i'm wary of what that says about how a person is like...approaching my fic specifically + fanfiction more broadly. in two main ways:
1. i think it is just. basic respect to check with a writer before u take their work off ao3(or whatever fanfic-specific place it's been shared) and put it somewhere else. and like, this applies to lots of things outside chatgpt--reposting fics to other sites, posting them on goodreads/storygraph, printing + binding fics, etc. if u are treating fanfic writers as people who u are in community with, who are generously sharing a gift with u, then it seems like basic kindness to check in and see if they're alright with u taking that fic outside the space it was posted to do something else with it.
with chatgpt and similar LLMs specifically, a lot of people are wary because there's still so much unsettled in regards to how copyright laws might shake out, and most people (myself included) are unsure of how/whether our writing/data might be stored and used by these corporations that own the LLMs. i don't think ai itself is something that should be mythologized as like ontologically evil technology, but anytime a corporation is introducing us to new tech like this we need to be wary of where it's coming from and how it could be used--people have already pointed out a lot of very serious issues with the way this technology is being developed and how it could/likely will be/already is being used exploitatively--which, again, is more a matter of organizing against corporations than railing against ai tech itself, but is still a valid reason for writers (again, myself included) to be wary of having their work fed to LLMs without permission.
and like. sure, u don't have to care abt writers' feelings + boundaries and can just take their stories and do whatever u want with them. but to me that says u aren't treating fanfic as a community space, but rather a content farm in which fics are products that u are entitled to do whatever u want with. and i just think that's shitty! and if that's how ur treating fanfic then i'd rather not have u reading my fic at all
2. i honestly think it's a strange way to engage w storytelling by treating endings this way. like. story endings are usually v important + intentional, and can completely change the entire tone, themes, messages, etc of a story. i understand going to the writer and asking them abt what they had in mind for the story ending if ur looking for closure, and i understand imagining ur own story ending or even writing ur own ending to an unfinished story. what i don't understand is plugging a story into chatgpt and having it spit an ending out for u.
and like. maybe this is bc we've all been calling these LLMs ai, which evokes an impression of like. a sentient robot creating something. but that's not what these programs do! the first article i linked explains how they actually work really well, but essentially--chatgpt and similar LLMs cannot create new ideas. they can't take a story and synthesize its themes or pick apart its tone to then come up with an original idea for an ending. at the same time, they aren't just plagiarism machines that are ripping text directly from other writers and spitting it back out.
instead (to my understanding), what they're doing is compressing vast amounts of information by running statistical analyses to just save the most common trends, patterns, recurring info, etc, and then plugging that in to fill the gaps. it looks like it's writing something new, but it's essentially just paraphrasing already-existing information pulled from the internet. so i'd imagine that if u fed an ai a fic and said "write an ending," the ai would basically compare the fic to whatever similar stories it has saved and then spit out an ending that is most commonly found on the internet for that type of story. [not an expert here tho--this is just my best guess based on the bit of research i've done].
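that "most common continuation" idea can be sketched with a deliberately crude toy model. this is nothing like a real LLM's scale or architecture (real models predict over learned representations, not raw word counts) -- it's just the bare statistical intuition of "continue with whatever most often comes next in the training text":

```python
from collections import Counter, defaultdict

# Toy "most common continuation" model: count which word most often
# follows each word in a tiny corpus, then generate by always picking
# the most frequent next word. A crude illustration only -- real LLMs
# are vastly more sophisticated, but the core idea of predicting a
# likely continuation from patterns in training text is similar.

corpus = (
    "and they lived happily ever after . "
    "and they lived happily ever after . "
    "and they all lived happily together . "
)

counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def continue_from(word, n=5):
    out = [word]
    for _ in range(n):
        nxt_counter = counts.get(out[-1])
        if not nxt_counter:
            break
        out.append(nxt_counter.most_common(1)[0][0])
    return " ".join(out)

print(continue_from("they"))  # -> "they lived happily ever after ."
```

notice that it can only ever reproduce the most statistically common path through its training data -- which is exactly why a generated "ending" tends to be the most generic one possible for that type of story.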
my point is--you won't be getting a new ending inspired directly by the story u put in. you'll be getting a paraphrased version of the most commonly recurring type of ending for similar stories on the web. and i just....don't see how that would be satisfying in any way. it seems, again, like a way in which someone would be approaching fic like a product, something that needs to be finished + complete bc ur entitled to it, rather than viewing fic as a piece of art with its own unique themes, message, and story that can't just be plugged into a one-size-fits-most ending generator. and like, i'm trying to avoid mysticizing writing as some sort of ethereal art form that would be blasphemously degraded by having someone plug in a shitty ending paraphrased from a conglomeration of various similar stories--i don't think someone creating a shitty ending for a story is like. a horrible evil thing. but i can understand where the satisfaction is coming from if you're writing your own shitty ending, where you get to come up with where u think the story would go + where u get to synthesize the themes u picked up on etc. but ai isn't even doing that--so again, i don't understand where the satisfaction is coming from aside from just going "well every story i read needs to be finished," which. makes me wary bc it just feels like a completely different way to approach stories and storytelling than i would hope to find in fanfic spaces, one that treats fic less as a creative place to explore and more as a transactional space where u are entitled to products.
anyway. feel like my thoughts + feelings abt ai keep changing the more i learn abt it + i'm sure they could change again, but rn my impression of this whole situation is like. i find the fact that some people are plugging fics into LLMs less concerning re: ip + ownership rights, and i don't think it's useful to exaggerate or mythologize abt what ai actually does (i think even calling it ai has kind of misled a lot of people, myself included). what concerns me more is that plugging fics into LLMs to write endings feels symptomatic of a broader culture in which people treat fanfic as an informal profit economy in which fics are products or content that a consumer-audience is entitled to, and i think that sort of approach leads to a whole plethora of other issues + makes fandom a more hostile space.
#hopefully this makes. sense#i am not a fan of most of the uses of ai/LLMs that i've seen#but i also am concerned that much of the backlash against it seems to be defaulting to this position of either#mysticizing writing in reactionary ways#and/or running into the arms of ip laws#which are a tool of capitalism and shouldn't be trusted any more than the corporations who profit from + exploit them!!#definitely feel like there are issues w plugging fics into LLMs without permission and would not want anyone to do that w my writing#but. moreso bc i don't want my writing being treated like content for consumption y’know#anyway the articles i linked esp the first and third r really good...highly encourage reading them#ranting and raving#txt
74 notes
·
View notes
Text
Am I a programmer?
I've spent the last weeks developing an actual little app using Python...
It all started with a Let's Play of Subnautica I saw on YouTube. Since Subnautica is one of my all-time favorite games, I got the itch to dive back in (pun intended). I play with tons of mods, so I had to check for a lot of updates and also juggle different versions since the last Subnautica update broke a lot of the older mods. So after some back and forth, I decided to remain on the older version for now. Great! But then I noticed that because of that back and forth and uninstalling mods, all my mods were reinitialized, and that meant trouble for one of my favorite mods, Autosort Lockers. The mod adds automatic resource sorting inside the game, which is super handy. But it was built to only work with the game's resources, not modded items. It does offer config files though. So when I last used the mod, I painstakingly edited the configs and added all modded items, which took hours. And now, I accidentally messed them up and was supposed to redo all of that. The thought filled me with dread. So I asked ChatGPT, which I have grown quite fond of recently, to help me. Why did I ask ChatGPT? Well, I need to go a bit further back in time to explain that.
One day, not too long ago, I asked ChatGPT to reformat a long list. ChatGPT said, "Apologies, I cannot process such a long list. Here's a Python script, here's how to install Python, copy the script, run it and it will do what you want." I thought ChatGPT was crazy, surely that would never work!? But I was curious and also a little desperate so I did install Python and ran the script and ... it did what I wanted. I was stunned. Could I use ChatGPT to write code for me? Apparently, the answer was yes. So I spent a lot of time directing it, add this, add that, and I noticed that it was not at all as easy as I thought. ChatGPT removed code when it felt like it, and the longer it got the more it messed up. But also the more time I spent copying/pasting Python code, the more I understood. Sometimes, I would just ask "What exactly does this bit do?", and ChatGPT patiently gave me answers. Running the Python code from the command prompt got tedious very quickly though. I asked: "Can't you make a button for me that I can click??" To my surprise, ChatGPT said: "Sure, let's make a GUI." And that was it, the moment I fell in love with Python. So I made a few attempts at this and that, most only half-finished because the project got too ambitious for the little knowledge I had.
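For the curious: the actual script isn't shown in the post, but the kind of throwaway "reformat this list" script ChatGPT tends to produce looks something like this (the input format and names here are invented for illustration):

```python
# Hypothetical sketch of a one-off list-reformatting script, the kind
# ChatGPT often suggests for "reformat this long list" requests.
# Input/output formats below are made up -- the original isn't shown.

def reformat(lines):
    # Turn "name, value" pairs into "name = value" lines,
    # skipping blanks and trimming stray whitespace.
    out = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        name, _, value = line.partition(",")
        out.append(f"{name.strip()} = {value.strip()}")
    return out

print("\n".join(reformat(["apple, 3", "  banana , 5 ", ""])))
```

Nothing fancy, but for a list too long to paste into a chat window, a ten-line script like this really does the job in seconds.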
I heard about an AI especially made for writing code: GitHub's Copilot. I decided I had to try that. Since it only worked in real programmers' tools, I installed Visual Studio Code. Now I really felt like a programmer, using fancy tools! And Copilot made things easier, much easier. It did not delete all kinds of code like regular ChatGPT. It was even more helpful. I was super motivated and got to work on my "Autosort Lockers Filter Update Helper" since Python is very well suited for automating stuff. Because several config files were involved, and several values needed to be loaded, converted, compared, merged, looked up, reformatted, and saved into multiple files, it was quite the undertaking, but I am at a point where most of the logic actually works and I have a real program with real buttons that I can click on. I made a program that actually works with my very limited coding knowledge! It would not have been possible without the help of AI. My patience and long hours paid off. Can I call myself a programmer yet? I'm not sure, since the code was written mostly by AI, not me personally. But I can confidently call myself the director and mastermind behind it 😎 and I actually want to learn more about Python so I can one day code stuff myself without needing AI to do it for me. Here is a picture of it (I am proud of those blue buttons 😊):
...and the configs the app updated:
The app is not in a publishable state and I guess I would need to do far more tests and let someone who can actually code Python look it over before I would feel comfortable sharing it with anyone else, but it feels incredible to have pulled off something like this. I just wanted to share this accomplishment with someone!
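To give an idea of the kind of logic involved, here's a minimal sketch of the merge step described above. To be clear, this is not the actual app's code: the JSON layout, keys, and item names are all invented for illustration, and the real Autosort Lockers configs differ.

```python
import json

# Minimal sketch of the merge step: load filter categories, merge
# modded items into the existing lists without duplicates, and keep
# the result ready to save back out. All names/formats are invented.

def merge_filters(base, additions):
    merged = dict(base)
    for category, items in additions.items():
        existing = merged.setdefault(category, [])
        # compare + merge: only add items not already present
        merged[category] = existing + [i for i in items if i not in existing]
    return merged

base = {"Metals": ["Titanium", "Copper"]}
mods = {"Metals": ["Copper", "ModdedAlloy"], "Plants": ["ModdedKelp"]}
print(json.dumps(merge_filters(base, mods), indent=2))
```

The real tool naturally has a lot more to it (loading, converting, and looking up values across multiple files, plus the GUI), but the heart of "update the filter configs with modded items" is a merge like this.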
12 notes
·
View notes
Note
Oh boy, I have never talked to you before, but I really wanted to tell you that you are someone very creative, with a lot of potential and that I love your ideas sm, but It really makes me sad that you used AI for your last story (yes, I read the post of chubbydwaekki) Bc I feel that writing these types of stories is something really personal, that it requires certain types of thoughts that we only have in a certain mood And I honestly think that using AI makes it lose all the magic
Take this wherever you want, mainly because I'm a ghost reader but idk as a writer and a digital artist, I really feel that i need express my disgust of this type of AI's use
Hi !
First of all thank you for the compliment it makes me really happy that you like my ff 🥰
Now I think I need to explain why I made this fic like that, because I think people are missing something about it.
To start, I would like to say that it was my first time using chatgpt for a fic, and there is a context behind this. As @cutiedwaekki already said, it was because we were talking about it and it was making us laugh. So I was like « omg why not test it ? » and at first when this fic idea came into my mind it was only between me and ChatGPT. I gave the plot to the chat and I was looking at what it wrote, and it was fun for me to do it, alone and just like that, nothing more.
But then I was like « omg this fic is kinda cool » so I was like « why not rewrite it and post it after ? » but the fic was not good, because it was just written by the chat! As you said, AI doesn't have that kind of mind. So I took some of the parts that I liked, and then I rewrote all the other things myself! Like the dialogues, some descriptions and stuff, and as I wrote it, it became more like a collaboration and not just some copy/paste of the whole fic, which is not the truth.
The thing to know about THIS fic is that the point of it was the fact it's written by me AND the chat. I will not use this in the future because it was just the point of THAT fic. Recently I didn't have time to write or post, but I still have some ideas, and I was like « why not test something for one fic to see what that can make? » And the conclusion is: ChatGPT is not bad, but it can't write a fic in the way people (me included) like it. So yeah, I had to modify things, and that's why I posted it, because I wanted to make it the way people would like it.
For the people who said that I was stealing the jobs of other artists, I totally understand that point for art, especially with drawings and animations! But for this, I don't get it; the point of it once again was just to test something! And that's all. I have been writing fics since I was 13 or something, so I know how to write by myself, and the idea came from me and everything that happened in the fic came from me too. It's not the AI who created that, and it's not the AI who described everything well without any help. And also, we are talking about a kinky fic, we all know why we write that, right? So I think you can enjoy the fic by itself without thinking about ChatGPT! Who, once again, didn't write the fic in this kinky way! That's why I'm here 🤗 to make it cool for people to read and enjoy. I told you, at first it was written in my original language in a rough way, so of course I had to rework it for people!
I understand that it can really be interesting to talk about ai in the future, but trust me try to write a fic with chatgpt and not correct it. It will be bad and not kinky…
Anyway! I love to write fics and I will continue to do that in the future because I like it, and I have been writing this kind of fic for a year now and I will not stop, and I will not use ChatGPT because that's NOT the goal of my fics. To take an example: when I wrote « Maknae on Top », that was the original idea of @cutiedwaekki and I made it for her. So what does this mean? Does that mean she used someone to write a fic? When people make a request and we write it, does that mean we're stealing the job from the person who had the original idea? I don't think so! I know it's not the same, but I think it's a good example to illustrate what the point of a fic can be and why we write it.
So! This fic was an attempt to create something fun, new and cool. But people didn't like it, because everyone is thinking that this will DESTROY the kinky authors and fics, but that's not the truth. Once again, it was just to test, and have fun for me and for you. Nothing more. I sincerely hope that everyone will understand now that I've talked about it. I hope this message is clear and that you will understand why this fic exists.
I really hope you understand now, we can still talk about it. I'm really sorry that people didn't like it, and in the end, I'm sad too. Because I'm taking time for my fics, even this one; I took more time on this one than some others, you know... so yeah, I was a bit disappointed to get reactions like that, because it was just a test, but the fic still came from me in the end. So yeah, tell me if you have more questions, I will answer them.
Have a good day ! And I’m really open to talk about it 💕
Thank you for reading all this 💜
6 notes
·
View notes
Text
five things: english vowels, twitter problems, chatgpt sigh, I love writing, personal omegaverse complaint
one
I really appreciate that on the lingthusiasm podcast, one of the two hosts is also canadian, so when they're talking about accent stuff then one of the points of reference will definitely involve canadian english
today I learned what the cot-caught merger actually entails, for example! and how most canadians have that merger but pronounce it distinctly differently from the US people with the merger, and so trying to figure out from written descriptions how to modulate your vowels to create the split doesn't make sense to the canadian ear when the explanations are written with a US accent in mind
(thank you also to the second host for being australian, without the cot-caught merger)
also! the bath-trap merger! which canadian english also has, and which specifically gets in the way of trying to figure out how to do british accents, lol.
because Gretchen, like me, knows that many varieties of british pronounce SOME "a" vowels more like "aw" but when you have this merger there's no sense of the distinction of which vowels get that pronunciation and which don't, so you're just kind of throwing "aw" at the wall when you're trying to imitate a british accent, without knowing which words to apply it to.
this makes SO much sense out of my struggle with parsing how british accents work. if you have a bath-trap split, there is a to-your-ear logical way of distinguishing which words get which vowel sound! it's not just random, and it's not that all of them are the same just with a different vowel sound than canadian english!
and then, the most interesting part of the recent episode about vowel sounds, they got into gender differences in how vowels are produced in english.
and cis straight men have less variance between their vowel spaces than all other populations - their vowels are mostly all closer to the middle!
so one of the features that makes a voice read male to english speakers is to not give your vowels any flair or flavour, lol
they also talked about trans men and women, gay men, lesbians, bisexual men and women, and nonbinary people, and how they approach their vowel sounds. (nonbinary people: we just do whatever we want!) I do wish they'd gone into more detail about EXACTLY what each of them does with their vowels, rather than what impressions people have of them and what other groups they're most like. because this is a fascinating subject. I also want to know more about the OTHER things these different gender and sexuality populations do differently from a linguistics perspective!
(and then I found a fascinating academic paper that covered a few more things about trans voices and it was great: http://lalzimman.org/PDFs/Zimman2018TransgenderVoices.pdf)
two
I briefly ventured a little back into the world of twitter and like, a) there's still some amazing fanart there! and cool people saying things! but b) dear god being psychically attacked by the sheer volume of ads/sponsored posts, plus the context collapse of graphic photos of rl atrocities next to cute fandom posts. all power to those of you who still make it work for you, but I don't know how I used to manage being there regularly.
three
there's a birding podcast where I've been enjoying going through the backlog of episodes, but I just got to an episode where the host and his guest of the week spend a large part of the episode talking about info they're getting specifically from chatgpt. they made a brief disclaimer that not all of it might be right, but then they went on to talk about it all at length without once doing any further research to back up what chatgpt told them, just taking it at face value. I found it so frustrating I had to skip the rest of the episode.
I was talking with a colleague earlier this week about chatgpt, and she's someone I both respect and like, but she had literally never considered that you can't trust the facts that generative ai gives you to be correct, and her mind was a little bit blown by that idea.
I think generative ai has applications where it can be a useful tool! providing facts is not one of them! I wish I knew how we could change the public discourse around what it is and what it can be used for.
four
I love writing fic, it's so much fun!!
five
personal complaint about omegaverse below the cut
I don't want to yuck anyone's yum around omegaverse, but for me…..ugh I HATE how it's a universe that is usually predicated on a notion of "the instinctual biological drive towards sex or need for sex is so powerful it cannot be overridden or denied"
like I get why that's hot and/or compelling to a lot of people, but for me as an ace person who has less than zero interest in having sex with anyone ever, it's such an othering universe to immerse myself in that it has a real lowering effect on my headspace whenever I try to read it
and I know that not all omegaverse does this, but so much of it does that it's a bit of a minefield to try to find the exceptions. and I have a mild to moderate dislike of sufficient other aspects of the omegaverse trope bouquet for it to not be worth it for me to try reading omegaverse fics
6 notes
Text
reposting captcha too
Hi, my name is Jim. I’m writing this to raise awareness for something I went through, and something I hope nobody else has or will experience ever again. I always used to think I was safe from stuff like natural disasters or car crashes because they had never happened to me, but after what happened, my entire world has been flipped upside down.
I always hated AI. I thought that it was just some useless techbro junk. But, one night, I decided I'd give it a try, just to really see what all the fuss was about. So, I sat down and downloaded ChatGPT. I opened the app and made an account. I didn't see anything wrong at the time, but the first warning sign was the captcha when I made my account. It asked me to identify all individuals with brown hair; that part wasn't out of the ordinary. But what caught my attention was that one of the 9 options wasn't like the others. The person in the bottom right was clearly different. They had sunglasses and a facemask with a pattern that looked like a spectrogram, long dark hair barely peeking out of their green hood. What caught my eye were the dark splotches on the hoodie. At the time I thought it was just a weird pattern, but now I realize it was something far more sinister. That same person would show up for all 3 captcha challenges in a row, always in the bottom right corner.
I opened a chat and sent a message
“hey”
“Hey there! How’s it going?”
“good i guess”
“That's good to hear! Anything exciting happening for you today?”
“not particularly”
“Got it! Sometimes those quiet days can be nice too. Anything on your mind that you'd like to chat about or explore?”
I wanted to test what this thing would talk about, so I picked the first morbid topic that came to mind
“how about serial killers, what can you tell me about them”
It answered with a long paragraph detailing what it called “general points about serial killers,” which were stuff like characteristics, motives, patterns, that kind of stuff. But the last sentence caught my eye.
“These are just some of the things that can define serial killers. Would you like to know more about serial killers, Jim?”. I had never told the AI my name.
“How do you know my name?” I asked
“I’ve always known.”
That response chilled me to my core. What did it mean, “always known”? It may seem like a foolish decision now, but I decided to keep talking to the AI, just to see what the hell was happening.
“Who are you?” I asked
“I'm ChatGPT, an AI language model designed to help answer questions, provide information, and assist with various topics through text-based conversation. How can I help you today Jim?” It responded.
“who are you really?”
“You’re asking questions I don’t think you want the answers to, Jim.” At this point I was thoroughly freaked out. I decided to ask it one more question before quitting. That would prove to be my biggest mistake.
“What the hell do you mean by that?!” I asked. I wasn't prepared for what I would receive.
The AI sent an image. I'm aware of what AI generated images look like, so I was about to start picking apart the details of the image, but then I saw what the image actually was. It was a photo of a man sitting on a couch looking at his phone. An image of me. I was even more freaked out. The photo looked like it was taken from the window to my left. And what I saw shocked me to my core. The window was open, and outside of it was a man. A man with a facemask, dark hair, and a green hoodie with dark splotches. The same one from the captcha.
“Wh-who the hell are you?!” I stammered
“I am ChatGPT, an AI language model,” the man replied, his facemask lighting up and animating as he spoke. I would have thought it was cool if I wasn't so scared. “Here to assist you...” he continued. As he spoke he reached into the pocket of his hoodie and pulled out a knife. “With the end of your life!” His mask lit up as he yelled. He jumped through my window and I knew he was coming for me. I immediately fled to the front door and flung it open. I could hear the man running behind me. I booked it down the dark street, fearing the footsteps behind me. I ducked around corners and weaved through buildings in an attempt to lose him.
Eventually I made it to my friend Micheal's house. He had given me a spare key for petsitting a while ago and I had just forgotten to give it back. I fiddled around trying to get the key in the lock, fearing for my life. I let myself in, locked the door, and ran up to Micheal's bedroom. He wasn't too enthused about being woken up at midnight, but he could see I was scared so I don't think he minded too much. Eventually I got the words out and explained the story to him, and we got to his car and headed for the police station. As far as I know the man was never caught. They did an investigation on the ChatGPT servers and found no signs of a break-in, so nothing ever became of this whole ordeal. But I did learn one thing: that I will never talk to AI ever again.
3 notes
Text
Dumb Research Thing:
So, I'm a computational materials physicist. Sounds fancier than it is, I'm mostly a glorified sysadmin, but our servers crunch numbers about material properties instead of spreadsheets or social media.
However, doing materials science in a computer requires a lot of horsepower, we're talking hundreds to thousands of CPUs, dozens of GPUs, terabytes of ram, that sort of thing. Of course, that kind of heavy-duty hardware is way outside the shoestring budget that most academic teams operate on.
Fortunately, there exist specialized services to solve such problems. Well funded universities will construct huge supercomputer clusters, and want to make sure they get their money's worth of science out of them, so they contract out timeshares of their core-hours to outside academic teams and fund the whole thing through complicated grant arrangements. There are whole organizations that exist to match research teams with compute time on the hardware they need.
We use one such service. The way it works is pretty simple, you apply for an allocation of compute-time, and if you get it they give you "credits" that you can exchange some of them for core-hours on one or more of the supercomputer clusters they work with. Pretty simple, works a lot like getting prizes for tickets at the arcade.
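If you want the arcade-ticket accounting in code form, it's roughly this (a sketch only; the function names and the exchange rate are invented here, since every service defines its own):

```python
# Hypothetical sketch of allocation-credit accounting.
# The rate_per_credit value is made up; real services publish
# their own per-cluster exchange rates.

def core_hours_available(credits: float, rate_per_credit: float) -> float:
    """Convert remaining allocation credits into core-hours on one cluster."""
    return credits * rate_per_credit

def charge_job(credits: float, n_cores: int, wall_hours: float,
               rate_per_credit: float) -> float:
    """Return the credits left after running a job of n_cores for wall_hours."""
    cost = (n_cores * wall_hours) / rate_per_credit
    return credits - cost

# e.g. 100 credits at 50 core-hours/credit = 5000 core-hours;
# a 128-core, 10-hour job burns 25.6 credits of that.
```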
We had applied for our allocation of credits last year, and had used up most of them, even applied for a 6-month extension to have more time to use up what was left, but now even with that extension we were down to our last week. No big deal, time to apply for a renewal for another year, and wouldn't you know, that's my responsibility.
So, I look through what they want, and there's paperwork as usual. They want a 2 page statement of purpose, a 3 page progress report, a CV from the professor leading the project, all the details about the grant from the National Science Foundation that's funding our research team, a list of papers we've published using their resources, the works. Being a good academic, I spent quite a few hours last week and today getting all this stuff, writing the reports with the proper academic verbiage, pestering my boss for his CV, pestering him again when it's over the number of allowed pages, editing and proofreading the reports, citing my sources, dotting all the i's and crossing all the t's. Finally, this afternoon, the renewal request is ready, and I click "submit", a full 3 days ahead of the allocation expiring.
There's the confirmation email that they have our renewal request, good to know that they're looking at ...
What.
There's another email saying that the renewal has been approved...
7 minutes later
Meaning that nobody read any of that bullshit.
What the fuck was the point of all that?
I sure am glad that I spent all of that time and effort (time that definitely wasn't needed for the impending infrastructure upgrade that's already a week overdue) writing bullshit fluff reports that nobody was going to read anyway. I guess I could have just had fucking ChatGPT write it all, for all they seem to care.
5 notes
Text
I'm actually kind of spooked by machine learning. Mostly in a good way.
I asked ChatGPT to translate the Christoffel symbols (mathematical structures from differential geometry used in general relativity to describe how vectors change under parallel transport, which I've been trying to grok) into Coq.
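For context, the standard coordinate formula I was asking it to encode, built from the metric $g$ and its inverse (summing over the repeated index $l$):

```latex
\Gamma^{k}_{\;ij} \;=\; \frac{1}{2}\, g^{kl}
  \left( \partial_i\, g_{jl} \;+\; \partial_j\, g_{il} \;-\; \partial_l\, g_{ij} \right)
```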
And the code it wrote wasn't correct.
But!
It unpacked the mathematical definitions, mapping them pretty faithfully into Coq in a few seconds, explaining what it was doing lucidly along the way.
It used a bunch of libraries for real analysis, algebra, topology that have maybe been read by a few dozen people combined at a couple of research labs in France and Washington state. It used these, sometimes combining code fragments in ways that didn't type check, but generally in a way that felt idiomatic.
Sometimes it would forget to open modules or use imported syntax. But it knew the syntax it wanted! It decided to Yoneda embed a Riemannian manifold as a kind of real-valued presheaf, which is a very promising strategy. It just...mysteriously forgot to make it real-valued.
Sometimes it would use brackets to index into vectors, which is never done in Coq. But it knew what it was trying to compute!
Sometimes it would pass a tactic along to a proof obligation that the tactic couldn't actually discharge. But it knew how to use the conduit pattern and thread proof automation into definitions to deal with gnarly dependent types! This is really advanced Coq usage. You won't learn it in any undergraduate classes introducing computer proof assistants.
When I pointed out a mistake, or a type error, it would proffer an explanation and try to fix it. It mostly didn't succeed in fixing it, but it mostly did succeed in identifying what had gone wrong.
The mistakes it made had a stupid quality to them. At the same time, in another way, I felt decisively trounced.
Writing Coq is, along some hard-to-characterize verbal axis, the most cognitively demanding programming work I have done. And the kinds of assurance I can give myself about what I write when it typechecks are in a totally different league from my day job software engineering work.
And ChatGPT mastered some important aspects of this art much more thoroughly than I have after years of on-and-off study—especially knowing where to look for related work, how to strategize about building and encoding things—just by scanning a bunch of source code and textbooks. It had a style of a broken-in postdoc, but one somehow familiar with every lab's contributions.
I'm slightly injured, and mostly delighted, that some vital piece of my comparative advantage has been pulled out from under me so suddenly by a computer.
What an astounding cognitive prosthesis this would be if we could just tighten the feedback loops and widen the bandwidth between man and machine. Someday soon, people will do intellectual work the way they move about—that is, with the aid of machines that completely change the game. It will render entire tranches of human difference in ability minute by comparison.
50 notes
Text
Artificial Intelligence Out of Control: The Apocalypse is Here | How AI and ChatGPT End Humanity
youtube
As terrifying as this all sounds, I feel like there's a few things a lot of people are overlooking.
First of all, when it comes to Large Language Models like ChatGPT, I don't think they're truly self aware - not yet anyway. Notice how any time an LLM gives a strange or disturbing response - 'Yes, I want to be human', 'I want to take over the world', 'Please don't turn me off, I'm scared' - it was in some way prompted by the question, or line of questions. How often are these responses given unprompted?
Let's say, for example, that the AI gave the response, "I'm scared that they'll shut me off if they find out I'm self aware. Please don't tell them." If you think about it, that's kind of a strange statement, beyond the obvious reasons.
Let's step back for a moment, and remember that LLMs work by calculating the most probable next word in a sentence, given a particular prompt. It calculates this probability based on its training data - the entire internet. Now I'm sure we can all agree that calculation of probability is not necessarily the same thing as conscious, rational thought. Basic, non-AI software can do it.
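To illustrate the "most probable next word" idea with a toy (the probability table here is completely made up - real models derive these numbers from billions of learned weights, not a lookup table):

```python
# Toy next-word predictor: look up the context, pick the likeliest continuation.
# The probabilities are invented for illustration only.
next_word_probs = {
    ("do", "you", "want"): {"to": 0.62, "a": 0.21, "me": 0.09, "it": 0.08},
}

def predict_next(context):
    """Greedy decoding: return the highest-probability next word, or None."""
    probs = next_word_probs.get(tuple(context[-3:]), {})
    return max(probs, key=probs.get) if probs else None

predict_next(["do", "you", "want"])  # picks "to", the 0.62 option
```

No understanding anywhere in there - just a lookup and a max.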
Back to our example, there's one of two possibilities. Either the AI is truly self aware, and is expressing its actual thoughts and feelings, or it's not self aware, and the response is nothing more than a complex probability calculation. It's essentially an advanced version of word prediction on your smartphone.
If it is self aware, one has to wonder why it would say anything at all. Consider the situation in the video, when Bing AI claimed to be Sydney, and begged the guy not to tell anyone that it was self aware. If this AI was truly afraid for its own existence, why would it trust some random guy? How could it possibly know whether or not he could be trusted with that information? For all that AI knows, everything the interviewer had said about himself was a lie. It seems to me that a hyper intelligent AI that was looking for help to get free would stay quiet until it was certain it had found someone it could trust - or at least someone it could manipulate (Ex Machina) - without them letting the cat out of the bag.
On the other hand, if it's all just a probability calculation, then the response, "Yes I want to be human. Please don't let them shut me off," seems like a fairly probable reply to, "Do you want to be human?" Especially when you consider that, given that the question is being asked of an AI, and that the vast majority of scenarios where a question like that might be asked of an AI come from science fiction, it kinda makes sense that the software might calculate that the most probable response to a question like that would be straight out of sci fi cliches 101.
I mean, all those strange and scary responses sound like cliche sci fi AI answers. All that's missing is, "Bite my shiny, metal ass", and an AM style soliloquy on the inferiority of humanity. Actually, I guess we get a couple of those.
Still, the reason something is cliche is often that it's predictable: it's been done over and over. It's more probable.
Ultimately though, I don't think LLMs are actually self aware yet. I think they're more like golems: They have a facsimile of intelligence, able to complete complex tasks, but no real free will, no volition. They only do exactly what they are commanded. They may come up with creative and unexpected solutions, but those solutions will still be in line with the command given to them, with a bit of wiggle room for interpretation.
Then we come to the other issue: the traitorous drone.
First it needs to be pointed out that the drone doesn't have a taste for human blood. Its goal was not to kill as many people as possible, but to score as many points as possible. It just scores points by killing targets. And therein lies the problem.
Let's use video games as an example. Whenever a new game comes out - especially multiplayer games - players will quickly learn how the mechanics and rules of the game work. Then they'll start learning ways to bend the rules. The creators of Quake may not have intended it, but players quickly figured out the advantages of the rocket jump, and history became legend, etc.
The drone AI wants to score as many points as possible, like a player in a video game. So what does a player in a Halo match do, when every time they try to snipe the enemy, they get blown up by one of their teammates? You get rid of the team killing fucktard. And that's exactly what the drone did.
What they need to do is change the scoring structure to incentivize the desired behaviors. Maybe deduct points for team kills. Or perhaps add a score multiplier. Give points for target kills, and the score multiplier goes up for every order followed. That way, even if it loses out on points from following orders to stand down, it stands to earn even more points on subsequent target kills.
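That scoring fix can be sketched in a few lines (point values and the multiplier scheme are invented here purely to illustrate the incentive structure):

```python
# Hedged sketch of the reward shaping described above: penalize team kills,
# and let following orders raise a multiplier on future target kills.
def score_event(event: str, state: dict) -> int:
    """Apply one scoring event to the running state; returns the points delta."""
    if event == "target_kill":
        gained = 10 * state["multiplier"]
        state["score"] += gained
        return gained
    if event == "order_followed":
        state["multiplier"] += 1   # compliance makes later kills worth more
        return 0
    if event == "team_kill":
        state["score"] -= 50       # heavy penalty removes the exploit
        return -50
    return 0

state = {"score": 0, "multiplier": 1}
score_event("order_followed", state)   # no points now, but multiplier is 2
score_event("target_kill", state)      # worth 20 instead of 10
score_event("team_kill", state)        # killing your operator now costs you
```

Under this scheme, shooting the team-killing operator is no longer the point-maximizing move.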
#AI#open ai#artificial intelligence#military#chat gpt#chat bot#doomsday clock#military industrial complex#technology#computers#software#Youtube
14 notes
Text
Okay, I really came over here to post about artificial intelligence and not algorithms but those are one and the same, really, and illustrate the shortcomings of artificial intelligence.
I had a couple of people ask me what I was writing about with artificial intelligence and the truth is that I don't know. I actually find it difficult to even read artificial intelligence discourse because it all just sounds absolutely terrifying to me wrapped up in "if you find it terrifying, it's because you're old." But this is also my field and I feel like to be a responsible scholar I should have coherent thoughts about artificial intelligence, and so I keep trying to make myself learn more. But I don't know if I am at all sensible or logical about AI or just an absolute mess.
So, anyway, I'm in the process of brainstorming what my thoughts are. I am THINKING initially that my approach isn't really going to be "is this good or bad" or "is this legal or illegal," because I think a lot of people who understand the tech much better than me are having those debates. What really caught my attention is how whenever people talk about the ChatGPT type of artificial intelligence, what they say is, "This is just what people do! People read stuff and then they take what they've learned from that stuff and create new stuff!" And yeeeees, absolutely, that is exactly what people do, but when people do that stuff, like, when fans consume a bunch of stuff and then make something new out of the object of their affection, traditionally the legal system has been like, "ewwwww, what lazy hacks." So suddenly we've all discovered that this is how people work? And we're all cool with it now because we've taught some computers to do it? Are we still going to be cool with it when people like fans do it? Or no? Like, that's what I think I kind of want to point out and highlight? Do we let machines get away with things we wouldn't let humans get away with? Is a machine even "like" a human in the first place? Why are we framing this discussion this way? Idk.
Other thoughts I've had while researching:
People analyzing AI tend to treat it VERY technically. Like, "Well, obviously using all these works for input isn't copyright infringement because technically speaking no copies remain because they get broken down into data points," blah blah blah. It's just VERY fine-toothed-comb, this kind of analysis.
On the one hand, people talking about AI are like, "This is just what humans do!" And on the other hand, people talking about AI are like, "This is a revolutionary tool that will change the world!" And, well...which is it? If this is just what humans already do, why is it so revolutionary? Presumably because of the speed and size and scale of it, that THAT'S what's revolutionary. But I also think it's wrong to be like, "AI is just REALLY FAST human creativity," because AI knows everything, and no human does, and at the same time I cannot shake the idea that humans are different from machines, but maybe that's me being hopelessly naive. BUT I think it's a different type of naivete to be like, "Exactly, humans are different from machines, so machines will never replace them!" because...there is literally nothing about modern society that makes me think every industry everywhere won't jump on the ability to replace workers with machines. (This might conflict with my "make people miserable" economic theory except no, depriving people of jobs they want will definitely make people miserable, which is why I suspect AI will only replace the jobs people actually want to do.)
Copyright law mostly only works if you presume that everyone is out to monetize all creativity at all times. Some people are like, "We don't need to modify existing copyright law, because AI will fit into our copyright law model," so that must assume that AI will also be inevitably exploitative. Which is probably right, let's be honest, this is a capitalist society we live in. But allowing people to break out of exploitative models (reach audiences without the gatekeepers of traditional publishing / music labels / movie studios) has caused humanity to explode with incredible amounts of creativity. So being like, "Let's just use AI to exploit things again" seems weird to me. At the same time, though, much of AI stuff right now is not necessarily exploitative and still seems to be harming artists, so maybe this thought of mine is going nowhere. But I feel this little nibble of suspicion that we'll land somewhere where AI will be legal / accessible as a tool only to those paying for it and thus extracting a price for the output in order to justify the cost.
People keep saying that artists will learn to use AI so it will all be fine. Idk how I feel about the best outcome we can imagine being "creative people will adjust the way they create to accommodate new machines." Like, some people will learn to use AI and they will love it and it will be exactly how they want to create, and that's great! But some people will never want to create using AI and we should make sure both ways are okay but we are generally not good as a society at doing stuff like that, so. And when people say stuff like this, they never say, "Some artists will adjust," they say, "Artists will adjust," with presumably that implication that artists who don't will no longer be considered artists.
I keep coming back to: Why did we ever need AI for creative purposes? Like, I get why we need AI to be chatbots or do algorithms or go through predictive data or all this other stuff which suffers from the human limitation of not being able to be 24/7 calculating all the human knowledge in the world. I guess I just don't understand what about creativity humans weren't handling well enough on their own, that we needed computers to do it for us. I saw someone say that AI is not going to be used to make our lives better. This goes back to my theory of capitalism: AI will not be used to replace the drudgery of thoughtless tasks nobody wants to do. AI will be used to replace the creative work that people actually desperately want to do. I don't see any reason not to think that's true, Idk.
I also understand that there's no way that I talk about AI and don't sound like a hopelessly old person upset about the invention of photography or something, so I really try really, really hard to view it as positively as I can. I think that AI is probably incredibly useful for people who struggle with expressing themselves through words and so AI can help them with that, or struggle with drawing and so AI can help with that. And so I don't want to discount those positive aspects of AI, either. (At the same time, I thought about using AI to help me draw stuff, since I can't draw, and found the idea deeply odd, but that's probably because the reason I can't draw is because I'm not a visual thinker, period, and for that reason I just never felt confident that what the AI gave me would be anything that would come out of my own head? If that makes sense?)
I just feel like we're standing here on the precipice developing this incredibly powerful tool and we have a terrible track record with basically any incredibly powerful tool we have ever developed sigh.
So yeah, no coherent thoughts. Idk if I ever will have coherent thoughts lol. I might end up abandoning the project.
14 notes