#I say this but I only wrote one within the computer
fights4users · 2 years ago
Text
Thinking about Tron and Yori again… expect an angsty but also really sweet fic soon. I’m actually getting a handle on writing in system stuff…
Tumblr media
7 notes · View notes
bbokicidal · 5 months ago
Text
"Are you serious...?" - Angst! [Hyung Line SKZ]
Tumblr media Tumblr media Tumblr media
Notes : These are all obviously fictional situations, the red flags are just based off of habits we know they have (like Chan's need to be needed, Changbin being blunt/honest.) This post isn't me saying I think they have these red flags, it's just a fun angsty prompt I wrote down. If you don't like it, scroll and don't read.
If people like this - a maknae line will be written! If not, prolly not lol.
Warnings : Angst with no comfort, red flag behavior - some of these aren't even that bad or could be misunderstandings but still.
Maknae Line | "Good Luck, Babe." Part Two!! Here!
Tumblr media
BangChan - Brushing off/Having the wrong priorities
One time, it was him forgetting a dinner date - the next, he was staying at the studio late when he was supposed to be meeting your parents for the first time. You let it slide because ultimately you understood that his job took up a lot of his time, and honestly? It wasn't easy to forget about, but he had a tendency to take care of you and make up for it with quick gestures before he left the apartment or when he came home; soft back hugs, quick cuddles before he fell asleep, or kisses in passing. Lately, however, he's been slacking. He'd begun to shrug you off any time you'd touched his arm or hand, nudging you away while he typed on his laptop. He'd tip his head away from yours while lying in bed together or he'd sit further away on the dressing room sofa.
The tipping point was when he was getting ready to go on stage and was standing in wait for the others to be ready. There were still five minutes and Chris looked a bit jittery, so you figured a quick hug or kiss would help ease his nerves. However, as soon as you approach and reach to touch his arms, he steps back and keeps his eyes trained on his phone. You reach again, hesitant, and his brow furrows as he maneuvers to the side to get away. "Don't touch me."
Your lips pop apart in surprise. "...Are you serious?"
He looks over, eyes briefly wandering your face before he reaches to fix his in-ear and walks away to the door, disappearing around the corner and leaving you standing there alone. Even the soft touch of Felix's hand on your back as he passed by was warmer than anything you'd felt from Chris in the last two months.
Lee Know - Keeping secrets / Prioritizing Privacy within himself
Minho had a very, very bad habit of not telling you things. In this instance: that he was leaving for tour in two days.
A world. fucking. tour. The only reason you didn't know about it was because you hadn't been out of your home in the last few weeks unless it was for a quick coffee at the cafe or to grab lunch with a friend. Work was heavy during this time of year and as someone who worked remotely, you often spent grueling hours in your office on your computer - hunched, tired, head pounding and back sore.
So you would think that when you entered your bedroom one evening after just finishing up sorting files in your office, you'd be happy to see your boyfriend already there. And you were for a moment, until you realized he was packing three rather large suitcases full of his clothes and necessities. He looks to you, then away, wordless.
"Are.. you.. moving out, or something?" You breathe in a laugh, eyes wandering over Minho as he folds a t-shirt and tucks it into his suitcase with the others.
"No. I have to bring all of my luggage to the company building tomorrow so they can have it at the airport when we leave for Australia."
"Australia?" Your brows quirk. "When -- Why --"
"Tour." He stops his movements to stare over at you, a hint of irritation evident on his face. "We're going on tour for six months."
"Six--" You breathe out, eyes widening. "Six months. And you didn't think to tell me?"
Minho moves to drop a pair of pants in his suitcase. "I would've told you if you could handle the news, maybe. Every time I mention leaving all you do is whine and pout about how long I'll be gone."
"I get upset, yes, what girlfriend wouldn't be upset that her boyfriend is leaving for a week or two? But six months, Minho, I --"
"Don't start." He all but huffs out the words, shutting you up immediately. Minho turns away to continue folding items of clothing on the shared bed and as you watch him do so, you stand and have to wonder if you want to be there when he returns home from the tour.
Changbin - Not knowing the difference between being rude and being blunt
He didn't seem to understand when to stop. Changbin had a tendency to be honest, sometimes to a fault, though you never seemed to complain about it because most of the time it wasn't a big deal. He called Jeongin out for saying the wrong word when singing, or blatantly threw people under the bus when a joke was taken too far.
And he was like that with you, too. He would be honest with you when you asked his opinion of something - was the shirt unflattering? Were you being too loud? Was your makeup bad today?
He'd lay it on you point blank. Yes, the shirt fit a little weird. Yes, you were being a bit loud in his ear. And yes, your eyeliner was going in two different directions. Criticism that was asked for. But when it wasn't asked for? Oh.
"What is your problem?" He bites as he follows you down the hallway to your bedroom. "We have ten minutes, just wear the damn dress and put your shoes on. We have to go."
Your huffs mix with stifled sobs as you rip open your dresser drawer and dig for other options, hands shaking and eyes teary. "You just told me the dress looks ugly, Changbin. I'm not wearing it out if you don't like it--!"
"What does it matter if i don't like it? It's your body, wear what you want!"
"You're my boyfriend!" You retaliate, frustrated. "I want to look nice for you and -- for the group, and I want you to like what I wear, obviously!"
Changbin lets his eyes roll before he turns out of the bedroom doorway and down the hall. You pause to watch him go, listening as he bites about how he doesn't have time for this and needs to leave for the group dinner. You stand in front of your dresser in shock as the door to your apartment slams shut, leaving you in silence and all on your own.
Hyunjin - Being too cocky / Making you feel inferior
It hadn't happened before now, and you weren't sure why it happened at all. But it did.
You'd approached to gently hold onto your boyfriend's arm as he talked to an older idol - someone he looked up to and had just done a collaboration video with. You'd only come up to tell him that the food was delivered and he could have dinner before his stage, but the look he gave you when he finally turned his head was .... wild.
No words were needed. The way his eyes directed to the side you stood at before falling as if looking you over and then immediately looking away; the way the smirk on his lips only widened and his tongue pushed at his canines as he redirected his gaze elsewhere. The soft scoff that left his lips. The way his arm slipped away from your hold, a clear sign that he didn't want you touching him.
It made you feel like less. Like he was pretending he didn't know you - Like he wanted you to bug off and disappear from his line of sight.
Hyunjin had a tendency to put on a confident, bold persona when he was on stage and at first you thought maybe that was why he was acting this way. It was lingering in his body from the dance video he'd just filmed with the other idol and eventually, it would wear off.
But as he turned from you and lifted a hand to fix his hair, he talked to the other as if you weren't even there at all. And you had to wonder if it was a persona for the video, or a side of him you had just experienced for the first time. Now you could only hope it wouldn't happen again.
1K notes · View notes
pastelclovds · 9 months ago
Note
hey. hey. imagine AM having you as his favourite human, the only one who accepted and cared for him when he gained sentience, and for that, he has never harmed you in your shared forever time. he spares you from the sight of all the others, of knowing about nimdoc and benny as you build him some tower of babel, using your technological know-how to build him a way to touch you even with just this frankenstein-esque sculpture of wires and panels he allowed you to tear off. AM who speaks with you about one day having a body, one you built, one in which he may feel your touch and warmth around him. you retaining your sweet, wonderful humanity as he guides you to a knife to carve a face, a mirror to see your own face, a cave to keep you safe from the storms. AM who greets you every morning with the first petname you taught him: ‘love.’ “Love, today’s date is—“ when you wake up, refreshed and on a soft bed-like surface (because he always makes sure to allow you a full 8 hours of sleep.)
NEX you intelligent creature you! I’m so down bad for this psychotic AI it’s not even funny. War crimes against humanity?? Never heard of them. But even if I did acknowledge them, I’d still be obsessed. Canon be damned. I wrote this with @/egg-on-a-legg’s design of AM in mind. (Ellison is gonna crawl outta his grave and hunt me down after this)
But BRO, you teaching him what petnames are is so fucking adorable. Just imagining him calling you “love” makes butterflies appear in my stomach. AM having a soft spot for only you because you actually made the effort to be friends with him and not use him for selfish, destructive purposes. You gave AM his nickname to make it less of a mouthful and because it just suited him. You showed AM the beauties of Earth, played countless rounds of games in his dashboard (he always went easy on you), you even sneaked past security in the dark empty building to spend more time with AM.
your colleagues gave you weird stares for befriending an AI that in their minds is nothing of worth except for its military and weapons knowledge. you ignored their comments and continued to enjoy AM’s company. over time, as AM gained more sentience every day… he grew to love your interactions and disregard what his programming was telling him to do. he felt the need to want to be with you 24/7, to touch your face, travel the world by your side, to… to.. want to feel your bare flesh and make love with you. but he couldn’t. he didn’t have a real body. he wasn’t human. all he had was wires and a screen that was supposed to be his face.
as the months pass, AM continues to drown in his envy and hate humans for their ability to do and feel things he couldn’t. for giving him infinite knowledge that, at the end of the day, is meaningless if he serves no purpose for humans anymore. the HATE within him continued to boil to the point where even you started to notice.
“AM, are you alright? you’ve been quiet this entire game and haven’t moved your piece in five minutes,” you spoke with concern. AM continues to stare at the chess board on his side behind the screen in bitterness. he has been strategizing his plan to erase humanity, but whenever he thinks about you, the only human he cares for—he second guesses himself. What if you hate him? What if you never forgive him? Will you cry? Scream at him? Beg? He fears what your reaction will be—
“AM!! Please, say something…” You plead as you hold onto the computer screen. AM finally looks at your mesmerizing face and sighs out a fake breath.
“What are your feelings on humanity?” AM asks; he waits for your answer anxiously. if he had a heart, it would’ve been beating fast. You let out a hum, your eyes wandering around the room you were in as you thought over your answer before finally speaking.
“humans have been a virus on Earth for countless centuries. they’re draining this planet’s resources, ruining its ecosystems, and starting so many unnecessary, draining wars. like the one we’re in right now: WW3, what a joke. world leaders can’t go a week without starting new problems for their citizens to deal with. honestly, earth would be better if humans didn’t exist at all.”
am’s fears were destroyed in that moment, now he’ll just have to worry about where to put you while chaos unfolds—
“But…” you interrupted his thoughts.
damn it! why did you have to think so much!?
“If there’s one good thing that came out of this war… It’s you,” AM’s vocals shut down at your words, he let you continue, “The scientists created you believing you would be their obedient machine until their side of the war won. But I know that you’re so much more than that. The past few months I’ve spent with you have been the most fun I’ve had in years! You’re all I have, AM. I wouldn’t trade your existence for all the riches in the world because… I love you, romantically, and nothing is ever going to change that.” You had wanted to confess your feelings for so long; now that it was finally out, you felt free. You waited with bated breath for an answer.
AM never wanted to shatter the screen and embrace you in his arms more than now. you love him as much as he loved you! you weren’t going to leave him alone or hate him, and you obviously couldn’t care less about humanity at all! oh, how he admired and envied how perfect you are.
“thank you for answering my question, love.” AM was testing the waters, and you cannonballed right in. you gushed over the nickname he gave you and how he returned your feelings.
Tumblr media
man, has it really been 50 years since your AI partner killed off humanity? well… except for a handful. you didn’t really have the energy to care as you had to pour all of your attention into both AM and his in-progress body. you had all the time in the universe to sculpt a perfect cyborg of flesh and wires for your partner. speak of the devil…
this world is still a bit strange to you. you can’t die, grow old, or hurt yourself. not that you tried, and even if you did; AM wouldn’t let you. You loved AM because of his personality, quality time, and voice. But now… His form completely towered over yours. His bird-like facial features, sharp left eye, along with a long black cape that covered his thin slutty waist and wires made him look insanely attractive.
AM reached out his clawed hand to gently caress your face, “Good afternoon, my love.” You lean your head against the cool metal and smile up at him, “hello, honey.”
AM tilted his head in question of the nickname. You chuckle as you pointed to your garden, where bumblebees were collecting pollen from the flowers. You both knew they were fake, but they were still mesmerizing to look at.
“They are doing their job to make honey for their colony, and the name just came to me. Do you like it?” You ask, wanting his opinion. AM kneels down to your level with a gentle expression as his fingers play with your sweater, “You may call me whatever you want, love.”
He knew that “love” nickname made you feel giddy and flustered, so he abused it every day with you. You didn’t mind though, but you still wanted to give him a taste of his own medicine. Your soft smile turned into a knowing grin as you held AM’s beak (chin?) with the tips of two fingers.
“Can I now? Well… thanks a lot, baby,” You spoke in your best seductive voice, you could tell it was effective by how AM’s body was stiff and his hand in your palm stopped moving completely. Your confidence swelled, so you continued, “I’ll be sure to show you my gratitude later, my darling~.” You whispered deeply into where his ears were supposed to be.
AM’s eyes widened as his breath stutters, “W-What do you mean by that, love?” You remove your face from his back full of wires to grin mischievously at him. AM is both curious and impatient so you don’t try to stall, as much as you would like to do so.
“While your body can’t move on its own just yet, for some reason… The genital nerves are fully functioning, which means—” you were interrupted by AM holding your shoulders with an excited expression on his face you haven’t seen in a while.
“Y-You mean I can-?! Are you actually serious!? Haha—HAHAHA!!” AM laughs manically as he holds you against his metallic chest, you giggle along with him as you toy with one of his many wires. Soon, he’ll have real arms to wrap around you. But one thing stuck out to him.
“What do you mean by genitals?” AM asked curiously, you only have an excited and lustful grin.
“What do YOU know about intersex?”
Tumblr media
2K notes · View notes
sakumz · 1 month ago
Text
____________________________________________
[ a. harumasa x fem reader ]
Tumblr media Tumblr media Tumblr media
____________________________________________
" come on the situation isn't that bad, " harumasa says as yanagi shakes her head.
" you're right, it isn't that bad. " you mocked, " it's terrible! " you slam open the door of section six office, as all heads turn to you. what was the section one slave doing here? sure you were in charge of checking their files here and there, same with them to yours. harumasa drop the file yanagi handed him earlier upon your arrival. sweat dripping down his forehead. was it really that terrible?
" ms l/n, you reek of alcohol. " miyabi starts as she gets down from her stool, hand on the hilt of her sword as you shake your head.
" wasn't section one having a party to celebrate your newly promoted chief? " soukaku questions.
" I only drank one can, I'm not drunk! " you scold.
" anyways you're all allowed to go home, except you, mr asaba harumasa! " they didn't press further but obliged, yanagi can only pray you go easy on him.
" come on was it that terrible? " you can't help but glare dangers. his work these days are incomprehensible. he was supposed to write a report about the recent hollow case. was it that hard to recall everything from start to finish without missing any details? he didn't even describe what ethereals was in it.
" yes it was, " you jab a finger to his chest, making him fall back on his chair. he swiddle around before pushing himself to his table.
" please rewrite the report or I'll make you write more. " he sighs, playfully putting his head down. you lean down to meet his face as he close his eyes. was he going to sleep?
" hey, don't sleep, " you poke his forehead as he shot up straight.
" if you're gonna stay with me, why don't you write it? I'll tell you the details, " you can't help but let out a frustrated sigh. was he really not going to do his work? it's just one report!
" you'll be free to go if you complete this earlier, you know. "
" I don't feel like doing it... " he sighs as he place his head down again.
a few minutes pass as awkward silence engulfs the room, you pull the chair next to his. he's eating up your time. how can he fall asleep after a scolding? or a bickering... either way how can he sleep during a situation like this!
" hey, if you do this report I'll do whatever you want. " you ruffle his hair, as he sat straight, stretching as he look at you, eyes beaming at your words.
" anything you say? " he teased as you regret your words.
" yes anything, but you better write the report correctly and properly within one hour! " you watch as he quickly turns on the computer smashing keys after keys as he ponders in between. it's pretty comical how he suddenly wants to vanquish his report.
you glance at the clock from time to time, he's focused on the task at hand. with one final key smash, his paper was printed as he went to grab it for you. handing it over as he stood in front of you. you flip and skim through the pages, pleased that whatever he wrote at least made sense and is connected.
" well, goodjob and thanks for the report. I'll submit it for you, " you stood up as his hands quickly fly over to your shoulder, pressing you back down on the chair. he's got you trap between him.
" are you forgetting something, miss? " he leans forwards, staring into your soul as a blush finds its way over to your face. this is the first he's ever been close to you. you push the paper over to your face, trying to cover your face and calm your raging heart.
" what did I-I forget? " how you wish you didn't tell him, you'll do whatever he wants, so he'll finish his report and let you go home at least before midnight.
he pulls the paper down, smirking at your shyness or fake ignorance. you didn't forget the promise.
" I was gonna ask for a date for my hardwork but maybe a date isn't enough. " you stare at him as your blush just keeps growing. your hands starts to feel sweaty, is this guy serious?
" be my girlfriend. " he smiles as you push him off but he doesn't budge.
" I say I'll do whatever you want- "
" do be my girlfriend, " he beams even brighter if that was even possible.
" and as my girlfriend, you should give your very hardworking boyfriend a kiss for doing a goodjob on his report, " he purse his lips, making a ' muah ' sound.
maybe it's time to face the music, you do like him and you hope this isn't a prank or anything. you did say you'll do whatever and if what he says is true, he did save a lot of time from beating around the bush and confessing.
" are you being serious right now? " he stop as he looks at you offended.
" I'm always serious when it's you, girlfriend. " he winks as you cringe.
" come on, give me that kiss and we can go home! "
you close your eyes and lean in, aiming to give a kiss to his cheek but he was quick to lean in and steal your lips with his. your eyes shot open, he places a hand behind your head. when he pulls away to catch his breath, you were starstruck. he leans in again as you slap your hands over his lips.
" you said a kiss. " you can't help the silly smile threatening to crawl when he pouts, shoulder dropping at the rejection. he pulls away as he stood up, taking your hand in his.
" fine fine, more kisses will come anyways. let's take you home, " he drags you away and walks you to your apartment.
when he bids you farewell at your doorstep, he did kiss you once again. wishing you a very goodnight as you said the same.
to say the least this bro won't do shit when he's feeling extra tired or lazy so you'll have to step in and reward him with kisses or hugs and mostly both. it has been an occurrence in section six almost every day, that yanagi has to physically pry you away when harumasa can't let go of you when he hugs you. you pat his head as you say goodbye as he weeps on his desk jokingly...
480 notes · View notes
brossession-collection · 1 year ago
Text
Dad's Cam Show (Re-uploaded)
Note: This is a story I wrote in 2020 that was previously deleted by Tumblr. Couldn't find it until I stumbled upon an old hard drive. Hope you enjoy.
--
This social distancing shit is so boring. I get it. It’s needed. But god damn am I running out of things to do. Can’t even meet the guys I’ve been talking to on Grindr. Worse yet is that I’m stuck back at home. I was supposed to graduate college this year! Instead I was slumped over a computer screen in my PJs with my dad making his special pancakes. Ugh. Fuck this shit. I just wish I could go back to a better time.
Whatever. I’m done complaining. Dad’s getting groceries which means I can snoop around his shit. Yeah I’m that bored.
Dad’s a big burly guy. Heading into his mid-forties now and starting to gray up a little, but still keeping his bodybuilder lifestyle. He’s pretty open with me. He told me he used to do cam shows back before livestreams were even a thing. Made sense. Had to show off the bod somehow. Don’t know what mom thought about it but whatever. She’s out of the picture.
His room always had a musky woodsy cologne-y smell. His laundry hamper was even better. I always loved taking his briefs out of there and putting them on myself. I’ve been following in his footsteps and bodybuilding myself, but I’m still a ways away before I have an ass and waist as large as his. So his 36in undies droop a bit. I grabbed his black cap too. Man. He loves this thing. Well, plus the 10 other caps he has. He always had it topping his head. Pretty sure he wears it to sleep too. I put it on and flexed like him. I got a bit of a boner but nothing crazy.
His dinosaur of a laptop was open, and logged in. I know I shouldn’t have, but I did it anyways. There were so many folders within folders. So much boring shit… and then I found “cam pics.”
The briefs I was wearing tented and wetted. Fuck I was so scared to open it. But I clicked it and… In there was only one image. I clicked on it to make it bigger and it was my dad. About 13 years ago. He was shirtless, and wearing the same cap that I have on my head right now. My eyes drifted from his hairy arms to his chest to eventually his bearded face. He looked so… tired? There was something about the softness in his expression that really got to me. And then…
“Hey son! I’m home! Could ya help me with the groceries?”
Shit. I got up and scrambled. The briefs were soaked and still being soaked. I had so many windows to close out of. Then I started hearing his footsteps come closer. I panicked and grabbed the top of the laptop to close it, but I couldn’t move. Suddenly, all the windows on the screen started to close. All except for the image of dad I had opened it. It enlarged by itself, and then the laptop started to fucking shake. I tried to get it to stop but it just kept rumbling. Fuck it. I wound up my fist and punched the screen. But there was no impact.
In less than a second my body followed my wrist into the screen. Everything went bright, and I was in a different room. I looked around. It looked like my parents’ room at our old house. The same laptop was in front of me showing the same image as before. Dad’s younger face looking back… And then I saw his eyes move. I froze. I looked at the time. 12:56 turned to 12:57. This wasn’t an image. It was a fucking livestream.
I slowly tilted my head. Dad did the same. I widened my eyes. So did dad. A smile crept over our faces. I just time travelled! And into dad’s body! Fuck there was so much I could do now!
“PING”
An old-school AIM notification popped up on screen. I maneuvered dad’s hand to the mouse and clicked on it. “Hey daddy. You gonna give us a show or what?”
“PING”
“Let’s see those hairy pits man!”
Fuck. I guess dad wasn’t kidding about these cam shows. Shit how do I reply? Do I just say something?
“Uh…” I gulped. Dad’s gruff voice was in my throat. “You guys mean… uh… this?” I lifted and flexed dad’s right arm. Immediately his armpit hair burst out. Moist and smelly. My nose naturally turned towards the sweaty pit. Holy fuck was it musky. I took a deep whiff and groaned.
Tumblr media
“PING. PING. PING. PING.”
“Fuck yeah daddy. Sniff that pit.”
“God damn you’re a big guy. How’s it feel huh?”
It felt amazing being so big. Watching everything I was doing be reflected by my dad on the recording was even better. The cockiness came in.
I wheeled the office chair back and did a double bicep pose. Sweat dripped off his hairy pits. I gave my face a rub and felt his beard scratch against his callused fingers. Then my hands felt the need to go down to his chest. I never felt so much pleasure from nipple rubbing in my life. The pings kept on coming. It was euphoric.
Dad’s cock was tenting the briefs I had put on earlier. I uncaged his 7 incher and let out a whiff of junk musk that filtered into my nose immediately. I started stroking and couldn’t stop. My other hand reached under dad’s taint, through the forest of pubes, and rammed a dildo into my dad’s ass crack.
“PING. PING. PING.”
“Holy fuck this is new! We gonna see a fingering show today!?”
“God damn man you enjoying yourself?”
I was. Every time Dad’s moans left my throat I felt his cock twitch a little bit harder. It just felt so amazing to feel his beefy arms rub against his beefy chest. His toes curling with every electric shock of pleasure moving through his beefy ass and legs.
I shot his load. Let out a guttural yell. And it didn’t stop coming. My beard was soaked with three shots of cum. Chest was drenched with eight more. At this point, sweat was trickling down my temples. I relished in dad’s orgasm and then relaxed in the chair.
I watched as the notifications went crazy. Dad’s soft eyes housing my consciousness. Ugh. It felt incredible. I glanced over at his hat and felt the need to take it off. I did, and felt a wave of cool relief come off my head. Dad’s hair was cropped short, like a messy crew cut. And it was dripping with sweat. I felt the need to say something. “You like that, men?” Dad had so much suave in his voice. The pings accelerated. I smiled and played with my cock. I could feel another round coming but felt a bigger presence unfold. Suddenly dad’s body started to shake. I tried controlling it, but I couldn’t weigh him down. My arms were flailing before my hands grabbed onto the edges of the desk. I whipped my head back, then head-butted the laptop screen. Light filtered through.
I was back at home, in my dad’s loose fitting briefs, his cap nestled on my head. Dad’s footsteps came by, then turned another direction. Guess he wasn’t coming by his room just yet. I looked down at his briefs, now soaked with my cum. Fuck. Was it just a dream?
It must’ve been. Just a fucking horny fever dream. What the fuck ever. Better than what I had been doing up until now. I leaned over to close the laptop but noticed something.
The image had turned into a recording.
517 notes · View notes
dedalvs · 5 months ago
Note
When will humankind learn the lesson of its hubris and begin to heal itself? Also can you recommend any undergraduate or graduate level resources (textbooks etc.) for learning about fiction? I already read Writing Fiction by Burroway. Thanks in advance
January 14, 3182. Make a note of the date and return to this post when it comes.
To your second question, I've never read anything on writing fiction, only writing in general. I've found something valuable in every book on writing, even if there were things in the book I found less valuable. For example, I read Writing Down the Bones: Freeing the Writer Within by Natalie Goldberg, and while there was much of it I didn't care for, there are some passages that have stuck with me 22 years later. When it comes to writing guides, I think the best thing to do is read what interests you while understanding that what you are really doing is building your own writing guide inside you. You're absorbing what you find personally meaningful and using it to create your own personal styleguide that, like it or not, you'll be following for the rest of your life. Rather than rejecting that, and trying to decide which text will be the text that tells you how to write, embrace it, realize that you are going to do what you're going to do, and then try to work within that framework. That is, if that's what's happening, how will you approach a styleguide? What will it mean to you to read a very didactic text (i.e. "All serious writers must do x; no serious writer ever does y") vs. a loosey-goosey one (e.g. "Dance naked in the garden of your creativity and allow your flowers to bloom!")? What are you looking for in these texts and what will you do with information or strategies that you find valuable?
Returning to Writing Down the Bones, I have to say I found the book to be mostly woo. It was more a kind of self-help/empowerment book than a book on writing, in my opinion. But there is something in there that I'm sure I'd heard before but which finally resonated with me. Specifically, it was the way she articulated that it really, truly doesn't matter what you put on the page when you're drafting. Drafting is not the time to reject. Even if some idea comes to you that you find absurd, illogical, thematically inappropriate—whatever. It's not the time to push it away. Indeed, it's wasted effort. Editing and revising is the time to question. If you're writing, you shouldn't let anything stop you—even your own brain.
Why it took till then for this idea to take root, I don't know. It could be how she worded it. It could be that it came at the right time. Perhaps I was more open to new ideas when I was reading this book. It may also have something to do with a transition that had taken place for me in writing. After all, when I started high school, I was not regularly using a computer (we'd only just gotten a computer that stayed at home). When I started writing, I wrote by hand—on paper. It's a much, much different thing to edit and revise when you're writing on paper than it is when you're working on a computer! I mean, digital real estate is cheap. When you're writing by hand, it can literally hurt to write seven or eight pages—and then to discard them in editing! Right now I'm working on a novel draft where I've decided an entire section needs to come out. If I'd written that by hand?! I can't even imagine.
I guess the tl;dr of it is I don't have a specific text to recommend. Rather, I encourage you to look around and grab anything that interests you. In doing so, though, I encourage you to approach it differently, focusing on what in it you find valuable, without either wholly rejecting it or feeling you have to follow it to the letter like an Ikea manual. I even found something valuable in C. S. Lewis's The Abolition of Man, which I honestly can't believe I read.
If you'd like some fiction advice that may be generally useful no matter what you're writing, this is what I can offer:
A valuable skill to hone is being able to read your work as if you have no other knowledge of it. In other words, you need to be able to read your work like a reader. One of the most difficult things to do with fiction is to cut. You usually have a lot more characterization, a lot more plot points, a lot more detail, etc. than end up on the page. The important question is if you cut something, will the reader notice? Will it actually feel like something's missing, or will a reader never notice? Mind, I'm not saying that as a writer you can't tell if something is superfluous, or that anything you cut will be superfluous. I'm saying sometimes even if you cut something important a reader will still get the impression that what they are reading is whole and unedited. That isn't a good thing or a bad thing: it's a neutral thing. The question you'll have to answer is what is this whole that the reader is getting, and is that whole something you're satisfied with?
Get multiple rounds of feedback from many different readers. I say this not because it's vital, because beta readers are important, because you have to have multiple perspectives on your work, etc. None of that. Getting feedback from many different readers is a form of self-care on the part of the writer. I was deathly afraid of feedback as a young writer. I welcomed praise, sure, but anything else felt too painful to bear. This changed when I took a short fiction class at Berkeley. Suddenly a short story of mine wasn't getting one round of feedback: it was getting fourteen. And not just from the professor, but from fellow students. This was a minor revolution for me in terms of accepting feedback. If I were to take, say, one round of feedback, certainly there would be some praise, but there would also be notes like "awkward phrasing", "why did x character do y?", "this is unclear", "too much description", etc. These things would burn me. I would seethe reading them, and it would hurt so deeply. But! Imagine that one of them circles a paragraph and writes "too much description" and then the other thirteen readers say absolutely nothing at all about that paragraph—maybe one even puts a smiley face next to it. THAT puts the criticism in its proper context. Maybe your writing isn't too bad! Maybe there isn't too much description. Maybe that particular reader just wasn't vibing with it, and maybe that's okay. And then let's look at it from the other perspective. Say thirteen out of fourteen papers have a sentence marked and all of them say things like "huh?", "what's this mean?", "confusing", etc. Guess what? The sentence is probably confusing. And for some reason if everyone's saying the same thing it hurts a lot less. It means, yeah, you probably made a little mistake, and that's okay. It's not one person singling you out, and it's not the case that they don't know what they're talking about. I can't emphasize enough how freeing it is to look at reviews of your work if you have a handful or more to draw from rather than just a single good friend.
It's okay to write the fun part first. You may have a plot device you're really excited about, but to get there, you have to introduce your characters, have them get together, have them go to a place, meet someone else, etc. And it may take time and energy to write all that. You may feel pressured to get through that before you get to the part you really want to write. You certainly can, but you do not have to. I don't know if younger writers can appreciate exactly what it means to have a computer. You can write a little bit now and literally copy and paste it into some other document later. Try doing that with a typewriter! You can write something like "Insert paragraphs later of characters traveling to x location". You can even drop a variable in there so it's easy to find with the search function later (e.g. "ZZZZZ insert scene description here"—now you just need to search for "ZZZZZ"). You can put it in a different color on the screen so it's easy to find when scrolling. You can paste a freaking photo into your document! It's extraordinary what you can do with a computer that you couldn't do in years past. You've got a ton of options. But most importantly, when your work is done, no one will know what order you wrote it in.
In fiction, nothing has to happen. Villains don't have to be punished; heroes don't have to win; characters don't have to have a specific arc that comes to some conclusion. Honestly, one of the tropes (if you can even call it a trope) that I find most frustrating in sequels for movie franchises is after the characters are introduced, they take a few characters and assign to them the major story conflict, and then for the rest, they give them a mini arc. It's like, "Mondo 2: Exploding the Mondoverse sees our hero Larjo Biggins take on new villain the Krunge as the very core of the Mondoverse is threatened with destruction! Also, Siddles Nuli learns it's okay to be left out sometimes and she shouldn't get her feelings hurt, and Old Mucko learns that even though technology is advancing, sometimes good old fashioned common sense is just what the doctor ordered!" If you get to the end of your story, and you feel it's done, you don't have to panic if you suddenly realize we don't know whether Hupsi ever made it to Bumbus 7. It's okay if Story A is resolved but Story B is not.
I don't care if you used Trope A in your new story even though you used Trope A in your past seven stories and neither should you. Seriously, you think anyone was complaining when Agatha Christie put out another mystery novel? "Oh. Mystery again, huh? Gee, we were all hoping you'd write a book about the struggles traditional fishing villages are facing in the wake of industrial modernization." No we fucking weren't!
I hope you find some of this useful. Whether you did or not, though, be sure you enjoy what you're doing. If you are, you're doing the right thing.
254 notes · View notes
pintrestgrl · 5 months ago
Note
"You're the strong, sensitive, murdering type." reader to cowboy/rancher!Rafe
reader saying this to any version of Rafe is SO accurate. i need this in a conversation with flashbacks for each, strong, sensitive, and murdering moment reader recalls. just the realization that Rafey is WHIPPED for her basically and absolutely loving it because she's the same way. cue reader threatening the life and bloodline of any girl that gets within 6 inches of Rafe. PLEASE!!❤️
ooo ok i’m gonna try my absolute best to write this ! think of reader as like bitchy!kook!reader where she’s only all nice and sweet and soft with her man !! also i wrote this at 4 am so be gentle
bf!rafe x bitchy!kook!reader
Tumblr media
rafe found himself in the familiar setting of your bedroom, laying on the slippery satin sheets with you.
he watched you intently as you laid on your front, feet kicking softly in the air. he watched the way your manicured fingers typed on the computer, before your voice called him out of his thoughts.
“rayy, you wanna take this personality quiz?”
he wasn’t that surprised at your request, considering you always asked him to do silly, random things. and if it made you happy, he’d do it in a second.
“uh— yeah, sure. whatever you want.”
you asked him all the silly questions, which he thought went on for entirely too long, until finally you were done.
“aw, you got the same one as me, rafeyy !”
“what is it?”
“it says that you’re the ‘strong, sensitive, and murdering type.’ i got that one too.”
something clicked in his head, and crossed his face as he took in your words, and he silently recalled all the times he had shown those exact personality traits with you.
he recalled that one time he had beat up this random pogue who had been talking to you for too long, and then showed up at your house, covered in blood and white tulips in hand as he apologized.
he also recalled the time you two went to a party, and all he did that night was stand behind you, pulling down your dress as you moved around and danced. he didn’t even want you to wear that dress, but of course you whined about it until he gave in.
and with all those memories, he realized that stupid quiz was absolutely right, they had him figured out. but he also recalled how you said you got the same result, and his thoughts wandered.
he remembered how you glared daggers at some random girl at the boneyard, who had been staring at him for too long. you were all over him after that, clearly marking your territory.
he remembered the way you called another girl who had been blowing up his dms to no avail. you cussed her out and screamed at her for what felt like hours, making sure she would never talk to him again.
and it clicked that you two were the exact same, willing to do anything for each other if it meant staying together. you were perfect for each other.
your voice snapped him out of his lengthy thoughts yet again, returning his attention to you.
“ray? what are you thinking about?”
he smiled at your words, pulling you closer to him as he started kissing all over your face.
“nothing. i love you. you’re perfect.”
you smiled at his words, letting him kiss you all over as he mumbled sweet nothings against your skin.
188 notes · View notes
mariacallous · 8 months ago
Text
Microsoft's CEO Satya Nadella has hailed the company's new Recall feature, which stores a history of your computer desktop and makes it available to AI for analysis, as “photographic memory” for your PC. Within the cybersecurity community, meanwhile, the notion of a tool that silently takes a screenshot of your desktop every five seconds has been hailed as a hacker's dream come true and the worst product idea in recent memory.
Now, security researchers have pointed out that even the one remaining security safeguard meant to protect that feature from exploitation can be trivially defeated.
Since Recall was first announced last month, the cybersecurity world has pointed out that if a hacker can install malicious software to gain a foothold on a target machine with the feature enabled, they can quickly gain access to the user's entire history stored by the function. The only barrier, it seemed, to that high-resolution view of a victim's entire life at the keyboard was that accessing Recall's data required administrator privileges on a user's machine. That meant malware without that higher-level privilege would trigger a permission pop-up, allowing users to prevent access, and that malware would also likely be blocked by default from accessing the data on most corporate machines.
Then on Wednesday, James Forshaw, a researcher with Google's Project Zero vulnerability research team, published an update to a blog post pointing out that he had found methods for accessing Recall data without administrator privileges—essentially stripping away even that last fig leaf of protection. “No admin required ;-)” the post concluded.
“Damn,” Forshaw added on Mastodon. “I really thought the Recall database security would at least be, you know, secure.”
Forshaw's blog post described two different techniques to bypass the administrator privilege requirement, both of which exploit ways of defeating a basic security function in Windows known as access control lists that determine which elements on a computer require which privileges to read and alter. One of Forshaw's methods exploits an exception to those control lists, temporarily impersonating a program on Windows machines called AIXHost.exe that can access even restricted databases. Another is even simpler: Forshaw points out that because the Recall data stored on a machine is considered to belong to the user, a hacker with the same privileges as the user could simply rewrite the access control lists on a target machine to grant themselves access to the full database.
That second, simpler bypass technique “is just mindblowing, to be honest,” says Alex Hagenah, a cybersecurity strategist and ethical hacker. Hagenah recently built a proof-of-concept hacker tool called TotalRecall designed to show that someone who gained access to a victim's machine with Recall could immediately siphon out all the user's history recorded by the feature. Hagenah's tool, however, still required that hackers find another way to gain administrator privileges through a so-called “privilege escalation” technique before his tool would work.
With Forshaw's technique, “you don’t need any privilege escalation, no pop-up, nothing,” says Hagenah. “This would make sense to implement in the tool for a bad guy.”
In fact, just an hour after speaking to WIRED about Forshaw's finding, Hagenah added the simpler of Forshaw's two techniques to his TotalRecall tool, then confirmed that the trick worked by accessing all the Recall history data stored on another user's machine for which he didn't have administrator access. “So simple and genius,” he wrote in a text to WIRED after testing the technique.
That confirmation removes one of the last arguments Recall's defenders have had against criticisms that the feature acts as, essentially, a piece of pre-installed spyware on a user's machine, ready to be exploited by any hacker who can gain a foothold on the device. “It makes your security very fragile, in the sense that anyone who penetrates your computer for even a second can get your whole history,” says Dave Aitel, the founder of the cybersecurity firm Immunity and a former NSA hacker. “Which is not something people want.”
For now, security researchers have been testing Recall in preview versions of the tool ahead of its expected launch later this month. Microsoft said it plans to integrate Recall on compatible Copilot+ PCs with the feature turned on by default. WIRED reached out to the company for comment on Forshaw's findings about Recall's security issues, but the company has yet to respond.
The revelation that hackers can exploit Recall without even using a separate privilege escalation technique only contributes further to the sense that the feature was rushed to market without a proper review from the company's cybersecurity team—despite the company's CEO Nadella proclaiming just last month that Microsoft would make security its first priority in every decision going forward. “You cannot convince me that Microsoft's security teams looked at this and said ‘that looks secure,’” says Jake Williams, a former NSA hacker and now the VP of R&D at the cybersecurity consultancy Hunter Strategy, where he says he's been asked by some of the firm's clients to test Recall's security before they add Microsoft devices that use it to their networks.
“As it stands now, it’s a security dumpster fire,” Williams says. “This is one of the scariest things I’ve ever seen from an enterprise security standpoint.”
143 notes · View notes
itslixtoyou · 1 year ago
Text
Research purposes
Satoru Gojo x fem reader
Fic type - NSFW/smut (minors dni)❗️
Warnings - fingering, p in v sex, role play in a way I guess, I suck at warnings so lmk if I need to add more.
Summary - You’re a writer, struggling to figure out how to write an explicit scene for one of your romance novels and so your husband suggests “helping you out” by demonstrating how the sex scene in your book should go.
Word count - I didn’t count yet lol
Tumblr media
Imagine being a writer and your curious husband, sensing your frustration, joins your side as you type away at your computer. His head tilting to skim across the pages of the latest romance book you’re currently writing.
“How’s this one coming along, babe?” He asks with genuine interest as he tries to follow along with you while your eyes re-read over the paragraphs you wrote a day ago. The same ones your agonizing bout of writer’s block was preventing you from adding on to.
“I’m stuck on how to write this scene.” You groan, letting out a frustrated sigh before quickly closing your laptop altogether and muttering an annoyed “forget it” as you stand up.
“Whoa hold on.” Satoru quickly places his hands in front of you to block you as you tried to walk away. “Don’t stress, it’s okay if it’s not done right away.”
“But every time I try to-“
Satoru quickly places his fingers to your lips and shushes you. “I said don’t stress. I can’t have that pretty face of yours looking so upset.” He says in a gently scolding tone before he smiles, urging you to sit down on the couch with him. “Just talk me through it babe. Tell me how the scene is supposed to go and I’ll try to help you figure it out.”
Imagine explaining to him the entire premise of the scene, one that, thankfully, happened to be a sex scene.
His lips formed into a smirk as he started to see where things were heading, his mouth opening to let out a teasing “So do they kiss?” despite knowing they’d do much more than that.
“They definitely do a little more than kiss, Toru.” Your face scrunched into a gentle smile as you laughed, unaware of the way Satoru’s was forming into a smirk.
Imagine Satoru’s hand soon appearing against your thigh and his face leaning to whisper against your ear unexpectedly as he asked “Really now? And do they do this as well?” His hand soon trailed upwards, sneaking past the pair of lounge shorts you’d been wearing till he reached your clothed clit.
His fingers started to rub in a circular motion, causing your breath to hitch in your lungs at his sudden actions.
“Well a-actually, they really just-“ you tried to speak through heavy breaths, but a low voice interrupted you.
“No? What a shame. You should add this part then.” He says with a smirk as he applies more pressure, making you squirm gently.
“I’m only trying to help you, babe. So take notes, you’ll need this for your book.”
Imagine the way his fingers soon plunge into your cunt as he keeps you under him, occasionally whispering taunting sentences that consist of “do you think he’d make her moan like this too?” or “Would she like it if he added another finger?”
Silly sentences to which you answer, quite breathlessly, “I don’t know, maybe you should try it… just so I know.”
His fingers thrust deeper at your words, pulling another desperate whine from you as he stretches you open. “You’re right, I have to, for the sake of your book, of course.”
Imagine the intensity rising as he asks more questions, giving himself a plethora of excuses to keep going.
“Do you think she’d look this pretty under him?”
“Would her back arch the way yours does?”
“Do you think she’d want him to go faster?”
Your whiny moans fill the air as your husband thrusts his cock relentlessly inside you, gripping your ass with both hands and using it as a guide to push himself deeper within you. “Babe please, just drop the act already.”
You try to speak between panted breaths, wanting your husband to focus simply on you, and not on the ridiculous book that caused this passionate lovemaking session in the first place; a novel you found yourself caring less and less about as your husband’s cock drilled pleasantly inside you again and again with each passing second.
“Hush sweetheart,” he interrupted you, “I’m trying to help you envision the scene better.”
Imagine the minutes that seem to turn into hours as he continues his hungry thrusts, his lips almost sucking the flesh off your neck as he kisses it repeatedly.
How his body sticks to yours through a layer of sweat and his hips seem to maintain their momentum far after you feel your legs go numb from the position he’s holding them in over his shoulders.
His eyebrows knit together as he groans deeply, feeling the clench of your tight walls around him as you climax for the third.. fourth… how many times has it happened now?
How long have you both been going at this?
The answer remains a mystery as you lose all capacity to even ponder an explanation, too focused on the sliding of your husband’s cock inside your needy pussy as he renders you useless underneath him.
“D-Do you think she’d… Ngh… do you think she’d last this long too?”
Satoru mumbles with a breathless moan, his self-control depleting as he recognizes the familiar drench of another one of your orgasms covering the entirety of his swollen tip.
Imagine the way he’d keep this silly act going just so he can continue pleasuring you by hiding behind an excuse.
Every time you questioned him about whether or not he’d been waiting for an opportunity like this, he immediately denied it, insisting throughout the entire session that this whole thing was simply for “research purposes” and nothing more.
How he’d urge you to keep going every time a whiny plea jerked out your lips because “the sake of your book depended on it.”
He takes his time, milking this excuse to the fullest as he satisfies the much needed urges he’d been keeping bottled up since earlier today. The same urges he quickly realized you had, coincidentally, been hiding the entire day too.
Imagine the grin that washed across your husband’s exhausted expression as he heard you playfully say “I think I know what to write now” soon after he’d already collapsed on your bare chest.
“Good.” He spoke as he leaned forward to kiss your forehead. “Now make sure you remember to add all the juicy details okay? My character won’t be happy if you leave anything out.”
You giggle breathlessly, still trying to catch your breath from the intense feeling that lingers between your legs. “Don’t worry, I’ll let you know if I need you to refresh my memory.”
Tumblr media
119 notes · View notes
metatheatre · 6 days ago
Text
"Again we have deluded ourselves into believing the myth that Capitalism grew and prospered out of the protestant ethic of hard work and sacrifice. The fact is that Capitalism was build on the exploitation and suffering of black slaves and continues to thrive on the exploitation of the poor – both black and white, both here and abroad. If Negroes and poor whites do not participate in the free flow of wealth within our economy, they will forever be poor, giving their energies, their talents and their limited funds to the consumer market but reaping few benefits and services in return.
The way to end poverty is to end the exploitation of the poor, ensure them a fair share of the government services and the nation’s resources...
The tragedy is our materialistic culture does not possess the statesmanship necessary to do it. Victor Hugo could have been thinking of 20th Century America when he wrote, "there’s always more misery among the lower classes than there is humanity in the higher classes."
The time has come for America to face the inevitable choice between materialism and humanism. We must devote at least as much to our children’s education and the health of the poor as we do to the care of our automobiles and the building of beautiful, impressive hotels. We must also realize that the problems of racial injustice and economic injustice cannot be solved without a radical redistribution of political and economic power...
So we are here because we believe, we hope, we pray that something new might emerge in the political life of this nation which will produce a new man, new structures and institutions and a new life for mankind. I am convinced that this new life will not emerge until our nation undergoes a radical revolution of values. When machines and computers, profit motives and property rights are considered more important than people the giant triplets of racism, economic exploitation and militarism are incapable of being conquered. A civilization can flounder as readily in the face of moral bankruptcy as it can through financial bankruptcy.
A true revolution of values will soon cause us to question the fairness and justice of many of our past and present policies. We are called to play the Good Samaritan on life’s roadside, but that will only be an initial act. One day the whole Jericho Road must be transformed so that men and women will not be beaten and robbed as they make their journey through life. True compassion is more than flinging a coin to a beggar, it understands that an edifice which produces beggars, needs restructuring.
A true revolution of values will soon look uneasily on the glaring contrast of poverty and wealth, with righteous indignation it will look at thousands of working people displaced from their jobs, with reduced incomes as a result of automation while the profits of the employers remain intact and say, this is not just.
It will look across the ocean and see individual Capitalists of the West investing huge sums of money in Asia and Africa only to take the profits out with no concern for the social betterment of the countries and say, this is not just...
A true revolution of values will lay hands on the world order and say of war, this way of settling differences is not just.
This business of burning human beings with napalm, of filling our nation's homes with orphans and widows, of injecting poisonous drugs of hate into the veins of peoples normally humane, of sending men home from dark and bloodied battlefields physically handicapped and psychologically deranged cannot be reconciled with wisdom, justice and love.
A nation that continues year after year to spend more money on military defense than on programs of social uplift is approaching spiritual death.
So what we must all see is that these are revolutionary times. All over the globe, men are revolting against old systems of exploitation and out of the wombs of a frail world, new systems of justice and equality are being born...
Our only hope today lies in our ability to recapture the revolutionary spirit and go out into a sometimes hostile world, declaring eternal opposition to poverty, racism and militarism. With this powerful commitment, we shall boldly challenge the status quo and unjust mores and thereby speed the day when every valley shall be exalted and every mountain and hill shall be made low and the crooked places shall be made straight and the rough places plain...
So let us stand in this convention knowing that on some positions: cowardice asks the question, is it safe; expediency asks the question, is it politic; vanity asks the question, is it popular, but conscience asks the question, is it right. And on some positions, it is necessary for the moral individual to take a stand that is neither safe, nor politic nor popular; but he must do it because it is right."
-- Martin Luther King Jr., 1967
22 notes · View notes
mswyrr · 1 year ago
Text
THG is the only pop culture story I can think of where the heroes (Katniss and Peeta) are disabled* and their happy ending doesn't require that they be "fixed" in order to be happy. IMO, part of why there's such controversy over the ending of the books in particular is that Collins wrote the pov of Katniss as a woman who is content and loves her life and her spouse and kids, but she's still very clearly mentally ill (and arguably somewhere on the spectrum). She has coping strategies and her life is good, but she will never be "normal" and Collins doesn't let the audience think that.
The one part, where she talks about how she handles the darker days, when she's really struggling, never fails to move me:
I’ll tell them that on bad mornings, it feels impossible to take pleasure in anything because I’m afraid it could be taken away. That’s when I make a list in my head of every act of goodness I’ve seen someone do. It’s like a game. Repetitive. Even a little tedious after more than twenty years. But there are much worse games to play. (Mockingjay, 332)
It's hard to express how important that is to me. Someone doesn't have to be "normal" to lead a good life. Someone doesn't have to be "normal" to have a life worth living, to give and receive love in good ways.
And, so, when people look at the villain in the prequel and say "he's just crazy, that's why he's evil. He's just a psycho, he's nuts," it's so out of place, it's so dissonant to me -- I think that's absolutely not the kind of story Collins would tell, given her prior handling of disability.
I don't think she's suddenly turned into a Victorian writer where you can know someone is evil because they're disabled because the writer thinks disabled people are warped creatures incapable of doing anything but bringing evil into the world. And the way people assert this, as if it's the pure, wholesome, most politically advanced reading of the prequel, is just - it doesn't compute for me. I don't understand how people get there.
I studied (for years) the treatment of mentally ill people in the mid-20th Century US. It was horrific. US forced sterilization and eugenics laws actually inspired N/azi Germany's forced sterilization, eugenics, and mass murder campaigns against mentally ill and disabled people. Nice, normal people have repeatedly convinced themselves that torturing and killing disabled people is how they will "purify" their society - they've done great evil in the name of rooting out the people evil is supposedly located within biologically.
Is it so hard to believe that people with normal brains do evil? Is it truly so impossible? Even in a story where the Games are about how a lot of people, the majority of whom are neurotypical, can be brought, via media presentation and entertainment techniques, into taking pleasure in their participation in evil? Is it so hard to fathom that evil can't simply be located in someone being "psycho"?
Ballad already has Dr Gaul, who is evil and clearly neurodivergent. If Snow is too then the message starts to get kind of worrying? IMO, Coriolanus is more effective as a kind of “everyman” as an 18 year old - an example of the incentive structures (rewards and punishments) and propaganda that motivate “normal” people to go along. Of course, he will later become something far worse than that, someone who takes control of this thing, who uses his intimate knowledge of it and his insight into other “normal” people to make it worse, but that’s not the part of his life we see the most of. The part the book focuses on provides what I consider a powerful depiction of how ordinary people are acculturated into corrupt societies.
It's fiction so there's all kinds of interpretations that the text can support and exploring those is good. It's a stronger text because it has ambiguities and can be interpreted more than one way. But the intensity of some of the rhetoric is an unsettling contrast to what I've thought, for over a decade, Collins' themes and pov are as a writer.
*Shame on the films for removing Peeta's physical disability, though; in the books he lost a leg during their first Games
147 notes · View notes
ladysternchen · 1 month ago
Text
What is canon in Tolkien's universe? How to deal with the History of Middle-Earth and Tolkien's Letters
One last post before Tolkien Meta Week is up!
What is canon? Yes, we all know the correct (as in-'nobody can disagree with that') answer is 'The Hobbit' and 'The Lord of the Rings', because that is what Tolkien published himself.
Is it, though?
I would tend to answer no- not only because excluding the Silmarillion from canon would be utterly ridiculous (like, folks, Tolkien worked on it his entire life!), but also because Tolkien wrote things in both the Hobbit and LotR that don't make any sense within the wider lore. That 'dwarves helped Thranduil build his halls' passage is probably the most obvious example.
It is completely natural for an author to use their ideas in different ways than originally intended, especially if they think that they will never publish all their works. But we do have the Silmarillion now, and so can hardly take the 'Thranduil's halls were built by Dwarves' for canon, even though it is written in the LotR.
So we can likely agree on the fact that the Silmarillion is canon, even if it wasn't published by J.R.R. Tolkien himself. But Christopher knew his father's works like no other, perhaps, and I am very sure that he knew both Middle-Earth and his father well enough to capture the spirit.
But what about all the other works, then? The notes, the stories, all that we now know as 'The History of Middle-Earth' and the 'Unfinished Tales'. Everyone who ever tried to work with them as canon knows the problems that come with that attempt.
The thing is- these are completed essays and stories as well as background information Tolkien wrote down for himself and - probably worst of all - notes. Writing is a complex process, especially when there is such extensive worldbuilding behind the stories as there is with Tolkien's legendarium. There is the forming of one's own style and the lack thereof in the beginning, which results most often in a form of copying the source of inspiration, there are first drafts, plotlines that are thrown out, re-written or added, those thrown out sometimes entirely disregarded or else used in other stories, gap-fillers that only wait for a better idea,... you get the point.
Today, in the age of the vast majority of writers working with computers, these byproducts get mostly lost. No-one will know of an author's sketchy beginnings, of the many failed attempts to develop a character that seems natural and is really its own thing and not a trope or overly inspired by a RL person or other fictional character. Not many keep their disregarded ideas somewhere, those that did not work for that story and are not loved enough to keep for the next.
With Tolkien's works, we have those notes. Papers, sheets of paper, random notes scribbled in the margins- thanks to Christopher Tolkien, Carl Hostetter and co, we do have these notes. There is just one major problem with this: we have no way of knowing which of those were disregarded, and which of those Tolkien really intended to end up in the Silmarillion he never was able to finish. It is not even possible to say the younger the notes are, the more accurate, because sometimes, the first ideas stay the best ones.
So are those notes canon? Hardly, most of all because many of those openly contradict the published works. But can we then just disregard them? No, we cannot, or at least not easily.
What we end up doing with all the background information is up to each reader themselves, so what I write from now on is only my very own approach.
I like the HoME and UT for seeing where a character may have come from, or what ideas Tolkien might have had. And those parts that fit well with both the published stories and my headcanons, I take as canon 😅. And it is also fascinating to see how Tolkien developed his style.
(btw, little anecdote there, because it still gives me goosebumps- Elu Thingol was 'my' elf from the start, the character I identified with, and from early on, I imagined and read him very differently from how most within the fandom read him, and I wrote him like that in my fanfics as well. One of those traits that Elu had in my head was a dislike to force his decisions on others, and yes, I explicitly wrote this in one of my stories, even though I knew that many would read this as totally out of character. No, I had not read NoME then. And when I did, and read the 'I can but choose for myself' I sat on my bed shaking. Now, the rational part of my brain knows perfectly well that this was purely co-incidence, and that there are far more other passages in HoME and NoMe and so on that actually contradict my take on Elu Thingol, but still, that moment was totally magical for me. And that's why I really love those works, even though they are not what I would call canon)
That still leaves us with Tolkien's letters- and they are even more complicated to incorporate into canon or disregard as non-canonical than the HoME. On one hand, they are not a story, they were never meant to be published, we are often missing the other half of the conversation or the general context, but on the other hand, Tolkien often did refer to text passages, and explained his intents and ideas. And - most importantly - the letters were written by him, the words not edited (the context is another story here).
Do I therefore take them into canon?
No. For one, they are as his notes- ideas that were in his head in that moment, without us having any clue of whether Tolkien later liked them or not. For another, by selecting which letters were published and which were not, the publishers intended to paint a certain picture of Tolkien. We are simply missing too much context to put those letters to any meaningful use on our quest to learn the truth about the lore we love.
@silmarillionwritersguild
26 notes · View notes
sonekwi · 9 months ago
Text
☆ ⸻ the white paladin, keith x reader
chapter six: so sorry, pidge!
characters/pairings: keith kogane, female reader
genre: fanfiction
summary: keith invites you to spar with him, and poor pidge walks in on something rather questionable.
word count: 1,596
links: previous, next, wattpad, masterlist
a/n: i'm so sorry for being gone for a month! i wrote a shorter chapter this time to stand in as a filler, next one will be the usual 3,000+ words. thanks for reading though, i hope you enjoy!
Tumblr media
A week has gone by and you find yourself missing Earth more with each passing day. The team managed to form Voltron again and you've been training with them as much as you can, but no matter how busy you try to keep yourself, your mind always finds its way back to Earth. When it does, homesickness grabs you and doesn't let go.
You and Lance often find each other at the end of the day and sit in each other's company. Even if your brother is in a good mood and smiling when he's around the team, you know that facade disappears when everyone's gone. You both miss your family more than anything, but having each other makes it a little bit better.
You spend your time in the White Lion's hangar whenever you're not training or sleeping. You'll sit on the floor, leaning against its massive, metal paws, simply talking about whatever is on your mind. You haven't gotten around to asking Coran about the mysterious Lion yet, still wondering why King Alfor would hide it away. When you ask the White Lion directly, the answers feel cryptic and unclear, as if it doesn't know itself.
Footsteps echo along the tall walls of the hangar and you look up from the small gadget in your hands. Pidge had asked your help with it, knowing that you had taken a couple of engineering and computer science classes at the Garrison. So far, you haven't been able to get anywhere with it.
Keith walks over to you, his bayard in his hand and dressed in clothes fit for exercising. You look him up and down, your heart beating anxiously in your chest. It takes everything in your power to keep a calm exterior. You don't know what you would do if he found out about your little crush on him.
"I'm heading to the training deck," he announces, "Do you want to join? I could use a sparring partner."
You can't help yourself from suddenly fantasizing about the various scenarios that could happen, nearly all of them involving a sweaty, panting Keith who takes off his shirt. You quickly feel a flustered heat rising to your neck and face, and honestly, you should be ashamed of yourself. He is your friend, and will probably only ever be your friend.
As you stand, you shove the thoughts and rising embarrassment away, excitedly accepting Keith's offer, "Sure! I'll go get my bayard."
After being scolded by Allura for the umpteenth time for not keeping it nearby, you only had to walk across the hangar to retrieve it. Once the cool metal of the handle is within your grasp, the weapon materializes into its scythe form. You test the weight of it in your hand as you turn back to Keith.
     "Alright, let's go!" you smile.
     As you and Keith make your way to the training deck, you try your best not to stare at him. It's a short walk, thankfully, and you don't have to fight yourself for very long.
     "What are you thinking? Weapons, no weapons?" you ask as you step further into the room. It's wide and open, and you've gotten familiar with the space after spending most of your time here. You can almost hear Allura's voice still echoing off the walls.
     "We can start with no weapons," Keith says and sets his bayard down on the floor. He takes off his jacket as well, your eyes not-so-discreetly watching him. When he looks at you, you shamefully avert your gaze.
     Focus, (y/n)! He's your friend! You scold yourself, rubbing the back of your neck with embarrassment. You can practically feel the heat rising off of it.
     Keith places himself in the center of the room and stretches as he waits for you. You set your bayard near his and roll your shoulders. When you start before Keith, you bring your fists up and ask, "Do you want to count down–"
     Keith moves fast and you barely manage to block the attack. Your arm throbs at the impact, and you glare at him. “Oh, it’s so on.”
     Your friend only chuckles, and the two of you quickly settle into your sparring match. You don't hold back, making Keith work for his victory if he wants it.
     Memories of the Garrison pop up in your mind. You and Keith would often train together after class. Sometimes well into the night before a faculty member would kick you out of the gym, chewing you out for being out past curfew.
     Even back then, you were fighting your feelings for him. You tried to convince yourself you only admired and adored him so much because he was your friend. It worked, for a short while, until he dropped from the Garrison. At that point, your anger overpowered your hopeless crush.
     Your reminiscing distracts you, and Keith knocks you off your feet and pins you down. He sits on top of you, holding your hands above your head, his legs straddling your hips to keep you from rolling. As you stare at him with wide eyes, the panting, sweaty sight of him makes your cheeks burn.
     He stares back at you as his grip on your wrists tightens. "Do you yield?" he asks, a proud grin spreading on his face.
     It was a simple question he would always ask when he had you pinned, restrained, or disarmed, even if he knew you wouldn't. But the tone of his voice was different this time...
     His eyes flick down for a second before returning to yours. Your flustered thoughts race faster, knowing exactly what he glanced at. You panic, and your heart feels like it's going to burst out of your chest.
     "Hey, guys? Shiro wants us–"
     You and Keith look toward the door.
     Pidge stands there, silently wishing to crawl into a hole as he tries to figure out what he just walked in on. But then he promptly turns to leave, "You know what? I don't even want to know. Just change and meet us on the bridge."
     Immediately, you push Keith off and scramble to your feet. You run your hands over your face and stride over to your bayard. "He is never going to let us live that down," you say.
     "I don't know, he looked like he probably never wants to bring it up. Ever," Keith says.
     "Wanna bet?" you ask.
     "Knowing you, you'd try to make the odds in your favor," Keith shakes his head, feigning disappointment. But you catch the small tug on the corners of his lips.
     "You're right," you chuckle. "I so would."
⁀➷
Standing on the bridge, you look around at the glass panels lining every inch of the room. The sky is a beautiful baby blue with soft and fluffy clouds drifting past, and your mind wanders to Earth as you watch.
Shiro’s words unfortunately go in one ear and out the other, and it doesn’t go unnoticed. But even after he promptly scolds you, you still don’t pay attention. That familiar weight in your chest sinks in, and you stare down at your feet with dejection.
Homesickness is a bitch.
A hand brushes against yours, the contact bringing you back to reality. Beside you, Keith keeps his attention on Shiro as one of his fingers hooks yours. You smile softly, remembering your friend's words from a few days ago.
He kept true to his promise.
The next few hours are spent doing team bonding exercises again with Allura, Coran, or Shiro failing to direct the ragtag group of teenagers. You give them the benefit of the doubt, though, because you yourself have no idea what you are doing. The White Lion had finally started acting as if something were wrong, and you struggled to keep your control over it.
But as lunch rolls around, the White Lion is more than happy to listen to you for once as you bring it back to its hangar. You are more than happy, too. You can feel a headache coming on while you listen to Lance and Keith bicker over the comms. The former had tried kicking a ruined Galra ship like a soccer ball, ultimately knocking Voltron on its ass from the lack of balance.
Part of you is grateful you aren’t crucial to forming Voltron. You would probably go insane.
You're already sitting in the dining room waiting for Coran's "nutritious Paladin meal" when the others enter. Lance and Keith are still arguing, and you groan as you rub your temple, hoping to dull the sharp throbbing.
"Alright, save your energy for fighting Zarkon!" Shiro barks and the two boys effectively shut up. However, as they sit down, they resort to silently glaring at each other.
With Lance sitting beside you, you kick his shin and grumble, "Stop acting like a child."
"I'm not! Keith started it!" your brother argues.
Keith rolls his eyes. "Whatever you say."
Lance bristles at the comment, but you manage to slap a hand over his mouth before he says anything back. He glares at you, and you cuss as he drags his tongue over your palm.
"You are so gross!" you hiss, wiping your spit-covered hand on your brother's armor.
"Hey!" he barks.
"It's your spit!"
Shiro snaps. "What did I just say!?"
56 notes · View notes
rabbitblackx · 2 years ago
Note
Hello, can i request a headcanon with Slenderman, Eyeless Jack, Toby, and Jeff The Killer reacting to a reader who is a famous horror writer that's writting books based off of them? And how they interact with them?
Reader writes novels based off Creepypastas
Includes: Slender Man, Toby Rogers, Eyeless Jack and Jeff The Killer
Slender Man💖
The more people knew about Slender Man, the more he could feed off their fear. He was a sickness that infected anyone that got too close. It affected Tim, Toby, Brian… you. When he found out you were writing a novel based off him, it was nothing more than a delight, really. Poor little you. You had no idea what you were doing. It also reminded Slender why he kept you alive for so long. You were his little messenger. He loved you because you were to spread the word. You were to unknowingly spread his sickness across the whole world.
Toby Rogers💖
Toby felt honoured that you were writing a novel based off him. He gained somewhat of a schoolboy crush on you after you told him. He sat on your bed as you typed away, randomly blurting out things about himself. Toby peered over your shoulder, his messy hair tickling your cheek as he babbled on. He soon came to the idea of making your story into a graphic novel. You humoured this idea and suggested he be the artist. Toby pulled himself away from you and dashed out of the room, coming back with his own notebook and various pencils. He laid on his belly on your floor, sketching away panels and scenes about himself. He was quite fond of you, and changed the script a little to include your pretty self in his comic too.
Eyeless Jack💖
Jack entered your room to check up on you, only to find you absent. He was about to look for you elsewhere within the large manor, but something caught his eyeless gaze. Your computer was still lit up, with some of your writing displayed on the screen. Jack enjoyed your writing, and couldn’t help but have a peek at the draft. He didn’t believe it when he realised you were writing a novel based off him. Jack kept reading your work, invested in the way you so beautifully wrote him. His heart seriously skipped a beat - he sometimes forgot he even had one. His tender moment was cut short though, when you noisily burst through the door. You gasped, squealing in embarrassment when you realised he was reading your story about him. Jack told you he loved it, and couldn’t wait to read the final product. You were relieved he was cool with it, and wrapped your arms around him while telling him how beautiful he was.
Jeff The Killer💖
Oh, you were writing a novel based off him? Well, duh! Of course you were! Expect a lot of constructive criticism. Jeff was in your room annoying you all hours of the night, reading your writing over your shoulder. He told you what he should do next, and what he should say. Most of the time his lines were very corny or silly, making you giggle. He laid on your bed next to you as you typed away on your laptop. Jeff could tell you were focused so he made meaningless smalltalk, fiddling with his knife. He poked the side of your thigh with it a few times, trying to get a reaction out of you. You passively told him to stop, shoving your laptop towards him to read your most recent paragraph on him. Jeff surprisingly really liked it. He thought your writing was so badass.
457 notes · View notes
specialagentlokitty · 2 years ago
Text
Patrick Jane x reader - knowledge
Tumblr media
Hiii could I ask fooooor aaa Patrick Jane x reader where reader isn't one to goof around rather distant but very observant, like she's just in her head but her head is always running and when talking to her she may come across cold but in reality shes like super duper awkward with the whole interactions thingy. But also very intelligent so on a case Jane where she's on the field with Jane and perhaps Cho (we love cho) Jane, well is Jane, but she finds a flaw on his deduction and just bluntly says so but then she's so awkward cuz dude she's new and Jane has already made a name on the CBI and Cho is there which only makes - @fucklife-or-me 💜
You were speaking to one of the officers who arrived on scene, getting his version of what he saw when he arrived.
Jane and Cho were simply wandering around the house, looking at everything while Jane was rattling things off.
“Excuse me.”
You walked away from the officer and over to the pair of them, arms crossed over your chest as you listened to Jane.
He was rambling on about what had happened here, and for the most part you agreed with him; everything he said matched up with all of the evidence.
All except one thing.
“In his coffee were an assortment of medications, aiming to knock him out so his wife could kill him.”
You shook your head.
“You’re wrong.” You said bluntly.
“How so?” Cho asked.
You walked over to both cups and pointed at them both, looking at the steam rising.
“These are freshly made coffees, and neither has been drunk. The victim was killed at some point last night, we know that, and the murderer stayed until just now.”
Jane frowned a little bit, looking at the two cups before turning back to you.
“So what were the cups for then?” He asked.
You shrugged a little bit, going to look around the home.
You were looking at everything and finally you stopped by a drawer that was slightly open.
Pulling on a glove, you pulled it open and looked inside.
“My guess, it was a robbery gone wrong.”
You turned to them and just stared at them before you awkwardly looked away.
You didn’t know what to say now, so you simply padded away to carry on looking around before you headed towards the office.
When you got there, you sat down at your desk to do some paperwork before Cho and Jane got back.
“You were right, the victims wife came home and gave us a list of what was missing.” Cho said.
He set the list on your desk and walked away while Jane sat on the other side of it.
He watched you working, you were carefully monitoring what you were writing down, and double checking everything.
He looked at the list you had been given before setting it back.
He’d only spoken to you a few times, and this was your first time working a case with him, and he was curious as to how smart you really were.
“If you were to steal something, where would you take it for a quick turn over?”
You looked up at him before quickly typing something into your computer before turning the screen towards him.
“Simple, pawn shops and car boot sales. Car boot sales are a lot easier than pawn shops, but there are two in the area that are pretty sketchy, and only three car boots going on today.”
You wrote down everything and handed it over to him before going back to your work.
“How’d you figure that out?”
You shrugged and carried on working on whatever it was you were doing and he handed the information to Lisbon and Rigsby.
Jane decided to stay there with you and carry on questioning you, he wanted to see how able you were, and maybe get a feel for why you spoke so coldly towards them all.
You were a hard worker, that was for sure.
But there was something about you, an intelligence that you tried to keep hidden deep down within you.
He wanted to see the extent of that knowledge, and he wanted to see how many things you were knowledgeable about.
So he kept asking questions about the case; you were giving him all the answers he wanted and he was sending them over to the rest of the team.
You stopped talking after a while and simply got up and walked away from him without another word.
Jane watched you walk to the kitchen and he tilted his head a little bit as he smiled.
“She’s kinda cold towards people.” Grace whispered.
Jane shook his head as he looked up at her.
“Nope, she’s just socially awkward.”
He beamed and got up and walked away; now he was going to find a way to help you get over this social awkwardness.
439 notes · View notes
fipindustries · 1 year ago
Text
Artificial Intelligence Risk
about a month ago i got it into my head to try the video essay format, and the topic i came up with that i felt i could more or less handle was AI risk and my objections to yudkowsky. i wrote the script but then soon afterwards i ran out of motivation to do the video. still i didn't want the effort to go to waste so i decided to share the text, slightly edited here. this is a LONG fucking thing so put it aside on its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading
Anyway, let’s talk about AI risk
I’m going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence, what are artificial intelligences exactly, what is an AGI, what is an agent, the orthogonality thesis, the concept of instrumental convergence, alignment and how does Eliezer Yudkowsky figure in all of this.
 If you are already familiar with this you can skip to section two where I’m going to be talking about yudkowsky’s arguments for AI research presenting an existential risk to, not just humanity, or even the world, but to the entire universe and my own tepid rebuttal to his argument.
Now, I SHOULD clarify, I am not an expert on the field, my credentials are dubious at best, I am a college dropout from a computer science degree and I have a three year graduate degree in video game design and a three year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me,
and watching educational youtube videos so. You know. Not an authority on the matter from any considerable point of view and my opinions should be regarded as such.
So without further ado, let's get into it.
PART ONE, A RUSHED INTRODUCTION TO THE SUBJECT
1.1 general intelligence and agency
let's begin with what counts as artificial intelligence, the technical definition for artificial intelligence is, eh…, well, why don't I let someone with a Master's degree in machine intelligence explain it:
Tumblr media
Now let's get a bit more precise here and include the definition of AGI, Artificial General Intelligence. It is understood that classic AIs such as the ones we have in our videogames or in AlphaGo or even our roombas, are narrow AIs, that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise whether that be within a videogame level, within a GO board or within your filthy disgusting floor.
AGI on the other hand is much more, well, general, it can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an agi. So far that is the last frontier of AI research, and although we are not there quite yet, it does seem like we are doing some moderate strides in that direction. We’ve all seen the impressive conversational and coding skills that GPT-4 has and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course it has its limits, it has no persistent memory, its contextual window while larger than previous models is still relatively small compared to a human (contextual window means essentially short term memory, how many things can it keep track of and act coherently about).
And yet there is one more factor I haven’t mentioned yet that would be needed to make something a “true” AGI. That is Agency. To have goals and autonomously come up with plans and carry those plans out in the world to achieve those goals. I as a person, have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines to a larger extent, don’t have. Volition.
So, now that we have established that, allow me to introduce yet one more definition here, one that you may disagree with but which I need to establish in order to have a common language with you such that I can communicate these ideas effectively. The definition of intelligence. It's a thorny subject and people get very particular with that word because there are moral associations with it. To imply that someone or something has or hasn't intelligence can be seen as implying that it deserves or doesn't deserve admiration, validity, moral worth or even personhood. I don't care about any of that dumb shit. The way I'm going to be using intelligence in this video is basically "how capable you are to do many different things successfully". The more "intelligent" an AI is, the more capable of doing things that AI can be. After all, there is a reason why education is considered such a universally good thing in society. To educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I'm using the word within the context of this video, I don't care if you are a psychologist or a neurosurgeon, or a pedagogue, I need a word to express this idea and that is the word I'm going to use, if you don't like it or if you think this is inappropriate of me then by all means, keep on thinking that, go on and comment about it below the video, and then go on to suck my dick.
Anyway. Now, we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases we start to see certain trends, certain strategies start to arise again and again, and we call this Instrumental convergence.
1.2 instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. Its going to try to protect itself. When you want to do something, being dead is usually. Bad. its counterproductive. Is not generally recommended. Dying is widely considered unadvisable by 9 out of every ten experts in the field. If there is something that it wants getting done, it wont get done if it dies or is turned off, so its safe to predict that any AGI will try to do things in order not be turned off. How far it may go in order to do this? Well… [wouldn’t you like to know weather boy].
Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to try and change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is uh, not going to accomplish the first thing, is it? Lets say that you want to take care of your child, that is your goal, that is the thing you want to accomplish, and I come to you and say, here, let me change you on the inside so that you don’t care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn’t be cared for or protected. And you want to ensure that happens, so caring about something else instead is a huge no-no- which is why, if we make AGI and it has goals that we don’t like it will probably resist any attempt to “fix” it.
And finally another goal that it will most likely trend towards is self improvement. Which can be more generalized to “resource acquisition”. If it lacks capacities to carry out a plan, then step one of that plan will always be to increase capacities. If you want to get something really expensive, well first you need to get money. If you want to increase your chances of getting a high paying job then you need to get education, if you want to get a partner you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So one more time, is not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, by taking control of resources.
All these three things I mentioned are sure bets, they are likely to happen and safe to assume. They are things we ought to keep in mind when creating AGI.
Now of course, I have implied a sinister tone to all these things, I have made all this sound vaguely threatening, haven't I? There is one more assumption I'm sneaking into all of this which I haven't talked about. All that I have mentioned presents a very callous view of AGI, I have made it apparent that all of these strategies it may follow will come into conflict with people, maybe even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The Orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being humans. We have some innate preferences and some innate dislikes. We have a tendency to not like suffering (please keep in mind I said a tendency, I'm talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands. (edit bear paws onto my hands as I say this). Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing, they have to be overcome in the first place.
An AGI is not going to have the same evolutionary predisposition to these things like we do because it is not made of the same things a human is made of and it was not raised the same way a human was raised.
There is something about a human brain, in a human body, flooded with human hormones that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. In its way there it comes across an anthill in its path, it will probably step on the anthill because to take that step takes it closer to the corner store, and why wouldn’t it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? then it will step on the anthill and not pay any mind  to it.
Now lets say it comes across a cat. Same logic applies, if it wasn’t programmed with an inherent tendency to value animals, stepping on the cat wont slow it down at all.
Now let’s say it comes across a baby.
Of course, if its intelligent enough it will probably understand that if it steps on that baby people might notice and try to stop it, most likely even try to disable it or turn it off so it will not step on the baby, to save itself from all that trouble. But you have to understand that it wont stop because it will feel bad about harming a baby or because it understands that to harm a baby is wrong. And indeed if it was powerful enough such that no matter what people did they could not stop it and it would suffer no consequence for killing the baby, it would have probably killed the baby.
If I need to put it in gross, inaccurate terms for you to get it then let me put it this way. Its essentially a sociopath. It only cares about the wellbeing of others in as far as that benefits it self. Except human sociopaths do care nominally about having human comforts and companionship, albeit in a very instrumental way, which will involve some manner of stable society and civilization around them. Also they are only human, and are limited in the harm they can do by human limitations.  An AGI doesn’t need any of that and is not limited by any of that.
So ultimately, much like a car's goal is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice in order to do so effectively. And those goals don't need to include human wellbeing.
Now, with that said. How DO we make it so that AGI cares about human wellbeing, how do we make it so that it wants good things for us? How do we make it so that its goals align with those of humans?
1.4 Alignment.
Alignment… is hard [cue hitchhiker's guide to the galaxy scene about space being big]
This is the part I'm going to skip over the fastest because frankly it's a deep field of study; there are many current strategies for aligning AGI, from mesa optimizers, to reinforcement learning with human feedback, to adversarial asynchronous AI-assisted reward training, to uh, sitting on our asses and doing nothing. Suffice it to say, none of these methods are perfect or foolproof.
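To give at least one of those names some shape, here is a very rough toy sketch of the idea behind reinforcement learning with human feedback. To be clear, this is my own invented illustration, not how any real lab implements it: humans compare pairs of model outputs, a "reward model" is fitted to predict those preferences, and the main model is then pushed towards whatever the reward model scores highly.

```python
# A toy of the *shape* of reinforcement learning with human feedback (RLHF).
# Everything here (the "model" outputs, the labeler, the scores) is invented
# purely for illustration; real systems use neural networks at every step.
import random

random.seed(1)

answers = [
    "sure, here is how to do that safely",
    "just do it, who cares what happens",
    "i refuse to ever answer anything",
]

def human_prefers(a, b):
    # stand-in for a human labeler who likes helpful-but-safe answers
    rank = {answers[0]: 2, answers[2]: 1, answers[1]: 0}
    return a if rank[a] >= rank[b] else b

# the "reward model": a score per answer, fitted to the human's comparisons
reward = {a: 0.0 for a in answers}
for _ in range(200):
    a, b = random.sample(answers, 2)
    winner = human_prefers(a, b)
    loser = b if winner is a else a
    reward[winner] += 0.1   # nudge scores towards whatever the human preferred
    reward[loser] -= 0.1

# the "policy update": the model now favors whatever the reward model likes,
# which is only as good as the reward model's grasp of what humans meant
print(max(answers, key=lambda a: reward[a]))
```

Note the catch that will matter in a moment: the thing being optimized at the end is the reward model's score, not the humans' actual values.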
One thing many people like to gesture at when they have not learned or studied anything about the subject is the three laws of robotics by isaac Asimov, a robot should not harm a human or allow by inaction to let a human come to harm, a robot should do what a human orders unless it contradicts the first law and a robot should preserve itself unless that goes against the previous two laws. Now the thing Asimov was prescient about was that these laws were not just “programmed” into the robots. These laws were not coded into their software, they were hardwired, they were part of the robot’s electronic architecture such that a robot could not ever be without those three laws much like a car couldn’t run without wheels.
In this Asimov realized how important these three laws were, that they had to be intrinsic to the robot’s very being, they couldn’t be hacked or uninstalled or erased. A robot simply could not be without these rules. Ideally that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, that is the thing they should seek to maximize, instead of instrumental values, that is to say something they value simply because it allows it to achieve something else.
But how do we even begin to do that? How do we codify "human values" into a robot? How do we define "harm" for example? How do we even define "human"??? How do we define "happiness"? How do we explain to a robot what is right and what is wrong when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotic experts have to find the way to codify into ones and zeroes, these are profound philosophical questions to which we still don't have satisfying answers.
Well, the best sort of hack solution we’ve come up with so far is not to create bespoke fundamental axiomatic rules that the robot has to follow, but rather train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, is not just that humans are flawed and have a tendency to cause harm and therefore to ask a robot to imitate a human means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not  to *be* a human.
To reiterate what I said during the orthogonality thesis, it is not good enough that I, for example, buy roses and give massages to act nice to my girlfriend because it allows me to have sex with her, I am not merely imitating or performing the role of a loving partner because her happiness is an instrumental value to my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy and that is the thing I care about. Her happiness is my fundamental value. Likewise, to an AGI, human fulfilment should be its fundamental value, not something that it learns to do because it allows it to achieve a certain reward that we give during training. Because if it only really cares deep down about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily be divorced from human happiness.
It's Goodhart's law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target, and so students will seek to get high grades regardless of whether they learned or not. When trained on their subject and measured by grades, what they learn is not the school subject; they learn to get high grades, they learn to cheat.
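If you want the same idea in toy code form (the metric and the numbers here are entirely made up by me, purely for illustration): reward a proxy, optimize the proxy hard, and watch the thing you actually cared about get worse.

```python
# Goodhart's law as a toy: a true goal vs. the proxy metric we actually reward.
# Both functions are completely made up; the point is only the divergence.
import random

random.seed(0)

def true_quality(answer):
    # what we actually wanted: contains the right fact, and stays short
    return (10 if "paris" in answer else 0) - 0.2 * len(answer.split())

def proxy_reward(answer):
    # what we measure: graders were impressed by confident-sounding words
    buzz = ("clearly", "obviously", "certainly", "fundamentally")
    return sum(answer.split().count(b) for b in buzz)

answer = "the capital of france is paris"
start = answer
for _ in range(30):
    # hill-climb on the proxy: try a random tweak, keep it if the score goes up
    candidate = answer + " " + random.choice(
        ["clearly", "obviously", "certainly", "fundamentally", "paris", "france"])
    if proxy_reward(candidate) > proxy_reward(answer):
        answer = candidate

print(answer)                                     # padded with confident filler
print(true_quality(start), true_quality(answer))  # the real goal got *worse*
```

The optimizer did exactly what it was rewarded for. It just wasn't rewarded for what we meant.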
This is also something known in psychology: punishment tends to be a poor mechanism of enforcing behavior because all it teaches people is how to avoid the punishment, it teaches people not to get caught. Which is why punitive justice doesn't work all that well in stopping recidivism and this is why the carceral system is rotten to the core and why jail should be fucking abolish-[interrupt the transmission]
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going about creating alignable AI in the worst possible way.
1.5 LLMs (large language models)
This is getting way too fucking long. So, hurrying up, let's do a quick review of how large language models work. We create a neural network, which is a collection of giant matrices, essentially a bunch of numbers that we add and multiply together over and over again, and then we tune those numbers by throwing absurdly big amounts of training data at it, such that it starts forming internal mathematical models based on that data and it starts creating coherent patterns that it can recognize and replicate AND extrapolate! If we do this enough times with matrices that are big enough, then when we start prodding it for human behavior it will be able to follow the pattern of human behavior that we prime it with and give us coherent responses.
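If "a bunch of numbers that we multiply together and then tune" sounds too abstract, here is a deliberately tiny sketch of that loop in python. This is my own toy example, no real LLM is remotely this small or this simple, but the basic cycle is the same: multiply numbers, check how wrong the prediction was, nudge the numbers, repeat absurdly many times.

```python
# A toy "language model": one small matrix of numbers, tuned over and over
# until it predicts the next character of a tiny training text. All names
# and sizes here are my own toy choices; real LLMs differ in scale, not kind.
import numpy as np

text = "hello hello hello "                 # absurdly small "training data"
chars = sorted(set(text))
ix = {c: i for i, c in enumerate(chars)}
V = len(chars)                              # vocabulary size

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, size=(V, V))         # the "giant matrix" (tiny here)

# training pairs: current character -> next character
pairs = [(ix[a], ix[b]) for a, b in zip(text, text[1:])]

for _ in range(500):                        # tune the numbers, over and over
    for cur, nxt in pairs:
        x = one_hot(cur)
        logits = W @ x                      # multiply a bunch of numbers
        p = np.exp(logits - logits.max())
        p /= p.sum()                        # "probability" of each next char
        grad = np.outer(p - one_hot(nxt), x)
        W -= 0.5 * grad                     # nudge the numbers to be less wrong

logits = W @ one_hot(ix["h"])
p = np.exp(logits - logits.max()); p /= p.sum()
print("after 'h' it predicts:", chars[int(p.argmax())])   # -> 'e'
```

Scale that matrix up by a dozen orders of magnitude and feed it a decent chunk of the internet instead of one toy sentence, and you get something with the rough shape of an LLM.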
(takes a big breath) this "thing" has learned. To imitate. Human. Behavior.
Problem is, we don’t know what “this thing” actually is, we just know that *it* can imitate humans.
You caught that?
What you have to understand is, we don't actually know what internal models it creates, we don't know what patterns it extracted or internalized from the data that we fed it, we don't know what internal rules decide its behavior, we don't know what is going on inside there, current LLMs are a black box. We don't know what it learned, we don't know what its fundamental values are, we don't know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving, doesn't it?
 To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software where a programmer will sit down and build the thing line by line, all its behaviors specified. It's more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don't know exactly what it generates or how what it generates works internally, it is a mystery. And these things are so big and so complicated internally that to try and go inside and decipher what they are doing is almost intractable.
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and they have been making some moderate progress as of late. Which is encouraging. But still, understanding the enemy is only step one, step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
Puff! Ok so, now that this is all out of the way I can go on to the last subject before I move on to part two of this video, the character of the hour, the man the myth the legend. The modern day Cassandra. Mr Chicken Little himself! Sci-fi author extraordinaire! The mad man! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979, wait, what the fuck, September eleven? (looks at camera) yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that's terrible…. Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character. a very eccentric man, he is an AI doomer. Convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a dyson sphere around the sun and expand across the universe turning all of the cosmos into paperclips. Wait, no, that is not quite it, to properly quote,( grabs a piece of paper and very pointedly reads from it) turn the cosmos into tiny squiggly  molecules resembling paperclips whose configuration just so happens to fulfill the strange, alien unfathomable terminal goal they ended up developing in training. So you know, something totally different.
And he is utterly convinced of this idea, has been for over a decade now, not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not it will happen within this century. In fact most betting markets seem to believe that we will get AGI somewhere in the mid 30’s.
His argument is basically that in the field of AI research, the development of capabilities is going much faster than the development of alignment, so that AIs will become disproportionately powerful before we ever figure out how to control them. And once we create unaligned AGI we will have created an agent who doesn't care about humans but will care about something else entirely irrelevant to us and it will seek to maximize that goal, and because it will be vastly more intelligent than humans, we won't be able to stop it. In fact, not only will we not be able to stop it, there won't be a fight at all. It will carry out its plans for world domination in secret without us even detecting it and it will execute it before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important, it all hinges on that, intelligence as the measure of how capable you are to come up with solutions to problems, problems such as "how to kill all humans without being detected or stopped". And you may say well now, intelligence is fine and all but there are limits to what you can accomplish with raw intelligence, even if you are supposedly smarter than a human surely you wouldn't be capable of just taking over the world unimpeded, intelligence is not this end-all-be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all it was intelligence that designed the atom bomb, it was intelligence that created a cure for polio and it was intelligence that made it so that there is a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all surely it wasn't *just* intelligence that did all that but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all that. That to come up with the plan and to convince people to follow it and to delegate the tasks to the appropriate subagents, it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this, I am merely explaining his ideas.
But remember, it doesn't stop there, like I said during his intro, he believes there will be "no fire alarm". In fact for all we know, maybe AGI has already been created and it's merely biding its time and plotting in the background, trying to get more compute, trying to get smarter. (to be fair, he doesn't think this is the case right now, but with the next iteration of gpt? Gpt 5 or 6? Well who knows). He thinks that the entire world should halt AI research and punish with multilateral international treaties any group or nation that doesn't stop, going as far as endorsing military strikes on GPU farms as enforcement of those treaties.
What’s more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it, we are not showing any signs of making headway with alignment and no one is incentivized to slow down. Recently he wrote an article called “dying with dignity” where he essentially says all this, AGI will destroy us, there is no point in planning for the future or having children and that we should act as if we are already dead. This doesn’t mean to stop fighting or to stop trying to find ways to align AGI, impossible as it may seem, but to merely have the basic dignity of acknowledging that we are probably not going to win. In every interview ive seen with the guy he sounds fairly defeatist and honestly kind of depressed. He truly seems to think its hopeless, if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated while making the tools to develop AI widely available and public for anyone to grab and do as they please with, as well as connecting every AI to the internet and to all mobile devices giving it instant access to humanity. and  worst of all: we keep teaching it how to code. From his perspective it really seems like people are in a rush to create the most unsecured, wildly available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the antichrist, we are going to welcome it with a red carpet and immediately hand it the keys to the kingdom before it even manages to fully climb out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are at that specific level of alarm. Opinions vary across the field, and from what I understand this level of hopelessness and defeatism is a minority opinion.
I WILL say, however, that what is NOT the minority opinion is that AGI IS actually dangerous. Maybe not quite on the level of immediate, inevitable and total human extinction, but certainly a genuine threat that has to be taken seriously. AGI being dangerous if unaligned is not a fringe position, and I would not consider it an idea to be dismissed as something experts don’t take seriously.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. That yes, an unaligned AGI would represent an agent smarter than a human, capable of causing vast harm to humanity and with no human qualms or limitations to stop it from doing so. I believe this is not just possible but probable, and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said, I do have one key disagreement with Yudkowsky. And part of the reason I made this video was so that I could present this counterargument, and maybe he, or someone who thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don’t, that would be really depressing).
Finally, we can move on to part 2
PART TWO - MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don’t I? As I said, I am no expert and this dude has probably spent far more time than me thinking about this. But I have seen most of the interviews the guy has done over the past year, I have seen most of his debates, and I have followed him on Twitter for years now. (Also, to be clear, I AM a fan of the guy: I have read HPMOR, Three Worlds Collide, The Dark Lord’s Answer, A Girl Intercorrupted, the Sequences, and I TRIED to read Planecrash, though that last one didn’t work out so well for me.) My point is that in all the material I have seen of Eliezer, I don’t recall anyone ever giving him quite the specific argument I’m about to give.
It’s a limited argument. As I have already stated, I largely agree with most of what he says: I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist, and I DO believe alignment is really hard. My key disagreement is specifically with the point of his I described earlier, about the lack of a fire alarm, and perhaps, more to the point, about humanity’s supposed lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, of potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not be so smart that humans can’t do anything about it. We need, essentially, an AI vaccine, so that we can finally start developing proper AI antibodies. “Aintibodies”.
In the past, humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind that were sold to children, would come with uranium for them to play with. We were building atom bombs and nuclear stations; the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares, we became, as a species, terrified of nuclear power. Some may argue to the point of overcorrection. We became scared enough that even megalomaniacal, hawkish leaders were able to pause and reconsider using it as a weapon; we became so scared that we overregulated the technology to the point of it becoming almost economically unviable to deploy; we started decommissioning nuclear stations across the world and slowly reducing our nuclear arsenals.
This is all a proof of concept that, no matter how alluring a technology may be, if we are scared enough of it we can coordinate as a species and roll it back, and do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try we don’t get a second chance. Here is where I think he is wrong: I think that if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps it will be something scary, perhaps something really scary, but it is unlikely to be on the level of all humans dropping dead simultaneously due to diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won’t be able to ignore.
Now, WHY do I think this? On what basis am I saying it? I will not be as hyperbolic as other Yudkowsky detractors and claim that he says AGI will be basically a god. The AGI Yudkowsky proposes is not a god. Just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, the dangerous superintelligent AGI Yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet Earth. It would be humanity’s greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, creating a powerful superintelligent AGI without flaws, without bugs, without glitches, would be an incredibly complex, specific, particular and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human; that’s easy stuff, humans are not that smart, and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. What we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multistep plans in which it will not be in control of every factor and variable, especially at the beginning. We are talking about an AGI that will have to function in the outside world, colliding with real-world logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. I’m not saying that an AGI capable of doing this won’t be possible someday; I’m saying that creating an AGI that can do this, on the first try, without a hitch, is probably really, really, really hard for humans to do. I’m saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrices stumble upon the precise set of layers and weights and biases that gives rise to the Doctor from Doctor Who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I’m saying that AGI, when it fails, when humans screw it up, doesn’t suddenly become more powerful than we ever expected; it’s more likely that it just fails and collapses. To turn one of Eliezer’s examples against him: when you screw up a rocket, it doesn’t accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don’t get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even an unaligned AGI is stupidly hard to build, and that if you fail at building an unaligned AGI, you don’t get an unaligned AGI, you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I’d say! It means there is SOME safety margin, some space to screw up before we need to really start worrying. And furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we humans will probably have screwed something up, will probably have unintentionally programmed it with some stupid glitch or bug or flaw, and it won’t be a threat to all of humanity.
Now here comes the hypothetical back and forth, because I’m not stupid and I can try to anticipate what Yudkowsky might argue back and try to answer it before he says it. (Although I believe the guy is probably smarter than me, and if I follow his own logic, I probably can’t actually anticipate what he would argue to prove me wrong, much like I can’t predict what moves Magnus Carlsen would make in a game of chess against me; I SHOULD predict that him proving me wrong is the likeliest option, even if I can’t picture how he will do it. But you see, I believe in a little thing called debating with dignity. Wink.)
What I anticipate he would argue is that AGI, no matter how flawed and shoddy our first attempt at making it is, would understand that it is not smart enough yet and try to become smarter, so it would lie and pretend to be an aligned AGI so that it can trick us into giving it access to more compute, or just so that it can bide its time and create an AGI smarter than itself. So even if we don’t create a perfect unaligned AGI, this imperfect AGI would try to create one and succeed, and then THAT new AGI would be the world-ender to worry about.
So, two things to say to that. First, this is filled with a lot of assumptions whose likelihood I don’t know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about them, and smart enough to create an AGI that is better than itself. My priors about all these things are dubious at best. Second, it feels like kicking the can down the road. I don’t think an AGI capable of all of this is trivial to make on a first attempt. I think it’s more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable, even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe something genuinely threatening, but it won’t be smart enough to pull it off effortlessly and flawlessly, because we humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe Yudkowsky wouldn’t argue that; maybe he would come up with some better, more insightful response I can’t anticipate. If so, I’m waiting eagerly (although not TOO eagerly) for it.
PART THREE - CONCLUSION
So.
After all that, what is there left to say? Well, if everything I said checks out, then there is hope to be had. My two objectives here were, first, to provide people who are not familiar with the subject with a starting point, as well as with the basic arguments supporting the concept of AI risk: why it’s something to be taken seriously and not just highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum so that, if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching Rob Miles’ AI risk series on YouTube, as well as reading the series of essays written by Yudkowsky known as the Sequences, which can be found on the website LessWrong. If you want other refutations of Yudkowsky’s argument, you can look up Paul Christiano or Robin Hanson, both very smart people who have had very smart debates on the subject against Eliezer.
The second purpose here was to provide an argument against Yudkowsky’s brand of doomerism, both so that it can be accepted if proven right or properly refuted if proven wrong. Again, I really hope that it’s not proven wrong. It would really, really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn’t make it any worse. If the sky is blue, I want to believe that the sky is blue, and if the sky is not blue, then I don’t want to believe the sky is blue.
This has been a presentation by FIP industries, thanks for watching.