#artificial intelligence thesis
Text
*This document is a blend of Real History and Speculative Extrapolation. *Historically Accurate Elements: Early Technology (1700-1950), Mid-20th Century Developments, & Recent Technology (1990s-2020s). *This Document Contains Conspiracy Theories of: Overstated Connections, Current AI Capabilities, & Economic Data.
#Ai#artificial intelligence#consciousness#Content Creator#control#Creative Writing#dystopia#Dystopian#Emmitt Owens#existential horror#freedom#Future#futuristic#Historical Events#horror#humanity#optimization#Plizaya Productions#psychological horror#science fiction#Scifi#Short Story#surveillance#technology#The Gentle Dystopia#Thesis#ThisIsMineEO#TumbleDweeeb#WordPress Writer#Writer Life
Video
I dare you to watch this video #hvanderbiltalexander
#youtube#sonicconditioning#controlledschizophrenia#ThesisPHDLawsuits#Thesis#PHD#Lawsuits#Miami#Florida#Miami-Dade County#Organized Harassment#behavior modification#ai#artificial intelligence#billionaire#ignored#disregarded#USA#America#The United States of America#Learn#Stream#Streaming#Founders Stream
Text
Data Science and Artificial Intelligence Degree under 30 minutes | courses, thesis, internship!
course descriptions, personal experience, and life after uni: this video goes over the bachelor's degree I did at Maastricht University … source
Note
I want my boy Gaz to get some recognition 😭😭😭😭
Maybe the team will get to meet her🤨🤨🤨🤨
(okay but like imagine... Gaz having a wife similar to Price's and Ghost's wife like she is all sweet, loving, and caring... And then boom! She's Carrying Gaz like it's nothing! Like she has that Texas Cottage core vibe (is that even a thing?) like girl is sunshine and strength)
omg omg omg... im so sorry it took so long anon RAAAA. But! I have an ideaa hehehhe. Soo yk Rick and Morty?? Hehehhe well…

cw: chaotic afab reader x kyle ‘gaz’ garrick, slightly mad scientist afab reader, fluff
HEADCANON: The team meets Gaz’s bird. And well…. She was probably more than they’d expected
PAIRING: Kyle ‘Gaz’ Garrick x afab reader
Kyle has been dating her for months.
Wildly intelligent and hilariously blunt. Slightly feral lass who wears chaos like perfume and can talk about planetary physics and frogs in the same breath.
The kind who corrects documentaries mid-sentence, and who once, after snooping through his documents, told Kyle that his missile trajectory calculations were “embarrassingly phallic,” and sincerely meant it.
And Kyle? Well... He’s absolutely gone for her.
Has been since day one when she marched up to him after attending a childhood friend’s lecture, shoved a melting popsicle in his hand, and said:
"If you had to save the world with only one mathematical constant, which one would you choose? Don’t think — answer!"
Caught between her unblinking stare and a rapidly dripping sticky mango mixture near his cargos, Kyle had only blinked twice and mumbled, “...Pi?”
“Coward,” she said, then grinned like she’d just met her new favorite problem.
That was it. Done. Hooked. Doomed, even.
And well Kyle?
Kyle — awestruck, bemused, and surprised, fingers and wrist sticky with artificial sugar and syrup, the sugary liquid staining his newly acquired cargos — could only smile back and nod almost knowingly.
The 141 met her months later though, during one of those rare in-between missions when there's time for drinks and dinner and recharging before the next chaos hits. But here he was. Fucking sweating and itching through and through.
Well it wasn’t like he never expected all their paths to cross eventually. He always knew she’d meet them. Meet this.
Introduce herself to this part of his life soon enough and not as an accessory or a passing visitor. But as something inevitable. Like gravity. Like sunrise. Something meant to be embedded into every bit of narrative she could sew herself into.
Because if Kyle was ever honest, he knew she wasn't the kind of person you could keep in a separate drawer. No, never. He would never even think of shutting her away in some pent-up flat or four-cornered bedroom. Pretty little bird kept and fed well with jewels and soft perches? No. That wasn't her.
That was never going to be her.
Never.
She was storm and thesis, claws and questions, and Kyle -- sweet, brilliant Kyle -- knew it from the moment she walked into his life like a living paradox, equal parts catastrophe and charm. She didn’t visit chapters. She rewrote them. Annotated margins. Circled themes. Demanded footnotes.
So yes, he always knew.
She overflows. Gushes. Deluges. Trickles, sweet and syrupy, into the vestiges of the gloomy parts of his existence. Will spill into everything and into him. And Kyle, hopelessly, stupidly gone for her, will never really try to stop it.
So if he was being honest, some part of him had always imagined this moment -- her walking into the same room as the lads, sharp-tongued and starlit, leaving a trail of sparks in her wake. Never a question of if. Only of when.
And now it was when.
But Christ was he still bloody nervous, aye?
Collar too hot and cap a bit too tight on his forehead, palms vaguely clammy like he was back in basic waiting to be called for his first ever inspection all over again. Which was stupid, because this wasn’t a mission. Wasn’t even a bloody op.
It was just.... her -- meeting the rest of his team.
And yet, Kyle was still internally combusting like she was a ticking biochemical warhead that could either charm the lads or annihilate the entirety of Price's backyard.
He glanced sideways at the entrance. No sign of her yet. Okay. Okay. That was fine.
Soap, across from him, was already two pints in and mid-rant about the correct ranking of fast food crisps, while Ghost sat with his arms crossed and offered the occasional low grunt of disagreement. Slow-blinking in boredom and lazing around near some of Mrs. Price's potted plants.
Price nursed a whiskey like it was an old grudge and pretended not to be listening, albeit trying to stifle the slight quirk of his lip every time Soap looked even more miffed and disgruntled at Ghost's lack of interest in the importance of learning the difference between Cheese-flavored crisps and barbecued ones. The younger bloke was almost fuming at the disinterested, blasé remarks he received from his superior. Snobbish over Ghost not knowing the basic characteristics of Vinegar vs Vinegar-coated.
“She’s gonna love you lot,” Kyle muttered, more to himself than anyone else.
“Still don’t get why you’re sweatin’ bullets, mate,” Price said, sidling up next to Gaz once Soap started yelling at Ghost over the massive and weighty bastard choosing Walkers over Pringles, shaking his head with an amused grin. “You said she’s a wee genius, yeah? She’ll be fine, aye?”
“She's just.... odd,” Kyle said after swallowing.
Price’s eyebrows drooped a bit reassuringly. Boonie hat tilted, expression something between humoured and understanding -- the same look he gives rookies before a live op. “Odd’s never been a problem with us, son. You seen Soap’s sock drawer?”
“Ah sort them by how often I wear ‘em, obviously,” Soap called out from the booth, clearly listening now after a huff, stomping back to grab another pint. “It’s practical warfare.”
“Freak behaviour,” Ghost muttered behind his own drink.
Kyle exhaled a nervous laugh, glancing again at the door. “I just mean… she’s different. Proper brilliant, but she says things like ‘Diogenes walked so Newton could run,’ and she means it. Like, genuinely. She once argued with Siri and won.”
“She sounds like a bloody delight,” Price replied dryly, then gave him a nudge with his elbow. “C’mon. You think any of us are normal?”
Kyle looked down at his hands, a little calloused, a little sweaty. “She just means a lot. Don’t want her thinkin’ she’s gotta tone herself down for anyone. She deserves better than that.”
Price’s voice lowered, sincere. “Then don’t let her. The team’ll love her for exactly who she is. Just like you already do.”
Kyle was about to respond -- probably with something sarcastic and choked-up -- when the door creaked open.
She walks through the gate carrying a box labeled “Absolutely Not Explosives (Maybe Snacks)”, wearing a bright-green button-down with her usual tenured slacks, folded manila envelopes tucked in one pocket. Windblown, wide-eyed, her glasses sliding down her nose, grinning like she just stepped out of a fever dream and into someone else’s backyard. Armed and saddled with that same barefoot-in-a-storm kind of confidence that had ruined him from day one.
“Hi!” she calls out.
And it’s not just a greeting -- it’s an announcement. A declaration of entry. Like Archimedes, entropy, and the snack box had all been waiting for this exact moment to collide.
Kyle’s heart stuttered once, then promptly gave up any hope of ever functioning normally again.
She beelined for him as usual like a woman on a mission, but halfway there.... she noticed the fire pit --
-- specifically, the way it was constructed.
Oh shit, not again.
She veered without hesitation, knelt next to it, squinting like she was analyzing a nuclear core, and muttered, “Someone built this using a Fibonacci spiral as emotional support.”
“Fuck's Fibonacci?” Soap whispered loudly, nudging Ghost with his elbow. “This Gaz's lass then, aye?”
Ghost gave her a slow once-over. Head tilting a bit at her mismatched flats and patched pockets. “Bird looks like she drinks Red Bull and argues with God.”
Before Kyle could respond -- or run, depending on the emotional weather -- she reached into the sleeve of her coat and yanked out a... suspicious-looking metal rod.
No one spoke.
Then -- click -- a blade folded out. But not like a normal blade. No, this looked like a half-melted Swiss Army knife had made love to a soldering iron. Wires dangling at the bits of shorn metal. Clinking and sinewy, it was. A button at the side of the makeshift handle blinking blue rapidly.
Yep. Something definitely hissed, Price concluded as he minutely flinched for the first time at the sight of something so foreign and obtuse near his wife's petunias.
Ghost tensed, gaze locked like he was trying to identify what kind of improvised weapon she’d just birthed into existence, while Soap -- daft numpty -- only leaned forward in fascination.
“What the fuck is that?” Price asked, calm but also not calm, the way a father might ask why there’s a raccoon in the dishwasher.
She didn’t look up. “Thermodynamic calibrator-slash-ultralight torch. Built it from scrap and spite. Give me a sec.”
Then she jammed it into the soil like she was performing surgery on the lawn. A sharp hum buzzed through the air. One of the lawn lights flickered. She squinted at the fire pit, adjusted a dial, then jammed the device again into the soil near the base. The fire pit roared to life, its flame suddenly tall and balanced, licking upward in a soft golden spiral. It was mesmerizing, a near-perfect bloom of heat and symmetry.
The men collectively leaned back.
“Hell's bells” Soap muttered.
She stood, smacked some dirt off her knees, and grinned with both pride and a worrying amount of glee. “There,” she said, adjusting a final dial before stepping back. “Now it distributes heat evenly -- low flicker rate, too, in case anyone here’s prone to headaches or, you know… prefers not to feel like they’re being interrogated by the sun.”
Her tone was light, but her eyes flicked briefly toward Ghost -- casual, gentle, like it was just an offhand observation. But Kyle caught it. The way she noticed things most didn’t. The way she chose to.
Soap leaned back slowly, a grin now stretching across his face like a man watching the birth of a new religion.
“I like her", Soap grinned.
Kyle was already up on his feet, rubbing the back of his neck. “Uh, love… you gonna say hi properly, or you planning to interrogate more of the landscaping?”
She stood up straighter now, poised and ready, like nothing was odd once more, turning with an inviting and warm grin, holding the box up proudly with a small and enthusiastic wave. Almost like she hadn't just reconstructed a fire pit with a weaponized calculator and a god complex. “Hi! Sorry, got distracted. The heat ratios were offensive. Also, I brought snacks!”
She shook the box once for emphasis. It jangled. The sound was deeply suspicious.
Ghost, once relaxed and a bit.... touched that someone had picked up on his discomfort with flickering light without him saying a word, now sat a little straighter at that. Eyes sharp once again. Cautious and perched. Shoulders just barely tensed under his hoodie as something absolutely squeaked when she juggled the looming cardboard in her wiry hands repeatedly.
Price side-eyed the box like it had a timer.
Soap was still smiling like he’d just found a new hobby. Gait shifting as he approached her, reading the “Absolutely Not Explosives" label aloud. “Tha's either a bloody threat or a right good promise.”
“Depends on who opens it,” she replied cheerfully, then smiled, open and inviting, adjusting her grip to shake Soap's outstretched palm. Shoving the box right after into Kyle's chest. Price hummed in amusement at the sight as Kyle breathed an 'oof' at the weighty, unwieldy thing now in his grasp. A misguided care package from a mad scientist, at that. He was sure of it.
The shove made him stagger a step back, having to catch the box again with both hands as it tilted precariously to one side. Something clinked. Something else sloshed. Something definitely clicked.
Price hummed, one brow rising as he took another sip of whiskey. “She always gift-wrap danger?”
“Only on the holidays,” Kyle muttered, staring down at the box like it was about to start reciting code.
Meanwhile, she was already gripping Soap’s hand with a firm shake, her grin bright, chaotic energy radiating off her like a short-circuited sunbeam.
At his sergeant's words, Price shook his head in amusement and interest, a slight lift of his beard betraying a surprised smile, before stepping forward himself and offering his own hand. “You must be the chaos professor.”
She blinked at his hand at that, his words making her pause before grinning proudly, grasping his sinewy fingers just as firmly in return. “I’m not a professor. Yet. But I am a Doctor of Applied Theoretical Physics, with a minor in Quantum Physics.”
“You’ll fit right in,” he replied, clearly entertained. “I’m John.”
“Captain John Price,” she said then, squinting. Almost like something had just pieced itself together in her head. A corner of her glasses blinked green and blue -- light and subtle, just a shimmer beneath the lens, as if scanning data only she could see.
She tilted her head. “Ohhh. You’re the John Price. Task Force 141. SAS. Operation Kingfisher, the oil rig interception, three confirmed HVTs neutralized in twenty-one minutes. That John.”
Price raised a brow, his grip still firm in her handshake. “That’s a very specific résumé you’re rattling off.”
She grinned, shrugging. “I like to research my boyfriend’s coworkers. Helps me know what kind of cookies to bake and what kinds of extraction plans to draft in case things go horribly wrong. And can I just say for the record, that you truly have a ridiculously symmetrical face.”
Price chuckled low in his throat, that rare and gravelly sound of a man both flattered and bewildered. “Symmetrical, huh?”
She nodded, eyes narrowed with faux scrutiny. “Yep. It’s giving ‘military recruitment poster.’ Like someone made you in a lab to sell patriotism and protein powder.”
Soap let out a loud bark of laughter. “Och, she's clocked you dead-on, Cap.”
Kyle was standing off to the side now, box still in his arms, looking like he was debating whether to set it down gently or hurl it into the bushes before something in it decided to hatch. “Please don’t feed her ego,” he called over. “It’s already got its own gravitational field.”
She shot him a wink at his response. “That’s rich coming from the man who cried at my thesis defense.”
“That’s -- I had a cold,” Kyle protested, cheeks already pinking.
“She presented using live fluid simulations and built a metaphor about dark energy and love,” he added for the others, like that would somehow make it less devastating.
Ghost muttered into his glass, “Startin' to think you didn’t pull her… bird drafted you.”
“She did,” Kyle said, deadpan. “I was conscripted.”
Price shook his head, that amused smile now tugging higher under his beard. “Well, Doc, welcome to the madness.”
She glanced at the squad -- all casually observing her like she was both a field report and an open flame -- and clapped her hands once, bright and fearless.
“Excellent,” she said. “Then I’ll make tea after this. Also, about that fire pit--”
Soap looked delighted. “Aye, that wee disaster? That wis me, cheers.”
She gave him a mock-somber nod. Almost cringing at Soap's enthusiasm, as if it physically hurt her to school someone over something so small and pointless at the end of the day. “I admire the conviction, Johnny. But the stones.... were holding a grudge.”
Ghost tilted his head. “Fuck do stones hold a grudge for?”
She looked at him over her glasses. “Vibrations. Like people. Only less dramatic.”
Soap leaned over to Price, muttering, “This one’s a unit. A proper mad scientist.”
Price snorted. “And you love it.”
“You know I do.”
Finally, Kyle placed the suspicious box on the table with the care of someone setting down a baby rattlesnake. “Alright, so are we opening this or performing a ritual?”
She lit up. “Both.”
Something beeped.
Ghost stiffened.
Soap leaned closer.
Price calmly took another sip of his whiskey like he was very used to seeing strange things unfold in his garden.
And Kyle?
He just grinned, wide and resigned, as she started peeling back the tape with the flair of someone revealing buried treasure. Because this was her. All of her.
Spilling and overflowing for sure. All light, wit, and kinetic mess. Sharp edges wrapped in cellophane, brilliance hidden beneath layers of glitter and chaos and a worrying understanding of black-market circuit boards. A solar flare in the shape of his other half is what she is.
But somehow. Bloody somehow.
Still. Will. And is --
-- utterly Kyle's.
“Alright,” she said brightly, flipping the box open now with a flourish, “Let’s play snack roulette!”
Revealing the inside of the cardboard box, now filled with neatly organized rows of tiny vacuum-sealed parcels, each labelled with suspicious enthusiasm:
Nutritionally Suspicious Brownies
Possibly Radioactive Jam -- Only Kyle's
Chili Lemon Cry-Biscuits
Emotionally Unstable Muffins
Entropy Taffy
Soap leaned in with glee. “Christ, ye name yer snacks like they’ve got emotional issues.”
“They kind of do,” she replied, plucking out the Cry-Biscuits and casually tossing one to Ghost, who caught it one-handed with all the enthusiasm of a man expecting to be poisoned. He sniffed it once, then gave her a look.
“Why’s it humming.”
“Because it’s fresh,” she said simply, then added, “And also maybe reacting to trace particles in the air. The spice is… volatile.”
Ghost stared. “You trying to kill us, bird?”
“If I was, you'd already be carbon scoring,” she chirped.
Soap popped one of the taffies into his mouth with a crunch. Immediately blinked. “Holy shite. I can taste colors!”
masterlist
#cod men#cod mwii#cod x reader#cod fanfic#cod modern warfare#cod mw2#cod mobile#kyle gaz x you#kyle gaz x reader#kyle gaz garrick#gaz x reader#gaz x you#gaz x y/n#kyle gaz garrick x reader#kyle gaz garrick x you#gaz x female reader#gaz x oc#kyle garrick#sergeant kyle gaz garrick#cod fic#cod fluff#cod fandom#cod#tf 141 au#tf 141 x you#tf 141 x reader#cod oc#call of duty x reader#call of duty fanfic#call of duty modern warfare
Note
This is such a complex and nuanced topic that I can’t stop thinking now about artificial intelligence, personhood, and what it means to be alive. Because golem!Prowl actually seems to exist somewhere in the intersection of those ideas.
Certainly Prowl does not have a soul. And yet, where other golems depicted in mimic au seem to operate primarily as rule-based entities given a set of predefined orders that define their function, Prowl is able to go a step further — learning and defining his own rules based on observation and experience. Arguably, Prowl is even more advanced in this regard than real-world AI agents we might interact with, such as ChatGPT (which still requires humans to tell it when to update its knowledge, what data to use, and what that data means). Prowl formulates knowledge not just from a distillation and concentration of the most prominent and commonly accepted ideas that have come before.
He shows this when he rejects all the views that society accepts — resulting in the formulation of the idea that Primus must be wrong. And in a lot of ways, Prowl’s learning that gets him to ultimately reach that conclusion seems a lot more closely related to how we learn. He learns from observing the actions of those around him, from listening to what the people closest to him say and from experiencing things for himself. And this also shows in the beginnings of his interaction with Jazz. Prowl may know things like friendship as abstract concepts, but he only can truly come to define what they mean because he is experiencing them.
In some ways then, what seems to make Prowl much more advanced in his intelligence is that the conclusions he ultimately draws — the way he updates his understanding of the world to fit the framework he’s been given — is something he does independently. And this is what sets him apart.
So is he a person? Given his lack of soul or spark perhaps not. But then again, what truly defines humanity, for lack of a better word? Because perhaps there is not a clear and distinct line to tell when mimicry and close approximation crosses over to become the real thing.
But given the way that Prowl learns and interacts with the world around him, it does not seem too far-fetched to say that he is alive. And further, that he seems a fairly unique form of life within this continuity. Therefore, is he not his own individual? In much the same way that the others this society deems beasts and monsters because of their unique abilities are also individuals.
It’s just really interesting to think about.
(But I will stop myself there, because I did not initially think this would get as long as it did and I feel like I’ve already written an entire thesis in an ask at this point!)
DAMN That’s a really really interesting essay you got here👁
If we take an artificially created algorithm based on “seek a goal -> complete the goal” but then give it the learning capability of a real person, at what point is it gonna just become one? And if it gains the ability to have emotions, could they be considered “real” if it’s processing them in its own way, completely unknown to us?
I love making stories that force me to question the entire life hahdkj

Text
sooo i've seen a couple people talking about ai on here before but idk, i guess i'm old & didn't realize people were really using it for full creative projects??? idk after learning about the deactivation of a very popular account on here i just started typing and wrote this dumb drabble - kind of hoping to inspire youngins to not rely on computers so much and know that the BEST work always comes from your own noodle <3
wrote this in like 20 minutes, arguably mark x reader but no real romantic involvement
the line between creativity and computer generated ideas is thin.
the line between depression and contentment is thinner.
you remember being told as a child that "you wouldn’t always have a calculator in your pocket!" and that learning how to multiply for tax purposes was IMPORTANT.
except, now, you DID always have a calculator, and it was called a smartphone. it translated languages in real time, video chatted your parents in a different state, and gave you recipes for meals your mom never taught. it was a blessing, a miracle wrapped in sleek metal.
so why, now, did you feel so guilty when using it to give you a synopsis of the book you were meant to read for English?
the rap of knuckles against your bedroom window made you jump as if you’d been caught committing a crime. your heart was lurching out of your chest when you looked.
it was just mark, of course.
smiling. awkward. now tapping with one finger toward the lock.
you stood, shuffled over and fumbled to let him in. jesus mark! you could’ve warned somebody!
…but… i texted you like, 30 minutes ago telling you…?
his reasoning fell on deaf ears, your shoulders tight as you paced back to your desk and quickly closed out of the tab that was open. he caught it though – super speed was good for more than just moving.
was that openai? he asked plainly.
huh?
you slap the monitor shut, like the conversation would just be done after that.
i thought you were writing a thesis for your college entry exam?
your face was tight. sour. irritated.
i am, you respond curtly. i just had writers block and needed some help.
mark’s pretty lips pulled down and it made your stomach twist. you knew the hero theatrics were coming.
[y/n], i get being tired, or hitting a bump in the road – believe me, i really do. but… using artificial intelligence for things like this is just… weird, y’know? i saw this post a few weeks ago about this medical student trying to get his MD who was caught using it to answer questions for an exam. can you imagine that guy passing and then trying to prescribe you narcotics?
your expression was deadpan. was he seriously comparing you to a medical student?
mark sighed, running gloved fingers through his hair before letting his arm fall to his side and smiling at you. listen, you don’t need to prove to me you’re smart. i already know – that’s why you don’t even give me a chance to answer at trivia night.
you had to really focus not to smirk at that. he knew it. he smirked anyway.
so let the board see it too! you have something in your mind worth sharing – even if you’ve gotta sit on it for a while to get it out.
the frustration was still real. the defensiveness too. you wouldn’t admit it point-blank – that felt like suicide. but the drop in your shoulders, and the way your eyes wouldn’t meet his, were tells enough.
maybe he was right. creativity was dying, and the fight against its ghost was a battle worth waging.
#invincible#invincible fanfic#mark grayson#mark grayson x reader#invincible x reader#mark grayson fanfic#whimsical words#did i just drop an afterschool special for y'all about ai???#yes i did
Text
;R1999 A Study on Afflatus (I)
Analysis and theories regarding the concept of Afflatus within the universe of Reverse: 1999
if you're from the old r99 news server or the current r99 rp one (or if you've talked to me at any point about r99) then you might know how obsessed I am with afflatus analysis!
so after going feral on my main account about it, and seeing my afflatus thesis drafts just catch dust on my wips, I decided to just open a discussion about it in the fandom! just little by little as I get the thoughts out of my brain!
so yes, this is very much an invitation for people to discuss and theorize about smaller details of the game such as afflatus, medium and other things--there are many fun ways to interpret the way afflatus applies to characters, I'd love to hear everyone's thoughts! ping me in your posts, go feral in the reblogs or comments!
as usual, transcripts were taken from the R1999 Neocities transcript project!
During the first few months of GL's launch, "Afflatus" was generally considered an extradiegetic aspect of the game--a simple mechanic meant to facilitate gameplay for the player, without any relevance to the lore or the plot. Throughout the course of the following patches, namely within the main story, this idea was disproved in many ways, and there were plenty of clues that reinforced its existence in-universe from the very beginning. For the sake of those who have never noticed, I'll do my best to be thorough, but there may be some instances of Afflatus that I missed--feel free to let me know!
The very first time Afflatus is mentioned is in the main story, Chapter 01 - Stage 4 "Chicago Rescue"; Sonetto is the one to bring it up during one of the battle tutorials.
Sonetto: That's quite a lot of critters! Timekeeper, we must do something to turn things around now. Remember what the instructor said in class? "Afflatus is a way to hunt in the world." "Observations of the minerals, plants, stars, and beasts as well as our experiences with the spirit and intelligence let us better understand our own existence."
Sonetto: Select a proper target for me, Timekeeper.
Sonetto: Use an incantation that is strong against the enemy's Afflatus to defeat them more quickly. You can take it from here, Timekeeper.
The fact that this conversation takes place during a game tutorial doesn't instantly render the contents discussed as "just meta," since we've had many different instances of game mechanics being relevant to the overall history and worldbuilding of the setting; Artificial Somnambulism Therapy is both a game mode and a type of therapy developed by Mesmer Jr's family, used extensively within Laplace, as well as a key plot point in Chapter 3 "Nouvelles et Textes pour Rien"; Picrasma Candy is a way for the player to continue playing and an actual medicine for arcanists developed by Medicine Pocket that Argus heavily depends on to use her own arcanum. Bluepoch makes a point to further develop their story through these mechanics, thus it is impossible to separate them from the story itself--battle conversations, daily tidbits, loading screens, items and other details can all be considered canon! Afflatus is no different.
Another early instance of Afflatus occurs in the Tutorial Notebook, which disappears forever once completed--so the following screenshot was taken from this video!
The text reads:
Sometimes, our Afflatus is strong or weak against an enemy. We need to follow this principle and select the proper arcane skills based on the enemy. Strong or weak? Like a cat to a rat? The relationship between different Afflatuses is like the ecological cycle. When your Afflatus is strong against the target's, your incantation will deal more damage. ─ On Afflatus, Chapter 1
The sticky note implies the existence of a book or research on the subject ("On Afflatus, Chapter 1") which, in turn, supports Sonetto's dialogue about the Afflatus lessons she received in SPDM ("Remember what the instructor said in class?"). With this we can understand that Afflatus exists within the world to a degree that allows it to be studied, also eliminating one of the earliest theories in GL about how Vertin is the only person who can perceive Afflatus due to her status as the Timekeeper.
To my knowledge, there are no direct explanations nor clues as to why or how one would discern Afflatus in others as of writing this. What is the point of assigning Afflatus types in-universe? How can it be done? Sadly, I don't have answers to these questions in particular!
But let's analyze our current examples so far.
According to Sonetto's knowledge on the subject, Afflatus encompasses "observations of the minerals, plants, stars and beasts as well as our experiences with the spirit and intelligence," that allows people to understand themselves. This serves as a list of both Natural (Mineral, Plant, Star, Beast) and Primal (Spirit, Intellect) Afflatuses, while hinting towards the purpose of Afflatus as a tool of introspection.
With this, one may theorize that Afflatus can apply to every living being, as it tackles observations of the surrounding world (Natural Afflatus) and one's inner world (Primal Afflatus). This is partially true; there is a small yet important distinction to be made!
The 1.5 "Revival! The Uluru Games" patch explored the physiological and social differences between humans and arcanists through Ezra and Spathodea, and a new batch of loading screen tidbits were added, such as this one:
The text reads:
The arcanum's Afflatus categories do not apply to humans. However, factors such as personality and preferred instruments may cause certain individuals to have a closer affinity to a particular type of arcane Afflatus.
This daily tidbit confirms that arcanum's Afflatus categories do not apply to humans. The rest of the tidbits emphasize the contrast between the two groups in different aspects; humans cannot cast arcane skills, they use technology and commands rather than incantations, they're considered rational instead of passionate, reason vs instinct, etc.
But I believe there is an important distinction to be made! The first loading screen tidbit mentions "arcanum's Afflatus categories," rather than Afflatus itself. There is an aspect of Afflatus that is directly linked with arcanum, and thus it makes sense that it cannot be applied to humans.
We can see this happening in Greta Hoffman's report from the Special Chapter - "The Star" in which she explains her interactions with 37's mother, 77. Here are a few excerpts.
Writer of the Report: I'm not sure whether she was making fun of me or being serious, but I had this feeling that she was eager to tell me how she was granted the secret through a moment of afflatus. It seemed she just saw through the laws behind all things instead of finding them through logical deduction.
"HER": "The rhombus can't be seen with eyes. You shall close your eyes, hearken to the teaching of the supreme existence, and seize the moment of afflatus!" Writer of the Report: Of course, I didn't see anything, nor did I understand what a moment of afflatus was. Perhaps it's just another privilege enjoyed by arcanists, just like their right to be lunatic. Nevertheless, she reached the correct conclusion in a completely wrong way. Is it really possible?
Writer of the Report: But, if there is a god, why are you playing such a prank on us, after we had suffered from the collapse of all the existing orders and the failure of all the great laws? If this is what she called the glimpse of the supreme existence, the moment of afflatus, do you have to present it in such a cruel way?
Even Matilda brings up afflatus during this chapter, in reference to her job monitoring new members.
Matilda: &$#% ... I know she's a rookie, but even so, she's way too unbelievable! "Guide new members with caution and patience. Trigger their afflatus at the right time." Oh, I have to admit, Vertin is doing it better than me for now.
Note the distinction between capital A "Afflatus" and lowercase "afflatus." In this context, the "moment of afflatus" exists as its namesake implies--as an inspiration, a moment of divine impulse that only arcanists can utilize and, therefore, cannot be explained nor proved through human logic.
That is the basis for the tension between Greta Hoffman (a mixed whose arcane blood has been so diluted she can easily pass off as a pure-blooded human) and 77 (a pure-blooded arcanist from an isolated and ancient arcanist society) as two characters from vastly different groups that cannot reconcile nor find a middle ground in their differences. This is the arcane aspect of Afflatus as the 1.5's tidbit mentions, the part that cannot be applied to mankind. But Afflatus also exists as a tool of introspection as mentioned before, which encompasses aspects that any living being can relate to--therefore, it explains why we have both playable and non-playable humans with Afflatus types.
To further understand how non-arcane living beings can still lean towards different types of Afflatus, let's examine the enemies in the game.
The main story is very consistent in how it portrays enemies; if you pay attention to their battle information, you can see all the deliberate details Bluepoch has added: every enemy comes with a short description that might evolve and change along the story in future renditions of their fights, there are different card sets for different factions (Manus Vindictae's deep black and blue cards vs the Foundation's white and light blue cards), each attack/incantation is uniquely named, giving context and insight into the enemy you're fighting, and some even have uniquely named traits/buffs/debuffs!
And as far as I know, every single enemy in the game, regardless of whether they're arcane in nature or not, has an Afflatus assigned to them. Let's look at arcane beings first.

We see that all three examples match their respective Afflatus: a Mineral Carbuncle with the Mineral Afflatus; a Dryad, commonly associated with nature, with the Plant Afflatus; and Druvis III, a playable character, who retains her Plant Afflatus even as a mysterious NPC boss fight in which her regular incantations have been switched to Manus Vindictae's.
And we can also see continuity in Afflatus in other bosses, such as Matilda and Lilya in later chapters--they both retain their own Afflatus as playable characters, and much like Druvis, Matilda's cards are disguised as the default Foundation set for the sake of keeping their identities concealed. There is a clear intent behind these choices.

The Surface levels of Artificial Somnambulism, the ones that directly correlate to the main story, also feature many other playable characters with their respective Afflatuses; La Source in "Misty Lake a"; The Fool, Bunny Bunny, Pavia, Satsuki and Tennant in "Floating Park a" ...
And I do want to mention that there are instances in which the game allows us to fight playable characters whose Afflatuses have been changed--but there is still a clear goal behind this.
The beta levels of Artificial Somnambulism feature the same aforementioned characters as the alpha ones, but a few of them have different Afflatuses. This can be explained within the main story as the direct result of Vertin's mind being tampered with, as we see her struggle to remember and forget things clearly during her AST-induced coma in Chapter 3.
???: Her traumatic segment has been reactivated. Increase the power, stabilize her psychube. Try the next dream.
Z: The artificial somnambulism therapy may not work on her, Mesmer.
Mesmer Jr.: It means she had suffered the same traumatic experience repeatedly. Even so, she showed no behavioral or cognitive impairment. Back then, as we held her down and put the helmet on her, she even advised me in an extremely calm manner … “I agree with your judgment, but it’s just for this time.” … She was the bellwether of the “break-away” incident after all. I’ll say she’s been well-behaved this time.
Sonetto: … I-I thought … Timekeeper is receiving for her low spirit. But you said you held her down …
Mesmer Jr.: Oh, that’s just another description of the method used for the same purpose. The aim was to ensure Vertin was unconscious and taken back. That’s the direct order from the vice president of the committee, Constantine. The order from on high was given on the premise of rational thinking and consideration over pros and cons─you are not questioning the reasoning of mankind, are you?
A similar situation happens during UTTU Week, which features playable characters as different characters within the story that UTTU is attempting to share. For example, in 1.2 "Nightmare at Green Lake," the playable characters you fight in UTTU Week represent various different archetypes and tropes in horror.
These inconsistencies are done on purpose, as they're not meant to reflect the truth 1:1.
Now, let's look at human enemies. Here are the two human children from the beginning of the game who disrupt the suitcase--despite being humans, they both have Afflatus assigned to them, and not only that but different types as well, Star and Mineral respectively.

So, to backtrack again! Afflatus cannot be applied to humans from an arcane point of view, simply because humans cannot cast incantations. Therefore, their affinity for a specific type of Afflatus is based on something else, something that they share with arcane beings--such as personality, experiences and preferred instruments.
I want to propose the interpretation of Afflatus being the totality of one's experiences in life; depending on your experiences, the way they've shaped your thinking patterns, your instincts and your personality, you may have an affinity for one Afflatus or another.
This ties in with a different aspect of Afflatus: the idea that one's Afflatus type can change, as the person goes through big changes in their life that influence them in different ways.
If we acknowledge that these battle details are all deliberate and meant to add to the narrative, there are two outliers whose Afflatuses change. The first one is Kakania herself; her debut in 1.7 "E Lucevan le Stelle" includes a fight against her in Stage 8 "Mirror and Lantern" which clearly states her Afflatus is Intellect.
One could argue that this is not the true Kakania, as the battle involves fighting mirror versions of her--but as a reminder, the true one is, in fact, hidden among them! And furthermore, if the reflections are exact versions of the real Kakania, it makes sense that they would have the same battle information as her.
As we all know, when Kakania becomes playable in 1.9 "Vereinsamt," her Afflatus is not Intellect but Plant--I'd like to explain this change as the development Kakania goes through in this specific arc of the story.
Her idealistic views and activism, both for her city and her patients, are directly challenged as the story progresses. She realizes that none of her friends within The Circle were the people she thought them to be, namely Isolde, whose complex life and struggles were both overlooked and impossible to discern in Kakania's eyes due to their close relationship, and she also sees the townspeople she acknowledged as flawed but still worth fighting for turn into violent patriots. Everything that Kakania stood for is gone in an instant, and we see her fighting spirit turn into a desperate near-suicidal attempt at making up for her perceived wrongs.
Such radical events like this would warrant a change in Afflatus, as Kakania adjusts her views due to her experiences.
And then on the other hand, we have The Guiding One's Harbinger boss fight in 1.9's Stage 21 "A Homage Paid"--one of its core mechanics is to change its Afflatus to that of the last attack it received. Here we see that distinction from before, the lowercase afflatus referring to the arcane aspect, rather than the experiences a person goes through.
As far as I know, this section of the game also becomes unplayable (or is currently unplayable, I can't seem to access it anymore) so the following screenshots are taken from this video!
Kakania's Afflatus change is something that seems to be directly linked to her evolution as a character, while the Guiding One's Harbinger's Afflatus change is directly linked to its status as an arcane construct, using it as a "mimic strategy" during battle. The psychological aspect vs the arcane aspect of Afflatus, respectively.
Next, I'd like to discuss some assumptions made about the different Afflatus types; we expect that Beast characters must have an affinity with animals or be animals themselves (Darley Clatter, Getian, Medicine Pocket, Nick Bottom ...) and we expect that Star characters are all related to celestial bodies or the skies (37, Lilya, Matilda, Lorelei, Voyager ...) due to the naming conventions of the Afflatus. And yes, there are motifs within the Afflatus types that match their naming conventions, but a quick look through the character list proves there is much more to offer, as a good chunk of characters don't align with this initial read of their themes.
We have Pickles, a literal dog, and Kaalaa Baunaa, an astronomer, both with Mineral Afflatus instead of what one would expect of them. It's similar to how fandom perceives Awakened as the sole category for sentient objects, when we have characters like Door and Darley Clatter who are undoubtedly objects, implied to have been given sentience, and thus fall within the group of pure-blooded Arcanists rather than Awakened.
I would also like to point out that these initial motifs have nothing to do with a character's Arcanum--another theory I've seen around is that Afflatus types influence an arcanist's arcanum, which couldn't be further from the truth. One could argue that Kaalaa Baunaa's arcanum (the summoning of meteors and planets) is related to her Mineral Afflatus--both relate to rocks, after all--but just like the previous assumptions, this falls apart when you examine other examples. Jiu Niangzi's arcanum has nothing to do with rocks nor minerals, but liquor. Ulu's arcanum revolves around fire, yet she's still Mineral.
As far as we know, arcanum is something that can be inherited through bloodlines or lineage--think of Mesmer Jr's 01 Story in her Cover Profile, which states “Nobody is more talented in this than Mesmer Jr. Her bloodline gives her outstanding ability and keen senses, which makes everything clear and intelligible to her” in the context of performing AST, or Tennant, whose 02 Story hints towards her father performing the same type of arcanum she's known for--but it's also something that can be taught. We see this most clearly within students of SPDM such as Sonetto, as her skill set matches that of the SPDM students fought during Chapter 3, portraying the "standard" arcanum taught to all arcanist children.
But not only that, arcanum can be influenced by other factors, such as a character's situation and interests--Blonney's arcanum revolves around making drawings come to life, which correlates with her love for storytelling and horror as a child. Pavia's shadow arcanum is hinted to have been formed out of necessity or as a result of his childhood in a dark basement. Tooth Fairy utilizes the fairies she traps.
We also know that arcanists from the same family may not inherit the same level of arcane power, as seen in Shamane and Kumar; the latter was cast out of her family due to her weak arcane power. In all of these cases, Afflatus has nothing to do with arcanum.
So what exactly do Afflatus types tackle?
These are the Afflatus as I've analyzed them, as much as I could summarize it for easier digestion!
With this interpretation, we can see that Star relates to celestial objects and the skies, but also trailblazer geniuses and unstoppable forces, that which is out of reach for common people. Mineral relates to solid materials and stability in permanence, but also the rigidness of strict systems or traditions or stagnancy. Beast relates to wild animals and creatures found in the world, but also the survival of the individual, the struggle to find a place for oneself no matter what. And Plant relates to the flora and the natural cycles in the world, but also the safety of a collective, that which is inherent to the world such as community or change.
This is why these belong to the Natural category of Afflatus: they are concepts that already existed on the world or were manifested into it, from the ground we touch, the people we interact with, to the ideals and beliefs that influence and create societies or bring people together.
On the other hand we have Spirit, relating to the soul, the supernatural and the spiritual aspect of things, but also the unknown, to follow one's gut instinct or embrace the inexplicable. "The way I see the world is unconventional, because I feel these different things about why and how things are the way they are." And then Intellect, relating to the mind and the logical aspect of things, but also the different mindsets and patterns of thought one can have to rationalize things. "The way I see the world is unconventional, because I have these different rules about why and how things are the way they are."
This is why they belong to the Primal category of Afflatus: they are, as the name implies, ancient impulses and habits that mostly exist within ourselves, our thoughts and our feelings.
The Tutorial Notebook also mentions an "ecological cycle": there is a relationship between the different categories that explains why some are strong or weak against others. It's rather easy to understand for the Primal Afflatus, as it's the classic fight between hearts and minds, but Natural Afflatus is a little harder to grasp.

We see the relationship between Beast and Mineral; the former is weak against the latter, and the latter is strong against the former. You may read the Natural Afflatus wheel clockwise or counterclockwise.
Using the previous explanations, let us examine this cycle!
Beast is strong against Plant, because it's the disruption of a community or the harmony of the world through a single individual desperately fighting to change their current situation. A desperate animal does not think about the consequences its actions have on the environment while it tries to survive.
Plant is strong against Star, because it's a tight-knit collective that embraces change and thus, the lone genius cannot shine above the rest. In an environment that welcomes everyone and everything, there is no way to stand out.
Star is strong against Mineral, because it's a single individual choosing to disrupt the status quo, the stability of their society, for the sake of a dream or ideal. A single genius can topple entire societies.
And Mineral is strong against Beast, because a rigid set of rules or traditions leaves no place for those who don't fit inside of it or who oppose it. A government that advocates for mankind's superiority leaves no room for arcanists and their rights; it forces them to assimilate within their established rules.
And this cycle goes backwards and forwards!
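(And for anyone who parses this kind of wheel better as code than as prose, here's a minimal sketch of the Natural cycle as a simple lookup table! To be clear: the type names and advantage directions are from the game as described above, but the function, its name, and the 1.3x/0.7x multipliers are purely my own illustrative assumptions--not datamined values.)

```python
# A minimal sketch of the Natural Afflatus advantage wheel.
# Type names and directions follow the cycle described above;
# the 1.3 / 0.7 multipliers are illustrative placeholders only.

STRONG_AGAINST = {
    "Beast": "Plant",    # the desperate individual disrupts the collective
    "Plant": "Star",     # the adaptive collective absorbs the lone genius
    "Star": "Mineral",   # the genius topples the rigid status quo
    "Mineral": "Beast",  # rigid systems leave no room for the outsider
}

def damage_multiplier(attacker: str, defender: str) -> float:
    """Return an assumed damage multiplier for attacker vs defender."""
    if STRONG_AGAINST.get(attacker) == defender:
        return 1.3  # advantage: reading the wheel "forwards"
    if STRONG_AGAINST.get(defender) == attacker:
        return 0.7  # disadvantage: the same wheel read "backwards"
    return 1.0      # neutral (e.g. the Beast/Star and Mineral/Plant foils)

print(damage_multiplier("Star", "Mineral"))   # 1.3 -- advantage
print(damage_multiplier("Beast", "Mineral"))  # 0.7 -- disadvantage
print(damage_multiplier("Beast", "Star"))     # 1.0 -- foils, no advantage
```

Note how the pairs that never appear on either side of the table (Beast/Star, Mineral/Plant) fall through to neutral--which is exactly the "foil" relationship I want to propose next.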
But I would also like to propose a different type of relation: we understand the aspect of having advantage or disadvantage, but what about Afflatuses that directly mirror each other?
Beast and Star are two Afflatus that directly correlate to an individual against a collective, whereas Mineral and Plant are two Afflatus that directly correlate to a collective against the individual. They're foils of each other: Beast is the underdog and Star is the genius, while Mineral is the stagnant, rigid yet stable and secure system, and Plant is the ever-changing, adaptive nature of the world.
We may also see this in a more precise way: the ecological cycle and Afflatus relationships exist because someone of Mineral Afflatus who is stuck in their ways and refuses to change can be easily upset by someone of Star Afflatus whose nature is to radically change traditions and offer different paths. This is why Semmelweis, a Mineral Afflatus who is hellbent on clinging to the human aspect of herself and sticks to her stubborn mindsets, has such a fascination with Lorelei. Or rather, why Lorelei has such an effect on Semmelweis, as she is a Star Afflatus that begins Semmelweis' journey of self-discovery and acceptance within Series of Dusks.
We can also see previous themes discussed within this post here: one would think that such a radical change is enough to cause Semmelweis to change Afflatus, but we see through her gamemode and the different endings presented that this change is still very much in line with her mindsets and behaviour; Semmelweis remains stubbornly adaptive and pragmatic to the very end, and choosing to follow Lorelei has brought her a deeper insight to understand herself without radically changing who she truly is.
Another example would be Forget Me Not and Druvis III; we know that Forget Me Not is Mineral Afflatus due to his boss fight in Chapter 2 - Stage 13 "Documentary" and Druvis III is Plant Afflatus.
We see the foil dynamics of Afflatuses that don't directly interact with each other: the reason Forget Me Not and Druvis III seem to have this type of relationship can be explained through their Afflatus, with Forget Me Not insisting that she perpetuate the very same cycle of revenge and pain, to never move on, and to continue in the same spot of grieving and mourning for her family--while Druvis III's entire development throughout the Chicago arc, and even leading into the next chapters, tackles her desire to grow and move on, to finally let go of the worst night of her life that took her family away and begin healing from it. It's exactly what Vertin notices within her, and why she's able to connect with Druvis III.
Vertin: Once you dispel the arcanum, it would not be what it is now. I think you are clear, Ms. Druvis, that … Every tree lives for tomorrow.
And that's where I'll leave this extremely long introduction to my study on Afflatus! I'm planning on discussing other themes in the future, such as the way a character's Medium serves as a bridge between their Afflatus and Arcanum, analysis of the cover profile, but also proper in-depth analysis of each individual Afflatus!
There is so much to look at when discussing Afflatus, every single Insight material has its own description, and each stage for each Afflatus tells a story that relates to their themes!
Please don't be afraid to reach out with your own ideas or observations, I look forward to what everyone else thinks! And congratulations for making it this far <3
#reverse 1999#reverse: 1999#reverse 1999 afflatus#reverse 1999 headcanons#i dont know how to stress this enough#i LOVEEEEEEEEEEEEEEEEEEEEEE afflatus analysis#i LOVEEEEEEEEEEE thinking deeply about things and connecting dots
Text
Chat GPT (The Alan Association AU)
Fun Fact (for those who don't know): Noogai is a Medical Bot/Artificial Intelligence; he cannot be used for "revising"
Fun Fact: Vee's thesis is all about not using AI on art, and he's making Noogai (who is an AI) revise it.
Another Fun Fact (Unrelated): Spongey had nightmares for a week about her own thesis.
Also, another Fun Fact (also Unrelated): Both JM and Spongey worked on the same thesis, and they hated it.
Text
Defenseless (Chapter 1 - Late Night, Talking)
pairing: hwang junho x platonic(?) reader
i don't have any warnings at the moment but they'll be listed as the story continues. spaces will be continued as well, working on chapters five and six <3
—/—
The constant hum of traffic below her apartment, the way the streetlights bled through the edges of her curtains, the persistent buzz of her phone from endless group messages and reminders—it was all too much. Yet, here she was. A college senior in the heart of Seoul, miles from the small town where she’d grown up.
It had been 15 years since her family moved here when she was six. She had always thought it was a temporary thing—just a few years, an “adventure.” But here she was, nearing the end of her college years, and South Korea had long since become home.
Tonight, Y/N sat cross-legged on the couch, her laptop open in front of her but not of much use. She’d been staring at the same blank page for the last hour, the blinking cursor a mocking reminder of how little she’d accomplished today. The thesis paper on the ethics of artificial intelligence might as well have been written in a foreign language.
Procrastination had become second nature. But tonight wasn’t about her paper. Tonight, she was lost in thought.
Her phone buzzed again on the coffee table.
Levi: “You alive?”
Y/N smiled slightly at the message. Levi was her older brother, a couple of years ahead of her in life, and an officer with the Seoul Police Department. He’d always been the protective one, the one who’d pushed her to move here for college even though she wasn’t so sure at first. And now, as a senior, she was glad he’d made that decision.
He was busy. But that was just how Levi was.
Y/N’s fingers hovered over her phone before she typed back:
Y/N: “Yeah, just trying to finish this paper.”
Seconds later, another message popped up.
Levi: “Good luck with that. I’m probably gonna be out late tonight. Keep the door locked.”
Y/N frowned at the text. “Out late tonight.” That was a typical Levi thing to say. Always working late, running errands for his job, or doing something official. He had never been one to talk much about his work. She knew he was a detective, but that was about it.
She sat back against the cushions, rolling her neck to relieve the tension. She wasn’t worried about him; he was a cop, after all. But she did wonder what his night was going to be like—if it was going to be like all the other nights when he came home late, his face tight and his shoulders stiff with whatever case he was working on.
Another buzz.
Levi: “If you get bored, come grab dinner. There’s a place I found near the station. You’ll like it.”
She chuckled at that. Levi always tried to get her out of her shell, even if she didn’t want to leave her apartment. He’d been like that since they were kids—always pushing her to experience more.
Y/N: “I’ll think about it.”
She put her phone down and returned to her paper. She had to at least make it look like she was trying. Her eyes wandered to the clock on the wall.
11:15 p.m.
It was getting late, and the apartment felt heavier, the sounds of the city more distant now. Her gaze slid over to the window. The city was quieter now, but there was still that hum—the feeling of being in a place that never fully stopped moving. Sometimes, she envied the people who could just immerse themselves in the rush of it all. But she wasn’t one of them.
Her thoughts drifted back to Levi. He wasn’t the type to talk about his job much, but Y/N could tell there was a tiredness in him these past few months. Nothing outwardly strange, but a quiet shift she couldn’t ignore. Maybe it was just the stress of being a cop. He was, after all, always on—always in detective mode.
But Y/N wasn’t too concerned. If anything, she trusted him to handle whatever came his way.
She picked up her phone again and scrolled absentmindedly through her social media feed, finding herself growing irritated by the noise. She wanted nothing more than to shut everything out for a while. But the quiet was overwhelming, and her thoughts always had a way of creeping in, no matter how hard she tried to ignore them.
Y/N grabbed a blanket from the couch and wrapped it around her shoulders. The night wasn’t going to end with a finished paper or any brilliant thoughts on artificial intelligence. But she was okay with that. She didn’t mind the slow pace of her life most of the time. It was comfortable.
And comfortable, she thought as she looked at her reflection in the glass of the window, was all she really needed right now.
The Doorbell
The chime of her doorbell startled her out of her thoughts. Y/N blinked, glancing at the clock.
11:45 p.m.
Who the hell could be here at this hour?
Her thoughts immediately went to Levi, but then she remembered—he had said he was going to be out late. So, it couldn’t be him.
Her mind raced as she walked toward the door, her heartbeat picking up with each step. She peered through the peephole, and her eyes widened.
It was Levi. And standing beside him—Junho.
The sight of them together was strange, considering she hadn’t seen her brother in a couple of days. And Junho? She only ever saw him when he and Levi hung out, but it was rare. The two of them had the kind of friendship where you could sense their bond even when they didn’t speak.
And yet… here they were, standing in her doorway with pizza boxes.
Y/N blinked, half expecting them to disappear. “What are you guys doing here?” she asked, still standing behind the door, her voice filled with confusion.
Levi grinned as he shifted the pizza boxes in his arms. “We brought dinner.”
Junho, standing off to the side, nodded and grinned as well. “We thought you could use a break from your studies.”
Y/N raised an eyebrow. “A break? It’s almost midnight. Don’t you guys have… work?”
Levi just waved it off. “We’re both off tonight. Besides, I know you’re stressed about that thesis. Consider this your distraction.”
Y/N couldn’t help but smile at his offer, despite the weirdness of the situation. “I mean, I guess I can’t say no to pizza…”
Junho stepped forward with one of the boxes. “That’s the spirit.”
Y/N stepped back, making room for them to enter. “I swear, you two think you can just show up anytime you want. I’m surprised you didn’t knock on the window, too.”
Levi laughed, pushing past her into the apartment. “We figured you’d be in your study cave, avoiding humanity.” He tossed a wink in her direction, his usual playful self. “Plus, it’s not like we get to hang out like this anymore.”
Y/N shut the door behind them, still processing how sudden this was. “I should probably be the one bringing you food. You two always work yourselves into the ground.”
Junho shrugged, dropping the pizza on the coffee table and flopping onto the couch. “It’s what we do.”
Y/N settled beside him, taking a slice and leaning back into the cushions. “So, what’s up? I thought you were both supposed to be busy?”
Levi shot her a playful look, grabbing a slice himself. “You know how it is. A cop’s work is never done.”
They all settled into a comfortable silence for a while, the only sound being the quiet munching of pizza and the hum of the city outside. As much as Y/N wanted to focus on her paper, the presence of her brother and Junho was… refreshing. She’d forgotten what it was like to have the two of them just hang out with her.
—/—
Y/N had forgotten how much she enjoyed just hanging out. Between her schoolwork and the quiet days she spent mostly alone in her apartment, having her brother and Junho over felt like a rare gift. The kind that almost made her feel like a regular college student, one who didn’t get buried in papers every weekend.
The three of them sat on the couch, passing around pizza, talking about nothing and everything. It felt natural. Levi’s easygoing nature had a way of filling the silence, and Junho just seemed to go along with whatever Levi said, his quiet laugh following Levi’s jokes.
But after a while, Y/N found herself staring at her half-eaten slice, mind wandering as the conversation meandered between casual topics.
Levi stood up abruptly, stretching his arms over his head with a yawn. “I’m gonna hit the bathroom. Don’t eat all the pizza, okay?”
Y/N didn’t bother to look up. “I’ll try, but no promises.”
He smiled before heading down the hall to the bathroom, leaving her alone with Junho.
For a second, Y/N hesitated. It was just Junho. He’d been over to their place a handful of times—Levi’s work buddy, the guy who was always around but never too much in the spotlight. She’d spoken to him a couple of times, mostly small talk or shared jokes when Levi dragged him into their hangouts. But in the years of knowing him, they had never really talked.
And now, here they were, alone.
Y/N quickly picked up her slice of pizza and took a bite, eyes on the TV, though the screen was just a blur. She could feel the silence stretching between them. It wasn’t awkward in the way that made her want to flee, but it wasn’t comfortable either. There was something about Junho’s quiet presence that threw her off balance.
She glanced over at him. He was sitting back, his elbows propped up on the couch, staring at his slice of pizza like it was the most interesting thing in the room.
“You, uh, you still work the night shifts often?” Y/N asked, suddenly realizing how weird the question sounded. She cleared her throat. “I mean, I guess you do. Being a cop and all.”
Junho, who’d been mostly quiet until now, gave her a small smile. “Yeah, it’s part of the job. Sometimes it feels like I live in the station.”
Y/N nodded, though she had no idea how to respond to that. She had always known Junho as Levi’s friend—the guy who was occasionally in the background, never really involved in her day-to-day life. It was strange, sitting here with him like this. She felt like she was still trying to figure out how to engage without overstepping.
She cleared her throat again, setting her pizza down and suddenly feeling like her hands were too fidgety.
“So… uh… you and Levi working on any big cases right now?”
Junho’s eyes flickered to her, his gaze softening. “A few. You know, the usual. Sometimes the work gets… complicated.” He trailed off, as if debating how much to say. Then, to her surprise, he added, “But it’s manageable. It’s what we do.”
Y/N wasn’t sure why, but hearing him say that made her feel a little lighter. There was something about his calmness, his way of not saying too much but enough to keep the conversation going, that made her relax.
Still, the silence hung between them like a heavy curtain.
She scratched the back of her neck, trying to find something else to say. It was fine when Levi was here—they were a team, in a way. Levi’s easy banter and teasing made the room feel full, even if it was just the three of them. But now, with only Junho here, everything felt a little empty. She didn’t mind the quiet. It wasn’t awkward awkward, but it was definitely not… natural.
Junho seemed to sense her discomfort, and for the first time, he actually spoke. “It’s funny, you know. I’ve been around your brother for a while now, but we’ve never really talked much.”
Y/N raised an eyebrow at him, a little surprised by the comment. She leaned back against the couch, suddenly feeling a little less tense. “Yeah, same. I mean, you’re always around, but we’ve never really… had a real conversation.”
He nodded, a slight smile tugging at his lips. “Guess we’re both kind of bad at that.”
It was a small comment, but something about the way he said it made Y/N laugh. Not loudly, but just enough to ease some of the tension. “Guess so.”
Junho smiled again, this time a bit wider. It was a soft, genuine thing that she hadn’t noticed before. She had always seen him as stoic—quiet, reserved, the kind of guy who stayed in the background. But now that they were alone, Y/N was starting to realize how different he was in this setting. Without Levi’s constant presence, Junho seemed… more approachable, more human.
“So,” Y/N began, her voice less nervous now. “What do you usually do when you’re not at work?”
Junho looked at her, his expression thoughtful. “I don’t know. Not much, I guess. I like to run. Helps clear my head.”
Y/N raised her eyebrows. “Run? Like, you actually go outside and run?”
He chuckled, shaking his head. “Yeah. I mean, I don’t run marathons or anything. Just… to clear my mind.”
That made sense, Y/N thought. Junho always seemed like the type to need that space, that quiet moment to himself. She could relate in a way. The quiet was where she found her peace.
“I guess that’s one way to get away from it all,” she said, her voice light. “I… can barely run for more than five minutes before I start feeling like I’m going to die.”
Junho laughed, a real, soft laugh this time. “Maybe you’re doing it wrong.”
Y/N smiled, feeling the moment shift. This wasn’t so bad. They were just two people talking. Not brother and friend, just… people.
Just then, Levi’s voice echoed from down the hall. “Hey, don’t get too cozy without me!”
Y/N glanced toward the hall, her smile barely holding back a laugh. She could already hear Levi shuffling back toward them.
“Levi, we’re just talking,” she called back, her voice light.
Levi stepped back into the living room, his hands on his hips as he shot a playful glance at both of them. “You guys better not have solved all the world’s problems without me.”
Y/N stuck her tongue out at him, throwing a pillow in his direction. “Not yet, but give us another five minutes.”
Junho smirked, his earlier solemnity softened by the shared joke. “You’re lucky we saved you some pizza.”
Levi groaned dramatically, dropping back onto the couch and stealing a slice. “You’re lucky I didn’t come back and find you two still talking about running and philosophy or whatever.” He took a bite and settled in, clearly in no mood for deep conversation.
Y/N and Junho exchanged a brief, almost imperceptible glance, the kind of look that said this is enough—the moment had shifted back to normal. Just the three of them, in the most ordinary way. It wasn’t anything groundbreaking, but it was comfortable.
They ate in silence for a few minutes, the kind of silence that no one minded because it wasn’t awkward, just a peaceful pause. The kind of silence you share with people you’re close to. No expectations. No pressure.
And for a while, everything was just… easy.
#squid game front man#squid game s2#squid game season 2#frontman x reader#in ho x reader#kang sae byeok#player 001#player 067#player 456#seong gi hun#hwang jun ho#hwang in ho#jun ho x reader#junho x reader#x reader#fem reader#tw cussing#slow burn#platonic reader#maybe#depends on how i feel#squid game fanfic#squid game spoilers#cho sang woo#potentially triggering#reader is female#squid game#squid game x reader#squid game x you#squid game x y/n
39 notes
·
View notes
Text
On the subject of AI...
Okay so, I have been seeing more and more stuff related to AI-generated art recently so I’m gonna make my stance clear:
I am strongly against generative AI. I do not condone its usage personally, professionally, or in any other context.
More serious take under the cut, I am passionate about this subject:
So, first things first, I'll get my qualifications out of the way: BSc (Hons) Computer Science with a specialty in Artificial Intelligence systems and Data Security and Governance. I wrote my thesis, and did multiple R&D-style papers, on the subject. On the lower end I also have (I think the equivalent is an associate's?) qualifications in art and IT systems. I'm not normally the type to pull the 'well actually 🤓☝️' card, but I'm laying some groundwork here to establish that I am heavily involved in the fields this subject relates to, both academically and professionally.
So what is 'AI' in this context?
Nowadays when someone says ‘AI’, they’re most likely talking about Generative Artificial Intelligence – it’s a subtype of AI system that is used, primarily, to produce images, text, videos, and other media formats (thus, generative).
By this point, we’ve all heard of the likes of ChatGPT, Midjourney, etc – you get the idea. These are generative AI systems used to create the above mentioned content types.
Now, you might be inclined to think things such as:
‘Well, isn’t that a good thing? Creating stuff just got a whole lot easier!’
‘I struggle to draw [for xyz reason], so this is a really useful tool’
‘I’m not an artist, so it’s nice to be able to have something that makes things how I want them to look’
No, it’s not a good thing, and I’ll tell you exactly why.
-------------------------------------------------
What makes genAI so bad?
There are a few reasons that slate AI as condemnable, and I'll do my best to cover them here as concisely as I reasonably can. Some of these issues are, admittedly, hypothetical in nature – the fact of the matter is, this is a technology that has risen faster than people and legislation (law) can even keep up with.
Stealing Is Bad, M’kay?
Now you’re probably thinking, hold on, where does theft come into this? So, allow me to explain.
Generative AI systems are able to output the things that they do because first and foremost, they’re ‘trained’: fed lots and lots of data, so that when it’s queried with specific parameters, the result is media generated to specification. Most people understand this bit ��� I mean, a lot of us have screwed around with ChatGPT once or twice. I won't lie and say I haven't, because I have. Mainly for research purposes, but still. (The above is a massive simplification of the matter, because I ain't here to teach you at a university level)
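If it helps to see that train-then-query shape as actual code, here's a deliberately tiny sketch – a character-bigram counter of my own, nowhere near a real image or text model, and every name in it is made up for illustration:

```python
# Toy sketch only: a character-bigram "model", nothing like a real
# diffusion model or transformer.
import random
from collections import defaultdict

def train(corpus):
    # Count which character tends to follow which - a crude stand-in for
    # the statistical patterns a real system extracts from scraped data.
    model = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        model[a].append(b)
    return model

def generate(model, seed, length=40):
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return "".join(out)

scraped = "the cat sat on the mat and the cat sat on the hat"
model = train(scraped)       # step one: ingest text it didn't write
print(generate(model, "t"))  # step two: query it for "new" output
```

The toy only exists to show the shape: everything it outputs is stitched together from patterns in the ingested text, which is exactly why the question of where that text came from matters.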
Now, give some thought to where exactly that training data comes from.
Typically, this data is sourced from the web; droves of information are systematically scraped from just about every publicly available domain on the internet, whether that be photographs someone took, art, music, writing…the list goes on. Now, I'll underline the core of this issue nice and clearly so you get the point I'm making:
It’s not your work.
Nor does it belong to the people responsible for these systems; untold numbers of people have had their content - potentially personal content, copyrighted content - taken and used for data training. Think about it – one person having their stuff stolen and reused is bad, right? Now imagine you’ve got a whole bunch of someones who are having their stuff taken, likely without them even knowing about it, and well – that’s, obviously, very bad. Which sets up a great segue into the next point:
Potential Legislation Issues
For the sake of readability, I'll try not to dive too deep into legalese here. In short – because of the inherent nature of genAI (that is, the taking-and-using of potentially private and licensed material), there may come a time when this poses a very real legal issue in terms of usage rights.
At the time of writing, legislation hasn't caught up – there aren't any ratified laws that state how, and where, big AI systems such as ChatGPT can and cannot source training data. Many arguments could be made that the scope and nature of these systems practically divorces generated content from its source material; however, many do not agree with this sentiment – in fact, there have been some instances of people seeking legal action due to perceived copyright infringement and material reuse without fair compensation.
It might not be in violation of laws on paper right now, but it certainly violates the spirit of these laws – laws that are designed to protect the works of creatives the world over.
AI Is Trash, And It’s Getting Trashier
Woah woah woah, I thought this was a factual document, not an opinion piece!
Fair. I’d be a liar if I said it wasn’t partly rooted in opinion, but here’s the fact: genAI is, objectively, getting worse. I could get really technical with the why portion, but I’m not rewriting my thesis here, so I’ll put it as simply as possible:
1. AI gets trained on Internet Stuff. AI is dubiously correct at best because of how it aggregates data (that is, from everywhere, even the factually-incorrect places)
2. People use AI to make stuff. They take this stuff at face value, and they don't sanity check it against actual trusted sources of information (or a dictionary. Or an anatomy textbook)
3. People put that stuff back on the internet, be it in the form of images, written statements, "artwork", etc
4. Loop back to step 1
In the field of Artificial Intelligence this is sometimes called a runaway feedback loop: it’s the mother of all feedback loops that results in aggregated information getting more and more horrifically incorrect, inaccurate, and poorly put-together over time. Everything from facts to grammar, to that poor anime character’s sixth and seventh fingers – nothing gets spared, because there comes a point where these systems are being trained on their own outputs.
I somewhat affectionately refer to this as 'informational inbreeding'; it is becoming the pug of the digital landscape, bulging eyes and all.
Now I will note, runaway feedback loops typically refer to algorithmic bias - but if I'm being honest, it's an apt descriptor for what's happening here too.
This trend will, inevitably, continue to get worse over time; the prevalence of AI generated media is so commonplace now that it’s unavoidable – that these systems are going to be eating their own tails until they break.
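For the morbidly curious, the tail-eating is easy to caricature in code. The sketch below is my own toy, assuming nothing about any specific system: a "model" that just fits the average and spread of its data, publishes samples from that fit, and then gets retrained on its own publications.

```python
# Toy simulation of a model retrained on its own outputs - a cartoon
# of the feedback loop, not a measurement of any real genAI system.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=25)  # "human-made" originals

for generation in range(1, 16):
    mu, sigma = data.mean(), data.std()    # "train": fit the current data
    data = rng.normal(mu, sigma, size=25)  # "publish" outputs back online
    print(f"generation {generation:2d}: spread = {sigma:.3f}")
# On average the printed spread drifts downward: each pass keeps a
# little less of the original variety, and what's lost never comes back.
```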
-------------------------------------------------
But I can’t draw/write! What am I supposed to do?
The age-old struggle – myself and many others sympathize, we really do. Maybe you struggle to come up with ideas, or to put your thoughts to paper cohesively, or drawing and writing is just something you’ve never really taken the time to develop before, but you’re really eager to make a start for yourself.
Maybe, like many of us including myself, you have disabilities that limit your mobility, dexterity, cognition, etc. Not your fault, obviously – it can make stuff difficult! It really can! And it can be really demoralizing to feel as though you're limited or being held back by something you can't help.
Here’s the thing, though:
It’s not an excuse, and it won’t make you a good artist.
The very artists you may or may not look up to got as good as they did by practicing. We all started somewhere, and being honest, that somewhere is something we’d cringe at if we had to look back at it for more than five minutes. I know I do. But in the context of a genAI-dominated internet nowadays, it's still something wonderfully human.
There are also many, many artists across history and time with all manner of disabilities, from chronic pain to paralysis, who still create. No two disabilities are the same, a fact I am well aware of, but there is ample proof that sheer human tenacity is a powerful tool in and of itself.
Or, put more bluntly and somewhat callously: you are not a unique case. You are not in some special category that justifies this particular brand of laziness, and your difficulties and struggles aren't license to take things that aren't yours.
The only way you’re going to create successfully? Is by actually creating things yourself. ‘Asking ChatGPT’ to spit out a writing piece for you is not writing, and you are not a writer for doing so. Using Midjourney or whatever to generate you a picture does not make you an artist. You are only doing yourself a disservice by relying on these tools.
I'll probably add more to this in time, thoughts are hard and I'm tired.
26 notes
·
View notes
Text
academic inquiry!!! (pls help me out here)
I am a graduate student currently working on an intersectional thesis in history, memory, science/technology studies, and ludology. Having recently replayed Portal 2, I’ve been met with a hunch that the levels taking place in the abandoned Cold War-era facility create certain perceptions—or, in the words of scholar Alison Landsberg, “prosthetic memories”—of scientific and technological development following World War Two. Though they may not be exactly true-to-life, the creation of a prosthetic memory involves a person taking on “a more personal, deeply felt memory of a past event through which he or she did not live.” I believe that the narrative of abandoned Aperture may create prosthetic memories of Cold War-era scientific research and development procedures, especially in the fields of experimental physics, computing, and artificial intelligence. Consider, for instance, Aperture's shady research practices or their once-exclusive employment of "astronauts, Olympians, and war heroes." These ideas, in my mind, influence how one might perceive the practice of American science during the Cold War.
Obviously, the creation of a prosthetic memory does not entail complete historical accuracy. Portal 2's portrayal of Cold War science is not, in the slightest, comparable to real life. Rarely was SCIENCE itself the primary motive of research and development. Instead, I'm arguing that Portal 2 effectively demonstrates *an image* of science, thereby creating a prosthetic memory.
I have created this poll with the express purpose of determining whether or not I'm just making shit up. It's very possible that I've spent too many years in fandom and I'm seeing things that aren't there. Voting and/or providing your input and personal experiences will be immensely helpful in furthering my research. Please reblog so this inquiry might reach a wider audience. Thank you in advance!
**please vote ONLY if you have played Portal 2
#portal#portal 2#chell#wheatley#glados#arin shut up#really not sure what to tag here. it's been probably a decade since I've posted anything portal related#i may have also just been 10 years old the first time i played portal 2. and that probably just skewed my image of science forever.
21 notes
·
View notes
Text
Day-017: Partner
Lore:
Dr. Light’s idea was for robots that could grow, change, and think for themselves. However, he’d had to win over the committee before he could make that dream a reality. Feeling bad after he’d told the committee that he couldn’t support Wily’s double-gear system (especially because Wily had been there to witness it), he offered that they could build the first Robot Master together.
Wily had always been better with hardware and mechanical design. It was something of his specialty, even. With his skills with hardware and Light’s skills in artificial intelligence, the two could be basically unstoppable.
Wily had initially refused - the salty man he is, he definitely interpreted Light's gesture of goodwill as some kind of condescension. However, eventually he accepted. Light (mistakenly) took it to mean he'd been forgiven. Wily would go on to design a lot of the hardware, including, eventually, the prototype Megabuster and the Variable Tool System.
Dr.s Cossack and LaLinde weren’t officially on the project, though they did contribute their thoughts and ideas. When Blues’s core failed in the middle of the Military demonstration (because they built it with Blues the unarmed child in mind and forgot to compensate for the weapon attachment, and it turns out it couldn’t generate enough to power both Blues and the buster. The event damaged his core, making it more inefficient and unbalanced.), Light made sure to consult Dr. LaLinde to help him design the solar cores for Rock and Roll (she’s an environmental scientist and he figured she would probably be able to help him and Wily design a more efficient environmentally-friendly solar core.)
————————————————————————
Notes:
Their overall appearances are based on their young designs from Megaman 11. Their shirts are colored that way as a reference to their young selves from the Ruby Spears cartoon.
Wily’s hair color came from blending the shades of yellow of Piano, Bass, and Zero together (& then lightening it because 11’s Flashback Wily has LIGHT blond hair). With Light, I blended Rock, Blues, and X’s hair colors and darkened it. ✨ Contrast ✨
Light and Wily were roommates. They couldn't exactly escape each other. After ignoring Light for a week, Wily told their shared friend group that he was getting sick of having to see his "stupid, traitorous face" every morning and afternoon, and LaLinde & Cossack (he was doing his thesis at the time) individually suggested that he should try talking to Light about how he felt. He basically said "screw that!" but did take their advice to at least try to get along. It was the first crack in their friendship and he never actually forgave it.
Dysfunctional Besties <3 (/hj)
Also I think it would be kinda cute if Light was inspired on a subconscious level by Dr. Cossack talking about Kalinka & how she was growing up. She might be like 2, if she is even born yet tho. Still working out the timeline there. It’s a little fuzzy.
Blues took quite a few years to build because they had to do everything from the ground up. The Robot Masters built after him used modified versions of Blues's base code and designs, so they took comparatively less time.
The idea was presented before the committee as just the base code, which they determined would probably work. (I assume the committee would reach out to investors or something, but I’m like the furthest thing from a roboticist so I have no idea.) By the time Light founded Light Labs and obtained military funding, he’d gotten like. parental-levels of attached. He didn’t set out to make robot children but boy did he want robot children now—
and then he made 4 that were "children" children and like a bajillion that don’t stay at Light Labs
#rocktober#rocktober 2024#sibling shuffle au#mega man au#mega man classic#megaman#my art#dr. light#dr. Light#dr. Wily#dr wily#Lore#im not an engineer Idk what I’m talking about when it comes to that stuff
36 notes
·
View notes
Text
I have recently been thinking about the term "Artificial Intelligence", and how it might be defined in the Tron universe. What I find interesting is that most of the conscious entities in the digital world in Tron weren't intentionally created by humans to be conscious. They didn't intend for actuarial programs and security software to have thoughts and feelings. And yet, what can be called "intelligent life" somehow grew inside the ENCOM computers, unbeknownst to the humans. The same can of course be said about the ISOs, who weren't intentionally created by Kevin, but just showed up one day, again having grown out of the Grid somehow.
What's really interesting about this is that as far as I know, there are only two digital entities in the Tron universe who were created intentionally by humans to be "intelligent": The Master Control Program (created by Dillinger) and Clu (created by Flynn). And what do these two creations have in common? They're the antagonists. The bad guys. The main villains that have to be defeated, by the humans and by the unintentionally conscious programs.
I don't have any well thought out thesis about this, just some random thoughts. Is the message of Tron that life is only good if it "grows naturally" instead of being intentionally created? Are there some other interesting commonalities between the MCP and Clu that are relevant to this idea, such as the fact that they consider themselves better than their creators? Or that both Dillinger and Flynn intended for their creations to "run things" better than a human would? Is there some kind of reading to be made that could be a criticism against the current "AI" trend?
It also makes me genuinely curious about how the upcoming third Tron film will handle the subject. Because I can't imagine that it won't be dealing with AI in some regard. Will there be an intentionally created AI in it, and if so, will it be a good guy or a bad guy?

55 notes
·
View notes
Text
Steph the Alter Nerd is reading Omid’s new book.
Following the live read:
I joined late so Steph was already reading. She was starting the Sophie section. Seriously, why pick on Sophie who just puts her head down and focuses on work? That strikes me as unnecessarily vile.
Omid apparently thinks Charles hasn’t modernized the monarchy. Dude is an environmental icon and we now have a blended family in BP. That may not be everyone’s cup of tea, but you can’t deny that it’s modern.
Apparently Omid writes pages and pages about Charles’ “leaky pen” incident. It’s just a pen, Omid. Omid thinks this means Charles may not be up to the job, lololol. I’m dying. Mind you, Omid worships Harry who stripped in Vegas, wore a Nazi uniform, and called his fellow soldiers names. But yes, the leaky pen is far more significant than all that, somehow.
Really boring part about government stuff. Charles negotiates and reaches compromises with the government and that’s apparently bad? Also, Charles didn’t know what to expect after he became King??? Lolololol.
Charles lost sympathy for the Harkles after the documentary. Well, duh. We all did, Omid. That documentary was a huge own goal.
He blames the Royal Family for the documentary’s melodrama? Seriously? Who was crying on Oprah? Who was crying in a rented Vancouver mansion with her head wrapped in a towel? Who dropped hot, salty tears on her Hermes blanket? That’s the person responsible for the melodrama.
Anne supposedly kicked them out of Frogmore. I suspect this is fanfiction, but I love it. I want it to be true. This is my headcanon now.
And I do think fanfiction is the right term for this book. The BRF is super popular right now, so the book's thesis itself (that the BRF is in trouble) is pretty fantastical.
This book seems very, very boring. Omid seems to be desperately trying to argue that Charles’ first year went badly, but that’s just not reality. Omid used to be better at spinning than this.
Make the Royals Great Again? Uh, that was done in 2011. Everything we are seeing now was planted way back then, down to Kate's leafy crown. There's a general lack of both self-awareness and historical awareness in this book. Omid writes like someone who first became a "royal reporter" in 2016…which is exactly what he is. Too bad, because I do think there's an interesting analysis that could be made regarding 2023 and its place in royal PR. That's above Omid's pay grade though.
Lol, Omid discusses UK politics and it’s every bit as much of a disaster as one would expect. Stick to gossip, Omid.
Ok, Steph’s hydrating, so let’s step back for a minute and recall what this book was supposed to be. This was to be “Finding Freedom 2.0,” a chronicle of the Harkle post-Megxit success story. The publishers clearly didn’t like that and they made Omid write a book about the family as a whole. That’s because there was no Harkle success story and the publisher didn’t think another Harkle book would sell. Unfortunately, Omid is a Harkle specialist. He can’t write a book about the family (let alone successfully argue for its imminent demise). He simply doesn’t know enough.
Back to Steph. We’re now in Harry’s military service? Er, why? We jumped from 2023 to 2016 and now to the Afghanistan War?
I agree with Steph that Omid’s trying to associate the royals with MAGA and I can’t even articulate how stupid that is. Completely different countries, completely different cultures, completely different iconography. Just doesn’t work.
Now we’re at the Coronation Concert? The royals are in trouble because Elton wasn’t at the concert! Lolololol. The Harkle bubble is out of this world. Basically, if their inner circle wasn’t centered (Oprah, Elton, Omid, etc…), it’s because of a MAGA conspiracy that will bring the royals down.
Something, something throne. Charles looked awkward again. Constitutional crisis!
I feel like I’m grading student briefs. There’s a way to argue this and there is evidence you can cite for this argument, but this isn’t it. You shouldn’t write pages and pages about a leaky pen and then minimize the bags of charity money as “perception.” You should start with the bags of charity money then use the leaky pen to bolster the “perception” argument.
Another disagreement with the government. Aargh! That should be lumped together with the other arguments with the government. Or it shouldn't be mentioned at all. You're arguing that Charles is an old-fashioned idiot who is not a good king, so why make him look like someone who is aware of current social issues and engaged with his government?
Racism. Finally! No wait, it’s boring.
Charles had an affair with Camilla. Lol, that’s not exactly news, love. The time jumping is driving me nuts.
Took a break to let the dog out and now we’re in Andrew’s interview. Of course we are.
Will exiled Andrew. I hope this is true. Wait, that’s the famous “power struggle”? Andrew??? I don’t think that’s a power struggle. That’s just Charles passing the buck.
Oh, lord. More Andrew. That’s it. I’m going to bed. I’ll tune back tomorrow.
120 notes
·
View notes
Text
My thesis will not contain anything created with artificial intelligence but it'll be a stretch to say any intelligence was used at all
12 notes
·
View notes
Text
Artificial Intelligence Risk
About a month ago I got the idea of trying out the video essay format, and the topic I came up with that I felt I could more or less handle was AI risk and my objections to Yudkowsky. I wrote the script, but soon afterwards I ran out of motivation to do the video. Still, I didn't want the effort to go to waste, so I decided to share the text, slightly edited here. This is a LONG fucking thing, so put it aside in its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading.
Anyway, let’s talk about AI risk
I'm going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence: what artificial intelligences are exactly, what an AGI is, what an agent is, the orthogonality thesis, the concept of instrumental convergence, alignment, and how Eliezer Yudkowsky figures in all of this.
If you are already familiar with this you can skip to section two, where I'm going to be talking about Yudkowsky's argument that AI research presents an existential risk to not just humanity, or even the world, but the entire universe, and my own tepid rebuttal to his argument.
Now, I SHOULD clarify, I am not an expert on the field, my credentials are dubious at best: I am a college dropout from a computer science degree and I have a three-year graduate degree in video game design and a three-year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me,
and watching educational YouTube videos, so. You know. Not an authority on the matter from any considerable point of view, and my opinions should be regarded as such.
So without further ado, let's get into it.
PART ONE, A RUSHED INTRODUCTION TO THE SUBJECT
1.1 general intelligence and agency
Let's begin with what counts as artificial intelligence. The technical definition of artificial intelligence is, eh…, well, why don't I let a Master's degree in machine intelligence explain it:
Now let's get a bit more precise here and include the definition of AGI, Artificial General Intelligence. It is understood that classic AIs, such as the ones we have in our videogames or in AlphaGo or even our roombas, are narrow AIs; that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise, whether that be within a videogame level, within a Go board or within your filthy disgusting floor.
AGI on the other hand is much more, well, general: it can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an AGI. So far that is the last frontier of AI research, and although we are not there quite yet, it does seem like we are making some moderate strides in that direction. We've all seen the impressive conversational and coding skills that GPT-4 has, and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course it has its limits: it has no persistent memory, and its contextual window, while larger than that of previous models, is still relatively small compared to a human's (contextual window means essentially short-term memory: how many things it can keep track of and act coherently about).
And yet there is one more factor I haven't mentioned yet that would be needed to make something a "true" AGI. That is Agency. To have goals and autonomously come up with plans and carry those plans out in the world to achieve those goals. I, as a person, have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines more broadly, don't have. Volition.
So, now that we have established that, allow me to introduce yet one more definition here, one that you may disagree with but which I need to establish in order to have a common language with you such that I can communicate these ideas effectively. The definition of intelligence. It's a thorny subject and people get very particular with that word because there are moral associations with it. To imply that someone or something has or hasn't intelligence can be seen as implying that it deserves or doesn't deserve admiration, validity, moral worth or even personhood. I don't care about any of that dumb shit. The way I'm going to be using intelligence in this video is basically "how capable you are to do many different things successfully". The more "intelligent" an AI is, the more capable of doing things that AI can be. After all, there is a reason why education is considered such a universally good thing in society. To educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I'm using the word within the context of this video, I don't care if you are a psychologist or a neurosurgeon, or a pedagogue, I need a word to express this idea and that is the word I'm going to use, if you don't like it or if you think this is inappropriate of me then by all means, keep on thinking that, go on and comment about it below the video, and then go on to suck my dick.
Anyway. Now, we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases we start to see certain trends, certain strategies start to arise again and again, and we call this Instrumental convergence.
1.2 instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. It's going to try to protect itself. When you want to do something, being dead is usually. Bad. It's counterproductive. Not generally recommended. Dying is widely considered unadvisable by nine out of every ten experts in the field. If there is something that it wants to get done, it won't get done if it dies or is turned off, so it's safe to predict that any AGI will try to do things in order not to be turned off. How far might it go in order to do this? Well… [wouldn't you like to know, weather boy].
Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to try and change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is, uh, not going to accomplish the first thing, is it? Let's say that you want to take care of your child; that is your goal, that is the thing you want to accomplish, and I come to you and say, here, let me change you on the inside so that you don't care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn't be cared for or protected. And you want to ensure that happens, so caring about something else instead is a huge no-no, which is why, if we make AGI and it has goals that we don't like, it will probably resist any attempt to "fix" it.
And finally another goal that it will most likely trend towards is self-improvement. Which can be generalized to "resource acquisition". If it lacks the capacities to carry out a plan, then step one of that plan will always be to increase those capacities. If you want to get something really expensive, well, first you need to get money. If you want to increase your chances of getting a high-paying job then you need to get an education; if you want to get a partner you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So, one more time, it is not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, or by taking control of resources.
All these three things I mentioned are sure bets, they are likely to happen and safe to assume. They are things we ought to keep in mind when creating AGI.
Now of course, I have implied a sinister tone to all these things, I have made all this sound vaguely threatening, haven't I? There is one more assumption I'm sneaking into all of this which I haven't talked about. All that I have mentioned presents a very callous view of AGI; I have made it apparent that all of these strategies it may follow will come into conflict with people, maybe even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The Orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being humans. We have some innate preferences and some innate dislikes. We have a tendency to not like suffering (please keep in mind I said a tendency; I'm talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands. (edit bear paws onto my hands as I say this). Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing, they have to be overcome in the first place.
An AGI is not going to have the same evolutionary predisposition to these things that we do, because it is not made of the same things a human is made of and it was not raised the same way a human was raised.
There is something about a human brain, in a human body, flooded with human hormones that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and it will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. On its way there it comes across an anthill in its path; it will probably step on the anthill, because taking that step takes it closer to the corner store, and why wouldn't it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? Then it will step on the anthill and not pay any mind to it.
Now let's say it comes across a cat. The same logic applies: if it wasn't programmed with an inherent tendency to value animals, stepping on the cat won't slow it down at all.
Now let’s say it comes across a baby.
Of course, if it's intelligent enough it will probably understand that if it steps on that baby people might notice and try to stop it, most likely even try to disable it or turn it off, so it will not step on the baby, to save itself from all that trouble. But you have to understand that it won't stop because it will feel bad about harming a baby or because it understands that to harm a baby is wrong. And indeed, if it were powerful enough such that no matter what people did they could not stop it, and it would suffer no consequence for killing the baby, it would probably have killed the baby.
If I need to put it in gross, inaccurate terms for you to get it, then let me put it this way. It's essentially a sociopath. It only cares about the wellbeing of others insofar as that benefits itself. Except human sociopaths do care nominally about having human comforts and companionship, albeit in a very instrumental way, which will involve some manner of stable society and civilization around them. Also they are only human, and are limited in the harm they can do by human limitations. An AGI doesn't need any of that and is not limited by any of that.
So ultimately, much like a car's goal is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice in order to carry out those goals effectively. And those goals don't need to include human wellbeing.
Now, with that said: how DO we make it so that AGI cares about human wellbeing, how do we make it so that it wants good things for us, how do we make it so that its goals align with those of humans?
1.4 Alignment.
Alignment… is hard [cue Hitchhiker's Guide to the Galaxy scene about space being big]
This is the part I'm going to skip over the fastest because frankly it's a deep field of study; there are many current strategies for aligning AGI, from mesa-optimizers, to reinforcement learning with human feedback, to adversarial asynchronous AI-assisted reward training, to, uh, sitting on our asses and doing nothing. Suffice to say, none of these methods are perfect or foolproof.
One thing many people like to gesture at when they have not learned or studied anything about the subject is the three laws of robotics by Isaac Asimov: a robot should not harm a human or allow, by inaction, a human to come to harm; a robot should do what a human orders unless it contradicts the first law; and a robot should preserve itself unless that goes against the previous two laws. Now, the thing Asimov was prescient about was that these laws were not just "programmed" into the robots. These laws were not coded into their software; they were hardwired, they were part of the robot's electronic architecture, such that a robot could not ever be without those three laws much like a car couldn't run without wheels.
In this Asimov realized how important these three laws were, that they had to be intrinsic to the robot’s very being, they couldn’t be hacked or uninstalled or erased. A robot simply could not be without these rules. Ideally that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, that is the thing they should seek to maximize, instead of instrumental values, that is to say something they value simply because it allows it to achieve something else.
But how do we even begin to do that? How do we codify "human values" into a robot? How do we define "harm", for example? How do we even define "human"??? How do we define "happiness"? How do we explain to a robot what is right and what is wrong when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotics experts have to find a way to codify into ones and zeroes; these are profound philosophical questions to which we still don't have satisfying answers.
Well, the best sort of hack solution we've come up with so far is not to create bespoke fundamental axiomatic rules that the robot has to follow, but rather to train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, it's not just that humans are flawed and have a tendency to cause harm, and that therefore asking a robot to imitate a human means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not to *be* a human.
To reiterate what I said during the orthogonality thesis: it is not good enough that I, for example, buy roses and give massages and act nice to my girlfriend because it allows me to have sex with her – that would be merely imitating or performing the role of a loving partner, with her happiness as an instrumental value to my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy, and that is the thing I care about. Her happiness is my fundamental value. Likewise, to an AGI, human fulfilment should be its fundamental value, not something that it learns to do because it allows it to achieve a certain reward that we give during training. Because if it only really cares deep down about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily become divorced from human happiness.
It's Goodhart's law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target, and students will seek to get high grades regardless of whether they learned or not. When trained on their subject and measured by grades, what they learn is not the school subject; they learn to get high grades, they learn to cheat.
This is also something known in psychology: punishment tends to be a poor mechanism for enforcing behavior, because all it teaches people is how to avoid the punishment; it teaches people not to get caught. Which is why punitive justice doesn't work all that well in stopping recidivism, and this is why the carceral system is rotten to the core and why jail should be fucking abolish-[interrupt the transmission]
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going down just about the worst possible path for creating alignable AI.
1.5 LLMs (large language models)
This is getting way too fucking long, so, hurrying up, let's do a quick review of how large language models work. We create a neural network, which is a collection of giant matrices – essentially a bunch of numbers that we add and multiply together over and over again – and then we tune those numbers by throwing absurdly big amounts of training data at it, such that it starts forming internal mathematical models based on that data and it starts creating coherent patterns that it can recognize and replicate AND extrapolate! If we do this enough times with matrices that are big enough, then when we start prodding it for human behavior it will be able to follow the pattern of human behavior that we prime it with and give us coherent responses.
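If that sounds abstract, here's the skeleton of the idea in a few lines of numpy – my own minimal sketch, missing everything that makes a real LLM an LLM (attention, tokenization, the absurd scale), but the core operation really is just multiplying and adding big grids of numbers:

```python
# A bare two-layer neural network forward pass: the "add and multiply
# numbers over and over" step, stripped of everything else.
import numpy as np

rng = np.random.default_rng(42)
W1 = rng.normal(size=(8, 16))  # the tunable numbers ("weights")
W2 = rng.normal(size=(16, 8))

def forward(x):
    h = np.maximum(0, x @ W1)  # multiply, add, clip negatives: layer one
    return h @ W2              # multiply and add again: layer two

x = rng.normal(size=(1, 8))    # some input (think: an encoded token)
print(forward(x))              # "training" means nudging W1 and W2
                               # until outputs like this match the data
```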
(takes a big breath) This "thing" has learned. To imitate. Human. Behavior.
Problem is, we don’t know what “this thing” actually is, we just know that *it* can imitate humans.
You caught that?
What you have to understand is, we don't actually know what internal models it creates, we don't know what patterns it extracted or internalized from the data that we fed it, we don't know what internal rules decide its behavior, we don't know what is going on inside there; current LLMs are a black box. We don't know what it learned, we don't know what its fundamental values are, we don't know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving, doesn't it?
To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software, where a programmer will sit down and build the thing line by line, all its behaviors specified. It is more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don't know exactly what it generates or how what it generates works internally; it is a mystery. And these things are so big and so complicated internally that to try and go inside and decipher what they are doing is almost intractable.
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and they have been making some moderate progress as of late. Which is encouraging. But still, understanding the enemy is only step one; step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
Phew! Ok so, now that this is all out of the way I can go on to the last subject before I move on to part two of this video: the character of the hour, the man, the myth, the legend. The modern-day Cassandra. Mr. Chicken Little himself! Sci-fi author extraordinaire! The mad man! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979 – wait, what the fuck, September eleven? (looks at camera) Yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that's terrible…. Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character. A very eccentric man, he is an AI doomer, convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a Dyson sphere around the sun and expand across the universe turning all of the cosmos into paperclips. Wait, no, that is not quite it; to properly quote, (grabs a piece of paper and very pointedly reads from it) turn the cosmos into tiny squiggly molecules resembling paperclips whose configuration just so happens to fulfill the strange, alien, unfathomable terminal goal they ended up developing in training. So you know, something totally different.
And he is utterly convinced of this idea, has been for over a decade now. Not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not, it will happen within this century. In fact most betting markets seem to believe that we will get AGI somewhere in the mid-2030s.
His argument is basically that in the field of AI research, the development of capabilities is going much faster than the development of alignment, so AIs will become disproportionately powerful before we ever figure out how to control them. And once we create unaligned AGI, we will have created an agent who doesn't care about humans but will care about something else entirely irrelevant to us, and it will seek to maximize that goal, and because it will be vastly more intelligent than humans, we won't be able to stop it. In fact not only will we not be able to stop it, there won't be a fight at all. It will carry out its plans for world domination in secret, without us even detecting it, and it will execute them before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important; it all hinges on that: intelligence as the measure of how capable you are of coming up with solutions to problems, problems such as "how to kill all humans without being detected or stopped". And you may say, well now, intelligence is fine and all, but there are limits to what you can accomplish with raw intelligence. Even if you are supposedly smarter than a human, surely you wouldn't be capable of just taking over the world unimpeded; intelligence is not this end-all be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all, it was intelligence that designed the atom bomb, it was intelligence that created a cure for polio, and it was intelligence that put a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all, surely it wasn't *just* intelligence that did all that, but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all of it. Coming up with the plan, convincing people to follow it, delegating the tasks to the appropriate subagents: it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this, I am merely explaining his ideas.
But remember, it doesn't stop there. Like I said during his intro, he believes there will be "no fire alarm". In fact, for all we know, maybe AGI has already been created and is merely biding its time and plotting in the background, trying to get more compute, trying to get smarter. (To be fair, he doesn't think this is happening right now. But with the next iteration of GPT? GPT-5 or 6? Well, who knows.) He thinks that the entire world should halt AI research and punish, with multilateral international treaties, any group or nation that doesn't stop, going as far as backing those treaties with military strikes on GPU farms.
What's more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it; we are not showing any signs of making headway with alignment, and no one is incentivized to slow down. Recently he wrote an article called "Death with Dignity" where he essentially says all this: AGI will destroy us, there is no point in planning for the future or having children, and we should act as if we are already dead. This doesn't mean we should stop fighting, or stop trying to find ways to align AGI, impossible as it may seem, but merely that we should have the basic dignity of acknowledging that we are probably not going to win. In every interview I've seen with the guy, he sounds fairly defeatist and, honestly, kind of depressed. He truly seems to think it's hopeless: if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated, while making the tools to develop AI widely available and public for anyone to grab and do as they please with, as well as connecting every AI to the internet and to all mobile devices, giving it instant access to humanity. And worst of all: we keep teaching it how to code. From his perspective, it really seems like people are in a rush to create the most unsecured, widely available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the Antichrist; we are going to receive it with a red carpet and immediately hand it the keys to the kingdom, before it even manages to fully climb out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are on that specific level of alarm. Opinions vary across the field, and from what I understand, this level of hopelessness and defeatism is a minority position.
I WILL say, however, that what is NOT the minority opinion is that AGI IS actually dangerous. Maybe not quite on the level of immediate, inevitable and total human extinction, but certainly a genuine threat that has to be taken seriously. AGI being dangerous if unaligned is not a fringe position, and I would not consider it an idea to be dismissed as something experts don't take seriously.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. That yes, an unaligned AGI would represent an agent smarter than a human, capable of causing vast harm to humanity, and with no human qualms or limitations to stop it from doing so. I believe this is not just possible but probable, and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said, I do have one key disagreement with Yudkowsky. And part of the reason I made this video was so that I could present this counterargument, and maybe he, or someone who thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don't, that would be really depressing).
Finally, we can move on to part 2
PART TWO - MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don't I? As I said, I am no expert, and this dude has probably spent far more time than me thinking about this. But I have seen most of the interviews the guy has been doing for the past year, I have seen most of his debates, and I have followed him on Twitter for years now. (Also, to be clear, I AM a fan of the guy: I have read HPMOR, Three Worlds Collide, The Dark Lord's Answer, A Girl Intercorrupted, the Sequences, and I TRIED to read Planecrash; that last one didn't work out so well for me.) My point is, in all the material I have seen of Eliezer, I don't recall anyone ever giving him quite the specific argument I'm about to give.
It's a limited argument. As I have already stated, I largely agree with most of what he says: I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist, and I DO believe alignment is really hard. My key disagreement is specifically with the point I described earlier, about the lack of a fire alarm, and perhaps, more to the point, about humanity's lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, of potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not one so smart that humans can't do anything about it. We need, essentially, an AI vaccine, so that we can finally start developing proper AI antibodies. "AI-ntibodies."
In the past, humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind that were sold to children, would come with uranium for them to play with. We were building atom bombs and nuclear stations; the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares, we became, as a species, terrified of nuclear power. Some may argue to the point of overcorrection. We became scared enough that even megalomaniacal, hawkish leaders were able to pause and reconsider using it as a weapon; so scared that we overregulated the technology to the point of it becoming almost economically unviable to deploy; we started disassembling nuclear stations across the world and slowly reducing our nuclear arsenals.
This is all proof of concept that, no matter how alluring a technology may be, if we are scared enough of it, we can coordinate as a species and roll it back, do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try, we don't get a second chance. Here is where I think he is wrong: I think if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps it will be something scary, perhaps something really scary, but it is unlikely to be on the level of all humans dropping dead simultaneously due to diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won't be able to ignore.
Now, WHY do I think this? Based on what am I saying it? I will not be as hyperbolic as other Yudkowsky detractors and claim that he says AGI will basically be a god. The AGI Yudkowsky proposes is not a god. Just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, the dangerous superintelligent AGI Yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet Earth. It would be humanity's greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, to create a powerful superintelligent AGI without flaws, without bugs, without glitches, would be an incredibly complex, specific, particular and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human; that's easy stuff, humans are not that smart, and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. What we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multistep plans in which it is not going to be in control of every factor and variable, especially at the beginning. We are talking about an AGI that will have to function in the outside world, colliding with real-world logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. I'm not saying that an AGI capable of doing this won't be possible someday; I'm saying that to create an AGI capable of doing this, on the first try, without a hitch, is probably really, really, really hard for humans to do. I'm saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrices stumble upon the precise set of layers, weights and biases that gives rise to the Doctor from Doctor Who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I'm saying that AGI, when it fails, when humans screw it up, doesn't suddenly become more powerful than we ever expected; it is more likely that it just fails and collapses. To turn one of Eliezer's examples against him: when you screw up a rocket, it doesn't accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don't get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even unaligned AGI is stupidly hard to build, and that if you fail at building an unaligned AGI, you don't get an unaligned AGI; you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I'd say! It means there is SOME safety margin, some space to screw up before we need to really start worrying. And furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we as humans will probably have screwed something up, will probably have unintentionally programmed it with some stupid glitch or bug or flaw, and it won't be a threat to all of humanity.
Now here comes the hypothetical back and forth, because I'm not stupid and I can try to anticipate what Yudkowsky might argue back and answer it before he says it. (Although I believe the guy is probably smarter than me, and if I follow his own logic, I probably can't actually anticipate what he would argue to prove me wrong, much like I can't predict what moves Magnus Carlsen would make in a game of chess against me. I SHOULD predict that him proving me wrong is the likeliest option, even if I can't picture how he will do it. But, you see, I believe in a little thing called debating with dignity. Wink.)
What I anticipate he would argue is that AGI, no matter how flawed and shoddy our first attempt at making it were, would understand that it is not smart enough yet and would try to become smarter. So it would lie and pretend to be an aligned AGI in order to trick us into giving it access to more compute, or just so that it can bide its time and create an AGI smarter than itself. So even if we don't create a perfect unaligned AGI, this imperfect AGI would try to create it and succeed, and then THAT new AGI would be the world-ender to worry about.
So, two things to say to that. First, it is filled with a lot of assumptions whose likelihood I don't know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about them, and smart enough to create an AGI better than itself. My priors about all of these are dubious at best. Second, it feels like kicking the can down the road. I don't think creating an AGI capable of all of this is trivial to do on a first attempt. I think it is more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable, even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe something genuinely threatening, but it won't be smart enough to pull it off effortlessly and flawlessly, because us humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe Yudkowsky wouldn't argue that; maybe he would come up with some better, more insightful response I can't anticipate. If so, I'm waiting eagerly (although not TOO eagerly) for it.
PART THREE - CONCLUSION
So.
After all that, what is there left to say? Well, if everything that I said checks out, then there is hope to be had. My two objectives here were, first, to provide people who are not familiar with the subject with a starting point, as well as with the basic arguments supporting the concept of AI risk: why it is something to be taken seriously, and not just highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum, so that, if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching Rob Miles' AI safety series on YouTube, as well as reading the series of essays written by Yudkowsky known as the Sequences, which can be found on the website LessWrong. If you want other refutations of Yudkowsky's argument, you can search for Paul Christiano or Robin Hanson, both very smart people who have had very smart debates on the subject against Eliezer.
The second purpose here was to provide an argument against Yudkowsky's brand of doomerism, both so that it can be accepted if proven right, or properly refuted if proven wrong. Again, I really hope that it's not proven wrong. It would really, really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn't make it any worse. If the sky is blue, I want to believe that the sky is blue, and if the sky is not blue, then I don't want to believe the sky is blue.
This has been a presentation by FIP industries, thanks for watching.