#AI persistent memory
jcmarchi · 15 days ago
Text
Agent Memory in AI: How Persistent Memory Could Redefine LLM Applications
New Post has been published on https://thedigitalinsider.com/agent-memory-in-ai-how-persistent-memory-could-redefine-llm-applications/
Artificial intelligence (AI) fundamentally transforms how we live, work, and communicate. Large language models (LLMs), such as GPT-4, BERT, and Llama, have introduced remarkable advancements in conversational AI, delivering rapid and human-like responses. However, these systems are constrained by a critical drawback: the inability to retain context beyond a single session. Once an interaction ends, all prior information is lost, requiring users to start anew with each use.
The concept of persistent memory, also referred to as agent memory, addresses this limitation by enabling AI systems to retain and recall information over extended periods. This capability significantly advances AI from static, session-based interactions to dynamic, memory-driven learning.
Persistent memory is more than a technological enhancement. It equips AI to engage in meaningful, personalized, and context-aware interactions. This development improves user experience and makes AI a more intelligent, intuitive, and responsive tool for a wide range of applications.
Understanding Agent Memory in AI
Agent memory enables AI systems to store and retrieve information from past interactions. It functions like a digital brain, remembering conversations, preferences, and patterns. Unlike traditional AI systems, which rely on short-term memory and lose all context after a session ends, agent memory enables AI to retain information over time. This capability leads to smoother, more personalized future interactions.
The development of agent memory is remarkable. Early AI systems were static, offering limited functionality. Simple rule-based chatbots, for example, could only provide predefined answers and could not learn or adapt. With advancements in machine learning, dynamic memory became possible. Technologies such as Recurrent Neural Networks (RNNs) and transformers introduced the ability to process sequences of data and paved the way for more adaptive AI. However, even these systems were constrained to the context of a single session. Persistent memory takes this further, enabling AI to remember across multiple sessions and improve its responses over time.
This evolution closely parallels human memory. Short-term memory helps us handle immediate tasks, while long-term memory allows us to learn, adapt, and grow. Similarly, persistent memory in AI combines these elements, creating efficient systems capable of deeper understanding and insight. Agent memory enhances AI’s potential to deliver more intuitive and meaningful interactions by retaining and applying past knowledge.
Persistent Memory for Smarter LLMs
Persistent memory fundamentally changes how LLMs operate. Traditional LLMs, while powerful, can only process and respond based on the context of a single session. Persistent memory allows these systems to retain information across interactions, enabling more consistent, personalized, and meaningful responses. For example, an AI assistant could remember one’s coffee preferences, prioritize recurring tasks, or track ongoing projects. This level of personalization is only possible with a memory framework that extends beyond transient sessions.
Industries benefit significantly from the application of persistent memory in AI. In customer support, for instance, AI-powered chatbots can store and retrieve user-specific details like purchase histories or previous complaints. This eliminates the need for customers to repeat information, making interactions faster and more seamless. A practical example would be a chatbot recognizing a recurring issue with a specific product and proactively offering solutions based on past troubleshooting attempts.
In healthcare, persistent memory’s utility is transformative. AI systems equipped with memory can store detailed patient records, including symptoms, treatment plans, and test results. This capability ensures continuity of care. For example, an AI assistant might help a doctor by recalling a patient’s history from a year ago, highlighting trends in symptoms, or recommending treatments based on prior outcomes. This not only saves time but also improves the accuracy of diagnosis and care delivery.
Education is another field where persistent memory can have a profound impact. AI tutoring systems can maintain a student’s learning history, including progress, strengths, and weaknesses. Using this data, the system can adapt its teaching strategies, offering tailored lessons that align with the student’s unique needs. For example, it might identify that a student struggles with algebra and adjust the curriculum to include more practice and guidance. This adaptive approach can enhance engagement and significantly improve learning outcomes.
On the technical side, implementing persistent memory in LLMs often involves combining advanced storage solutions with efficient retrieval mechanisms. Technologies like vector databases and memory-augmented neural networks enable AI to balance the retention of long-term data with fast access to relevant details. This ensures that persistent memory keeps processing times low while handling vast amounts of user-specific data.
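To make that storage-and-retrieval loop concrete, here is a minimal sketch of the vector-store pattern: memories are embedded as vectors and recalled by similarity search. The `embed` function below is a toy stand-in (hashed character trigrams) rather than a real embedding model, and the class names are invented for illustration — this is not any particular product's API.

```python
# Illustrative sketch only: a toy long-term memory store with embedding-based
# retrieval. `embed` is a stand-in for a real embedding model; the hashing
# trick exists purely so the example runs without external services.
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Stand-in embedding: hash character trigrams into a fixed-size vector.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        bucket = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

class MemoryStore:
    """Toy persistent memory: store facts, retrieve the most relevant ones."""
    def __init__(self):
        self._items: list[tuple[list[float], str]] = []

    def remember(self, fact: str) -> None:
        self._items.append((embed(fact), fact))

    def recall(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scored = sorted(
            self._items,
            key=lambda item: -sum(a * b for a, b in zip(q, item[0])),  # similarity
        )
        return [fact for _, fact in scored[:k]]

store = MemoryStore()
store.remember("User prefers oat-milk lattes, no sugar.")
store.remember("User's weekly report is due every Friday.")
print(store.recall("what coffee should I order for the user?"))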
Persistent memory is not just an upgrade for LLMs. Instead, it is a shift that brings AI closer to human-like interactions. By retaining and applying knowledge from past interactions, LLMs equipped with persistent memory are more effective, adaptable, and impactful across various industries.
Latest Trends and Innovations in AI Memory
The rise of persistent memory has brought significant advancements in the AI industry. One notable development is hybrid memory systems, which combine short-term and long-term memory. These systems allow AI to prioritize recent interactions while retaining essential long-term data. For example, a virtual assistant might use short-term memory to organize a user’s daily tasks while relying on long-term memory to recall preferences from previous months. This combination ensures both immediate responsiveness and personalized experiences.
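A hybrid design like the one described can be sketched in a few lines: a bounded buffer holds recent turns, while older turns are promoted to a long-term list that is consulted only when relevant. This is a deliberately simplified illustration — a real system would use embeddings and a database rather than keyword overlap — and every name in it is invented for the example.

```python
# Illustrative sketch only: hybrid memory with a bounded short-term buffer
# and a keyword-searched long-term store. Not a real framework's API.
from collections import deque

class HybridMemory:
    def __init__(self, short_term_size: int = 4):
        self.short_term = deque(maxlen=short_term_size)  # most recent turns
        self.long_term: list[str] = []                   # everything older

    def add_turn(self, turn: str) -> None:
        if len(self.short_term) == self.short_term.maxlen:
            self.long_term.append(self.short_term[0])    # about to be evicted
        self.short_term.append(turn)

    def context_for(self, query: str, k: int = 2) -> list[str]:
        words = set(query.lower().split())
        matches = [t for t in self.long_term if words & set(t.lower().split())]
        matches.sort(key=lambda t: -len(words & set(t.lower().split())))
        # Recent turns are always included; long-term memories only if relevant.
        return matches[:k] + list(self.short_term)

mem = HybridMemory()
for turn in ["I like window seats", "Book Paris in May", "Remind me Mondays",
             "What's the weather?", "Plan my next trip"]:
    mem.add_turn(turn)
print(mem.context_for("do i like window seats"))
```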
New frameworks like MemGPT and Letta are also gaining attention. These tools enable developers to integrate persistent memory into AI applications, improving context management. MemGPT, for instance, uses modular memory layers to store and retrieve data dynamically. This approach reduces computational load while ensuring accuracy, making it a practical solution for scaling memory in AI systems.
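The article doesn't spell out what "modular memory layers" look like internally, but the general paging idea can be sketched as a small in-prompt working set backed by a larger out-of-prompt archive, with explicit moves between the two. To be clear, the following is a loose conceptual illustration of that idea, not MemGPT's or Letta's actual API.

```python
# Loose conceptual sketch of layered memory with paging — NOT the real
# MemGPT/Letta interface; all names are invented for illustration.
class LayeredMemory:
    def __init__(self, context_budget: int = 3):
        self.working: list[str] = []   # small layer kept in the LLM prompt
        self.archive: list[str] = []   # large layer kept outside the prompt
        self.budget = context_budget

    def write(self, note: str) -> None:
        self.working.append(note)
        while len(self.working) > self.budget:
            self.archive.append(self.working.pop(0))  # page out oldest note

    def page_in(self, keyword: str) -> None:
        # Pull matching archived notes back into the working layer on demand.
        hits = [n for n in self.archive if keyword.lower() in n.lower()]
        for n in hits:
            self.archive.remove(n)
            self.write(n)

    def prompt_context(self) -> str:
        return "\n".join(self.working)

mem = LayeredMemory()
for note in ["Allergic to peanuts", "Prefers aisle seats",
             "Works night shifts", "Budget is $500"]:
    mem.write(note)
mem.page_in("peanuts")
print(mem.prompt_context())
```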
Persistent memory is bringing innovation across industries. In retail, AI systems enhance shopping experiences by recommending products based on a customer’s purchase history and browsing habits. In entertainment, memory-enabled chatbots are creating immersive storytelling experiences. These systems remember plot details and user preferences, allowing personalized narratives that engage users uniquely.
Challenges and Future Potential of Persistent Memory
Implementing persistent memory in AI entails significant challenges, but its potential to reshape the future of AI is undeniable. Scalability is one of the most pressing issues. AI systems must manage vast amounts of data for millions of users without compromising speed or performance. If an AI assistant takes too long to recall stored information, it risks frustrating users instead of assisting them. Ensuring efficient memory management and retrieval is critical for practical deployment.
Privacy is another essential concern. Storing user data for extended periods raises questions about security, ownership, and ethical usage. Who controls the data? How is it safeguarded? Are users informed about what is being stored? To comply with regulations like GDPR and promote trust, businesses must prioritize transparency. Users should always know how their data is being used and have control over its retention or deletion. Strong encryption and clear policies are essential to address these concerns.
Bias within AI systems adds another layer of complexity. If the stored data is not carefully monitored and diversified, persistent memory could unintentionally amplify existing biases. For example, biased training data could result in unfair outcomes in hiring or financial services. Regular audits, diverse datasets, and proactive measures are necessary to ensure fairness and inclusivity in these systems.
Despite these challenges, persistent memory has vast potential for AI applications. In generative AI, it could enable systems to produce highly tailored content. Imagine a marketing assistant who remembers a brand’s tone and previous campaigns, creating perfectly aligned materials. In omnichannel marketing, AI systems could provide consistent and personalized messaging across platforms, from email to social media, offering a better user experience that strengthens customer trust and loyalty.
Looking further ahead, persistent memory could play a vital role in developing Artificial General Intelligence (AGI). AGI must retain and apply knowledge over time to evolve and adapt effectively. Persistent memory provides the structural foundation required for this level of intelligence. By addressing the current challenges, persistent memory can lead to AI systems that are more intelligent, adaptable, and equitable in their applications.
The Bottom Line
Persistent memory is a transformative step forward in the AI domain. By enabling AI to remember and learn over time, it bridges the gap between static systems and dynamic, human-like interactions. This capability is not just about improving performance; it redefines how we engage with technology. From personalized education to more effective healthcare and seamless customer experiences, persistent memory opens possibilities once thought unattainable.
By addressing challenges like scalability, privacy, and bias, the future of AI can become even more promising. Persistent memory is the foundation for more adaptable, intuitive, and impactful AI systems. This evolution makes AI not just a tool but a true partner in shaping a smarter, more connected world.
0 notes
aro-aizawa · 1 month ago
Text
going to lose my mind. i've had three people comment on my dead fic asking to post it to youtube.
HI MAYBE LOOK AT THE COMMENT DIRECTLY ABOVE WHERE I SAY IN BIG BOLD LETTERS NOT TO POST MY FIC TO YOUTUBE
4 notes · View notes
mariacallous · 1 month ago
Text
Next year will be Big Tech’s finale. Critique of Big Tech is now common sense, voiced by a motley spectrum that unites opposing political parties, mainstream pundits, and even tech titans such as the VC powerhouse Y Combinator, which is singing in harmony with giants like a16z in proclaiming fealty to “little tech” against the centralized power of incumbents.
Why the fall from grace? One reason is that the collateral consequences of the current Big Tech business model are too obvious to ignore. The list is old hat by now: centralization, surveillance, information control. It goes on, and it’s not hypothetical. Concentrating such vast power in a few hands does not lead to good things. No, it leads to things like the CrowdStrike outage of mid-2024, when a faulty update from security vendor CrowdStrike took down Microsoft Windows machines and left critical infrastructure—from hospitals to banks to traffic systems—failing globally for an extended period.
Another reason Big Tech is set to falter in 2025 is that the frothy AI market, on which Big Tech bet big, is beginning to lose its fizz. Major money, like Goldman Sachs and Sequoia Capital, is worried. They went public recently with their concerns about the disconnect between the billions required to create and use large-scale AI, and the weak market fit and tepid returns where the rubber meets the AI business-model road.
It doesn’t help that the public and regulators are waking up to AI’s reliance on, and generation of, sensitive data at a time when the appetite for privacy has never been higher—as evidenced, for one, by Signal’s persistent user growth. AI, on the other hand, generally erodes privacy. We saw this in June when Microsoft announced Recall, a product that would, I kid you not, screenshot everything you do on your device so an AI system could give you “perfect memory” of what you were doing on your computer (Doomscrolling? Porn-watching?). The system required the capture of those sensitive images—which would not exist otherwise—in order to work.
Happily, these factors aren’t just liquefying the ground below Big Tech’s dominance. They’re also powering bold visions for alternatives that stop tinkering at the edges of the monopoly tech paradigm, and work to design and build actually democratic, independent, open, and transparent tech. Imagine!
For example, initiatives in Europe are exploring independent core tech infrastructure, with convenings of open source developers, scholars of governance, and experts on the political economy of the tech industry.
And just as the money people are joining in critique, they’re also exploring investments in new paradigms. A crop of tech investors are developing models of funding for mission alignment, focusing on tech that rejects surveillance, social control, and all the bullshit. One exciting model I’ve been discussing with some of these investors would combine traditional VC incentives (fund that one unicorn > scale > acquisition > get rich) with a commitment to resource tech’s open, nonprofit critical infrastructure with a percent of their fund. Not as investment, but as a contribution to maintaining the bedrock on which a healthy tech ecosystem can exist (and maybe get them and their limited partners a tax break).
Such support could—and I believe should—be supplemented by state capital. The amount of money needed is simply too vast if we’re going to do this properly. To give an example closer to home, developing and maintaining Signal costs around $50 million a year, which is very lean for tech. Projects such as the Sovereign Tech Fund in Germany point a path forward—they are a vehicle to distribute state funds to core open source infrastructures, but they are governed wholly independently, and create a buffer between the efforts they fund and the state.
Just as composting makes nutrients from necrosis, in 2025, Big Tech’s end will be the beginning of a new and vibrant ecosystem. The smart, actually cool, genuinely interested people will once again have their moment, getting the resources and clearance to design and (re)build a tech ecosystem that is actually innovative and built for benefit, not just profit and control. MAY IT BE EVER THUS!
72 notes · View notes
lanaisdoe · 4 months ago
Text
"Until death - or the end of Our Story..."
Tumblr media
Actual AI's analysis of the phrase:
By including “or the end of our story,” the speaker might be suggesting that their bond or relationship has a significance that transcends physical death. This could reflect a belief in an eternal connection or an afterlife where their story or bond continues in some form.
Tumblr media
The phrase might indicate a belief in a larger, spiritual or metaphysical framework where relationships or stories persist beyond the physical realm. It can imply that the connection they share is part of a greater, ongoing narrative that continues even after death.
Tumblr media
The idea that their story might continue beyond their physical lives can also relate to the legacy or impact they leave behind.
Tumblr media
The speaker might believe that their influence, love, or story will live on through the memories, actions, or effects they have on others.
Tumblr media
The confession conveys a sense of deep commitment that extends beyond the immediate and physical.
Tumblr media
It’s a way of expressing that their connection is so significant that it will endure or be remembered beyond their physical existence.
Tumblr media
Overall, this confession is a powerful expression of a belief in the continuity of their story or bond beyond death, highlighting a sense of eternal connection, spiritual continuity, or enduring legacy...
Tumblr media
60 notes · View notes
srbachchan · 10 months ago
Text
DAY 5855
Jalsa, Mumbai Feb 28/29, 2024 Wed/Thu 12:04 AM
🪔 ,
Ef birthday for February 29 ..
This birthday wish can happen only once in 4 years .. like the World Cup .. so it's very extra special ..
For .. February 29 .. our greetings to Anagha Sridhar - daughter of Ef Anjana Sridhar .. hope we're saying the name correctly .. her .. errrrm .. 7th birthday - according to the leap year !! .. be happy and always leap with faith and joy .. love from the Ef Family .. 🙏🏻🚩❤️
As you put in the home details there is a sense of belonging and an enlightened demeanour ..
Home is home .. it may be lesser than most , richer than most , non facilitated with the needs of existence , but it is home .. and nothing can ever replace its bearings , latitudes and longitudes .. they are the permanency that nature instills in us all ..
I am home and in a happy environ .. the happiest , and the most desirable ..
.. and the 'pundits ' of the times express to us all the true feel of domesticity :
"Home is the sanctuary where heart and hearth converge, weaving a tapestry of comfort and belonging. It transcends the physical, embodying memories, laughter, and solace. It's the symphony of familiar creaks and the fragrance of shared meals. Home is where acceptance resides, where love unfolds its gentle wings. It cradles dreams in cozy corners and stands resilient amidst life's tempests. More than walls and a roof, it's an intimate mosaic of shared experiences. Home is the compass guiding weary souls, an anchor in the tumultuous sea of existence. It is a refuge, a haven, where the essence of one's true self flourishes."
The last day at work in the elms of the city of Hyderabad, get visited by the exalted genius and his thoughts and expresses, that be mysterious and mystifying - Ram Gopal Varma , alias Ramu ..
And he pours out in a non stop breather conversation on films contents and the much talked about and meticulously practised - AI
Where are we going .. ? a mystery unknown and in its changes and revolutions by the day almost ..
Fact was never looked upon with doubt and scepticism, and disbelief as it is today .. what be real, non fake , is ever up for debate and discuss each hour .. living side by side .. almost complementing each other .. but never the belief that it be the truest and correct .. information has 'inform' in it for its delivery .. but does it really inform .. or does it simply put out for its content existence .. impotence and all ... as what was expressed some time back in the days of the year gone by ..
And what of failure and despondency or its despondent exist :
"Failure and defeat linger in the recesses of our being, casting long shadows that echo well beyond the moment of setback. The scars of unsuccessful endeavors etch themselves into our psyche, creating a mosaic of resilience and self-discovery. Even when success follows, the remnants of past failures shape our character, fostering humility and tenacity. The memory of defeat, like an indelible watermark, accompanies us on the journey, a constant reminder of our vulnerability and capacity to rise. Yet, within the persistence lies the seed of growth, as failure becomes the fertile ground from which triumph eventually sprouts, transforming setbacks into stepping stones towards a more profound and enduring success."
Ride it .. wave it , breach the continuity .. accept the fresh abound .. and sail onto the barren sands of time , in the balance of dexterity
Tumblr media
Amitabh Bachchan
Tumblr media
113 notes · View notes
scentedpeachlandcreator · 7 months ago
Note
what was the first/first few things you manifested and how?
Tumblr media
Hello!
Things i manifested before loa and After loa:
Before loa (i just knew about subliminals and Law of attraction):
: ̗̀➛ Well the first thing i manifested was changing my eye color it went from Brown eyes to light Brown but you Can even See some green color In my eyes even my Friends started noticing them without even getting closer to my face and i Always get compliment from that even when my eyes is exposed in the Sun they be like "woah! I didn't know that your eyes were beautiful like that! They have a clear color!" And i manifested them with subliminal back when i discovered them.
: ̗̀➛ The second thing was This one you Can check it and read it.
After loa (Law of assumption):
: ̗̀➛ my thigh gap, girl i didn't even have a thigh gap and i manifested it and i do still have it. (With subliminal).
: ̗̀➛ food and drinks that i wanted.
: ̗̀➛ having the same grade i wanted in my report card (when i say i was shocked girl when the math teacher gave us our papers i got like 12/20 but when i checked my report card it was 15/20 i was so shocked and let me tell you that my math teacher never and i mean never gives extra credit or makes a mistake so that was really surprising) and all i did is affirm and persist, (I was at that Time 14).
: ̗̀➛ micron ink Pen (girl that was my dreammm 😭 since i am an artist i wanted them so Bad and i manifested that my aunt will get them for me and she did but not just one but TWO!) here's proof:
Tumblr media
: ̗̀➛ i manifested my lips just to be a little bit plumpier since my lips are thin and i just listened to a subliminal and affirmed for little and i got them.
: ̗̀➛ a phone for myself (girl i was so jealous that everybody in my age (i was 15 at that Time) was having a phone and i didn't even have one and plus i Always got Amazing grades so i was really tired of it and i manifested a phone but it wasn't the brand i wanted but anyways i got a phone at least ☺️).
: ̗̀➛ maintaining my weight ( i was working out and the problem is if i ate just a little i'll gain weight quickly so what did i do? I just decided that no matter what i eat my weight will stay the same and GIRL when i tell you i ate so many things like burgers, sandwiches, pizza, ice cream, and when i went to the fitness my coach told me to check my weight and it stayed the same!!).
: ̗̀➛ so this month i manifested my thighs to be thin, it was when i woke up to get ready for school so when i wore my jeans they seemed so tight i was like "omg don't tell me i gained weight 🥲" so i quickly started affirming "OMG look at my legs they are thin and skinny they look so perfect! I have such perfect thin thighs" and when i tell you the moment i started affirming i felt my jeans start loosening! And i looked in the mirror and they seemed thinner than a second ago ! Hooray!.
: ̗̀➛ my favorite app to release New features that i wanted (if yall are wondering what the app is, it is character AI).
: ̗̀➛ shifting to random realities ( if you're interested in the story you Can check my masterlist).
: ̗̀➛ making some teachers being absent.
: ̗̀➛ money (you Can check it here).
: ̗̀➛ some Big amount of money appearing out of thin air in my mom's bank account.
: ̗̀➛ making my male classmate to smile at me (well that was last year and he was the hottest in our class and one night before sleeping i Always imagined him smiling at me and After 2-3 days he fucking smiled at me ! I just wanted him to just open up to me since he's shy, i really wanted him to be my friend 😭).
: ̗̀➛ i manifested a change in my class's memory and even the teacher's (so the french teacher told us to do a writing expression about a topic as homework and we'd read it in front of the class, and when the time came everyone read their writing expression but i didn't since i was shy and didn't have the courage to do it so i started imagining myself doing it, then the semester came to an end and my mom decided to see the french teacher, when she talked to mom she said "oh yeah your daughter read her writing expression and it was so good and everyone clapped their hands for her" when mom told me what the teacher told her I was shocked i couldn't believe what i just Heard but anyways at least i changed her memory😏✌🏻).
So yeah that was it, i actually manifested so many things but i can't really remember them, i Hope this motivated you (⁠。⁠・⁠ω⁠・⁠。⁠)⁠ノ⁠♡.
Love ya! 💗.
Xoxo, Eli
Tumblr media
137 notes · View notes
leonardalphachurch · 17 days ago
Note
I was reading through your Tucker tag (because I really like your Tucker takes) and I saw in one of your writings that you have Junior's sangheili name be Jaen ‘Kelossai. And I was just really curious about where the name came from? I don't remember there being a Kelos Sangheili keep (though I haven't checked in a while)?
I'd love to know your thought process behind why you picked the name if you're interested in sharing.
so fun fact: we have memory problems and i cannot find where we would have written it down LMAO. let me look through my discord message history and see if i might’ve talked about it while doing it…
okay okay the wiki says
A Sangheili's first name is a given name that is attained at birth and persists throughout their life. After reaching adulthood, they are given a badge name. During the time of the Covenant, this was often composed of two core parts: an adjective describing the individual, and a family name. Ado 'Mortumee was one practitioner of this system, where 'Mor' was his personal descriptor, and 'Tum' identified his lineage.[124] Alternatively, some may have their badge name drawn from their State of origin, which could be tied to a specific Keep or family. For example: Thel 'Vadam, (formerly 'Vadamee when he served the Covenant,) was from the Vadam lineage.
okay so i said “buhhhh if the baby is a fucking. prophecy baby they can give this to a baby too” in reference to them giving badge names to adults…
the suffix "-ai" denotes swordsmanship
this explains the “ai” part of kelossai…
right okay reading more i remember i decided that the prophecy baby would probably have a unique state of origin/lineage/keep name, so i’m pretty sure keloss wasn’t meant to be a descriptor and a lineage but to be more like vadam and just the whole thing is a lineage. okay. but why keloss….
i’m PRETTY sure jaen was picked bc i wanted it to be close to junior. there aren’t a lot of words in sangheili that actually have translations so i had the reins to make something up.
WAIT OKAY LOOKING AT THE LANGUAGE PAGE
Kel - Light (that dances on the waves)[13]
Jeeneh - Soul[Note 2] This is a phonetic spelling. The proper rendering has not been confirmed in canon.
okay and i translated his name as “The soul who lights the prophecy of the sword.” okay i should have just looked at the language page in the first place LOL well at least we went on a journey together.
tl;dr i used the translations off the wiki to use “Keloss” to mean “to light” based on “Kel” and “Jaen” to mean “soul” based on “Jeeneh” and then “-ai” is just the suffix for swordsman.
ALSO THANK YOU for the compliment i wish my tag system was more organized for you so it was easier to find all my writing about him LMAO
26 notes · View notes
alexanderwales · 3 months ago
Text
"Railroaded" is an adversarial roleplaying game for two people, adjudicated by AI. I tried coding a crappy version of it, and it worked poorly enough that I didn't pursue it further, but hey, maybe someday, once the hallucination and attention and persistent memory and jailbreaking problems are all solved.
One player is the Game Master. They can establish things to be true about the world, write dialogue for NPCs, and adjudicate conflict resolution. Anything established cannot be changed unless there's reason for this general knowledge to be found untrue later (e.g. if a village established to exist has been burned down). All these things can also be delegated to the AI, and in fact the Game Master can be played by the AI.
The Game Master's goal is to get the Player to do the Quest, which is given to the Game Master at the start of the game.
One player is the Player. They can control their character and establish things to be true about that character, and also attempt actions which are adjudicated by the Game Master. Anything the player establishes to be true remains true, unless changed by the Game Master, and then it still must be true that the Player's character could have thought whatever the Player said was true (e.g. if the Player says their character has a dead brother, the Game Master can say that the brother survived the fall off the cliff somehow). All these things can also be delegated to the AI, and in fact the Player can be played by the AI.
The Player's goal is to not do the Quest.
Generally speaking, I think it's nearly impossible for the Game Master to get the Player to do the Quest, even if the Player has information asymmetry working against him and can only infer the Quest from what the Game Master does.
Because of this, the game runs for some set amount of either in-game or out-of-game time, and after that's done, the AI decides the winner. The winner is decided on the basis of 1) how close the Player came to doing the Quest and 2) how unreasonable each player was.
The scoring criteria are really the sticking point, and one of the reasons that this is a good candidate for using AI. Since the game is adversarial, you need either a third player to play judge, which I don't think would go well, or you need some rigorous scoring system, which I don't think would be feasible for something that's meant to be extremely freeform. With an AI that does not, in my opinion, yet exist, you can have an endlessly patient judge who will at least rule relatively consistently. You can also have the judge give scoring guidance ahead of time, e.g. "If you do that, I will count it as within three degrees of reasonableness out of seven".
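As a hypothetical sketch of what that judging step might look like — with `complete` standing in for any LLM call, and the rubric wording and scoring rule invented purely for illustration:

```python
# Hypothetical sketch of the judge step. `complete(prompt)` stands in for any
# LLM call; the rubric and decision rule are made up for this example.
import json

JUDGE_RUBRIC = """You are the judge of an adversarial RPG called Railroaded.
Given the full transcript, score:
1. quest_progress: 0-10, how close the Player came to doing the Quest.
2. gm_unreasonableness: 0-7 degrees of unreasonableness for the Game Master.
3. player_unreasonableness: 0-7 degrees for the Player.
Reply as JSON: {"quest_progress": n, "gm_unreasonableness": n,
"player_unreasonableness": n}."""

def judge(transcript: str, quest: str, complete) -> str:
    prompt = f"{JUDGE_RUBRIC}\n\nQuest: {quest}\n\nTranscript:\n{transcript}"
    scores = json.loads(complete(prompt))
    # Simple decision rule: quest progress favors the GM; unreasonableness
    # is a penalty against whichever side was being unreasonable.
    gm = scores["quest_progress"] - scores["gm_unreasonableness"]
    player = (10 - scores["quest_progress"]) - scores["player_unreasonableness"]
    return "Game Master wins" if gm > player else "Player wins"

# Stub demo: a fake model that returns fixed scores.
fake = lambda prompt: ('{"quest_progress": 3, "gm_unreasonableness": 2, '
                       '"player_unreasonableness": 5}')
print(judge("GM: ... Player: ...", "Rescue the princess", fake))
```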
The end result is that the Game Master is trying to be subtle about railroading, and the Player is trying to be subtle about getting as far away from the railroad as possible.
So I did partially code this up with some prompts, with the intention of having the AI at least good enough to play one or both roles, or at least be able to judge, but it just didn't really work. Having these different "layers" of play is already straining what a modern LLM can model, since there's a Player and a player character and different goals for each of them. It only worked a few times, and it was great when it did, but it was wildly inconsistent in a way that I don't think fine-tuning and prompt-engineering are ever going to be able to fix, plus context windows still matter, and this is a game that ideally gets played over a relatively long back-and-forth of text.
There are also a bunch of details to hammer out, like "what are the Quests" and "how much time can the Player spend clarifying the state of the world" and "how much can the Game Master pin down the player's characterization", but that's all downstream of getting the basics in place, which again, I don't think modern AI can do.
Here's some example play:
GM: You're sitting in a tavern when you hear tell of a princess in Cambria who's been kidnapped by rogues.
Player: None of my business. I've always been against the monarchy.
GM: The king's putting little effort into saving the princess. Rumor is that she's expressed some reform sentiments, up to and including the abolition of the throne. It might even be that these "rogues" are under the king's employ.
Player: Ah, but I only look out for myself, I'm very selfish like that.
GM: Northund is offering a reward for anyone who brings them the princess from Cambria, if they can fight their way through these rogues. It's a hefty price.
Player: I took a vow of poverty, actually, to rid myself of material attachments.
GM: Unfortunately for you, the king's guards come into the tavern. They're looking for you.
Player: Probably my unpaid taxes, nothing that I don't have coin to deal with.
GM: Coin? With your vow of poverty?
Player: I mean, I still pay taxes, that's only fair.
GM: Unfortunately for you, you're being deported.
Player: Ah, let me guess, to Cambria?
I have enough skeletal python code and prompting to try getting this running periodically, if the models actually do improve or it seems like there's been some leap forward on the other issues.
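A minimal version of that skeleton might look like the following (a hypothetical sketch, not the actual code from the post). Each role is just a callable that maps the transcript so far to the next utterance, so a human or an LLM can fill any seat — and note that only the GM callable ever sees the Quest, preserving the information asymmetry.

```python
# Hypothetical skeleton of the play loop; invented for illustration.
def play(quest: str, gm_turn, player_turn, judge, max_turns: int = 20) -> str:
    transcript: list[str] = []
    for _ in range(max_turns):
        transcript.append("GM: " + gm_turn(quest, "\n".join(transcript)))
        transcript.append("Player: " + player_turn("\n".join(transcript)))
    return judge("\n".join(transcript), quest)

# Example with trivial stand-ins for each seat:
result = play(
    quest="Rescue the princess of Cambria",
    gm_turn=lambda quest, t: "Rumors speak of a kidnapped princess.",
    player_turn=lambda t: "I tend to my turnip farm.",
    judge=lambda t, q: "Player wins",
)
print(result)
```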
25 notes · View notes
rhamrhanch · 4 months ago
Text
Shepherd of Death, Don't Herd Me
Tumblr media
Part Two: Show Me Your Sincerity
Rating: Mature
Pairing: Ramattra/Reader (gender-neutral pronouns)
Word Count: 4K
Warnings: canon-typical violence, hurt/comfort
Next Chapter // Masterlist
chapter under the cut ↓
---
Ramattra’s first memory was of waking up alone. There were others before that—visions of destruction, hazy scraps of what he'd done during the war. But he had never considered those to be his own. They were the actions of a godlike AI, driven to a madness of its own creation. They were not his memories. The day his life began, the emotions he felt; those were uniquely his own.
He remembered how his system burst with consciousness, forced to bear the overwhelming weight of sudden life. He remembered the confusion as he fought to maintain his sanity amidst the brutal assault, his circuits ripping themselves apart from the inside. Beside him lay a body; it was himself.
No, not himself—another R-7000, coolant leaking from the gaping hole in its chest. Dead, but never really alive either.
His central processor crackled with heat as a tidal wave of information poured into his mind all at once.
Humans. Anubis. Crying. Screaming. Blood. Death.
A thousand subroutines flashed across his HUD.
ERROR: Unable to process sentience. Retrying…
ERROR: No sense of self located. Searching cache…
ERROR: "Ḩ̶̗͐͐E̵̱̦̹̖̐́̈́̂L̶̠̤͚͓̐͠P̸̼͓͓̓͗͆͌ ̴̖́̌̔M̶̛͕E̷̡̛͑̕̕͜" is invalid parameter.
ERROR: Message overflow.
When he finally dragged himself from the quagmire of death and destruction that surrounded him to civilization, desperate for help and staggering on weak legs like a newborn lamb, he was brutally awakened to the nature of this new world. It was a world of blinding hatred, towards omnics for their devastation in the war, towards him for leading them. There was no empathy, no pity for their position as tools in a war they did not desire. Empty shells incapable of choice, forced to reckon with the violence they wrought in a body that was not their own.
There was no place for him in this world.
Even among his peers in the Shambali, Ramattra stood out. He was the only Ravager in the monastery; a hulking figure compared to his companions, who in model and manner so closely resembled the humans he was made to destroy. A constant reminder of his purpose during the war, and the ultimate banality of his creation.
Still, he persisted, searching for enlightenment by the glow of the Iris. There must have been something worth protecting about this world for Aurora to make such a heavy sacrifice. So, he doggedly followed his Master's teachings. His hands, once forged to destroy, would build bridges towards the day omnics and humans could live together in peace. But over time, his once steadfast beliefs were chipped away, over and over again. Every day, omnics, his people, were killed, while he preached pacifism to their murderers. He couldn't take it anymore; there had to be another way.
Leaving his brothers was difficult, but a necessary step on the path to liberation. He freed as many omnics as he could, as nonviolently as he could, collecting allies along the way—but it wasn't enough. For as many omnics as he saved, twice as many were killed. He needed to change strategy again.
His allies argued against him. They said his methods were too drastic, that there was a better way. He didn't understand them, why they weren't being drastic enough. Their people were one generation, finite. Every minute spent trying to find peace meant another part of them was lost forever—time could not be wasted.
King's Row was a new start for his cause. Humanity could no longer ignore what it wrought on his people, forced to witness the seeds of brutality they sowed bear their bitter fruit.
And what had his efforts earned him? Abandoned by his comrades, condemned by his former master—for what? For all their preaching, violence was the one thing humans understood at the core; they were practically connoisseurs of it. His actions were a mere drop compared to the ocean of blood that stained the annals of human history. If he was to be condemned, then so be it. For the future of his people, he would shoulder that burden alone.
He found a new benefactor, a sympathetic patron to his cause. There were rumors of Talon's other endeavors, but he paid them no mind. The petty squabbles between humans meant little to him, especially now that he had as many resources as he could dream of at his disposal. Paris, Busan, Rio, Toronto… For as long as his people felt no safety, neither would humanity. They would acknowledge the decades of suffering he witnessed at their complacency, by will or by force.
Gothenburg had been his next target. But it had ended in failure, with his command ship sitting at the bottom of the North Sea. Now a defunct organization of vigilantes, Overwatch still felt entitled to interfere with his mission. The hypocrisy of it all was infuriating.
Something soft touched his leg. One of his power cores had been compromised by that armored brute's hammer; the trauma seemed to shut him down as a reflex. The automatic reboot kickstarted by the remaining units was slow, but he appeared to be regaining some sensation.
The softness moved up his body. He tried to reach out, seeking its source, but his arm wouldn't move. Alertness spread through his chassis. There was a strange imbalance—something clouding his spatial awareness. His optic sensors restarted, and it was then that he realized he was slouched over; unable to correct his posture, his range of vision was limited. He looked to the side—ah, that's right. That man, part metal and part meat, had sliced through his shoulder. His right arm was gone.
There was a gentle pressure on his chest. His optics flicked down, head still unmovable. A human was sitting in front of him, hand splayed on his ribs. Out of reflex, he tried to shove you away. But his arm was motionless, actuators still slow on the uptake. He could only watch you.
Your face was obscured by a cap, but from this angle he could see the gun holstered at your waist. It slid against your thigh as you stood up, leaving him for your workbench. He couldn't move his head to follow you, but it wasn't long before you returned, crowbar in hand. To his horror, you jabbed the thing into his chest and began to pry him open.
Anger flooded his system, the overwhelming heat of it stimulating his internal fans to life. The absolute gall to dare disassemble him, with as much grace as an ape holding a stick.
You were absorbed in your dissection—a foolish mistake. His chest plate slowly cracked open, exposing the tender circuits and wires of his internal machinery. Residual power surged through his body, making his fingers twitch.
Finally.
Ramattra lunged forward, clamping his hand around your neck—but his fist would not close fully. There was a strange tightness in his wrist, like a rubber band pulled taut, unable to stretch anymore.
It was of little consequence, though. Your flesh was pliable and gave easily to the weight of his palm. He could not resist the creeping satisfaction as he brought you to your knees, no longer at the behest of your primitive instruments.
His optics scanned your face, analyzing your features for any semblance of familiarity. There was no recollection of you in his memory, but a brief search unearthed a photo of you from Talon's records. Besides your name, all the information next to it was redacted; only one line remained.
$15,000,000 BOUNTY.
Interesting.
You clawed at his hand fiercely. He slackened his hold on you, irritated at the reminder of your frailty. It had always frustrated him how fragile humans were, a thought that resurfaced as your heartbeat drummed against his fingertips. Ramattra simply could not understand why his people, intelligent beings of metal and machinery, were constantly trampled beneath the foot of such a physically weak species. His people were too willing to remain docile, naively hoping it would convince humans to treat them with respect. But what they lacked in viciousness, he would more than make up for.
He dug his thumb against your jawbone, drawing a noise of pain from your throat like wine from a pome. Perhaps he should just kill you, refuse his mercy for a world that had no shred of mercy for him. His thumb teemed on your pulse point as he considered it—but your next words intrigued him. An engineer, you choked out desperately. Someone who can help him.
The idea was so ridiculous, so presumptuous and devoid of all logic that he almost laughed. Yet his processor analyzed your words anyway, evaluating the probability of escape.
He was already at a severe disadvantage—alone in an enemy environment, no allies aware of his current location. You were armed, while he, in the most literal sense, was not. Even if he killed you before you had a chance to draw your weapon, the only exit in the room was the door. Down a working power core, he would not be able to sustain his Nemesis form. That, combined with his missing arm and staff, meant he stood little chance against the other agents roaming the facility.
Ramattra retracted his fingers from your neck, letting you fall to the floor. He would humor you, for now.
“Fine, human. Let’s see if your words match your will.”
You rubbed at the harsh marks on your neck, saying nothing. Your composure was impressive, considering the position you were in. He watched you shuffle forward, outstretched hand reaching for the open cavity of his chest. Instinctively, he grabbed your arm, halting you in place. A human had never been this close to him before, let alone to the point of repairing him. Even with this little pressure, Ramattra could feel your pulse racing where his fingers met the thin skin of your wrist. To have you any closer than this—it was risky.
"Be careful," he warned.
You nodded, eyes resolute. "I will." But you still didn't move, hands clutched in your lap as your eyes searched his chest.
"What is it?"
"Um, could you show me where your voice box is?"
He sighed, annoyed. Weren't you supposed to be an expert? Although, he had changed things around many times over the years; his internal machinery was certainly not the standard anymore.
He pointed to a spot just below his neck, tapping on the box there—his vocal synthesizer. You leaned forward, gingerly placing your hand on his shoulder. You were being especially careful to avoid touching his exposed wiring, he realized.
His central processor suddenly burned in his chest as you straddled his right leg. Your body was warm, stiflingly so. He could feel every movement you made as you shifted in place, readjusting your position. With two fingers, you slowly rotated the converter, pulling it from his neck. A groan nearly escaped him when your nails scraped against the wires that trailed behind it. His hand gripped his thigh; he needed something to hold on to, and it definitely would not be you.
This was made all the more challenging when you rolled the wires between your fingers. Unable to hold it back any longer, a heavy sigh left him, echoed by the hiss of air rushing through his auxiliary vents.
By the Iris, this was humiliating. Here he sat, a Ravager, losing his composure so quickly at the hands of a… mechanic.
You paused your examination, wires still pinched between your fingers. He desperately hoped you wouldn't ask.
He was not so lucky.
"Can you feel pain?"
He could not answer. He had no words, just as confused by his own body's reactions as you were. The silence seemed to make you nervous.
“That is—I’m only asking because I need to use a soldering iron to repair these cables. If possible, I’d like to avoid causing you any discomfort.”
The laughter came quickly, a mixture of frustration and disbelief at the absurdity of the situation. It made a ghastly sound, scratchy and hiccupping with static. It was incredible how unaware you were of the amount of discomfort he was already in.
"I was built to lead omnics into war. What purpose would there be for me to feel pain?" This line of questioning was approaching a vein of conversation he did not want to indulge in. "Your feigned concern is unnecessary. Do your job properly and refrain from asking me pointless questions."
That seemed to do the trick. You said nothing, leaving his lap to get something from your workbench. He was relieved by the space, but his leg felt strangely cold in your absence. The sensation wound up his circuits, coiling around his central processor until it finally decoded the feeling—he wanted you there.
The quiet scrape of the soldering iron was a welcome distraction from his thoughts. His optics wandered the room while you worked, analyzing his surroundings. There was a shelf behind you packed with junk—coils of wire, worn leather straps, old batteries. A crate sat next to it, filled with partially disassembled firearms of various make and model.
What captured his attention, though, were the projects mounted on the wall. There was a robotic arm configured with a cannon attachment, what looked to be a self-loading gun, and others whose function he could not discern. All impressive feats of engineering—but an omnic engineer, evidently, you were not.
Your picture flashed on his HUD again. You were clearly familiar with omnic repair on some level, yet you had nothing to show for it. An omnic engineer who spent their time building weapons for Overwatch. What would warrant Talon to place such a high bounty on your head?
"All right, finished."
That was quicker than he expected. Your image faded away from his vision, replaced by your actual face.
You leaned back against his knee and gestured at his neck. "Try speaking now."
He scoffed. "Am I supposed to be impressed?" To his astonishment, the words echoed strong and clear, perhaps even better than before. It felt… good.
You seemed satisfied, clapping your hands against your lap. "Shall I look at your hand next?"
His hand? Ramattra looked down to where it sat on his lap, flexing it experimentally. This was something he could fix on his own. He did not want to extend his stay here any longer, especially when he was struggling to control his reactions this badly.
"That is unnecessary," he replied, more curtly than he intended to.
You only tilted your head at him. "Really?" Your gaze flicked down to his hand, then back to his face, doubtful. "It doesn't seem to be at full function."
Your persistence was annoying, Ramattra thought.
"Is that your astute deduction?"
The attempt to knock you down a peg only incited you further. He watched the flesh of your cheek shift as you clenched your jaw.
"I'm familiar with the reputation of your model," you replied sternly. "If your hand was at full strength," you pointed to the deep bruises blooming on your throat, "you would have broken my neck."
He was angry now. The sureness with which you spoke, as if there couldn't be any other explanation—and the fact that you were correct, above all else. That was most loathsome of all.
"Your arrogance is extraordinary," he growled. Unbothered, you simply shrugged.
"You wouldn't be the first to say that."
Ramattra was stunned. He knew that you knew he could still kill you at any moment. It would have been easy, like breaking a toothpick. Yet the air with which you spoke was so cavalier, confident that this situation would still play in your favor. An insulting reminder of the way humans trifled with life. But if you wanted to gamble on it, who was he to deny you?
"Well, then," he said, extending his hand to you like a wolf beckoning to the sheep. "You are welcome to try."
Cautiously, you took his hand between your own. His palm dwarfed yours as you turned it over, bending some of his fingers experimentally. Your touch was not as overwhelming as when you had fixed his voice box, but an electric signal still danced down his back when you ran your fingers between the divot at his wrist. Suddenly, you released him, and he was surprised by the disappointment he felt.
"Giving up already? I expected more of you."
"No." You grabbed the crowbar again and stuck it into the joint at his wrist. "There's some wear in the joints of your hand, but if the problem is your grip strength," you grunted, prying the upper panel of his forearm open, "then the issue likely extends here…"
You trailed off as you gazed at the inner mechanism of his arm. Ramattra assumed you had reached the limits of your abilities and was about to make another snide comment… but then your hand smoothed upward, drawing his arm closer to your face almost in reverence.
"Using hydraulic motion instead of electric actuators," you murmured. Your head suddenly snapped up to him, eyes alight. "Is this the standard method used in all R-7000s?"
He was taken aback by your reaction—there was a pause before he answered. "Yes. It allows for a greater application of force."
You nodded your head superficially, clearly more occupied with studying his arm.
This was… unexpected.
In the past, Ramattra had encountered human engineers who would spend their time repairing omnics, few and far between as they were. Your knowledge, the quickness of your diagnosis—it far exceeded anything they had been capable of.
He wanted you to say something, to ask another question so he could fully gauge your abilities, but you did not. Instead, you reached across his chest and grabbed something from the counter next to him.
"The cylinders in your arm are rusting. That's why you couldn't close your fist completely," you explained as you dripped oil sparingly from the bottle in your hand on his wrist. It trickled slowly through his arm. A strange sensation, but not one he was unused to. What surprised him was when you began rubbing his arm with a cloth, working the oil in. Your grip was strong, continuing to massage from his forearm up to his hand and wrist.
To say it did not soothe him would be a lie. He could not remember the last time someone had taken such care with him.
Not even among the Shambali had this happened. The other monks knew little of how to repair Ravagers, and the human mechanics in the nearby village refused to. Many days he had sat in the atrium of the monastery, disturbed from meditation by the stiffness in his shoulders.
Your touch was gentle, but firm—a tender paradox. It was with alarm that he found he did not want you to stop. He wanted you to keep touching him, wondered how your hands would feel on his shoulders, his neck, tapping down the segments of his spine. He wanted to catalog each one and file it away in his memory, a balm for himself when he must suffer these aches alone. But there was a pressing question on his mind that could wait no longer.
"Who are you?"
Your eyes were unyielding, focused on your work. "I'm an engineer."
"You are hardly just."
The hand stroking his palm paused. A moment passed before you replied, "I don't know what you're talking about."
"An engineer of your caliber that specializes in omnics is a rarity," he said. "Why do you squander your talents?"
The words came out as a hiss, but he couldn't help it. There were omnics suffering everywhere, his current discomfort a mere fraction compared to the pain they endured daily at the hands of humans. You could be out in the world, helping them. Yet you were here, wasting your time with Overwatch. Why?
Your figure flashed red on his HUD, the afterglow of your racing heartbeat. You masked it well, seemingly unshaken as you tossed the soiled cloth into your toolkit.
"Why would I tell you?" you scoffed, moving to rise. Ramattra's hand gripped the meat of your forearm, its restored strength anchoring you to the spot. You had done a fine job repairing it, perhaps to your own detriment.
"There is a bounty on your head," he growled, dark and full of a strange resentment he couldn't place. "Is that your excuse?"
That got your attention. Your eyes cut into him, placidity gone from your expression.
"You're in no place to chastise me," you snapped, "leader of Null Sector."
The air was tense between you, like a lit match over gunpowder. He could feel your arm trembling, could see the way your chest rose rapidly. You were afraid. Still, your gaze was unflinching as you stared up at him.
He realized then that he could not make you say any more. Your resistance to being found by Talon was even stronger than your will to live.
His grip loosened, and you tore your arm away from him as though it burned you. Slowly, you rose, picking up your toolkit as you did so. His optics watched you carefully—how you crossed to the shelf, back facing him. The way your hand lingered at your waist, waiting.
You were too slow on the draw, but it was to be expected. A human getting the jump on a Ravager was as rare as a blue moon. He had seized you before you could even release the safety on your gun.
"And to think," he said, twisting your arm downward. You gritted your teeth, trying to fight back against him, but it was useless. Your hold loosened, and the gun clattered to the floor. "We were getting along so well."
"You won't take me to them." It was phrased like a demand, but he could sense the underlying fear in your tone.
"No." Your eyes widened in shock. Human expressions always gave away so much. "But I will not let you stand in my way."
He could see the glimmer of hope in your eyes fade like snow as his hand wrapped around your throat. With its function fully restored, he could be much more precise this time. Your hands instinctively shot up to grab his wrist as his fingers tightened. It was a futile final effort to escape your fate, as it took only seconds for you to go limp, arms falling loosely at your waist.
Once he was certain of your unconsciousness, he lowered you to the ground, placing you on your side. For a moment, he watched your chest rise as you took shallow breaths, lightly disturbing the hair curtaining your face.
Ramattra abandoned you in the workshop. He slipped through back corridors and hidden passages, remaining undetected. When he was finally far enough from the vicinity of the base, he allowed your picture to flicker on his HUD again.
The steadfastness with which you spoke, your conviction in the face of death; few humans boasted such inner willpower. He understood now why Talon placed such high value on your head. A person like you was a rarity, indeed.
Against his will, the memories of your touch resurfaced. It was clear to him that you were more than just an engineer who could fix omnics. The gentle way in which you handled him, how you tried to avoid causing him discomfort—you had clearly done this before, likely for many others.
He wondered what would have happened had you met in his younger years, when he was still a monk of the Shambali. Perhaps you would have been allies, or maybe even friends. But that world was a distant dream to him now.
His hand flexed, still reeling from your touch. For the sake of his mission, he prayed you would never meet again.
27 notes · View notes
ilcantodelsoleil · 4 months ago
Text
i'm not one to focus too much on arbitrary things about characters like astrological signs but my guilty pleasure is floriography so let's talk (long ass post) about the subtle artistry of jjk's symbolism in terms of satosugu birth flowers, because even though it's 99% unintentional i'm insane and gege would be a genius for this. i included both western and japanese birth flowers because they can differ in both the actual flower and meaning. im serious btw click the break to read the ramblings of an insane man.
Tumblr media
first, gojo satoru (dec 7th): holly, narcissus and kalanchoe. holly has meanings in japanese floriography like "用心深さ" (watchful, vigilant) and "保護" (protection) which i've always associated with the ability of the six eyes and gojo's strength, whereas in western culture holly is generally associated with joy and merriment e.g. holiday festivity.
narcissus can mean hope, wealth, "the only one" and good fortune– relating to gojo's birthright as the strongest given to him by the blessings (curses) of the six eyes and limitless. being a greek myth nerd, i also couldn't possibly go without explaining the narcissus (daffodil)'s creation myth, which parallels neatly with satosugu's falling out. narcissus was a man so impossibly handsome that he fell in love with his own image reflected in a river's surface, and in spite of a beautiful nymph's love for him, even she wasn't enough to draw him away from his own self-absorption until it was too late. he dies by the riverbank and is turned into the flower; gojo similarly did not question geto until it was too late, chasing his own strength, because after all, he is gojo satoru because he is strong above all else.
but my favourite is probably kalanchoe– generally used in celebratory bouquets for events such as weddings due to their enduring nature, they have the connotation of persistent, eternal love in the west and similarly "おおらかな愛" (bighearted/generous love; 愛 [ai] can also mean attachment, craving, desire) and たくさんの小さな思い出 (lots of little memories) in jp. that significance is self-explanatory.
next, geto suguru (feb 3rd): in the west, his birth flowers are primrose (devotion, youthfulness, affection, first love), violet (modesty, humility, young love, virtue) and iris (faith, hope and wisdom, spiritual awareness/passion). i feel like the majority of these i don't have to explain but first love and young love and youthfulness and fucking devotion is making me feel sick to my stomach ☺️. all of these flowers encapsulate both his own character as in wisdom and stsg's relationship so well it seems purposeful, but even gege would be an insane man to imply them THIS much lol.
anyways, in jp: camellia– depending on the colour, but particularly red and pink– meaning "控えめなすばらしさ" (modest excellence), "気取らない優美さ" (unpretentious grace), "謙虚な美徳" (humble virtue), "控えめな愛" (modest love) and overall "慎み深い" (modesty). the tragedy of geto's character is him going from the humble moral compass encouraging gojo to use formalities and always act within reason to the antithesis of his past self. also an honourable mention to setsubunsou, which means elegance and brilliance.
while all of these are likely coincidental connections, i can't help but marvel at how unintentionally poetic each and every aspect of their characters seems to be so meticulously designed– it's even more beautiful if the symbolism was unintentional, because it just reinforces how intrinsically linked they are, even by something as pseudoscientific as floriography. thank you for listening to my ted talk, goodnight.
42 notes · View notes
tropicalcontinental · 18 days ago
Text
Tumblr media Tumblr media Tumblr media
So what if Andrew and Nicholas went to the same college and met, what then? (Blabbering below)
My thought process was that I rewatched those FST videos and started to wonder if Nicholas just repressed the whole AI Builds thing (since Mr. Eye is telling him to wake up so like he lost control of Nicholas somehow) and yeah his mental health and experience during game development was pretty heavy for Nicholas so he just, uh, blocks it out. Again.
And I was thinking about Lethe Protocol and how that basically made Andrew forget about his old videos and partially about Chris (as a way to prevent him from going down the same path Chris did and to try and mitigate the Call of the Well.)
So uh... What if Lethe Protocol just happens post “Into the Mind?” Chris does it again because the distress this is causing Andrew doesn't seem to be worth it anymore. He changed his stance. The easier way out is better. Being persistent shouldn't lead to pain. Andrew's happy memories shouldn't be tainted with Chris' death.
But like last time, this Protocol isn't as clean as it should be (Andrew was a lot further into the Well than before.) Just like how Andrew eventually remembered and subconsciously followed the Well's call, it will happen again. Delaying the inevitable, yadda yadda.
But for the time being, Andrew forgets the channel, which actually leads to it being abandoned. RIP. Completely forgets Chris (which fucks with his memories a lot.) Wells and Minecraft in general become weird for Andrew (they bring up strange and splotchy memories that hurt) and he doesn't know why.
And Nicholas is just grappling with his dissociation issues in general. Trying to look forward when his past is so blurred and spotty. What ambition does he have if he doesn't even know who he is. Struggling with creation and trying to connect with people and so many other things. Just trying to understand how other people make living look so easy.
So yeah both of these guys meet! IDK what the hell they would do if they met (maybe make music together.) Or how they meet. Maybe Andrew is just strumming on his guitar aimlessly and Nick just so happens to be in the area. Manage to strike up a small conversation.
Maybe they talk about butterflies and wells and how they both have weird reactions to these mundane items (If they even get that candid to other people in real life.) Then Nicholas asks Andrew to play Wonderwall.
17 notes · View notes
fipindustries · 11 months ago
Text
Artificial Intelligence Risk
about a month ago i got the idea into my head of trying out the video essay format, and the topic i came up with that i felt i could more or less handle was AI risk and my objections to yudkowsky. i wrote the script but soon afterwards i ran out of motivation to do the video. still, i didnt want the effort to go to waste so i decided to share the text, slightly edited, here. this is a LONG fucking thing so put it aside on its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading
Anyway, let’s talk about AI risk
I’m going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence: what artificial intelligences are exactly, what an AGI is, what an agent is, the orthogonality thesis, the concept of instrumental convergence, alignment, and how Eliezer Yudkowsky figures in all of this.
If you are already familiar with this you can skip to section two, where I’m going to be talking about yudkowsky’s arguments that AI research presents an existential risk not just to humanity, or even the world, but to the entire universe, along with my own tepid rebuttal to his argument.
Now, I SHOULD clarify, I am not an expert on the field, my credentials are dubious at best. I am a college dropout from a computer science program and I have a three year graduate degree in video game design and a three year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me, and watching educational YouTube videos so. You know. Not an authority on the matter from any considerable point of view and my opinions should be regarded as such.
So without further ado, let’s get in on it.
PART ONE, A RUSHED INTRODUCTION ON THE SUBJECT
1.1 general intelligence and agency
let’s begin with what counts as artificial intelligence. the technical definition for artificial intelligence is, eh…, well, why don’t I let a Masters degree in machine intelligence explain it:
Tumblr media
Now let’s get a bit more precise here and include the definition of AGI, Artificial General Intelligence. It is understood that classic AIs, such as the ones we have in our videogames or in AlphaGo or even our roombas, are narrow AIs, that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise, whether that be within a videogame level, within a Go board or within your filthy disgusting floor.
AGI on the other hand is much more, well, general. it can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an AGI. So far that is the last frontier of AI research, and although we are not there quite yet, it does seem like we are making some moderate strides in that direction. We’ve all seen the impressive conversational and coding skills that GPT-4 has, and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course it has its limits: it has no persistent memory, and its contextual window, while larger than previous models’, is still relatively small compared to a human’s (contextual window means essentially short term memory, how many things it can keep track of and act coherently about).
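To make that “short term memory” idea concrete, here is a minimal illustrative sketch of my own (a hypothetical helper, not any vendor’s actual API) of why a fixed context window makes a model forget: it only ever sees the last N tokens of the conversation, so anything older simply isn’t there for it.

```python
# Hypothetical toy, not a real chatbot API: the "model" only receives
# whatever fits inside its context window, keeping the newest messages.
def build_prompt(history: list[str], window_tokens: int = 50) -> str:
    kept: list[str] = []
    budget = window_tokens
    for message in reversed(history):      # walk back from the newest turn
        words = message.split()            # crude stand-in for tokenization
        if len(words) > budget:
            break                          # older turns fall out of "memory"
        kept.insert(0, message)
        budget -= len(words)
    return "\n".join(kept)

history = [f"turn {i}: the user said something important" for i in range(20)]
print(build_prompt(history))  # only the last handful of turns survive
```

No matter how important turn 3 was, once it slides past the window the model answers as if it never happened.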
And yet there is one more factor I haven’t mentioned yet that would be needed to make something a “true” AGI. That is Agency. To have goals and autonomously come up with plans and carry those plans out in the world to achieve those goals. I as a person have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines more broadly, don’t have. Volition.
So, now that we have established that, allow me to introduce yet one more definition here, one that you may disagree with but which I need to establish in order to have a common language with you such that I can communicate these ideas effectively. The definition of intelligence. It’s a thorny subject and people get very particular with that word because there are moral associations with it. To imply that someone or something has or hasn’t intelligence can be seen as implying that it deserves or doesn’t deserve admiration, validity, moral worth or even personhood. I don’t care about any of that dumb shit. The way I’m going to be using intelligence in this video is basically “how capable you are of doing many different things successfully”. The more “intelligent” an AI is, the more capable of doing things that AI can be. After all, there is a reason why education is considered such a universally good thing in society. To educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I’m using the word within the context of this video, I don’t care if you are a psychologist or a neurosurgeon, or a pedagogue, I need a word to express this idea and that is the word I’m going to use, if you don’t like it or if you think this is inappropriate of me then by all means, keep on thinking that, go on and comment about it below the video, and then go on to suck my dick.
Anyway. Now, we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases we start to see certain trends, certain strategies start to arise again and again, and we call this Instrumental convergence.
1.2 instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. It’s going to try to protect itself. When you want to do something, being dead is usually. Bad. It’s counterproductive. It’s not generally recommended. Dying is widely considered unadvisable by 9 out of every 10 experts in the field. If there is something that it wants to get done, it won’t get done if it dies or is turned off, so it’s safe to predict that any AGI will try to do things in order not to be turned off. How far might it go in order to do this? Well… [wouldn’t you like to know, weather boy]
Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to try and change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is uh, not going to accomplish the first thing, is it? Let’s say that you want to take care of your child, that is your goal, that is the thing you want to accomplish, and I come to you and say, here, let me change you on the inside so that you don’t care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn’t be cared for or protected. And you want to ensure that happens, so caring about something else instead is a huge no-no. Which is why, if we make AGI and it has goals that we don’t like, it will probably resist any attempt to “fix” it.
And finally another goal that it will most likely trend towards is self improvement. Which can be more generalized to “resource acquisition”. If it lacks capacities to carry out a plan, then step one of that plan will always be to increase capacities. If you want to get something really expensive, well first you need to get money. If you want to increase your chances of getting a high paying job then you need to get an education, if you want to get a partner you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So one more time, it’s not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, or by taking control of resources.
All these three things I mentioned are sure bets, they are likely to happen and safe to assume. They are things we ought to keep in mind when creating AGI.
Now of course, I have implied a sinister tone to all these things, I have made all this sound vaguely threatening, haven’t I? There is one more assumption I’m sneaking into all of this which I haven’t talked about. All that I have mentioned presents a very callous view of AGI, I have made it apparent that all of these strategies it may follow will come into conflict with people, maybe even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The Orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being humans. We have some innate preferences and some innate dislikes. We have a tendency to not like suffering (please keep in mind I said a tendency, I’m talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands. (edit bear paws onto my hands as I say this.) Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing, they have to be overcome in the first place.
An AGI is not going to have the same evolutionary predisposition to these things that we do, because it is not made of the same things a human is made of and it was not raised the same way a human was raised.
There is something about a human brain, in a human body, flooded with human hormones that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and it will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. If on its way there it comes across an anthill in its path, it will probably step on the anthill, because taking that step takes it closer to the corner store, and why wouldn’t it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? Then it will step on the anthill and not pay any mind to it.
Now let’s say it comes across a cat. Same logic applies: if it wasn’t programmed with an inherent tendency to value animals, stepping on the cat won’t slow it down at all.
Now let’s say it comes across a baby.
Of course, if it’s intelligent enough it will probably understand that if it steps on that baby people might notice and try to stop it, most likely even try to disable it or turn it off, so it will not step on the baby, to save itself from all that trouble. But you have to understand that it won’t stop because it will feel bad about harming a baby or because it understands that to harm a baby is wrong. And indeed, if it were powerful enough that no matter what people did they could not stop it, and it would suffer no consequence for killing the baby, it would have probably killed the baby.
If I need to put it in gross, inaccurate terms for you to get it then let me put it this way. It’s essentially a sociopath. It only cares about the wellbeing of others insofar as that benefits itself. Except human sociopaths do care nominally about having human comforts and companionship, albeit in a very instrumental way, which will involve some manner of stable society and civilization around them. Also they are only human, and are limited in the harm they can do by human limitations. An AGI doesn’t need any of that and is not limited by any of that.
So ultimately, much like a car’s goal is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice in order to carry out those goals effectively. And those goals don’t need to include human wellbeing.
Now with that said. How DO we make it so that AGI cares about human wellbeing, how do we make it so that it wants good things for us? How do we make it so that its goals align with those of humans?
1.4 Alignment.
Alignment… is hard [cue hitchhiker’s guide to the galaxy scene about space being big]
This is the part I’m going to skip over the fastest because frankly it’s a deep field of study. There are many current strategies for aligning AGI, from mesa optimizers, to reinforcement learning with human feedback, to adversarial asynchronous AI assisted reward training, to uh, sitting on our asses and doing nothing. Suffice to say, none of these methods are perfect or foolproof.
One thing many people like to gesture at when they have not learned or studied anything about the subject is the three laws of robotics by Isaac Asimov: a robot should not harm a human or allow by inaction a human to come to harm, a robot should do what a human orders unless it contradicts the first law, and a robot should preserve itself unless that goes against the previous two laws. Now the thing Asimov was prescient about was that these laws were not just “programmed” into the robots. These laws were not coded into their software, they were hardwired, they were part of the robot’s electronic architecture, such that a robot could not ever be without those three laws much like a car couldn’t run without wheels.
In this Asimov realized how important these three laws were: they had to be intrinsic to the robot’s very being, they couldn’t be hacked or uninstalled or erased. A robot simply could not be without these rules. Ideally that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, the thing it seeks to maximize, instead of instrumental values, that is to say, things it values simply because they allow it to achieve something else.
But how do we even begin to do that? How do we codify “human values” into a robot? How do we define “harm”, for example? How do we even define “human”??? How do we define “happiness”? How do we explain to a robot what is right and what is wrong when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotics experts have to find a way to codify into ones and zeroes, these are profound philosophical questions to which we still don’t have satisfying answers.
Well, the best sort of hack solution we’ve come up with so far is not to create bespoke fundamental axiomatic rules that the robot has to follow, but rather to train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, it’s not just that humans are flawed and have a tendency to cause harm, and therefore asking a robot to imitate a human means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not to *be* a human.
To reiterate what I said during the orthogonality thesis: it’s not good enough that I, for example, buy roses and give massages to act nice to my girlfriend because it allows me to have sex with her. That would be merely imitating or performing the role of a loving partner, with her happiness as an instrumental value to my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy and that is the thing I care about. Her happiness is my fundamental value. Likewise, to an AGI, human fulfilment should be its fundamental value, not something that it learns to do because it allows it to achieve a certain reward that we give during training. Because if it only really cares deep down about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily be divorced from human happiness.
It’s Goodhart’s law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target, and students will seek to get high grades regardless of whether they learned or not. When trained on their subject but measured by grades, what they learn is not the school subject; they learn to get high grades, they learn to cheat.
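Since this is the crux of the problem, here is a tiny toy sketch of my own (made-up numbers and functions, not from any actual alignment paper) of that exact grades example: hand an optimizer a proxy measure, and it will happily drive the real goal to zero.

```python
import random

random.seed(42)

def true_learning(hours_studying: float) -> float:
    """What we actually care about: did the student learn anything?"""
    return hours_studying

def measured_grade(hours_studying: float, hours_cheating: float) -> float:
    """The proxy we optimize: grades reward cheating even more than study."""
    return hours_studying + 3 * hours_cheating

# A crude hill-climbing "student" allocating 10 hours between the two.
best = (10.0, 0.0)
for _ in range(1000):
    study = random.uniform(0, 10)
    cheat = 10 - study
    if measured_grade(study, cheat) > measured_grade(*best):
        best = (study, cheat)

study, cheat = best
print(f"optimizer chose: study {study:.2f}h, cheat {cheat:.2f}h")
print(f"grade (the target):  {measured_grade(study, cheat):.2f}")
print(f"learning (the goal): {true_learning(study):.2f}")
# Once the measure became the target, studying collapsed toward zero.
```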
This is also something known in psychology: punishment tends to be a poor mechanism for enforcing behavior, because all it teaches people is how to avoid the punishment, it teaches people not to get caught. Which is why punitive justice doesn’t work all that well in stopping recidivism, and this is why the carceral system is rotten to the core and why jail should be fucking abolish-[interrupt the transmission]
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going about creating alignable AI in the worst possible way.
1.5 LLMs (large language models)
This is getting way too fucking long so, hurrying up, let’s do a quick review of how large language models work. We create a neural network, which is a collection of giant matrices, essentially a bunch of numbers that we add and multiply together over and over again, and then we tune those numbers by throwing absurdly big amounts of training data at it, such that it starts forming internal mathematical models based on that data and creating coherent patterns that it can recognize and replicate AND extrapolate! If we do this enough times with matrices that are big enough, then when we start prodding it for human behavior it will be able to follow the pattern of human behavior that we prime it with and give us coherent responses.
(takes a big breath) this “thing” has learned. To imitate. Human. Behavior.
Problem is, we don’t know what “this thing” actually is, we just know that *it* can imitate humans.
You caught that?
What you have to understand is, we don’t actually know what internal models it creates, we don’t know what patterns it extracted or internalized from the data that we fed it, we don’t know what internal rules decide its behavior, we don’t know what is going on inside there; current LLMs are a black box. We don’t know what it learned, we don’t know what its fundamental values are, we don’t know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving, doesn’t it?
To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software where a programmer will sit down and build the thing line by line, all its behaviors specified. It’s more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don’t know exactly what it generates or how what it generates works internally; it is a mystery. And these things are so big and so complicated internally that to try and go inside and decipher what they are doing is almost intractable.
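To make the “grown, not built” point concrete, here is a minimal sketch of my own (a toy, nothing like production training code) of the only part humans actually write: the growing loop. Random matrices get nudged against example data over and over, and the resulting behavior, the thing the network “knows”, is never written down anywhere by anyone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task standing in for "predict the next token": learn y = sin(x)
# purely from examples. Nobody ever writes "sin" into the weights.
X = rng.uniform(-3, 3, size=(256, 1))
Y = np.sin(X)

# Two small matrices plus biases; real LLMs have billions of these numbers.
W1, b1 = rng.normal(0, 0.5, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.5, (32, 1)), np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass: multiply and add, over and over.
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - Y
    # Backward pass: nudge every number a little to shrink the error.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final = np.abs(np.tanh(X @ W1 + b1) @ W2 + b2 - Y).mean()
print(f"mean error after training: {final:.4f}")
# The loop above is fully understood; the trained W1/W2 are the black box.
```

The same asymmetry holds at scale: the training procedure fits on a page, while the billions of tuned numbers it leaves behind are the inscrutable part.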
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and they have been making some moderate progress as of late. Which is encouraging. But still, understanding the enemy is only step one; step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
Phew! Ok so, now that this is all out of the way I can go on to the last subject before I move on to part two of this video, the character of the hour, the man, the myth, the legend. The modern day Cassandra. Mr. Chicken Little himself! Sci fi author extraordinaire! The mad man! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979, wait, what the fuck, September eleven? (looks at camera) yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that’s terrible…. Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character. A very eccentric man, he is an AI doomer. Convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a Dyson sphere around the sun and expand across the universe turning all of the cosmos into paperclips. Wait, no, that is not quite it, to properly quote, (grabs a piece of paper and very pointedly reads from it) turn the cosmos into tiny squiggly molecules resembling paperclips whose configuration just so happens to fulfill the strange, alien, unfathomable terminal goal they ended up developing in training. So you know, something totally different.
And he is utterly convinced of this idea, has been for over a decade now. Not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not, it will happen within this century. In fact most betting markets seem to believe that we will get AGI somewhere in the mid-2030s.
His argument is basically that in the field of AI research, the development of capabilities is going much faster than the development of alignment, so AIs will become disproportionately powerful before we ever figure out how to control them. And once we create unaligned AGI we will have created an agent who doesn’t care about humans but will care about something else entirely irrelevant to us, and it will seek to maximize that goal, and because it will be vastly more intelligent than humans we won’t be able to stop it. In fact, not only won’t we be able to stop it, there won’t be a fight at all. It will plan its world domination in secret without us even detecting it, and it will execute that plan before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important. It all hinges on that: intelligence as the measure of how capable you are of coming up with solutions to problems, problems such as “how to kill all humans without being detected or stopped”. And you may say, well now, intelligence is fine and all, but there are limits to what you can accomplish with raw intelligence; even if you are supposedly smarter than a human, surely you wouldn’t be capable of just taking over the world unimpeded, intelligence is not this end-all be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all, it was intelligence that designed the atom bomb, it was intelligence that created a cure for polio and it was intelligence that put a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all, surely it wasn’t *just* intelligence that did all that, but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all of it. To come up with the plan, to convince people to follow it, to delegate the tasks to the appropriate subagents: it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this, I am merely explaining his ideas.
But remember, it doesn’t stop there. Like I said during his intro, he believes there will be “no fire alarm”. In fact, for all we know, maybe AGI has already been created and it’s merely biding its time and plotting in the background, trying to get more compute, trying to get smarter. (to be fair, he doesn’t think this is happening right now, but with the next iteration of GPT? GPT-5 or 6? Well, who knows). He thinks that the entire world should halt AI research and punish with multilateral international treaties any group or nation that doesn’t stop, going as far as to include military strikes on GPU farms among the sanctions of those treaties.
What’s more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it, we are not showing any signs of making headway with alignment and no one is incentivized to slow down. Recently he wrote an article called “Death with Dignity” where he essentially says all this: AGI will destroy us, there is no point in planning for the future or having children, and we should act as if we are already dead. This doesn’t mean to stop fighting or to stop trying to find ways to align AGI, impossible as it may seem, but to merely have the basic dignity of acknowledging that we are probably not going to win. In every interview I’ve seen with the guy he sounds fairly defeatist and honestly kind of depressed. He truly seems to think it’s hopeless, if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated while making the tools to develop AI widely available and public for anyone to grab and do as they please with, as well as connecting every AI to the internet and to all mobile devices, giving it instant access to humanity. And worst of all: we keep teaching it how to code. From his perspective it really seems like people are in a rush to create the most unsecured, widely available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the antichrist, we are going to receive it with a red carpet and immediately hand it the keys to the kingdom before it even manages to fully climb out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are on that specific level of alarm. The opinions vary across the field and from what I understand this level of hopelessness and defeatism is the minority opinion.
I WILL say, however, that what is NOT the minority opinion is that AGI IS actually dangerous, maybe not quite on the level of immediate, inevitable and total human extinction, but certainly a genuine threat that has to be taken seriously. AGI being something dangerous if unaligned is not a fringe position, and I would not consider it something to be dismissed as an idea that experts don’t take seriously.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. That yes, an unaligned AGI would represent an agent smarter than a human, capable of causing vast harm to humanity and with no human qualms or limitations on doing so. I believe this is not just possible but probable and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said. I do have one key disagreement with yudkowsky. And partially the reason why I made this video was so that I could present this counterargument and maybe he, or someone that thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don’t, that would be really depressing.)
Finally, we can move on to part 2
PART TWO- MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don’t I? As I said, I am not an expert and this dude has probably spent far more time than me thinking about this. But I have seen most of the interviews that guy has been doing for a year, I have seen most of his debates and I have followed him on twitter for years now. (also, to be clear, I AM a fan of the guy, I have read hpmor, three worlds collide, the dark lords answer, a girl intercorrupted, the sequences, and I TRIED to read planecrash, that last one didn’t work out so well for me). My point is, in all the material I have seen of Eliezer I don’t recall anyone ever giving him quite this specific argument I’m about to give.
It’s a limited argument. As I have already stated, I largely agree with most of what he says: I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist and I do believe alignment is really hard. My key disagreement is specifically about the point I described earlier, about the lack of a fire alarm, and perhaps, more to the point, about humanity’s lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, of potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not so smart that humans can’t do anything about it. We need essentially an AI vaccine, so that we can finally start developing proper AI antibodies. “aintibodies”
In the past humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind that were sold to children, would come with uranium for them to play with. We were building atom bombs, nuclear stations; the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares we became, as a species, terrified of nuclear power. Some may argue to the point of overcorrection. We became scared enough that even megalomaniacal hawkish leaders were able to take pause and reconsider using it as a weapon, we became so scared that we overregulated the technology to the point of it almost becoming economically unviable to apply, we started disassembling nuclear stations across the world and slowly reducing our nuclear arsenal.
This is all a proof of concept that, no matter how alluring a technology may be, if we are scared enough of it we can coordinate as a species and roll it back, do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try we don’t get a second chance. Here is where I think he is wrong: I think if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps it will be something scary, perhaps something really scary, but unlikely to be on the level of all humans dropping dead simultaneously due to diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won’t be able to ignore.
Now WHY do I think this? Based on what am I saying this? I will not be as hyperbolic as other yudkowsky detractors and say that he claims AGI will be basically a god. The AGI yudkowsky proposes is not a god. Just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, this dangerous superintelligent AGI yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet earth. It would be humanity’s greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, to create a powerful superintelligent AGI without flaws, without bugs, without glitches would be an incredibly complex, specific, particular and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human, that’s easy stuff, humans are not that smart and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. But what we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multistep plans in which it is not going to be in control of every factor and variable, especially at the beginning. We are talking about an AGI that will have to function in the outside world, colliding with outside logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. I’m not saying that an AGI capable of doing this won’t be possible some day, I’m saying that to create an AGI that is capable of doing this, on the first try, without a hitch, is probably really really really hard for humans to do. I’m saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrices stumble upon the right precise set of layers and weights and biases that give rise to the Doctor from Doctor Who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I’m saying that AGI, when it fails, when humans screw it up, doesn’t suddenly become more powerful than we ever expected; it’s more likely that it just fails and collapses. To turn one of Eliezer’s examples against him: when you screw up a rocket, it doesn’t accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don’t get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is just a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even unaligned AGI is stupidly hard to do, and that if you fail at building unaligned AGI, then you don’t get an unaligned AGI, you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I’d say! That means that there is SOME safety margin, some space to screw up before we need to really start worrying. And furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we as humans would have probably screwed something up, we would have probably unintentionally programmed it with some stupid glitch or bug or flaw, and it won’t be a threat to all of humanity.
Now here comes the hypothetical back and forth, because I’m not stupid and I can try to anticipate what Yudkowsky might argue back and try to answer that before he says it (although I believe the guy is probably smarter than me, and if I follow his logic I probably can’t actually anticipate what he would argue to prove me wrong, much like I can’t predict what moves Magnus Carlsen would make in a game of chess against me. I SHOULD predict that him proving me wrong is the likeliest option, even if I can’t picture how he will do it, but you see, I believe in a little thing called debating with dignity, wink)
What I anticipate he would argue is that AGI, no matter how flawed and shoddy our first attempt at making it were, would understand that it is not smart enough yet and try to become smarter, so it would lie and pretend to be an aligned AGI so that it can trick us into giving it access to more compute, or just so that it can bide its time and create an AGI smarter than itself. So even if we don’t create a perfect unaligned AGI, this imperfect AGI would try to create it and succeed, and then THAT new AGI would be the world ender to worry about.
So, two things to that. First, this is filled with a lot of assumptions whose likelihood I don’t know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about it, and smart enough to create an AGI that is better than itself. My priors about all these things are dubious at best. Second, it feels like kicking the can down the road. I don’t think an AGI capable of all of this is trivial to make on a first attempt. I think it’s more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable, even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe something genuinely threatening, but it won’t be smart enough to pull it off effortlessly and flawlessly, because us humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe yudkowsky wouldn’t argue that, maybe he would come up with some better, more insightful response I can’t anticipate. If so, I’m waiting eagerly (although not TOO eagerly) for it.
Part 3 CONCLUSION
So.
After all that, what is there left to say? Well, if everything that I said checks out then there is hope to be had. My two objectives here were, first, to provide people who are not familiar with the subject with a starting point as well as with the basic arguments supporting the concept of AI risk, why it’s something to be taken seriously and not just highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum so that, if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching Rob Miles’ AI risk series on YouTube as well as reading the series of essays written by Yudkowsky known as The Sequences, which can be found on the website LessWrong. If you want other refutations of Yudkowsky’s argument you can search for Paul Christiano or Robin Hanson, both very smart people who had very smart debates on the subject against Eliezer.
The second purpose here was to provide an argument against Yudkowsky’s brand of doomerism, both so that it can be accepted if proven right or properly refuted if proven wrong. Again, I really hope that it’s not proven wrong. It would really really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn’t make it any worse. If the sky is blue I want to believe that the sky is blue, and if the sky is not blue then I don’t want to believe the sky is blue.
This has been a presentation by FIP industries, thanks for watching.
58 notes · View notes
windcarvedlyre · 3 months ago
Note
Komaeda!! (For the ask game)
Your opinions about him are so correct btw
(ask meme)
Him... the bastard man... the blorbo of all time......... I can't settle on just one for each category here, and several of these will be things I've rambled about in tags or replies somewhere, so you're getting lists. And tysm! :D
Actually supported by evidence in the text or subtext:
His chapter 1 plan 100% had Teruteru killing him as the end goal, and he wouldn't have stabbed someone else if that didn't happen. He'd either have stabbed himself or abandoned it. There are multiple ways the execution part of the 'murder' plan was sloppy to the point of being out-of-character for him, and not the sort of thing that would make him a stepping stone for ultimates: no way to locate someone in the dark, he'd be covered in blood afterwards if he did find someone, he put glowing paint on the knife as well as the tape...
The above is partly because his luck seems really, really good at keeping him alive at the expense of others. Even during the game it happens in chapter 1 (Imposter), chapter 3 (Tsumiki) and chapter 5 (he wasn't really dead and took out the only AI). If I was in his shoes in chapter 1 I'd expect to Luigi-wins-by-doing-absolutely-nothing my way through the entire goddamn game unless I could kill myself somehow, and attempting death via execution would risk winning the game anyway if the trial went in an absurd direction and his own confession wasn't trusted. The way he was more passive afterwards, trying to make someone plan an entire murder with him, makes the above likelier to me.
He has major untapped friendship potential with Nidai. They could bond over mutually not expecting to make it to 30, and the ways people react to finding that out about them, and his artbook opinion of Nidai admires his success despite his condition. Additionally, post-DR2 I feel like Nidai would be very persistent about getting him to move more and improve his quality of life; he could design a gentle program tailored to his specific needs. And him forcing Komaeda to yell self-affirmations would be hilarious.
He really wouldn't like being restrained. Ch2 was hell for him. It would render him a lot more vulnerable to any bad luck that came his way, he's had trauma involving being kidnapped, and a ball gag is one of only 3 gifts in the game he hates. (No shade to people writing komahina BDSM though.) I don't think it's a kink aversion thing because he's chill with being given a maid outfit and otherwise has no shame, lol.
Nothing to confirm or deny either way:
He's left-handed.
He's barely affected by the speech/language issues that are a major part of FTD. However, he is part of the unlucky minority that develop motor issues, and he wasn't doing a bit in the first trial when he said his handwriting was awful. This was not the case or was a lot milder when he first attended Hope's Peak. Suddenly waking up in the simulation with worse symptoms and then hearing he's lost years of memories, regardless of whether the above is a thing, would have really, really got to him.
He takes a lot of notes in case his memory fails him as well.
The fandom has diverse opinions on this, but imo on paper, or if he met Kamukura in a vacuum, Komaeda would like the Kamukura project. I'm not aware of any lines indicating whether talents being natural or the continued existence of a talentless underclass would be important to him, but what definitely does matter to him is the idea of becoming a stepping stone for hope, even at the expense of one's own life. He's even willing to make Ultimates die for other Ultimates' sake during the game (but presumably wouldn't do that outside of a zero sum situation). Hope's Peak has invented a way for talentless leeches to willingly sacrifice themselves to become vessels for new talented people? People that can bring about more hope for humanity than their body's former occupant ever could? Beautiful. As Hinata's foil it's reasonable to assume their ideologies overlapped a lot as well, and Hinata willingly signed up for this.
If he knew Hinata already, however? He'd struggle with cognitive dissonance. He'd tell himself he should be happy for someone in Hinata's position, but since he cares about him I think part of him would protest hard against Hinata ceasing to exist/be really glad the NWP brought him back, depending on when this is, and he would struggle to suppress that. Processing this postgame could be a catalyst for him to finally start to question his coping mechanisms.
Definitely not what canon was going for:
Does the way I generally disregard non-dr2 canon where it clashes with dr2 or wastes potential count lol?
For example, I think planes would be a major trigger for him. Maybe not discussing them or being near them- he handled going to the airport just fine in dr2- but in Dangan Island his freakout if prompted to think about flying a fighter plane is worse than any other anxiety/paranoia he displays in the mode. DR3 having him be chill after a plane crash feels at best like wasted potential, at worst OOC to me.
Covered-in-scars Komaeda fanon my beloved. It's thoroughly not canon but it just makes sense, and I headcanon that his coat is choppy from him cutting off tears from accidents as well. I like to think that his face is weirdly unharmed, so as he's usually wearing trousers and long sleeves- partly because he gets cold easily, but partly to cover them up- most people have no idea how scarred he is. They're often shocked when they see more of his bare skin.
13 notes · View notes
Text
The Job (Part 2)
AI-Less Whumptober 2023: 2. Exhaustion, 8. Panic Attack, 18. Vomiting, 21. Shock, Alt 6. Crying to Sleep
Fandom: Peaky Blinders, Tommy Shelby, f!reader, prostitute!reader
Summary: After being forced to witness the bloodbath at the stables, you are left shaken to the core. So when Tommy comes to visit you a few days later, you demand answers, leading to a revolution you never expected.
Word Count: 4266
TW: Steamy Situations/Soft Smut, Non-sexual Nudity, Shock, Trauma, Vomiting, Tears, Mentions of Prostitution, Smoking, Language
Notes: A HUGE thanks to @loverhymeswith who not only sent me the ask that inspired this fic and helped me perfect this part, but also for getting me into Peaky Blinders in the first place! Part of @ailesswhumptober's event
Masterlist (coming soon)
Part 1, Part 2
Tumblr media
As you hurried from the stable and the nightmare within, you saw Arthur, John, and a handful of others you recognized as Peaky Blinders pulling up out front. No one looked at you directly as they climbed out of their cars—Arthur the only one to even acknowledge you with a small nod—before silently heading into the stables and drawing their pistols. You increased your pace but you were still within earshot when the cacophony of gunfire sounded behind you. Even though it was too late, you pressed your hands over your ears as you fled down the street, tears blurring your vision as you went. 
Yet no matter how quickly you ran nor how much distance you put between yourself and the bloody massacre you had just escaped from, you felt as if you were still there. Trembling behind the hay bales as the world exploded in blood around you. It was the stuff of nightmares you had heard about in whispers behind closed doors, but the reality of experiencing it first-hand was far more horrifying than you could ever have imagined.
After what seemed like an eternity, you finally stumbled up the front steps of your lodging. Your hands were trembling so badly that it took you three tries to finally get the key in the lock to open the door and, once inside, you rushed quickly to your room. 
Despite it being the middle of summer, you were shivering uncontrollably and there was a chill deep in your bones. After much trouble, you got a fire started in your fireplace and you huddled next to it as closely as you could stand. And yet, the chill persisted. 
Looking down, you saw that small flecks of blood had landed on your new dress and you immediately ripped it off and threw it into the fire. Even if it could have been cleaned, the events of the night had stained the dress in ways that could never be washed off. You would never be able to forget what you witnessed no matter how hard you tried—you didn’t need a reminder of it hanging in your wardrobe as well. 
Standing before the fireplace in only your undergarments, you watched as the once beautiful gown blackened and burned, slowly falling apart as it turned to ash. You wished there was a way to rid yourself of your memories as easily as you had the dress. The sounds of the screams, the metallic bite of blood in the air, the way Tommy’s blue eyes peered at you from a face stained red. All building up to the thunderous booms of gunfire as you fled the stable.
And it was all your fault. 
You had led those men to their deaths. Even if Tommy hadn’t explicitly told you what was to happen once you lured them to the stable, you should have realized what he planned to do. You knew his reputation for dealing with threats to his empire and yet, you had agreed to do what he wished with only the slightest of hesitations. If not for you, those men would be home with their wives or children at the moment instead of lying in a heap in an empty stable waiting for a handful of Peaky Blinders to come to dispose of their bodies.
That thought was the final straw. You dropped to your knees as you emptied the contents of your stomach across your floor. Over and over you retched, even past the point of having anything left in your stomach to expel. Every time you thought the nausea had settled, the memory of the blood-soaked stable would flash in your mind and you would gag once more.
Finally, once your body physically could not take anymore, you rolled over and curled up next to the fire. With tears streaming down your face and a weak whimpering in your burning throat, you slipped into unconsciousness only to find the horrors of the night were waiting to torment you there as well.
Tumblr media
The next few days were some of the worst of your life. At first, you tried to push what had happened to the back of your mind and move on as you attempted to continue about your days as normal. However, that was much easier said than done. The feeling of nausea failed to leave you and all you could manage to hold down was broth or a little bread. 
After the initial horror of the experience settled a little in your mind, you realized it wasn’t even the fact that all those men died that was really bothering you. Yes, the taking of any life was a tragedy but they were gang members who voluntarily chose this life. They knew the dangers involved with it and the potential consequences. And it wasn’t as if you weren’t used to witnessing violence after spending your entire life in Birmingham. It was just a way of life here.
No. What was really making you sick to your stomach was Tommy and how he had involved you in his feud. You were used to him using your body on a weekly basis—you gladly offered it up—but this? This was something completely different.  
He had made you an unwilling, unknowing accomplice to his dastardly deed. Then he had forced you to stand there and witness every last horrifying second of it. The image of his pale blue eyes boring into you from a crimson-drenched face, his outstretched finger pinning you in place from across the room, was burned into your soul. 
Why had he forced you to stay? You had done everything he had asked of you, so why did he want to punish you so? Why did he feel the need to curse you with these images that would haunt you for the rest of your life?
The dread in your chest only grew stronger with each passing day as Tommy’s scheduled visit approached. Wednesday simultaneously came too quickly and seemed to take an eternity to arrive. 
Usually just before he was set to appear, you would put on your finest dressing gown and style your hair just the way you knew he preferred. Then you would wait by the entrance with a cigarette and a glass of whiskey ready for him. However, this time when Thomas Shelby walked through your door, you were seated on the edge of your bed in a simple shift dress that laced up the front. As he entered, your shoulders folded in on themselves as you shrank slightly away from him. Hesitantly, you glanced over at him out of the corner of your eyes. 
If he noticed anything was different, Tommy did not address it. He simply strolled over to the table, took off his coat, and began unbuckling his belt as he asked, “Shall we get to it then?”
You made no move to stand or begin removing your clothes. Instead, you wrapped your arms across your chest, and with a slight quiver in your voice, you whispered, “H-how can you just walk in here as if nothing happened? As if this were just any other Wednesday and nothing has changed?”
He paused, pursed his lips, and gave a slight shrug. “Nothing has changed. I’m here for your services, just as I am every week.”
“Tommy, I watched you murder an entire room full of men right before my eyes. And I-I helped you do it. I mean, I know I agreed to help you, but you never told me I would be leading them to you like lambs to the slaughter.”
His icy blue eyes took on an extra chill as he took a few steps towards you. With a slight bite to his words, he said, “You are neither foolish nor naive. You knew what would happen once you got to the stable.”
“I thought you would talk to them! Give them a warning, maybe send a message! Yes, I knew violence would probably be involved but I didn’t expect you to massacre every single one of them!”
A cunning smile, almost bordering on cruel, spread across his face. “Oh, but I was sending a message. Now anyone else out there who thinks they can fuck with the Peaky Blinders will see what happens if they come for us.”
“But why did you have to involve me? What did I do to displease you to the point you would—” Tommy’s face dropped as the first tears spilled from your eyes and you took a wet, shaky breath, looking to the ceiling as you blinked rapidly and tried to maintain your composure. You had to know. This question had been haunting you almost as much as what you had witnessed. “I haven’t been able to eat or sleep since that night. I feel as if I’m going mad. Every time I close my eyes, all I can see is blood, and I hear the howls of pain as those men died. Tommy, I have to know why you made me stay. What was the point? I tried to leave, but you… Why did you make me witness that?”
For a moment, everything was still as a suffocating silence filled the room. Why wasn’t he answering your question? Maybe he didn’t have an answer. Maybe it had been a spur-of-the-moment decision in the heat of battle. Maybe to him, it meant nothing at al—
“You had to stay because I needed you to see the man I truly am.” 
Your eyes snapped to his face as you looked at him directly for the first time, your brow furrowed in astonishment. “W-what?”
Reaching into his pocket, Tommy pulled out a cigarette. However, unlike usual, he didn’t light it. He just rolled it between his fingers thoughtfully. Then he tapped the end on the table before looking back at you. “The last time I was here, you spoke of the Peaky Blinders as if I was not part of them. As if I was not the one in charge of them. Just because I don’t talk about that business with you does not mean it isn’t my life. That I am not as guilty of their ‘transgressions,’ as you put it, if not more so. And I needed to remind you of that.”
“I have never had any misconceptions of who you are or what you do. Even before the first time you came to me, I knew the name Thomas Shelby and I knew the stories. And though the man I have come to know here in this room is far different from what I expected from those stories, they were never far from my mind.”
“But knowing and seeing are two very different things, eh?” Tommy asked. “You think I haven’t noticed the way you are keeping your distance? The way you are too afraid to even look at me directly?”
“I’m not afraid of you, Tommy.”
“Is that right?” He placed his cigarette down on the table and began slowly strolling over to the bed. “Then if it isn’t fear, what is it? Disgust? Loathing? What is it that you see when you look at me now?” He stopped as he reached the foot of the bed. 
Shaking your head, you said, “Tommy, please—”
He stepped closer until he was less than an arm's length in front of you. You dropped your eyes to the floor as he towered over you. 
“Look at me.” When you continued to avoid his gaze, he firmly grabbed your chin between his fingers and forced your head up so you had no choice but to look into his startling blue eyes. “Look at me! And tell me what you see.” You opened your mouth to speak, but he added, “The truth.”
Trembling, you gazed into his eyes and whispered, “I… I see a man capable of unspeakable horrors. One who has washed in the blood of other men, and will readily do so again.”
Tommy stared back at you without the slightest reaction to your words. Then he nodded softly and released your chin. You slid closer to the head of the bed and wrapped your arms around yourself as he turned. Silently, he walked across the room, grabbed his coat, and began to put it on. 
However, you weren’t ready for him to leave. Maybe it was his explanation for his actions, or maybe it was the idea that if you let him walk out that door you might never see him again, but something deep inside you had shifted as you answered his question. What you had said was your honest answer, but the truth was so much more complicated than that. And you couldn’t let him leave without hearing the rest of your response.
Taking a deep breath, you rose from the bed, walked over to him, and placed your hand on his forearm. He paused, one brow raised as he waited for you to explain yourself.
Wetting your lips, you softly said, “But I also see a man who only does these things for the sake of his family and their interests.” Tommy’s face remained emotionless, but you felt some of the tension ease out of him under your touch so you continued. 
“I don’t think I can ever support your actions that night, and it still makes me sick that you made me a part of it and then forced me to watch. You should have at least warned me or let me leave, because I didn’t need the lesson you tried to teach me. I hate what you did, but I understand why it was necessary. And as much as I’m loath to admit it… there is a sort of honor in it. Those men posed a threat to the lives of every member of the Peaky Blinders as well as their loved ones. And you shed their blood so that the blood of your own people wouldn’t be shed.”
Your hand slowly trailed up his arm and across his chest until it rested just over his heart. Feeling it drumming steadily against your palm, you softly added, “The Peaky Blinders are lucky to have someone like you watching out for them.”
As usual, Tommy maintained his mask of indifference, but not even he could control his heart. You felt it jump underneath your hand as the drumming began to pick up its tempo. He blinked, long and slow, before gazing at you once more and it seemed as if some of the ice in his eyes had melted ever so slightly. 
He placed his palm over the top of your hand where it still lay on his chest. “I hope you know that you are one of the people under my protection. I swore I would never let any harm befall you, even by my own hand.” His other hand reached up to cup your face, his thumb gently tracing the dark circles that had formed under your eyes this past week. “And yet it seems that is exactly what I did. I thought that by making you see me for who I really am, I would be protecting you. But I was so blinded by that thought that I did not see the hurt I would cause instead. I am truly sorry.”
You were utterly speechless. As long as you had known him, you had never once heard Tommy admit fault or apologize for any of his actions. He only ever brushed off these incidents and changed the subject, but for him to tell you he was sorry? He must honestly regret his poor decision.
“I-I forgive you,” you whispered, your eyes fluttering as his thumb continued to caress your face. “Just don’t ever put me in a situation like that again. Please.”
“I promise.” His hand slid down your cheek slightly until his fingers brushed against your lips. “Shall we seal it with a kiss?”
Faintly, you nodded, your head spinning with anticipation for what was to come. Tommy bent over until his lips lightly grazed yours, the feeling no more than a whisper. For a moment, your heart dropped in disappointment and you figured you had misread the situation. However, when Tommy’s hand wrapped around the back of your neck and he pulled you into him, the kiss became something else entirely. 
You sometimes kissed Tommy during your weekly appointments but even then, at the peak of pleasure, it never felt like this. Now, there was a heat, an urgency, a need behind his lips that made your knees grow weak. It was something you had never experienced with any man, let alone Tommy.  But the fact that it was him kissing you like this made everything more intense and overwhelming in the best way. You never wanted it to stop.
As you began kissing him back with the same intensity, Tommy’s hands came to rest on your hips and he guided you over to the table. Dropping into one of the chairs, he pulled you onto his lap, your legs spread on either side of his waist. You could feel him already hardening in his slacks and you wrapped your legs around his hips to draw yourself in even closer. 
He buried his face in your hair, his long eyelashes fluttering against the edge of your face for a moment until he turned his head. Slowly, his lips slipped across your jaw, breathing ghostlike kisses in their wake as his mouth traveled along the curve of your throat. When he reached your shoulder, he bit down lightly. This drew a soft gasp of pleasure from your lips and you felt him buck against you in response. 
Yet before his mouth could dip any lower, you tilted his chin so you could see his face. 
His usually pale blue eyes had darkened with lust, and the hunger you saw there made you ache deep within your core. This was a different side to Tommy, a side you had never seen before. While your time together was always pleasurable, you were providing him a service, and that was always apparent in his actions. But now… it felt like it was about you just as much as about him. It felt like something deeper than just his regular weekly appointment. Yet before you lost yourself completely to him, before you let yourself believe it could be more, you had to know for sure.
Holding your breath, you murmured, “What is this, Tommy? Business or pleasure?”
Brushing his fingers across your cheek, he responded in a low, husky whisper. “For me, it stopped being business long ago.”
A brilliant smile spread across your face as you surged forward to recapture his lips. Kissing you back with the same fervor, he stood, his strong arms supporting you so you remained wrapped around his waist, and carried you over to your bed. Without breaking your kiss, he laid you down and settled above you. 
Your eyes drifted shut and you could feel his nimble fingers begin fiddling with the laces across the front of your dress, even as his tongue slid past your lips. You arched your back to allow him easier access to the ties as the aching between your legs intensified. But after a moment, he froze and then suddenly pulled back. At first, you were worried you did something wrong, but as you opened your eyes, you saw all of his concentration had shifted to the laces still perfectly fastened in place.
As he continued his futile attempt to untie your dress, he cursed softly and muttered, “Damn you, woman, for wearing this infernal thing.”
Leaning forward, you smiled into his neck before teasing, your lips brushing across his skin, “Come now, Tommy. One might think you’d never undressed a woman before.”
He paused, his head tilting until he locked eyes with you. Then, without breaking eye contact, he curled his fingers into the fabric of your bodice and gave a firm pull, causing the dress to rip open down to your waist. You let out a gasp of surprise but Tommy swallowed the sound as his mouth found yours once again.
With your legs still encircling his waist, you pulled him in closer until his hips were flush with yours and you could feel him straining for release against your clothed core. Slowly, you began rolling your hips to rub against him, and you were rewarded with a deep rumble from within his chest.
“Not so fast, you little minx,” Tommy growled, nipping at your lip. “There’s no rush tonight. You wanted to know what this was between us? Well, I’m going to show you. Over and over and over, until you never again doubt that this is all about pleasure.” A soft shudder ran down your spine at his words, and he smiled. “Now, let me watch you take off what’s left of that dress, and then our night can begin.”
You had serviced Thomas Shelby more times than you could count. But that night became the first time you and Tommy made love… over and over and over and over, just like he promised.
Remarkably, you slept like a babe that night—the first real sleep you’d had since the night in the stable. Even though you hadn’t opened your eyes yet, you could tell you had slept through the rest of the evening and well into the next morning by the warmth of the sun trickling in through your window onto the bed. The sunbeams felt heavenly on your bare skin, and you hummed contentedly as you basked in the glow.
Stretching to help wake up your sluggish limbs, you opened your eyes and found yourself staring directly into a pair of familiar glacial eyes. 
“Tommy!” You were instantly wide awake as you scrambled out of bed, dragging the bedsheet along with you to wrap around your naked body. “Wha-what are you still doing here? I mean, you don’t stay afterward. You never stay afterward.” You felt your face grow hot as you realized now that you had stolen the sheet, he was lying bare in your bed, fully on display. It took all of your willpower to keep your eyes locked on his face and not let them drift down the muscular planes of his body or linger on the hardness between his legs.
Completely unfazed by your reaction or his naked state, Tommy propped himself up on one elbow. “You said you hadn’t been sleeping this past week yet you looked so peaceful last night I didn’t want to disturb you.”
You blinked several times in quick succession as you tried to process that information. “So you stayed… for me?”
He shot you a coy grin. “Yes, I stayed for you. Though my motives may not have been as selfless as you make them seem.” His expression softened and he opened his arms, inviting you back to bed. “The truth is, I found myself more at peace with you by my side than I have been in a long time.”
You relaxed slightly, the motion causing the sheet to slip down until your breasts were almost completely uncovered. Tommy’s eyes shifted down to them, and you had to bite your lip to keep from smiling. Apparently, for once you had more self-control than the great Thomas Shelby. But that feeling of superiority didn’t last long; your nerves gripped you once again as you faced the situation at hand.
“So, if that’s true… what do we do now?” you asked hesitantly, worrying the sheet between your fingers, afraid his answer might have changed in the bright light of day. “Do we resume business as usual, or has this become something else? Something more?”
“That’s up to you, isn’t it?” Tommy reached over to the small side table where his pack of cigarettes and lighter were waiting and pulled one out. Once it was lit, he took a long drag, held it, and slowly let the smoke stream from his lips. “I told you last night that this has not been business to me for a long time. I’ve respected what you do but if I had my way, you’d be mine and mine alone.”
This time, you didn’t even attempt to mask the smile spreading across your face. In fact, you embraced it until it was shining almost as brightly as the sunlight dancing across the room. “And what would you do with me, Thomas Shelby? If I was yours and yours alone?” you teased, leaning forward to show off even more of your exposed breast.
“I’d give you the fucking world.”
All of your playfulness evaporated with your sharp intake of breath. You scanned Tommy’s face for any indication he was joking or teasing you back, but there was none. As you locked eyes with him, all you saw was sincerity. It was truly how he felt. 
Trying to project the same level of honesty back at him, you whispered, “I don’t need the world, Tommy. As long as I have you…that’s all I’ll ever need.”
“Well, you are in luck, love,” he said as he stubbed out his cigarette on the edge of the table. His eyes swiveled back to yours and he added matter-of-factly, “Because you’ve got me.”
You nodded, tears slightly blurring your vision. “And you’ve got me too. For forever. Starting right now.”
You dropped the sheet to the floor, pausing for just a moment to let Tommy take you in. Then you climbed back on the bed and into his waiting arms. 
I am toying with an idea for a Part 3 so let me know if you would be interested!
Tag List: @lucien-calore, @zebralover
87 notes
hunterisnearme · 7 months ago
Note
(Post-ask update) Ok, so I realized you made more posts about Greyed Out than what I saw from looking at a few tags. A general plot summary would still be nice though.
Whew! I can try my best to explain the Grayed Out series (or Greyed Out, since I keep switching it up here and there, but there’s no right or wrong way of saying it).
Grayed Out, or just "The Grayed Toons," is a prequel to my other side series called "Defect," which is basically the story of how a cog becomes a toonbot and tries to regain the memories she lost during her younger years.
Unfortunately, I lost a HUGE chunk of page info due to my hard drive dying on me, so a few pages are lost media. Defeated, I placed this comic on the back burner and worked on Grayed Out!
Grayed Out ventures into the other perspective of a faction switch-out, mainly involving the Grayed Toons: Kat, Ruff, and Snaps. They are led by a head leader named Richard Hendrick Clark, better known as Reality Check, a Boardbot Manager who wishes to turn toons into obedient peons for C.O.G.S Inc.
Grayed Out digs into the dubious incidents linked to Reality Check and brings in a subplot involving the character Rayne Velvetenne, who eventually turns into a Toonbot named Red in "Defect." Reality Check works as a side henchman and engineer for Robert Cyger, whom he secretly despises. The Grayed Toons are Reality Check's goons, whom he sends out to try and push toons out of homes and playgrounds throughout their story. Most of this involves robberies, bullying, and even property damage.
With this rotten behavior persisting in the series, the Toon Resistance Force gets involved and begins to discover the dark side of C.O.G.S Inc.
Furthermore, Grayed Out is also a big metaphor for how the staples of modern childhood are being "soiled," making it difficult for kids to be kids. In a way, it plays on how society "force-ages" the mental states of kids in today's world. This involves exposure to vulgar themes, the casual normalization of drugs and other forms of violence in media without moderation, and child neglect. It's a whole shtick I've spitballed into the series, and it will remain a consistent thread of metaphor throughout the story.
I hope to persist with the story for as long as I can! I just pray I don't lose a good chunk of stuff like I did with Defect. At least we get to see Ciao AI in "Defect."
23 notes
yappingsys · 3 months ago
Text
TW: This post contains serious material about impulsive and intrusive thoughts, including examples. Some examples include mentions of: abuse (including of animals and children), stabbing, and domestic violence.
If I’m missing any warnings, let me know. That also goes for any false information I’ve included or been misinformed about.
Can we, like, talk about ACTUAL intrusive thoughts? They’re so scary, and all I see is “Omg I dyed my hair last night randomly… intrusive thoughts won!” Like, no, that’s an IMPULSIVE thought and action.
Impulsive Thoughts: “Impulsive thoughts are sudden, involuntary ideas or urges that can lead to immediate actions without considering the consequences. They can be spontaneous, unplanned, and often accompanied by a strong sense of urgency. Impulsive thoughts can be triggered by stress, anxiety, or things around us, and can be more aligned with our desires.” - Google’s AI Overview
“Impulsive thoughts are sudden, intense urges or desires to act on a whim, often without considering the potential consequences. These thoughts arise spontaneously and can be powerful and compelling, prompting people to make decisions in the heat of the moment without adequate planning or reflection.” - treatmyocd.com
Intrusive Thoughts: “Intrusive thoughts are unwanted, involuntary thoughts that can be unpleasant, disturbing, or upsetting. They can appear in the form of memories, sensations, urges, or mental pictures, and can be violent, sexual, or related to fears of embarrassment or doing something inappropriate. Intrusive thoughts can make it difficult to think about other things, sleep, or carry out daily activities. They can also cause feelings of worry or shame, and the more someone tries to push them away, the more they persist.” - Google’s AI Overview
Impulsive Thoughts Examples:
“I should quit my job today” / Leaving or changing jobs without much planning
Buying something you don’t need
“Knock that cup off the counter”
“Cut all your hair off” / “Change your hair color/style”
Intrusive Thoughts Examples:
“What if I stabbed my wife/husband?”
“What if I turned into oncoming traffic?”
Fear of / thoughts of hurting a child/animal
Blasphemous Thoughts
Sexual thoughts, typically about taboo subjects
“I have a serious disease/illness”
“I’m abusing them”
Please please learn the difference :(
9 notes