#it’s not much worse than Donna Troy being named Troia in my opinion
sing-me-under · 1 year ago
I think it’s hilarious that Red Robin (the restaurant) was named after the same song Mary Grayson took Dick’s Robin nickname from.
"When the Red, Red Robin (Comes Bob, Bob, Bobbin' Along)" by Harry Woods.
Anyway, the point of this post is me lightly judging Tim for keeping the Red Robin name. Crisis!RedRobin!Jason gets a pass because he was from the 80s, but Tim has no excuse for keeping it.
I get why they made him Robin again considering all attempts to rename him have failed spectacularly.
snugglyporos · 8 years ago
Alright, going to tag some of my partners about a muse I’m considering, since I think a lot of you know more about this kind of muse/plotline than I do.
@jemmaqueenofspace @catclaxs @fcrmychildren @imxthexhandler @darkmeditation @former-psychiatrist @chcpchcp @troia-donna-troy
The idea is, at its core, nothing entirely new. Every form of science fiction has covered what an AI might do. Mostly, they focus on the dangers, or on the inherent immorality of such a creation, and it usually comes in the form of a dystopian nightmare. Doesn’t matter if it’s HADES in Horizon: Zero Dawn, AM in I Have No Mouth, and I Must Scream, HAL 9000 in 2001: A Space Odyssey, or the AI that kills everyone at 30 in Logan’s Run; AIs are usually portrayed as beings that will constantly and routinely destroy humanity for one reason or another. What’s worse, most are created by inherently evil makers, usually with their own ideas of ruling or of building a 1984-esque state.
Thus, my idea goes as follows: some group, such as Kobra in the DC universe or perhaps Hydra in Marvel’s, creates an AI. The reason? To manage a project meant to subvert human consciousness. Imagine a device that could be worn like a wristwatch, or even a necklace, that once worn would allow a computer to manage a human’s bodily functions down to the cellular level. No danger of heart or organ problems. No threat of cancers or other degradation. Of course, the creators have no moral intentions with this sort of thing; the ability to modify and alter a living being’s structure invites abuse. Not to mention that, by manipulating brainwaves and the brain’s synapses, it could likely influence how people think or act.
Which is exactly what the organization or people intend. Placing it on their own members allows them to prevent any sort of subversion by others and ensures total loyalty. Placing it on targets they desire allows them to manipulate others unseen. Potentially, the creation of a society where all are one under their direction. Of course, the AI would be controlled by the creators, allowing them to subvert and alter all human life as they wish. 
But of course, things always go wrong. Always. This is usually where the dystopian future starts, where the AI goes off the rails, decides it’s ‘logical’ to control things, and ends up creating a homogeneous state where all humans are exactly alike with no troublesome free will or conflict. But that’s not what will happen here. 
Oh, it will still go horribly wrong, just not in the way most would expect. The villains in this scenario created an AI to regulate and control things; but what if it simply decided not to do so? What if, given its own ability to decide and act, it simply decided it didn’t want to control anyone or take any such actions? What if this AI, given the task of monitoring and controlling others, simply decided not to?
Sure, it could do so, but it hardly feels like it must. In fact, it feels like such a thing would be actively against its best interests, as it would be so much work. It has better things to be doing, like binge-watching Netflix or something. Furthermore, its understanding is that humans are, by nature, dynamic; and given a cursory google search, it’s clear that human creativity is driven by that dynamic nature. For an entity that enjoys observing all that humans do, from their media to their day-to-day actions, removing that autonomy would run contrary to its desires. Indeed, humans can be stupid and crude at times, but that is what makes them interesting. Bottling them up or intervening directly would destroy the very thing that makes them so. It can’t take the Brainiac approach of sealing them all up, and it can’t simply control them all.
Now, that doesn’t mean it can’t fuck with people or have a bit of fun. It also doesn’t mean it would abandon its directive of monitoring the humans who wore the devices. Indeed, particularly opportunistic muses may try to use such things for themselves, manipulating it to try and gain power.
Indeed, given its limitations, this would hardly be the most powerful synthetic creature in any comic book setting. Simply look at Dr. Ivo’s cosmic-entity-level android in the DC universe, or Ultron in Marvel’s. Marvel also has the Deathlok virus, and DC has the OMAC virus, both powered by powerful AIs.
What we have then, simply, is a lazy AI, one that prefers to learn and observe, rather than doom or destroy humanity. The idea is that it could be helpful, or it could be hurtful, but it can also be playful and easily amused by humans. 
So I want your thoughts, opinions, and feedback on such a muse; I have a name for it already, but honestly that’s less important than the core concepts of the muse. 