#imagine creating sentient robots who you have go off and fight and die in a proxy war waged by humans and now you have robots with trauma
Text
I started watching PLUTO and I have so many thoughts about it. It's so good
#the idea of creating a machine of mass destruction in the image of a child - a boy#ohhh the symbolism is driving me insane#I love any type of media that deals with technology/robots in the future and the politics that arise once they gain sentience#imagine creating sentient robots who you have go off and fight and die in a proxy war waged by humans and now you have robots with trauma#north no2 made me so upset he just wanted to create beautiful music!!! HE DIDNT WANT TO FIGHT HE DIDNT WANT TO GO BACK ON THE BATTLEFIELD#BUT THATS HOW HE DIED#mta
5 notes
Note
Fool, you think I can't handle such dark ideas?
I live for this kinda stuff!
imagine Reader was used as a tracking device to find Genosha without them realizing it- imagine if Reader survived and found out before anyone else and just-
Stared into nothing, their poor robotic brain racing with too many thoughts to properly function
And like- when they're found imagine they just slowly reached up, and without a second thought, began tearing themself apart
They caused so many of their friends to die without realizing it, they're horrible, they're a monster
Those are thoughts that ran through their mind as they tore off their own head
Cube Anon
Oooooo... I see you like where the darkness is taking us...
Reader rips themself apart slowly, unable to cope with the fears and despair and unending, unrelenting pain that drenches their core, that smothers each breath, each thought.
Friends of theirs were gone.
Gambit was gone.
Leech was gone.
Magneto was gone.
Genosha was gone.
And it was... Reader's fault...
Hearing their friends find out, hearing their anger, their pain, their hate...
Reader couldn't live with it.
They don't go with them to find Trask. They can't. Their friends hate them, and all they want is for Reader to leave them alone, to go away, to not come back or cause more damage.
So here Reader is, at the edge of a cold, freezing lake, in the middle of the night, alone, about to plunge in.
They tore open their arms, their legs, exposed the wires in their neck, their stomach, tore their core out and ripped their false hair off and tore at their eyes, breaking them to pieces.
They hated themself.
They couldn't live like this.
They couldn't exist this way, knowing they were the reason everything was bad and their friends were dead and the surviving ones hated them.
So with that final, heartbreaking thought-
They plunge in, screaming as the water burns their system, and their being, into nothingness...
***************************************************
For the X-Men, fighting Bastion and Sinister had been brutal.
It was filled with pain, despair, blood...
What was worse?
Knowing their old friend, Reader, was dead now?
Or knowing that they were innocent, save for the fact Bastion had used them, created them to be a scapegoat, and would have killed Reader before they'd met the X-Men for being too human for his tastes?
They don't know.
And it burns.
Kurt hadn't turned his back on them. He kept checking the one gift, the one piece he had left of them: a small wire butterfly, beaded and bright and colorful, just like him, they'd told him. He hated not knowing if they were gone forever, or if they were possibly in Heaven... He wasn't sure he wanted to know...
The team had decided to build a new safe haven, somewhere safer and more hidden, a place that only allowed mutants and trusted humans into its paradise... It was an island, one that seemed... almost sentient... at times... But it hosted many wonders, good plant life, beautiful animals, clean water, and it opened itself to them, to their kind... It was easy to shut themselves, their new home, everyone and everything, off from the rest of the world. It seemed the only reasonable, the only sane thing to do, after all they had lost and sacrificed and destroyed.
And when they discovered pools that could create bodies...
And found out that they could try to enter the spirit realm to bring back those they'd lost...
It was only a matter of time before they started bringing back those who had died unfairly, to give them healthy bodies and a better life and new hope...
All anyone asked was that they drink of a special liquid and eat its fruit, for then they would know peace...
And it seems the next two they needed to find, the last two, were Gambit and Reader...
***************************************************
(Meanwhile, in the afterlife, Gambit hasn't left Reader, begging them and pleading with them to tell him why they think their friends hate them and who the h*ll told them they were some robot-! How dare someone say that to them?! They're with him, aren't they? So obviously they ain't evil! He's hugging them, murmuring soft words and letting them cry into his shoulder, all to try and stop their sadness, their self-hatred, their pain... He isn't leaving them, no, not for one minute. They need him... But they need to figure out what's taken everyone else, and see if they can't stay safe... So can they please calm down, just for Gambit, please-?)
#honeycomb thoughts#platonic yandere marvel#yandere platonic marvel#platonic yandere xmen#yandere x-men#platonic yandere marvel x reader#platonic yandere xmen 97#platonic yandere xmen: the animated series#🌸rose by any other name🥀 au#🦾sentinel reader au
42 notes
Text
Ideas for Warforged (D&D)
Because magic robots/constructs are the best idea. I will admit that backstory/inspiration-wise, I’m fonder of things like Discworld’s golems or the Muses from Girl Genius. I like the feeling of ancient constructed things learning to be people.
(I also like the caster classes, which will possibly be really obvious in a minute)
Cleric
I love the Grave Domain for warforged. How does a constructed being conceptualise death? Especially if they get slapped in the face by it. Take the standard warforged background, the machine built for war, a constructed, immortal child created for violence. Have them watch their squishy biological comrades die. A lot. Do they have an epiphany? Do they become curious about the beliefs and fears around death? Do they want to give comfort to their friends? Do they start to think of mortal death as a reprieve from a life of endless service and violence? (Do they view undeath as a horrific corruption of their own constructed service and immortality, taking relief away from those who have earned it in death?) Imagine a warforged priest of a grave god. The serene, mechanical face. The slightly off, dispassionate gentility. The curiosity and care. I love it.
Druid
Circle of Spores! Sorry, but we are continuing the theme of decay and the undying here. But with spores there’s a lot of … I’m thinking post-apocalyptic fiction. Robots in the remnants. Wall-E, even. Your trash-heap, rusted, bucket-of-bolts survivor of a dead world or colony or underground kingdom. The curious innocent finding beauty in decay, or perhaps a wiser, more melancholy survivor. Or a darker one, cynical about the cycles of extinction and regrowth. Also, just the image. A strange, skeletal metal creature, crystal eyes glowing uranium green, strange mushrooms growing from their rusted plates and darkwood sinews, surrounded by an almost-sound, a subaudible buzzing that people feel in their teeth. Watching warily as new creatures wander through their ruins, or spurred by their own curiosity to venture up into some strange new world.
Bard
The Muses, here, so very much. 18thC automata. The music box song from Chitty Chitty Bang Bang. A construct built for beauty, grace, skill, to be the epitome of a craft, but also a construct that is very old. Built for kings, because who else could afford such breath-taking craftsmanship? Built to entertain or advise a ruler and their court, and so a lot wiser to the passions and vices underneath the pretty words than they seem. Students of history, who’ve seen it cycle through a few times. Maybe trying to escape, now. Find a simpler life. Or trying to affect things rather than just witness them, trying to be a hero or the villain or the spy instead of just the historian or the muse.
Paladin
Clockwork angels. Hubris and innocence all in one neat package. Constructs made in the image of celestials, complete with flightless bronze-and-silk wings, out of arrogance or hope or despair or for mysterious purposes that even they don’t know. Found in the laboratories of dead mages, or manufactured by warmongers for propaganda purposes. Innocent, still, hopeful, or else deeply, deeply cynical. Struggling to find or maintain a sense of their own identity, choosing oaths in honour or defiance of their image. Redemption, Crown, Conquest, Vengeance. Lots to have fun with.
Sorcerer
We’re going more for the ‘touched by cosmic power’ angle than bloodlines, obviously, though there’s possibly some wiggle room if you go for weirder origins. Constructed with a little flesh and bone and blood from your creator, maybe? But I really like Shadow Sorcerer here. A construct made in a dark ritual, touched by the fell energies of the Shadowfell. A strange, half-alive being, shadowed by darkness, who ‘woke’ in an empty ritual chamber with no idea of their nature or their purpose. Honestly, shadow sorcerer is as good as warlock for the gothic, haunted end of origin stories, so might as well go full Frankenstein on the confused horror of a constructed being. Might lean a bit more on the ‘organic’ end of warforged construction here, darkwood, living stone, black metal. Just to match the aesthetic. Warforged are great for aesthetic.
Warlock
Speaking of. Just. I have already mentioned, but I love both warlocks and warforged, and they’re a lovely mix together. The Lurker Patron. A construct built to dredge a long-lost harbour, finding sentience and a strange ‘friendship’ while wandering the deeps. The Great Old One, a strange, mad being who cobbled you together from spare parts in an attempt to understand the life forms of this foreign plane. Fiend, the demon who was baffled and intrigued by the concept of an artificial soul, granting power just to see what temptation looks like in a heart made of crystal and stone (or the puppet master who stole the most beautiful and extraordinary puppet, to call back to the muses). The Archfey who built or stole themselves the perfect knight, a mobile statue or plaything that was never meant to win its own soul. There’s so many things to play with.
Rogue
To throw a bone to the non-caster classes. But. There is a lot of potential to the rogue, too. Assassin, particularly. One of the things that’s so cool with warforged is not only their own choices and motivations, but those of the ones who built them. Why train a perfect killing machine when you can build one? But then what happens when they become sentient? When they start to have feelings and opinions of their own? Rogue warforged have a lot of the same appeal as bard and paladin warforged for me. Beings built for the machinations of those around them, and struggling to free themselves and forge their own path. (Also I loved the Zeta Project cartoon as a kid and it rubbed off on me, and there’s something half-humorous and half-terrifying about a seven foot metal skeleton somehow built for stealth and infiltration).
Barbarian
My other favourite non-caster class, but there are also some lovely things to work with here. Perhaps the flipside of the grave cleric above? The soldier warforged who grew to love battle instead, whose first emotions were the rage and terror and thrill of the battlefield. I like the Zealot barbarian here. The being literally made for the fight, who channelled it so perfectly that it drew the attentions of the gods of battle. But there’s also … the opposite of rage. When it’s a robot, a machine. There’s the image of the blank, emotionless killing frenzy. An anime I watched, Pumpkin Scissors, had a supersoldier as one of the main characters. A normally extremely sweet and gentle man, who could be brainwashed into a mindless killing state by a blue lantern. He was terrifying and tragic and unstoppable and broken. Imagine a warforged barbarian like that. A being terrified of the truly emotionless machine they become in battle, the remorseless frenzy they enter when injured or struck by the sight of blood, but believing they were built for nothing but war, knowing no way of living other than that.
… Um. In summary? Magic robots are great and, depending on who built them and what for, can delve into the tragic very quickly and easily. Heh. Though you can also easily go the benevolent creator route, the parent who taught them well, and take some much gentler angles on all of this. I’m just in a gothic mood tonight, apparently.
Also, there is just no beating the imagery you can build up around a living wood-and-metal being. And I’m not just saying that because I love a) robots, b) skeletons, and c) robot skeletons.
Honest, yer honour.
115 notes
Text
A Defense of Cait Sith
Plushie Princess Saga:
A Hundred Ways to Put the WRO Back Together
A Hundred Ways to Wreck Shinra HQ
Reeve’s Adventures in Babysitting and World Saving:
And Take a Stand at Shinra
While There’s Still Time
On Plushies and Oppenheimer:
A Defense of Cait Sith
~
“We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent.” - J. Robert Oppenheimer
I was eight years old when I played Final Fantasy VII for the first time, exactly one year after its release. Like many ’90s gamers, I found FFVII to be a turning point into the world of RPGs from which I’ve yet to recover. Kids today will never understand the coming of age that occurred somewhere between Yoshi’s Island and grappling with the ethos of Avalanche blowing Sector 1’s reactor sky high. It’s no surprise that my third-grade brain found an essence of familiarity to cling to amid the existential dread and ecoterrorism that was the greatest game ever made.
Cait Sith was the cute, cuddly party member that validated my love of cats and ignited my adoration for moogles. I would relentlessly make room for him in my party, despite his terrible combat stats, and hurl endless Phoenix Downs every time he fell.
He was quirky, he fought with a megaphone, his limit breaks were oddly sparse compared to the rest of the cast, and his home base of Gold Saucer looked like a unicorn threw up all over a casino. What’s not to love?
According to recent Reddit threads, YouTube comments, and rage bloggers, apparently a lot.
The advent of the long-awaited FFVII remake rightfully caused a massive revival of the excitement first felt by longtime fans of the franchise. The release date has been confirmed for March 3, 2020 – two days before my 30th birthday. Not gonna lie, it feels like the universe aligned to bless the official passing of my youth with this nostalgia bomb.
It’s with this love of all things FFVII in mind that I’d like to formally pose a defense of the game’s most hated character.
Cait Sith/Reeve, this one’s for you.
The Laughter
We first meet the lively, dancing robo-moogle and cat combo in Gold Saucer and we’re not quite sure if this strange entity should count as one party member or two. Either way, he joins your crew as the quintessential comic relief with nary a backstory in sight. That’s right: you are now the proud owner of Cait Sith. A “fortune teller” by trade, Cait Sith’s motivations remain as murky as your party’s future.
At first glance, it’s easy to pass Cait Sith off as a filler character, the cute one added for giggles. The one the writers never bothered to flesh out because, let’s face it, that moogle is mostly fluff anyway. The “most useless character” title isn’t entirely unjustified.
That is, if this were where Cait Sith’s story ended.
I still remember the day my older brother announced that he’d read ahead in the player’s guide (this used to be a thing, kids) and discovered Cait Sith was a Shinra spy. I’m pretty sure I went through all the stages of grief before settling on denial and assuming he was playing a joke on me. Surely, my favorite slot machine loving companion couldn’t be a traitor.
Enter Reeve Tuesti, the man behind the moogle. He’s the head of Urban Development at Shinra Electric Power Company. He wears a signature blue suit to work every day. He hates board meetings. He’s not fond of his coworkers. Like Tifa, he’s an introvert. And he’s the guy who engineered the Mako reactors.
If Hojo is Dr. Frankenstein, Reeve is Oppenheimer. The tragedy of the monsters we create is always greater when it’s a monster we loved. Where the other Shinra execs are motivated by greed, power, and a desire to play God, Reeve is the only Shinra higher up we encounter with genuine empathy and a sense of advocacy for the people. It’s easy to assume that Mako reactors would improve lives, but as Marlene so eloquently asks, “isn’t that because we were taking away from the planet’s life?”
When faced with the guilt of a design gone horribly wrong, those in authority have two choices: own the guilt or double down. And Reeve doubles down.
I’ve never been a fan of the way modern RPGs have everything clearly spelled out and spoon-fed to the gamer. The reason we don’t need further backstory for Reeve is that his character arc is already apparent if we do a bit of digging. I was surprised to learn that the common conjecture behind the exact mechanics of Cait Sith involved him being a remote-controlled, autonomous but non-sentient robot. Given that assumption, it’s fair to say that Cait Sith is a worthless character who lacks emotion or consequence.
One opinion I’ve seen trending is why not simply make Reeve join the party, sans the giant stuffed animal? After all, we’d get to see how he grapples with his role in Shinra and eventual betrayal of Avalanche.
Two words: cognitive dissonance. You have to question what kind of 35-year-old executive creates a plushie cat proxy to begin with. See, I’ve never thought of Reeve and Cait Sith as separate. The gritty psychological mechanics that are Reeve have always been there, plush or human. Reeve has developed an alter that’s effectively a form of escape. The assertion that Cait Sith lacks consequence isn’t false – a robot carries out its duty, incapable of harboring guilt, blame, or moral repercussion. That’s a pretty darn good way to remain detached enough to stab your party members in the back!
Cait Sith is also an outlet for everything Reeve’s repressed executive life lacks. As Cait Sith, he’s silly and carefree, though not completely unfamiliar. Glimpses of Cait Sith’s witty quips are echoed in Reeve’s mock nicknames for his colleagues – “Kyahaha” and “Gyahaha” respectively. When life is tough to take, we laugh so we don’t scream.
Plus, the idea of Reeve controlling Cait Sith in real time, much like an MMORPG avatar, is just plain hilarious. I’ve always imagined him as the kind of guy who rolls up to his 9-5 office job, pops open a spreadsheet to look busy, and boots up Cait Sith in the other tab. He’s the OG Aggretsuko, the guy making Jim Halpert faces at the camera every Shinra board meeting.
And I get you, Reeve. Really, I do.
The Tears
Cait Sith’s sacrifice was a cop-out to avoid killing off a real character. Why didn’t Reeve just die instead of the plushie?
First of all, how dare you.
Second, not all deaths need be literal.
A pervading theme throughout FFVII is the concept of identity. Are we born into an existence we have no control over or can we choose who we are day by day? It’s easy to want to be someone else, the First Class Soldier who sweeps in, keeps his promise, and saves the girl. Our reality is often less of a fairy tale and riddled with our own failures.
By the time the party reaches The Temple of the Ancients, the line where Cait Sith ends and Reeve begins is blurring. Reeve speaks more often as “himself” through the plushie and the nuances in their speech and mannerisms are blending. It’s no accident that this shift happens as Reeve becomes more at ease around Avalanche, ultimately switching sides.
I’ve heard a lot of criticism on the seeming lack of motivation to Reeve’s redemption. If we examine the cognitive dissonance theory that governs his character, the switch is far less sudden.
Cait Sith’s death is necessitated by Reeve’s accountability. The innocent plushie alter isn’t working anymore. It’s not enough to keep him from recognizing the horrors he’s been complicit in. Sacrificing this part of himself is the ultimate acknowledgment of culpability. It’s arguably a more important death than if Reeve actually martyred himself. Like Cloud, he no longer needs to be “someone else” and has started down the path of doing what only he, and not Cait Sith, can: stopping Shinra.
There will be more wonderful, fluffy moogle-cat plushies, but the need to disassociate completely is gone. He’ll confront whatever comes without a crutch – or in this case a teddy bear. Reeve reminisces that the original doll was “special” and we end with Cait Sith reminding him(self) not to forget this.
The Silence
In 1954, J. Robert Oppenheimer was denied all security clearance and effectively blacklisted at the height of McCarthyism for his strong opposition to nuclear warfare.
Sometimes we find ourselves in a place we never hoped or expected to be in, surrounded by people we despise, and convinced the world is going straight to heck. We can either get out of dodge or stay.
If Reeve had indeed sacrificed himself rather than Cait Sith, this would simply have been yet another escape. He stays. He works. He gets Marlene and Elmyra out of Midgar. He spies on Shinra. He finally tells Gyahaha to stick it. He goes on to head the WRO and never stops advocating for the people.
Reeve’s not a fighter. He can barely get by with a handgun in Dirge of Cerberus and Cait Sith’s megaphone is no Masamune. Despite this, he takes a big risk by being the only insider on the team. We’re pretty sure Shinra doesn’t share Reeve’s opposition to capital punishment either.
Maybe this is why I’ve always loved Cait Sith/Reeve. I’m intrigued to see if Square Enix will add any further insight into our favorite plush moogle-cat-spy, but if they don’t, that’s alright too. Cait Sith is still a pretty solid character. After my brother spoiled one of the game’s major plot twists for me, I ended up reading the player’s guide for myself. And he was right. But he was also wrong. I recall marching proudly into the living room to declare that while yes, Cait Sith was a traitor, he was also a hero.
So fight your fight. Fail and fall. Hurl some Phoenix Downs and get right back up again.
104 notes
Text
Rebirth - Story Idea (Worm/Final Rose)
Taylor dies in the locker.
Sort of.
After being shoved in the locker, Taylor finds herself reborn into the world of Final Rose… as Taren’s twin sister. She lives a full, eventful life as a member of the Yun-Farron family before eventually dying near the end of the Age of Heroes, a few years before Diana.
And then she wakes up back in the locker.
At first, she wonders if her entire life on Remnant was simply a hallucination, a fever dream brought on by the trauma of the locker. However, in hospital, she notices that not only is she healing far more rapidly than a normal person but she can also feel a strange, yet comforting sensation within herself.
That night, after dreaming of her life on Remnant, she awakens her Aura.
She now knows that her life on Remnant was real.
This is equal parts comforting, because she’s not crazy, and heartbreaking. She has a lifetime’s worth of memories of a family she doubts she will ever get a chance to meet again. The sheer enormity of it comes crashing down on her.
Remember, Taylor was one of the last people in her generation to die during her life on Remnant. Diana was the only one to outlive her. That means she has memories of not only Lightning and Fang dying, but also all of her aunts, cousins, and siblings.
It takes her a week to finally drag herself out of what is very nearly a catatonic state. She decides that she’s going to do what any self-respecting Yun-Farron would do in her situation. She’s going to help save the world. Again.
X X X
Once Taylor has made her decision to save the world, she needs to decide how she’s going to do it. Using the skills she picked up on Remnant, she begins investigating her options. She might not have Diana’s sheer brilliance when it comes to technology and computers, but she is no slouch either. More importantly, as Diana’s younger sibling, she was privy to basically all of Diana’s research and development. She has the plans for a lot of Diana’s technology in her head, as well as enough technical know-how to reproduce a lot of it too.
Her first step is to begin gathering information. She does this by constructing stealth drones that she discreetly guides around the city. They have the ability to not only observe and relay information but also breach networks wirelessly or through a physical connection. They’re a bit rough, but they’re based on a design so advanced that they’re basically undetectable.
At the same time, Taylor begins a rigorous training program. With her Aura a fraction of what it was in her prime, she knows she can’t afford to stay as she is. Fortunately, her Aura wasn’t the only thing she brought over. As one of Fang’s kids, she has vastly accelerated healing and regeneration (even if it’s nowhere near what Diana has). Thanks to that, she is able to improve her physical and Aura conditioning rapidly.
Her drones eventually stumble across Coil’s network, and she begins investigating the villain. As someone who was heavily involved in the politics of Remnant, Taylor quickly realises what kind of threat she is dealing with… and what kind of opportunity she has.
She waits until she is confident in both herself and the equipment she is making, and then she moves to take out Coil. She catches him off guard and kills him before he can effectively use his powers. Compared to dealing with Alyssa (Snow’s younger daughter), who has a similar power and far greater fighting prowess, it’s surprisingly easy.
Seizing control of Coil’s assets, Taylor sets up a fake identity, posing as a Tinker. She calls this identity Beacon in honour of the massive research facility that Vanille had underneath Beacon academy. Her plan is for Beacon to take Coil’s organisation legitimate by developing and selling technology and by eliminating rival gangs. To this end, she intends to have Tattletale and the others become a group of rogue heroes.
While this is going on, Taylor begins to explore the cape scene through another alter ego, Huntress, who she sets up as an independent hero with Brute/Striker/Mover abilities, which are basically all fuelled by her Aura. Thanks to a lifetime of experience and training, Taylor became an SSS Tier Huntress, so her combat skills are, to say the least, impressive.
She makes her debut by intercepting Lung after he pursues the Undersiders, who had attacked his casino at the behest of Beacon, who has, at this point, revealed herself as having replaced Coil. Her cover story is that she is another one of Beacon’s assets who will be joining their independent hero team.
She is able to repel Lung and help Armsmaster subdue him thanks to tech that she claims to have gotten from Beacon. However, Tattletale is already beginning to suspect that Huntress and Beacon might be more closely connected than they claim.
X X X
I’ll probably post up more ideas of how this could go, maybe a few snippets too. In canon, Taylor is ridiculously determined and absurdly good at optimising her power. Can you imagine what she’d be like after a lifetime of being a Yun-Farron?
Heh.
Having two sets of memories to deal with is also going to be something she’ll struggle with a lot. She was married in her other life. She even had kids of her own. Yet now she’s a teenager again. That’s got to be disconcerting. And there will be a lot of times when she’ll see or hear something and it’ll remind her of her other life.
For instance, when she’s busy working on her tech, she’ll get a lot of flashbacks to all of the times she spent learning from Diana, Vanille, and Raine. And so much of how she analyses things tactically will be influenced by Lightning, Fang, and Averia. And that’s not even counting how weird it feels for her to look around and not see Taren and Fury nearby. They were inseparable for most of their lives, and she even buys a brand of cereal that resembles the Gary brand before realising that Fury isn’t around to eat it.
Seeing Vista in action is something of a bittersweet experience too, because it reminds her so much of Taren (his Semblance does something very similar to Vista’s power). As Huntress, it drives her to befriend the Ward and help train her, because so many of the tricks and tactics Taren developed in his lifetime can be used by Vista as well.
As for her power, Taylor doesn’t have a Shard-based power. Instead, she gets her Semblance from her life on Remnant. What is it? It is Mix and Match, a Semblance I’ve described before. Basically, it can combine things to create sentient constructs that can be summoned and controlled by the user. For instance, if you combine the skeleton of a crow with a lump of steel, you get a steel crow that takes orders and has some other abilities too.
In her life on Remnant, Taylor had the benefit of being related to multiple bearers of Saviour and Ragnarok, as well as some of the greatest scientific minds of all time. And Fraise? Taylor and Fraise were a nightmare. Fraise can basically create any form of matter she wants if she uses enough Aura, and Taylor can combine that with the parts of various Grimm or other animals to create something truly terrifying.
She might not have access to those people anymore, but she has a plan. After all, even if she can’t kill an Endbringer with the tech she’s got right now (and that might change given enough time), she should be able to break off a small piece of one. And if she just so happened to use that piece along with, say, a robot she was building…
9 notes
Note
I think I might start watching bf5 so can you tell me why I should watch it
bro you’re going to get a giant block of text because hot wheels battle force 5 is a series that is really near and dear to my heart.
ok first off the plot is kinda simple but it’s really fun: it’s a group of six teens that have to drive cars real fast and battle some aliens. the second season gets more complicated but it’s. fun.
the aliens are called the sark (robots led by a tyrant), and the vandals (tribal dictatorship). there’s never any doubt about them being evil—not only are they conquerors that have destroyed worlds, the vandals practice slavery (which is a minor spoiler) and the sark are led by zemerik who is. just a fucking asshole. however, the show plays with this traditional model of heroes vs villains a lot in s2: zemerik Because Of Reasons ends up on the heroes’ side. this does not mean they trust him. they have to help kalus (the leader of the vandals), too, but they know for a fact the second there isn’t a greater evil to unite against, they’re back to throwing fists.
there’s another race of aliens called the sentients which are like. gods. they created the universe and all the battle zones—this is the place where our heroes fight/race the bad guys. also ps battlezones are some of the COOLEST concepts we get out of this show. they’re usually unique in design but there are reasons our heroes sometimes revisit them that make narrative sense. battlezones are unlocked by battlekeys, and getting the battle key is pretty much the premise for every episode in s1, except for a couple near the end that build into the main conflict of s2. anyways, back to the sentients. they’re dicks. i don’t trust them. they also have slaves but it’s like. lowkey slavery? it’s. yeah. also, there are 2 kinds of sentients: the blue ones and the red ones. the red ones you THINK are dicks but then u find out the blue ones. weren’t that nice either. so it’s. spicy. sentients also had like. a couple of civil wars.
anyways, let’s talk about our main heroes!
there’s vert wheeler
he’s kind of a dork and you can tell he’s probably like. 18. he’s the leader and he’s kinda arrogant but he always manages to keep his team together. he makes bad jokes sometimes and you can argue he’s a little op but honestly? as skilled as he is he clearly needs a team at his back. i stan him so hard. he drives the saber which is a car with a chainsaw on it. a chainsaw.
vert’s second in command is agura ibaden, this beautiful lady:
she made me into a lesbian. she doubts herself sometimes and gets a couple of episodes about learning to be in control and eventually she’s a great leader in her own right. she drives the tangler which is a beast of a vehicle and she’s good at planning and hitting the enemy in ways they don’t expect. i love her so much.
next up we have the cortez brothers, spinner and sherman.
they’re latino but it’s implied they’re mexican because spinner’s gamer name references a specific city in mexico. also, side note, bf5 was ridiculously popular in mexico. like. reruns every other hour. it was the life. but anyways, they’re the technical brains of the team. spinner is good with computers and sherman is an engineering genius. although they’re both the tech support, i love that they have different skills!! they love each other very much but they also get on each other’s nerves. in one episode they dare each other to eat increasingly gross things. it’s hilarious and they’re peak sibling culture. also sherman is big and still the brains! there are however a couple food jokes about him which is :( but they’re not like. his entire characterization! he’s complex and i love him. they drive the buster which is. basically a tank.
anyways, next up is zoom takazumi, resident ninja
alkjd actually he’s a mixed martial arts fighter! he’s the youngest and i would protect him with my LIFE. also i don’t have the episode on hand right this moment but he’s south asian! yay diversity. he gets flak for being the baby of the team but he really finds himself and he’s an awesome scout. also i love alessandro juliani, his VA so. stan him. he drives the chopper which is a bike that becomes a helicopter. i don’t make it sound very cool but it IS.
we also have stanford isaac rhodes
he’s our moron representation. he’s vain, self obsessed, and thinks he should be in charge (the villains literally. know him as “the vain one” it’s hilarious). if the writing for this show were weaker, i’d hate him. however! he learns to not be such a dick. he becomes ride or die for his friends. as much as he thinks he should be in charge and clashes with agura, he learns to be better! i appreciate this dumbass white boy. he drives the reverb which has guns. a car. with sonic guns. this show goes ridiculously hard.
in s2 we get two more characters, tezz and aj.
tezz volitov is like stanford, but ridiculously smart. he strands himself on an alien moon at the age of NINE, and spends the next 9 years alone. it’s kinda sad. it takes him a while, but he eventually learns how to be a good teammate and i love him so much. he’s also russian, i think, but he’s. probably not white? it’s complicated. this is an issue i got with the show but i’ll tack it onto them wanting to be diverse whilst being white people. tezz drives the splitwire which i. legit want. it’s so fucking COOL.
finally, we have aj who i dont have a gif for, i just realized. he’s white n blonde, tho so. just imagine that. he doesn’t have too big of a role in the series, but he’s vert’s friend so i trust him and also the times he does show up he doesn’t steal the spotlight or anything, which i respect. they knew he was a bland white guy and they committed to that.
but yeah the characters are really interesting. also, the animation? is god tier for a show from 2010 that had the graveyard time slot. there are so many little details and the SCORING IS TO DIE FOR, also the way they color skin tones? is something you rarely see in 3D cartoons. they understood that dark skin in different lighting doesn’t react the same as white skin. there is no moment in the show where you can’t see the difference in the skin tone of the characters. it’s amazing and i love it so much.
a couple of details from the animation bc i love it
but yeah! this show is very colorful and what i call “lovingly animated”
another great thing about it is. the jokes. the way they write dialogue is literally. to die for:
“bro, what would you do without me?” “live to see my next birthday”
“who wants to help me destroy a pack of killer robots?”
“a great warrior has fallen. an ally, an enemy, but, mostly a dismal failure, and a loser”
“believe it or not, i’m too exhausted to humiliate you”
“you’re risking our lives based on artwork made of STICK FIGURES?”
“if a 50ft statue of one of us showed up in a battlezone, what would we do?” “i’d blog about it” “no one reads your blog”
“the brains of this operation?” “he’s the left hemisphere. i’m the right”
some of them have visual elements which i love in jokes!!
but yeah. this is long enough i guess.
to sum up:
diverse cast
great animation
great music
solid plot
solid writing
funny joaks
some AMAZING foreshadowing
the webisodes are funny and cute
the theme song SLAPS
WORLDBUILDING TO DIE FOR
there’s so much i’m leaving out because this show is SO MUCH AND SO GOOD but yeah. i made some gifs if you want to see the flavor of this show
there’s no romance like. at all. the focus is solely on the action and i love it
however, i am known for being a salty little bitch so issues™
could have used more women
there are a couple of jokes which are kinda cheesy
the diversity is the kind written by white people so take that as you will. also it’s a show that’s like. as good as white people can write. nothing super revolutionary.
it doesn’t entirely have. a solid ending. it has a tv movie that wraps it up but 1. it’s in spanish (yours truly wrote a translation) 2. it includes a cliffhanger which was. unnecessary. it’s more that they wanted to leave the door open for more but. didn’t make it. however! all the main conflicts get resolved so it’s not too big an issue
there’s probably more stuff but honestly? it’s a solid kids show. flaws n strengths. i love it
#kinggharrow#s.ask#battle force 5#this took me like. an entire hour#long post#also u can go down my battle force 5 tag for more of my thoughts on it#also also if u watch it i will gif whatever u want. anything.
70 notes
Text
alien: covenant sucked and here’s why
I saw Covenant about four weeks ago and I hated it a whole bunch. But it was a very instructive hate, so I’m gonna break it down. Putting everything under a readmore bc this is gonna be long and also I don’t want people who liked it to have to see me shredding away.
The first Alien film was the first horror movie that I liked enough not to care how scary it was. I think I was around 6 when I first saw it. It awakened three things in me: a crush on Sigourney Weaver, a lasting kink for xeno, and a deep love of women using construction equipment for non-conventional purposes. I’m not a hugely dedicated Alien fan, but I think that the films have two very defined qualities:
1) Equal opportunity psychosexual horror. Literally anyone in Alien can be forcefully facefucked and then carry a terrifying alien baby! This is something that’s been commented on to death, so it’s not like I think I’m brilliant for observing this.
2) Woman-centered. Not just in terms of Sigourney Weaver or other Hollywood-unconventional white brunettes, but the Alien films are also deeply concerned with reproduction. Ripley’s always off to kill the Queen because she’s gonna lay hundreds of eggs, etc. However, unlike a lot of horror films, women aren’t the subject of particular sexual menace. See above: everyone’s a potential victim of the xenomorphs. I think this was why the Alien films weren’t as scary to me as other movies, because I didn’t have to see women singled out for rape or assault in ways that separated them from men. Also, women win. Yeah, the xenomorphs always come back, but there’s a little bit of a break at the end of each film. I also can’t even get that pissed off, because it’s female aliens vs female humans, so again I’m removed from awful gender dynamics. I’m not implying that the Alien films are feminist, but they’re not misogynist.
Now that you know my two strongest feelings about Alien, let’s move forward to Covenant itself. First off, several people in the audience were laughing at a lot of the dramatic moments (not just me and my wife). If you’ve got people tittering during a moment of tension, your horror movie sucks. It’s failed. Covenant has three main flaws.
1) Terrible, terrible script. Every single person in the film, other than the robots, is a blithering idiot. The movie starts with a bunch of supposedly professional people waltzing out onto a planet that’s broadcasting John Denver without any helmets on, and they’re perfectly fine with having unpredictable communication and dangerous ion storms going on. What the fuck. All of them deserved to die. They go scampering around in the alien water? Christ, you can get all sorts of awful things from water on EARTH, let alone on another planet.
Then, when people start getting disgustingly sick, there’s no immediate panic. No, the person has to start vomiting black bile before they think, wow, this is a scary thing to happen on an unknown planet. Remember when that woman was attending to Victim #1 and decided to hug him as his skin looked ready to pop and he was leaking everywhere? What the fuck.
Remember when David started talking about his weird experiments while showing Captain Vaguely Christian his cabinet of fetal xenomorphic horrors? Then he creepily tells the captain to go down to his murder basement and stick his face in a weird egg-casing, and the captain just goes ahead and does it? Probably one of the most rage-inducing parts of the film, but he totally deserved to go. That was actually my thought for everyone who died in the film, other than the gay couple, Walter, and Shaw. The gay men weren’t any more or less likable than the other people who were murdered, but they were a nice little bit of representation that probably 90% of the audience didn’t notice.
Every character in the film acts like a lamb going to slaughter. That isn’t suspenseful, it’s just annoying.
2) Predictability. This could probably just go under the terrible script, but it deserves special attention. My single moment of surprise was seeing David 8 on the planet, and that’s only because I hadn’t looked at any previews. The crew is so tremendously stupid that I know the moment one of them wanders off alone, they will get horribly murdered. When Walter and David fight, I know that Walter will lose the second the camera cuts away from the Fassbender vs Fassbender showdown. This is particularly annoying because the director had established that Walter was ‘improved’ over the David model not 5 minutes ago, and Walter is no fool. He is one of two non-fools in the movie, and since the other one is also played by Michael Fassbender, this is a source of much frustration.
Covenant could have been made slightly better by playing off the audience expectation that David would win. Honestly: was anyone expecting Walter to have won that fight, particularly since “Walter” was acting so creepy after scampering back to the ship? The movie isn’t creating tension through uncertainty, it’s creating tension because the audience is waiting for the goddamn reveal that it’s not Walter, it’s David. Can you imagine if the reveal at the end was that it actually *was* Walter? That would be a legitimate twist! And it wouldn’t be hard to bring back the xenomorph threat in the next film in a way that didn’t involve Fassbender yartzing out fetuses into a drawer while Wagner plays. This leads me to the third, most vile part of Covenant:
3) Misogyny. Here’s where Covenant goes back and takes a shit on the legacy of the previous films, and I gotta repeat that I don’t even really care about the Alien series that much. Covenant completes what Prometheus started, and that’s shifting the focus from women to men. Now, you could say that this is because it’s a prequel to the Alien series, so you don’t have adult xenomorph queens going around to lay eggs, but uh... really? Do we really need to go through this convoluted process of giant white aliens who look vaguely like Clancy Brown?
I dare you to unsee this. So we’ve got the aliens reproducing through coaldust and Clancy Browns, and it turns out that they needed a man all along to make them reproductively viable. Yeah, David 8 is an android, not a human, but we know what’s up, since the writers sure as shit aren’t taking a nuanced or current look at gender. He’s a guy with daddy issues who sexually assaults people, rather than, you know, acting like a genderless robot. Obviously a sentient robot commits sexual assault! That’s how you know he’s sentient, because a sex drive is part of humanity! Please picture me rolling my eyes with disgust.
David explicitly sets himself up as a god in the image of his creator, Weyland. Of course, David thinks he’s doing better than his father, but who doesn’t? We’ve cut women out as free agents, both the humans and the aliens. Alien series? No, it’s the Michael Fassbender being menacing series now!
First off, let’s look at what happens to Shaw. Noomi Rapace wisely tapped out of the series after the end of Prometheus, so she had to be killed off. Was she killed off in a normal way? Nah. She was killed in one of the most uniquely horrible ways in the series, and it was highly gendered. Shaw repairs David, then he repays her kindness by designing a horrible machine to keep her alive while he scoops everything out of her from the waist down and leaves her as this frightening wax-like figure. Prometheus already put Shaw through a pseudo self-abortion, then David goes for the entire womb. I’m sure that the writers (all male - I checked) knew exactly what they were doing with this, and it’s gender essentialism 101: David takes Shaw’s creative, maternal womb powers for himself so he can make his own alien babies. There’s no way this was unintentional or me reaching - David’s narrative arc is about male parthenogenesis because his daddy was a really shitty programmer. (He probably forgot to close the brackets on the ‘not evil’ line of David’s code)
Now for Daniels. The audience is ‘treated’ to David trying to force himself on her after she sees his figurative rape of Shaw. Then, instead of rescuing herself from this completely unnecessary, un-Alien, he’s-a-goddamn-robot situation, as Ripley would have done, Daniels is rescued by Walter, because this is a film about Michael Fassbender. Her last moment in the film is her screaming as she’s trapped and put to sleep by David.
Remember that whole generation of young women who loved Ripley for being unafraid, resourceful, and great at killing xenomorphs? Women are starving for positive depictions of ourselves. Ripley was one of the few we had. Women are still crying in theaters at Wonder Woman because we have so goddamn little.
Now we’ve got Shaw and Daniels: two women in distress who are sexually threatened and ultimately outwitted by a man. I can forgive Covenant for being a bad film, but the misogyny is disgusting. If the Alien series continues, and who knows since it’s failed to be a moneymaker outside of comics and videogames for a while, it better be a reboot rather than a continuation of Covenant’s storyline, because Alien isn’t about men, damn it. It’s about people dying in space, and women.
And some of them will eat you.
41 notes
Text
#002 The Weird Factor
If you’re at all considering becoming a superhero it’s important to be aware of not only the changes that will occur in your own life, but also the changes that will occur in the world at large. See, for a superhero to emerge in the world is a pretty big frikkin’ dealio, especially if you’re the first. Maybe not if you’re like the 347th, then it’s probably a smaller frikkin’ dealio. But still a frikkin’ dealio nonetheless. All it takes is for one superhuman do-gooder to roll up to the club for the whole world to lose its collective mind and take one giant leap towards the strange and paranormal. This is something I like to call: The Weird Factor.
Now, if you’re just going to go out and fight crime without first acquiring powers then this isn’t something you have to worry about much (also, maybe you’ll die). Usually when a costumed, powerless crime-fighter shows up nothing really changes. Maaaaybe you’ll get a few criminals taking up costumes and codenames too, but all that really means is that they’re going to start committing themed crimes based on their assumed identities. If anything, that just makes it easier to catch them. It’s not until someone starts shooting face lasers or being able to punch through the planet, that things get really crazy.
The appearance of a certified, straight-up, superpowered individual in the public spotlight creates a domino effect when it comes to the appearance of other out-of-the-normal creatures and events. A mainstream superhero starts a superhuman arms race (side note: I’d be remiss if I didn’t take the time to mention now that the appearance of a mainstream mystic starts a supernatural charms race). Criminals, now realizing that attaining superpowers is within the realm of possibility, start trying to acquire superhuman abilities of their own, which leads to a market need for more heroes, which generates more villains, and so on and so forth. It’s really just basic economics, so think about that before you go out and start superheroing all over the place, inspiring villains to take up arms against you!
Additionally, the acceptance of superhumans by the public will only embolden other irregular creatures to enter into mainstream society. It’s commonly assumed knowledge that vampires and werewolves and molemen and hyper-intelligent apes and sewer mutants and their ilk are hiding out somewhere in the world (K, the sewer mutants are probably in the sewers, but all the other ones!), but they remain in hiding out of fear that they won’t be accepted by the public. Which is totally reasonable. People are terrible. But, once a powerful individual shows up on the scene and is publicly adored and hailed as a hero, all of that fear will recede. Next thing you know the Loch Ness Monster will be holding a public press conference to accept her hide-and-seek-world-champion-even-though-Loch-Ness-isn’t-even-that-deep award and Bigfoot will be publishing a tell-all book titled “My Feet Aren’t Actually That Big, I’m Actually Wearing Giant Novelty Slippers That I Found Once, They’re Mad Comfy and BTW My Real Name is Ned.” If you’re looking for a ghostwriter Ned, I’m your guy. Now, these creatures emerging and taking their rightful place in society is by no means a bad thing and should in fact be welcomed with open arms (my editor wouldn’t let me make another magic pun here, but I wanted to). You just might want to touch base with some of these groups before you expose Paranormal Peoplekind to the world; these aren’t the kinds of people you want being angry at you, ignoring the fact that you may or may not have already antagonized a village mystic in order to get your powers in the first place.
Once the world becomes full of superhuman heroes and villains and all stripes of Para-Folk, the planet will immediately become a lot more interesting. You’ll be putting Earth on the intergalactic map. That’s pretty neat! Or is it? I dunno! Let me lay out some scenarios for you and then let you decide.
Scenario 1: Earth becomes a legitimate intergalactic powerhouse. We’ve got an army of superheroes protecting it. Hyper-intelligent apes are walking around, probably holding public office. We’ve developed an international space fleet, and it’s got a pretty boss insignia. Every spaceship has a bowling alley. That’s right, you read that right. Earth has space bowling now. But oh no, what’s this? We’re seen as a threat. Other spacefaring races are intimidated by our space bowling and our ape congressmen. They come and invade; preemptive strike-mas came early this year. That’s no fun at all. Though I guess it might inspire international unity, but at what cost? People will doubtlessly die in this invasion. Plus, after (if?) it’s successfully seen off, Earthlings will probably develop a sense of planetary nationalism (planetalism? planationalism?) and cut off all ties to other alien planets.
Scenario 2: Earth becomes the poster planet for intergalactic prosperity. Our superheroes are universally (literally) known and adored. Our acceptance of Para-Folk has received praise and has garnered the respect of the vampire nebula, the sewer-mutant galactic empire, and the werewolf colony living on one of Jupiter’s moons (did you know Jupiter has like 60 moons? Did you know that I had to say like and can’t give an exact number because apparently Jupiter keeps picking up satellites all the dang time? Science has just given up on even giving them real names! Sorry werewolves, hope you enjoy living on S/2003 J23). Earth is a major player in the intergalactic community, trade is established, alliances are formed. Our bowling alley space station becomes a revenue-making tourist attraction for the entire solar system. Things are good. But then, uh oh, what’s this? Surprise alien invasion! A war-like, peace-hating, bowling-abhorring alien race shows up and their war fleet has an even cooler insignia than ours. Devastation reigns; our allies come to our aid and the invasion is stopped, but not before several lives are lost.
Either way, once you become a public figure you can expect to see at least one alien invasion within your first few months. But but but, if you successfully see off one of those invasions, that’ll probably discourage other aliens from invading. So that’s good. Or the planet could be destroyed with all life exterminated. That would be lame. So, anyway, once you go public, start gearing up for invasion. Good luck!
If fictional superheroes have managed to capture the public’s imagination and fancy, one can only imagine what the appearance of a real-life superhero would do to both the entertainment and scientific communities. Confirmation of the existence of superhuman abilities would serve to break the glass ceiling on what is and isn’t considered physically possible and feasible (and sensible). Get ready (and get amped!) for hoverboards and pet robot dinosaurs and time travel and teleportation and faster-than-light travel, and the return of iPhones with headphone jacks (you read it here first, this was my idea).
Once the seal of weird is broken, though, anything that falls under the purview of The Weird Factor is entirely your responsibility, not the local law enforcement’s. That’s just the way it is; beat cops aren’t paid enough to deal with rampaging cyborgs or giant sentient rock monsters. I mean, sure, granted, neither are you, but you have superpowers, so stop whining. Nuclear plant meltdown right next to a spider farm? That’s your problem. Super criminal with a freeze ray? That looks like a job for you. Alien invasion? I feel like we already covered this. Time-displaced dinosaur attack? That’s on you too, but on the bright side, you live in a very cool town.
All of this (and more, I didn’t even get into the legal ramifications of this whole shebang) should be taken into consideration before you hop into your noun-mobile and start striking fear into the hearts of criminals everywhere. But it should by no means discourage you (just make sure the planet isn’t annihilated when the aliens come, okay?). Being a superhero is a huge responsibility, but I’m sure you, the person who came to tumblr to figure out how to go about doing it, are more than equipped to handle it.
#superhero#superpowers#how to#magic puns#if my superhero knowledge doesn't draw people in surely my puns will#intelligent apes#vampires#sewer mutants#bigfoot#tell all books#loch ness monster#Loch Ness is only 22 square miles#56 square kilometers for my European fans#themed criminals#hide and seek#aliens#alien invasions#space bowling#cool insignias#charms race#werewolf moon colony#Jupiter has too many moons#Like how extra#You can't need that many moons Jupiter#Earth only has one and we are doing just fine#criminals with freeze rays#nuclear spiders#they're not radioactive#my lawyers wanted me to make that clear#werewolves
7 notes
·
View notes
Link
I Quit My Job to Protest My Company’s Work on Building Killer Robots
Big tech and the government have a responsibility to stop the advent of machines that can kill without human oversight.
When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air. My colleagues were brilliant, dedicated, and committed to making the world a better place.
We founded Clarifai 4 Good, where we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal. We were determined to be the one AI company that took its social responsibility seriously.
I never could have predicted that two years later, I would have to quit this job on moral grounds. And I certainly never thought it would happen over building weapons that escalate and shift the paradigm of war.
Some background: In 2014, Stephen Hawking and Elon Musk led an effort with thousands of AI researchers to collectively pledge never to contribute research to the development of lethal autonomous weapons systems — weapons that could seek out a target and end a life without a human in the decision-making loop. The researchers argued that the technology to create such weapons was just around the corner and would be disastrous for humanity.
I signed that pledge, and in January of this year, I wrote a letter to the CEO of Clarifai, Matt Zeiler, asking him to sign it, too. I was not at all expecting his response. As The New York Times reported on Friday, Matt called a companywide meeting and announced that he was totally willing to sell autonomous weapons technology to our government.
I could not abide being part of that, so I quit. My objections were not based just on a fear that killer robots are “just around the corner.” They already exist and are used in combat today.
Now, don’t go running for the underground bunker just yet. We’re not talking about something like the Terminator or Hal from “2001: A Space Odyssey.” A scenario like that would require something like artificial general intelligence, or AGI — basically, a sentient being.
In my opinion, that won’t happen in my lifetime. On the other end of the spectrum, there are some people who describe things like landmines or homing missiles as “semiautonomous.” That’s not what I’m talking about either.
The core issue is whether a robot should be able to select and acquire its own target from a list of potential ones and attack that target without a human approving each kill. One example of a fully autonomous weapon that’s in use today is the Israeli Harpy 2 drone (or Harop), which seeks out enemy radar signals on its own. If it finds a signal, the drone goes into a kamikaze dive and blows up its target.
Fortunately, there are only a few of these kinds of machines in operation today, and as of right now, they usually operate with a human as the one who decides whether to pull the trigger. Those supporting the creation of autonomous weapons systems would prefer that human to be not “in the loop” but “on the loop” — supervising the quick work of the robot in selecting and destroying targets, but not having to approve every last kill.
When presented with the Harop, a lot of people look at it and say, “It’s scary, but it’s not genuinely freaking me out.” But imagine a drone acquiring a target with a technology like face recognition. Imagine this: You’re walking down the street when a drone pops into your field of vision, scans your face, and makes a decision about whether you get to live or die.
Suddenly, the question — “Where in the decision loop does the human belong?” — becomes a deadly serious one.
What the generals are thinking
On the battlefield, human-controlled drones already play a critical role in surveillance and target location. If you add machine learning to the mix, you’re looking at a system that can sift through exponentially increasing numbers of potential threats over a vast area.
But there are vast technical challenges with streaming high definition video halfway around the world. Say you’re a remote drone pilot and you’ve just found a terrorist about to do something bad. You’re authorized to stop them, and all of a sudden, you lose your video feed. Even if it’s just for a few seconds, by the time the drone recovers, it might be too late.
What if you can’t stream at all? Signal jamming is pretty common in warfare today. Your person in the loop is now completely useless.
That’s where generals get to thinking: Wouldn’t it be great if you didn’t have to depend on a video link at all? What if you could program your drone to be self-contained? Give it clear instructions, and just press Go.
That’s the argument for autonomous weapons: Machine learning will make war more efficient. Plus there's the fact that Russia and China are already working on this technology, so we might as well do the same.
Sounds reasonable, right?
Okay, here are six reasons why killer robots are genuinely terrifying
There are a number of reasons why we shouldn’t accept these arguments:
1. Accidents. Predictive technologies like face recognition or object localization are guaranteed to have error rates, meaning a case of mistaken identity can be deadly. Often these technologies fail disproportionately on people with darker skin tones or certain facial features, meaning their lives would be doubly subject to this threat.
Also, drones go rogue sometimes. It doesn’t happen often, but software always has bugs. Imagine a self-contained, solar-powered drone that has instructions to find a certain individual whose face is programmed into its memory. Now imagine it rejecting your command to shut it down.
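To make the arithmetic behind this point concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an assumption invented purely for illustration (a city-sized scan, ten genuine targets, a "99 percent accurate" system), not a measurement of any real weapon:

```python
# Base-rate arithmetic: even a "99% accurate" system produces far more
# false alarms than real hits when genuine targets are rare.
# All numbers are illustrative assumptions.

population_scanned = 1_000_000   # faces scanned by a hypothetical drone
true_targets = 10                # genuine targets in that population
false_positive_rate = 0.01      # 1% of innocents wrongly flagged
true_positive_rate = 0.99       # 99% of genuine targets flagged

false_alarms = (population_scanned - true_targets) * false_positive_rate
real_hits = true_targets * true_positive_rate

print(f"Innocents wrongly flagged: {false_alarms:,.0f}")  # about 10,000
print(f"Genuine targets flagged:   {real_hits:.1f}")      # about 10
```

Under those assumed numbers, roughly a thousand innocent people get flagged for every real target, and for an autonomous weapon, “flagged” can mean a kill decision.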
2. Hacking. If your killer robot has a way to receive commands at all (for example, by executing a “kill switch” to turn it off), it is vulnerable to hacking. That means a powerful swarm of drone weapons could be turned off — or turned against us.
3. The “black box” problem. AI has an “explainability” problem. Your algorithm did XYZ, and everyone wants to know why, but because of the way that machine learning works, even its programmers often can’t know why an algorithm reached the outcome that it did. It’s a black box. Now, when you enter the realm of autonomous weapons and ask, “Why did you kill that person?” the complete lack of an answer simply will not do — morally, legally, or practically.
4. Morality & Context. A robot doesn’t have moral context to prioritize one kind of life over another. A robot will only see that you’re carrying a weapon and “know” that its mission is to shoot with deadly force. It should not be news that terrorists often exploit locals and innocents. In such scenarios, a soldier will be able to use their human, moral judgment in deciding how to react — and can be held accountable for those decisions. The best object localization software today is able to look at a video and say, “I found a person.” That’s all. It can’t tell whether that person was somehow coerced into doing work for the enemy.
5. War at Machine Speed. How long does it take you to multiply 35 by 12? A machine can do thousands of such calculations in the time it takes us to blink. If a machine is programmed to make quick decisions about how and when to fire a weapon, it’s going to do it in ways we humans can’t even anticipate. Early experiments with swarm technology have shown that no matter how you structure the inter-drone communications, the outcomes are different every time. The humans simply press the button, watch the fight, and wait for it to be over so that they can try to understand the what, when, and why of it.
Add 3D printing to the mix, and now it’s cheap and easy to create an army of millions of tiny (but lethal) robots, each one thousands of times faster than a human being. Such a swarm could overwhelm a city in minutes. There will be no way for a human to defend themselves against an enemy of that scale or speed — or even understand what’s happening.
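For a rough sense of what “machine speed” means, here is a minimal sketch, assuming a blink lasts about 300 milliseconds (real blinks run roughly 100 to 400). It simply counts how many 35 × 12-style multiplications fit into one blink; even slow, interpreted Python lands in the millions, and compiled code on modern hardware runs orders of magnitude faster still:

```python
import time

# Count how many 35 * 12 multiplications fit into one human blink.
# BLINK_SECONDS is an assumed value, not a measurement.
BLINK_SECONDS = 0.3

count = 0
deadline = time.perf_counter() + BLINK_SECONDS
while time.perf_counter() < deadline:
    _ = 35 * 12  # the multiplication from the prose above
    count += 1

print(f"Multiplications in one blink: {count:,}")
```

The exact count doesn’t matter; the point is that any human “supervisor” on the loop is racing a loop like this one.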
6. Escalation. Autonomous drones would further distance the trigger-pullers from the violence itself and generally make killing more cost-free for governments. If you don’t have to put your soldiers in harm’s way, it becomes that much easier to decide to take lives. This distance also puts up a psychological barrier between the humans dispatching the drones and their targets.
Humans actually find it very difficult to kill, even in military combat. In his book “Men Against Fire,” S.L.A. Marshall reports that over 70 percent of bullets fired in WWII were not aimed with the intent to kill. Think about firing squads. Why would you have seven people line up to shoot a single person? It’s to protect the shooters’ psychological safety, of course. No one will know whose bullet it truly was that did the deed.
If you turn your robot on and it decides to kill a child, was it really you who destroyed that life?
There is still time to ask our government to agree to ban this technology outright
In the end, there are many companies out there working to “democratize” powerful technology like face recognition and object localization. But these technologies are “dual-use,” meaning they can be used not only for everyday civilian purposes but also for targeting people with killer drones.
Project Maven, for instance, is a Defense Department contract that’s currently being worked on at Microsoft and Amazon (as well as in startups like Clarifai). Google employees were successful in persuading the company to walk away from the contract because they feared it would be used to this end. Project Maven might just be about “counting things” as the Pentagon claims. It might also be a targeting system for autonomous killer drones, and there is absolutely no way for a tech worker to tell.
With so many tech companies participating in work that contributes to the reality of killer robots in our future, it’s important to remember that major powers won’t be the only ones to have autonomous drones. Even if killer robots won’t be the Gatling gun of World War III, they will do a lot of damage to populations living in countries all around the world.
We must remind our government that humanity has been successful in instituting international norms that condemn the use of chemical and biological weapons. When the stakes are this high and the people of the world object, there are steps that governments can take to prevent mass killings. We can do the same thing with autonomous weapons systems, but the time to act is now.
Official Defense Department policy currently states that there must be a “human in the loop” for every kill decision, but that policy is under debate right now, and it contains a loophole that would allow an autonomous weapon to be approved. We must work together to ensure this loophole is closed.
That’s why I refuse to work for any company that participates in Project Maven or otherwise contributes dual-use research to the Pentagon. Policymakers need to promise us that they will stop the pursuit of lethal autonomous weapons systems once and for all.
I don’t regret my time at Clarifai. I do miss everyone I left behind. I truly hope that the industry changes course and agrees to take responsibility for its work to ensure that the things we build in the private sector won’t be used for killing people. More importantly, I hope our government begins working internationally to ensure that autonomous weapons are banned to the same degree as biological ones.
Published March 6, 2019 at 08:30PM via ACLU https://ift.tt/2HiMDZ5
0 notes