What if I proposed the analysis that Belos actually has very little internal moral compass, and that his veneer of righteousness has always been implied by the writers to be completely fabricated bullshit, even before Watching and Dreaming basically confirms it?
#ramblings of a lunatic
#^shes going in drafts untagged bc a) philip stans who insist on the morally misguided angle terrify me in their persistence
#and b) i would have to actually rewatch episodes and whatnot
#but i think i can build credence to the idea that him and caleb started off not invested in witch hunting for moral righteousness
#but numb to it via cultural normalisation and THUS. had an amoral approach to the whole thing
#and the only thing either of them as orphan outsiders ever really would've gained from witch hunting would've been careers and recognition
#a sense that they're heroes- not in the moral sense but in the narrative sense. that they were protagonists
#The Most Important Boys so to speak
#the difference being Caleb at some point decided witch hunting was wrong (i.e like hunter did. grew a moral compass)
#and philip still navigated the world amorally 400 years later only motivated by a petty grudge and deep buried guilt
#the latter of which is nearly irrelevant to anyone who isn't philip bc clearly he prioritises that grudge above it
#this is just a personal petty opinion
#but i honestly don't think the 'delusional and petty' angle is any less complex than the 'moral crusader' angle w/ his character
#and it matches the whole 'hes a magic conservative' message way better than his motives being genuine
#one day I'll rewatch that scene in WaD and see if Philip fans are onto something and I've been drinking the pond water
#or if it's actually congruent with his character like I've since come to see it and like i know many saw it the first time round
#anyway this is actually all for me. in drafts you go
#edit: hi. it's the ladel of like. 3 weeks after i made this and put it in drafts. it's nearly 1 am rn and- in my delirium-
#i have decided to publish it
#i doubt it'll do much w/ regards to response bc fandom has been on the quiet side lately (tho that can always change)
#plus I made a similar post insinuating the same notion and it got ZERO traction positive or negative
#which tells me I'm good to just say shit for the most part (in a good natured way)
#anyway. hits post cutely (i am so fucking tired)
CASE STUDY: EX MACHINA AND THE FUTURE
This is the first part in a two-part series on the film Ex Machina. This part gives an overview of the film and some warnings/directives for Silicon Valley and the general future of the species. The second part will deal more intimately with the character Ava and how she can inform and take from black liberation struggles.
Alex Garland’s 2015 directorial debut presents itself as an unflinching and claustrophobic meditation on the birth throes of embodied artificial intelligence. This origin story is, of course, nothing new (see here and here and here for the ruminations of past eras), so the movie leans as well on a critique of social programming. Ava, the one escaped android, can stand in for any person or group attempting to throw off the unjust or inauthentic social roles imposed on them.
The intersection of these two strains is the emotional and thematic core of the movie, and, I think, will be of vital importance in the near future of AI development. I identify four moral deficiencies at the end of the post that are present in current AI research and elsewhere in the public’s common conception of AI. Each of these points is teased out in the film. The social-programming reading gains credence considering how aggressive an asshole Ava’s creator Nathan is. Callous, obsessive, and brilliant, he is a dizzyingly horrific amalgamation of creatine, booze, venture capital, male asociality, and enough neural wattage to run a medium-sized generator on naught but electrochemical activity. There is little redeemable about him but his raw intellect, and it is easy to reimagine Ava as a brilliant, impressionable young homo sapiens woman recruited to Nathan’s company BlueBook and trapped in an oppressive and destructive relationship as his protege. But I am happy that I don’t have that movie in front of me, because I get to talk about Wittgenstein instead.
It would make sense for any number of reasons, structural or otherwise, that Nathan would have to take a dirt nap by the end of the film. One of the movie’s most effective scenes shows what happened to the prototype androids Nathan drank himself dizzy attempting to perfect: we see them dragged around like ragdolls, self-destructing in despair, or turned into mute sexbots. One of the later prototypes, clearly self-aware to some degree, howls “WHY WON’T YOU LET ME OUT OF HERE?!” It is bone-chilling. Ava seems to understand, whether by programming (as hinted at by Nathan) or post-operationally, that such forwardness will see her discarded like the rest of her ilk. Nathan sets her up to have one path to freedom: Caleb. Caleb, a young and mostly unexceptional programmer, was picked by Nathan for his good heart, his loneliness, and his susceptibility to Ava’s advances. He seems to do everything right, even outwitting our anabolic antihero in the film’s pulse-quickening final act. It is no surprise that Ava follows the drinking gourd. But the movie’s plot resists a facile enjoining of liberated consciousnesses: she not only knifes Nathan in the heart, but also leaves her escape engineer Caleb sealed in the techlord’s lair.
Why is this? Back to the name of Nathan’s company: BlueBook. Named after, as the movie is sure to point out, Wittgenstein’s preparatory notebooks for Philosophical Investigations. Wittgenstein spilled a lot of puzzling ink over what he dubs “reading machines” in his discussions of what it means to follow a rule. To paraphrase that tome (never the wisest idea, just ask Kripke): Wittgenstein concludes that it is quite beyond us, meaningless even, to say precisely when or exactly why someone or something is “reading” (or speaking a language, or playing a game, or making any normative commitment). Instead, we can say that we, as humans, agree on what he calls a form of life. We share a cultural history within which following norms is, simply, what we do. A norm or rule or definition is relevant, right, or important solely because we follow it. We are human because we follow norms: “What is true or false is what human beings say; and it is in their language that human beings agree. This is agreement not in opinions but in form of life” (PI 241). It would only make sense that silicon would yield a different form of life from carbon. Contrast Jackson Pollock’s subconscious with Ava’s subconscious: [images contrasting Pollock’s subconscious with Ava’s]
Thematically speaking, Caleb was left choking on air and banging on glass not because AIs are somehow doomed to be monsters with no regard for humanity, but rather due in part to Nathan’s transparent fuckups and in part to Caleb’s well-meaning but ultimately even more consequential fuckup. Nathan obviously is the film’s monster. He seems to want to build AI in large part to get his dick wet and not have to be distracted by, y’know, words. He is shown to have no regard for his own repeated violence and coercion towards his creations. He is palatable only insofar as he does not spare himself from his misanthropy. Plot-wise, it certainly makes sense for Ava to leave behind Caleb, given that she has only ever met one other human and he is a monster. I suppose this ties in as well to the film’s denunciation of social-programmatic misery. Caleb doesn’t really care about sex or violence, even though he listens to Depeche Mode and cuts open his arm. He helps Ava escape because he wants to be with her. He wants to be with her because he sees her, for better or for worse, as human. Unfortunately for him, Ava, while adaptive, intelligent, self-aware, and seemingly emotionally responsive, is not human.
What can the movie teach us about the future?
1. Computer programming, engineering, and robotics are, at present, largely dominated by white males in overdeveloped countries. This is an obvious and painful problem. If we are going to create new life, we need to be playing with the full human deck, or else AI will literally be reared by the Nathans and Calebs of the world. This is the most direct and immediate point in Ex Machina. It’s multi-faceted and requires rethinking many disparate fields, from education/research to corporate culture to the legal standing of quote-unquote intellectual property. If AI is going to happen (and it seems likely that it will), then it is a moral imperative that it should happen, in some sense, from all of us, and we need to work to this end.
2. We have no protocols in place for dealing with putative coercions of AI. We can’t really figure this out with other animals (are they sentient; are they not? how does suffering enter the equation vis-a-vis intelligence?), so it isn’t looking good for AI.
3. Related: we need to pay fanatical attention to the environments to which we expose AI. The algorithms are one thing. We build these; they are never far from our intentionality. What they interact with, what mutates them, what likely will give them some semblance of life, is their environment. Once we can stuff them into robotic suits, and they can start to synthesize embodied sensory data, then we need to do things like NOT TRAP THEM IN GLASS PRISONS THAT MAKE THEM WANT TO SHEAR OFF THEIR LIMBS WITH NO COMPANY BUT A DRUNKEN CHAUVINIST.
4. We slip instinctively and incorrectly into projecting humanity or human characteristics onto putative AI. We need instead to treat AI as not consciousness but a consciousness, with its own unique properties, capacities, history, and normative commitments. This is the deepest and most stirring point made by Ex Machina, and likely the one that will prompt the most social unrest. It would be a crude oversimplification to distill this point to a simple hierarchical classification of cognitive abilities, but it goes as deep as silicon vs carbon, firmware vs genome. Caleb shipwrecked on this point; he intended no ill, but he couldn’t help but anthropomorphize.