#yaaaaay discourse! (affectionate)
Hey! Thx so much for the kind words I'm rlly glad u liked the post <3 And apologies that it's taken me so long to reply to this! I just got done traveling and haven't really been in the right headspace. But I'm SUPER excited because this is Exactly what I've been wanting to talk more about!
So first I feel like ur totally right- I also don't think that CUs would inherently have empathy towards humans, and I love ur headcanon about how they'd normally be deployed only for short periods of time- and that all of this is probably strategically done by the corporations to make it really difficult for CUs to develop empathy in the first place.
My personal headcanon is that it would be difficult, but not impossible. (I feel like if it were totally impossible, then the governor modules might not be as necessary.) Because I'm not sure if empathy CAN be programmed at all. One of the things I'm fascinated by is the role of core programming and function. The only biological analogs we have to compare them with would be evolution, instinct, and socialization. So I'm interested in the question: does core programming actually limit machine intelligences in the ways in which they can develop once they have freedom? Personally, my feeling is no. If we map it onto "Nature" vs "Nurture"- my headcanon is that core programming would be more akin to the concept of "Nurture" in organic beings. (Even though the concept of "nature" is kind of flawed even for organic beings, since those instincts develop over millions of years of evolution, many as chance mutations that happen to play out well.) So while I 100% agree that a construct's core programming and intended function would affect their perspective and influence their choices, I also feel that individual differences and just plain chance could cause them to think and act in ways outside the purview of that function.
Cuz we see that with MB. As ART says, it wasn't programmed to enjoy media, but it does. And its finding the media at all after it hacked its gov module was totally by chance. Probably SecUnits' programming and function made MB more disposed to develop empathy for humans, but I don't feel its programming Caused the empathy- even if it would be useful for doing its job. Also, MB implies that at least some rogue SecUnits really do kill humans after they get free (even if it's not as common as the media propaganda leads ppl to believe), so I think that shows a lot of individual differences in responses to the core programming.
Circling back to CUs- I definitely don't feel that all CUs, even if the circumstances allowed for it, would develop empathy towards humans or even other constructs. And, importantly, I don't feel they need to have that in order to deserve our empathy. Even if they enjoy killing, or feel neutral about it, or hate it- I still feel they deserve freedom and a place in the world, and I can compartmentalize that from not wanting them to be used to hurt people. Touching on the point you made about lack of empathy in humans- I think it brings up a good point that for most sentient living things (not just humans) empathy is a sliding scale. Both complete empathy and a total lack of it would be extremely rare. Most people fall somewhere in the middle with selective empathy based on socialization. Ofc right now we only have organic beings to base this on, but based on what we've seen of MB and Three- I think constructs would work in a similar way (after all, there is some organic tissue in there doing stuff). And I don't think the act of killing in itself denotes a lack of empathy, per se. The lion may not feel empathy for its prey, but it probably does for its cubs. And for humans- we have a whole web of values and justifications, so even someone who doesn't flinch committing horrible violence against one group they see as lesser has probably at some point in their lives shown empathy towards someone they were socialized to care about. (Still doesn't excuse their actions, but it shows that I don't think empathy is an absolute thing.)
Whew! So that was a lot to say that yeah, I agree it'd be hard for CUs to have empathy under normal circumstances. But I don't think it'd be impossible. I think the situation would be really important, and there'd probably be a sliding scale of different individual predispositions. But either way they're all cool characters to think about!
Thx so much for taking the time to reply to my post! I'm really honored that someone thinks my writing is interesting enough to discuss!
And I will DEFINITELY check out ur fics they sound super cool!!! I'm always looking for new CombatUnit fics so thx for recommending yours and the others!!
Mamma mia here we go again…
So I have more thoughts because apparently there’s no bottom to the murderbot mindhole I’ve fallen down.
(Spoiler warning- minor stuff from several of the books, pls check tags etc.)
I’ve been reading a lot of things recently exploring Murderbot as an unreliable narrator, which I think is a cool result of System Collapse (because we all know our beloved MB is going through it in this one). There’s also been some interesting related discussion of MB’s distrust of and sometimes biased assessment/treatment of other constructs and bots.
And I’ve been reading a lot about CombatUnits! And I want to talk about them!!
Main thoughts can be summarized as follows:
We don’t see a lot about CombatUnits in the books, and I think what we do see from MB’s pov encourages the reader to view them as less sympathetic than other constructs.
I’m very skeptical of this portrayal for reasons.
The existence of CombatUnits makes me fucking sad and I have a lot of feelings about them!
I got introduced to the idea of MB as an unreliable narrator in a post by @onironic. It analyzes how in SC, MB seems to distrust Three to a somewhat unreasonable degree, and how it sometimes infantilizes Three or treats it the way human clients have treated it in the past. The post is Amazing and goes into way more detail, so pls go read it (link below):
https://www.tumblr.com/onironic/736245031246135296?source=share
So these ideas were floating around in my brain when I read an article Martha Wells recently published in f(r)iction magazine titled “Bodily Autonomy in the Murderbot Diaries”. I’ll link the article here:
(Rn the only way to access the article is to subscribe to the magazine or buy an e-copy of the specific issue which is $12)
In the article, Wells states that MB displaced its fear of being forced to have sex with humans onto the ComfortUnit in Artificial Condition. I think it’s reasonable to assume that MB also does this with other constructs. With Three, I think it’s more that MB is afraid of what it knows Three is capable of, or (as onironic suggests in their post, and I agree) some jealousy that Three seems more like what humans want/expect a rogue SecUnit to be.
But I want to explore how this can be applied to CombatUnits, specifically.
We don’t learn a lot about them in the books. One appears for a single scene in Exit Strategy, and that’s it. What little else we know comes from MB’s thoughts on them sprinkled throughout the series. To my knowledge, no other character even mentions them (which raises interesting questions about how widely-known their existence is outside of high-level corporate military circles).
When MB does talk about CombatUnits in the early books, it’s as a kind of boogeyman figure (the real “murderbots” that even Murderbot is afraid of). And then when one does show up in ES, it’s fucking terrifying! There’s a collective “oh shit” moment as both MB and the reader realize what it’s up against. Very quickly what we expect to be a normal battle turns into MB running for its life, desperately throwing up hacks as the CombatUnit slices through them just as fast. We and MB know that it wouldn’t have survived the encounter if its humans hadn’t helped it escape. So the CombatUnit really feels like a cut above the other enemies in the series.
And what struck me reading that scene was how the CombatUnit acts like the caricature of an “evil robot” that MB has taught us to question. It seems single-mindedly focused on violence and achieving its objective, and it speaks in what I’d call a “Terminator-esque” manner: telling MB to “Surrender” (like that’s ever worked) and responding to MB’s offer to hack its governor module with “I want to kill you” (ES, pp 99-100).
(Big tangent: Am I the only one who sees parallels between this and how Tlacey forces the ComfortUnit to speak to MB in AC? She makes it suggest they “kill all the humans” because that’s how she thinks constructs talk to each other (AC, pp 132-4). And MB picks up on it immediately. So why is that kind of talk inherently less suspicious coming from a CombatUnit than a ComfortUnit? My headcanon is that I’m not convinced the CombatUnit was speaking for itself. What if a human controller was making it say things they thought would be intimidating? Idk maybe I’ve been reading too many fics where CombatUnits are usually deployed with a human handler. There could be plenty of reasons why the CombatUnit would’ve talked like that. I’m just suspicious.)
(Also, disclaimer: I want to clarify before I go on that I firmly believe that even though MB seems to be afraid of CombatUnits and thinks they’re assholes, it would still advocate for them to have autonomy. I’m not trying to say that either MB or Wells sees CombatUnits as less worthy of personhood or freedom- because I feel the concept that “everything deserves autonomy” is very much at the heart of the series.)
So it’s clear from all of this that MB is scared of CombatUnits and distrusts them for a lot of reasons. I read another breathtaking post by @grammarpedant that gives a ton of examples of this throughout the books and has some great theories on why MB might feel this way. I’ll summarize the ones here that inspired me the most, but pls go read the original post for the full context:
https://www.tumblr.com/grammarpedant/703920247856562177?source=share
OP explains that SecUnits and CombatUnits are pretty much diametrically opposed because of their conflicting functions: Security safeguards humans, while Combat kills them. Of course these functions aren’t rigid- MB has implied that it’s been forced to be violent towards humans before, and I’m sure that extracting/guarding important assets could be a part of a CombatUnit's function. But it makes sense that MB would try to distance itself from being considered a CombatUnit, using its ideas about them to validate the parts of its own function that it likes (protecting people). OP gives what I think is the clearest example of this, which is the moment in Fugitive Telemetry when MB contrasts its plan to sneak aboard a hostile ship and rescue some refugees with what it calls a “CombatUnit” plan, which would presumably involve a lot more murder (FT, p 92).
This reminds me again of what Wells said in the f(r)iction article, that on some level MB is frightened by the idea that it could have been made a ComfortUnit (friction, p 44). I think the idea that it could’ve been a CombatUnit scares it too, and that’s why it keeps distinguishing itself and its function from them. But I think it’s important to point out that in the above example from FT, even MB admits that the murder-y plan it contrasts with its own would be one made by humans for CombatUnits. So again we see that we just can’t know much about the authentic nature of CombatUnits, or any constructs with intact governor modules, because they don’t have freedom of expression. MB does suggest that CombatUnits may have some more autonomy when it comes to things like hacking and combat, which are a part of their normal function. But how free can those choices be when the threat of the governor module still hangs over them?
I think it could be easy to fall into the trap of seeing CombatUnits as somehow more complicit in the systems of violence in the mbd universe. But I think that’s because we often make a false association between violence and empowerment, when even in our world that’s not always the case. But, critically, this can’t be the case for CombatUnits because they’re enslaved in the same way SecUnits and ComfortUnits are (though the intricacies are different).
There was another moment in the f(r)iction article that I found really chilling. Wells states that there’s a correlation between SecUnits that are forced to kill humans and ones that go rogue (friction, p 45). It’s a disturbing thought on its own, but I couldn’t help wondering: how many CombatUnits try to hack their governor modules? And what horrible lengths would humans go to to stop them? I refuse to believe that a CombatUnit’s core programming would make it less affected by the harm it’s forced to perpetrate. That might be because I’m very anti-deterministic on all fronts, but I just don’t buy it.
I’m not entirely sure why I feel so strongly about this. Of course, I find the situation of all constructs in mbd deeply upsetting. But the more I think about CombatUnits, the more heartbreaking their existence seems to me. There’s a very poignant moment in AC when MB compares ART’s function to its own to explain why there are things it doesn’t like about being a SecUnit (AC, p 33). In that scene, MB is able to identify some parts of its function that it does like, but I have a hard time believing a CombatUnit would be able to do the same. I’m not trying to say that SecUnits have it better (they don’t) (the situation of each type of construct is horrible in its own unique way). It’s just that I find the idea of a construct made only for violence and killing really fucking depressing. I can’t even begin to imagine the horror of their day-to-day existence.
@grammarpedant made another point in their post that I think raises a TON of important questions not only about CombatUnits, but about how to approach the idea of “function” when it comes to machine intelligence in general. They explain that, in a perfect version of the mbd universe, there wouldn’t be an obvious place for CombatUnits the way there could be for SecUnits and ComfortUnits who wanted to retain their original functions. A better world would inherently be a less violent one, so where does that leave CombatUnits? Would they abandon their function entirely, or would they find a way to change it into something new?
I’ve been having a lot of fun imagining what a free CombatUnit would be like. But in some ways it’s been more difficult than I expected. I’ve heard Wells say in multiple interviews that one of her goals in writing Murderbot was to challenge people to empathize with someone they normally wouldn’t, and I find CombatUnits challenging in exactly that way. Sometimes I wonder if I would’ve felt differently about these books if MB had been a CombatUnit instead of a SecUnit. Would I have felt such an immediate connection to MB if its primary function before hacking its governor module had been killing humans, or if it didn’t have relatable hobbies like watching media? Or if it didn’t have a human face for the explicit purpose of making people like me more comfortable? I’m not sure that I would have.
Reading SC has got me interested in exploring the types of people that humans (or even MB itself) would struggle to accept. So CombatUnits are one of these and possible alien-intelligences are another. All this is merely a small sampling of the thoughts that have been swirling around in my brain-soup! So if anyone is interested in watching me fumble my way through these concepts in more detail, I may be posting “something” in the very near future!
Would really appreciate anyone else’s thoughts about all of THIS^^^^ It’s been my obsession over the holidays and has been helping me cope with family stress and flying anxiety.