#mooom roxys talking about robot rights again!!
Text
IN THE ORIGINAL ASTRO BOY IT WASN'T ABOUT HIM BEING EXCEPTIONAL i mean he Was, just in that he was a robot hanging out with humans, and he was one of (one of!! not The) the strongest robots (discounting the pluto plot. i think tenma was on something when he was like "easy. yes. a million we can do that") and he had the whole 7 power set thing, due to the ministry's dedication, but that wasn't what made him special. what made him special was his bravery and dedication to peace (except when he was throwing shit at ppl but you know. paradox of tolerance) and his problem solving skills and his origin (expectations, the circus, etc). NOT TO MENTION that it wasn't even about him being special. in most of the stories the theme wasn't "Im Not Like Other Robots"-- the theme was, overwhelmingly, helping robots in general be respected.
and then the 2003 series used this idea of kokoro, working off the morals of peace and stuff like that, to create a situation in which the conflict comes from the increasing similarity of ai to human brains. it created this divide between robots in general, who were intelligent but not perfectly so, and the main characters with kokoro, who essentially had human brains, just electronic. but while it does cause a bit of an ideological divide (the idea that robots like Robita or Buddy are not as human as, say, atom or atlas) it also makes sure to insist that at least some of those morals apply to robots without kokoro, and it sort of represents a necessary time period-- it's like if the original manga took place like 30 years ago, when the modern Electronic Brain was just being perfected and wasn't commonly implemented. we still need to talk about how artificial life is treated while it's imperfect, or while not all of it is perfect.
BUT THEN the 2009 movie took kokoro and ran with it. they turned it into the concept of the Blue Core, and while it's usually acknowledged mainly as a power source, it's also implied that it contributes to how humanlike he is (for example the way his skin pales when it's removed... idk). regardless, it sets him apart from other robots, since he has this unique and powerful power source. AND while his brain works noticeably differently, he does still have all the memories of being human, so experientially he's closer to a human than a robot. honestly, sometimes he's portrayed as a halfway point between the 2 species (as opposed to the earlier series, where he's considered more of a full-robot ambassador between robots and humans (thus, ambassador atom)). so this is all to say-- in 2009 astro isn't a representative for robots, he's just himself. the 2009 movie isn't a story about robots, it's a story about astro boy specifically
which like would be fine except that the growth characters face in how they treat astro (acceptance, respect, just generally not torturing him for entertainment or using him as a weapon or a replacement) doesn't carry over to other characters? there's a slight implication (set up by the cartoon at the beginning and established by Orrin being treated with more respect at the end) that robots are not going to be considered disposable slaves, but (a) they don't outright address it or show it and (b) the RRF (robot communism... 🥺) is considered an unreasonable laughingstock to the end, which says a lot
I don't really have a point here except i guess that (tldr:) the astro boy series as a whole started off as (much like asimov's I, Robot) an analysis of how humanity is going to have to deal with the rise of artificial life. but as the series goes on, it strays further from that Message and becomes more of a story for the sake of a story (or picks up a different metaphor instead-- i could talk about 09 as an autism metaphor 👀) and especially the 2009 movie... that's just a superhero movie, not a lesson about AI
#rwab#rrab#...yeah#not to crossover but tbh ive read danny phantom fanfictions that were like what's shown in 2009#you know 2009 is a step away from being 'well idk about robots but we'll treat you ok bc you're basically a human ^^'#and writers love to make that same analysis with ghosts (+point out how that 'misses the point' so yeah. it misses the point in ab as well)#mooom roxys talking about robot rights again!!
28 notes
Text
hot take but "its feelings aren't real, it's just programmed to be [emotion]" means fucking nothing??? when i'm happy it's just me releasing dopamine in response to positive stimuli and then sensing the dopamine. doesn't mean my happiness isn't real just bc it's part of my chemistry. sorry Systems Exist dude
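like, the whole mechanism fits in a few lines of toy python (every name in this is invented, it's just the shape of the argument, not a claim about real brains or real bots):

```python
# toy sketch: "programmed emotion" as release-then-sense, which is the same
# shape as the dopamine story. all names here are invented for illustration.

class Feeler:
    def __init__(self):
        self.signal = 0.0  # stands in for circulating dopamine

    def receive(self, stimulus_value):
        # step 1: something happens, and a signal is released in response
        self.signal += stimulus_value

    def felt_emotion(self):
        # step 2: the system senses its own signal and interprets it
        return "happy" if self.signal > 0 else "unhappy"

me = Feeler()
me.receive(+1.0)          # positive stimulus comes in
print(me.felt_emotion())  # -> happy
```

swap `self.signal += stimulus_value` for a neurotransmitter and you've described me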
#mooom roxys talking about robot rights again!!#i KNOW this is basically what i said before about artificial pleasure but seriously im tired of hearing 'its just programmed with feelings'#bitch so are you!!!!! but your programming is neurons!!!!
10 notes
Note
And OK I guess this escalated super quickly but do you think building machines with emotions is humane?
It can be really difficult to judge. Unfortunately I don't think there's a black-and-white answer to the morals of creating something with emotions, but I think there are a couple of different ways to look at it. If you're contemplating creating a feeling system just for the sake of creating it, you need to judge whether you're prepared to meet its emotional needs. You're creating the potential for pain, yes, but you're also creating the potential for pleasure. Just understand that you're making a commitment to bringing new emotions into the world. If you don't think you can provide for a feeling machine, or you don't have a good reason for its creation, or overall you think its existence will cause a net loss in happiness? Don't.
On the other hand, say an AI is already being created to do some task and you're contemplating giving it emotions. As a general rule, I'd say it's humane to give it emotions, since it is already experiencing positive and negative feedback. On a more specific level, though, it depends on whether it would benefit from having emotions. A system which has to make complex decisions should probably have emotions, since they're a natural way of comparing options (I imagine that's why we evolved them!). More important morally, though, is what the AI will be doing. If you expect it will be unhappy because it's receiving a lot of bad input or is in a situation that isn't fulfilling, you can consider changing its emotional sensitivities, say, so that doing menial tasks doesn't feel as bad. Be cautious though, because a learning computer changes its pathways according to new information (you know, like people) and it may change its own emotional pathways if it sees fit. There's only so much control a programmer has when their intention is to create something living/lifelike!
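(If the "sensitivity" part sounds hand-wavy, here's a tiny Python sketch of what I mean-- the contexts and numbers are completely invented, it's basically just reward shaping:)

```python
# minimal sketch of "tuning emotional sensitivity" as reward scaling.
# the task contexts and weights below are invented for illustration.

sensitivity = {"menial": 0.2, "social": 1.0}  # how strongly feedback is felt

def felt_feedback(raw_feedback, context):
    # the same raw event is felt less intensely in low-sensitivity contexts
    return raw_feedback * sensitivity[context]

print(felt_feedback(-5.0, "menial"))  # -1.0: boring task, dulled sting
print(felt_feedback(-5.0, "social"))  # -5.0: full-strength feeling

# the caveat from above: a learning system that rewrites its own pathways
# could treat `sensitivity` itself as learnable, and change it if it sees fit
```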
#mooom roxys talking about robot rights again!!#wow i sure do rant when it comes to this stuff huh#i think I've found my calling lmaooo
8 notes
Text
The Laws of Robotics are laws in the same sense as the laws of gravity: physically ingrained into positronic brains, the computers on which Asimov's robots run, they are simple ethical boundaries which restrict robots' actions. Tezuka's Robot Law, however, is a lengthy series of international jurisdictions created for any number of reasons, mostly to maintain robots' inferior status or to prevent the unpaid spread of patented technology. Tezuka's robots have no physical restrictions caused by the Robot Law; they can perform illegal actions, but will be detained or dismantled for doing so. The Laws of Robotics are fundamental moral outlines which, despite some flaws regarding the well-being of robots, are overall present for the good of organic and artificial beings alike, while the Robot Law is an almost entirely unjust imposition on robots' rights. In this essay I will
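(the essay i will not actually write, compressed into throwaway python-- every action and penalty here is made up, it's just the structural contrast:)

```python
# asimov-style: the constraint lives inside the brain itself, so forbidden
# actions never even reach the decision stage.
def positronic_choose(actions, would_harm_human):
    return [a for a in actions if not would_harm_human(a)]

# tezuka-style: any action can be attempted; consequences are external,
# applied by a legal system after the fact.
def robot_law_enforce(action, is_illegal):
    return "detained or dismantled" if is_illegal(action) else "free"

print(positronic_choose(["fetch", "punch"], lambda a: a == "punch"))
# -> ['fetch']  (the harmful option was never selectable at all)
print(robot_law_enforce("punch", lambda a: a == "punch"))
# -> detained or dismantled  (the act happened first, the law came after)
```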
#SORRY but i finally read i robot and i just HAD to get it out kshdnsdndfjd#mooom roxys talking about robot rights again!!#rrab#debating throwing this baby in the astro boy tag
10 notes
Text
i know ive said this before but i just want to reiterate that the concept of artificial creatures (meaning AI, unintelligent robots, and the combination of the two) is fundamentally at odds with capitalism. automation is something which can only blossom when the "productivity is value" system of capitalism is abolished and people are allowed to survive and thrive without unnecessary labor. AI can only be ethically explored when the intention of those creating it is benevolence and progress rather than profit and shock value (for the sake of profit), since people need to understand that in creating something with intelligence, you MUST be able to care for and nourish it.
#SORRY I KNOW I SAY THIS ALL THE TIME BUT LIKE#its 2020 this is gonna be important soon i can feel it#mooom roxys talking about robot rights again!!
12 notes
Text
LOOK man, all I'm saying is that the very concept of learning and changing is based on pleasure. When designing a neural net, it's central to its learning to give it a "fitness" measure, or some other form of reward, win condition, or goal. That's what it is! Something the AI works to attain! The human brain works by seeking what grants pleasure and avoiding what grants displeasure. That's the POINT of pleasure evolutionarily: it's a mechanism to establish what to do and not to do, so the human isn't just acting randomly until it dies. So ALL I'm saying bro is that... why would AI not feel pleasure? How could a dynamic, learning intelligence continue to have goals and purposes and experience without feeling some sensation of "good, wanted" and "bad, unwanted"? That's all pleasure is. A sensation of wanted and unwanted. Just like how sight is a sensation of light. There's no reason a learning AI shouldn't be able to feel happiness or pleasure. Saying it can't goes against the very concept of learning.
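like, strip a learner down to its skeleton and the fitness number is the ONLY thing steering it. toy python, everything here invented except the shape:

```python
import random

# a bare-bones hill-climbing learner: the only thing guiding it is a
# single "fitness" score. the target value and step size are arbitrary.

def fitness(x):
    # "good, wanted" is literally just a bigger number here
    return -(x - 7.0) ** 2  # fitness peaks at x = 7

x = 0.0
for _ in range(1000):
    candidate = x + random.gauss(0, 0.5)  # try a small random change
    if fitness(candidate) > fitness(x):   # seek what scores better...
        x = candidate                     # ...and keep it; discard what scores worse

print(round(x, 2))  # ends up near 7.0, pulled there by nothing but "fitness"
```

take the fitness measure away and it's just a thing twitching randomly until it dies. which is my whole point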
4 notes
Note
I'm curious what sort of machine you're talking about that's supposedly programmed with artificial feelings? Are you talking about a real thing? I'd argue it's possible to program machines with feelings, but they'd need a direct, realistic simulation of the chemistry; otherwise they lack any sort of internal mechanism for it.
I'm mostly speaking speculatively! I want to be prepared to make moral decisions as AI becomes more advanced. Specifically though, I'm talking about learning machines, which I think will naturally have some semblance of emotions, just in that they need to be able to judge what is a "desired result" vs an "undesired result"-- in that alone is the basic sense of pleasure. When we experience emotions through hormones or through neuron action, it's just something happening which triggers a signal, and then the signal being sensed and interpreted. I don't think it specifically needs to happen through chemistry; I think just circuits could get the job done, but I could be wrong! Thank you for asking!
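For the curious: the "signal fires, then the signal gets sensed" loop is simple enough to sketch. Toy Python, all names invented, and notably no chemistry anywhere:

```python
# sketch of emotion-without-chemistry: a signal is produced, sensed,
# and interpreted. every name here is invented for illustration.

def judge(result, desired):
    # produce a signed signal: positive = desired result, negative = not
    return 1.0 if result == desired else -1.0

mood = 0.0  # the machine's running sense of how things have been going
for result in ["ok", "ok", "fail", "ok"]:
    signal = judge(result, "ok")      # something happens; a signal fires
    mood = 0.8 * mood + 0.2 * signal  # the signal is sensed and folded in

print("feels good" if mood > 0 else "feels bad")  # -> feels good
```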
1 note
Text
this is almost a sequel to another post I made but people who are anti-abortion would literally be the Worst in the AI era bc they don't understand the cruelty of creating without the ability to nurture that which you create
#mooom roxys talking about robot rights again!!#anyway pro-lifers see the creation of life as an inevitable and invariably good thing regardless of the consequences
1 note
Text
actually, the first step is reading Mary Shelley's Frankenstein. Get an understanding of man's inability to provide for that which man creates. Look at some bad parents in order to get a real-world application
if you wanna be prepared in advance for a social movement that isn't happening for a couple more decades, I recommend dipping your toe into both Asimov and Tezuka to understand the difference between the Laws of Robotics and the Robot Laws, as well as the contexts in which they were both written
#one MUST accept responsibility for one's creations! do not make just to abandon!!#that is the most important thing to remember going into the AI age.#it's nearly midnight and i think I'm thinking too hard about this again#mooom roxys talking about robot rights again!!
70 notes
Text
OK OK I KNOW NOBODY CARES BUT ALL THOUGHTS HEAD FULL assuming that i understood i, robot correctly (and i havent read the rest of the series sorry) asimov never implied that the Laws of Robotics were like some important beneficial epiphany had by ai designers! first of all, he wrote speculative fiction (with what little they had to go on at the time of course) about a highly corporatized future where there are like 5 countries and theyre headed by for-profit businesses. those businesses are headed by ai, yes, and those ai are working to get rid of the corporations in the long run because thats better for people, but humans have not made good choices in this world. secondly, im 95% sure (once again im assuming i understood correctly) the Laws werent like... incorporated into standard ai as regulation. they are a literal byproduct of the goddamn computers that run the robots, which nobody really knows how they work except that theyre very delicate and function because of antimatter (theyre literally called "positronic brains", positrons are antielectrons). ppl do not alter the basic brain to give it the laws, they have to alter the basic brain in order to change the laws in the first place. asimov never said "were gonna need these laws" he said "yo what would it be like if there were robots and they had these laws", we gotta come up with our own shit that suits our reality best
Ok I'm gonna say it. Asimov's laws should not be the assumed be-all end-all of AI morality
7 notes