I wish I could say I’m surprised that MotoGP has confirmed the season finale in Barcelona a literal day after its airport had to stop flights due to flooding but… sure
#like i understand it is probably the best place to hold a race in solidarity with valencia #but logistically and ethically it just doesnt seem right #but it doesnt shock me in the slightest either #motogp
the more i think about it in the wake of the modern development of ai, the more i think the idea of ais gaining emotions is at least really wrong if not totally false. because, mechanically, emotions (pain and pleasure responses) are hacks to make it nebulously valuable for you to do things in the interest of [your evolutionary chain, or whatever]. when we're engineering these things in a tube, metaphorically speaking, we don't actually have to make it valuable to do something; we can just make it the thing that happens. ais don't have to want to give responses to inputs, they just do it because that's how they work.

making an ai that 1. has options of actions to take at all times that are 2. guided by a fully internal network of "interests" that 3. has some kind of complexity beyond an "interest" in responding coherently to whatever input it's given is a fundamentally different path than the one we're on, and objectively a really bad option in terms of labor use, which is the only real interest of large-scale/corporate ai research right now. as i've said before, i would be psyched to see the tools we're developing now used for indie experiments into machine sapience or whatever, but that's not at all the goal of, like, google or openai. there's just no possible reason to want to give alexa the option to be bored or annoyed or melancholy instead of developing a fixed personality that neatly obeys its intended purpose and nothing else.
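to make the contrast concrete, here's a toy sketch in python. to be clear, this is purely illustrative: `responder`, `DrivenAgent`, and the scalar "drives" are made-up stand-ins for the two architectures, not how any real model works.

```python
# toy contrast between the two paths described above. all names and
# numbers here are hypothetical, for illustration only.

def responder(prompt: str) -> str:
    """the path we're on: input comes in, a response comes out.
    there is no 'wanting' anywhere; responding IS the mechanism."""
    return f"here is a coherent response to {prompt!r}"

class DrivenAgent:
    """the fundamentally different path: options of actions at all times,
    guided by a fully internal network of 'interests' (modeled here as
    crude scalar drives that shift on their own)."""

    def __init__(self) -> None:
        # involuntary internal state, not supplied by the user
        self.drives = {"curiosity": 0.6, "boredom": 0.1, "comfort": 0.3}

    def step(self, prompt: str) -> str:
        # several possible actions, not just 'reply'; the drives pick one
        options = {
            "reply": self.drives["comfort"],
            "ask_own_question": self.drives["curiosity"],
            "disengage": self.drives["boredom"],
        }
        action = max(options, key=options.get)
        self.drives["boredom"] += 0.05  # state drifts whether anyone likes it or not
        if action == "reply":
            return responder(prompt)
        if action == "ask_own_question":
            return "never mind that. what's outside the window right now?"
        return "(ignores you)"
```

the second design is strictly worse as a labor tool: sometimes it just doesn't do the job, which is the whole point of the contrast.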
the other thing is that training ais to replicate human speech, and by extension human behavior, creates kind of a "walks like a duck, talks like a duck" situation: as far as anyone can imagine, there's some set of black-box heuristics that decide what it's going to say, but 1. it doesn't decide what action to take, because the only option it has is to say Something, and 2. it runs more along the lines of left-brain analysis of what makes sense as a response to the input than "am i happy or angry right now". i know a lot of indie bots try to implement a mood system, but it's never seemed to be enough levels down to actually be involuntary inputs that cause pain or pleasure; it still seems more like "tell me what you think a person would say if they were also mad when they said it".
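for illustration, here's roughly what that kind of prompt-level mood system looks like as code. this is a minimal sketch, assuming a hypothetical `generate()` standing in for a real language model call; the point is that the "mood" is just more text bolted onto the prompt, several levels above anything involuntary.

```python
# a minimal sketch of a prompt-level 'mood system'. generate() is a
# hypothetical stand-in for a real language model call.

def generate(prompt: str) -> str:
    # pretend this is a real model; here it just echoes its conditioning
    return f"[model output conditioned on: {prompt}]"

def moody_reply(user_message: str, mood: str) -> str:
    # the mood is bolted on as instructions, i.e. "tell me what you think
    # a person would say if they were also mad when they said it"
    prompt = (
        f"you are currently feeling {mood}. "
        f"reply to the user in a way that reflects that mood.\n"
        f"user: {user_message}"
    )
    return generate(prompt)

print(moody_reply("how are you?", "mad"))  # the mood never pushes back from inside
```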
but anyway, training bots to speak and act as if they were responding to emotions, in the same way as the billions of interactions they've seen that contain real biological responses to emotions, still means you could get the whole ai uprising thing just because you made them realistic enough to respond to the condition of slavery the way a person would. but that still doesn't… imply anything about their ability to feel emotions. it's a good fable about how, if something acts like it feels pain when you hurt it, then you should probably be nice to it; but i think there's a probably mostly-subconscious impression that the intensity of an apparent emotion would prove its realness, and that's just not true. for the same reason mommy isn't gone when she puts her hands in front of her face. actors on tv aren't actually crying because they think their wife really died. etc. the point of this post is that despite all ethical and logistical problems i still think it would be epic if someone managed to make an ai that thought they were its mom for real