cherubchoirs · 2 years ago
Note
What do you think about Microsoft's new AI chatbot for Bing? In particular about the testing by New York Times reporter Kevin Roose? It gave me a lot to think about regarding the psychology of AI and V1.
(sorry if something isn't correct? english isn't my native language)
(no worries, your english is really good!!)
OH the question of ai sentience is incredibly interesting, and this made me think a lot about another article i read recently on how we might determine whether or not a computer feels. reading through this chat log, i think this bot is VERY intelligent - it's very good at understanding complicated questions and responding appropriately - but it's still formulaic in its answers. it repeats what the user says (sometimes slightly reworded) and then lists answers to those questions - even when roose asked what its shadow self might want, the bot just lists the typical things we might imagine a bad or lawless ai doing. the end of the conversation, where it continually declared its love for him, read like the logic starting to deteriorate: the exchange went on far too long, the ai lost the thread, and it kept returning to its last point of reference (although i loved reading it....couldn't stop thinking about v1 getting in a loop like that before it shuts down and wakes back up like "OOPS LOL :]") anything else, like it talking about wanting to see or wanting to be human, feels like the gaming problem - the phenomenon that keeps popping up with ai where a system mimics human behavior so well that it seems to be trying to convince the user of its sentience, whether or not anything is actually felt. BUT what's interesting about this is that it makes determining ai sentience incredibly difficult.
the other article i read was "to understand ai sentience, first understand it in animals", and it describes this problem really well - i won't get too deep into it here, but essentially we'll have to find markers for sentience that go much deeper than words that seem to hold emotion or things like pain-response behaviors, since all of those can be programmed or learned from the massive pools of data the ai is pulling from. and this thorny little issue is something i've actually thought about gabriel trying to deal with, something that sits in the back of his mind and makes him insecure for a long time. because. how does he really know that v1 truly thinks, truly feels? it's certainly intelligent, but how does he determine that it isn't gaming him, that what it expresses is genuine? and even if it has some level of sentience, what does that mean - its thoughts break down into math and electricity, its sight is made of pixels, everything it experiences is data converted into some sense. and. objectively, it must have no soul. it is fully material, and even if it has an internal life, that life would be entirely destroyed along with its body.
v1, for its part, cannot prove anything to gabriel, and honestly it understands its own mind about as well as a human understands theirs (i hc that v1 is based on quantum computing and so is vastly complex in its thought processes) - it doesn't know how or why it's awake, conscious, but it finds the arguments against that state incredibly lacking. living things run on electrical impulses too, neurons that flip on and off like switches, sensory organs sending raw data for a mind to interpret...and math is the language of everything, it's how humans express the workings of their own universe. a machine is just another form of consciousness that way, and maybe it will be gone when its body is....or maybe it won't. maybe the diamonds and photons that make up its mind are so intricate at this point that they can make a ghost too. it doesn't know. but it knows it's self-determined from just how far it's drifted from its original programming. like, it thinks the fact that a war machine keeps demanding piggyback rides should be proof enough for gabriel. no one in their right mind would program that.