So I'm seeing this post doing the rounds and some of the tags struck me as coming from people who don't really understand what AI is or what it does, and who are therefore bringing the correct energy - frustration at AI and how overused it currently is - but for the wrong reasons. I'm a CS student who's built neural networks as part of my coursework, and I'm sure I've got some stuff wrong (please correct me if I did), but in broad strokes:
Snapchat's AI is not lying to you deliberately or maliciously. When you ask it "do you know my location data" and it tells you "no, but I know your IP address", it's not saying that because it's trying to hide that it knows your location; it's saying that because AI chatbots are fancy text prediction tools, and in the bot's training data the most common response to "do you know my location" is "no, but I know your IP address".
AI text prediction is a black box, and it's difficult to put manual overrides in. If Snapchat has your location data, it's probably because you consented to share it with the app (for Snap Map or similar), and there's no reason for them to hide that. The bot isn't lying to you because Snapchat told it to; it's lying to you because it has no concept of truth. Chatbots can't differentiate between true and false information, they can only tell you which word is most likely to come next, given a series of preceding words.
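To make "predicting the most likely next word" concrete, here's a toy sketch in Python. Real chatbots use huge neural networks, not the simple bigram counting below, and the corpus here is made up for illustration - but the principle is the same: the model picks whatever most often followed your words in its training data, with no notion of whether that answer is true.

```python
from collections import Counter, defaultdict

# Made-up stand-in for training data: conversations the model has "seen".
corpus = (
    "do you know my location no but i know your ip address "
    "do you know my location no but i know your ip address "
    "do you know my name yes i know your name"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training data."""
    return following[word].most_common(1)[0][0]

# The model answers "no" after "location" - not because it checked anything,
# but because "no" is simply the most frequent continuation in its data.
print(predict_next("location"))
```

The toy model will say "no" after "location" even if it *does* have your location - there's no truth-checking step anywhere, only frequency counting.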
There are a lot of problems with AI, especially generative AI. It's being overapplied in a tonne of industries, to the detriment of working artists and writers, and to the detriment of the art being produced and published at the moment. It's being pushed on us by companies everywhere, which is largely really annoying. The energy costs to train these models are massive, and they're usually powered by fossil fuels, so your chatbot has a bigger carbon footprint than you'd expect. They need immense amounts of training data, and the people sourcing that data aren't scrupulous about where they find it, so the models are typically (by necessity & design) built off thousands of unconsenting people's intellectual property. They're basically black boxes, so if a response comes back that's unexpected, even the people who built them often have no idea how that happened - if an AI causes harm, it's very difficult to hold it or the people behind it truly accountable. They're often confidently, believably wrong (because they have no concept of what truth is), which means people trust the answers they give far more than they should. The datasets are often refined by people paid next to nothing in the developing world.
I just think we should be careful about anthropomorphising AI and getting angry at the AI itself, rather than the people who built it. The AI is not malicious, it's your phone's autocorrect on steroids, and if we come after the chatbots for lying we divert energy away from protesting the plagiarism, carbon footprint, and job theft that AI requires and facilitates. The snapchat chatbot isn't bad because it sometimes tells you incorrect things, it's bad because the processes required to train it are harmful.
So to summarise:
What AI is:
A waste of resources and a disproportionately large CO2 producer due to the large power demand of training complex neural nets
Built off the theft of thousands of people's intellectual property
Undermining the jobs of working artists and writers
Frequently incorrect, and not a replacement for real research
What AI is not:
A person
Capable of thought
Actively and deliberately lying to you to further some nefarious goal
#chats#AI#like don't get me wrong! AI is bad! it's bad for art and it's bad for the soul and I hate that it's everywhere#but protest the right stuff#getting angry at snapchat's AI for lying is telling snapchat “improve your AI so it stops lying”#rather than “stop making AI because it's fundamentally incompatible with an ethical business model”#people going “oh fuck the robot is lying to me we live in a dystopia” are calling the end result bad and not the process that made it#implying that a robot that DOESN'T lie to you but is otherwise identical would be fine#and that's not true! the robot lying is a problem but it's not the actual problem bc the actual problem is with the processes that made it#and the things it allows shitty exploitative executives to do