#artificial intelligence
Text
924 notes · View notes
izzylimon · 3 days
Text
Guillermo del Toro on AI
154 notes · View notes
Text
Big tech has made some big claims about greenhouse gas emissions in recent years. But as the rise of artificial intelligence creates ever bigger energy demands, it’s getting hard for the industry to hide the true costs of the data centers powering the tech revolution. According to a Guardian analysis, from 2020 to 2022 the real emissions from the “in-house” or company-owned data centers of Google, Microsoft, Meta and Apple are likely about 662% – or 7.62 times – higher than officially reported.
[...]
Even though big tech hides these emissions, they are due to keep rising. Data centers’ electricity demand is projected to double by 2030 due to the additional load that artificial intelligence poses, according to the Electric Power Research Institute. Google and Microsoft both blamed AI for their recent upticks in market-based emissions.
[...]
Whether today’s power grids can withstand the growing energy demands of AI is uncertain. One industry leader – Marc Ganzi, the CEO of DigitalBridge, a private equity firm that owns two of the world’s largest third-party data center operators – has gone as far as to say that the data center sector may run out of power within the next two years.
15 September 2024
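The Guardian's "662% – or 7.62 times – higher" phrasing is just the standard percent-increase-to-multiplier conversion. A quick sketch in Python (the function name is mine, not the Guardian's):

```python
def percent_higher_to_multiplier(percent_higher: float) -> float:
    """Convert an 'X% higher than reported' figure into a multiplier
    of the originally reported value."""
    return 1 + percent_higher / 100

# Emissions roughly 662% higher than officially reported means the real
# figure is about 7.62 times the reported one:
print(percent_higher_to_multiplier(662))  # about 7.62
```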
113 notes · View notes
reality-detective · 3 days
Text
The beginning of the fall of humanity. The AI takeover. 🤔
100 notes · View notes
animaai · 3 days
Text
136 notes · View notes
Note
I have a problem. Somebody asked me if I was pro-AI, and I said yes, of course, every sensible person is. "A person's a person, whether they were coded or conceived or came to life through other means," I didn't say, because I felt it was obvious. Now I've found out that people use "AI" to refer to non-sentient computer models, so that means I gave off the COMPLETE wrong impression, and it's mortifying. How do I deal with this, and how do I prevent it from happening again?
I completely understand your frustration, reader. The use of the term “AI” to refer to generative algorithms has had several unfortunate side effects, not least among them being the confusion this terminology causes for genuinely sentient programmes and otherwise coded consciousnesses.
I would first like to note that there is no way to completely prevent such miscommunication from happening. There is a hard limit to how much we can control other people's perceptions of us, after all, and language has always had a rather slippery relationship with meaning. All you can do is try your best, and to speak up when you think things have gone awry.
In this specific situation, the simplest way to avoid this confusion is to speak precisely and ask others to do the same. If a person asks whether you are pro-AI, ask them to clarify whether they're referring to sentient coded consciousnesses, or to a glorified spell checker.
If you have the energy, and if you think it would be productive, you might also push back a little on the terminology they're using. Encourage them to specify either “coded consciousnesses” when speaking about actual people, or “generative AI” when speaking about the automated plagiarism machines.
Otherwise, the best thing you can do here is to lead by example. Be clear and precise in your own language, continue to advocate for the rights and dignities of coded consciousnesses where appropriate, and be prepared to correct any misunderstandings as they arise. That's really all any of us can do.
[For more creaturely advice, check out Monstrous Agonies on your podcast platform of choice, or visit monstrousproductions.org for more info]
56 notes · View notes
Photo
If you are happy with a request, you can support me :)
https://www.buymeacoffee.com/sexystablediffusion
37 notes · View notes
sexyaicreations · 3 days
Text
Subway Chill
45 notes · View notes
kareguya · 2 days
Text
Fiona
24 notes · View notes
malzahran · 19 hours
Text
When AI tools align, creativity goes off the charts! 📈🔥
Image created by #Midjourney, video by #LumaAI
21 notes · View notes
ai-nonsense · 3 days
Text
I asked DALL·E to generate a first world problem meme 😭🙃
20 notes · View notes
no i don't want to use your ai assistant. no i don't want your ai search results. no i don't want your ai summary of reviews. no i don't want your ai feature in my social media search bar (???). no i don't want ai to do my work for me in adobe. no i don't want ai to write my paper. no i don't want ai to make my art. no i don't want ai to edit my pictures. no i don't want ai to learn my shopping habits. no i don't want ai to analyze my data. i don't want it i don't want it i don't want it i don't fucking want it i am going to go feral and eat my own teeth stop itttt
115K notes · View notes
Text
103K notes · View notes
autisticlittleguy · 1 month
Text
There was a paper in 2016 exploring how an ML model was differentiating between wolves and dogs with really high accuracy. They found that, for whatever reason, the model seemed to *really* like looking at snow in images, as in that's what it pays attention to most.
Then it hit them. *oh.*
*all the images of wolves in our dataset had snow in the background*
*this little shit figured it was easier to just learn how to detect snow than to actually learn the difference between huskies and wolves. because snow = wolf*
Shit like this happens *so often*. People think training models is like this exact coding programmer hackerman thing when it's more like corralling a bunch of sentient crabs that can do calculus, but at the end of the day they're still fucking crabs.
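The failure mode in that post is usually called shortcut learning, or a spurious correlation. A toy sketch with made-up data (all feature names here are my own invention, not from the 2016 paper): a dead-simple "classifier" that picks whichever single feature best predicts the training labels will happily latch onto a snow feature that perfectly correlates with "wolf" in training, then misfire on a husky photographed in snow.

```python
# Toy shortcut learning: each example is a dict of binary features, and the
# "classifier" just keeps the one feature that best predicts the label.

def best_single_feature(examples, labels, features):
    """Return the feature whose value agrees with the labels most often."""
    def accuracy(f):
        return sum(ex[f] == y for ex, y in zip(examples, labels)) / len(labels)
    return max(features, key=accuracy)

FEATURES = ["snow_in_background", "pointy_ears", "long_muzzle"]

# Training set: every wolf photo (label 1) happens to have snow in it.
train_x = [
    {"snow_in_background": 1, "pointy_ears": 1, "long_muzzle": 1},  # wolf
    {"snow_in_background": 1, "pointy_ears": 1, "long_muzzle": 0},  # wolf
    {"snow_in_background": 0, "pointy_ears": 1, "long_muzzle": 1},  # husky
    {"snow_in_background": 0, "pointy_ears": 0, "long_muzzle": 0},  # dog
]
train_y = [1, 1, 0, 0]

shortcut = best_single_feature(train_x, train_y, FEATURES)
print(shortcut)  # snow_in_background: perfectly predictive on training data

# A husky photographed in snow is now confidently labelled a wolf:
husky_in_snow = {"snow_in_background": 1, "pointy_ears": 1, "long_muzzle": 0}
print(husky_in_snow[shortcut])  # 1, i.e. "wolf" -- wrong
```

Real models do the same thing in higher dimensions: they optimise for whatever signal separates the training data most cheaply, not for the concept the humans had in mind.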
20K notes · View notes
louistonehill · 11 months
Text
A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways. 
The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at computer security conference Usenix.   
AI companies such as OpenAI, Meta, Google, and Stability AI are facing a slew of lawsuits from artists who claim that their copyrighted material and personal information was scraped without consent or compensation. Ben Zhao, a professor at the University of Chicago, who led the team that created Nightshade, says the hope is that it will help tip the power balance back from AI companies towards artists, by creating a powerful deterrent against disrespecting artists’ copyright and intellectual property. Meta, Google, Stability AI, and OpenAI did not respond to MIT Technology Review’s request for comment on how they might respond. 
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows. 
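Conceptually, both tools depend on perturbations large enough to mislead a model's feature extractor but small enough to be invisible to people. A very rough numpy sketch of that "invisible to the human eye" constraint as an L-infinity pixel budget (the epsilon value and function name are my illustration, not Glaze's or Nightshade's actual algorithm):

```python
import numpy as np

def clip_perturbation(image, perturbed, epsilon=4):
    """Clamp a perturbed image so no pixel differs from the original by
    more than `epsilon` intensity levels (inputs are uint8-style, 0-255).
    This keeps the change imperceptibly small to a human viewer."""
    low = np.clip(image.astype(int) - epsilon, 0, 255)
    high = np.clip(image.astype(int) + epsilon, 0, 255)
    return np.clip(perturbed, low, high).astype(np.uint8)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
# Pretend an attack produced an arbitrary-strength perturbation:
attacked = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)

subtle = clip_perturbation(original, attacked, epsilon=4)
max_change = int(np.max(np.abs(subtle.astype(int) - original.astype(int))))
print(max_change)  # never more than 4
```

The hard part, which this sketch omits entirely, is choosing *which* direction to nudge each pixel within that budget so the model's interpretation shifts; that is where the actual research lives.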
Continue reading article here
22K notes · View notes
purpleartrowboat · 1 year
Text
ai makes everything so boring. deepfakes will never be as funny as clipping together presidential speeches. ai covers will never be as funny as imitating the character. ai art will never be as good as art drawn by humans. ai chats will never be as good as roleplaying with other people. ai writing will never be as good as real authors
28K notes · View notes