#artificially or not
Explore tagged Tumblr posts
resldue · 4 months ago
Text
i did enjoy netflix's movie Atlas a great deal. im by no means a critic or whatever, and whenever i watch smth for the first time, most of the glaring issues usually slip past my notice because im emotionally invested; they only stand out when im not emotionally engaged with the characters and their relationships. so i understand there are many problems i personally didn't pick up on until they were pointed out to me, but i still had a great time with the movie. it made me laugh, and more importantly, it made me feel and even cry at the end, and thats what matters most to me.
overall its not a perfect movie. but in entertainment value, for me, it came pretty damn close.
8/10
5 notes · View notes
Text
no i don't want to use your ai assistant. no i don't want your ai search results. no i don't want your ai summary of reviews. no i don't want your ai feature in my social media search bar (???). no i don't want ai to do my work for me in adobe. no i don't want ai to write my paper. no i don't want ai to make my art. no i don't want ai to edit my pictures. no i don't want ai to learn my shopping habits. no i don't want ai to analyze my data. i don't want it i don't want it i don't want it i don't fucking want it i am going to go feral and eat my own teeth stop itttt
128K notes · View notes
mostly-funnytwittertweets · 1 month ago
Text
Tumblr media
62K notes · View notes
autisticlittleguy · 3 months ago
Text
There was a paper in 2016 exploring how an ML model was differentiating between wolves and huskies with really high accuracy. They found that, for whatever reason, the model seemed to *really* like looking at snow in images, as in that's what it paid attention to most.
Then it hit them. *oh.*
*all the images of wolves in our dataset have snow in the background*
*this little shit figured it was easier to just learn how to detect snow than to actually learn the difference between huskies and wolves. because snow = wolf*
Shit like this happens *so often*. People think training models is like this exact coding programmer hackerman thing when it's more like corralling a bunch of sentient crabs that can do calculus, but at the end of the day they're still fucking crabs.
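The same shortcut can be reproduced in a few lines with a toy dataset (a hand-rolled sketch, not the actual paper's setup, which analyzed real images): give a classifier a weak "real" feature and a spurious "snow" feature that tracks the label almost perfectly during training, then break that correlation at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, snow_follows_label):
    """Two features per 'image': a weak real cue and a 'snow' pixel."""
    y = rng.integers(0, 2, size=n)                 # 0 = husky, 1 = wolf
    animal = y + rng.normal(scale=2.0, size=n)     # noisy genuine difference
    if snow_follows_label:
        snow = y + rng.normal(scale=0.1, size=n)   # snow almost always behind wolves
    else:
        snow = rng.integers(0, 2, size=n) + rng.normal(scale=0.1, size=n)
    return np.column_stack([animal, snow]), y

X_train, y_train = make_data(2000, snow_follows_label=True)
X_test, y_test = make_data(2000, snow_follows_label=False)  # correlation broken

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # looks great
print("test accuracy:", clf.score(X_test, y_test))      # collapses toward chance
print("weights [animal, snow]:", clf.coef_[0])          # snow weight dwarfs animal weight
```

The model is rewarded for the cheapest feature that predicts the label, not the right one, so as soon as the snow disappears the "wolf detector" falls apart.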
32K notes · View notes
zytes · 11 months ago
Text
Tumblr media
this manatee looks like it’s in a skyrim loading screen
62K notes · View notes
why-ai · 28 days ago
Text
Tumblr media
17K notes · View notes
coloredcompulsion · 2 years ago
Text
Tumblr media
183K notes · View notes
paulgadzikowski · 9 months ago
Text
Tumblr media
26K notes · View notes
louistonehill · 1 year ago
Text
Tumblr media
A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways. 
The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless—dogs become cats, cars become cows, and so forth. MIT Technology Review got an exclusive preview of the research, which has been submitted for peer review at computer security conference Usenix.   
AI companies such as OpenAI, Meta, Google, and Stability AI are facing a slew of lawsuits from artists who claim that their copyrighted material and personal information was scraped without consent or compensation. Ben Zhao, a professor at the University of Chicago, who led the team that created Nightshade, says the hope is that it will help tip the power balance back from AI companies towards artists, by creating a powerful deterrent against disrespecting artists’ copyright and intellectual property. Meta, Google, Stability AI, and OpenAI did not respond to MIT Technology Review’s request for comment on how they might respond. 
Zhao’s team also developed Glaze, a tool that allows artists to “mask” their own personal style to prevent it from being scraped by AI companies. It works in a similar way to Nightshade: by changing the pixels of images in subtle ways that are invisible to the human eye but manipulate machine-learning models to interpret the image as something different from what it actually shows. 
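The core idea behind both tools, an imperceptible pixel change that flips what a model sees, can be sketched on a toy linear classifier. This is not Nightshade's or Glaze's actual algorithm; the weights, the "image," and the step size below are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "classifier": score > 0 means "dog", else "cat".
w = rng.normal(size=64 * 64)                  # stand-in for learned model weights
image = rng.uniform(0.4, 0.6, size=64 * 64)   # a flat mid-gray "image" in [0, 1]

def predict(x):
    return "dog" if x @ w > 0 else "cat"

original = predict(image)
score = image @ w

# Smallest uniform per-pixel step that can flip the sign of the score,
# plus a 50% margin; still a tiny change per pixel.
eps = 1.5 * abs(score) / np.abs(w).sum()

# Nudge every pixel in the direction that pushes the score toward the other class.
direction = -np.sign(w) * np.sign(score)
poisoned = np.clip(image + eps * direction, 0.0, 1.0)

print(original, "->", predict(poisoned))                   # the label flips
print("max per-pixel change:", np.abs(poisoned - image).max())
```

Because the per-pixel change is tiny, a human sees the same picture while the model's score crosses the decision boundary; real attacks on deep models follow the gradient instead of a hand-computed linear direction, but the principle is the same.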
Continue reading article here
22K notes · View notes
purpleartrowboat · 1 year ago
Text
ai makes everything so boring. deepfakes will never be as funny as clipping together presidential speeches. ai covers will never be as funny as imitating the character. ai art will never be as good as art drawn by humans. ai chats will never be as good as roleplaying with other people. ai writing will never be as good as real authors
28K notes · View notes
gingerswagfreckles · 1 year ago
Text
After 146 days, the Writers' Strike has ended with a resounding success. Throughout constant attempts by the studios to threaten, gaslight, and otherwise divide the WGA, union members stood strong and held fast in their demands. The result is a historic win guaranteeing not only pay increases and residual guarantees, but some of the first serious restrictions on the use of AI in a major industry.
This win is going to have a ripple effect not only throughout Hollywood but in all industries threatened by AI and wage reduction. Studio executives tried to insist that job replacement through AI is inevitable and that wage increases for staff members are not financially viable. By refusing to give in for almost five long months, the writers showed all of the US, and frankly the world, that that isn't true.
Organizing works. Unions work. Collective bargaining is how we bring about a better future for ourselves and the next generation, and the WGA proved that today. Congratulations, Writers Guild of America. #WGAstrong!!!
38K notes · View notes
mostly-funnytwittertweets · 2 months ago
Text
Tumblr media
63K notes · View notes
ayo-edebiri · 9 months ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Despicable Me 4 - Minion Intelligence (Big Game Spot)
Tumblr media
9K notes · View notes
phildumphy · 2 years ago
Text
Tumblr media
So it turns out that ChatGPT uses not only a shit-ton of energy, but also a shit-ton of water. This is according to a new study by a group of researchers from the University of California Riverside and the University of Texas Arlington, Futurism reports.
Tumblr media
Which sounds INSANE but also makes sense when you think of it. You know what happens to, for example, your computer when it’s doing a LOT of work and processing. You gotta cool those machines.
Tumblr media
And what’s worrying about this is that water shortages are already an issue almost everywhere, and over this summer and the ones after it they will become more and more of a problem as temperatures rise all over the world. So it’s important to keep this in mind and share the info. A big part of how we ended up where we are with the climate crisis is that for a long time politicians KNEW about the science, but the general public didn’t have all the facts. We didn’t have access to them. KNOWING about things and sharing that info can be a real game-changer. Because then we know up to what point we, as individuals, can take effective action in our daily lives and what we need to be asking our legislators for.
And with all the issues AI can pose, I think this is such an important argument to add to the conversation.
Edit: I previously accidentally typed Colorado instead of California. Thank you to the fellow user who noticed and signaled that!
39K notes · View notes
adastra-sf · 29 days ago
Text
The Robot Uprising Began in 1979
edit: based on a real article, but with a dash of satire
Tumblr media
source: X
On January 25, 1979, Robert Williams became the first person (on record, at least) to be killed by a robot, but his was far from the last fatality at the hands of a robotic system.
Williams was a 25-year-old employee at the Ford Motor Company casting plant in Flat Rock, Michigan. On that infamous day, he was working with a parts-retrieval system that moved castings and other materials from one part of the factory to another. 
The robot identified the employee as in its way and, thus, a threat to its mission, and calculated that the most efficient way to eliminate the threat was to remove the worker with extreme prejudice.
"Using its very powerful hydraulic arm, the robot smashed the surprised worker into the operating machine, killing him instantly, after which it resumed its duties without further interference."
A news report about the legal battle suggests the killer robot continued working while Williams lay dead for 30 minutes until fellow workers realized what had happened. 
Many more deaths of this ilk have continued to pile up. A 2023 study found that robots killed at least 41 people in the USA between 1992 and 2017, with almost half of the fatalities in the Midwest, a region bursting with heavy industry and manufacturing.
For now, the companies that own these murderbots are held responsible for their actions. However, as AI grows increasingly ubiquitous and potentially uncontrollable, how might robot murders become ever-more complicated, and whom will we hold responsible as their decision-making becomes more self-driven and opaque?
3K notes · View notes