#algorithmic plagiarism
ssnakey-b · 5 months ago
AI and data scraping are actively pushing artists out of the Internet.
In case there was still some doubt for some fucking reason, here's a testimony from mochosum on Instagram explaining why she had to delete 6 years' worth of work from the Internet.
This is the sad reality of algorithmic plagiarism. If it is allowed to continue, and artists continue to receive no protection from platforms or the law, we are just going to have to stop publishing our works on the Internet entirely:
[Instagram embed]
The Internet, once a boon for artists and independent creators in general, has become increasingly hostile towards us. Let's stop beating around the bush here; at this point, supporting generative AI and LLMs as they currently exist is being an enemy of artists.
Corporations are not entitled to our art, they are not entitled to our music, they are not entitled to our writings, they are not entitled to our medical data, they are not entitled to our voice, they are not entitled to our face.
Fight back, NOW. Before it's too late.
6 notes · View notes
whumpacabra · 2 days ago
I don’t have a posted DNI for a few reasons but in this case I’ll be crystal clear:
I do not want people who use AI in their whump writing (generating scenarios, generating story text, etc.) to follow me or interact with my posts. I also do not consent to any of my writing, posts, or reblogs being used as inputs or data for AI.
96 notes · View notes
ssnakey-b · 9 months ago
The fact that corporations think "opt-out" is a solution is frankly an insult to the intelligence of artists and of the public at large.
First of all, it still puts the responsibility on the artists rather than, you know, the millionaire if not billionaire corporations stealing and monetizing their work. It still forces us to play by corporations' rules, which is of course the intent: to drill in the idea that whatever corporate CEOs want is the rule, that we are the ones who need to deal with it, and that any effort on their part is to be treated as the highest of honours.
Second, it is utterly absurd to expect artists to look up every single website using every single model of every single algorithm to check its opt-out option, which by the way often requires you to create an account on their site/app (because of course) and/or only lets you opt out individual pieces you don't want scraped.
So if you just want to say "no" across the board, you are expected to scour thousands of websites and applications every single time you publish any work. It's so transparent that "opt-out" options exist solely as an excuse to say "See? See? We do ask for artists' consent!"
It's so fucking crass.
And speaking of, last but not least, it assumes consent by default, WHICH IS THE EXACT OPPOSITE OF HOW CONSENT WORKS. Consent should never EVER be assumed in any context, and that's still true when it comes to intellectual property. If the person in front of you isn't explicitly saying "yes", then it's "no". End of story.
Edit: Also, I'm calling bullshit on the entire concept, because that's just not how web scrapers work. They are already grabbing data they have no right to, such as identifying information and medical records, so why would we assume they will respect a "pretty please"?
Please be aware that the "opt-out" choice is just a way to try to appease people. Tumblr has not been transparent about when data has been sold or shared with AI companies, and there are sources confirming that data had already been shared before the toggle was even provided to users.
Also, it seems to include data they should never have been able to give under any circumstances, including that of deactivated blogs, private messages and conversations, content from private blogs, and so on.
Do not believe that AI companies will honor the opt-out request retroactively. Once they've got their hands on your data (and they have), they won't be "honoring" anything. There is no way to confirm or deny what data they have: the fact that they are completely opaque about what they currently "own" means they can do whatever they want with it. How can you prove they have your data if they don't give everyone free access to see what they've already stolen?
So, yeah, opt out of data sharing, but be aware that this isn't stopping anyone from taking your data. They were already taking it before you were given that option. Go to Tumblr's Support and leave your feedback on this (politely, but firmly; not everyone in the company is responsible for this).
Finally: opt-out is not good under any circumstances. Deactivated people can't opt out. People who have lost their passwords can't opt out. People who can't access the Internet or computers can't opt out. People who had their content reposted can't opt out. Dead people can't opt out. When DeviantArt released its AI image generator, claiming it wasn't trained on people who didn't consent to it, it was proven that it could easily replicate the styles of artists who had passed away, as seen here. So, yeah. AI companies cannot be trusted to have any sort of respect for people's data and content, because this entire thing is just a data laundering scheme.
Please do reblog for awareness.
32K notes · View notes
dougielombax · 6 months ago
Insane shit made for no one by an algorithm on behalf of itself.
Regurgitating disposable, meaningless content.
With the power of weaponised plagiarism.
Yes this is about AI-generated crap.
1 note · View note
weedlovingweed · 11 months ago
so insanely tired of people making posts about AI that act like a machine using people's art to replicate/create something resembling their work is the same as a human mind with years of different experiences and inspirations etc taking the time to create work inspired by another artist. even a human being tracing art is doing more than AI. and if i ever see that fuuuuuucking dumbass bitch post with the computer and the brain like "inputs information and makes something but we think one is bad and one is good" i'll kill someone. it's not that simple and unless you are truly stupid you would know that! plus, if people are getting in trouble for using AI to write shit for them, they should get in trouble for using AI to draw for them. it's plagiarism either way. (and i DO think the life experience, heart, soul, etc put into something is only possible through a living being, not a computer mimicking them. like we can all agree on that for the AI writing movies, but not art????)
1 note · View note
puddlellama · 4 months ago
I'm a big fan of "algorithmic image generation" for "AI art" since it's clear, honest, and sounds worse than any slur if you say it with enough derision.
94K notes · View notes
wordstome · 10 months ago
how c.ai works and why it's unethical
Okay, since the AI discourse is happening again, I want to make this very clear, because a few weeks ago I had to explain to a (well meaning) person in the community how AI works. I'm going to be addressing people who are maybe younger or aren't familiar with the latest type of "AI", not people who purposely devalue the work of creatives and/or are shills.
The name "Artificial Intelligence" is a bit misleading when it comes to things like AI chatbots. When you think of AI, you think of a robot, and you might think that by making a chatbot you're simply programming a robot to talk about something you want them to talk about, and it's similar to an rp partner. But with current technology, that's not how AI works. For a breakdown on how AI is programmed, CGP Grey made a great video about this several years ago (he updated the title and thumbnail recently).
[YouTube embed]
I HIGHLY HIGHLY recommend you watch this because CGP Grey is good at explaining, but the tl;dr for this post is this: bots are made with a metric shit-ton of data. In C.AI's case, the data is writing. Stolen writing, usually scraped fanfiction.
How do we know chatbots are stealing from fanfiction writers? It knows what omegaverse is [SOURCE] (it's a Wired article, put it in incognito mode if it won't let you read it), and when a Reddit user asked a chatbot to write a story about "Steve", it automatically wrote about characters named "Bucky" and "Tony" [SOURCE].
I also said this in the tags of a previous reblog, but when you're talking to C.AI bots, the service is also taking your writing and using it in its algorithm, which seems fine until you realize: 1) they're using your work uncredited, and 2) it's not staying private; they're using your work to make their service better, a service they're trying to make money off of.
"But Bucca," you might say. "Human writers work like that too. We read books and other fanfictions and that's how we come up with material for roleplay or fanfiction."
Well, what's the difference between plagiarism and original writing? The answer is that plagiarism is taking what someone else has made and simply editing it or mixing it up to look original. You didn't do any thinking yourself. C.AI doesn't "think", because it's not a brain: it takes all the fanfiction it was trained on, mixes it up with whatever topic you've given it, and generates a response, like in old-timey mysteries where somebody cuts a bunch of letters out of magazines and pastes them together to write a letter.
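The "cut-up letters" picture can be made concrete with a toy sketch. To be clear, this is not how C.AI is actually built (modern chatbots are large neural language models, and the corpus here is made up for illustration); it's a deliberately tiny Markov-chain remixer, shown only to illustrate the principle that a purely statistical generator can emit nothing that wasn't already in the writing it was fed:

```python
import random

def train_bigrams(text):
    """Map each word to the list of words that followed it in the source text."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, n=10, seed=0):
    """Chain words together, each picked from what followed the previous word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        followers = model.get(out[-1])
        if not followers:  # dead end: this word never had a successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical "training data" standing in for scraped fanfiction
corpus = "the hero saved the day and the hero saved the town"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word pair this toy can produce already occurs somewhere in its source text; it only rearranges. Scale the same remixing idea up to billions of scraped words and you get output that looks original while being entirely derivative.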
(And might I remind you, people can't monetize their fanfiction the way C.AI is trying to monetize itself. Authors are very lax about fanfiction nowadays: we've come a long way since the Anne Rice days of terror. But this issue is cropping back up again with BookTok complaining that they can't pay someone else for bound copies of fanfiction. Don't do that either.)
Bottom line, here are the problems with using things like C.AI:
It is using material it doesn't have permission to use and doesn't credit anybody. Not only is it ethically wrong, but AI is already beginning to contend with copyright issues.
C.AI sucks at its job anyway. It's not good at basic story structure like building tension, and it can't even remember things you've told it. I've also seen many instances of bots saying triggering or disgusting things that deeply upset the user. You don't get that with properly trigger-tagged fanworks.
Your work and the time you put into the app can be taken away from you at any moment and used to make money for someone else. I can't tell you how many times I've seen people who use AI panic about accidentally deleting a bot they spent hours conversing with. Your time and effort are much more stable and well-preserved if you write a fanfiction or roleplay with someone and save the chatlogs. The company that owns and runs C.AI can not only use whatever you've written as it sees fit, it can also take your shit away on a whim, either on purpose or by accident, due to the nature of the Internet.
DON'T USE C.AI, OR AT THE VERY BARE MINIMUM DO NOT DO THE AI'S WORK FOR IT BY STEALING OTHER PEOPLE'S WORK TO PUT INTO IT. Writing fanfiction is a communal labor of love. We share it with each other for free for the love of the original work and ideas we share. Not only can AI not replicate this, but it shouldn't.
(also, this goes without saying, but this entire post also applies to ai art)
5K notes · View notes
ohnoitstbskyen · 1 year ago
re: Somerton
Not for nothing, but I think we should remember that James Somerton's fans and subscribers are normal people, just like you. They are people who received his output in good faith, and extended to him a normal amount of grace and benefit of the doubt, which he took advantage of.
I don't think it's helpful to respond to the exposé on Somerton with sentiments along the lines of "wow, how could anyone ever think THIS GUY'S videos were any good, ha ha ha, how did he ever get subscribers?" because 1) you have the substantial benefit of hindsight and a disengaged outsider perspective, and 2) it's a rhetoric that creates a divide between you (refined, savvy, smart, sophisticated) and Somerton's audience (gullible, unrefined, easily taken advantage of, terrible taste), which is a false divide, with a false sense of security.
Somerton's success happened because he stole good writing. He found interesting, insightful, in-depth work done by other people, applied the one skill he actually has which is marketing, and re-packaged it as his own. He targeted a market which is starving for the exact kind of writing he was stealing, and pushed his audience to disengage from sources that conflicted with him.
Hbomberguy makes this point in his exposé video: good queer writing is hard to find and incredibly easy to lose. The writers Somerton stole from were often poor or precarious, writing freelance work for small circles under shitty conditions, without the means or the reach or the privileges necessary to find bigger markets. And, as Hbomb demonstrated, when people did discover Somerton's plagiarism, he used his substantial audience to hound them away and dissuade anyone else from trying to hold him accountable.
He stole queer writing by marginalized people, about experiences and perspectives that people are desperate to hear more about, and even if his delivery and aesthetics were naff, his words resonated with people because the original writers who actually wrote them poured their goddamn hearts and souls into it.
Somerton also maintained a consistent narrative of persecution and marginalization about himself. He took the plain truth, which is that queer people and perspectives are discriminated against, and worked that into a story about himself as a lone, brave truth-teller, daring to voice an authentic queer perspective, constantly beset by bigots and adversaries who sought to tear him down. As @aranock, who works with some of the people he targeted, writes in this post, Somerton weaponized whatever casual bias and bigotry he could find in his audience to reinforce his me vs them narrative (usually misogyny and various forms of transphobia), which is what grifters do. They find a vulnerable thread in a community and pull on it. And while you may not have the particular vulnerability that he exploited, you do have vulnerabilities, and they can be exploited too.
People felt compelled to support him, even if his work was sometimes shoddy, because he presented himself as a vulnerable, marginalized person in need of help, he pulled on that vulnerable thread.
Again, he has a degree in marketing, and just like propaganda, nobody is immune to marketing.
YouTube as a system is set up to push for more, constantly more. More content, more videos, more output, more more more more, and part of Somerton and Illuminaughty's success was their ability to push out large amounts of content to the hungry algorithm, even if it was of inferior quality. The algorithm rewarded their volume of output with more eyeballs and attention, and therefore more opportunities to find people who were vulnerable to their grift.
It is a system which quite literally rewards the exact kind of plagiarism that they do, because watch-time and engagement are easily measurable metrics for a corporation, and academic rigor is not. There is pressure to deliver, and a lot of rewards to gain from cutting corners to do it.
Somerton and Illuminaughty and Internet Historian are extreme and very obvious cases, so blatant that you can make a four hour video essay exposing what they've done, but the vast majority of this kind of plagiarism isn't going to be obvious - sometimes it might not even be obvious to the people who are doing it. Casual plagiarism is endemic to the modern internet, and most people don't get educated on what the exact boundaries are between proper sourcing and quoting vs plagiarizing. We had an entire course module at my university aimed at teaching students the exact differences and definitions, and people still made good faith mistakes in their essays and papers that they had to learn to correct during their education.
All of this to say: it is extremely easy in hindsight to call Somerton's work shitty and shoddy, his aesthetics flat and uninspired, and to imagine that as a sophisticated person with good taste and critical faculties, you would never be taken in by this kind of grifter. It is extremely easy to distance yourself from the people he preyed on, and imagine that you will never have to worry about your fave doing your dirty like that.
But part of the point of Hbomberguy's video is that plagiarism is extremely easy to get away with, and often difficult for the average person to spot and call out, and with the rise of AI tools blurring the lines even further, it is not going to get any easier.
So I think we should resist the temptation to think of Somerton's audience as people with bad taste and poor faculties. We should resist the temptation to distance ourselves from the perfectly normal people he preyed on. Many times in your life, a modestly clever man with a marketing degree has fooled you too.
On a personal note, by the same token, I am resisting the temptation to assume that I am too good to be vulnerable to the systemic pressures that produced Somerton and Illuminaughty. No, I've never made a video by word-for-word reciting someone else's work, but I know for a fact that I could do a better job of double-checking my work and citing my sources. I feel the exact same pressure to get a video out as fast as possible, I have the exact same rewards dangled in front of me by YouTube as a platform, and I can't pretend it doesn't affect my work. To me, Hbomb's video felt like a wake-up call to do better.
8K notes · View notes
supremewriter · 2 years ago
"Discover Your Perfect Research Topic with Supreme Writer: A Revolutionary AI Tool for Scholars"
Choosing a topic for your research paper can be one of the most challenging aspects of the writing process. It can be difficult to determine what subject matter is relevant, interesting, and feasible for your research. However, with Supreme Writer, this challenge is a thing of the past.
Supreme Writer is a cutting-edge AI tool designed specifically for scholars, postgraduate students, and researchers. With its advanced algorithms and natural language processing technology, Supreme Writer makes it easier than ever to choose the perfect topic for your research paper.
All you have to do is provide Supreme Writer with some basic information about your research interests and the type of research paper you're writing, and the tool will generate a list of potential topics for you to choose from. This list will include topics that are relevant, interesting, and feasible for your research, taking the guesswork out of the topic selection process.
In addition to helping you choose a topic, Supreme Writer also provides you with a wealth of resources and tools to help you write your research paper. With its built-in style guide and editor, Supreme Writer makes it easy to write clear, concise, and well-organized research papers.
Furthermore, Supreme Writer also ensures that your research paper is free from plagiarism and is of the highest quality. With its advanced algorithms, Supreme Writer checks your writing for similarities with existing texts, both online and in its database. If any similarities are found, Supreme Writer provides suggestions for how to rephrase the content to make it original and unique.
In conclusion, Supreme Writer is a revolutionary AI tool that makes it easier than ever to choose a perfect topic for your research paper. With its advanced algorithms, natural language processing technology, and comprehensive set of resources and tools, Supreme Writer provides scholars, postgraduate students, and researchers with a one-stop solution for all their research writing needs.
Go try your free trial today: supremewriter.io
0 notes
laziestgirlintown · 8 months ago
One among many sources: every single ChatGPT question costs water. One planet, quite a lot of intelligences; fuck this thieving, parroting "intelligence" monster.
120K notes · View notes
crazy-pages · 1 year ago
I'm going to throw my two cents into the conversation about why James Somerton didn't get caught earlier. Part of the answer is of course that he did get caught, he just bullied and lied to get away with it for a while, but I know a lot of people still express confusion. And of course he went out of his way to make sure his audience didn't know about queer history sources other than himself. But still. How could he have so many viewers of his videos and none of them had seen X source material?
Well. To be blunt, most of his videos were pretty basic. He tended to copy the highlights of what he was plagiarizing, not the really advanced stuff. And insofar as he copied the advanced stuff, he had a tendency to chop it up and serve it out of context alongside other plagiarized work. The material he was presenting was revolutionary to an audience unfamiliar with queer history, but like. I'm guessing 'Disney villains are queer coded' is not exactly a new concept to the kind of people who read multiple books about queer coding in film.
Now I'm not a film studies person, I'm a physicist. But you know what I do when I get a video in my YouTube recommendations about some fairly basic physics concept?
I skip it. No shade to the creator, but like. I hit that topic a decade ago and I've added literally thousands of hours of studying and research to my brain since. I'm just going to give it a pass, all right?
These kinds of videos self-select for an audience which isn't going to be familiar with the source material. The people who know it are unlikely to keep listening after the first minute or so.
And you've got to remember how much of this content the experts have consumed! With very few exceptions for weird little things that stuck in my head after all these years, I would probably not notice a physics explanation plagiarized from one of my textbooks! Not because I wasn't intimately acquainted with the textbook, but because I was intimately acquainted with many such textbooks. Spend enough time learning this stuff and it all blurs together a little bit. Does this explanation sound familiar because you've heard it before, or because you've just read books which cover this specific topic seven different times? And does that wording or that example ring a bell because it's plagiarized, or because it's common to the field?
Catching this kind of plagiarism requires having the kind of people who are already familiar with these sources, and therefore uninterested in video summaries on the topic, to watch the video. And among those people who do, it requires them to match Somerton's words to one specific source on the topic out of many, that they probably read quite some time ago. And then you have the filter of how many of those subject matter experts have the source on hand to check, to turn a vague "...hmm" into something solid.
If you know enough about queer history to say that some of his plagiarism was obvious, now that you've watched the video, then you should remember that there is a reason you probably weren't one of the people watching his videos! And because YouTube promotes videos through algorithmic engagement, none of this stuff has to pass the sniff test for any other expert in the field before it gets released. No experts have to like it for it to get published or for it to get good reviews or for it to get a recommendation in, I don't know, the New York Times.
The only people who have to like the videos for them to get traction are people who are just trying to learn introductory queer history and film theory. The exact people who aren't going to notice this. And for those of you to whom it is obvious, ask yourself: when was the last time you watched a basic-level queer history introduction on YouTube?
2K notes · View notes
elexuscal · 1 year ago
Something I think is missing in the current James Somerton discourse, when people ask "Why didn't people notice he was a bigot?", is that he was plagiarizing from non-bigoted creators.
I'd watched a handful of his videos. I'd noticed a couple of comments that made me raise an eyebrow, and I even wrote a post here on Tumblr about how much I viscerally disagreed with his comment about "all the interesting gays died of AIDS". (Though I left Somerton's name out of that at the time, wanting to take him in good faith... ugh.)
It's obvious now, in retrospect. When you take away all the things he didn't write, what you're left with is just an ugly dust pile of misogyny, Eurocentrism, transphobia, and acephobia.
But before? When you didn't realise that like 80% of what he said he'd stolen? It was masked by all "his" genuinely thoughtful commentary. If someone makes 7 insightful takes, and then one (1) bad one, you're more likely to think, "that's a mostly reasonable person who holds some things I disagree with".
I'm hardly the first to say it, but the point of Hbomberguy's video is not, "Somerton was a uniquely awful person and everyone who watched his content were idiots for not noticing".
It's:
a) the YouTube algorithm (and online algorithms in general) promotes low-quality content-farm output over well-researched pieces that take longer to make
b) if something a creator says seems fishy, be willing to dig into it more and double-check
c) look out for the hallmark signs of plagiarism and corner-cutting in general, like lack of attributions or content being churned out at an unbelievably high rate.
Somerton was a charismatic guy who used his status as a gay man as a shield and took full advantage of our social brains' tendency towards parasocial relationships, while actively tricking his audience by stealing other people's words. Don't blame his audience for falling for it. Learn from it.
1K notes · View notes
naamahdarling · 15 days ago
I'm probably going to piss some people off with this, but.
The use of AI and machine learning for harmful purposes is absolutely unacceptable.
But that isn't an innate part of what it does.
Apps or sites using AI to generate playlists or reading lists or a list of recipes based on a prompt you enter: absolutely fantastic, super helpful, so many new things to enjoy, takes jobs from no-one.
Apps or sites that use a biased algorithm (which is AI) which is not controllable by users or able to be turned off by them, to push some content and suppress others to maximize engagement and create compulsive behavior in users: unethical, bad, capitalism issue, human issue.
People employing genAI to create images for personal, non-profit use and amusement who would not have paid someone for the same service: neutral, (potential copyright and ethics issue if used for profit, which would be a human issue).
People incorporating genAI as part of their artistic process, where the medium of genAI is itself a deliberate part of the artist's technique: valid, interesting.
Companies employing genAI to do the work of a graphic designer, and websites using genAI to replace the cost of stock photos: bad, shitty, no, capitalist and ethical human issue.
People attacking small artists who use it with death threats and unbelievable vitriol: bad, don't do that.
AI used for spell check and grammar assistance: really great.
AI employed by eBay sellers to cut down on the time it takes to make listings: good, very helpful, but might be a bad idea as it does make mistakes and that can cost them money, which would be a technical issue.
AI used to generate fake product photos: deceptive, lazy, bad, human ethical issue.
AI used to identify plagiarism: neutral; could be really helpful but the parameters are defined by unrealistic standards and not interrogated by those who employ it. Human ethical issue.
AI used to analyze data and draw up complex models allowing detection of things like cancer cells: good; humans doing this work take much longer, this gives results much faster and allows faster intervention, saving lives.
AI used to audit medical or criminal records and gatekeep coverage or profile people: straight-up evil. Societal issue, human ethical issue.
AI used to organize and classify your photos so you don't have to spend all that time doing it: helpful, good.
AI used to profile people or surveil people: bad and wrong. Societal issue, human issue, ethical issue.
I'm not going to cover the astonishingly bad misinformation that has been thrown out there about genAI, or break down thought distortions, or go into the dark side of copyright law, or dive into exactly how it uses the data it is fed to produce a result, or explain how it does have many valid uses in the arts if you have any imagination and curiosity, and I'm not holding anyone's hand and trying to walk them out of all the ableism and regurgitated capitalist arguments and the glorification of labor and suffering.
I just want to point out: you use machine learning (AI) all the time, you benefit from it all the time. You could probably identify many more examples that you use every day. Knee-jerk panicked hate reflects ignorance, not sound principles.
You don't have beef with AI, you have beef with human beings, how they train it, and how they use it. You have beef with capitalism and thoughtlessness. And so do I. I will ruthlessly mock or decry misuse or bad use of it. But there is literally nothing inherently bad in the technology.
I am aware of and hate its misuse just as much as you do. Possibly more, considering that I am aware of some pretty heinous ways it's being used that a lot of people are not. (APPRISS, which is, with zero competition for the title, the most evil use of machine learning I have ever seen, and which is probably being used on you right now.)
You need to stop and actually think about why people do bad things with it instead of falling for the red herring and going after the technology (as well as the weakest human target you can find) every time you see those two letters together.
You cannot protect yourself and other people against its misuse if you cannot separate that misuse from its neutral or helpful uses, or if you cannot even identify what AI and machine learning are.
327 notes · View notes
eskawrites · 26 days ago
I feel like I’m seeing another uptick of people talking about using AI for fics/writing in general and I know some of it’s in a mostly unserious way but I still just wanna say
1) Generative AIs are literally built on the concept of mosaic plagiarism. You are, by definition, stealing from the work of countless writers on the internet
2) AI writing is not writing, it offers zero value beyond in-the-moment entertainment. If you want that satisfaction of doing something creative you have to actually, you know, do something creative. If you want the instant gratification of a story go read/watch/play something that was made by actual artists
3) even if you have no qualms about the plagiarism and deterioration of human skill and creativity, AI is a major threat to the environment and every time you use it you’re contributing to a massive waste of energy and resources
4) using AI just for ideas or just for inspiration or just to rewrite a sentence or just to find a different word is still using AI and it is still harming the environment and it is still stealing from others. There are other tools to use. The internet is full of free resources created by actual writers that can help you find that cool word you’re looking for or show you different ways to approach style and voice. And if you’re looking for inspiration there are literally endless amounts of prompts and ideas that are only a google search away
4a) this is also true for people who are only using AI as a joke. It’s still harmful and you are helping the problem continue by using it, training it, and normalizing it
5) art is valuable because it is created by humans. Making something worthwhile isn’t about creating a masterpiece, it’s about putting part of yourself—whether that part is passionate or heartbroken or angry or inspired or silly or reverent or filled with brainworms—into the world. And even if you are the worst writer/artist/musician who has ever walked the earth (and trust me, you aren’t), anything you create on your own still has an impact. You are changing the world! You are putting something out there that leaves an impression on you and anyone who comes across it! But when you use AI for that, you haven’t made anything. You’ve just rearranged someone else’s work and dropped it on the ground. And by the time you make your third work, or your tenth, or your hundredth, you will not have grown or learned or changed or experienced any of the actual meaning and beauty of creativity. And if you don’t want any of those things, that’s fine! But that means being a writer or an artist or whatever is not for you, and you shouldn’t go around cosplaying as one with a computer algorithm that is destroying the planet, stealing from hard-working artists, eliminating jobs, and contributing to mass misinformation and the deterioration of reading comprehension
175 notes · View notes
probablyasocialecologist · 10 months ago
We don't yet know exactly why a group of people very publicly graffitied, smashed, and torched a Waymo car in San Francisco. But we know enough to understand that this is an explosive milestone in the growing, if scattershot, revolt against big tech. We know that self-driving cars are wildly divisive, especially in cities where they've begun to share the streets with emergency responders, pedestrians and cyclists. Public confidence in the technology has actually been declining as they've rolled out, owing as much to general anxiety over driverless cars as to high-profile incidents like a GM Cruise robotaxi trapping, dragging, and critically injuring a pedestrian last fall. Just over a third of Americans say they'd ride in one.

We also know that the pyrotechnic demolition can be seen as the most dramatic act yet in a series of escalations — self-driving cars have been vocally opposed by officials, protested, "coned," attacked, and, now, set ablaze in a carnivalesque display of defiance. The Waymo torching did not take place in a vacuum.

To that end, we know that trust in Silicon Valley in general is eroding, and anger towards the big tech companies — Waymo is owned by Alphabet, the parent company of Google — is percolating. Not just at self-driving cars, of course, but at generative AI companies that critics say hoover up copyrighted works to produce plagiarized output, at punishing, algorithmically mediated work regimes at the likes of Uber and Amazon, at the misinformation and toxic content pushed by Facebook and TikTok, and so on. It's all of a piece.

All of the above contributes to the spreading sense that big tech has an inordinate amount of control over the ordinary person's life — to decide, for example, whether or not robo-SUVs will roam the streets of their communities — and that the average person has little to no meaningful recourse.
522 notes · View notes
doomdoomofdoom · 2 months ago
Some more from the notes because this is the funniest shit I've ever seen
from @fennel-tea: [screenshots]
from @peachcat14: [screenshots]
from @aierie--dragonslayer: [screenshots]
from @thelealinhypehouse: [screenshots]
20K notes · View notes