It suddenly dawned on me why every big brand and corporation is racing to acquire, distribute, and implement AI: if these multimillion-dollar companies don't pour a huge amount of money, work, and time into a new technology like this the VERY MOMENT it's published or released, they will literally miss out on billions of dollars by letting a competitor establish a monopoly, the way Google has dominated search engines for the last couple of decades.
The sheer amount of money that goes into this tech is so mindblowingly huge that they would rather face lawsuit after lawsuit and throw their users' privacy and the decency of their business under the bus than miss out.
I think I just realised why capitalism sucks.
0 notes
I think the thing that's been bothering me about the "is it okay to use ChatGPT to plot/make characters/etc" is that at the end of the day, these are not tools that are helping your writing, they are shortcuts that are undercutting it.
These things are supposed to be hard, because you need to learn how to do them.
And listen, I know this sucks. I've got to cut 4k words from my current novel to make it more sellable, which seems like a completely arbitrary thing to do, but things like printing costs absolutely do factor into traditional publishing. It took me five drafts to figure out a plot point, completely obvious in hindsight, that explains why a character does what he does. It takes a few tries to pull a detailed outline together into a workable story, and it always will.
I would have loved to figure this all out way earlier, but I had to learn how to spot the gaps in my writing before I could fix them. Generative AI isn't ever going to bridge the gap between sitting down and learning how to work things out, because if you don't do that, you will never become a more competent writer. If that wasn't part of the point, none of us would be doing this in the first place.
4K notes
how c.ai works and why it's unethical
Okay, since the AI discourse is happening again, I want to make this very clear, because a few weeks ago I had to explain to a (well meaning) person in the community how AI works. I'm going to be addressing people who are maybe younger or aren't familiar with the latest type of "AI", not people who purposely devalue the work of creatives and/or are shills.
The name "Artificial Intelligence" is a bit misleading when it comes to things like AI chatbots. When you think of AI, you think of a robot, and you might think that by making a chatbot you're simply programming a robot to talk about something you want them to talk about, similar to an rp partner. But with current technology, that's not how AI works. For a breakdown of how AI is programmed, CGP Grey made a great video about this several years ago (he updated the title and thumbnail recently).
I HIGHLY HIGHLY recommend you watch this because CGP Grey is good at explaining, but the tl;dr for this post is this: bots are made with a metric shit-ton of data. In C.AI's case, the data is writing. Stolen writing, usually scraped fanfiction.
How do we know chatbots are stealing from fanfiction writers? It knows what omegaverse is [SOURCE] (it's a Wired article, put it in incognito mode if it won't let you read it), and when a Reddit user asked a chatbot to write a story about "Steve", it automatically wrote about characters named "Bucky" and "Tony" [SOURCE].
I also said this in the tags of a previous reblog, but when you're talking to C.AI bots, the service is also taking your writing and feeding it into its algorithm, which seems fine until you realize 1. they're using your work uncredited, and 2. it's not staying private; they're using your work to improve a service they're trying to make money off of.
"But Bucca," you might say. "Human writers work like that too. We read books and other fanfictions and that's how we come up with material for roleplay or fanfiction."
Well, what's the difference between plagiarism and original writing? The answer is that plagiarism is taking what someone else has made and simply editing it or mixing it up to look original. You didn't do any thinking yourself. C.AI doesn't "think" because it's not a brain, it takes all the fanfiction it was taught on, mixes it up with whatever topic you've given it, and generates a response like in old-timey mysteries where somebody cuts a bunch of letters out of magazines and pastes them together to write a letter.
(And might I remind you, people can't monetize their fanfiction the way C.AI is trying to monetize itself. Authors are very lax about fanfiction nowadays: we've come a long way since the Anne Rice days of terror. But this issue is cropping back up again with BookTok complaining that they can't pay someone else for bound copies of fanfiction. Don't do that either.)
Bottom line, here are the problems with using things like C.AI:
It is using material it doesn't have permission to use and doesn't credit anybody. Not only is it ethically wrong, but AI is already beginning to contend with copyright issues.
C.AI sucks at its job anyway. It's not good at basic story structure like building tension, and can't even remember things you've told it. I've also seen many instances of bots saying triggering or disgusting things that deeply upset the user. You don't get that with properly trigger tagged fanworks.
Your work and your time put into the app can be taken away from you at any moment and used to make money for someone else. I can't tell you how many times I've seen people who use AI panic about accidentally deleting a bot that they spent hours conversing with. Your time and effort is so much more stable and well-preserved if you wrote a fanfiction or roleplayed with someone and saved the chatlogs. The company that owns and runs C.AI can not only use whatever you've written as they see fit, they can take your shit away on a whim, either on purpose or by accident due to the nature of the Internet.
DON'T USE C.AI, OR AT THE VERY BARE MINIMUM DO NOT DO THE AI'S WORK FOR IT BY STEALING OTHER PEOPLES' WORK TO PUT INTO IT. Writing fanfiction is a communal labor of love. We share it with each other for free for the love of the original work and ideas we share. Not only can AI not replicate this, but it shouldn't.
(also, this goes without saying, but this entire post also applies to ai art)
5K notes
"The biggest issue is students using it, me spotting it and having no recourse whatsoever to do anything about it." can you elaborate a bit further
Hello!
So to explain a bit more: we [aka your lecturers, teachers, teaching assistants, etc.] know that some students will use ChatGPT.
And there is a discussion to be had about how to work with this, how to design assessments which allow students to leverage something that may simply become a fixture of writing in a workplace environment, but that is not the discussion we are having here. Because that is not what we are worried about.
The indefensible, problematic situation is this: a student straight up enters the essay prompt into ChatGPT and, using the grand skills of Ctrl+C / Ctrl+V, submits the output as their own paper.
And our main worry, I think, was for a long time that we would not be able to catch it. That students would actually be able to fool us, and that we would actually think this was a student who understood the course, who put in the work, and who deserved the grade they were rewarded with. That was the main fear.
But here is the thing.
And listen up, students:
Essays written by ChatGPT:
Suck
Are spotted from a mile away by the person reading them
For real. They suck.
I cannot stress enough how easy they are to spot. You are NOT fooling anyone. I do not need the platform's AI-detecting tool to know when an essay was written by ChatGPT. It is so, very painfully obvious when that's the case.
But the problem then becomes: okay, I have spotted a student who cheated.
What am I even supposed to do with that?
It is one thing to KNOW that an essay was AI-generated; it is another to defend that claim to a plagiarism committee. First of all, does it actually count as plagiarism? Second, how do I prove, with certainty, that the student did not write it? How do I convince the plagiarism committee that this is worth looking into? I am in the role of a police officer who needs to convince the DA that this is a winnable case, that prosecuting will not be a waste of their time. But I don't have a Similarity Percentage to rely on. I don't have an original source to say "look, this is the exact same wording!" like in a classic plagiarism case.
Best case scenario, I can make my case for the student to actually be called before the plagiarism committee, where we probe into how, exactly, they wrote their essay until they fold. Unlikely, morally questionable, and in all likelihood ineffective on students already so confident in their bullshit that they have the audacity to submit a fully AI-generated work for their finals.
Now, students, gather round, especially if you have considered using ChatGPT this way. Because right now, you might think this means you can get away with it.
But let me tell you something. First, that essay is getting the shittiest grade we can give you. Because you know what is more difficult than a lecturer proving that a student used AI to generate their essay? A student proving that they deserve a better grade. Once we give you a grade, the burden of proof is on you to show that you have not been graded properly. And we can come up with fifteen reasons why an essay is a shit essay. We handle you with kid gloves when we lecture and give feedback. We give the simplified version of most theories, we give the basics of how to structure an essay, we set the bar spectacularly low, because students come in good faith, they are learning, and they will not be held to the same standard as academics. But if you try to argue that you deserve a higher grade when you had the audacity to not write a single word of your work, the kid gloves are going to come off real quick, and your lecturer will be able to very convincingly explain why, actually, giving you a passing grade was a mercy in the first place.
Second. Academics, especially angry academics, are a gossip machine.
You may get a passing grade, and there may be no official note of it in your file whatsoever. But I can guarantee you that your lecturer will chat with their colleagues. That every single one of your essays that year, and in the years to come, will be looked at with so much scrutiny I hope your referencing in every single piece of work reaches perfection. Every single paragraph will be read with the knowledge that you are likely to have had it AI-generated. Lecturers will tell their TAs to look out for That One Student when they grade you. You will not be getting any flexibility from us: no extension without full documentation to support it, no letter of recommendation from any member of the faculty, no word in your favor if you are bordering a grade bracket. If we are feeling especially petty, we might even forget to answer your emails, or answer any question you have with such warmth and kindness that you never feel like asking a question in our class again. And I know that, because it's already happening. I have the names of three undergrads who we know, for a fact, did not write their own essays. Two are not even in my modules at all.
Now. That's pretty mean. But you had the absolute audacity, and the lack of ethics, required to submit an essay for which you did not write a single word, and you thought it would actually work, when your lecturer probably spent more than 80 hours working on this module this term, gave you the opportunity to meet during office hours, to ask any question in person or by email, to have extensions, accommodations, additional time? When you decided to put in exactly zero seconds of your own time, considered that you were above all that (and above the other students), and yet we were not able to officially sanction you for it, and we had to give you a passing grade, the same passing grade as students who actually made an effort?
Yeah, sorry, you are not getting any sympathy from your lecturers anymore.
4K notes