#ChatGPT monetization
assetgas1 · 1 year ago
Text
Monetizing Chatbots: Strategies for Earning with ChatGPT
Introduction: Hey there! Have you ever thought about turning your chatbot interactions into a revenue stream? Well, you’re in the right place. In this guide, we’ll explore the exciting world of monetizing chatbots, specifically focusing on the power of ChatGPT. Whether you’re a business owner, content creator, or just someone looking to make a little extra cash, ChatGPT offers a range of…
0 notes
runihura-kek · 2 years ago
Text
who needs friends/therapy when you have ChatGPT 😭
IT GOT BETTER
♡♡♡♡♡OH MY GOD♡♡♡♡♡
low-key I tried it in other aspects and now I'm crying, because an AI is giving me emotional support... I'm so validation starved that I will take ANYTHING!!!!
2 notes
yestobetop · 1 year ago
Text
The Art of ChatGPT Profit: Monetization Techniques for Financial Growth
How to make money with ChatGPT? What is ChatGPT? OpenAI created ChatGPT, an advanced language model designed to generate human-like text in response to given prompts. Powered by deep learning algorithms, ChatGPT can engage in natural and dynamic conversations, making it an ideal tool for a variety of applications. ChatGPT can be used for a variety of purposes,…
0 notes
otherworldlyinfo · 1 year ago
Text
Navigating the Copyright Maze in the Age of AI
The Challenge of AI Repurposing · The BBC's Stance: Safeguarding the Public Interest · Challenges in Monetization: Controlling the Uncontrollable · Striking a Balance: Structured and Sustainable Approaches · Exploring Possible Solutions · Conclusion

In a significant move reflecting the growing concerns surrounding artificial intelligence (AI) and content usage, the BBC has blocked access to its material for…
0 notes
keshavkumar · 1 year ago
Text
Power of ChatGPT: Monetizing AI Language Model for Success and Making Money
How ChatGPT, the game-changing language model, can revolutionize your business and generate income. Explore content creation, virtual assistance, online tutoring, and more. Find out how to overcome challenges and use ChatGPT ethically for maximum impact. Understanding ChatGPT: A Game-Changing Language Model In the rapidly evolving field of artificial intelligence, language models have become…
0 notes
eymssk-blog · 2 years ago
Text
How to Monetize ChatGPT
ChatGPT is a powerful AI chatbot that can create content, automate tasks, and enhance user experiences. While ChatGPT's main purpose is to deliver value, it's crucial to understand how to monetize it well in order to maximize your return on investment. In this article, I'll go over a number of strategies to help you make the most of ChatGPT as a valuable business tool. 1.…
0 notes
finance-pro · 2 years ago
Text
Unlocking the Potential of ChatGPT: How to Monetize this Advanced Language Model
ChatGPT, a large language model developed by OpenAI, has the capability to generate human-like text on a wide range of topics. This has opened up several opportunities for individuals and businesses to make money using the model.
One way to make money using ChatGPT is by creating and selling chatbots. Chatbots are computer programs designed to simulate conversation with human users. They can be used in a variety of industries such as customer service, e-commerce, and entertainment. With the help of ChatGPT, one can train the chatbot to understand and respond to natural language input, making the interaction with the user more human-like.
Another way to make money using ChatGPT is by providing text generation services. The model can be used to generate a wide range of text, including articles, blog posts, product descriptions, and more. Businesses and individuals can use this service to generate content for their websites, social media platforms, and other marketing materials.
In the field of research, ChatGPT can be used to generate large amounts of synthetic data for training and evaluating other machine learning models, a service that can be offered to companies and research institutions for a fee.
Finally, one could use ChatGPT as a tool for content creation in the entertainment industry. For example, ChatGPT can be used to generate scripts for movies, TV shows, and video games. With the help of the model, one can create unique and captivating stories that can be sold to production companies or studios.
It's worth noting that the above-mentioned opportunities are just a few examples of how ChatGPT can be used to make money, and there are many other possibilities as well. If you are interested in learning more about how to make money using ChatGPT, we recommend watching videos and reading articles on the topic. There is a lot of information available online that can help you understand the potential of the model and how it can be used in different industries.
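For what it's worth, the "text generation service" idea described above is, in practice, mostly a thin wrapper around a model API call. Here is a minimal sketch in Python, assuming the `openai` package is installed and an `OPENAI_API_KEY` is set in the environment; the model name and the product-description use case are illustrative choices, not anything the post prescribes:

```python
def build_messages(product, features):
    """Assemble a chat prompt asking for a short product description."""
    return [
        {"role": "system",
         "content": "You write concise, accurate product descriptions."},
        {"role": "user",
         "content": f"Write a two-sentence description of {product}. "
                    f"Key features: {', '.join(features)}."},
    ]

if __name__ == "__main__":
    # The API call itself; requires the openai package and an API key.
    from openai import OpenAI

    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=build_messages("a ceramic travel mug",
                                ["leak-proof lid", "350 ml", "dishwasher safe"]),
    )
    print(reply.choices[0].message.content)
```

The sellable part of such a service is everything around the call — per-client prompt templates, style guides, and human review for factual errors — since raw model output usually needs editing before it is publishable.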
1 note
autisticandroids · 1 year ago
Text
i've been seeing ai takes that i actually agree with and have been saying for months get notes so i want to throw my hat into the ring.
so i think there are two main distinct problems with "ai," which exist kind of in opposition to each other. the first happens when ai is good at what it's supposed to do, and the second happens when it's bad at it.
the first is well-exemplified by ai visual art. now, there are a lot of arguments about the quality of ai visual art, about how it's soulless, or cliche, or whatever, and to those i say: do you think ai art is going to be replacing monet and picasso? do you think those pieces are going in museums? no. they are going to be replacing soulless dreck like corporate logos, the sprites for low-rent edugames, and book covers with that stupid cartoon art style made in canva. the kind of art that everyone thinks of as soulless and worthless anyway. the kind of art that keeps people with art degrees actually employed.
this is a problem of automation. while ai art certainly has its flaws and failings, the main issue with it is that it's good enough to replace crap art that no one does by choice. which is a problem of capitalism. in a society where people don't have to sell their labor to survive, machines performing labor more efficiently so humans don't have to is a boon! this is i think more obviously true for, like, manufacturing than for art - nobody wants to be the guy putting eyelets in shoes all day, and everybody needs shoes, whereas a lot of people want to draw their whole lives, and nobody needs visual art (not the way they need shoes) - but i think that it's still true that in a perfect world, ai art would be a net boon, because giving people without the skill to actually draw the ability to visualize the things they see inside their head is... good? wider access to beauty and the ability to create it is good? it's not necessary, it's not vital, but it is cool. the issue is that we live in a society where that also takes food out of people's mouths.
but the second problem is the much scarier one, imo, and it's what happens when ai is bad. in the current discourse, that's exemplified by chatgpt and other large language models. as much hand-wringing as there has been about chatgpt replacing writers, it's much worse at imitating human-written text than, say, midjourney is at imitating human-made art. it can imitate style well, which means that it can successfully replace text that has no meaningful semantic content - cover letters, online ads, clickbait articles, the kind of stuff that says nothing and exists to exist. but because it can't evaluate what's true, or even keep straight what it said thirty seconds ago, it can't meaningfully replace a human writer. it will honestly probably never be able to unless they change how they train it, because the way LLMs work is so antithetical to how language and writing actually work.
the issue is that people think it can. which means they use it to do stuff it's not equipped for. at best, what you end up with is a lot of very poorly written children's books selling on amazon for $3. this is a shitty scam, but is mostly harmless. the behind the bastards episode on this has a pretty solid description of what that looks like right now, although they also do a lot of pretty pointless fearmongering about the death of art and the death of media literacy and saving the children. (incidentally, the "comics" described demonstrate the ways in which ai art has the same weaknesses as ai text - both are incapable of consistency or narrative. it's just that visual art doesn't necessarily need those things to be useful as art, and text (often) does). like, overall, the existence of these kids book scams is bad? but they're a gnat bite.
to find the worst case scenario of LLM misuse, you don't even have to leave the amazon kindle section. you don't even have to stop looking at scam books. all you have to do is change from looking at kids books to foraging guides. i'm not exaggerating when i say that in terms of texts whose factuality has direct consequences, foraging guides are up there with building safety regulations. if a foraging guide has incorrect information in it, people who use that foraging guide will die. that's all there is to it. there is no antidote to amanita phalloides poisoning, only supportive care, and even if you survive, you will need a liver transplant.
the problem here is that sometimes it's important for text to be factually accurate. openart isn't marketed as photographic software, and even though people do use it to lie, they have also been using photoshop to do that for decades, and before that it was scissors and paintbrushes. chatgpt and its ilk are sometimes marketed as fact-finding software, search engine assistants and writing assistants. and this is dangerous. because while people have been lying intentionally for decades, the level of misinformation potentially provided by chatgpt is unprecedented. and then there are people like the foraging book scammers who aren't lying on purpose, but rather not caring about the truth content of their output. obviously this happens in real life - the kids book scam i mentioned earlier is just an update of a non-ai scam involving ghostwriters - but it's much easier to pull off, and unlike lying for personal gain, which will always happen no matter how difficult it is, lying out of laziness is motivated by, well, the ease of the lie.* if it takes fifteen minutes and a chatgpt account to pump out fake foraging books for a quick buck, people will do it.
*also part of this is how easy it is to make things look like high effort professional content - people who are lying out of laziness often do it in ways that are obviously identifiable, and LLMs might make it easier to pass basic professionalism scans.
and honestly i don't think LLMs are the biggest problem that machine learning/ai creates here. while the ai foraging books are, well, really, really bad, most of the problem content generated by chatgpt is more on the level of scam children's books. the entire time that the internet has been shitting itself about ai art and LLMs i've been pulling my hair out about the kinds of priorities people have, because corporations have been using ai to sort the resumes of job applicants for years, and it turns out the ai is racist. there are all sorts of ways machine learning algorithms have been integrated into daily life over the past decade: predictive policing, self-driving cars, and even the youtube algorithm. and all of these are much more dangerous (in most cases) than chatgpt. it makes me insane that just because ai art and LLMs happen to touch on things that most internet users are familiar with the workings of, people are freaking out about it because it's the death of art or whatever, when they should have been freaking out about the robot telling the cops to kick people's faces in.
(not to mention the environmental impact of all this crap.)
648 notes
lynzine · 2 months ago
Text
NaNoWriMo Alternatives
If you know what's happening (and what happened last year with NaNo), I want to offer some alternatives that people are working on to keep the event while ditching the organization.
@novella-november
I may reblog this post as I learn more but there are two right off the bat!
If you don't know why I feel the need to advocate for alternatives I will be getting into the second issue (not the triggering scandal last year) below the break.
(Long story short: On top of last year's scandal... they now have an AI sponsor. Which is a big red flag for me and feels like they are looking for content to train their AIs on as well as new consumers. What better place than thousands of unpublished novelists working on completing a novel in a month?)
(PS If you reblog... I don't need details of last year's scandal... please.)
-
Hi everyone, if you've been with me for a while you might know I'm a fan of NaNoWriMo... The event, no longer the organization. I was wary... but willing to see how new policies and restructuring shook out after the major scandal last year (if you aren't aware, I won't be going into the details here, but you may find them triggering).
It's this second issue that I feel that I can actually speak on... Because NaNoWriMo has a new sponsor... and that sponsor is AI.
When I first heard that NaNo was okay with AI, I was wary but... not too upset. After all, NaNo's big thing for a long time was basically "just write," however you had to, with whatever tools you needed. I know that there are AIs designed to aid with grammar and clarity. And that seemed... if not fine, then understandable. NaNo (for me) is a major source of motivation, and I could completely understand why NaNo wouldn't want wins invalidated because someone used some kind of AI to help them along. Then I found out that NaNo picked up an AI sponsor, and that completely changed the story for me. It went from serving the people NaNo serves... to supporting one of the organizations that might be ripping off writers.
As I've said in the notes of my fics when I reluctantly took steps to protect my work from AIs like ChatGPT: putting my work into an AI essentially monetizes my work without compensating me and other writers, while undercutting the writing jobs that people like me truly want. I don't know what this sponsor's actual AI is like. But it is a massive red flag that they are sponsoring an organization focused on an event that generates thousands of potential books to train their AI on.
NaNo's official stance is "We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege." NaNo is experiencing a lot of backlash.
I will be hoping that one of the alternatives is up and running come November.
31 notes
ruttotohtori · 1 month ago
Text
---
The content of the site we analyzed repeats far-right messaging and conspiracy theories. It also encourages direct action. At the moment, the majority of the conspiracy theories circulating here grow out of far-right or conservative soil, says Janne Riiheläinen, a communications expert who studies the Finnish disinformation landscape. – In Finland right now, an awful lot of the ideas, narratives, and conspiracy theories originate in the culture wars of the United States. According to Riiheläinen, AI tools now allow anyone to produce disinformation in ever greater quantity and quality.
---
– With video, anyone can be made to say anything in anyone's voice. Anyone can put together a legal-sounding press release about any crackpot topic whatsoever. According to Riiheläinen, we will soon no longer have ways to verify information by its form. We will have to dig into the information itself before we can tell whether it is reliable.
---
Journalist and nonfiction author Johanna Vehkoo writes in her 2019 book Valheenpaljastajan käsikirja that it was long assumed that conspiracy theories and disinformation were spread by people who lack societal power. In recent years the situation has almost reversed: conspiracy theories and disinformation now also spread from the top of society, from politicians, celebrities, and social media influencers.
---
The use of generative AI in producing radical-right disinformation is not a new phenomenon. This year it has been used, for example, to stoke the Southport riots, to interfere in the EU elections, and to manipulate U.S. domestic politics. Disinformation in meme form has been produced, for example, with the Grok image generator on X, the platform owned by Elon Musk.
---
Commercial language models supply terrorist groups with bomb-making instructions and information about vulnerable points in critical infrastructure, and on command they will write page after page of antisemitic hate speech. Image and audio generators, in turn, are used to create deepfake videos and far-right memes.
---
According to Johanna Vehkoo, some researchers call large language models, ChatGPT in particular, "bullshit generators," because they do not know what a fact is. They can invent convincing-looking source references that do not exist. ChatGPT can also make up journalistic articles and events that never happened. – Many people use these tools to look up information. From the perspective of fact-checking and journalism, that is a terribly frightening development, Vehkoo says.
---
Large language models are programmed to give answers in all kinds of situations and to sound convincing, even though they do not actually know what knowledge is. The problem is nearly impossible to eliminate, because it is built into the models themselves. In addition, their safeguards can be circumvented.
---
Good targets for personalized disinformation campaigns include, for example, social media influencers who lean toward conspiracy theories, through whom the message spreads to a large number of followers.
---
A propaganda campaign aimed at content creators was already seen this year, when the American right-wing influencers Tim Pool, Dave Rubin, and Benny Johnson spread disinformation from a Russian state-sponsored propaganda campaign that also made use of generative AI.
---
11 notes
puraiuddo · 1 year ago
Text
So by popular demand, here is my own post about the class-action lawsuit authors have filed against OpenAI, and why

This case will not affect fanwork.
The actual legal complaint that was filed in court can be found here and I implore people to actually read it, as opposed to taking some rando's word on it (yes, me, I'm some rando).
The Introductory Statement (just pages 2-3) shouldn't require being fluent in legalese and it provides a fairly straightforward summary of what the case is aiming to accomplish, why, and how.
That said, I understand that for the majority of people 90% of the complaint is basically incomprehensible, so please give me some leeway as I try to condense 4 years of school and a 47-page legal document into a tumblr post.
To abbreviate to the extreme, page 46 (paragraph 341, part d) lays out exactly what the plaintiffs are attempting to turn into law:
"An injunction [legal ruling] prohibiting Defendants [AI] from infringing Plaintiffs' [named authors] and class members' [any published authors] copyrights, including without limitation enjoining [prohibiting] Defendants from using Plaintiff's and class members' copyrighted works in "training" Defendant's large language models without express authorization."
That's it. That's all.
This case is not even attempting to alter the definition of "derivative work" and nothing in the language of the argument suggests that it would inadvertently change the legal treatment of "derivative work" going forward.
I see a lot of people throwing around the term "precedent" in a frenzy, assuming that because a case touches on a particular topic (eg “derivative work” aka fanart, fanfiction, etc) somehow it automatically and irrevocably alters the legal standing of that thing going forward.
That’s not how it works.
What's important to understand about the legal definition of "precedent" vs the common understanding of the term is that in law any case can simultaneously follow and establish precedent. Because no two cases are wholly the same due to the diversity of human experience, some elements of a case can reference established law (follow precedent), while other elements of a case can tread entirely new ground (establish precedent).
The plaintiffs in this case are attempting to establish precedent that anything AI creates going forward must be classified as "derivative work", specifically because they are already content with the existing precedent that defines and limits "derivative work".
The legal limitations of "derivative work", such as those dictating that only once it is monetized are its creators fair game to be sued, are the only reason the authors can* bring this to court and seek damages.
*this is called the "grounds" for a lawsuit. You can't sue someone just because you don't like what they're doing. You have to prove you are suffering "damages". This is why fanworks are tentatively "safe"—it's basically impossible to prove that Ebony Dark'ness Dementia is depriving the original creator of any income when she's providing her fanfic for free. On top of that, it's not worth the author’s time or money to attempt to sue Ebony when there's nothing for the author to monetarily gain from a broke nerd.
Pertaining to how AI/ChatGPT is "damaging" authors when Ebony isn't, and the unconscionable difference in potential profits between the two:
Page 9 (paragraphs 65-68) detail how OpenAI/ChatGPT started off as a non-profit in 2015, but then switched to for-profit in 2019 and is now valued at $29 Billion.
Pages 19-41 ("Plaintiff-Specific Allegations") detail how each named author in the lawsuit has been harmed and pages 15-19 ("GPT-N's and ChatGPT’s Harm to Authors") outline all the other ways that AI is putting thousands and thousands of other authors out of business by flooding the markets with cheap commissions and books.
The only ethically debatable portion of this case is the implications of expanding what qualifies as "derivative work".
However, this case seems pretty solidly aimed at Artificial Intelligence, with very little opportunity for the case to establish precedent that could be used against humans down the line. The language of the case is very thorough in detailing how the specific mechanics of AI mean that it copies* copyrighted material, and how those mechanics specifically mean that anything it produces should be classified as "derivative work" (by virtue of there being no way to prove that everything it produces is not a direct product of it having illegally obtained and used** copyrighted material).
*per section "General Factual Allegations" (pgs 7-8), the lawsuit argues that AI uses buzzwords ("train" "learn" "intelligence") to try to muddy how AI works, but in reality it all boils down to AI just "copying" (y'all can disagree with this if you want, I'm just telling you what the lawsuit says)
**I see a lot of people saying that it's not copyright infringement if you're not the one who literally scanned the book and uploaded it to the web—this isn't true. Once you "possess" (and downloading counts) copyrighted material through illegal means, you are breaking the law. And AI must first download content in order to train its algorithm, even if it dumps the original content nano-seconds later. So, effectively, AI cannot interact with copyrighted material in any capacity, by virtue of how it interacts with content, without infringing.
Now that you know your fanworks are safe, I'll provide my own hot take 🔥:
Even if—even if—this lawsuit put fanworks in jeopardy... I'd still be all for it!
Why? Because if no one can make a living organically creating anything and it leads to all book, TV, and movie markets being entirely flooded with a bunch of progressively more soulless and reductive AI garbage, what the hell are you even going to be making fanworks of?
But, no, actually because AI weaseling its way into every crevice of society with impunity is orders of magnitude more dangerous and detrimental to literal human life than fanwork being harder to access.
Note to anyone who chooses to interact with this post in any capacity: Just be civil!
82 notes
transmascpetewentz · 3 months ago
Text
i would've been such a good web 2.0 youtuber. before the platform became sponsor and monetized hell. i could've been older jan misali but now it's impossible to compete with people who make 1 video per day by making chatgpt write their scripts and ai edit their videos and the videos are shit but are barely passable as something that isn't total slop and they add 5 billion ads and sponsorships to it, and it gets millions of views because they put out a video every day. it's very demoralizing to be a person with thoughts and opinions on web 3.0
10 notes
theliterarywolf · 1 year ago
Note
In positive news turns out Japan is regulating AI for educational purposes. Also with no monetization. Or Publication.
https://twitter.com/jonlamart/status/1665820769287389186?s=21
I did see that!
I also saw some AI Bros talking about 'Ugh, Japanese people are too dumb to realize how good a thing AI is'
And I feel that that, along with how it was revealed (a month or two ago) that all the maintenance grunt-work for ChatGPT was being handled by underpaid employees in Nairobi, is firm evidence of AI Bros having some, uh... let's not say racist but 'white American-centric' motivations behind their actions.
41 notes
sasquapossum · 7 months ago
Text
Here's a thought on how the internet is forcing people in multiple fields to monetize their work online, in the process making the online experience worse for everyone. Here's what I was responding to.
When lamenting the "old Internet", a lot of people forget that the vast majority of the people creating content on it were gainfully employed with strong career security. Meaning that they didn't need to make money from their hobbyist online projects, so they didn't need to monetize them. This is a lot different from today, where any sort of journalist/writer/artist/filmmaker is basically dependent on making content that sells ads or generates revenue, because their entire industries have gone online, or in many cases, been destroyed by the tech industry itself.
...and my response...
Interesting point. It makes me think of what happened to typesetters (including my mother) when desktop publishing came along. It was a bloodbath. Everyone was suddenly creating their own reports and newsletters, usually doing a terrible job, instead of paying professionals to do it right. Which is fine, actually, but it did lead to a lot of those skilled professionals losing their livelihoods. A few figured out ways to make it, either as a boutique business catering to those who still wanted work done to traditional standards or by teaching others how to do it themselves better, but most ended up leaving the profession.

This is what's happening to a lot of artists, musicians, essayists, and others right now - even more so with "AI" everywhere. Lots of people unable to make a living with their hard-won skills, and insult added to injury as they have to watch others do those same things poorly.

And programmers, just you wait until your livelihood consists of rescuing projects that went south because someone insisted on having ChatGPT write it instead of a professional human. For a fraction of what you used to make. I'm sure each and every one of you thinks you'll be one of the winners, still getting paid top dollar to do innovative work, but most of you are wrong. You'll probably get left high and dry just like most of your colleagues, and - unlike the typesetting example - it will mostly be our own collective fault.

"Enshittification" already means something else, so we need a new term for when technology both drives people out of work and heralds a massive decline in median work-product quality. (So it's not just "disruption", which has become a word used mostly by tools anyway.) Amateurization? Tyrofication?
8 notes
olderthannetfic · 1 year ago
Note
Sorry to add to the already overflowing inbox with more OTW and AI stuff. I DID see the original comment and I’m not wild about it (it sounded pretty naive), but there was such a meltdown in the comments that you’d think the OTW had just sold all of AO3 off to ChatGPT like it was a teen girl in one of those One Direction stories. There were also comments like “so I’m not allowed to monetize my fic but you get to profit off AI??” That’s not how that works ugh 🙄
--
Yeah. People can be... rather dramatic.
As far as I know, OTW isn't doing anything to enable AI scraping. It's just a fact of life on publicly-available content these days.
49 notes