#author's guild lawsuit
Text
So, by popular demand, here is my own post about the Authors Guild lawsuit against OpenAI and why:
This case will not affect fanwork.
The actual legal complaint that was filed in court can be found here and I implore people to actually read it, as opposed to taking some rando's word on it (yes, me, I'm some rando).
The Introductory Statement (just pages 2-3) shouldn't require being fluent in legalese and it provides a fairly straightforward summary of what the case is aiming to accomplish, why, and how.
That said, I understand that for the majority of people 90% of the complaint is basically incomprehensible, so please give me some leeway as I try to condense 4 years of school and a 47-page legal document into a Tumblr post.
To abbreviate to the extreme, page 46 (paragraph 341, part d) lays out exactly what the plaintiffs are attempting to turn into law:
"An injunction [legal ruling] prohibiting Defendants [AI] from infringing Plaintiffs' [named authors] and class members' [any published authors] copyrights, including without limitation enjoining [prohibiting] Defendants from using Plaintiff's and class members' copyrighted works in "training" Defendant's large language models without express authorization."
That's it. That's all.
This case is not even attempting to alter the definition of "derivative work" and nothing in the language of the argument suggests that it would inadvertently change the legal treatment of "derivative work" going forward.
I see a lot of people throwing around the term "precedent" in a frenzy, assuming that because a case touches on a particular topic (eg “derivative work” aka fanart, fanfiction, etc) somehow it automatically and irrevocably alters the legal standing of that thing going forward.
That’s not how it works.
What's important to understand about the legal definition of "precedent" vs the common understanding of the term is that in law any case can simultaneously follow and establish precedent. Because no two cases are wholly the same due to the diversity of human experience, some elements of a case can reference established law (follow precedent), while other elements of a case can tread entirely new ground (establish precedent).
The plaintiffs in this case are attempting to establish precedent that anything AI creates going forward must be classified as "derivative work", specifically because they are already content with the existing precedent that defines and limits "derivative work".
The legal limitations on "derivative work", such as the rule that its creators only become fair game to be sued once the work is monetized, are the only reason the authors can* bring this to court and seek damages.
*this is called the "grounds" for a lawsuit. You can't sue someone just because you don't like what they're doing. You have to prove you are suffering "damages". This is why fanworks are tentatively "safe"—it's basically impossible to prove that Ebony Dark'ness Dementia is depriving the original creator of any income when she's providing her fanfic for free. On top of that, it's not worth the author’s time or money to attempt to sue Ebony when there's nothing for the author to monetarily gain from a broke nerd.
As for how AI/ChatGPT is "damaging" authors when Ebony isn't, and how unconscionably large the difference in potential profits up for grabs between the two is:
Page 9 (paragraphs 65-68) details how OpenAI/ChatGPT started off as a non-profit in 2015, but then switched to for-profit in 2019 and is now valued at $29 billion.
Pages 19-41 ("Plaintiff-Specific Allegations") detail how each named author in the lawsuit has been harmed and pages 15-19 ("GPT-N's and ChatGPT’s Harm to Authors") outline all the other ways that AI is putting thousands and thousands of other authors out of business by flooding the markets with cheap commissions and books.
The only ethically debatable portion of this case is the implications of expanding what qualifies as "derivative work".
However, this case seems pretty solidly aimed at Artificial Intelligence, with very little opportunity for the case to establish precedent that could be used against humans down the line. The language of the case is very thorough in detailing how the specific mechanics of AI mean that it copies* copyrighted material, and how those mechanics specifically mean that anything it produces should be classified as "derivative work" (by virtue of there being no way to prove that everything it produces is not a direct product of it having illegally obtained and used** copyrighted material).
*per section "General Factual Allegations" (pgs 7-8), the lawsuit argues that AI uses buzzwords ("train" "learn" "intelligence") to try to muddy how AI works, but in reality it all boils down to AI just "copying" (y'all can disagree with this if you want, I'm just telling you what the lawsuit says)
**I see a lot of people saying that it's not copyright infringement if you're not the one who literally scanned the book and uploaded it to the web. This isn't true. Once you "possess" copyrighted material obtained through illegal means (and downloading counts), you are breaking the law. And AI must first download content in order to train its algorithm, even if it dumps the original content nanoseconds later. So, effectively, given how it interacts with content, AI cannot interact with copyrighted material in any capacity without infringing.
Now that you know your fanworks are safe, I'll provide my own hot take 🔥:
Even if—even if—this lawsuit put fanworks in jeopardy... I'd still be all for it!
Why? Because if no one can make a living organically creating anything and it leads to all book, TV, and movie markets being entirely flooded with a bunch of progressively more soulless and reductive AI garbage, what the hell are you even going to be making fanworks of?
But, no, actually it's because the danger of AI weaseling its way into every crevice of society with impunity is orders of magnitude more detrimental to literal human life than fanwork being harder to access.
Note to anyone who chooses to interact with this post in any capacity: Just be civil!
#fanfiction#ao3#fanart#copyright law#copyright#chatgpt#openai#openai lawsuit#chatgpt lawsuit#author's guild#author's guild lawsuit#george rr martin#george rr martin lawsuit#copyright infringement#purs essays#purs post#purs discourse
81 notes
Text
Richard Luscombe at The Guardian:
Six major book publishers have teamed up to sue the US state of Florida over an “unconstitutional” law that has seen hundreds of titles purged from school libraries following rightwing challenges. The landmark action targets the “sweeping book removal provisions” of House Bill 1069, which required school districts to set up a mechanism for parents to object to anything they considered pornographic or inappropriate. A central plank of Republican governor Ron DeSantis’s war on “woke” on Florida campuses, the law has been abused by rightwing activists who quickly realized that any book they challenged had to be immediately removed and replaced only after the exhaustion of a lengthy and cumbersome review process, if at all, the publishers say. Since it went into effect last July, countless titles have been removed from elementary, middle and high school libraries, including American classics such as Brave New World by Aldous Huxley, For Whom the Bell Tolls by Ernest Hemingway and The Adventures of Tom Sawyer by Mark Twain.
Contemporary novels by bestselling authors such as Margaret Atwood, Judy Blume and Stephen King have also been removed, as well as The Diary of a Young Girl, Anne Frank’s gripping account of the Holocaust, according to the publishers. “Florida HB 1069’s complex and overbroad provisions have created chaos and turmoil across the state, resulting in thousands of historic and modern classics, works we are proud to publish, being unlawfully labeled obscene and removed from shelves,” Dan Novack, vice-president and associate general counsel of Penguin Random House (PRH), said in a statement. “Students need access to books that reflect a wide range of human experiences to learn and grow. It’s imperative for the education of our young people that teachers and librarians be allowed to use their professional expertise to match our authors’ books to the right reader at the right time in their life.” PRH is joined in the action by Hachette Book Group, HarperCollins Publishers, Macmillan Publishers, Simon & Schuster and Sourcebooks. The 94-page lawsuit, which also features as plaintiffs the Authors Guild and a number of individual writers, was filed in federal court in Orlando on Thursday.
The suit contends the book removal provisions violate previous supreme court decisions relating to reviewing works for their literary, artistic, political and scientific value as a whole while considering any potential obscenity; and seeks to restore the discretion “of trained educators to evaluate books holistically to avoid harm to students who will otherwise lose access to a wide range of viewpoints”. “Book bans censor authors’ voices, negating and silencing their lived experience and stories,” Mary Rasenberger, chief executive of the Authors Guild, said in the statement. “These bans have a chilling effect on what authors write about, and they damage authors’ reputations by creating the false notion that there is something unseemly about their books. “Yet these same books have edified young people for decades, expanding worlds and fostering self-esteem and empathy for others. We all lose out when authors’ truths are censored.” Separate from the publishers’ action, a group of three parents filed their own lawsuit in June, insisting that the law discriminated against parents who oppose book bans and censorship because it allowed others to dictate what their children can and cannot read.
Six major publishers sue Florida over book ban law HB1069.
96 notes
Text
a point about the IA situation that I cannot make on twitter without death threats
Like many authors, I have complicated feelings about the IA lawsuit. IA has a whole raft of incredibly invaluable services, that's not in dispute. The current eBook licensing structure is also clearly not sustainable. Neither was IA's National Emergency Library, which was unrestricted lending of unlicensed digital copies. There are some thoughtful posts about how their argument to authors, "you'll be paid in exposure," is not especially compelling.
But I'm not here to discuss that; I'm here to talk about the licensing. TL;DR I don't want my work being fed into an AI or put on the blockchain, and to enforce that, you need a license.
So, here's the thing. IA's argument for the NEL boils down to "if we possess a physical copy of the book we should be able to do what we want" and that's frankly unserious. (Compare: Crypto Bros who thought spending $3 million on a single copy of a Dune artbook meant they owned the copyright.) Their claim is that by scanning a physical copy of the book and compiling the scans into a digital edition, that is sufficiently transformative to be considered fair use.
What that gives them is something that functions almost identically to an eBook, without the limitations (or financial obligations) of an eBook license. And I'm sure some of you are thinking, "so what, you lose six cents, get over yourself," but this isn't actually about the money. It's about what they can do with the scans.
A license grants them the right to use the work in specific, limited ways. It also bars them from using it in ways that aren't prescribed.
For example, what if IA decides to expand their current blockchain projects and commit their scanned book collections to the blockchain "for preservation"? Or what if IA decides to let AI scrapers train on the scanned text? One of their archivists sees AI art as a "toy" and "fears [AI art] will be scorned by the industry's gatekeeping types."
Bluntly, an unlicensed, unrestricted collection seems to be what they're gunning for. (Wonky but informative thread from a lawyer with a focus on IP; this cuts to the pertinent part, but the whole thing's good reading.) The Authors Guild is in no way unbiased here, but in the fifth paragraph of this press release, they claim that they offered to help IA work out a licensing agreement back in 2017, and got stonewalled. (They also repeat this claim on Twitter here.)
At the end of the day, I don't want the IA to fold; I don't think anyone does. As a matter of fact, I'd be open to offering them an extremely cheap license for Controlled Digital Lending. (And revamping eBook library licensing while we're at it.) I think there's a lot of opportunity for everyone to win there. But IA needs to recognize that licenses exist for a reason, not just as a cash grab, and authors have the right to say how their work is used, just like any artist.
#good god I'm not putting tags on this#can you imagine#though maybe I'm just twitchy from my time in the twitter trenches#twenches? twinches? they both sound like felonies?
1K notes
Text
Nov. 20, 2024:
The Volusia County School Board Friday responded to a lawsuit filed by the country's largest book publishers challenging Florida's book removal provisions of HB 1069 claiming it was only following state law, among other defenses. Penguin Random House, Hachette Book Group, HarperCollins Publishers, Macmillan Publishers and Simon & Schuster, known as The Big 5, are joined in the lawsuit against the Florida State Board of Education, Volusia County School Board and Orange County School Board by Sourcebooks; The Authors Guild; and critically acclaimed authors Julia Alvarez, Laurie Halse Anderson, John Green, Jodi Picoult and Angie Thomas. A Volusia County parent and student and an Orange County parent and student are also plaintiffs.
Read the rest via the Daytona Beach News-Journal.
89 notes
Text
I have no idea why this needs to be said, but you can hate generative AI, love the Public Domain, love media preservation, hate the overbearing US Copyright system, and... still believe that Copyright Laws exist in the first place for a reason (even if, thanks to Big Corporation Monopolies, they've been twisted into their current behemoth monstrosity).
You can hate Large Language Models and still believe in Copyright Reform over Copyright Abolishment.
You can believe in Media Preservation and still believe that Plagiarism is wrong.
You can hate the current restrictive Copyright Laws without wanting to abolish them entirely.
You can love the Public Domain and still loathe predatory corporations stealing everything they can get their hands on, to literally *feed the machine.*
These things are not mutually exclusive, and if you think that
"you can't hate AI if you hate the current copyright laws"
or that
"Hating on Generative AI will only give us more restrictive copyright and IP laws, therefore you need to normalize and accept generative AI stealing all of your creations and every single thing you've ever said on the internet!"
then I just genuinely don't understand how you can say this kind of crap if you've ever interacted with any creative person in your life.
I'm a wannabe author.
I want as many people to be able to afford my written works as possible without restrictions, and I fully plan on having free ebooks of my works available for those who can't afford to buy them.
*That does *not* mean I, in any way shape or form, would ever consent to people stealing my work and uploading it into a Large Language Model and telling it to spit out fifty unauthorized sequels that are then sold for cash profit!*
You cannot support generative AI, turn around and try to claim you're actually just defending small-time artists, and *also* think no one should have any legal protections at all for their work against plagiarism.
Supporting unethical generative AI (which is literally all of them currently), protecting artists, and *completely abolishing* copyright and intellectual property laws instead of reforming them *are* mutually exclusive concepts.
You *cannot* worship the plagiarism machine, claim to care about small artists, and then say that those same small artists should have absolutely *zero* legal protections to stop their work being plagiarized.
The only way AI could even begin to approach being ethical would be if using it to begin with wasn't a huge hazard to the environment, and if it was trained *exclusively* on Public Domain works that had to be checked and confirmed by multiple real human beings before being put into the training data.
And oh, would you look at that?
Every single AI model is currently just sucking up the entire fucking goddamn internet, and everything ever posted on it, and everything ever downloaded from it, with no way to truly opt out or even to know whether your work has been fed to the machine until an entire page of text from your book pops out when it generates text from someone's writing prompt.
And no, it's not just "privileged Western authors" who are being exploited by AI.
For an updating list of global legal cases against AI tech giants, see this link here to stay up to date as cases develop:
#large text#long post#anti ai#fuck ai#not writing#copyright reform#copyright law#intellectual property
38 notes
Note
I have a question about your post regarding AI in which you detailed some agents' concerns. In particular you mentioned "we don't want our authors or artists work to be data-mined / scraped to "train" AI learning models/bots".
I completely agree, but what could be done to prevent this?
(I am no expert and clearly have NO idea what the terminology really is, but hopefully you will get it, sorry in advance?)
I mean, this is literally the thing we are all trying to figure out lol. But a start would be to have something in the contracts that SAYS Publishers do not have permission to license or otherwise permit companies to incorporate this copyrighted work into AI learning models, or to utilize this technology to mimic an author’s work.
The companies that are making AI bots or whatever are not shadowy guilds of hackers running around stealing things (despite how "web scraping" and "data mining" and all that sounds, which admittedly is v creepy and ominous!) -- web scraping, aka using robots to gather large amounts of publicly available data, is legal. That's like, a big part of how the internet works, it's how Google knows things when you google them, etc.
It's more dubious whether scraping things that are protected by copyright is legal -- the companies would say that it is covered under fair use, that they are putting all this info in there just to teach the AI, and it isn't to COPY the author's work, etc etc. The people whose IP it is, though, probs don't feel that way -- and the law is sort of confused/non-existent. (There are loads of lawsuits literally RIGHT NOW that are aiming to sort some of this out, and the Writers Guild strike which is ongoing and the SAG-AFTRA strike which started this week are largely centered around some of the same issues when it comes to companies using AI for screenwriting, using actors' likenesses and voices, etc.) Again, these are not shadowy organizations operating illegally off the coast of whatever -- these are regular-degular companies who can be sued, held to account, regulated, etc. The laws just haven't caught up to the technology yet.
Point being, it's perhaps unethical to "feed" copyrighted work into an AI thing without permission of the copyright holder, but is it ILLEGAL? Uh -- yes??? but also ?????. US copyright law is pretty clear that works generated entirely by AI can't be protected under copyright -- and that works protected by copyright can't be, you know, copied by somebody else -- but there's a bit of a grey area here because of fair use? It’s confusing, for sure, and I'm betting all this is being hashed out in court cases and committee rooms and whatnot as I type.
Anywhoo, the first steps are clarifying these things contractually. Authors Guild (and agents) take the stance that this permission to "feed" info to AI learning models is something the Author automatically holds rights to, and only the author can decide if/when a book is "fed" into an AI... thing.
The Publishers kinda think this is something THEY hold the rights to, or both parties do, and that these rights should be frozen so NEITHER party can choose to "feed", or neither can choose to do so without the other's permission.
(BTW just to be clear, as I understand it -- which again is NOT MUCH lol -- this "permission" is not like, somebody calls each individual author and asks for permission -- it's part of the coding. Like how many e-books are DRM protected, so they are locked to a particular platform / device and you can't share them etc -- there are bits of code that basically say NOPE to scrapers. So (in my imagination, at least), the little spider-robot is Roomba-ing around the internet looking for things to scrape and it comes across this bit of code and NOPE, they have to turn around and try the next thing. Now – just like if an Etsy seller made mugs with pictures of Mickey Mouse on them, using somebody else’s IP is illegal – and those people CAN be sued if the copyright holder has the appetite to do that - but it’s also hard to stop entirely. So if some random person took your book and just copied it onto a blog -- the spider-robot wouldn't KNOW that info was under copyright, or they don't have permission to gobble it up, because it wouldn't have that bit of code to let them know -- so in that way it could be that nobody ever FULLY knows that the spider-robots won't steal their stuff, and publishers can't really be liable for that if third parties are involved mucking it up -- but they certainly CAN at least attempt to protect copyright!)
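For the technically curious, here is a minimal sketch of the "bit of code that says NOPE" idea: a polite scraper is supposed to check a site's robots.txt before fetching anything. This is purely illustrative (the URL and bot name below are made up, and it says nothing about how any particular AI company's crawler actually behaves), but it shows the mechanism in a few lines of Python using only the standard library.

```python
# Minimal sketch of a "polite" scraper checking robots.txt before fetching.
# Illustrative only -- the site URL and user-agent name below are hypothetical.
from urllib import robotparser, request

USER_AGENT = "ExampleBookBot"  # hypothetical crawler name
PAGE = "https://example.com/some-book-excerpt.html"

# Load the site's robots.txt, which lists what crawlers may and may not fetch.
rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

if rp.can_fetch(USER_AGENT, PAGE):
    # The site allows this bot to read the page, so fetch it.
    req = request.Request(PAGE, headers={"User-Agent": USER_AGENT})
    html = request.urlopen(req).read()
    print(f"Fetched {len(html)} bytes")
else:
    # The "NOPE" case: robots.txt tells this bot to stay away.
    print("robots.txt disallows fetching this page; skipping")
```

The catch, of course, is that robots.txt and similar signals are voluntary: nothing technically forces a scraper to run this check, which is exactly why people want the permissions spelled out in contracts and in law.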
But also, you know how I don't even know what I'm talking about and don't know the words? Like in the previous paragraphs? The same goes for all the publishers and everyone else who isn't already a tech wizard, ALL of whom are suddenly learning a lot of very weird words and phrases and rules that nobody *exactly* understands, and it's all changing by the week (and by the day, even).
Publishers ARE starting to add some of this language, but I also would expect it to feel somewhat confused/wild-west-ish until some of the laws around this stuff are clearer. But really: We're all working on it!
87 notes
Text
On Monday, the leadership of the Screen Actors Guild–American Federation of Television and Radio Artists held a members-only webinar to discuss the contract the union tentatively agreed upon last week with the Alliance of Motion Picture and Television Producers. If ratified, the contract will officially end the longest labor strike in the guild’s history.
For many in the industry, artificial intelligence was one of the strike's most contentious, fear-inducing components. Over the weekend, SAG released details of its agreed AI terms, an expansive set of protections that require consent and compensation for all actors, regardless of status. With this agreement, SAG has gone substantially further than the Directors Guild of America or the Writers Guild of America, who preceded the group in coming to terms with the AMPTP. This isn’t to say that SAG succeeded where the other unions failed but that actors face more of an immediate, existential threat from machine-learning advances and other computer-generated technologies.
The SAG deal is similar to the DGA and WGA deals in that it demands protections for any instance where machine-learning tools are used to manipulate or exploit their work. All three unions have claimed their AI agreements are "historic" and "protective," but whether one agrees with that or not, these deals function as important guideposts. AI doesn't just pose a threat to writers and actors—it has ramifications for workers in all fields, creative or otherwise.
For those looking to Hollywood's labor struggles as a blueprint for how to deal with AI in their own disputes, it's important that these deals have the right protections, so I understand those who have questioned them or pushed them to be more stringent. I’m among them. But there is a point at which we are pushing for things that cannot be accomplished in this round of negotiations and may not need to be pushed for at all.
To better understand what the public generally calls AI and its perceived threat, I spent months during the strike meeting with many of the leading engineers and tech experts in machine-learning and legal scholars in both Big Tech and copyright law.
The essence of what I learned confirmed three key points: The first is that the gravest threats are not what we hear most spoken about in the news—most of the people whom machine-learning tools will negatively impact aren’t the privileged but low- and working-class laborers and marginalized and minority groups, due to the inherent biases within the technology. The second point is that the studios are as threatened by the rise and unregulated power of Big Tech as the creative workforce, something I wrote about in detail earlier in the strike here and that WIRED’s Angela Watercutter astutely expanded upon here.
Both lead to the third point, which speaks most directly to the AI deals: No ironclad legal language exists to fully protect artists (or anyone) from exploitation involving machine-learning tools.
When we hear artists talk about fighting AI on legal grounds, they’re either suing for copyright infringement or requiring tech companies to cease inputting creative works into their AI models. Neither of these approaches are effective in the current climate. Copyright law is designed to protect intellectual property holders, not creative individuals, and the majority of these infringement lawsuits are unlikely to succeed or, if they do, are unlikely to lead to enforceable new laws. This became evident when the Authors Guild failed in its copyright lawsuit against Google in 2015; and it faces similar challenges with its new suit, as outlined here.
The demand to control the ability of AI to train on artists' work betrays a fundamental lack of understanding of how these models and the companies behind them function, as we can’t possibly prevent who scrapes what in an age where everything is already ingested online. It also relies on trusting tech companies to police themselves and not ingest works they have been told not to, knowing it’s nearly impossible to prove otherwise.
Tech entities like OpenAI are black boxes that offer little to no disclosure about how their datasets work, as are all the major Big Tech players. That doesn’t mean we shouldn’t fight for greater transparency and reform copyright protections. However, that’s a long and uncertain game and requires government entities like the US Federal Trade Commission to be willing to battle the deep-pocketed lobbyists preventing meaningful legislation against their Big Tech bosses. There will be progress eventually, but certainly not in time for this labor crisis that has hurt so many.
The absence of enforceable laws that would shackle Big Tech doesn’t make these deals a toothless compromise—far from it. There is great value in a labor force firmly demanding its terms be codified in a contract. The studios can find loopholes around some of that language if they choose, as they have in the past, but they will then be in breach of their agreed contract and will face publicly shaming lawsuits by influential and beloved artists and the potential of another lengthy and costly strike.
What is historic in these Hollywood deals is the clear statement of what the creative workforce will and won't tolerate from the corporations. Standing in solidarity behind that statement carries tremendous weight, even if it isn't fully enforceable. It sends a message to other industry unions, several of which are facing upcoming contract negotiations, and to all labor movements, that workers will not tolerate being exploited and replaced by the rapid advance of Big Tech. And it should not be lost on the AMPTP that it may soon find itself making similar demands for its own survival to the Big Tech companies, who are perfectly poised to circumvent or devour the legacy studios.
Over the weekend, there were calls for SAG members to reject the contract based on its AI stipulations. I'll be voting to ratify, as I did for the DGA and WGA agreements—not because the terms are perfect or ironclad but because the deal is meaningful and effective. And there are no practical and immediate solutions that aren’t currently addressed. It’s time to get back to work.
This is not a fight that ends with the current strike; it’s early days in the Tech Era, with both painful disruption and significant benefits to come. The SAG deal, in combination with the DGA and WGA deals, is a momentous early blow in labor’s fight for a fair and equitable place in the new world.
25 notes
Text
George R.R. Martin and a number of other authors have filed a class action suit against OpenAI.
This suit is being fought over copyright law.
The complaint claims that OpenAI, the company behind viral chatbot ChatGPT, is copying famous works in acts of “flagrant and harmful” copyright infringement and feeding manuscripts into algorithms to help train systems on how to create more human-like text responses. George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work. “Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release Wednesday.
The suit makes reference to attempts by OpenAI to complete ASoIaF artificially.
The suit alleges that books created by the authors that were illegally downloaded and fed into GPT systems could turn a profit for OpenAI by “writing” new works in the authors’ styles, while the original creators would get nothing. The press release lists AI efforts to create two new volumes in Martin’s Game of Thrones series and AI-generated books available on Amazon.
This is not the first effort to keep GPT systems from pillaging works of literature.
More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.
At least in the United States, the law has not kept up with digital technology.
@neil-gaiman
#game of thrones#house of the dragon#chatgpt#openai#the authors guild#george r.r. martin#asoiaf#gra o tron#trône de fer#kampen om tronen#juego de tronos#trono di spade#jogo dos tronos#hra o trůny#valtaistuinpeli#isang kanta ng yelo at apoy#pemainan takhta#trò chơi của ngai#صراع العروش#гра престолів#왕좌의 게임#权力的游戏#ゲームの玉座#গেম অব থ্রোনস#تخت کے کھیل#गेम ऑफ़ थ्रोन्स#ഗെയിം ഓഫ് ത്രോൺസ്#משחקי הכס#игра престолов#Գահերի խաղը
13 notes
Text
US authors George RR Martin and John Grisham are suing ChatGPT-owner OpenAI over claims their copyright was infringed to train the system.
Martin is known for his fantasy series A Song of Ice and Fire, which was adapted into HBO show Game of Thrones.
ChatGPT and other large language models (LLMs) "learn" by analysing a massive amount of data often sourced online.
The lawsuit claims the authors' books were used without their permission to make ChatGPT smarter.
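As a loose illustration of what "learning by analysing a massive amount of data" means: real LLMs are neural networks trained on billions of documents, not lookup tables, but even a toy model that just counts which word tends to follow which shows how the training text ends up shaping what the system produces. The sketch below (with a made-up sample sentence) illustrates that general idea only; it is not a description of OpenAI's actual systems.

```python
# Toy "language model": count next-word statistics from training text,
# then generate by repeatedly picking the most common follower.
# Illustrative only -- real LLMs are neural networks, not lookup tables.
from collections import Counter, defaultdict

training_text = "the knight rode north the knight drew his sword the wind howled"
words = training_text.split()

# "Training": record which word follows which in the training text.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

# "Generation": start from a word and keep emitting the most likely next word.
word, output = "the", ["the"]
for _ in range(5):
    if not followers[word]:
        break
    word = followers[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # echoes patterns found in the training text
```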
OpenAI said it respected the rights of authors, and believed "they should benefit from AI technology".
Other prominent authors named in the complaint include Jonathan Franzen, Jodi Picoult and George Saunders.
The case has been brought to the federal court in Manhattan, New York, by the Authors Guild, a trade group in the US working on behalf of the named authors.
The filing accused OpenAI of engaging in "systematic theft on a mass scale".
It follows similar legal action brought by comedian Sarah Silverman in July, as well as an open letter signed by authors Margaret Atwood and Philip Pullman that same month calling for AI companies to compensate them for using their work.
A spokesperson for OpenAI said: "We're having productive conversations with many creators around the world, including the Authors Guild, and have been working co-operatively to understand and discuss their concerns about AI.
"We're optimistic we will continue to find mutually beneficial ways to work together."
AI 'displacing humans'
The case argues that the LLM was fed data from copyrighted books without the permission of the authors, pointing in part to the fact that it was able to provide accurate summaries of them.
The lawsuit also pointed to a broader concern in the media industry - that this kind of technology is "displacing human-authored" content.
Patrick Goold, reader in law at City University, told BBC News that while he could sympathise with the authors behind the lawsuit, he believed it was unlikely it would succeed, saying they would initially need to prove ChatGPT had copied and duplicated their work.
"They're actually not really worried about copyright, what they're worried about is that AI is a job killer," he said, likening the concerns to those screenwriters are currently protesting against in Hollywood.
"When we're talking about AI automation and replacing human labour... it's just not something that copyright should fix.
"What we need to be doing is going to Parliament and Congress and talking about how AI is going to displace the creative arts and what we need to do about that in the future."
The case is the latest in a long line of complaints brought against developers of so-called generative AI - that is, artificial intelligence that can create media based on text prompts - over this concern.
It comes after digital artists sued text-to-image generators Stability AI and Midjourney in January, claiming they only function by being trained on copyrighted artwork.
And OpenAI is also facing a lawsuit, alongside Microsoft and programming site GitHub, from a group of computing experts who argue their code was used without their permission to train an AI called Copilot.
None of these lawsuits has yet been resolved.
13 notes
Text
The "National Emergency Library" & Hachette v. Internet Archive
While the Internet Archive is known as the creator and host of the Wayback Machine and many other internet and digital media preservation projects, the IA collection in question in Hachette v. Internet Archive is their Open Library. The Open Library has been digitizing books since as early as 2005, and in early 2011, began to include and distribute copyrighted books through Controlled Digital Lending (CDL). In total, the IA includes 3.6 million copyrighted books and continues to scan over 4,000 books a day.
During the early days of the pandemic, from March 24, 2020, to June 16, 2020, specifically, the Internet Archive offered their National Emergency Library, which did away with the waitlist limitations on their pre-existing Open Library. Instead of following the strict rules laid out in the Position Statement on Controlled Digital Lending, which mandates an equal “owned to loaned” ratio, the IA allowed multiple readers to access the same digitized book at once. This, they said, was a direct emergency response to the worldwide pandemic that cut off people’s access to physical libraries.
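To make the "owned to loaned" constraint concrete, here is a minimal, hypothetical sketch of how a Controlled Digital Lending check works: a digital copy can only go out while the number of active loans stays at or below the number of physical copies the library owns. This is an illustrative model, not the Internet Archive's actual lending code.

```python
# Toy model of Controlled Digital Lending's "owned to loaned" ratio.
# Purely illustrative -- not the Internet Archive's actual system.
class CDLTitle:
    def __init__(self, title: str, copies_owned: int):
        self.title = title
        self.copies_owned = copies_owned   # physical copies held by the library
        self.active_loans = 0              # digital checkouts currently out

    def checkout(self) -> bool:
        # Under CDL, digital loans may never exceed copies owned (1:1 ratio).
        if self.active_loans < self.copies_owned:
            self.active_loans += 1
            return True
        return False                       # reader goes on a waitlist instead

    def checkin(self) -> None:
        if self.active_loans > 0:
            self.active_loans -= 1

book = CDLTitle("Example Novel", copies_owned=2)
print(book.checkout(), book.checkout(), book.checkout())  # True True False

# The National Emergency Library effectively dropped the cap above,
# letting any number of readers borrow the same scan at once.
```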
In response, on June 1, 2020, Hachette Book Group, HarperCollins, John Wiley & Sons, and Penguin Random House filed a lawsuit against the IA over copyright infringement. Out of their collective 33,000 copyrighted titles available on Open Library, the publishers’ lawsuit focused on 127 books specifically (known in the legal documentation as the “Works in Suit”). After two years of argument, on March 24, 2023, Judge John George Koeltl ruled in favor of the publishers.
The IA’s fair use defense was found to be insufficient because the scanning and distribution of books was not deemed transformative in any way, unlike other copyright lawsuits, such as Authors Guild, Inc. v. HathiTrust, which ruled in favor of digitizing books for “utility-expanding” purposes. Furthermore, it was found that even prior to the National Emergency Library, the Open Library frequently failed to maintain the “owned to loaned” ratio by not sufficiently monitoring the circulation of books it borrows from partner libraries. Finally, despite being a nonprofit organization overall, the IA was found to profit off of the distribution of the copyrighted books, specifically through a Better World Books link that shares part of every sale made through that specific link with the IA.
It is worth noting that this ruling specifies that “even full enforcement of a one-to-one owned-to-loaned ratio, however, would not excuse IA’s reproduction of the Works in Suit.” This may set precedent for future copyright cases that attempt to claim copyright exemption through the practice of controlled digital lending. It is unclear whether this ruling is limited to the National Emergency Library specifically, or if it will affect the Open Library and other collections that practice CDL moving forward.
Edit: I recommend seeing what @carriesthewind has to say about the most recent updates in the Internet Archive cases for a lawyer's perspective on how these cases will affect the future of digital lending law in the U.S.
Further Reading:
Full History of Hachette Book Group, Inc. v. Internet Archive [Released by the Free Law Project]
Hachette v. Internet Archive ruling
Internet Archive Loses Lawsuit Over E-Book Copyright Infringement
The Fight Continues [Released by The Internet Archive]
Authors Guild Celebrates Resounding Win in Internet Archive Infringement Lawsuit [Released by The Authors Guild]
Relevant Court Cases:
Authors Guild, Inc. v. Google, Inc.
Authors Guild, Inc. v. HathiTrust
Capitol Records v. ReDigi
Index:
MASTER POST
First-Sale Doctrine & the Economics of E-books
Controlled Digital Lending (CDL)
The “National Emergency Library” & Hachette v. Internet Archive
Authors, Publishers & You
-- Authors: Ideology v. Practicality
-- Publishers: What Authors Are Paid
-- You: When Is Piracy Ethical?
11 notes
Text
Anthropic CEO Dario Amodei is trying to avoid a deposition in the OpenAI copyright lawsuit
According to a new court filing, Anthropic CEO Dario Amodei is trying to avoid being deposed in the copyright lawsuit against OpenAI. The plaintiffs’ lawyers – the Authors Guild – have filed a motion to compel testimony from Amodei and his co-founder, Benjamin Mann. The authors’ lawyers claim that both Amodei and Mann, who previously worked at OpenAI, have “very unique and direct knowledge” about this case. The Authors Guild…
0 notes
Text
Anthropic CEO Amodei is trying to avoid a deposition in the OpenAI copyright lawsuit
Anthropic CEO Dario Amodei is trying to avoid being deposed in the copyright lawsuit against OpenAI. Lawyers for the plaintiffs – the Authors Guild – have moved to compel testimony from Amodei and his Anthropic co-founder, Benjamin Mann, arguing that both have direct knowledge relevant to the case. The Authors Guild represents writers like John Grisham, George R.R. Martin, and Sylvia Day, and…
0 notes
Text
The Game of Thrones creator's relationship with artificial intelligence
0 notes