#author's guild lawsuit
puraiuddo · 1 year ago
Text
So by popular demand, here is my own post about the Authors Guild v. OpenAI lawsuit, and why
This case will not affect fanwork.
The actual legal complaint that was filed in court can be found here and I implore people to actually read it, as opposed to taking some rando's word on it (yes, me, I'm some rando).
The Introductory Statement (just pages 2-3) shouldn't require being fluent in legalese and it provides a fairly straightforward summary of what the case is aiming to accomplish, why, and how.
That said, I understand that for the majority of people 90% of the complaint is basically incomprehensible, so please give me some leeway as I try to condense 4 years of school and a 47-page legal document into a tumblr post.
To abbreviate to the extreme, page 46 (paragraph 341, part d) lays out exactly what the plaintiffs are attempting to turn into law:
"An injunction [legal ruling] prohibiting Defendants [AI] from infringing Plaintiffs' [named authors] and class members' [any published authors] copyrights, including without limitation enjoining [prohibiting] Defendants from using Plaintiff's and class members' copyrighted works in "training" Defendant's large language models without express authorization."
That's it. That's all.
This case is not even attempting to alter the definition of "derivative work" and nothing in the language of the argument suggests that it would inadvertently change the legal treatment of "derivative work" going forward.
I see a lot of people throwing around the term "precedent" in a frenzy, assuming that because a case touches on a particular topic (eg “derivative work” aka fanart, fanfiction, etc) somehow it automatically and irrevocably alters the legal standing of that thing going forward.
That’s not how it works.
What's important to understand about the legal definition of "precedent" vs the common understanding of the term is that in law any case can simultaneously follow and establish precedent. Because no two cases are wholly the same due to the diversity of human experience, some elements of a case can reference established law (follow precedent), while other elements of a case can tread entirely new ground (establish precedent).
The plaintiffs in this case are attempting to establish precedent that anything AI creates going forward must be classified as "derivative work", specifically because they are already content with the existing precedent that defines and limits "derivative work".
The legal limitations of "derivative work", such as those dictating that only once it is monetized are its creators fair game to be sued, are the only reason the authors can* bring this to court and seek damages.
*this is called the "grounds" for a lawsuit. You can't sue someone just because you don't like what they're doing. You have to prove you are suffering "damages". This is why fanworks are tentatively "safe"—it's basically impossible to prove that Ebony Dark'ness Dementia is depriving the original creator of any income when she's providing her fanfic for free. On top of that, it's not worth the author’s time or money to attempt to sue Ebony when there's nothing for the author to monetarily gain from a broke nerd.
As for how AI/ChatGPT is "damaging" authors when Ebony isn't, and how unconscionably large the difference in potential profits is between the two:
Page 9 (paragraphs 65-68) details how OpenAI/ChatGPT started off as a non-profit in 2015, but then switched to for-profit in 2019 and is now valued at $29 billion.
Pages 19-41 ("Plaintiff-Specific Allegations") detail how each named author in the lawsuit has been harmed and pages 15-19 ("GPT-N's and ChatGPT’s Harm to Authors") outline all the other ways that AI is putting thousands and thousands of other authors out of business by flooding the markets with cheap commissions and books.
The only ethically debatable portion of this case is the implications of expanding what qualifies as "derivative work".
However, this case seems pretty solidly aimed at Artificial Intelligence, with very little opportunity for the case to establish precedent that could be used against humans down the line. The language of the case is very thorough in detailing how the specific mechanics of AI mean that it copies* copyrighted material, and how those mechanics specifically mean that anything it produces should be classified as "derivative work" (by virtue of there being no way to prove that everything it produces is not a direct product of it having illegally obtained and used** copyrighted material).
*per section "General Factual Allegations" (pgs 7-8), the lawsuit argues that AI uses buzzwords ("train" "learn" "intelligence") to try to muddy how AI works, but in reality it all boils down to AI just "copying" (y'all can disagree with this if you want, I'm just telling you what the lawsuit says)
**I see a lot of people saying that it's not copyright infringement if you're not the one who literally scanned the book and uploaded it to the web—this isn't true. Once you "possess" (and downloading counts) copyrighted material obtained through illegal means, you are breaking the law. And AI must first download content in order to train its algorithm, even if it dumps the original content nanoseconds later. So, effectively, AI cannot interact with copyrighted material in any capacity, by virtue of how it interacts with content, without infringing.
Now that you know your fanworks are safe, I'll provide my own hot take 🔥:
Even if—even if—this lawsuit put fanworks in jeopardy... I'd still be all for it!
Why? Because if no one can make a living organically creating anything and it leads to all book, TV, and movie markets being entirely flooded with a bunch of progressively more soulless and reductive AI garbage, what the hell are you even going to be making fanworks of?
But, no, actually because the danger of AI weaseling its way into every crevice of society with impunity is orders of magnitude greater and more detrimental to literal human life than fanwork being harder to access.
Note to anyone who chooses to interact with this post in any capacity: Just be civil!
81 notes · View notes
what-eats-owls · 2 years ago
Text
a point about the IA situation that I cannot make on twitter without death threats
Like many authors, I have complicated feelings about the IA lawsuit. IA provides a whole raft of invaluable services; that's not in dispute. The current eBook licensing structure is also clearly not sustainable. Neither was IA's National Emergency Library, which was unrestricted lending of unlicensed digital copies. There are some thoughtful posts about how their argument to authors, "you'll be paid in exposure," is not especially compelling.
But I'm not here to discuss that; I'm here to talk about the licensing. TL;DR I don't want my work being fed into an AI or put on the blockchain, and to enforce that, you need a license.
So, here's the thing. IA's argument for the NEL boils down to "if we possess a physical copy of the book we should be able to do what we want," and that's frankly unserious. (Compare: Crypto Bros who thought spending $3 million on a single copy of a Dune artbook meant they owned the copyright.) Their claim is that scanning a physical copy of the book and compiling the scans into a digital edition is sufficiently transformative to be considered fair use.
What that gives them is something that functions almost identically to an eBook, without the limitations (or financial obligations) of an eBook license. And I'm sure some of you are thinking, "so what, you lose six cents, get over yourself," but this isn't actually about the money. It's about what they can do with the scans.
A license grants them the right to use the work in specific, limited ways. It also bars them from using it in ways that aren't prescribed.
For example, what if IA decides to expand their current blockchain projects and commit their scanned book collections to the blockchain "for preservation"? Or what if IA decides to let AI scrapers train on the scanned text? One of their archivists sees AI art as a "toy" and "fears [AI art] will be scorned by the industry's gatekeeping types."
Bluntly, an unlicensed, unrestricted collection seems to be what they're gunning for. (Wonky but informative thread from a lawyer with a focus on IP; this cuts to the pertinent part, but the whole thing's good reading.) The Authors Guild is in no way unbiased here, but in the fifth paragraph of this press release, they claim that they offered to help IA work out a licensing agreement back in 2017, and got stonewalled. (They also repeat this claim on Twitter here.)
At the end of the day, I don't want the IA to fold; I don't think anyone does. As a matter of fact, I'd be open to offering them an extremely cheap license for Controlled Digital Lending. (And revamping eBook library licensing while we're at it.) I think there's a lot of opportunity for everyone to win there. But IA needs to recognize that licenses exist for a reason, not just as a cash grab, and authors have the right to say how their work is used, just like any artist.
1K notes · View notes
renthony · 6 days ago
Text
Nov. 20, 2024:
The Volusia County School Board Friday responded to a lawsuit filed by the country's largest book publishers challenging Florida's book removal provisions of HB 1069 claiming it was only following state law, among other defenses. Penguin Random House, Hachette Book Group, HarperCollins Publishers, Macmillan Publishers and Simon & Schuster, known as The Big 5, are joined in the lawsuit against the Florida State Board of Education, Volusia County School Board and Orange County School Board by Sourcebooks; The Authors Guild; and critically acclaimed authors Julia Alvarez, Laurie Halse Anderson, John Green, Jodi Picoult and Angie Thomas. A Volusia County parent and student and an Orange County parent and student are also plaintiffs.
Read the rest via the Daytona Beach News-Journal.
87 notes · View notes
justinspoliticalcorner · 3 months ago
Text
Richard Luscombe at The Guardian:
Six major book publishers have teamed up to sue the US state of Florida over an “unconstitutional” law that has seen hundreds of titles purged from school libraries following rightwing challenges. The landmark action targets the “sweeping book removal provisions” of House Bill 1069, which required school districts to set up a mechanism for parents to object to anything they considered pornographic or inappropriate. A central plank of Republican governor Ron DeSantis’s war on “woke” on Florida campuses, the law has been abused by rightwing activists who quickly realized that any book they challenged had to be immediately removed and replaced only after the exhaustion of a lengthy and cumbersome review process, if at all, the publishers say. Since it went into effect last July, countless titles have been removed from elementary, middle and high school libraries, including American classics such as Brave New World by Aldous Huxley, For Whom the Bell Tolls by Ernest Hemingway and The Adventures of Tom Sawyer by Mark Twain.
Contemporary novels by bestselling authors such as Margaret Atwood, Judy Blume and Stephen King have also been removed, as well as The Diary of a Young Girl, Anne Frank’s gripping account of the Holocaust, according to the publishers. “Florida HB 1069’s complex and overbroad provisions have created chaos and turmoil across the state, resulting in thousands of historic and modern classics, works we are proud to publish, being unlawfully labeled obscene and removed from shelves,” Dan Novack, vice-president and associate general counsel of Penguin Random House (PRH), said in a statement. “Students need access to books that reflect a wide range of human experiences to learn and grow. It’s imperative for the education of our young people that teachers and librarians be allowed to use their professional expertise to match our authors’ books to the right reader at the right time in their life.” PRH is joined in the action by Hachette Book Group, HarperCollins Publishers, Macmillan Publishers, Simon & Schuster and Sourcebooks. The 94-page lawsuit, which also features as plaintiffs the Authors Guild and a number of individual writers, was filed in federal court in Orlando on Thursday.
The suit contends the book removal provisions violate previous supreme court decisions relating to reviewing works for their literary, artistic, political and scientific value as a whole while considering any potential obscenity; and seeks to restore the discretion “of trained educators to evaluate books holistically to avoid harm to students who will otherwise lose access to a wide range of viewpoints”. “Book bans censor authors’ voices, negating and silencing their lived experience and stories,” Mary Rasenberger, chief executive of the Authors Guild, said in the statement. “These bans have a chilling effect on what authors write about, and they damage authors’ reputations by creating the false notion that there is something unseemly about their books. “Yet these same books have edified young people for decades, expanding worlds and fostering self-esteem and empathy for others. We all lose out when authors’ truths are censored.” Separate from the publishers’ action, a group of three parents filed their own lawsuit in June, insisting that the law discriminated against parents who oppose book bans and censorship because it allowed others to dictate what their children can and cannot read.
Six major publishers sue Florida over book ban law HB1069.
96 notes · View notes
literaticat · 1 year ago
Note
I have a question about your post regarding AI in which you detailed some agents' concerns. In particular you mentioned "we don't want our authors or artists work to be data-mined / scraped to "train" AI learning models/bots".
I completely agree, but what could be done to prevent this?
(I am no expert and clearly have NO idea what the terminology really is, but hopefully you will get it, sorry in advance?)
I mean, this is literally the thing we are all trying to figure out lol. But a start would be to have something in the contracts that SAYS Publishers do not have permission to license or otherwise permit companies to incorporate this copyrighted work into AI learning models, or to utilize this technology to mimic an author’s work.
The companies that are making AI bots or whatever are not shadowy guilds of hackers running around stealing things (despite how "web scraping" and "data mining" and all that sounds, which admittedly is v creepy and ominous!) -- web scraping, aka using robots to gather large amounts of publicly available data, is legal. That's like, a big part of how the internet works, it's how Google knows things when you google them, etc.
It's more dubious whether scraping things that are protected under copyright is legal -- the companies would say that it is covered under fair use, that they are putting all this info in there to just teach the AI, and it isn't to COPY the author's work, etc etc. The people whose IP it is, though, probs don't feel that way -- and the law is sort of confused/non-existent. (There are loads of lawsuits literally RIGHT NOW that are aiming to sort some of this out, and the Writers Guild strike, which is ongoing, and the SAG-AFTRA strike, which started this week, are largely centered around some of the same issues when it comes to companies using AI for screenwriting, using actors' likenesses and voices, etc.) Again, these are not shadowy organizations operating illegally off the coast of whatever -- these are regular-degular companies who can be sued, held to account, regulated, etc. The laws just haven't caught up to the technology yet.
Point being, it's perhaps unethical to "feed" copyrighted work into an AI thing without permission of the copyright holder, but is it ILLEGAL? Uh -- yes??? but also ?????. US copyright law is pretty clear that works generated entirely by AI can't be protected under copyright -- and that works protected by copyright can't be, you know, copied by somebody else -- but there's a bit of a grey area here because of fair use? It’s confusing, for sure, and I'm betting all this is being hashed out in court cases and committee rooms and whatnot as I type.
Anywhoo, the first steps are clarifying these things contractually. Authors Guild (and agents) take the stance that this permission to "feed" info to AI learning models is something the Author automatically holds rights to, and only the author can decide if/when a book is "fed" into an AI... thing.
The Publishers kinda think this is something THEY hold the rights to, or both parties do, and that these rights should be frozen so NEITHER party can choose to "feed", or neither can choose to do so without the other's permission.
(BTW just to be clear, as I understand it -- which again is NOT MUCH lol -- this "permission" is not like, somebody calls each individual author and asks for permission -- it's part of the coding. Like how many e-books are DRM protected, so they are locked to a particular platform / device and you can't share them etc -- there are bits of code that basically say NOPE to scrapers. So (in my imagination, at least), the little spider-robot is Roomba-ing around the internet looking for things to scrape and it comes across this bit of code and NOPE, they have to turn around and try the next thing. Now – just like if an Etsy seller made mugs with pictures of Mickey Mouse on them, using somebody else’s IP is illegal – and those people CAN be sued if the copyright holder has the appetite to do that - but it’s also hard to stop entirely. So if some random person took your book and just copied it onto a blog -- the spider-robot wouldn't KNOW that info was under copyright, or they don't have permission to gobble it up, because it wouldn't have that bit of code to let them know -- so in that way it could be that nobody ever FULLY knows that the spider-robots won't steal their stuff, and publishers can't really be liable for that if third parties are involved mucking it up -- but they certainly CAN at least attempt to protect copyright!)
But also, you know how I don't even know what I'm talking about and don't know the words? Like in the previous paragraphs? The same goes for all the publishers and everyone else who isn't already a tech wizard, ALL of whom are suddenly learning a lot of very weird words and phrases and rules that nobody *exactly* understands, and it's all changing by the week (and by the day, even).
Publishers ARE starting to add some of this language, but I also would expect it to feel somewhat confused/wild-west-ish until some of the laws around this stuff are clearer. But really: We're all working on it!
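For anyone curious what that "bit of code that says NOPE" from a few paragraphs up might look like in practice, here's a minimal, purely illustrative sketch (not from the original post) of the most common mechanism, a robots.txt file, checked with Python's standard library. The publisher URL is made up; "GPTBot" is the user-agent OpenAI documents for its web crawler, and the whole scheme relies on crawlers choosing to honor it.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical publisher's robots.txt that blocks OpenAI's documented
# crawler ("GPTBot") while still allowing ordinary search-engine crawlers.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

robots = RobotFileParser()
robots.parse(robots_txt.splitlines())  # parse the rules just as a crawler would

page = "https://example-publisher.com/books/sample-chapter.html"  # placeholder URL
print(robots.can_fetch("GPTBot", page))     # False -> the "NOPE" signal
print(robots.can_fetch("Googlebot", page))  # True  -> regular crawlers still welcome
```

Nothing here physically stops a scraper that ignores the file, which is exactly the "nobody ever FULLY knows" problem described above; it's a request backed only by law and reputation, not a lock.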
87 notes · View notes
mariacallous · 1 year ago
Text
On Monday, the leadership of the Screen Actors Guild–American Federation of Television and Radio Artists held a members-only webinar to discuss the contract the union tentatively agreed upon last week with the Alliance of Motion Picture and Television Producers. If ratified, the contract will officially end the longest labor strike in the guild’s history.
For many in the industry, artificial intelligence was one of the strike's most contentious, fear-inducing components. Over the weekend, SAG released details of its agreed AI terms, an expansive set of protections that require consent and compensation for all actors, regardless of status. With this agreement, SAG has gone substantially further than the Directors Guild of America or the Writers Guild of America, who preceded the group in coming to terms with the AMPTP. This isn’t to say that SAG succeeded where the other unions failed but that actors face more of an immediate, existential threat from machine-learning advances and other computer-generated technologies.
The SAG deal is similar to the DGA and WGA deals in that it demands protections for any instance where machine-learning tools are used to manipulate or exploit their work. All three unions have claimed their AI agreements are "historic" and "protective," but whether one agrees with that or not, these deals function as important guideposts. AI doesn't just pose a threat to writers and actors—it has ramifications for workers in all fields, creative or otherwise.
For those looking to Hollywood's labor struggles as a blueprint for how to deal with AI in their own disputes, it's important that these deals have the right protections, so I understand those who have questioned them or pushed them to be more stringent. I’m among them. But there is a point at which we are pushing for things that cannot be accomplished in this round of negotiations and may not need to be pushed for at all.
To better understand what the public generally calls AI and its perceived threat, I spent months during the strike meeting with many of the leading engineers and tech experts in machine-learning and legal scholars in both Big Tech and copyright law.
The essence of what I learned confirmed three key points: The first is that the gravest threats are not what we hear most spoken about in the news—most of the people whom machine-learning tools will negatively impact aren’t the privileged but low- and working-class laborers and marginalized and minority groups, due to the inherent biases within the technology. The second point is that the studios are as threatened by the rise and unregulated power of Big Tech as the creative workforce, something I wrote about in detail earlier in the strike here and that WIRED’s Angela Watercutter astutely expanded upon here.
Both lead to the third point, which speaks most directly to the AI deals: No ironclad legal language exists to fully protect artists (or anyone) from exploitation involving machine-learning tools.
When we hear artists talk about fighting AI on legal grounds, they're either suing for copyright infringement or requiring tech companies to cease inputting creative works into their AI models. Neither of these approaches is effective in the current climate. Copyright law is designed to protect intellectual property holders, not creative individuals, and the majority of these infringement lawsuits are unlikely to succeed or, if they do, are unlikely to lead to enforceable new laws. This became evident when the Authors Guild failed in its copyright lawsuit against Google in 2015, and it faces similar challenges with its new suit, as outlined here.
The demand to control the ability of AI to train on artists' work betrays a fundamental lack of understanding of how these models and the companies behind them function, as we can't possibly control who scrapes what in an age where everything is already ingested online. It also relies on trusting tech companies to police themselves and not ingest works they have been told not to, knowing it's nearly impossible to prove otherwise.
Tech entities like OpenAI are black boxes that offer little to no disclosure about how their datasets work, as are all the major Big Tech players. That doesn’t mean we shouldn’t fight for greater transparency and reform copyright protections. However, that’s a long and uncertain game and requires government entities like the US Federal Trade Commission to be willing to battle the deep-pocketed lobbyists preventing meaningful legislation against their Big Tech bosses. There will be progress eventually, but certainly not in time for this labor crisis that has hurt so many.
The absence of enforceable laws that would shackle Big Tech doesn’t make these deals a toothless compromise—far from it. There is great value in a labor force firmly demanding its terms be codified in a contract. The studios can find loopholes around some of that language if they choose, as they have in the past, but they will then be in breach of their agreed contract and will face publicly shaming lawsuits by influential and beloved artists and the potential of another lengthy and costly strike.
What is historic in these Hollywood deals is the clear statement of what the creative workforce will and won't tolerate from the corporations. Standing in solidarity behind that statement carries tremendous weight, even if it isn't fully enforceable. It sends a message to other industry unions, several of which are facing upcoming contract negotiations, and to all labor movements, that workers will not tolerate being exploited and replaced by the rapid advance of Big Tech. And it should not be lost on the AMPTP that it may soon find itself making similar demands for its own survival to the Big Tech companies, who are perfectly poised to circumvent or devour the legacy studios.
Over the weekend, there were calls for SAG members to reject the contract based on its AI stipulations. I'll be voting to ratify, as I did for the DGA and WGA agreements—not because the terms are perfect or ironclad but because the deal is meaningful and effective. And there are no practical and immediate solutions that aren’t currently addressed. It’s time to get back to work.
This is not a fight that ends with the current strike; it’s early days in the Tech Era, with both painful disruption and significant benefits to come. The SAG deal, in combination with the DGA and WGA deals, is a momentous early blow in labor’s fight for a fair and equitable place in the new world.
25 notes · View notes
westeroswisdom · 1 year ago
Text
George R.R. Martin and a number of other authors have filed a class action suit against OpenAI.
This suit is being fought over copyright law.
The complaint claims that OpenAI, the company behind viral chatbot ChatGPT, is copying famous works in acts of “flagrant and harmful” copyright infringement and feeding manuscripts into algorithms to help train systems on how to create more human-like text responses. George R.R. Martin, Jodi Picoult, John Grisham and Jonathan Franzen are among the 17 prominent authors who joined the suit led by the Authors Guild, a professional organization that protects writers’ rights. Filed in the Southern District of New York, the suit alleges that OpenAI’s models directly harm writers’ abilities to make a living wage, as the technology generates texts that writers could be paid to pen, as well as uses copyrighted material to create copycat work. “Generative AI threatens to decimate the author profession,” the Authors Guild wrote in a press release Wednesday.
The suit makes reference to attempts by OpenAI to complete ASoIaF artificially.
The suit alleges that books created by the authors that were illegally downloaded and fed into GPT systems could turn a profit for OpenAI by “writing” new works in the authors’ styles, while the original creators would get nothing. The press release lists AI efforts to create two new volumes in Martin’s Game of Thrones series and AI-generated books available on Amazon.
This is not the first effort to keep GPT systems from pillaging works of literature.
More than 10,000 authors — including James Patterson, Roxane Gay and Margaret Atwood — also signed an open letter calling on AI industry leaders like Microsoft and ChatGPT-maker OpenAI to obtain consent from authors when using their work to train AI models, and to compensate them fairly when they do.
At least in the United States, the law has not kept up with digital technology.
@neil-gaiman
13 notes · View notes
beardedmrbean · 1 year ago
Text
US authors George RR Martin and John Grisham are suing ChatGPT-owner OpenAI over claims their copyright was infringed to train the system.
Martin is known for his fantasy series A Song of Ice and Fire, which was adapted into HBO show Game of Thrones.
ChatGPT and other large language models (LLMs) "learn" by analysing a massive amount of data often sourced online.
The lawsuit claims the authors' books were used without their permission to make ChatGPT smarter.
OpenAI said it respected the rights of authors, and believed "they should benefit from AI technology".
Other prominent authors named in the complaint include Jonathan Franzen, Jodi Picoult and George Saunders.
The case has been brought to the federal court in Manhattan, New York, by the Authors Guild, a trade group in the US working on behalf of the named authors.
The filing accuses OpenAI of engaging in "systematic theft on a mass scale".
It follows similar legal action brought by comedian Sarah Silverman in July, as well as an open letter signed by authors Margaret Atwood and Philip Pullman that same month calling for AI companies to compensate them for using their work.
A spokesperson for OpenAI said: "We're having productive conversations with many creators around the world, including the Authors Guild, and have been working co-operatively to understand and discuss their concerns about AI.
"We're optimistic we will continue to find mutually beneficial ways to work together."
AI 'displacing humans'
The case argues that the LLM was fed data from copyrighted books without the permission of the authors, in part because it was able to provide accurate summaries of them.
The lawsuit also pointed to a broader concern in the media industry - that this kind of technology is "displacing human-authored" content.
Patrick Goold, reader in law at City University, told BBC News that while he could sympathise with the authors behind the lawsuit, he believed it was unlikely it would succeed, saying they would initially need to prove ChatGPT had copied and duplicated their work.
"They're actually not really worried about copyright, what they're worried about is that AI is a job killer," he said, likening the concerns to those screenwriters are currently protesting against in Hollywood.
"When we're talking about AI automation and replacing human labour... it's just not something that copyright should fix.
"What we need to be doing is going to Parliament and Congress and talking about how AI is going to displace the creative arts and what we need to do about that in the future."
The case is the latest in a long line of complaints brought against developers of so-called generative AI - that is, artificial intelligence that can create media based on text prompts - over this concern.
It comes after digital artists sued text-to-image generators Stability AI and Midjourney in January, claiming they only function by being trained on copyrighted artwork.
And OpenAI is also facing a lawsuit, alongside Microsoft and programming site GitHub, from a group of computing experts who argue their code was used without their permission to train an AI called Copilot.
None of these lawsuits has yet been resolved.
13 notes · View notes
aorish · 1 year ago
Text
I mean, best case scenario is that OpenAI and the Authors Guild both waste an enormous sum of money on this lawsuit and it accomplishes nothing of value for anyone. Plague on both of your houses.
7 notes · View notes
themthouse · 2 years ago
Text
The "National Emergency Library" & Hachette v. Internet Archive
While the Internet Archive is known as the creator and host of the Wayback Machine and many other internet and digital media preservation projects, the IA collection in question in Hachette v. Internet Archive is their Open Library. The Open Library has been digitizing books since as early as 2005 and, in early 2011, began to include and distribute copyrighted books through Controlled Digital Lending (CDL). In total, the collection includes 3.6 million copyrighted books, and the IA continues to scan over 4,000 books a day.
During the early days of the pandemic, from March 24, 2020, to June 16, 2020, specifically, the Internet Archive offered their National Emergency Library, which did away with the waitlist limitations on their pre-existing Open Library. Instead of following the strict rules laid out in the Position Statement on Controlled Digital Lending, which mandates an equal “owned to loaned” ratio, the IA allowed multiple readers to access the same digitized book at once. This, they said, was a direct emergency response to the worldwide pandemic that cut off people’s access to physical libraries.
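To make the lending mechanics concrete, here is a minimal, hypothetical sketch (my own, not IA's actual system) of the CDL constraint described above: concurrent digital checkouts of a title may not exceed the number of physical copies the library owns and has taken out of circulation.

```python
class CDLTitle:
    """One digitized title lent under Controlled Digital Lending."""

    def __init__(self, title: str, owned_copies: int):
        self.title = title
        self.owned_copies = owned_copies  # physical copies the library owns
        self.checked_out = 0              # digital loans currently outstanding

    def borrow(self) -> bool:
        # The CDL "owned to loaned" rule: lend only while loans stay
        # within the owned count; otherwise the reader joins a waitlist.
        if self.checked_out < self.owned_copies:
            self.checked_out += 1
            return True
        return False

    def return_loan(self) -> None:
        if self.checked_out > 0:
            self.checked_out -= 1


book = CDLTitle("Example Novel", owned_copies=2)
print(book.borrow(), book.borrow(), book.borrow())  # True True False
```

The National Emergency Library, as described above, effectively dropped the cap in borrow(), letting any number of readers check out the same scan at once; that is the change the publishers sued over.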
In response, on June 1, 2020, Hachette Book Group, HarperCollins, John Wiley & Sons, and Penguin Random House filed a lawsuit against the IA over copyright infringement. Out of their collective 33,000 copyrighted titles available on Open Library, the publishers’ lawsuit focused on 127 books specifically (known in the legal documentation as the “Works in Suit”). After two years of argument, on March 24, 2023, Judge John George Koeltl ruled in favor of the publishers.
The IA's fair use defense was found to be insufficient because the scanning and distribution of books was not transformative in any way, unlike other copyright lawsuits that ruled in favor of digitizing books for "utility-expanding" purposes, such as Authors Guild, Inc. v. HathiTrust. Furthermore, it was found that even prior to the National Emergency Library, the Open Library frequently failed to maintain the "owned to loaned" ratio by not sufficiently monitoring the circulation of books it borrows from partner libraries. Finally, despite being a nonprofit organization overall, the IA was found to profit off of the distribution of the copyrighted books, specifically through a Better World Books link that shares part of every sale made through that specific link with the IA.
It is worth noting that this ruling specifies that "even full enforcement of a one-to-one owned-to-loaned ratio, however, would not excuse IA's reproduction of the Works in Suit." This may set precedent for future copyright cases that attempt to claim copyright exemption through the practice of controlled digital lending. It is unclear whether this ruling is limited to the National Emergency Library specifically, or if it will affect the Open Library and other collections that practice CDL moving forward.
Edit: I recommend seeing what @carriesthewind has to say about the most recent updates in the Internet Archive cases for a lawyer's perspective on how these cases will affect the future of digital lending law in the U.S.
Further Reading:
Full History of Hachette Book Group, Inc. v. Internet Archive [Released by the Free Law Project]
Hachette v. Internet Archive ruling
Internet Archive Loses Lawsuit Over E-Book Copyright Infringement
The Fight Continues [Released by The Internet Archive]
Authors Guild Celebrates Resounding Win in Internet Archive Infringement Lawsuit [Released by The Authors Guild]
Relevant Court Cases:
Authors Guild, Inc. v. Google, Inc.
Authors Guild, Inc. v. HathiTrust
Capitol Records v. ReDigi
Index:
MASTER POST
First-Sale Doctrine & the Economics of E-books
Controlled Digital Lending (CDL)
The “National Emergency Library” & Hachette v. Internet Archive
Authors, Publishers & You
-- Authors: Ideology v. Practicality
-- Publishers: What Authors Are Paid
-- You: When Is Piracy Ethical?
11 notes · View notes
partisan-by-default · 1 year ago
Text
The proposed class-action lawsuit filed late on Tuesday by the Authors Guild joins several others from writers, source-code owners and visual artists against generative AI providers. In addition to Microsoft-backed OpenAI, similar lawsuits are pending against Meta Platforms and Stability AI over the data used to train their AI systems. Other authors involved in the latest lawsuit include "The Lincoln Lawyer" writer Michael Connelly and lawyer-novelists David Baldacci and Scott Turow.
3 notes · View notes
shadowkat678 · 2 years ago
Text
I posted 26,487 times in 2022
That's 6,310 more posts than 2021!
247 posts created (1%)
26,240 posts reblogged (99%)
Blogs I reblogged the most:
@acidmatze
@taulupis
@bumblerhizal
@meabeck
I tagged 634 of my posts in 2022
#the legend of vox machina - 30 posts
#tlovm spoilers - 26 posts
#unreality - 19 posts
#critical role - 18 posts
#dungeons and dragons - 16 posts
#tlovm - 10 posts
#the raven queen - 10 posts
#toh spoilers - 9 posts
#lovm spoilers - 8 posts
#the owl house spoilers - 7 posts
Longest Tag: 139 characters
#but i have it on good authority their home blueprints of the layout shows that the true basement should be ten feet deeper from the stairs
My Top Posts in 2022:
#5
👀
44 notes - Posted October 1, 2022
#4
So apparently people really like my Dragon Heist remix!
It has been suggested I put all my documents in actual PDFs and sell them on DMs Guild but I have no idea how to do that. Maybe eventually.
But in the meantime y'all can check it all out here!
Anyway tell me what you think. This was a lot of work and yes I do seek validation. Any ideas on areas I could expand upon more? What I should tackle next? Let me know!
45 notes - Posted July 30, 2022
#3
I want to see my writing be psychoanalyzed like I see with media on Tumblr. That’s how I’ll know I made it. I don’t care if I sell like 100 copies of something one day someone can send me a letter digging into the details and writing a ten page essay and I’ll be like “Okay I’m content now.”
46 notes - Posted April 30, 2022
#2
Well I was blocked so let me just put this here.
Asexual cisgender people are, by default, not heterosexual cisgender. It doesn't matter if they're romantically attracted to the opposite sex or not. They are not cishets, and in places where any queerness is punished by law and by social norms, having a queer identity is enough to put you in danger.
Even if you're operating under the assumption that in places like the USA ace and aro people don't face as much structural oppression, that absolutely IS NOT TRUE everywhere. When gay people are thrown off buildings, and someone hears you say you're Xsexual instead of cishet, they are not going to pause to ask about the nuances. They will hate and harm you for the sole reason of existing as a queer person.
I've seen aces and aros have to run away from their home countries right alongside trans and gay people who had to.
It's 2022. Can we please stop the discourse over this and realize that being any kind of queer person, in a very sizable chunk of the world, will not get you ANY amount of privilege akin to being cishet.
Thank you for coming to my TedTalk and that is the end of my thoughts on the matter.
51 notes - Posted April 30, 2022
My #1 post of 2022
So I got this new ad:
And got ~vibes~, so I looked it up and when I dug into it this school is a Catholic organization that's widely been blasted for pushing native cultural assimilation in their past. Supposedly they're currently trying to right wrongs and pushing classes that teach native languages and crafts.
But I know there's been talk about stuff still occurring and being swept under the rug, as well as avoiding responsibility for past actions by shutting down lawsuits and supporting laws that protect them from legal action.
I'm not native, but I wanted to draw attention to this new ad and see if there's any native users who know more that can chime in on this.
I'm going to blaze this post in hopes of getting seen by someone with more knowledge on this. On the surface my first impression was it was a school run by native cultural activists pushing to teach kids more about their culture, as I've heard of them popping up more to fight back against historical assimilation. But it's a Catholic school and, uh, they obviously have a HISTORY behind them that's not great, to put it mildly.
346 notes - Posted June 7, 2022
1 note · View note
noticiassincensura · 14 days ago
Text
OpenAI is navigating a wave of copyright lawsuits over ChatGPT, stemming from its use of online content, some of which may be protected by copyright. While OpenAI recently achieved a temporary win in one of these cases, major organizations like The New York Times are watching closely, positioning themselves to pursue further legal action if necessary.
AI and Copyright: An Ongoing Challenge
The question at the center of these disputes is whether AI can legally utilize publicly available online content, especially when that content is copyright-protected. Currently, AI companies like OpenAI are indeed using such materials for model training. While the broader legal impact remains uncertain, the cases suggest there could be significant consequences for this approach.
Publications Accusing ChatGPT of Copyright Infringement
Online publications Raw Story and Alternet filed a lawsuit against OpenAI in February, alleging that the company used thousands of their articles to train ChatGPT without permission. They also claim that the model can reproduce these copyrighted materials upon request, which they argue constitutes direct copyright infringement. The lawsuit is one of several filed in recent months, mirroring a similar case brought by The New York Times in 2023, in which the publication alleged that millions of its articles had been used by OpenAI and Microsoft to train AI models.
Temporary Legal Victory for OpenAI
Recently, a federal judge in New York dismissed Raw Story and Alternet's claims, ruling that the news organizations hadn’t demonstrated sufficient harm to justify their case. Judge Colleen McMahon expressed skepticism about the plaintiffs’ ability to prove measurable damages, though she left open the possibility of appeal.
An Industry-Wide Issue
The copyright disputes surrounding AI extend beyond OpenAI. Getty Images, for example, has filed a lawsuit against Stability AI's Stable Diffusion over the use of images, while GitHub Copilot has faced scrutiny over the use of open-source code in training. Additionally, the Authors Guild has voiced concerns over unauthorized use of literary works. These cases highlight the industry-wide tensions as AI companies use vast datasets with limited transparency about the origins of training materials.
Media Giants in Pursuit of Compensation
In response to these concerns, some AI firms are taking steps to minimize risks through licensing agreements. OpenAI, for instance, has struck deals with media groups such as Prisa (owner of El País) and Le Monde, while Google has licensed data from Reddit. These arrangements hint at a potential path forward, allowing content creators to be compensated for the use of their work in AI training.
Challenges with AI Search Engines and Content Summarization
AI-driven search tools like Perplexity and ChatGPT Search have introduced additional complications. By retrieving and summarizing information from various sources, these AI tools reduce the need for users to click through to the original content. While this benefits users by providing concise answers, it diverts traffic away from content creators, depriving them of ad revenue and further intensifying the debate over fair compensation.
Looking Ahead
With copyright concerns now at the forefront, tech companies like OpenAI and Google must balance innovation with ethical and legal responsibilities. As the legal landscape evolves, companies might increasingly rely on licensing agreements, offering a way to both acknowledge and compensate creators for their contributions to AI training data.
0 notes
coinmystique · 6 months ago
Link
The Authors Guild and seventeen renowned authors, including the likes of John Grisham, Jodi Picoult, David Baldacci, and George R.R. Martin, filed a class-action lawsuit against OpenAI on Sept. 20 in the Southern District of New York.

As revealed by The Authors Guild, the lawsuit alleges copyright infringement of their works of fiction used to train OpenAI's Generative Pretrained Transformer (GPT), a language model that generates text.

The plaintiffs contend that the unauthorized duplication of their copyrighted works by OpenAI, without offering options or any form of remuneration, not only transforms the commercial landscape of the AI firm's product but also poses a significant threat to the livelihood and role of authors.

The lawsuit highlights the apparent existential threat to authors from the unrestricted use of books to develop large language models that generate text. According to the Guild's latest author income survey, the median full-time author income in 2022 was barely over $20,000, including book sales and other author-related activities. The onset of Generative AI, they argue, poses a severe risk of decimating the author profession.

In their filed complaint, the plaintiffs draw attention to the fact that their books were downloaded from pirated ebook repositories and incorporated into GPT-3.5 and GPT-4. These versions of GPT power ChatGPT and numerous applications and business uses. OpenAI allegedly expects to earn billions from these applications, which critics argue are shaped significantly by the accessed "professionally authored, edited, and published books."

OpenAI's AI-generated books have been accused of mimicking the work of human authors, as evidenced by the recent attempt to generate volumes 6 and 7 of George R.R. Martin's Game of Thrones series, "A Song of Ice and Fire." AI-generated books posted on Amazon, attempting to pass themselves off as human-written, have also raised serious legal concerns.

The lawsuit underscores alleged harm caused to the fiction market, equating OpenAI's unauthorized use of authors' works to grand-scale identity theft. Authors Guild CEO Mary Rasenberger argued, "Great books are generally written by those who spend their careers, and indeed, their lives, learning and perfecting their crafts. To preserve our literature, authors must have the ability to control if and how their works are used by generative AI."

The current class-action suit focuses primarily on fiction writers, as they form a well-defined and cohesive group whose works are now being extensively mimicked with generative AI tools. However, the Authors Guild acknowledges the damage to nonfiction markets and plans to address them eventually.

Jonathan Franzen, a class representative, stated, "Generative AI is a vast new field for Silicon Valley's longstanding exploitation of content providers. Authors should have the right to decide when their works are used to 'train' AI. If they choose to opt in, they should be appropriately compensated."

The Authors Guild believes the economic implications of this issue could potentially compromise all cultural production. The fear of a future dominated by derivative creative output is deeply concerning.

This lawsuit marks one of many attempts to avert such an outcome. The full complaint can be read here.

Source: https://cryptoslate.com/authors-guild-claims-openai-used-pirated-ebooks-to-train-chatgpt-on-copyrighted-material/
0 notes