#the problem with AI isn't AI itself
dukeoftears · 5 months
Text
ethical AI ✅
unethical AI ❌
13 notes · View notes
powerfulkicks · 3 months
Text
man i hate the current state of ai so much bc it's totally poisoned the well for when actual good and valuable uses of ai are developed ethically and sustainably....
like ai sucks so bad bc of capitalism. they want to sell you on a product that does EVERYTHING. this requires huge amounts of data, so they just scrape the internet and feed everything in without checking if it's accurate or biased or anything. bc it's cheaper and easier.
ai COULD be created with more limited data sets to solve smaller, more specific problems. it would be more useful than trying to shove the entire internet into an LLM and then trying to sell it as a multi-tool that can do anything you want kinda poorly.
even in a post-capitalist world there are applications for ai. for example: resource management. data about how many resources specific areas typically use could be collected and fed into a model to predict how many resources should be allocated to a particular area.
this is something that humans would need to be doing and something that we already do, but creating a model based on the data would reduce the amount of time humans need to spend on this important task and reduce the amount of human error.
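the shape of that idea, in miniature — a toy forecast from made-up usage numbers (the moving-average-plus-trend rule here is just an illustration, not a claim about what a real model would use):

```python
# Toy sketch: predict next period's resource allocation for a district
# from its past usage. A real system would use richer features
# (season, population, weather); this only shows the shape of the idea.

def predict_next(usage_history, window=3):
    """Forecast next period's demand as a trend-adjusted moving average."""
    recent = usage_history[-window:]
    avg = sum(recent) / len(recent)
    trend = (recent[-1] - recent[0]) / (len(recent) - 1)  # per-period change
    return avg + trend  # nudge the average in the direction usage is moving

monthly_usage = [410, 432, 428, 455, 470, 468]  # made-up kiloliters per month
allocation = predict_next(monthly_usage)
```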
but bc ai is so shitty now anyone who just announces "hey we created an ai to do this!" will be immediately met with distrust and anger, so any ai model that could potentially be helpful will have an uphill battle bc the ecosystem has just been ruined by all the bullshit chatgpt is selling
7 notes · View notes
demonic-shadowlucifer · 4 months
Text
sooo fucking tired of people saying to "use nightshade/glaze to protect your art!", my pc cant fucking run them and for all i know techbros can just figure out a way to bypass it anyway.
2 notes · View notes
apas-95 · 9 months
Text
anti-ai people need to understand that the opposition communists have to luddism and reactionary sentiment isn't like, a moral one. the main problem with luddism is that it doesn't actually work. like when we say 'we mustn't try to fight against technology itself, we need to fight against the social system that makes it so that advancement in technology and labour-saving devices lead to layoffs' the reason we're saying it is because, if you try fighting the technology, you're going to lose, and you're still going to lose your job too. when you say 'yeah i understand your criticism but I'm still going to fight against AI' you very clearly did not understand the criticism, because the point is that it isn't even in your own self-interest, because it will not work. the fact that, even if it did work, it would only mean maintaining a privileged stratum of 'skilled labour' above other workers is secondary -- because, again, flatly resisting technological advancement has never worked in history.
9K notes · View notes
lisafication · 1 year
Text
For those who might happen across this, I'm an administrator for the forum 'Sufficient Velocity', a large old-school forum oriented around Creative Writing. I originally posted this on there (and any reference to 'here' will mean the forum), but I felt I might as well throw it up here, as well, even if I don't actually have any followers.
This week, I've been reading fanfiction on Archive of Our Own (AO3), a site run by the Organisation for Transformative Works (OTW), a non-profit. This isn't particularly exceptional, in and of itself — like many others on the site, I read a lot of fanfiction, both on Sufficient Velocity (SV) and elsewhere — however what was bizarre to me was encountering a new prefix on certain works, that of 'End OTW Racism'. While I'm sure a number of people were already familiar with this, I was not, so I looked into it.
What I found... wasn't great. And I don't think anyone involved realises that.
To summarise the details, the #EndOTWRacism campaign, whose manifesto you may find here, is a campaign oriented towards seeing hateful or discriminatory works removed from AO3 — and believe me, there is a lot of it. To wit, they want the OTW to moderate them. A laudable goal, on the face of it — certainly, we do something similar on Sufficient Velocity with Rule 2 and, to be clear, nothing I say here is a critique of Rule 2 (or, indeed, Rule 6) on SV.
But it's not that simple, not when you're the size of Archive of Our Own. So, let's talk about the vagaries and little-known pitfalls of content moderation, particularly as it applies to digital fiction and at scale. Let's dig into some of the details — as far as credentials go, I have, unfortunately, been in moderation and/or administration on SV for about six years and this is something we have to grapple with regularly, so I would like to say I can speak with some degree of expertise on the subject.
So, what are the problems with moderating bad works from a site? Let's start with discovery— that is to say, how you find rule-breaching works in the first place. There are more-or-less two different ways to approach manual content moderation of open submissions on a digital platform: review-based and report-based (you could also call them curation-based and flag-based), with various combinations of the two. Automated content moderation isn't something I'm going to cover here — I feel I can safely assume I'm preaching to the choir when I say it's a bad idea, and if I'm not, I'll just note that the least absurd outcome we had when simulating AI moderation (mostly for the sake of an academic exercise) on SV was banning all the staff.
In a review-based system, you check someone's work and approve it to the site upon verifying that it doesn't breach your content rules. Generally pretty simple, we used to do something like it on request. Unfortunately, if you do that, it can void your safe harbour protections in the US per Myeress vs. Buzzfeed Inc. This case, if you weren't aware, is why we stopped offering content review on SV. Suffice to say, it's not really a realistic option for anyone large enough for the courts to notice, and extremely clunky and unpleasant for the users, to boot.
Report-based systems, on the other hand, are something we use today — users find works they think are in breach and alert the moderation team to their presence with a report. On SV, this works pretty well — a user or users flag a work as potentially troublesome, moderation investigate it and either action it or reject the report. Unfortunately, AO3 is not SV. I'll get into the details of that dreadful beast known as scaling later, but thankfully we do have a much better comparison point — fanfiction.net (FFN).
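As an illustration of that report-based flow — the queue, the triage outcomes and the escalation threshold below are all invented for the sketch, not a description of SV's actual tooling:

```python
# Minimal sketch of report-based discovery: users file reports against
# works, and moderation triages each work based on the reports it has
# accrued. An unusually high report volume gets escalated rather than
# auto-actioned, since it may be a brigade rather than a genuine breach.

from collections import defaultdict

class ReportQueue:
    def __init__(self, escalation_threshold=5):
        self.reports = defaultdict(list)       # work_id -> list of reasons
        self.threshold = escalation_threshold  # invented number

    def file_report(self, work_id, reason):
        self.reports[work_id].append(reason)

    def triage(self, work_id):
        count = len(self.reports[work_id])
        if count >= self.threshold:
            return "escalate"   # unusual volume: admins check for brigading
        return "review" if count else "ignore"

q = ReportQueue()
for _ in range(6):
    q.file_report("fic-123", "possible rule breach")
```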
FFN has had two great purges over the years, with a... mixed amount of content moderation applied in between: one in 2002 when the NC-17 rating was removed, and one in 2012. Both, ostensibly, were targeted at adult content. In practice, many fics that wouldn't raise an eye on Spacebattles today or Sufficient Velocity prior to 2018 were also removed; a number of reports suggest that something as simple as having a swearword in your title or summary was enough to get you hit, even if you were a 'T' rated work. Most disturbingly of all, there are a number of — impossible to substantiate — accounts of groups such as the infamous Critics United 'mass reporting' works to trigger a strike to get them removed. I would suggest reading further on places like Fanlore if you are unfamiliar and want to know more.
Despite its flaws however, report-based moderation is more-or-less the only option, and this segues neatly into the next piece of the puzzle that is content moderation, that is to say, the rubric. How do you decide what is, and what isn't against the rules of your site?
Anyone who's complained to the staff about how vague the rules are on SV may have had this explained to them, but as that is likely not many of you, I'll summarise: the more precise and clear-cut your chosen rubric is, the more it will inevitably need to resemble a legal document — and the less readable it is to the layman. We'll return to SV for an example here: many newer users will not be aware of this, but SV used to have a much more 'line by line, clearly delineated' set of rules and... people kind of hated it! An infraction would reference 'Community Compact III.15.5' rather than Rule 3, because it was more or less written in the same manner as the Terms of Service (sans the legal terms of art). While it was a more legible rubric from a certain perspective, from the perspective of communicating expectations to the users it was inferior to our current set of rules — even fewer of them read it, and we don't have great uptake right now.
And it still wasn't really an improvement over our current set-up when it comes to 'moderation consistency'. Even without getting into the nuts and bolts of "how do you define a racist work in a way that does not, at any point, say words to the effect of 'I know it when I see it'" — which is itself very, very difficult, don't get me wrong, I'm not dismissing it — you are stuck with finding an appropriate footing on a spectrum between 'the US penal code' and 'don't be a dick' as your rubric. Going for the penal code side doesn't help nearly as much as you might expect with moderation consistency, either — no matter what, you will never have a 100% correct call rate. You have the impossible task of writing a rubric that is easy for users to comprehend, extremely clear for moderation and capable of cleanly defining what is and what isn't racist without relying on moderator judgement, something which you cannot trust when operating at scale.
Speaking of scale, it's time to move on to the third prong — and the last covered in this ramble, which is more of a brief overview than anything truly in-depth — which is resources. Moderation is not a magic wand, you can't conjure it out of nowhere: you need to spend an enormous amount of time, effort and money on building, training and equipping a moderation staff, even a volunteer one, and it is far, far from an instant process. Our most recent tranche of moderators spent several months in training and it will likely be some months more before they're fully comfortable in the role — and that's with a relatively robust bureaucracy and a number of highly experienced mentors supporting them, something that is not going to be available to a new moderation branch with little to no experience. Beyond that, there's the matter of sheer numbers.
Combining both moderation and arbitration — because for volunteer staff, pure moderation is in actuality less efficient in my eyes, for a variety of reasons beyond the scope of this post, but we'll treat it as if they're both just 'moderators' — SV presently has 34 dedicated moderation volunteers. SV hosts ~785 million words of creative writing.
AO3 hosts ~32 billion.
These are some very rough and simplified figures, but if you completely ignore all the usual problems of scaling manpower in a business (or pseudo-business), such as (but not limited to) geometrically increasing bureaucratic complexity and administrative burden, along with all the particular issues of volunteer moderation... AO3 would still need well over one thousand volunteer moderators to be able to match SV's moderator-to-creative-wordcount ratio.
With paid moderation, of course, you can get away with fewer — my estimate is that you could fully moderate SV with, at best, ~8 full-time moderators, still ignoring administrative burden above the level of team leader. This leaves AO3 only needing a much more modest ~350 moderators. At the US minimum wage of ~$15k p.a. — which is, in my eyes, deeply unethical to pay moderators, as full-time moderation is an intensely gruelling role with extremely high rates of PTSD and other stress-related conditions — that is approximately ~$5.25m p.a. in moderator wages. Their average annual budget is a bit over $500k.
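Laying that arithmetic out explicitly — the inputs are the figures from this post, and everything else is just division:

```python
# Back-of-the-envelope version of the staffing figures above.

sv_words  = 785e6   # SV's hosted creative wordcount (~785 million words)
sv_mods   = 34      # SV's volunteer moderation headcount
ao3_words = 32e9    # AO3's hosted wordcount (~32 billion words)

# Volunteer scenario: match SV's moderator-to-wordcount ratio.
words_per_volunteer = sv_words / sv_mods
volunteers_needed = ao3_words / words_per_volunteer   # ~1,386 volunteers

# Paid scenario: assume ~8 full-timers could cover SV.
paid_mods_for_sv = 8
words_per_paid_mod = sv_words / paid_mods_for_sv
paid_needed = ao3_words / words_per_paid_mod          # ~326, call it ~350

wage = 15_000             # ~US minimum wage, per annum
annual_cost = 350 * wage  # $5.25m — roughly 10x AO3's annual budget
```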
So, that's obviously not on the table, and we return to volunteer staffing. Which... let's examine that scenario and the questions it leaves us with, as our conclusion.
Let's say, through some miracle, AO3 succeeds in finding those hundreds and hundreds and hundreds of volunteer moderators. We'll even say none of them are malicious actors or sufficiently incompetent as to be indistinguishable, and that they manage to replicate something on the level of or superior to our moderation tooling near-instantly at no cost. We still have several questions to be answered:
How are you maintaining consistency? Have you managed to define racism to the point that moderator judgment no longer enters the equation? And to be clear, you cannot allow moderator judgment to be a significant decision maker at this scale, or you will end with absurd results.
How are you handling staff mental health? Some reading on the matter, to save me a lengthy and unrelated explanation of some of the steps involved in ensuring mental health for commercial-scale content moderators.
How are you handling your failures? No moderation in the world has ever succeeded in a 100% accuracy rate, what are you doing about that?
Using report-based discovery, how are you preventing 'report brigading', such as the theories surrounding Critics United mentioned above? It is a natural human response to take into account the amount and severity of feedback. While SV moderators are well trained on the matter, the rare times something receives enough reports to potentially be classified as a 'brigade', it will nearly always be escalated to administration — something completely infeasible at (you're learning to hate this word, I'm sure) scale.
How are you communicating expectations to your user base? If you're relying on a flag-based system, your users' understanding of the rules is a critical facet of your moderation system — how have you managed to make them legible to a layman while still managing to somehow 'truly' define racism?
How are you managing over one thousand moderators? Like even beyond all the concerns with consistency, how are you keeping track of that many moving parts as a volunteer organisation without dozens or even hundreds of professional managers? I've ignored the scaling administrative burden up until now, but it has to be addressed in reality.
What are you doing to sweep through your archives? SV is more-or-less on-top of 'old' works as far as rule-breaking goes, with the occasional forgotten tidbit popping up every 18 months or so — and that's what we're extrapolating from. These thousand-plus moderators are mostly going to be addressing current or near-current content, are you going to spin up that many again to comb through the 32 billion words already posted?
I could go on for a fair bit here, but this has already stretched out to over two thousand words.
I think the people behind this movement have their hearts in the right place and the sentiment is laudable, but in practice it is simply 'won't someone think of the children' in a funny hat. It cannot be done.
Even if you could somehow meet the bare minimum thresholds, you are simply not going to manage a ruleset of sufficient clarity so as to prevent a much-worse repeat of the 2012 FF.net massacre, you are not going to be able to manage a moderation staff of that size and you are not going to be able to ensure a coherent understanding among all your users (we haven't managed that after nearly ten years and a much smaller and more engaged userbase). There's a serious number of other issues I haven't covered here as well, as this really is just an attempt at giving some insight into the sheer number of moving parts behind content moderation: the movement wants off-site content to be policed, which isn't so much its own barrel of fish as it is its own barrel of Cthulhu; AO3 is far from English-only and would in actuality need moderators for almost every language it supports — and most damning of all, if Section 230 is wiped out by the Supreme Court it is not unlikely that engaging in content moderation at all could simply see AO3 shut down.
As sucky as it seems, the current status quo really is the best situation possible. Sorry about that.
3K notes · View notes
Text
Pluralistic: Leaving Twitter had no effect on NPR's traffic
I'm coming to Minneapolis! This Sunday (Oct 15): Presenting The Internet Con at Moon Palace Books. Monday (Oct 16): Keynoting the 26th ACM Conference On Computer-Supported Cooperative Work and Social Computing.
Enshittification is the process by which a platform lures in and then captures end users (stage one), who serve as bait for business customers, who are also captured (stage two), whereupon the platform rug-pulls both groups and allocates all the value they generate and exchange to itself (stage three):
https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys
Enshittification isn't merely a form of rent-seeking – it is a uniquely digital phenomenon, because it relies on the inherent flexibility of digital systems. There are lots of intermediaries that want to extract surpluses from customers and suppliers – everyone from grocers to oil companies – but these can't be reconfigured in an eyeblink the way that purely digital services can.
A sleazy boss can hide their wage-theft with a bunch of confusing deductions to your paycheck. But when your boss is an app, it can engage in algorithmic wage discrimination, where your pay declines minutely every time you accept a job, but if you start to decline jobs, the app can raise the offer:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
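A stripped-down illustration of that dynamic — the numbers and the update rule below are invented; this is only the shape of the mechanism, not any real platform's pricing logic:

```python
# Toy model of algorithmic wage discrimination: the offer drifts down
# while the worker keeps accepting, and back up when they start
# declining. Floor and step size are made up for illustration.

def next_offer(current_offer, accepted_last, floor=5.0, step=0.25):
    if accepted_last:
        return max(floor, current_offer - step)  # worker is hooked: pay less
    return current_offer + step                   # worker balking: sweeten it

offer = 10.0
for accepted in [True, True, True, False, False]:
    offer = next_offer(offer, accepted)
# offer drifts 10.00 -> 9.75 -> 9.50 -> 9.25, then recovers to 9.50 -> 9.75
```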
I call this process "twiddling": tech platforms are equipped with a million knobs on their back-ends, and platform operators can endlessly twiddle those knobs, altering the business logic from moment to moment, turning the system into an endlessly shifting quagmire where neither users nor business customers can ever be sure whether they're getting a fair deal:
https://pluralistic.net/2023/02/19/twiddler/
Social media platforms are compulsive twiddlers. They use endless variation to lure in – and then lock in – publishers, with the goal of converting these standalone businesses into commodity suppliers who are dependent on the platform, who can then be charged rent to reach the users who asked to hear from them.
Facebook designed this playbook. First, it lured in end-users by promising them a good deal: "Unlike Myspace, which spies on you from asshole to appetite, Facebook is a privacy-respecting site that will never, ever spy on you. Simply sign up, tell us everyone who matters to you, and we'll populate a feed with everything they post for public consumption":
https://lawcat.berkeley.edu/record/1128876
The users came, and locked themselves in: when people gather in social spaces, they inadvertently take one another hostage. You joined Facebook because you liked the people who were there, then others joined because they liked you. Facebook can now make life worse for all of you without losing your business. You might hate Facebook, but you like each other, and the collective action problem of deciding when and whether to go, and where you should go next, is so difficult to overcome, that you all stay in a place that's getting progressively worse.
Once its users were locked in, Facebook turned to advertisers and said, "Remember when we told these rubes we'd never spy on them? It was a lie. We spy on them with every hour that God sends, and we'll sell you access to that data in the form of dirt-cheap targeted ads."
Then Facebook went to the publishers and said, "Remember when we told these suckers that we'd only show them the things they asked to see? Total lie. Post short excerpts from your content and links back to your websites and we'll nonconsensually cram them into the eyeballs of people who never asked to see them. It's a free, high-value traffic funnel for your own site, bringing monetizable users right to your door."
Now, Facebook had to find a way to lock in those publishers. To do this, it had to twiddle. By tiny increments, Facebook deprioritized publishers' content, forcing them to make their excerpts progressively longer. As with gig workers, the digital flexibility of Facebook gave it lots of leeway here. Some publishers sensed the excerpts they were being asked to post were a substitute for visiting their sites – and not an enticement – and drew down their posting to Facebook.
When that happened, Facebook could twiddle in the publisher's favor, giving them broader distribution for shorter excerpts, then, once the publisher returned to the platform, Facebook drew down their traffic unless they started posting longer pieces. Twiddling lets platforms play users and business-customers like a fish on a line, giving them slack when they fight, then reeling them in when they tire.
Once Facebook converted a publisher to a commodity supplier to the platform, it reeled the publishers in. First, it deprioritized publishers' posts when they had links back to the publisher's site (under the pretext of policing "clickbait" and "malicious links"). Then, it stopped showing publishers' content to their own subscribers, extorting them to pay to "boost" their posts in order to reach people who had explicitly asked to hear from them.
For users, this meant that their feeds were increasingly populated with payola-boosted content from advertisers and pay-to-play publishers who paid Facebook's Danegeld to reach them. A user will only spend so much time on Facebook, and every post that Facebook feeds that user from someone they want to hear from is a missed opportunity to show them a post from someone who'll pay to reach them.
Here, too, twiddling lets Facebook fine-tune its approach. If a user starts to wean themself off Facebook, the algorithm (TM) can put more content the user has asked to see in the feed. When the user's participation returns to higher levels, Facebook can draw down the share of desirable content again, replacing it with monetizable content. This is done minutely, behind the scenes, automatically, and quickly. In any shell game, the quickness of the hand deceives the eye.
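As a toy model of that feedback loop — every constant here is invented for illustration, not taken from any platform:

```python
# The share of monetizable (paid) posts in a feed rises while the user
# stays engaged and falls when they start drifting away. Done minutely
# and automatically, this is the "twiddling" described above.

def adjust_paid_share(paid_share, engagement, target=0.7, step=0.05):
    """engagement in [0, 1]: time-on-site relative to this user's norm."""
    if engagement < target:
        return max(0.0, paid_share - step)  # losing them: show wanted posts
    return min(0.9, paid_share + step)      # they're hooked: squeeze harder

share = 0.5
share = adjust_paid_share(share, engagement=0.9)  # hooked: share rises
share = adjust_paid_share(share, engagement=0.4)  # drifting: share drops back
```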
This is the final stage of enshittification: withdrawing surpluses from end-users and business customers, leaving behind the minimum homeopathic quantum of value for each needed to keep them locked to the platform, generating value that can be extracted and diverted to platform shareholders.
But this is a brittle equilibrium to maintain. The difference between "God, I hate this place but I just can't leave it" and "Holy shit, this sucks, I'm outta here" is razor-thin. All it takes is one privacy scandal, one livestreamed mass-shooting, one whistleblower dump, and people bolt for the exits. This kicks off a death-spiral: as users and business customers leave, the platform's shareholders demand that they squeeze the remaining population harder to make up for the loss.
One reason this gambit worked so well is that it was a long con. Platform operators and their investors have been willing to throw away billions convincing end-users and business customers to lock themselves in until it was time for the pig-butchering to begin. They financed expensive forays into additional features and complementary products meant to increase user lock-in, raising the switching costs for users who were tempted to leave.
For example, Facebook's product manager for its "photos" product wrote to Mark Zuckerberg to lay out a strategy of enticing users into uploading valuable family photos to the platform in order to "make switching costs very high for users," who would have to throw away their precious memories as the price for leaving Facebook:
https://www.eff.org/deeplinks/2021/08/facebooks-secret-war-switching-costs
The platforms' patience paid off. Their slow ratchets operated so subtly that we barely noticed the squeeze, and when we did, they relaxed the pressure until we were lulled back into complacency. Long cons require a lot of prefrontal cortex, the executive function to exercise patience and restraint.
Which brings me to Elon Musk, a man who seems to have been born without a prefrontal cortex, who has repeatedly and publicly demonstrated that he lacks any restraint, patience or planning. Elon Musk's prefrontal cortical deficit resulted in his being forced to buy Twitter, and his every action since has betrayed an even graver inability to stop tripping over his own dick.
Where Zuckerberg played enshittification as a long game, Musk is bent on speedrunning it. He doesn't slice his users up with a subtle scalpel, he hacks away at them with a hatchet.
Musk inaugurated his reign by nonconsensually flipping every user to an algorithmic feed which was crammed with ads and posts from "verified" users whose blue ticks verified solely that they had $8 ($11 for iOS users). Where Facebook devoted substantial effort to enticing users who tired of eyeball-cramming feed decay by temporarily improving their feeds, Musk's Twitter actually overrode users' choice to switch back to a chronological feed by repeatedly flipping them back to more monetizable, algorithmic feeds.
Then came the squeeze on publishers. Musk's Twitter rolled out a bewildering array of "verification" ticks, each priced higher than the last, and publishers who refused to pay found their subscribers taken hostage, with Twitter downranking or shadowbanning their content unless they paid.
(Musk also squeezed advertisers, keeping the same high prices but reducing the quality of the offer by killing programs that kept advertisers' content from being published alongside Holocaust denial and open calls for genocide.)
Today, Musk continues to squeeze advertisers, publishers and users, and his hamfisted enticements to make up for these depredations are spectacularly bad, and even illegal, like offering advertisers a new kind of ad that isn't associated with any Twitter account, can't be blocked, and is not labeled as an ad:
https://www.wired.com/story/xs-sneaky-new-ads-might-be-illegal/
Of course, Musk has a compulsive bullshitter's contempt for the press, so he has far fewer enticements for them to stay. Quite the reverse: first, Musk removed headlines from link previews, rendering posts by publishers that went to their own sites into stock-art enigmas that generated no traffic:
https://www.theguardian.com/technology/2023/oct/05/x-twitter-strips-headlines-new-links-why-elon-musk
Then he jumped straight to the end-stage of enshittification by announcing that he would shadowban any newsmedia posts with links to sites other than Twitter, "because there is less time spent if people click away." Publishers were advised to "post content in long form on this platform":
https://mamot.fr/@pluralistic/111183068362793821
Where a canny enshittifier would have gestured at a gaslighting explanation ("we're shadowbanning posts with links because they might be malicious"), Musk busts out the motto of the Darth Vader MBA: "I am altering the deal, pray I don't alter it any further."
All this has the effect of highlighting just how little residual value there is on the platform for publishers, and tempts them to bolt for the exits. Six months ago, NPR lost all patience with Musk's shenanigans, and quit the service. Half a year later, they've revealed how low the switching costs for a major news outlet that leaves Twitter really are: NPR's traffic, post-Twitter, has declined by less than a single percentage point:
https://niemanreports.org/articles/npr-twitter-musk/
NPR's Twitter accounts had 8.7 million followers, but even six months ago, Musk's enshittification speedrun had drawn down NPR's ability to reach those users to a negligible level. The 8.7 million number was an illusion, a shell game Musk played on publishers like NPR in a bid to get them to buy a five-figure iridium checkmark or even a six-figure titanium one.
On Twitter, the true number of followers you have is effectively zero – not because Twitter users haven't explicitly instructed the service to show them your posts, but because every post in their feeds that they want to see is a post that no one can be charged to show them.
I've experienced this myself. Three and a half years ago, I left Boing Boing and started pluralistic.net, my cross-platform, open access, surveillance-free, daily newsletter and blog:
https://pluralistic.net/2023/02/19/drei-drei-drei/#now-we-are-three
Boing Boing had the good fortune to have attracted a sizable audience before the advent of siloed platforms, and a large portion of that audience came to the site directly, rather than following us on social media. I knew that, starting a new platform from scratch, I wouldn't have that luxury. My audience would come from social media, and it would be up to me to convert readers into people who followed me on platforms I controlled – where neither they nor I could be held to ransom.
I embraced a strategy called POSSE: Post Own Site, Syndicate Everywhere. With POSSE, the permalink and native habitat for your material is a site you control (in my case, a WordPress blog with all the telemetry, logging and surveillance disabled). Then you repost that content to other platforms – mostly social media – with links back to your own site:
https://indieweb.org/POSSE
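The skeleton of the POSSE loop looks something like this — `publish_to_blog` and the per-platform clients are hypothetical stand-ins for real API wrappers (WordPress, Mastodon, Tumblr, and so on), not any actual library:

```python
# POSSE: the canonical copy lives on a site you control; every
# syndicated copy carries a link back to that permalink.

def publish_to_blog(title, body):
    """Stand-in for publishing to your own site; returns the permalink."""
    slug = title.lower().replace(" ", "-")
    return f"https://example.net/{slug}"

def syndicate(title, body, platforms):
    permalink = publish_to_blog(title, body)  # the post's native habitat
    excerpt = body[:280]                       # platform-sized teaser
    posts = {}
    for name, client in platforms.items():
        # every syndicated copy funnels readers back to the home site
        posts[name] = client(f"{excerpt}\n\n{permalink}")
    return permalink, posts

sent = []
permalink, _ = syndicate(
    "Hello POSSE",
    "Post Own Site, Syndicate Everywhere.",
    {"mastodon": lambda text: sent.append(text) or "posted"},
)
```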
There are a lot of automated tools to help you with this, but the platforms have gone to great lengths to break or neuter them. Musk's attack on Twitter's legendarily flexible and powerful API killed every automation tool that might help with this. I was lucky enough to have a reader – Loren Kohnfelder – who coded me some python scripts that automate much of the process, but POSSE remains a very labor-intensive and error-prone methodology:
https://pluralistic.net/2021/01/13/two-decades/#hfbd
And of all the feeds I produce – email, RSS, Discourse, Medium, Tumblr, Mastodon – none is as labor-intensive as Twitter's. It is an unforgiving medium to begin with, and Musk's drawdown of engineering support has made it wildly unreliable. Many's the time I've set up 20+ posts in a thread, only to have the browser tab reload itself and wipe out all my work.
But I stuck with Twitter, because I have a half-million followers, and to the extent that I reach them there, I can hope that they will follow the permalinks to Pluralistic proper and switch over to RSS, or email, or a daily visit to the blog.
But with each day, the case for using Twitter grows weaker. I get ten times as many replies and reposts on Mastodon, though my Mastodon follower count is a tenth the size of my (increasingly hypothetical) Twitter audience.
All this raises the question of what can or should be done about Twitter. One possible regulatory response would be to impose an "End-To-End" rule on the service, requiring that Twitter deliver posts from willing senders to willing receivers without interfering in them. End-To-End is the bedrock of the internet (one of its incarnations is Net Neutrality) and it's a proven counterenshittificatory force:
https://www.eff.org/deeplinks/2023/06/save-news-we-need-end-end-web
Despite what you may have heard, "freedom of reach" is freedom of speech: when a platform interposes itself between willing speakers and their willing audiences, it arrogates to itself the power to control what we're allowed to say and who is allowed to hear us:
https://pluralistic.net/2022/12/10/e2e/#the-censors-pen
We have a wide variety of tools to make a rule like this stick. For one thing, Musk's Twitter has violated innumerable laws and consent decrees in the US, Canada and the EU, which creates a space for regulators to impose "conduct remedies" on the company.
But there's also existing regulatory authorities, like the FTC's Section Five powers, which enable the agency to act against companies that engage in "unfair and deceptive" acts. When Twitter asks you who you want to hear from, then refuses to deliver their posts to you unless they pay a bribe, that's both "unfair and deceptive":
https://pluralistic.net/2023/01/10/the-courage-to-govern/#whos-in-charge
But that's only a stopgap. The problem with Twitter isn't that this important service is run by the wrong mercurial, mediocre billionaire: it's that hundreds of millions of people are at the mercy of any foolish corporate leader. While there's a short-term case for improving the platforms, our long-term strategy should be evacuating them:
https://pluralistic.net/2023/07/18/urban-wildlife-interface/#combustible-walled-gardens
To make that a reality, we could also impose a "Right To Exit" on the platforms. This would be an interoperability rule that would require Twitter to adopt Mastodon's approach to server-hopping: click a link to export the list of everyone who follows you on one server, click another link to upload that file to another server, and all your followers and followees are relocated to your new digs:
https://pluralistic.net/2022/12/23/semipermeable-membranes/#free-as-in-puppies
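The mechanics here are deliberately boring: the portable part of the account is essentially a CSV file of account addresses. As a rough sketch (the column name mirrors Mastodon's real follow-list export, but the data and parsing below are illustrative, not actual migration code):

```python
import csv
import io

def parse_follow_export(csv_text):
    """Pull account addresses out of a Mastodon-style follow-list CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Account address"] for row in reader if row.get("Account address")]

# Hypothetical export -- on a real server this is a downloaded file.
export = """Account address,Show boosts
pluralistic@mamot.fr,true
example@mastodon.social,false
"""

addresses = parse_follow_export(export)
print(addresses)  # ['pluralistic@mamot.fr', 'example@mastodon.social']
# Uploading a file like this to the new server re-follows every address;
# followers are moved by the servers themselves via an account-level redirect.
```

That a text file this simple is the entire technical payload of portability is the point: the burden of compliance is trivial.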
A Twitter with the Right To Exit would exert a powerful discipline even on the stunted self-regulatory centers of Elon Musk's brain. If he banned a reporter for publishing truthful coverage that cast him in a bad light, that reporter would have the legal right to move to another platform, and continue to reach the people who follow them on Twitter. Publishers aghast at having the headlines removed from their Twitter posts could go somewhere less slipshod and still reach the people who want to hear from them on Twitter.
And both Right To Exit and End-To-End satisfy the two prime tests for sound internet regulation: first, they are easy to administer. If you want to know whether Musk is permitting harassment on his platform, you have to agree on a definition of harassment, determine whether a given act meets that definition, and then investigate whether Twitter took reasonable steps to prevent it.
By contrast, administering End-To-End merely requires that you post something and see if your followers receive it. Administering Right To Exit is as simple as saying, "OK, Twitter, I know you say you gave Cory his follower and followee file, but he says he never got it. Just send him another copy, and this time, CC the regulator so we can verify that it arrived."
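To make the contrast concrete, here's a toy sketch (an invented platform API, not any real service) of why the End-To-End audit is cheap — the whole test is "did every willing follower receive the post?":

```python
class ToyPlatform:
    """A hypothetical platform with accounts, follows, and feeds."""
    def __init__(self, suppress_unless_paid=False):
        self.followers = {}   # author -> set of follower names
        self.feeds = {}       # user -> list of posts received
        self.suppress = suppress_unless_paid

    def follow(self, follower, author):
        self.followers.setdefault(author, set()).add(follower)

    def publish(self, author, post, paid_boost=False):
        for f in self.followers.get(author, set()):
            if self.suppress and not paid_boost:
                continue  # enshittified: posts held hostage for a "boost" fee
            self.feeds.setdefault(f, []).append(post)

def end_to_end_compliant(platform, author, post):
    """The entire audit: publish once, check every follower's feed."""
    platform.publish(author, post)
    return all(post in platform.feeds.get(f, [])
               for f in platform.followers.get(author, set()))

honest, shady = ToyPlatform(), ToyPlatform(suppress_unless_paid=True)
for p in (honest, shady):
    p.follow("reader", "cory")
print(end_to_end_compliant(honest, "cory", "hello"),
      end_to_end_compliant(shady, "cory", "hello"))  # True False
```

No definitions of harassment, no investigations of internal process — just an observable delivery check.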
Beyond administration, there's the cost of compliance. Requiring Twitter to police its users' conduct also requires it to hire an army of moderators – something that Elon Musk might be able to afford, but community-supported, small federated servers couldn't. A tech regulation can easily become a barrier to entry, blocking better competitors who might replace the company whose conduct spurred the regulation in the first place.
End-to-End does not present this kind of barrier. The default state for a social media platform is to deliver posts from accounts to their followers. Interfering with End-To-End costs more than delivering the messages users want to have. Likewise, a Right To Exit is a solved problem, built into the open Mastodon protocol, itself built atop the open ActivityPub standard.
It's not just Twitter. Every platform is consuming itself in an orgy of enshittification. This is the Great Enshittening, a moment of universal, end-stage platform decay. As the platforms burn, calls to address the fires grow louder and harder for policymakers to resist. But not all solutions to platform decay are created equal. Some solutions will perversely enshrine the dominance of platforms, help make them both too big to fail and too big to jail.
Musk has flagrantly violated so many rules, laws and consent decrees that he has accidentally turned Twitter into the perfect starting point for a program of platform reform and platform evacuation.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/10/14/freedom-of-reach/#ex
My next novel is The Lost Cause, a hopeful novel of the climate emergency. Amazon won't sell the audiobook, so I made my own and I'm pre-selling it on Kickstarter!
Image: JD Lasica (modified) https://commons.wikimedia.org/wiki/File:Elon_Musk_%283018710552%29.jpg
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/deed.en
copperbadge · 20 days
Note
Hi Sam, could you please recommend any resources/websites to learn about ADHD medication? Until reading your post about second-line meds I thought Adderal was the only one
I can definitely talk about it a little! Always bearing in mind that I am not a doctor and this is not medical advice, etc. etc.
So, I've had many friends with ADHD in my life, and I picked up some stuff from them even before I got my diagnosis; I also spoke with my prescribing psychiatrist about options when we met. If you think your psychiatrist might be resistant to discussing options, or you don't have one, doing your own research is good, but it's not really a substitute for a specialist in medication management. So it's also important to know what your needs are -- i.e., "I want help with my executive function but I need something that's nonaddictive" or "I want something nonsedative" or "I don't think the treatment I'm on is working, what is available outside of this kind of medication?"
The problems you run into with researching medication for ADHD are threefold:
1. Most well-informed sources aren't actually geared towards non-doctor adults who just want to know what their options are -- they're usually either doctors who don't know how to talk about medication to non-doctors, or doctors (and parents) talking to parents about pediatric options.
2. A huge number of sites when you google are either AI-generated, covert ads for stimulant addiction rehab, or both.
3. Reliable sites with easy-to-understand information are not updated super often.
So you just kind of have to be really alert and read the "page" itself for context clues -- is it a science journal, is it an organization that helps people with ADHD, is it a doctor, is it a rehab clinic, is it a drug advertiser, is it a random site with a weird URL that's probably AI generated, etc.
So for example, ADDitude Magazine, which is kind of the pre-eminent clearinghouse for non-scholarly information on ADHD, is a great place to start, but when the research is clearly outlined it sometimes isn't up-to-date, and when it's up-to-date it's often a little impenetrable. They have an extensive library of podcast/webinars, and I started this particular research with this one, but his slides aren't super well-organized, he flips back and forth between chemical and brand name, and he doesn't always designate which is which. However, he does have a couple of slides that list off a bunch of medications, so I just put those into a spreadsheet, gleaned what I could from him, and then searched each medication. I did find a pretty good chart at WebMD that at least gives you the types and brand names fairly visibly. (Fwiw with the webinar, I definitely spent more time skimming the transcript than listening to him, auto transcription isn't GOOD but it is helpful in speeding through stuff like that.)
I think, functionally, there are four types of meds for ADHD, and the more popular ones often have several variations. Sometimes this is just for dosage purposes -- like, if you have trouble swallowing pills there are some meds that come in liquids or patches, so it's useful to learn the chemical name rather than the brand name, because then you can identify several "brands" that all use the same chemical and start to differentiate between them.
Top of the list you have your methylphenidate and your amphetamine, those are the two types of stimulant medications; the most well known brand names for these are Ritalin (methylphenidate) and Adderall (amphetamine).
Then there's the nonstimulant medications, SNRIs (Strattera, for example) and Alpha-2 Agonists (guanfacine and clonidine, brand names Intuniv and Kapvay, respectively; I'm looking at these for a second-line medication). There's some crossover between these and the next category:
Antidepressants are sometimes helpful with ADHD symptoms as well as being helpful for depression; I haven't looked at these much because for me they feel like the nuclear option, but the relevant ones are dopamine reuptake inhibitors like Wellbutrin and tricyclics like Tofranil. If you're researching these you don't need to look at like, every antidepressant ever, just look for ones that are specifically mentioned in context with ADHD.
Lastly there are what I call the Offlabels -- medications that we understand to have an impact on ADHD for some people, but which aren't generally prescribed very often, and sometimes aren't approved for use. I don't know much about these, either, because they tend to be for complex cases that don't respond to the usual scrips and are particularly difficult to research. The one I have in my notes is memantine (brand name Namenda) which is primarily a dementia medication that has shown to be particularly helpful for social cognition in people with combined Autism/ADHD.
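For note-keeping, the chemical-name-first approach maps neatly onto a small lookup table. A sketch — the categories and brand names are the ones mentioned above, but the chemical names in parentheses for Strattera and the antidepressants come from standard prescribing references rather than this post, so double-check them yourself:

```python
# Chemical names first, brand names second -- searching by chemical name
# groups every brand that delivers the same drug.
adhd_meds = {
    "stimulants": {
        "methylphenidate": ["Ritalin"],
        "amphetamine": ["Adderall"],
    },
    "nonstimulants": {
        "atomoxetine (SNRI)": ["Strattera"],
        "guanfacine (alpha-2 agonist)": ["Intuniv"],
        "clonidine (alpha-2 agonist)": ["Kapvay"],
    },
    "antidepressants": {
        "bupropion (dopamine reuptake inhibitor)": ["Wellbutrin"],
        "imipramine (tricyclic)": ["Tofranil"],
    },
    "offlabel": {
        "memantine": ["Namenda"],
    },
}

def brands_for(chemical_fragment):
    """Find brand names by (partial) chemical name, across all categories."""
    return [
        brand
        for category in adhd_meds.values()
        for chemical, brands in category.items()
        if chemical_fragment.lower() in chemical.lower()
        for brand in brands
    ]

print(brands_for("guanfacine"))  # ['Intuniv']
```

This is just the spreadsheet idea in code form: one place to dump what you learn, searchable by the name your doctor will actually use.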
So yeah -- hopefully that's a start for you, but as with everything online, don't take my word for it -- I'm also a lay person and may get stuff wrong, so this is just what I've found and kept in my notes. Your best bet truly is to find a psychiatrist specializing in ADHD medication management and discuss your options with them. Good luck!
esperderek · 3 months
Text
New RPG.net owner liked tweets from RFK Jr, Tucker Carlson, and more...
Just left RPG.net, that venerable old tabletop rpg forum, a forum that I've been a part of for 20+ years.
Recently (in March), it was bought by RPGMatch, a startup aiming to do matchmaking for TTRPGs. In the past couple of days, despite their many reassurances, I got it into my head to look up the new owner Joaquin Lippincott, and lucky for me he has a Twitter! (Or X, now, I guess.)
Yeah...the first warning bell is that his description calls him a 'Machine learning advocate', and his feed is full of generative AI shit. Oh, sure, he'll throw the fig leaf of 'AI shouldn't take creative jobs' here and there, but all in all he is a full-throated supporter of genAI. Which means that RPGnet's multiple assurances that they will never scrape for AI are suspect at best.
Especially when you check out his main company, https://www.metaltoad.com/, and find that his company, amongst other services, is all about advising corporations on how to make the best use of generative AI, LLMs, and machine learning. They're not the ones making them, but they sure are helping corps decide what jobs to cut in favor of genAI. Sorry, they "Solve Business Problems."
This, alone, while leaving a massive bad taste in my mouth, wouldn't be enough, and apart from his clear love of genAI his feed is mostly business stuff and his love of RPGs. Barely talks politics or anything similar.
But then, I decided to check his Likes, the true bane of many a people who have tried to portray themselves as progressive, or at least neutral.
And wow. In lieu of links that can be taken down, I have made screenshots. If you want to check it yourself, just find his Twitter feed, this isn't hidden information. (Yet.)
Here's him liking a conspiracy theory that the War on Ukraine is actually NATO's fault, and it's all a plan by the US to grift and disable Russia!
Here's him liking Robert F. Kennedy Jr. praising Tucker Carlson interviewing Vladimir Putin!
Here's him liking a right wing influencer's tweet advancing a conspiracy theory about Hunter Biden!
Former Republican candidate Vivek Ramaswamy talking about how he wants to tear down the Department of Education and the FDA (plus some COVID vaccine conspiracy theory thrown in)
Sure did like this Tucker Carlson video on Robert Kennedy Jr... (Gee, I wonder who this guy is voting for in October.)
Agreeing about a right-wing grifter's conspiracy theories... (that guy's Twitter account is full of awful, awful transphobia, always fun.)
Him liking a tweet about someone using their own fathers death to advance an anti-vaxx agenda! What the fuck! (This guy was pushing anti-vax before his father's death, I checked, if you're wondering.)
So, yes, to sum it up, RPG.net, which prides itself on being an inclusive place, protective of its users who are part of vulnerable groups, and extremely supportive of creators, sold out to a techbro (probably)libertarian whose day job is helping companies make use of generative AI and who likes tweets that advance conspiracy theories about the Ukraine war, Hunter Biden, vaccines, and others. Big fan of RFKjr, Carlson, and Putin, on the other hand.
And, like, shame on RPG.net, Christopher Allen for selling to this guy, and the various admins and mods who spent ages reassuring everything will be okay (including downplaying Lippincott's involvement in genAI). Like, was no research into this guy done at all? Or did y'all not care?
So I'm gone, and I'm betting while maybe not today or tomorrow, things are going to change for that website, and not for the best for anyone.
screamingcrows · 5 months
Text
A Good Night's Sleep - Zandik x Reader
Author's note: Feed this to an AI algorithm and I'm feeding you to Streptococcus pyogenes. This is written under the assumption that Zandik is Dottore (idk if using the Dottore tag is okay for it? If not please let me know and I'll remove it)
1.7k words of inexperienced NSFW Zandik
Warnings: Somnophilia, noncon, there is no penetrative sex, dry humping, blood (very little), fem reader, very vague thoughts of murder, nsfw
Summary: You're out on a field trip together and have been trekking through the forest all day. Somehow Zandik just isn't as tired as he should be. You're fast asleep. So naturally, he decides to try a hands on experiment.
MINORS, AGELESS, AND BLANK BLOGS DNI - you will be blocked on sight
Zandik rubbed at his eyes, trying to convince himself that his inability to fall asleep was caused by external factors. You'd been trekking through the forest most of the day, and any proposed break had been quickly shut down by him.
Theoretically, he should be just as fast asleep as you. He turned on the thin mat, faintly cursing at the pitiful excuse for bedding. Proper sleep was a comfort he'd grown to take for granted, and the reminder of how things had once been stung. At least you'd managed to set up the bug net together, even if sharing did mean having to be a little closer than he'd have liked. Pillows would've been nice. Maybe if he hadn't insisted on travelling as light as possible.
It was always easy to be clever in hindsight. If only it could be harnessed.
Burying his face into the scratchy blanket that covered his body he attempted to block out any disturbances. He was no stranger to erratic thoughts, but tonight felt excessive.
His fingers tapped against his thigh in a well-known rhythm while shifting his breathing to accompany the subtle notes. By all means it should work to ease his thoughts, a tried and tested strategy. And it did. His frantic thoughts fading into nothing, no more triple-checking plans for tomorrow, considering parts to excavate and examine, plants to bring back, measurements to take…
A blissful silence settled, broken only by the rustling leaves above.
Until you moved. A small, sleepy mewl escaping your lips as you shuffled beside him. He didn't have to see you to know what infuriatingly peaceful expression was likely on your face. Images of your soft features flooding his mind, hands moving to scratch at his scalp.
How he tried once more to push those thoughts away, his crimson eyes darkening as memories of the day filled his consciousness nonetheless. You, with your deviously impractical attire, shorts that had left practically everything exposed. It was a daring choice, reflecting the total confidence with which you had moved through the thicket. Oh how his fingers ached to know what it would be like to touch bare skin, hands flexing at the mere thought.
Nothing but a preprogrammed reaction. Although annoying and impractical, the response was natural. The thought circulated in the back of his mind, slowly losing meaning. His body curled in on itself, delirious poison spreading through his body.
You were fluttery by nature, a little bird struggling to remain still for longer intervals. Easily excitable as well, in the most annoying way. You'd flitted around in the forest, zigzagging between moss, animals, shiny rocks, saplings… Leaning down and touching anything you could, ass up while you chatted about your findings.
He'd never had problems concentrating, but with all the blood draining from his mind to other places, it had been impossible to focus on your ramblings.
Despite the hurdles of keeping you on a leash, he always found himself having to suppress a smile when you yapped, your eyes alight with glee. So much went on behind those bright eyes of yours, words clearly too slow to convey everything clearly. That much was evident with how you sometimes spoke in tongues, stumbling and altogether skipping words. But better yet, how you looked when your brows furrowed, sucking your cheek in enough to bite at the inside, actually considering his perspectives.
Before he could register it, he'd already rolled around on his mat, eyes burning holes into your back. A shaky hand reached out, his breath catching in his throat as he fought the desire to examine, squeeze, grope… He groaned softly, reminding himself that this was an endeavor driven by pure curiosity. You were asleep and would be none the wiser as long as he was careful.
The mantra kept repeating itself. This was curiosity, and nothing more. Curiosity about why you had that blasted effect on his mind, and if pursuing physical intimacy would solve his inability to sleep. It was a need akin to hunger, satisfy it and he'd be left alone.
There was already an uncomfortable tightness in the front of his pants, the feeling unfamiliar and invasive. Instinct kicked in and made his hips buck a little, erection rubbing against the confines of his pants. Archons he needed more than this. It infuriated him to no end, body craving the feeling of you against him.
He shifted closer, needing to know if you felt as divine as everything in him screamed. He had to bite down on his own arm, sharp teeth threatening to break skin as his other hand ghosted along your waist. How it had snaked under your blanket without his knowledge was lost on him, which only fueled the heat traveling along his skin.
You were unimaginably warm and pliant under his touch, fingers sinking a little deeper. Everything in his body tingled, an almost magnetic pull spurring him on to shift closer. Your breaths were still even, body vulnerable and his for the taking.
It felt like sacrilege as his hands worshipped your form, pupils dilated when his palm slid across your soft stomach, somehow already under your shirt. Just a little more. He needed some reaction from you, assurance that this was real. That he hadn't inhaled spores and was caught in a hallucination. How terribly unbefitting such a fate would be.
But that would likely entail cutting this experiment short, meaning he'd have to ignore those urges for now. Everything was foreign and uncomfortable, a tightness straining against the front of his boxers. He had to close his eyes, unwilling to watch as his hips bucked tentatively, a low hiss passing his lips at the slight friction provided by the fabric.
Still too reluctant to move closer, he settled for sliding his hand further up. It was ridiculous how your skin got even softer the closer he moved to your chest. There was something repulsively human about the way your heart felt as it beat steadily under his twitching fingers. He wanted to throw up.
He needed to get closer. Holding his breath while inching closer, wishing he could sink his nails into your skin and tear it from the muscle. A need to expose exactly what made you this infuriatingly irresistible.
Your scent brought on an almost euphoric state, warm and comfortable as it caressed him. It had to be preserved, your body too ephemeral for this world. He groaned, still careful enough to angle his head away from the back of your neck.
Temptation had him firmly in its grasp, hips meeting the plush of your ass. Slowly, deliberately, he rolled his hips against you. It sent him reeling, a pleasant fog creeping into his mind. He couldn't find it in himself to resist, hands slowly moving back down to your hips and adjusting your position.
He felt alive, burying the part of him that bled out with every slow buck of his hips. The wet patch that had been forming at the front of his boxers did nothing to quell the beast piloting his body. Daring to look down between your bodies, he found nothing but fuel for his frenzy in the way your body curved. The way it looked when he let his fingers squeeze your hips a little further, utterly transfixed by the indentations it made.
Everything in his mind screamed at him to let go and back away. Not for your sake, no you were still blissfully unaware, a tired little creature. No, the longer he continued the more certain he became that this had to be preserved. There had to be a way to mimic it, reverse engineer what made it impossible for him to keep his face out of your hair.
He inhaled deeply, intoxicated as he kept bucking against you, delirious mind too far gone to notice the little huffs and whimpers that left your lips, sleep clearly disturbed by his movements.
It was a dangerous battle, fingertips playing with the hem of your panties. It was imperative that he knew all details. It was too warm, burning his skin and making his stomach churn. There was nothing practiced about it, tentatively tugging and rubbing. Your soft squirming was nothing against him, body curling greedily around you.
Quick to pull his hand back out, he settled for massaging your thighs. His hold was steadily morphing to mimic the vultures of his birthplace, nails sinking in like talons. Tear you to pieces, that was what he needed to do.
He barely realized that he'd begun softly chanting your name, the word a prayer upon his parted lips. It was all too much, uncoordinated movements growing even sloppier as he found himself unable to stop. An overwhelming feeling was building in the pit of his stomach, drowning out every uncertainty that made its home there.
Pure ecstasy was all he felt, head pressed against your shoulder as he came. His nails were stained with your blood when his hands finally released your form. He slowly came to, repulsion filling his entire being at the wet sensation. There was nothing but simple, temporary pleasure to be gained from this endeavor. Expecting anything more profound had been folly.
So this clarity was the price to be paid for his actions?
No.
The real price was paid when he heard your confused voice, the pale moonlight too invasive in the way it lingered along your trembling body. How it reflected in the shimmering droplets of blood running from atop your hip. Small sniffles mixing with your terribly soft voice.
"Z-zandik? What just… why is my back wet? a-and I'm bleeding?"
Part 2
txttletale · 9 months
Note
Your discussions on AI art have been really interesting and changed my mind on it quite a bit, so thank you for that! I don’t think I’m interested in using it, but I feel much less threatened by it in the same way. That being said, I was wondering, how you felt about AI generated creative writing: not, like AI writing in the context of garbage listicles or academic essays, but like, people who generate short stories and then submit them to contests. Do you think it’s the same sort of situation as AI art? Do you think there’s a difference in ChatGPT vs mid journey? Legitimate curiosity here! I don’t quite have an opinion on this in the same way, and I’ve seen v little from folks about creative writing in particular vs generated academic essays/articles
i think that ai generated writing is also indisputably writing but it is mostly really really fucking awful writing for the same reason that most ai art is not good art -- that the large training sets and low 'temperature' of commercially available/mass market models mean that anything produced will be the most generic version of itself. i also think that narrative writing is very very poorly suited to LLM generation because it generally requires very basic internal logic which LLMs are famously bad at (i imagine you'd have similar problems trying to create something visual like a comic that requires consistent character or location design rather than the singular images that AI art is mostly used for). i think it's going to be a very long time before we see anything good long-form from an LLM, especially because it's just not a priority for the people making them.
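"Temperature" here is a concrete dial, not a metaphor: it rescales the model's scores before each token is sampled, and turning it down collapses the output onto the safest, most generic choice. A toy illustration with made-up scores for three candidate next words (this is only the sampling step, not a language model):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Softmax over temperature-scaled scores, then one random draw."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probs) - 1

logits = [2.0, 0.5, 0.1]  # index 0 is the generic, high-probability option

def generic_share(temperature, n=1000, seed=0):
    """How often the generic option wins at this temperature."""
    rng = random.Random(seed)
    return sum(sample_with_temperature(logits, temperature, rng) == 0
               for _ in range(n)) / n

print(generic_share(0.2), generic_share(2.0))  # low temp ~1.0, high temp ~0.5
```

At low temperature the "wild" options almost never surface, which is exactly why mass-market models read as the most generic version of themselves.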
ultimately though i think you could absolutely do some really cool stuff with AI generated text if you had a tighter training set and let it get a bit wild with it. i've really enjoyed a lot of AI writing for being funny, especially when it was being done with tools like botnik that involve more human curation but still have the ability to completely blindside you with choices -- i unironically think the botnik collegehumour sketch is funnier than anything human-written on the channel. & i think that means it could reliably be used, with similar levels of curation, to make some stuff that feels alien, or unsettling, or etheral, or horrifying, because those are somewhat adjacent to the surreal humour i think it excels at. i could absolutely see it being used in workflows -- one of my friends told me recently, essentially, "if i'm stuck with writer's block, i ask chatgpt what should happen next, it gives me a horrible idea, and i immediately think 'that's shit, and i can do much better' and start writing again" -- which is both very funny but i think presents a great use case as a 'rubber duck'.
but yea i think that if there's anything good to be found in AI-written fiction or poetry it's not going to come from chatGPT specifically, it's going to come from some locally hosted GPT model trained on a curated set of influences -- and will have to either be kind of incoherent or heavily curated into coherence.
that said the submission of AI-written stories to short story mags & such fucking blows -- not because it's "not writing" but because it's just bad writing that's very very easy to produce (as in, 'just tell chatGPT 'write a short story'-easy) -- which ofc isn't bad in and of itself but means that the already existing phenomenon of people cynically submitting awful garbage to literary mags that doesn't even meet the submission guidelines has been magnified immensely and editors are finding it hard to keep up. i think part of believing that generative writing and art are legitimate mediums is also believing they are and should be treated as though they are separate mediums -- i don't think that there's no skill in these disciplines (like, if someone managed to make writing with chatGPT that wasnt unreadably bad, i would be very fucking impressed!) but they're deeply different skills to the traditional artforms and so imo should be in general judged, presented, published etc. separately.
Note
Hey! I was wondering. Does it still counts as plagiarism if one of your ideas gets tweaked, but the premise and even the character's motivation and personality remains the same in the story, although is with another name? (Not even the name if honest.)
How could I protect my work from this? More important, is there any way or it is possible to protect ideas? Or the protection only applies for the work itself?
Thanks in advance!
Tweaked Ideas and Plagiarism
Premises and ideas can't be protected. It's how the premise and ideas are executed that matters. One of my favorite examples of this is The Vampire Diaries vs Twilight. Premise-wise, they're almost identical...
🗸 17-year old "non-magical" girl protagonist 🗸 (she's actually secretly magical... sort of...) 🗸 Protagonist has parental issues 🗸 Mystical small town setting 🗸 Protagonist unaware of town's supernatural undercurrent 🗸 Protagonist meets hot "born-in-a-prior-century" vampire at school 🗸 His senses allow him to identify her among all other students 🗸 They fall in love 🗸 She gets sucked into supernatural shenanigans 🗸 She finds out she has friends who are secretly supernatural 🗸 Dangerous ancient council creates more shenanigans 🗸 Eventually she becomes a vampire
It looks bad on paper, but it's the myriad differences that make these stories completely unique. First, Bella is the polar opposite of Elena in terms of personality and situation. Bella's parents are alive but immature, Elena's parents died in a tragic accident. Bella moved to Forks to live with her father, Elena grew up in Mystic Falls. Edward was born in 1901, Stefan was born in 1846 on the show (1490 in the books). Edward has a big adoptive family. Stefan has his brother. Edward can't read Bella's mind, Stefan can read Elena's mind. Bella's supernatural friends are wolf shifters who are part of the local Indigenous tribe; Elena's supernatural friends include a werewolf, witch, and a vampire hunter. Bella can shield her mind from mind readers, Elena is a doppelgänger. Bella and friends fight an ancient council of vampires who micromanage the vampire world, Elena and friends fight a town council made up of anti-vampire humans.
The point is, the actual stories are very, very different despite the similarities in premise.
Now, I will say that if the premise is genuinely the same, and the character has the same appearance, personality, and even a similar name, that's a bit concerning. If you take a brutally honest look at the two works and find more similarities than differences, it's possible this person did over borrow. The problem is there may not be much you can do about it, but it depends on the situation. If you're both published but you published first, it's something you can take up with your agent or publisher, or if you don't have one, you can contact a copyright lawyer to explore your options. It's costly, however. If you can't afford this option, you can contact them and ask them to take it down, but they're not obligated to. Outside of that, there isn't much you can do.
There honestly isn't much you can do to protect yourself from plagiarism. It's part of the risk we take in putting our work out there. Again, if you're agented, have a publisher, or can hire a copyright attorney, you may find you have some legal recourse, especially if the plagiarism is glaring and they're making money off what they stole. Otherwise, you just have to grin and bear it. But the truth is, given the wealth of stories being published every day, plagiarism isn't that common of an occurrence. (Although, it's increasing now somewhat with the advent of AI...)
And I'll tell you, the harsh reality is you may feel like a work is so similar to yours it must be an intentional rip-off, but our brains have a way of making mountains out of molehills when it comes to spotting similarities. Nine out of ten times it's just not as similar as you think it is.
Other relevant posts:
Similarities vs Plagiarism
Plagiarism vs Reference vs Inspiration
Beta Reader Sees Similarity with Existing Character
Does My Book/Story Already Exist
Taking Inspiration from Another Story's Premise
Afraid of Ideas Being Stolen or Copied Once Shared
Afraid of Plagiarism Accusation
•••••••••••••••••••••••••••••••••
I’ve been writing seriously for over 30 years and love to share what I’ve learned. Have a writing question? My inbox is always open!
♦ Questions that violate my ask policies will be deleted! ♦ Please see my master list of top posts before asking ♦ Learn more about WQA here
91 notes · View notes
foone · 2 years
Text
So here's the thing about AI art, and why it seems to be connected to a bunch of unethical scumbags despite being an ethically neutral technology on its own. After the readmore, cause long. Tl;dr: capitalism
The problem is competition. More generally, the problem is capitalism.
So the kind of AI art we're seeing these days is based on something called "deep learning", a type of machine learning based on neural networks. How they work exactly isn't important, but one aspect in general is: they have to be trained.
The way it works is that if you want your AI to be able to generate X, you have to be able to train it on a lot of X. The more, the better. It gets better and better at generating something the more it has seen it. Too small a training dataset and it will do a bad job of generating it.
So you need to feed your hungry AI as much as you can. Now, say you've got two AI projects starting up:
Project A wants to do this ethically. They generate their own content to train the AI on, and they seek out datasets that allow them to be used in AI training systems. They avoid misusing any public data that doesn't explicitly give consent for the data to be used for AI training.
Meanwhile, Project B has no interest in the ethics of what they're doing, so long as it makes them money. So they don't shy away from scraping entire websites of user-submitted content and stuffing it into their AI. DeviantArt, Flickr, Tumblr? It's all the same to them. Shove it in!
Now let's fast forward a couple months of these two projects doing this. They both go to demo their project to potential investors and the public at large.

Which one do you think has a better-trained AI? The one with the smaller, ethically-obtained dataset? Or the one with the much larger dataset that they "found" somewhere after it fell off a truck?
It's gonna be the second one, every time. So they get the money, they get the attention, they get to keep growing as more and more data gets stuffed into it.
And this has a follow-on effect: we've just pre-selected AI projects for being run by amoral bastards, remember. So when someone is like "hey can we use this AI to make NFTs?" or "Hey can your AI help us detect illegal immigrants by scanning Facebook selfies?", of course they're gonna say "yeah, if you pay us enough".
So while the technology is not, in itself, immoral or unethical, the situations around how it gets used in capitalism definitely are. That external influence heavily affects how it gets used, and who "wins" in this field. And it won't be the good guys.
An important follow-up: this is focusing on the production side of AI, but obviously even if you had an AI art generator trained on entirely ethically sourced data, it could still be used unethically: it could put artists out of work, by replacing their labor with cheaper machine labor. Again, this is not a problem of the technology itself: it's a problem of capitalism. If artists weren't competing to survive, the existence of cheap AI art would not be a threat.
I just feel it's important to point this out, because I sometimes see people defending the existence of AI Art from a sort of abstract perspective. Yes, if you separate it completely from the society we live in, it's a neutral or even good technology. Unfortunately, we still live in a world ruled by capitalism, and it only makes sense to analyze AI Art from a perspective of having to continue to live in capitalism alongside it.
If you want ideologically pure AI Art, feel free to rise up, lose your chains, overthrow the bourgeoisie, and all that. But it's naive to defend it as just a neutral technology like any other when it's being wielded in capitalism; i.e. overwhelmingly negative in impact.
1K notes · View notes
brucewaynehater101 · 4 months
Note
Oh I was absolutely going with the Jason Finds Out During TT route. I think it would be especially funny if he's heard horror stories from Rogues and his own henchmen that Robin the Third is some kind of demon that Batman summoned on accident. There are some rumors about how the demon feeds off of grief or anger or vengeance, because its illusions of being a human are stronger when the Bat is there, so *clearly* it is taking its power *from* the Bat. Others say that Nightwing summoned it so that it would keep Bruce on a leash without the first Robin having to come back. Some say it was some person in Gotham who did it, or that it was the combined form of the many curses on the city.

All Jason knows is that when his replacement turned around, its head lolled to the side just an inch or two, like a puppet on strings that had too much slack on that one string. Jason manages to shoot one of its arms, but instead of a spray of blood, there are broken shards of porcelain and sand. His hits feel like he's punching a solid wall, but some do leave visible cracks in Tim. This Thing in a Robin Costume could not ever be human. He knows, because when he left, he took a handful of sand in a vial to see if he could figure out what it is. Jason still has that vial to this day, the only proof he has that Tim isn't a human. Sometimes he will set it on a flat surface and watch the sand in it make the vial slowly roll towards whatever direction Tim is in.

As for how he heals, that's thanks to the magic that animates him: all Tim needs to do is hold his pieces together like a jigsaw puzzle, and after a few moments the piece he's holding will weld itself back into place. Also, his sand will slowly come back to him, attracted like a magnet, and he can tell where all his sand is, instinctually. He lets Jason keep the vial of it, as it's basically an unhackable Jason Tracker. The sand isn't fast at moving towards him, going roughly at the pace of a snail or sloth. It's certainly moving, but just getting from downtown to the Batcave could take his sand a week. Also, the pull isn't super strong, exerting about as much force as a particularly stubborn ant.

Ra's took half a pound of Tim's sand instead of his spleen, and Tim would very much like his sand back.

As for Cass knowing, she 100% does. Tim has shown her his true form, and when she asked why he didn't show the others, Tim replied, "they wouldn't understand. They would worry over things that aren't problems and try to fix things I already fixed, and end up breaking those things."

Eventually the Bats must find out, though, and when Dick asks if that means they need to do special things to keep Tim from dying to Magic Users, Tim laughs and laughs like Dick has told the funniest joke in the world. When he calms down, he asks a question of his own: "Dick. How could I possibly die if I have never been alive in the first place? I am simply an object enchanted to move and speak. I am no more alive than the AI Babs uses to scan the internet for pictures of us. I am no more alive than a character in a video game. At most, at *most*, I can be compared to some of Ivy's plants that she uses to attack us. I cannot be killed, for I have never been alive. Broken, yes, but that I can fix. I simply have to be put back together like a jigsaw puzzle."
Oof. Poor Dick is going to have to figure out how to feel about that statement. Tim not being alive at all and comparing himself to a video game or AI might fuck with Dick's sense of self, sentience, etc. I would love to see how they all logic, cope, and understand identity after this.
I do love the idea that the sand tries to make its way back to Tim, but he knows where it is at all times. Jason has an estimated location of Tim (N, S, E, W), but Tim has like coordinates.
I wonder if Cass would try dancing with Tim. Since his movements are different, perhaps she would enjoy learning to dance in a way that's similar to how he moves? It could be eerie and fun for her.
I'm curious how Ra's would feel about Tim and his sand in this. Why did he keep the sand? Does it look distinct from other sand? Was it just because it was part of Tim, and Ra's thought he might be able to use it? Also, does he attempt that shit he did with Nyssa, since Tim probably can't reproduce?
102 notes · View notes
saprophilous · 7 months
Note
just letting you know that that ask you rb'd about glaze being a scam seems to be false/dubious. I think they're just misinterpreting "not as useful as we had hoped" and interpreted it maliciously, based on the replies?
not positive but yeah!
Ah yeah, I see people fairly expressing that being “debunked” as in, not a scam; I wasn’t personally particularly invested in whether or not its “dubious origins” are true… so sorry about that.

From what I’ve read, I was more focused on the consensus that it doesn’t work, and therefore isn’t worth the effort. So, setting aside its “scam or not” status, passing around a positive takeaway on Glaze as something that could save us from AI learning doesn’t seem useful.
Correct me if there’s better information out there but this from an old Reddit post a year back is why I didn’t continue looking into it as it made sense to my layman’s brain:
“let's briefly go over the idea behind GLAZE
computer vision doesn't work the same way as in the brain. The way we do this in computer vision is that we hook a bunch of matrix multiplications together to transform the input into some kind of output (very simplified). One of the consequences of this approach is that small changes over the entire input image can lead to large changes to the output.
It's this effect that GLAZE aims to use as an attack vector / defense mechanism. More specifically, GLAZE sets some kind of budget on how much it is allowed to change the input, and within that budget it then tries to find a change such that the embeddings created by the VAE that sits in front of the diffusion model look like embeddings of an image that come from a different style.
Okay, but how do we know what to change to make it look like a different style? For that, they take the original image and use the img2img capabilities of SD itself to transform that image into something of another style. Then we can compare the embeddings of both versions and try to alter the original image such that its embeddings start looking like those of the style-transferred version.
So what's wrong with it?
In order for GLAZE to be successful, the perturbation it finds (the funny-looking swirly pattern) has to be reasonably resistant against transformations. What the authors of GLAZE have tested against is jpeg compression and adding Gaussian noise, and they found that jpeg compression was largely ineffective and that adding Gaussian noise would degrade the artwork quicker than it would degrade the transfer effect of GLAZE. But that's a very limited set of attacks to test against. It is not scale invariant, and rescaling is something that people making LoRAs usually do; e.g. they don't train on the 4K version of the image, at most on something that's around 720x720. As per the authors' admission, it might also not be crop invariant. There also seem to be denoising approaches that sufficiently destroy the pattern (the 16 lines of code).
As you've already noticed, GLAZING something can result in rather noticeable swirly patterns. This pattern becomes especially visible when you look at works that consist of a lot of flat shading or smooth gradients. This is not just a problem for the artist/viewer, this is also a fundamental problem for GLAZE. How the original image is supposed to look is rather obvious in these cases, so you can fairly aggressively denoise without much loss of quality (it might even end up looking better without all the patterns).
Some additional problems that GLAZE might run into: it very specifically targets the original VAE that comes with SD. The authors claim that their approach transfers well enough between some of the different VAEs you can find out in the wild, and that at least they were unsuccessful in training a good VAE that could resist their attack. But their reporting on these findings isn't very rigorous and lacks quite a bit of detail.
will it get better with updates?
Some artists believe that this is essentially a cat and mouse game and that GLAZE will simply need updates to make it better. This is a very optimistic and uninformed opinion made by people who lack the knowledge to make such claims. Some of the shortcomings outlined above aren't due to implementation details, but are much more intimately related to the techniques/math used to achieve these results. Even if this indeed was a cat and mouse game, you'll run into the issue that the artist is always the one who has to make the first move, and the adversary can save the artist's past, now-broken work.
GLAZE is an interesting academic paper, but it's not going to be a part of the solution artists are looking for.”
[source]
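To make the "budget plus embedding attack" idea described above concrete, here is a toy numpy sketch. Everything in it is a made-up stand-in: a fixed random linear map plays the role of the VAE encoder that sits in front of the diffusion model, and plain gradient steps projected back into an L-infinity budget play the role of GLAZE's optimization. It is an illustration of the mechanism, not GLAZE's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoder": a fixed random linear map from pixels to embeddings.
# (The real attack targets the VAE in front of the diffusion model.)
W = rng.standard_normal((16, 64))             # 64 "pixels" -> 16-dim embedding
encode = lambda x: W @ x

artwork = rng.uniform(0, 1, 64)               # the original image, flattened
# Embedding of a (stand-in) style-transferred version of the artwork.
style_target = encode(rng.uniform(0, 1, 64))

budget = 0.05                                 # max allowed change per pixel
delta = np.zeros(64)                          # the perturbation we search for

for _ in range(200):
    # Gradient of ||encode(artwork + delta) - style_target||^2 w.r.t. delta.
    residual = encode(artwork + delta) - style_target
    grad = 2 * W.T @ residual
    # Gradient step, then project back into the allowed per-pixel budget.
    delta -= 0.001 * grad
    delta = np.clip(delta, -budget, budget)

before = np.linalg.norm(encode(artwork) - style_target)
after = np.linalg.norm(encode(artwork + delta) - style_target)
print(f"embedding distance to target style: {before:.2f} -> {after:.2f}")
```

Even though no pixel moves by more than the budget, the embedding shifts measurably toward the target style. It also hints at the fragility discussed above: the perturbation is tuned against one specific encoder, so resizing, cropping, or denoising the image can undo it.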
118 notes · View notes
pizzaronipasta · 1 year
Text
READ THIS BEFORE INTERACTING
Alright, I know I said I wasn't going to touch this topic again, but my inbox is filling up with asks from people who clearly didn't read everything I said, so I'm making a pinned post to explain my stance on AI in full, but especially in the context of disability. Read this post in its entirety before interacting with me on this topic, lest you make a fool of yourself.
AI Doesn't Steal
Before I address people's misinterpretations of what I've said, there is something I need to preface with. The overwhelming majority of AI discourse on social media is argued based on a faulty premise: that generative AI models "steal" from artists. There are several problems with this premise. The first and most important one is that this simply isn't how AI works. Contrary to popular misinformation, generative AI does not simply take pieces of existing works and paste them together to produce its output. Not a single byte of pre-existing material is stored anywhere in an AI's system. What's really going on is honestly a lot more sinister.
How It Actually Works
In reality, AI models are made by initializing and then training something called a neural network. Initializing the network simply consists of setting up a multitude of nodes arranged in "layers," with each node in each layer being connected to every node in the next layer. When prompted with input, a neural network will propagate the input data through itself, layer by layer, transforming it along the way until the final layer yields the network's output. This is directly based on the way organic nervous systems work, hence the name "neural network." The process of training a network consists of giving it an example prompt, comparing the resulting output with an expected correct answer, and tweaking the strengths of the network's connections so that its output is closer to what is expected. This is repeated until the network can adequately provide output for all prompts. This is exactly how your brain learns; upon detecting stimuli, neurons will propagate signals from one to the next in order to enact a response, and the connections between those neurons will be adjusted based on how close the outcome was to whatever was anticipated. In the case of both organic and artificial neural networks, you'll notice that no part of the process involves directly storing anything that was shown to it. It is possible, especially in the case of organic brains, for a neural network to be configured such that it can produce a decently close approximation of something it was trained on; however, it is crucial to note that this behavior is extremely undesirable in generative AI, since that would just be using a wasteful amount of computational resources for a very simple task. It's called "overfitting" in this context, and it's avoided like the plague.
The sinister part lies in where the training data comes from. Companies which make generative AI models are held to a very low standard of accountability when it comes to sourcing and handling training data, and it shows. These companies usually just scrape data from the internet indiscriminately, which inevitably results in the collection of people's personal information. This sensitive data is not kept very secure once it's been scraped and placed in easy-to-parse centralized databases.

Fortunately, these issues could be solved with the most basic of regulations. The only reason we haven't already solved them is because people are demonizing the products rather than the companies behind them. Getting up in arms over a type of computer program does nothing, and this diversion is being taken advantage of by bad actors, who could be rendered impotent with basic accountability. Other issues surrounding AI are exactly the same way. For example, attempts to replace artists in their jobs are the result of under-regulated businesses and weak workers' rights protections, and we're already seeing very promising efforts to combat this just by holding the bad actors accountable. Generative AI is a tool, not an agent, and the sooner people realize this, the sooner and more effectively they can combat its abuse.
Y'all Are Being Snobs
Now I've debunked the idea that generative AI just pastes together pieces of existing works. But what if that were how it worked? Putting together pieces of existing works... hmm, why does that sound familiar? Ah, yes, because it is, verbatim, the definition of collage. For over a century, collage has been recognized as a perfectly valid art form, and not plagiarism. Furthermore, in collage, crediting sources is not viewed as a requirement, only a courtesy. Therefore, if generative AI worked how most people think it works, it would simply be a form of collage. Not theft.
Some might not be satisfied with that reasoning. Some may claim that AI cannot be artistic because the AI has no intent, no creative vision, and nothing to express. There is a metaphysical argument to be made against this, but I won't bother making it. I don't need to, because the AI is not the artist. Maybe someday an artificial general intelligence could have the autonomy and ostensible sentience to make art on its own, but such things are mere science fiction in the present day. Currently, generative AI completely lacks autonomy—it is only capable of making whatever it is told to, as accurate to the prompt as it can manage. Generative AI is a tool. A sculpture made by 3D printing a digital model is no less a sculpture just because an automatic machine gave it physical form. An artist designed the sculpture, and used a tool to make it real. Likewise, a digital artist is completely valid in having an AI realize the image they designed.
Some may claim that AI isn't artistic because it doesn't require effort. By that logic, photography isn't art, since all you do is point a camera at something that already looks nice, fiddle with some dials, and press a button. This argument has never been anything more than snobbish gatekeeping, and I won't entertain it any further. All art is art. Besides, getting an AI to make something that looks how you want can be quite the ordeal, involving a great amount of trial and error. I don't speak from experience on that, but you've probably seen what AI image generators' first drafts tend to look like.
AI art is art.
Disability and Accessibility
Now that that's out of the way, I can finally move on to clarifying what people keep misinterpreting.
I Never Said That
First of all, despite what people keep claiming, I have never said that disabled people need AI in order to make art. In fact, I specifically said the opposite several times. What I have said is that AI can better enable some people to make the art they want to in the way they want to. Second of all, also despite what people keep claiming, I never said that AI is anyone's only option. Again, I specifically said the opposite multiple times. I am well aware that there are myriad tools available to aid the physically disabled in all manner of artistic pursuits. What I have argued is that AI is just as valid a tool as those other, longer-established ones.
In case anyone doubts me, here are all the posts I made in the discussion in question: Reblog chain 1 Reblog chain 2 Reblog chain 3 Reblog chain 4 Potentially relevant ask
I acknowledge that some of my earlier responses in that conversation were poorly worded and could potentially lead to a little confusion. However, I ended up clarifying everything so many times that the only good faith explanation I can think of for these wild misinterpretations is that people were seeing my arguments largely out of context. Now, though, I don't want to see any more straw men around here. You have no excuse, there's a convenient list of links to everything I said. As of posting this, I will ridicule anyone who ignores it and sends more hate mail. You have no one to blame but yourself for your poor reading comprehension.
What Prompted Me to Start Arguing in the First Place
There is one more thing that people kept misinterpreting, and it saddens me far more than anything else in this situation. It was sort of a culmination of both the things I already mentioned. Several people, notably including the one I was arguing with, have insisted that I'm trying to talk over physically disabled people.
Read the posts again. Notice how the original post was speaking for "everyone" in saying that AI isn't helpful. It doesn't take clairvoyance to realize that someone will find it helpful. That someone was being spoken over, before I ever said a word.
So I stepped in, and tried to oppose the OP on their universal claim. Lo and behold, they ended up saying that I'm the one talking over people.
Along the way, people started posting straight-up inspiration porn.
I hope you can understand where my uncharacteristic hostility came from in that argument.
160 notes · View notes
Text
An open copyright casebook, featuring AI, Warhol and more
I'm coming to DEFCON! On Aug 9, I'm emceeing the EFF POKER TOURNAMENT (noon at the Horseshoe Poker Room), and appearing on the BRICKED AND ABANDONED panel (5PM, LVCC - L1 - HW1–11–01). On Aug 10, I'm giving a keynote called "DISENSHITTIFY OR DIE! How hackers can seize the means of computation and build a new, good internet that is hardened against our asshole bosses' insatiable horniness for enshittification" (noon, LVCC - L1 - HW1–11–01).
Few debates invite more uninformed commentary than "IP" – a loosely defined grab bag that regulates an ever-expanding sphere of our daily activities, despite the fact that almost no one, including senior executives in the entertainment industry, understands how it works.
Take reading a book. If the book arrives between two covers in the form of ink sprayed on compressed vegetable pulp, you don't need to understand the first thing about copyright to read it. But if that book arrives as a stream of bits in an app, those bits are just the thinnest scrim of scum atop a terminally polluted ocean of legalese.
At the bottom layer: the license "agreement" for your device itself – thousands of words of nonsense that bind you not to replace its software with another vendor's code, to use the company's own service depots, etc etc. This garbage novella of legalese implicates trademark law, copyright, patent, and "paracopyrights" like the anticircumvention rule defined by Section 1201 of the DMCA:
https://www.eff.org/press/releases/eff-lawsuit-takes-dmca-section-1201-research-and-technology-restrictions-violate
Then there's the store that sold you the ebook: it has its own soporific, cod-legalese nonsense that you must parse; this can be longer than the book itself, and it has been exquisitely designed by the world's best-paid, best-trained lawyer to liquefy the brains of anyone who attempts to read it. Nothing will save you once your brains start leaking out of the corners of your eyes, your nostrils and your ears – not even converting the text to a brilliant graphic novel:
https://memex.craphound.com/2017/03/03/terms-and-conditions-the-bloviating-cruft-of-the-itunes-eula-combined-with-extraordinary-comic-book-mashups/
Even having Bob Dylan sing these terms will not help you grasp them:
https://pluralistic.net/2020/10/25/musical-chairs/#subterranean-termsick-blues
The copyright nonsense that accompanies an ebook transcends mere Newtonian physics – it exists in a state of quantum superposition. For you, the buyer, the copyright nonsense appears as a license, which allows the seller to add terms and conditions that would be invalidated if the transaction were a conventional sale. But for the author who wrote that book, the copyright nonsense insists that what has taken place is a sale (which pays a 25% royalty) and not a license (a 50% revenue-share). Truly, only a being capable of surviving after being smeared across the multiverse can hope to embody these two states of being simultaneously:
https://pluralistic.net/2022/06/21/early-adopters/#heads-i-win
But the challenge isn't over yet. Once you have grasped the permissions and restrictions placed upon you by your device and the app that sold you the ebook, you still must brave the publisher's license terms for the ebook – the final boss that you must overcome with your last hit point and after you've burned all your magical items.
This is by no means unique to reading a book. This bites us on the job, too, at every level. The McDonald's employee who uses a third-party tool to diagnose the problems with the McFlurry machine is using a gadget whose mere existence constitutes a jailable felony:
https://pluralistic.net/2021/04/20/euthanize-rentier-enablers/#cold-war
Meanwhile, every single biotech researcher is secretly violating the patents that cover the entire suite of basic biotech procedures and techniques. Biotechnicians have a folk-belief in "patent fair use," a thing that doesn't exist, because they can't imagine that patent law would be so obnoxious as to make basic science into a legal minefield.
IP is a perfect storm: it touches everything we do, and no one understands it.
Or rather, almost no one understands it. A small coterie of lawyers have a perfectly fine grasp of IP law, but most of those lawyers are (very well!) paid to figure out how to use IP law to screw you over. But not every skilled IP lawyer is the enemy: a handful of brave freedom fighters, mostly working for nonprofits and universities, constitute a resistance against the creep of IP into every corner of our lives.
Two of my favorite IP freedom fighters are Jennifer Jenkins and James Boyle, who run the Duke Center for the Public Domain. They are a dynamic duo, world leading demystifiers of copyright and other esoterica. They are the creators of a pair of stunningly good, belly-achingly funny, and extremely informative graphic novels on the subject, starting with the 2008 Bound By Law, about fair use and film-making:
https://www.dukeupress.edu/Bound-by-Law/
And then the followup, THEFT! A History of Music:
https://web.law.duke.edu/musiccomic/
Both of which are open access – that is to say, free to download and share (you can also get handsome bound print editions made of real ink sprayed on real vegetable pulp!).
Beyond these books, Jenkins and Boyle publish the annual public domain roundups, cataloging the materials entering the public domain each January 1 (during the long interregnum when nothing entered the public domain, thanks to the Sonny Bono Copyright Extension Act, they published annual roundups of all the material that should be entering the public domain):
https://pluralistic.net/2023/12/20/em-oh-you-ess-ee/#sexytimes
This year saw Mickey Mouse entering the public domain, and Jenkins used that happy occasion as a springboard for a masterclass in copyright and trademark:
https://pluralistic.net/2023/12/15/mouse-liberation-front/#free-mickey
But for all that Jenkins and Boyle are law explainers, they are also law professors and as such, they are deeply engaged with minting of new lawyers. This is a hard job: it takes a lot of work to become a lawyer.
It also takes a lot of money to become a lawyer. Not only do law-schools charge nosebleed tuition, but the standard texts set by law-schools are eye-wateringly expensive. Boyle and Jenkins have no say over tuitions, but they have made a serious dent in the cost of those textbooks. A decade ago, the pair launched the first open IP law casebook: a free, superior alternative to the $160 standard text used to train every IP lawyer:
https://web.archive.org/web/20140923104648/https://web.law.duke.edu/cspd/openip/
But IP law is a moving target: it is devouring the world. Accordingly, the pair have produced new editions every couple of years, guaranteeing that their free IP law casebook isn't just the best text on the subject, it's also the most up-to-date. This week, they published the sixth edition:
https://web.law.duke.edu/cspd/openip/
The sixth edition of Intellectual Property: Law & the Information Society – Cases & Materials; An Open Casebook adds sections on the current legal controversies about AI, and analyzes blockbuster (and batshit) recent Supreme Court rulings like Vidal v Elster, Warhol v Goldsmith, and Jack Daniels v VIP Products. I'm also delighted that they chose to incorporate some of my essays on enshittification (did you know that my Pluralistic.net newsletter is licensed CC Attribution, meaning that you can reprint and even sell it without asking me?).
(On the subject of Creative Commons: Boyle helped found Creative Commons!)
Ten years ago, the Boyle/Jenkins open casebook kicked off a revolution in legal education, inspiring many legal scholars to create their own open legal resources. Today, many of the best legal texts are free (as in speech) and free (as in beer). Whether you want to learn about trademark, copyright, patents, information law or more, there's an open casebook for you:
https://pluralistic.net/2021/08/14/angels-and-demons/#owning-culture
The open access textbook movement is a stark contrast with the world of traditional textbooks, where a cartel of academic publishers are subjecting students to the scammiest gambits imaginable, like "inclusive access," which has raised the price of textbooks by 1,000%:
https://pluralistic.net/2021/10/07/markets-in-everything/#textbook-abuses
Meanwhile, Jenkins and Boyle keep working on this essential reference. The next time you're tempted to make a definitive statement about what IP permits – or prohibits – do yourself (and the world) a favor, and look it up. It won't cost you a cent, and I promise you you'll learn something.
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/07/30/open-and-shut-casebook/#stop-confusing-the-issue-with-relevant-facts
Image: Cryteria (modified) Jenkins and Boyle https://web.law.duke.edu/musiccomic/
CC BY-NC-SA 4.0 https://creativecommons.org/licenses/by-nc-sa/4.0/
177 notes · View notes