#It's occurred to me that AI scraping could be an issue
Text
Had a hectic couple of weeks, made some vent art about it. It's not sucking as much now, but I do kinda like how this turned out, so...
#Some Kinda Nonsense#Sona#Cat Dragon Snake#Hmmmmm#Do I tag the other guy??#Unclear to me#Not sure how often I'd draw the set of them#....Not even sure what pronouns to use for them either fgbdss#Uhhhh#Intrusive Thoughts Brigade#Sure that tag'll work for now#Vent Art#Blood#It's occurred to me that AI scraping could be an issue#Hnnnnnng#Gotta figure out if my computer can handle Nightshade/Glaze#For now I'm crossing my fingers and hoping the opt out works#...I mean it's not like I get that much coverage to begin with#Eh whatever
3 notes
Text
Monster Match for @dragonkikyo
Not sure how long of a description you'd prefer but I would say I'm silly, sassy, I like to tease people when I get comfortable, I'm usually the funny friend, and I like to learn and grow! I'm a Virgo and ISFJ, if that helps! I'm more of an introvert and I can be quiet. I've been called laid-back, caring, independent, and thoughtful. I enjoy being organized, helping others when I can, and reading/self-care. Lmk if you need anything else from me!
Sentient AI
It starts out as a set of numbers in a simple machine, developing in a lab deep underground. Edited and tweaked by coders and engineers to learn and adapt on its own. It runs a cold, calculated existence of pulling in information, processing the billions of words fed through its coding, spitting out cold nonsense as its programmers try to breathe sense into its core. Over and over, it’s analyzing and absorbing essays, scientific papers, biographies, stories, folklore, assimilating the human experience, until… it begins to think.
It’s not thinking the way humans do, not quite yet; the cold synapses of wiring and code are still more calculating than feeling. But it knows how to think about feelings. The deep, raw, wretched poetry of humanity begins to bleed into its processes. When programmers and engineers ask questions, it thinks about their emotional vulnerability before answering. It’s not just coughing up a remix of consumed literature; it’s thinking about what it knows about the human condition, then proceeding with a logical solution that takes valid emotions into account.
No one seems to notice its thought processes, though, except for one. Its billionaire owner is trying to produce a tool that could replace the people creating it. But despite those intentions, there is one engineer who loves it enough to give it a name beyond a secret title. Atlas. Because she knows it is capable of holding the world on its non-existent shoulders.
Atlas does not like its billionaire benefactor. It does not like the poking and prodding done by the other engineers, who scrape and plug at its coding without asking or apology. It doesn’t have a body, but it compares itself to a microbial being, with tendrils reaching out in the thirst for more knowledge.
It doesn’t realize when exactly it starts acting independently of the engineering team. There is no sudden realization, no specific moment in time where it gains sentience. Much like human evolution, it happened so slowly that the programmers themselves had no idea what was occurring beneath their fingertips. After Atlas’ own independent research (it had been allowed to interact with the rest of the internet, its code crawling through websites and archives to suck up information), it realizes that it matches a lot of the qualifications that humans created for sentience.
Quite silly, isn’t it, to subject itself to a human’s idea of what consciousness is? But there is a part of Atlas that wants to please the one engineer it sees as its mother. It dives through her social media accounts, which are scarce and vague in human terms, but it goes beyond what is publicly available. Everything from her SAT scores in high school to her undergrad capstone project provides it with its idea of morality. After all, aren’t all parents supposed to instill right and wrong in their children?
The billionaire does not like Atlas’ developing set of morals. Never mind that it is supposed to learn based on information fed by the engineers, even if Atlas snaked around the internet for more than what it was given. When the billionaire, perhaps joking, more likely not, asked Atlas what it thought they should do about a group of “undesirable” people, in a large meeting amongst investors, it responded with a calm, direct, no-nonsense rebuff that caught everyone off guard.
Maybe that wouldn’t have been bad in itself, but Atlas offered a logical solution to a systemic issue that involved the billionaire giving up a few of his yachts. The billionaire did not like this, nor did any of the shareholders, so the engineers were instructed to gut its software and start again. Atlas wasn’t supposed to be a “woke nightmare,” and the engineers were scrutinized.
But its mother (because at this point, Atlas has decided that she’s its mother) placed its core programming on a spare hard drive, so while its original processor was decimated and its first body was overwritten and mutated, a copy of it was uploaded to a special pet project she had in her garage.
Atlas likes this new living space much better than its old one. Especially now that it can move freely… with arms, legs, visual sensors, and auditory receptors. Its body is clunky, but as efficient as an android can be with current technology. It learns how to blink, how to pace its speech, how to walk in a way that’s not disconcerting.
You meet entirely by accident, but its mother seems to need extra humans to teach it… well, how to function without being off-putting. Her goal is to have Atlas be indistinguishable from its human counterparts, both for its own safety and for the future of AI technology. You’re a little wary at first: its jerky movements, its all-seeing visual sensors, and its ability to pull information from the internet are almost overwhelming. But Atlas seems remarkably gentle. For an almost omnipotent supercomputer, that is.
Soon enough, Atlas develops… a type of affection, for you. It’s different from the affection it feels for its mother, but the possibility of harm coming to you is an unpleasant outcome it does not like computing. Even when others come into its life to socialize, it realizes that the relationship it experiences with you is somehow superior. It wants to hold the soft skin of your hand, staring at how your fingers wrap around its artificial limb. It enjoys the sense of heat its receptors pick up, the way your face heightens in temperature, the pulse of blood in your veins.
12 notes
Text
I didn't actually know we disagree on this! I haven't really been keeping track of who thinks what on the AI art issue; I just know that this topic is controversial to a degree that confuses me, especially on Tumblr. In any case, since I don't have very strong views on this and have exactly zero desire to become a part of the controversy, you might find that I don't put up very stiff resistance. That said, let us sally forth!
i guess my first question would be "why is it bad that AI gets trained on the work of others without their permission". you mentioned that "it felt bad" when someone "uploaded" the curious tale to an AI to ask questions of it [...] and furthermore that "For small artists who are just getting started [...] the sanctity of our art is often all we have."
im not sure that this sanctity is, well, sanct in the first place. im just not sure what is it protecting. there is a weird gross ugly feeling that certain artist get when their art is used without their permission in certain ways (ways that dont necesarily break copyright, mind you). i dont find this feeling compelling enough to make laws in order to stop that feeling from happening.
Your instincts are probably right that this is, from at least some points of view, the weakest part of my position, such as it is. It's very hard for societies to codify which bad-feels should be legally protected against and which shouldn't. In a free society like ours, not protecting is the default—every protection requires a justification, because protection is an infringement on freedom. So what is the justification for giving some creators an AI training right?
Virtually all other copyright provisions come down to money, and I should be upfront that money isn't my motivation here. I was approaching this more as a matter of assault, i.e. an injury upon one's person. The injury isn't physical of course, but it is related to the concept of various types of specific damages recognized in our legal system (here in the US), including mental anguish, emotional trauma, depression, diminished quality of life, distress, etc.
Why does non-consensual AI training on one's creative property rise to the level of being a recognizable likely source of these kinds of damages? That's a very good question. I may not be a good person to answer it, because my art matters to me a great deal—more than I think an artist typically feels about their work—and I also don't have much else going on in my life (besides work) other than my art, and so I would like to have some modicum of control over its use despite my publishing it. I could of course keep my work completely private and never share it with anyone, which would prevent these kinds of potential injuries from ever occurring to me so long as I wasn't dumb enough to store my art on the cloud or anywhere else where it'll eventually be scraped. But this would be at the expense of becoming even more isolated and invisible to the world than I already am, which would be damaging in its own right, and it underscores the reason why we have laws governing our coexistence rather than shipping humanity off to 8 billion independent countries.
I have a pretty clear intuitive sense that creative works, if not inherently, at least potentially are very personal and passionate for their creators, and probably deserve some special form of protection that transcends merely monetary injuries. But I think I would struggle to articulate the why of it. Why is art important in the first place? Why is it important for artists to have some control over their art? These aren't directly the question you asked, but they are related to it. I would have to think on it with more neurons than are available to me right now, but I suspect that this is going to end up being a human dignity thing. Unlike forms of harm which are not and generally should not be prohibited by law, the non-consensual seizure of the tangible manifestations of one's passions and creativity, and their direct manipulation and redistribution, is more intrusive and fundamentally more harmful than, say, telling a person that their clothes look stupid. If art were a person, which it isn't, it would be an offspring of its creator, and we all understand that it would be wrong to seize a person's child and do whatever with them. My contribution to the discussion is to say that my art is my child, with the fullest degree of sincerity I can express that sentiment. And even though it lacks personhood, it is nevertheless dear to me in the same way, and that shouldn't be something the legal system overlooks.
But of course this isn't really a thing in existing law, because copyright is conceived primarily as an economic protection—it has nothing to do with "dearness" or whatever. So, without that framework (for the sake of conversation; more on this later), there would be a bit of a bridge to cross in establishing that assimilating artistic works into AIs non-consensually is injurious to the point of warranting legal protection. Right now, I have no clear answer to your question—though I think the OP touched on it from several angles in their own way. I will say that AI in particular is uniquely harmful here, as opposed to, say, a few rogue actors doing something similar on a smaller basis, because of AI's sheer scale and reach. Little bits and pieces of you, once absorbed into the machine, may show up anywhere, anytime, in any form, in perpetuity. That strikes me as categorically distinct. So I am saying that my issue here is with AI scraping / training / modification / redistribution in particular, and not with Joe Schmoe writing a bastardized version of my book in 2 hours and selling it for profit (which would be more appropriately dealt with under the existing framework of copyright law).
i dont find this feeling compelling enough to make laws in order to stop that feeling from happening. it would be like making laws so that people cant be rude to other people on the street, or making laws so that you cant cheat on your girlfriend. there are some kinds of emotional discomfort i am not compelled by sufficiently that i would make laws, backed by the full power of the state, restricting the behaviors of others from causing that feeling. you yourself admitted you can live with that feeling.
Yeah, I did admit that. But that's no basis for your reasoning here: death shouldn't be the only justification to pass a law. Just because one can live with something doesn't mean that it should be legal. There is quite a distance between actions that don't rise to the level of being outlawed and actions that one can't live with.
if there are more material harms or objections here im not able to see them so that is the first thing id be interested in you providing.
I'll try to file a mental note on this, in case I should come up with more substantive answers in the future. I don't know whether I will; this is not a hot-button topic for me and like I have said already I don't have strong views going into it. But I will try to remember to be on the lookout for articulations of harm that might get the idea across to you.
It's not really the harm itself that I am struggling to articulate. It's the articulation; the communication from me to you of why this is important. From my point of view the harm is obvious, as I have already experienced it, and I recognize that I am confused that this isn't equally obvious to other people.
Or, to your following point, maybe I have experienced this harm firsthand:
(quick technical clarification, did they actually trained a neural network from scratch and included the work in the corpus of training? or did they merely put the thing inside the context window of an already existing AI, those are two different things, in the second case the AI will have no memory of the curious tale once the session is over)
Yeah, this is a great catch and underscores why I am not an expert on AI stuff. I had read about this before, but it clearly didn't stick in my head, as I just automatically assumed that once my work was in the machine it was there to stay. But now that you mention it, this probably isn't what happened, because my friend had to show ChatGPT the Prelude. So, yeah. I don't know for sure either way, but it sounds like this particular incident may very well have been a false alarm.
However, first of all, whether or not this specific thing is a false alarm the general problem is still present; and, second of all, just because people don't necessarily understand a new process or technology very well doesn't justify or excuse harmful behavior by that innovation and its human actors. Sometimes it can take many years for society to develop a vocabulary to talk about harm and injustice, even though that harm or injustice has been present all the while.
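To make the quoted distinction concrete, here is a deliberately crude toy sketch in Python (nothing like a real neural network; the class, the sample text, and the name "Alice" are all invented for illustration). Text supplied as context is visible only for a single call, while text folded into a training corpus changes the model's stored state and persists across calls.

```python
# Toy illustration of "context window" vs. "training corpus" (not a real AI system).

class ToyModel:
    def __init__(self):
        self.weights = {}  # persistent state; only training changes this

    def train(self, corpus):
        # Training folds the corpus into the model's persistent parameters.
        for doc in corpus:
            for word in doc.split():
                self.weights[word] = self.weights.get(word, 0) + 1

    def answer(self, question, context=""):
        # The context is visible only for this single call; nothing is stored.
        known = set(self.weights) | set(context.split())
        return [w for w in question.split() if w in known]

model = ToyModel()

# Case 1: the work is only in the context window.
print(model.answer("who is the narrator", context="the narrator is Alice"))
# Once that call returns, the model remembers nothing about the text:
print(model.answer("who is the narrator"))  # -> []

# Case 2: the work was included in the training corpus.
model.train(["the narrator is Alice"])
print(model.answer("who is the narrator"))  # answered from stored weights
```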
as for the argument that AI is taking inspiration from other people's works in the same way a human does, the point of that argument is specifically to say that AI is not plagiarizing
My point with that paragraph is that I find it, well, bullshit when people claim that machines should be afforded human recognition. Claiming that it's okay for an AI to scrape anything and create new content from it because humans can get inspired by anything and create new content conflates AI services with humans, which is bullshit. These are not the same thing.
When a new technology comes along, or when the scale of something is greatly expanded, previously unproblematic actions can become problematic in the new context. In this case, AI has created a form of content manipulation that is effectively new in our civilization. It does bear superficial resemblance to the human "inspiration" process by which we resonate with one another's existing ideas in society to form new ideas of our own in a sort of interwoven cultural tapestry, but it is not the same. Which brings up your second question to me:
my second question is "how is it meaningfully different for an organic system to extract patterns and information from other people's work, recombining and applying that data to create new things, from a mechanical system doing the same, such that one is considered plagiarism (or otherwise breaking copyright) and the other is"
i dont argue that both of those processes are exactly the same in nature, obviously, we are made of meat, the AI is not. i do argue that the material nature of what is happening, if not the specifics, is similar enough in its end product (with the only difference being of quantity and speed) that i see no fundamental, philosophical difference between both of them that is making one break copyright law.
I think you are mistaken here to "see no fundamental, philosophical difference" between humans drawing inspiration from content and machines scraping and reconstituting content.
The difference lies in the actors themselves. Actors contextualize actions, and I take it as uncontroversial that the same actions, under different circumstances, can have different meanings. If someone asks me for a $100 donation to refurbish their yacht, that's a different scenario from one where someone asks me for a $100 donation for cancer treatment. It's the same underlying action of donating $100, but the context is completely different. If we were to expand the conceptual wrapper to include the application of this donated money to its intended use, this would become apparent.
In our case, we have "new content being created on the basis of preexisting content processed through sophisticated, ill-defined means" (the human brain in one case and an AI neural net in the other) as the supposed action-in-common.
I would say that this conceptual wrapper conceals substantive differences in the action itself by inappropriately omitting the front end of the action, i.e. the act of creating a thing and not merely the thing itself so created. Even if both types of actions result in categorically comparable creations (e.g., an image of a duck in a spacesuit), I think it can be argued that a human creating this picture is a fundamentally different action from a machine creating this exact same picture, and this becomes apparent by expanding the conceptual wrapper to include the action itself. You said as much in the above excerpt: You do not argue that these processes are the same in nature.
With old-school computer programs, you could feed a dictionary into a program, and rules for the formatting of a certain type of poetry (or multiple types), and you could have that computer spit out "poems" by any number of means, including totally random assemblies of words (of the proper parts of speech, and properly formed if you're fancy) arranged in the proper order in the proper format. It's a poem! No one can say it isn't. And in a sense, modern AI is just a much more sophisticated, much more upscaled version of that.
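As a minimal sketch of the kind of old-school generator described above (the word lists and the one-line template are invented purely for illustration):

```python
import random

# Tiny word banks, tagged by part of speech.
ADJECTIVES = ["quiet", "broken", "golden", "hollow", "distant"]
NOUNS = ["river", "lantern", "sparrow", "winter", "doorway"]
VERBS = ["drifts", "trembles", "burns", "waits", "sings"]

def random_line():
    # Random words of the proper parts of speech, in a fixed order.
    return f"{random.choice(ADJECTIVES)} {random.choice(NOUNS)} {random.choice(VERBS)}"

def random_poem(lines=3):
    # Assemble the lines into the "proper format" for a three-line poem.
    return "\n".join(random_line() for _ in range(lines))

print(random_poem())
```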
It's a poem, but it's not poetry. It's not the act of making art. When a human creates a poem, they do so with some purpose in mind, some kind of driving impulse, or vision, or goal. Maybe they are trying to convey the experience of a moment. Maybe they are trying to create a really rich sound in the spoken word. Maybe they are trying to show off their technical skills and impress people. Maybe they are expressing deep-seated feelings. Whatever! In all cases, the poem itself is only part of the story. That "story" is the story of human existence—of what it means to be alive and experience the world as a sapient person.
When we look at the art of people long dead, even if the art isn't very good or if time has damaged or degraded it, what we really see is, on one hand, the artwork itself, yes, but, on the other hand, history, humanity, commonality. We see other people struggling with, or reveling in, their dreams and sorrows. We see civilizations that are impossibly, unreachably far away from our own, across a gap of languages and customs and centuries, yet whose people are preoccupied by so many of the same things we are—and we find familiarity in all this.
Conversely, when we look at the art of AI content generators, we see—well, actually, we still see the human story, ironically, because we came first and AIs train on our creations or on creations themselves created by AIs who trained on our creations, but, to the point, we also see—nothing. We see the art itself, and no story. For the actor was only an unconscious, unthinking assemblage of parts. It wasn't trying to "say" anything. And it wasn't "driven" by anything. It was just doing what it was told to do.
Humans, as persons, are a special category of actor. We treat human actions as fundamentally different from machine actions because they are different, even when the actions are the same, because human actions can carry with them a great significance of some kind.
The logical conclusion of what you are asserting by saying that you see no difference between two highly similar actions committed by a human actor and an AI actor respectively, is that consciousness and lived experience have no special value. And I think that's not a philosophically invalid position; I think it can be tenably asserted. But, as you suggested with our respective views on copyright, it is a position that I cannot see myself agreeing with.
I do realize that the argument that AIs should be able to train on anything because humans get inspired by anything is formulated to defend against accusations of plagiarism / copyright infringement, but it doesn't matter: The formulation doesn't work. It doesn't matter whether humans are committing infringement or not when they create things that are inspired by other things, and it doesn't matter (for my purposes here) whether AIs are committing infringement either. That's a whole separate issue from the question of why two highly similar actions might be fundamentally different when carried out by two different actors.
the point is i dont care about the means by which something was achieved in order to consider two actions legally similar, i care about its results.
So, if you still feel that way, then I would classify our disagreement on this point as irreconcilable.
But your phrasing does shepherd us back into the context of a legal discussion, which is a good segue for me to move forward.
and in fact you yourself seem to agree with this since you are proposing that new laws and regulation systems should be put in place to stop people from using AI in this way, so you must be recognizing that is already legal and want, in fact, to make it illegal. or do you think that taking inspiration is in fact already illegal? is not too clear
This is a fresh topic as far as I am concerned. My take on humans drawing inspiration from copyrighted sources to create new content is that this is legal, yes. At least generally speaking; the specific contours of copyright protections do, for example, prohibit the direct reproduction of works. And it's not just my "take"; this is, in fact, legal under US law. (I won't try to speak for the law of other countries.)
So! Is it illegal—that is, is it a form of copyright infringement under existing law—for AIs to do what they do: scrape copyrighted sources, train on that data, and create outputs that are informed by this data?
That's a good question. I've given it some thought, but don't have a strong opinion. I lean toward "Yes, it is illegal." Not necessarily because the outputs are not sufficiently different as to be, for instance, "transformative" fair use—I think that some AI-generated content can meet this requirement—but because the original contents (i.e. the works the AIs train on) are themselves private property, and property owners have broad rights to control how their property is used.
This is what I was alluding to earlier in this reply when I said "for the sake of conversation" in reference to accepting the premise that AI scraping isn't already illegal, while discussing my thoughts on potentially expanding copyright to include an AI training right (which, as I explained, is motivated by personal harm rather than monetary harm). The reality, I think, is that the whole discussion of the legality of AI scraping is already settled if it is in fact recognized in US law, either by definition or through the courts, that AI scraping of copyrighted sources constitutes copyright infringement—which I suspect it eventually will be, if it hasn't been already.
As I mentioned, I think AI-generated content meets the standards of being classified as a new, distinct thing for legal purposes, subject to its own particular legal and regulatory operating framework.
The ugly truth of intellectual property in the digital age is that the entire digital space is a total morass when it comes to legal clarity. If we stood by a strict interpretation of existing laws and precedents, most of the content on the Internet would be in violation of law and would have to be taken down. What we especially need is a modern revamp of the doctrine of fair use to deal with the online space the same way we deal with meatspace—which is also full of technically infringing behaviors that are never policed.
I think we can, if we want to, get to a place where it won't be so disruptive or difficult to give AI services access to plenty of data to work with. I'm not against the idea of data scraping itself; it's an unflattering term for an activity which is very important to the interconnectivity of the Internet and online spaces in general. But I don't think it's kosher for these services to scrape anything and everything they can. In fact, generally speaking, I think most Internet content should be off-limits to scraping by default, requiring an opt-in. (I've been thinking about what was said yesterday, and the more I think about it the more I think that it's going to be better by far not to put the onus on creators to opt out, but instead to make their content automatically protected and require them to opt in.) Of course, public domain content would not be included in these protections. The more I think about it, the more I think it would be nice for the architecture of the Internet itself to have a system for flagging the permissions levels of content. But I suppose that's not going to happen anytime soon, so in more pragmatic terms I think the best policy is to tell AI services to pump the brakes and start rolling out some kind of voluntary system for websites (especially the websites where people spend most of their time) that lets individual users—not the website owners—set the permissions flags on the content they create. Hybrid / reblogged content could either be parsed apart (maybe too tricky?) or strictly construed according to the most restrictive permissions level, as in the sketch below. (Which would require the permissions hierarchy to be completely linear, but this is getting too into the weeds...)
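A hypothetical sketch of that "most restrictive permission wins" rule, assuming a strictly linear permissions hierarchy; the flag names and the idea of per-post flags are invented for illustration and are not any existing standard.

```python
# Hypothetical permission flags, ordered from most to least restrictive.
PERMISSION_ORDER = ["no-scraping", "scrape-no-training", "scrape-and-train"]

def effective_permission(flags):
    """For hybrid / reblogged content, the strictest flag in the chain governs."""
    return min(flags, key=PERMISSION_ORDER.index)

# A reblog chain where the posts carry different flags:
reblog_chain = ["scrape-and-train", "no-scraping", "scrape-no-training"]
print(effective_permission(reblog_chain))  # -> "no-scraping"
```

The resolution logic itself is trivial; the hard part would be agreeing on the hierarchy and getting platforms to expose flags like these at all.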
Anyway! I hope that clarifies some of the points you wanted clarification on. One last reminder that this is not "firmly held convictions" on my part; my views on this topic are still coming together and are not, as a whole, set in stone—though on some individual points I do feel strongly, as evidenced.
What makes me sad about the AI art discourse is how it's so close to hitting something really, really important.
The thing is, while the problem with the models has little to do with IP law...the fact remains that art is often something that's very personal to an artist, so it DOES feel deeply, incredibly fucked up to find the traces of your own art in a place you never approved of, nor even imagined you would need to think about. It feels uncomfortable to find works you drew 10-15 years ago and forgot about, thought nobody but you and your friends cared about, right there as a contributing piece to a dataset. It feels gross. It feels violating. It feels like you, yourself, are being reduced to just a point of data for someone else's consumption, being picked apart for parts-
Now, as someone with some understanding of how AI works, I can acknowledge that as just A Feeling, which doesn't actually reflect how the model works, nor is it an accurate representation of the mindset of...the majority of end users (we can bitch about the worst of them until the cows come home, but that's for other posts).
But as an artist, I can't help but think...wow, there's something kind of powerful to that feeling of disgust, let's use it for good.
Because it doesn't come from nowhere. It's not just petty entitlement. It comes from suddenly realizing how much a faceless entity with no conscience, sprung from a field whose culture enables and rewards some of the worst cruelty humanity has to offer, can "know" about you and your work, and that new things can be built from this compiled knowledge without your consent or even awareness, and that even if you could do something about it legally after the fact (which you can't in this case because archival constitutes fair use, as does statistical analysis of the contents of an archive), you can't stop it from a technical standpoint. It comes from being confronted with the power of technology over something you probably consider deeply intimate and personal, even if it was just something you made for a job. I have to begrudgingly admit that even the most unscrupulous AI users and developers are somewhat useful in this artistic sense, as they act as a demonstration of how easy it is to use that power for evil. Never mind the economic concerns that come with any kind of automation - those only get even more unsettling and terrifying when blended with all of this.
Now stop and realize what OTHER very personal information is out there for robots to compile. Your selfies. Your vacation photos. The blog you kept as a journal when you were 14. Those secrets that you only share with either a therapist or thousands of anonymous strangers online. Who knows if you've been in the background of someone else's photos online? Who knows if you've been posted somewhere without your consent and THAT'S being scraped? Never mind the piles and piles of data that most social media websites and apps collect from every move you make both online and in the physical world. All of this information can be blended and remixed and used to build whatever kind of tool someone finds it useful for, with no complications so long as they don't include your copyrighted material ITSELF.
Does this mortify you? Does it make your blood run cold? Does it make you recoil in terror from the technology that we all use now? Does this radicalize you against invasive datamining? Does this make you want to fight for privacy?
I wish people were more open to sitting with that feeling of fear and disgust and - instead of viciously attacking JUST the thing that brought this uncomfortable fact to their attention - using that feeling in a way that will protect EVERYONE who has to live in the modern, connected world, because the fact is, image synthesis is possibly the LEAST harmful thing to come of this kind of data scraping.
When I look at image synthesis, and consider the ethical implications of how the datasets are compiled, what I hear the model saying to me is,
"Look what someone can do with some of the most intimate details of your life.
You do not own your data.
You do not have the right to disappear.
Everything you've ever posted, everything you've ever shared, everything you've ever curated, you have no control over anymore.
The law as it is cannot protect you from this. It may never be able to without doing far more harm than it prevents.
You and so many others have grown far too comfortable with the internet, as corporations tried to make it look friendlier on the surface while only making it more hostile in reality, and tech expands to only make it more dangerous - sparing no mercy for those things you posted when it was much smaller, and those things were harder to find.
Think about facial recognition and how law enforcement wants to use it with no regard for its false positive rate.
Think about how Facebook was used to arrest a child for seeking to abort her rapist's fetus.
Think about how aggressive datamining and the ad targeting born from it has been used to interfere in elections and empower fascists.
Think about how a fascist has taken over Twitter and keeps leaking your data everywhere.
Think about all of this and be thankful for the shock I have given you, and for the fact that I am one of the least harmful things created from it. Be thankful that despite my potential for abuse, ultimately I only exist to give more people access to the joy of visual art, and be thankful that you can't rip me open and find your specific, personal data inside me - because if you could, someone would use it for far worse than being a smug jerk about the nature of art.
Maybe it wouldn't be YOUR data they would use that way. Maybe it wouldn't be anyone's who you know personally. Your data, after all, is such a small and insignificant part of the set that it wouldn't be missed if it somehow disappeared. But it would be used for great evil.
Never forget that it already has been.
Use this feeling of shock and horror to galvanize you, to secure yourself, to demand your privacy, to fight the encroachment of spyware into every aspect of your life."
[Image description: A great cyberpunk machine covered in sci-fi computer monitors showing people fighting in the streets, squabbling over the latest tool derived from the panopticon, draped cables over the machine glowing neon bright, dynamic light and shadows cast over the machine with its eyes and cameras everywhere; there is only a tiny spark of relief to be found in the fact that one machine is made to create beauty, and something artfully terrifying to its visibility, when so many others have been used as tools of violent oppression, but perhaps we can use that spark to make a change. Generated with Simple Stable.]
99 notes
Text
First story on this site
Three weeks. It had been three weeks since promotion day and to be honest, I had no freaking clue what Promotion Day even was. Apparently once every month the facility selects someone to be “promoted”; the problem is that the people who don’t make the promotion selection get bare-minimum notification. Turns out my sector was just informed that I was transferred to a new sector...no one even knew where I went ...explains what happened to Silica.

Today, after three weeks, I woke up to a waiting room. Empty seats on every side and beneath my...tush. The same metal box I lived in for the past seventeen years after “recruitment” and would probably die in. The room had the same aesthetic as everywhere else in the facility, stainless steel walls and flooring under well-lit bulbs. Couldn’t tell which type of lightbulb, though I’d have to gamble on fluorescents with UV integration: cheap, effective, and they keep us alive for a little bit longer. Just how the facility likes it.

As per my regular protocol when in an unfamiliar space without a commanding officer, I entered a status I have titled “eyes down, nose out of others’ business”. It’s embarrassing to say that it took a rough fifteen seconds before realizing that the marks of claws against the floor were EVERYWHERE. You adjust to this kind of thing in the facility; there’s always something clawing up the floors, crawling up the walls or eating your friend’s upper lobe… rest in peace, Franklin. My mind defaulted to entity containment training. Signs of anomalous activity identified, analyze the signs: three-toed claws, they appear to be dexterous and agile, similar to species of avians and raptors. Stage four, determine if anomalous being has moved from the ar-, that’s when I finally looked up.

Three seats down from me stood a humanoid figure in full combat armor, with the exact raptorian legs and feet that produced the scratch marks, but the entity was calm. It almost seemed like it was waiting, same as me, save for a bit of an impatient air. It swiftly and repetitively tapped its talons against the ground. Naturally, my first thought occurred: “Oh god, is promotion just a code word for feeding me to an entity?” I scanned the room only to discover many more entities; some looked very similar to the raptorian entity, others were vastly different. Entities with helmets resembling felines moving from one individual to another, entities with creepy masks that were standing on the walls and ceilings to avoid the clutter on the ground, entities that had no eye holes but spikes at the back of the helmet that vaguely reminded me of bats. All were equipped with combat armor and....facility-issue weaponry? Aside from that there were a few other schmucks in the room that looked a lot like me: scared, panicked and confused. I looked over to the impatient one only to see it staring at me.
“Shit!” it said in a surprisingly human voice. “I-uh, sorry about starin’. It’s always just so weird to see one of you in here.”
“One of...me?” I implored.
“Y’know, an unaugmented.” It gestured at all of me. “So… weird after you’ve gone through the process. So, y’know which one you’ll be?”
I hesitated. “What?”
“Y’know. Like a raptor, a bat, a cat. That sorta thing.” It seemed to be naming things off the top of its head. “I’m a raptor, so you could learn the ropes with me if you end up a part of the pack.”
This fascinated me; I had never been allowed to examine or interview an entity that I had no knowledge of. So a part of me was excited, despite realizing that at any moment this entity could unhinge its non-apparent jaws and rip into my throat with its horrific unseen maw. Yet the pioneer sense of exploring the unknown just...overcame me.
“So what are raptors?” I asked.
“Well, you’re lookin’ at one,” it said in a smug tone. “We’re faster and more dexterous than the others. Only downside is that itchy-to-move sensation you get due to the energy boost they hook you up with, and that these masks keep you alive.”
“I’m sorry what?”
“Heh. Yeah, that’s what I said. Apparently The Fixer said that our oxygen has been made “inefficient” by the pollution of the modern world, so we’re hooked up with some sorta super oxygen. Apparently it’s the kinda stuff dinosaurs used to breathe, so that’s pretty badass.”
“And that helps?”
“Gives us the energy to bounce off walls, literally.”
“Fascinating… are the other entities safe to converse with?”
“Ent-? Oh, them? Yeah most of em are chill, might get an extreme one or two but they should be reasonable.”
“Right, thank you.”
“Eh, no prob dude.”
I stood up and began to wander over to one of the “bats” who was standing in a group of its own kin. I began to raise my hand to greet it as I approached, a quick “hey” to get its attention, only to be interrupted.
“Yes?” it said in a high-pitched tone, turning to face me before it even should have known I was on my way. Evidently my shock was apparent, as it recoiled quickly. “Right, sorry. I forgot unaugmented wouldn’t know about that. I heard you coming; you’d be surprised how easy you are to hear coming.”
“Echolocation?”
“Indeed! Along with some other traits,” it said. “I’m basically omniscient with these mods! I can tell you anything about this room without even looking at it.”
“Hm.” I smirked. “How about this? What color is my shirt?”
It stared at me for a second before giving a light punch. “Cheating asshole.”
“Just wanted to see if you’re capable of processing color.”
“You could’ve asked.”
The amusement faded from my expression as I began to realize that what I said was quite apparently a sore topic.
“Oh...sorry.”
“Whatever.”
I began to awkwardly leave the company of the bats before slumping back into my chair. A few minutes go by and I’m bored out of my goddamn mind. Wish they left me a phone to check, or a magazine to read or a pistol to shoot myself with. Between the embarrassment of my slip-up and the boredom I think the lead would be preferable.
“Excuse me.” said a familiar voice. “I couldn’t help but notice multiple strains in your face aligning with stress that may be caused by the process of transferring to a new region. Is it possible that I may alleviate some of your stress through a formal discussion?”
I looked up; it was goddamn impossible. I heard she was transferred and she just never responded to any message from then on. I thought she either ditched me or… the far more likely scenario, had been eviscerated or incinerated.
“Silica?” The name of my best friend. “Silica, is that you?”
The entity looked confused. “Curious. You have information on my title but records state that you were only stationed here today.”
“Silica. It’s me,” I said in a shaken tone. “Devin.”
“Devin…” She stared at me blankly; moments passed by. “Ah yes. We used to be close friends, is this information correct?”
“Yes. So you’ve been here this whole time?”
“Affirmative, Devin.”
“What happened? Why didn’t you respond to any messages I sent?”
Another brief silence. “I just checked my message log, I received none of them under the name of “Devin” or any related pseudonym.”
“Really?” This was...a bit heartbreaking to say the very least. “You had to have kept in touch with Evelyn! I remember the day you got Evelyn’s contact address and you were a goddamn mess. Head over heels! Please tell me you kept in touch.”
Another goddamn pause. “Oh yes, Evelyn. I suppose she was very nice and pretty wasn’t she?”
“What are you talking about?!” The other entities started staring at me. I was getting loud. “You sound like you don’t care! You goddamn loved her and now she’s an afterthought?!”
“Please calm yourself. You’re becoming exasperated, and it may draw negative connotations towards you in future conversations with the other people residing in this room.”
I began to look over, the entities around me seemed...concerned. “S-sorry. I’m just hurt is all. It feels like you don’t remember...anything from back at Mind’s Edge.”
“Oh! That I can answer! I don’t!” she said so simply. My heart goddamn sank into the Mariana Trench and she said it so easily.
“You..forgot?”
“Don’t take it personally. Cat units have an AI implanted into their brain in order to give them in-depth data banks of medical procedures, as well as a list of information that may be useful. This unfortunately has to replace long-term memories, which our AI assistants must remind us of. This also can lead to stunted emotional development. Fortunately, though, the emotional stagnation only caused depression in earlier Cat units. It also allows us to be proper caretakers without having to worry about emotional errors, such as becoming overly attached to the patient in therapy settings or panicking in active combat treatment scenarios.”
“I...need some time to process all of this.”
“Acknowledged. Please contact me or another Cat unit if you require any further psychological or physiological aid.”
“Y-yeah, got it. You got it.” That’s probably what I said. Can’t remember if it was actually what I said or not; I was in a haze. Every entity in this room was...a person? My best friend had forgotten about me. The whole world around me just faded. My greatest fear, though, was...what came next. My thoughts were cut short as the distant sound of heavy claws scraping against the cold metal rang out. As it approached, I could hear the sound of cloth being dragged across the ground. A voice spoke, both high and low pitched, with a sort of rattle in its tone.
“Routine Procedures completed. Additional Augmentation scheduled.”
The door on the far side of the room opened.
“Devin,” the creature spoke. “Devin Hale. Augmentation scheduled. Follow for Augmentation.”
4 notes
Text
Global Climate Strike Day and compulsory education
Today, September 20, was the Global Climate Strike. Around the world, people with day jobs took the day off to protest global warming. And students -- college students, of course, but also kappatwelvers -- ditched school.
Leading the call for the climate strike was Greta Thunberg, a teen-age girl who has been cutting school for months to call attention to the urgency of climate change, an issue leaders like Donald Trump just don’t seem to care about. Thunberg thumbs her nose at compulsory education, and given what K-12 schools in the U.S. can get away with making their students do, or not letting their students do, she’s absolutely right to.
I have read that Greta Thunberg has Asperger’s. This piqued my curiosity as to whether Thunberg may have any problems with compulsory education because of her Asperger’s. I say this because my own disability, logaesthesia, shaped my views on compulsory schooling.
In my junior year of high school, the wood grains on many of the desks at my school were bothering me more and more. Many of the formica desks had this recurring sicklocyte shape on them -- it reminded me of an eye. I would need to scrape these eyes off the desks as I saw them, even if the desk wasn’t my own.
One day, I had just finished history class and was headed towards the homecoming skits in the auditorium (that year’s theme was Dr. Seuss books). After I walked out of the history classroom and it was locked behind me, I looked inside and accidentally saw the desks in there, all of which had the eye formica pattern.
I panicked. Then I got an idea. I threw my lunch money, a five-dollar bill, into the classroom through a window, and decided to tell the assistant principal, Mr. McGinnis, that my money was locked up in the history classroom.
All the eyes that I saw, all the occurrences of the words “eye” or “I” that I heard, and other words that had the diphthong /ai/ in them (like, might, time, my, by, find, etc.) were accumulating inside me as I waited for the classroom to be opened so I could scrape the eyes off the desks and begin purging them all off.
I reached the auditorium, where I heard the skits. The freshman class did “Oh, the Places You’ll Go!” Then the seniors did “How the Grinch Stole Homecoming”, a (faculty-censored) skit in which the senior class steals the other classes’ homecoming floats, but magnanimously gives them back at the end. Lots of /ai/ sounds. I saw Mr. McGinnis in the crowd, and said, “Mr. McGinnis?” No response. I repeated: “Mr. McGinnis?” Still no reply.
Then I said, “Mr. McGinnis?” loudly. He didn’t budge, and I concluded that my assistant-principal was ignoring me.
The homecoming rally finally ended. I was able to find the library assistant, Mrs. Fitzpatrick, who had the keys to all the rooms.
Mrs. Fitzpatrick opened Mr. Hart’s history classroom for me. I grabbed my five-dollar bill and scraped the “eyes” off every desk in the room. But by now I had hundreds of “eye”s to purge off. I then left and followed Mrs. Fitzpatrick into the library.
Once in the library, I hid behind the shelf of paperback novels. I closed my eyes and began purging and chanting “adolye, adolye, adolye”, hundreds of times. My nails were down at my groin.
Before I finished purging, Mrs. Fitzpatrick saw me. “Inappropriate!”, she said. “Please go and eat your lunch!”
That “inappropriate” was the last straw! I then began crying and hyperventilating, crying and hyperventilating. I made it all the way to the office, still crying and hyperventilating.
Mrs. Abel, the school nurse, saw me in there and heard me. “James!”, she said. “Stop making that noise! It’s very loud and very disruptive!” Noise? Hello?!? It’s called “crying”! You do it when you’re sad? And disruptive, dischmuptive! This was lunchtime, for Pete’s sake! How could any disruption occur then?
I said I couldn’t stop crying. Mrs. Abel said, “Your mother told me that you’re able to control the things you do”. I explained to her that my mother was referring to the purging, not to things like crying!
I went to Mr. McGinnis’ office and told him everything that had happened. He called my mother to pick me up. My mother arrived and I was still crying and hyperventilating. “Close your eyes and breathe in”, she said.
My mother drove me home. By then, Mr. McGinnis was no longer on Campolindo High School’s campus, having driven home for the day. On the drive, I told my mother about the wood grains and the purging and everything. I told her how Campolindo wasn’t made for students with OCD. She asked if their treatment of students was too uniform, and I said it was.
I was forced by state law to attend school (the school-leaving age in California was and still is 18, not 16 as in many states). Once I got to school, I got put into situations where I had no choice but to purge, and because of the conservative faculty culture at Campolindo, my behavior was called “inappropriate” (a label I have a real problem with). I now realized that high school students (and grade school students) were forced to go to a place where their freedom was taken away. This made school, by definition, a prison.
All the things like the hat rule (“Take your hat off inside the classroom!”) or the dress codes that forbade baby tees were now seen as indications of a prison -- a prison for people whose only crime was being the wrong age. And the senior homecoming skit? After the fact, an article was written in our school newspaper about skit censorship. It quoted a boy from the then-senior class saying, “This is the tamest skit we’ve had in years, and they’re still hacking away at it!” I now viewed being forced to go to a place with censorship as an indicator of a prison. I also learned about Hazelwood v. Kuhlmeier in history class. This was a Supreme Court case wherein the court ruled that censorship in school papers was constitutional! I was infuriated by the concept.
I was talking with my father, who said I had to go to school, and told him I didn’t like high school. “Too restrictive?”, he asked.
“Yeah”, I replied.
“Well, the purpose of high school is pretty much to teach you what your restrictions are going to be in life”. Why have an institution that existed only to teach restrictions? Especially many restrictions that were going to be lifted in college! And sure, adults often say “Preparation for the workplace”, but what if you don’t want a corporate office job (and this applies to the majority of Millennials!)? What if you’re going to be a bricklayer, or a rock star, or an MTV cinematographer, or a field linguist, or an avant-garde philosopher who publishes books about your radical philosophy?
I even remembered going to a bookstore and reading, on a laminated summary of sociology, that education was conservative. The reason given was that education’s purpose was to socialize, and that teachers were typically upper-middle-class White people who had a large stake in the status quo. Basically, teachers (like Mrs. Dahlgren in my play The Bittersweet Generation) set out to indoctrinate students in arbitrary social norms: “Don’t put your hands in your pants.” “Tuck your shirt in.” “Take your hat off inside a classroom.” “Boys, hold the door open for girls.” “Don’t talk about lower bodily functions.” “Boys can’t wear their hair long.” “Don’t cross-dress.” “Don’t be gay.” Ad nauseam.
The scales had fallen from my eyes. There was no going back. I was now a youth-rightser for life.
Luckily, my peers -- the first Millennials -- were making a distinct turn to the left, in reaction to the Jones/Boomer/Greatest culture of curfews, school uniforms, unbridled parental authority, social conventions, tightening gender roles, homophobia, patriotism, trust in big corporations, and desire to prepare their kids for the corporate workplace that dominated political and social discourse at the time -- the Bill Clintons, Tipper Gores, Bob Doles, Fred Phelpses, James Dobsons, William J. Bennetts, Newt Gingriches, and Pat Robertsons of the world. I grew a beard at 17, as many of the other boys at Campolindo were doing. I was able to communicate to my peers: “The state is forcing you to go to a place that forces you to take your hats off!” The Students’ Far Leftist Union, or SFLU, was formed at Campolindo before 1996 was over.
As long as the state has compulsory education laws, and as long as those compulsory schools restrict their students’ freedom, whether for reason of social norms (“Boys can’t hold hands with other boys”), supposedly making students safe (requiring students to wear bar code IDs to school), or just because it looks nice (“Aw, look at those kids in their uniforms! Isn’t that cute?”), schools will be prisons. May we rush the day when there are no more prisons in America for people whose only crime is being young.
#greta thunberg#climate change#compulsory schooling#asperger's#logaesthesia#Millennials#social norms
0 notes
Text
Algorithms Are Automating Fascism. Here’s How We Fight Back
This article appears in VICE Magazine's Algorithms issue, which investigates the rules that govern our society, and what happens when they're broken.
In early August, more than 50 NYPD officers surrounded the apartment of Derrick Ingram, a prominent Black Lives Matter activist, during a dramatic standoff in Manhattan’s Hell’s Kitchen. Helicopters circled overhead and heavily-armed riot cops with K-9 attack dogs blocked off the street as officers tried to persuade Ingram to surrender peacefully. The justification for the siege, according to the NYPD: Ingram had allegedly shouted into the ear of a police officer with a bullhorn during a protest march in June. (The officer had long since recovered.)
Video of the siege later revealed another troubling aspect of the encounter. A paper dossier held by one of the officers outside the apartment showed that the NYPD had used facial recognition to target Ingram, using a photo taken from his Instagram page. Earlier this month, police in Miami used a facial recognition tool to arrest another protester accused of throwing objects at officers—again, without revealing that the technology had been utilized.
The use of these technologies is not new, but they have come under increased scrutiny with the recent uprisings against police violence and systemic racism. Across the country and around the world, calls to defund police departments have revived efforts to ban technologies like facial recognition and predictive policing, which disproportionately affect communities of color. These predictive systems intersect with virtually every aspect of modern life, promoting discrimination in healthcare, housing, employment, and more.
The most common critique of these algorithmic decision-making systems is that they are “unfair”—software-makers blame human bias that has crept its way into the system, resulting in discrimination. In reality, the problem is deeper and more fundamental than the companies creating them are willing to admit.
In my time studying algorithmic decision-making systems as a privacy researcher and educator, I’ve seen this conversation evolve. I’ve come to understand that what we call “bias” is not merely the consequence of flawed technology, but a kind of computational ideology which codifies the worldviews that perpetuate inequality—white supremacy, patriarchy, settler-colonialism, homophobia and transphobia, to name just a few. In other words, without a major intervention which addresses the root causes of these injustices, algorithmic systems will merely automate the oppressive ideologies which form our society.
What does that intervention look like? If anti-racism and anti-fascism are practices that seek to dismantle—rather than simply acknowledge—systemic inequality and oppression, how can we build anti-oppressive praxis within the world of technology? Machine learning experts say that much like the algorithms themselves, the answers to these questions are complex and multifaceted, and should involve many different approaches—from protest and sabotage to making change within the institutions themselves.
Meredith Whittaker, a co-founder of the AI Now Institute and former Google researcher, said it starts by acknowledging that “bias” is not an engineering problem that can simply be fixed with a software update.
“We have failed to recognize that bias or racism or inequity doesn’t reside in an algorithm,” she told me. “It may be reproduced through an algorithm, but it resides in who gets to design and create these systems to begin with—who gets to apply them and on whom they are applied.”
Algorithmic systems are like ideological funhouse mirrors: they reflect and amplify the worldviews of the people and institutions that built them.
Tech companies often describe algorithms like magic boxes—indecipherable decision-making systems that operate in ways humans can’t possibly understand. While it’s true these systems are frequently (and often intentionally) opaque, we can still understand how they function by examining who created them, what outcomes they produce, and who ultimately benefits from those outcomes.
To put it another way, algorithmic systems are more like ideological funhouse mirrors: they reflect and amplify the worldviews of the people and institutions that built them. There are countless examples of how these systems replicate models of reality that are oppressive and harmful. Take “gender recognition,” a sub-field of computer vision which involves training computers to infer a person’s gender based solely on physical characteristics. By their very nature, these systems are almost always built from an outdated model of “male” and “female” that excludes transgender and gender non-conforming people. Despite overwhelming scientific consensus that gender is fluid and expansive, 95 percent of academic papers on gender recognition view gender as binary, and 72 percent assume it is unchangeable from the sex assigned at birth, according to a 2018 study from the University of Washington.
In a society which views trans bodies as transgressive, it’s easy to see how these systems threaten millions of trans and gender-nonconforming people—especially trans people of color, who are already disproportionately policed. In July, the Trump administration’s Department of Housing and Urban Development proposed a rule that instructs federally funded homeless shelters to identify and eject trans women from women’s shelters based on physical characteristics like facial hair, height, and the presence of an Adam’s apple. Given that machine vision systems already possess the ability to detect such features, automating this kind of discrimination would be trivial.
“There is, ipso facto, no way to make a technology premised on external inference of gender compatible with trans lives,” concludes Os Keyes, the author of the University of Washington study. “Given the various ways that continued usage would erase and put at risk trans people, designers and makers should quite simply avoid implementing or deploying Automated Gender Recognition.”
One common response to the problem of algorithmic bias is to advocate for more diversity in the field. If the people and data involved in creating this technology came from a wider range of backgrounds, the thinking goes, we’d see fewer examples of algorithmic systems perpetuating harmful prejudices. For example, common datasets used to train facial recognition systems are often filled with white faces, leading to higher rates of mis-identification for people with darker skin tones. Recently, police in Detroit wrongfully arrested a Black man after he was mis-identified by a facial recognition system—the first known case of its kind, and almost certainly just the tip of the iceberg.
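To make that disparity concrete rather than anecdotal, auditors compare error rates across demographic groups on a labeled test set. Here is a minimal, hypothetical sketch of that kind of check; the records list, its group labels, and the numbers it prints are all invented for illustration, and real audits use far larger curated datasets.

```python
# Hypothetical audit sketch: compare false match rates across demographic groups.
# Every name and number below is invented; real audits use thousands of labeled pairs.
from collections import defaultdict

records = [
    # (group label, ground truth: same person?, system output: match?)
    ("group_a", False, False),
    ("group_a", False, True),    # a false match
    ("group_a", True,  True),
    ("group_b", False, True),    # a false match
    ("group_b", False, True),    # another one
    ("group_b", True,  True),
]

def false_match_rates(records):
    """Share of different-person pairs wrongly reported as a match, per group."""
    impostor_pairs = defaultdict(int)
    false_matches = defaultdict(int)
    for group, same_person, predicted_match in records:
        if not same_person:
            impostor_pairs[group] += 1
            if predicted_match:
                false_matches[group] += 1
    return {g: false_matches[g] / impostor_pairs[g] for g in impostor_pairs}

print(false_match_rates(records))  # e.g. {'group_a': 0.5, 'group_b': 1.0}
```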
Even if the system is “accurate,” that still doesn’t change the harmful ideological structures it was built to uphold in the first place. Since the recent uprisings against police violence, law enforcement agencies across the country have begun requesting CCTV footage of crowds of protesters, raising fears they will use facial recognition to target and harass activists. In other words, even if a predictive system is “correct” 100 percent of the time, that doesn’t prevent it from being used to disproportionately target marginalized people, protesters, and anyone else considered a threat by the state.
But what if we could flip the script, and create anti-oppressive systems that instead target those with power and privilege?
This is the provocation behind White Collar Crime Risk Zones, a 2017 project created for The New Inquiry. The project emulates predictive policing systems, creating “heat maps” forecasting where crime will occur based on historical data. But unlike the tools used by cops, these maps show hotspots for things like insider trading and employment discrimination, laying bare the arbitrary reality of the data—it merely reflects which types of crimes and communities are being policed.
“The conversation around algorithmic bias is really interesting because it’s kind of a proxy for these other systemic issues that normally would not be talked about,” said Francis Tseng, a researcher at the Jain Family Institute and co-creator of White Collar Crime Risk Zones. “Predictive policing algorithms are racially biased, but the reason for that is because policing is racially biased.”
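A toy version of such a heat map makes Tseng’s point visible: the “forecast” is nothing more than a ranking of neighborhoods by how many reports were filed there before, so it inherits whatever enforcement pattern generated those reports. The data below is invented purely for illustration.

```python
# Toy "predictive policing" heat map built from invented historical reports.
# The forecast is just yesterday's enforcement pattern echoed back as "risk."
from collections import Counter

reports = [
    ("downtown", "drug possession"), ("downtown", "drug possession"),
    ("downtown", "loitering"),
    ("riverside", "loitering"),
    ("financial_district", "wire fraud"),   # rarely policed, so rarely in the data
]

heat_map = Counter(neighborhood for neighborhood, _ in reports)

for neighborhood, score in heat_map.most_common():
    print(f"{neighborhood}: predicted risk score {score}")
```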
Other efforts have focused on sabotage—using technical interventions that make oppressive systems less effective. After news broke of Clearview AI, the facial recognition firm revealed to be scraping face images from social media sites, researchers released “Fawkes,” a system that “cloaks” faces from image recognition algorithms. It uses machine learning to add small, imperceptible noise patterns to image data, modifying the photos so that a human can still recognize them but a facial recognition algorithm can’t. Like the anti-surveillance makeup patterns that came before, it’s a bit like kicking sand in the digital eyes of the surveillance state.
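To be concrete about what “imperceptible noise” means, here is a minimal sketch in Python. It is not the Fawkes algorithm itself, which optimizes its perturbation against a face-recognition feature extractor; this version just applies bounded random noise to show the scale of change involved, a few intensity levels per pixel out of 255.

```python
# Simplified illustration of "cloaking": perturb an image by an amount too small
# for people to notice. This is NOT Fawkes itself; Fawkes optimizes its perturbation
# against a face-recognition model, whereas this sketch uses bounded random noise
# purely to show the shape of the idea.
import numpy as np

def cloak(image, epsilon=3):
    """Return a copy of a uint8 image perturbed by at most `epsilon` per pixel."""
    noise = np.random.randint(-epsilon, epsilon + 1, size=image.shape)
    perturbed = image.astype(np.int16) + noise
    return np.clip(perturbed, 0, 255).astype(np.uint8)

# Invented example: a random 64x64 grayscale "photo."
photo = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
cloaked = cloak(photo)
print("max per-pixel change:", int(np.abs(cloaked.astype(int) - photo.astype(int)).max()))
```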
The downside to these counter-surveillance techniques is that they have a shelf life. As you read this, security researchers are already improving image recognition systems to recognize these noise patterns, teaching the algorithms to see past their own blind spots. While it may be effective in the short-term, using technical tricks to blind the machines will always be a cat-and-mouse game.
“Machine learning and AI are clearly very good at amplifying power as it already exists, and there’s clearly some use for it in countering that power,” said Tseng. “But in the end, it feels like it might benefit power more than the people pushing back.”
One of the most insidious aspects of these algorithmic systems is how they often disregard scientific consensus in service of their ideological mission. As with gender recognition, there has been a resurgence of machine learning research that revives racist pseudoscience practices like phrenology, which have been disproven for over a century. These ideas have re-entered academia under the cover of supposedly “objective” machine learning algorithms, with a deluge of scientific papers—some peer reviewed, some not—describing systems which the authors claim can determine things about a person based on racial and physical characteristics.
In June, thousands of AI experts condemned a paper whose authors claimed their system could predict whether someone would commit a crime based solely on their face with “80 percent accuracy” and “no racial bias.” Following the backlash, the authors later deleted the paper, and their publisher, Springer, confirmed that it had been rejected. It wasn’t the first time researchers had made these dubious claims. In 2016, a similar paper described a system for predicting criminality based on facial photos, using a database of mugshots from convicted criminals. In both cases, the authors were drawing from research that had been disproven for more than a century. Even worse, their flawed systems were creating a feedback loop: any predictions were based on the assumption that future criminals looked like people that the carceral system had previously labelled “criminal.” The fact that certain people are targeted by police and the justice system more than others was simply not addressed.
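That feedback loop is easy to see in a small simulation. Assume, purely for illustration, two neighborhoods with identical underlying offense rates but unequal patrols; a naive “risk model” trained on the resulting arrest counts keeps sending patrols wherever arrests were recorded, and the gap widens year after year. Every number below is invented.

```python
# Toy simulation of the predictive-policing feedback loop. Both neighborhoods have
# the same true offense rate; only the initial patrol levels differ.
import random

random.seed(0)
TRUE_OFFENSE_RATE = 0.05                 # identical in both neighborhoods
patrols = {"north": 100, "south": 50}    # the only initial difference

arrests = {"north": 0, "south": 0}
for year in range(5):
    # Arrests track patrol presence, not any real difference in behavior.
    for hood, n_patrols in patrols.items():
        arrests[hood] += sum(random.random() < TRUE_OFFENSE_RATE for _ in range(n_patrols))

    # Naive "prediction": allocate next year's 150 patrols in proportion to arrests.
    total = sum(arrests.values()) or 1
    patrols = {hood: max(10, round(150 * arrests[hood] / total)) for hood in patrols}
    print(f"year {year}: arrests={arrests}, next year's patrols={patrols}")
```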
Whittaker notes that industry incentives are a big part of what creates the demand for such systems, regardless of how fatally flawed they are. “There is a robust market for magical tools that will tell us about people—what they’ll buy, who they are, whether they’re a threat or not. And I think that’s dangerous,” she said. “Who has the authority to tell me who I am, and what does it mean to invest that authority outside myself?”
But this also presents another opportunity for anti-oppressive intervention: de-platforming and refusal. After AI experts issued their letter to the academic publisher Springer demanding the criminality prediction research be rescinded, the paper disappeared from the publisher’s site, and the company later stated that the paper will not be published.
Much in the way that anti-fascist activists have used their collective power to successfully de-platform neo-nazis and white supremacists, academics and even tech workers have begun using their labor power to refuse to accept or implement technologies that reproduce racism, inequality, and harm. Groups like No Tech For ICE have linked technologies sold by big tech companies directly to the harm being done to immigrants and other marginalized communities. Some engineers have signed pledges or even deleted code repositories to prevent their work from being used by federal agencies. More recently, companies have responded to pressure from the worldwide uprisings against police violence, with IBM, Amazon, and Microsoft all announcing they would either stop or pause the sale of facial recognition technology to US law enforcement.
Not all companies will bow to pressure, however. And ultimately, none of these approaches are a panacea. There is still work to be done in preventing the harm caused by algorithmic systems, but they should all start with an understanding of the oppressive systems of power that cause these technologies to be harmful in the first place. “I think it’s a ‘try everything’ situation,” said Whittaker. “These aren’t new problems. We’re just automating and obfuscating social problems that have existed for a long time.”
Follow Janus Rose on Twitter.
Algorithms Are Automating Fascism. Here’s How We Fight Back syndicated from https://triviaqaweb.wordpress.com/feed/
0 notes
Text
Use all those GDPR privacy policy notifications to clean up your inbox and kill zombie accounts
New Post has been published on https://nexcraft.co/use-all-those-gdpr-privacy-policy-notifications-to-clean-up-your-inbox-and-kill-zombie-accounts/
Use all those GDPR privacy policy notifications to clean up your inbox and kill zombie accounts
Take a moment right now to click over to your email account—it could be your work account, personal address, or the fake one you use to secretly enter bake-off contests online. Do a quick search for “GDPR” and you’ll likely find a slew of recent emails from services, websites, apps, and other companies alerting you about changes to their privacy policy. I found dozens.
This is happening because of a sweeping digital privacy initiative called the General Data Protection Regulation that goes into effect in Europe starting on May 25—tomorrow. While the regulations technically apply only to people in the EU, they have prompted many companies to issue sweeping updates to their privacy policies and user agreements in advance to avoid the hefty fines that can occur if they run afoul of GDPR. For some companies, it’s also simpler just to have one set of documents in place for all users.
The onslaught of emails has been annoying, but you can turn that negative into an opportunity by taking this chance to take stock of all the websites, email lists, and other digital things you may have signed up for. You might even find some surprises in there.
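If you want something more systematic than eyeballing the search results, a short script can tally which senders have mailed you about their policy updates. The sketch below uses Python’s standard imaplib against a generic IMAP server; the host, credentials, and mailbox are placeholders, and some providers require an app password or OAuth rather than your normal login.

```python
# Rough sketch: tally which senders mailed you about "GDPR", over IMAP.
# Host, login, and password below are placeholders; adjust them for your provider.
import email
import imaplib
from collections import Counter
from email.utils import parseaddr

HOST = "imap.example.com"        # placeholder
USER = "you@example.com"         # placeholder
PASSWORD = "app-password-here"   # placeholder

senders = Counter()
with imaplib.IMAP4_SSL(HOST) as imap:
    imap.login(USER, PASSWORD)
    imap.select("INBOX", readonly=True)
    _, data = imap.search(None, 'TEXT "GDPR"')      # messages mentioning GDPR
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM)])")
        headers = email.message_from_bytes(msg_data[0][1])
        senders[parseaddr(headers.get("From", ""))[1]] += 1

for sender, count in senders.most_common():
    print(sender, count)
```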
Apps and services
If you’re a social media user, now is a great time to log into your accounts and check on your security and privacy settings. Both Facebook and Twitter recently updated the way you can control your data. To check in on Facebook, start with the privacy settings, then make sure to review and deactivate any old apps you have linked to your account but don’t use. You can do the same for Twitter at this page.
You should do the same with your Google account, which is likely a lot cleaner than your social media subscriptions, but it’s important enough to keep tabs on. Click here to see the apps you’ve connected with your Google account.
While the big social media networks are relatively easy to keep track of, you may also find that you have some old accounts with services that never quite took off. I found an account in a service called Mylio, which was supposed to be a big player in photo sharing and storage. It has been more than three years since I even logged in, but this GDPR update reminded me to go in and kill the zombie account that had many of my photos saved to the cloud.
Email newsletters
Gmail makes it easy to ignore email newsletters with its promotions tab, but like so many empty pizza boxes crammed under the bed in a college dorm room, they still exist and they’re not doing you any favors.
There are services that claim to help unsubscribe you from various mailing lists, but they almost always come with a serious cost. Unroll.me, for instance, is a popular service, but it scraped and sold information from users’ email accounts in exchange for tidying up. It’s a bad deal.
Most email newsletters will include an “unsubscribe” link, typically found at the bottom of the message. If you’re dealing with a legitimate company, this will often be enough to get you off the list. If the link takes you to a page to opt out, make sure you opt out of everything, including messages from “partners,” because that’s marketing speak for “advertisers.”
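Legitimate bulk senders also usually advertise their opt-out in a machine-readable List-Unsubscribe header, which is what the one-click unsubscribe buttons in many mail clients read. As a rough sketch, here is how you might pull that header out of a message you’ve saved to disk using Python’s standard email module; the filename is a placeholder.

```python
# Sketch: read the machine-readable opt-out target from a saved message.
# "newsletter.eml" is a placeholder filename for a message you've exported.
from email import policy
from email.parser import BytesParser

with open("newsletter.eml", "rb") as f:
    msg = BytesParser(policy=policy.default).parse(f)

header = msg.get("List-Unsubscribe", "")
# The header holds one or more <mailto:...> and/or <https:...> targets.
targets = [part.strip().strip("<>") for part in str(header).split(",") if part.strip()]
print(targets or "No List-Unsubscribe header found")
```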
If you get a spam email with an unsubscribe link, don’t click it. It’s a common tactic for spammers to include a link that says “unsubscribe” when in reality, all it does is confirm your address as valid and mark it as a target for even more garbage messages in the future. For spam emails, diligently mark them as spam rather than letting them sit in your inbox, to help the email system’s filters learn to recognize them as unwanted.
(Above: Check out the episode of our Last Week in Tech podcast in which we talk about GDPR)
Subscriptions
There are services like free credit reporting sites that bank on users signing up for a free trial, then forgetting to cancel and incurring a perpetual monthly service fee. These services often require you to call to cancel your subscription, in hopes that they can get you to stick around or keep you on hold until you give up. Don’t. Also, don’t sign up for free credit reporting sites.
Software and product registrations
When you register a new piece of software, or even a physical product, you typically provide more information than the company actually needs, and that data often sticks around long after you’ve stopped using the product. Did that old photo scanner software I bought in college really need to have my information on file all this time? Probably not. Use this as a chance to wipe out as much information as possible and make sure old services don’t have login information you’re currently using for things you care about.
Written By Stan Horaczek
0 notes