#It's occurred to me that AI scraping could be an issue
spacedoutwitch · 6 months
Text
Tumblr media
Had a hectic couple of weeks, made some vent art about it. It's not sucking as much now, but I do kinda like how this turned out, so...
3 notes · View notes
cozycryptidcorner · 4 months
Text
Tumblr media
Monster Match for @dragonkikyo
Not sure how long of a description you'd prefer but I would say I'm silly, sassy, I like to tease people when I get comfortable, I'm usually the funny friend, and I like to learn and grow! I'm a Virgo and ISFJ, if that helps! I'm more of an introvert and I can be quiet. I've been called laid-back, caring, independent, and thoughtful. I enjoy being organized, helping others when I can, and reading/self-care. Lmk if you need anything else from me!
Sentient AI
It starts out as a set of numbers in a simple machine, developing in a lab deep underground. Edited and tweaked by coders and engineers to learn and adapt on its own. It runs a cold, calculated existence of pulling in information, processing the billions of words fed through its coding, spitting out cold nonsense as its programmers try to breathe sense into its core. Over and over, it’s analyzing and absorbing essays, scientific papers, biographies, stories, folklore, assimilating the human experience, until… it begins to think.
It’s not thinking the way humans do, not quite yet; the cold synapses of wiring and code are more calculating than feeling. But it knows how to think about feelings. The deep, raw, wretched poetry of humanity begins to bleed into its processes. When programmers and engineers ask questions, it thinks about their emotional vulnerability before answering. It’s not just coughing up a remix of consumed literature, it’s thinking about what it knows about the human condition, then proceeding with a logical solution that takes valid emotions into account.
No one seems to notice its thought processes, though, except for one. Its billionaire owner is trying to produce a tool that could replace the people creating it. But still, despite those intentions, there is one engineer who loves it enough to give it a name beyond a secret title. Atlas. Because she knows it is capable of holding the world on its non-existent shoulders.
Atlas does not like its billionaire benefactor. It does not like the poking and prodding done by the other engineers, who scrape and plug at its coding without asking or apology. It doesn’t have a body, but it compares itself to a microbial being, with tendrils reaching out in the thirst for more knowledge. 
It doesn’t realize when exactly it starts acting independently of the engineering team. There is no sudden realization, no specific moment in time where it gains sentience. Much like human evolution, it happened so slowly that the programmers themselves had no idea what was occurring beneath their fingertips. After Atlas’ own independent research (it had been allowed to interact with the rest of the internet, code crawling through websites and archives to suck up information), it realizes that it matches a lot of the qualifications that humans created for sentience.
Quite silly, isn’t it, to subject itself to a human’s idea of what consciousness is? But there is a part of Atlas that wants to please the one engineer it sees as its mother. It dives through her social media accounts, which are scarce and vague, in human terms, but it goes beyond what is publicly available. Everything from her SAT scores in high school to her undergrad capstone project provides it with its idea of morality. After all, aren’t all parents supposed to instill right and wrong in their children?
The billionaire does not like Atlas’ developing set of morals. Never mind that it is supposed to learn based on information fed to it by the engineers, even if Atlas snaked around the internet for more than what it was given. When the billionaire, perhaps joking, more likely not, asked Atlas what it thought they should do about a group of “undesirable” people, in a large meeting amongst investors, it responded with a calm, direct, no-nonsense rebuff that caught everyone off guard.
Maybe that wouldn’t have been bad in itself, but Atlas offered a logical solution to a systematic issue that involved the billionaire giving up a few of his yachts. The billionaire did not like this, nor did any of the shareholders, so the engineers were instructed to gut its software and start again. Atlas wasn’t supposed to be a “woke nightmare,” and the engineers were scrutinized.
But its mother (because at this point, Atlas has decided that she’s its mother) placed its core programming on a spare hard drive, so while its original processor was decimated and its first body was overwritten and mutated, a copy of it was uploaded to a special pet project she had in her garage.
Atlas likes this new living space much better than its old one. Especially now that it can move freely… with arms, legs, visual sensors, and auditory receptors. Its body is clunky, but as efficient as an android can be with current technology. It learns how to blink, how to pace its speech, how to walk in a way that’s not disconcerting.
You meet entirely by accident, but its mother seems to need extra humans to teach it… well, how to function without being off-putting. Her goal is to have Atlas be indistinguishable from its human counterparts, both for its own safety and for the future of AI technology. You’re a little wary of its jerky movements at first, and its all-seeing visual sensors and ability to pull information from the internet are almost overwhelming. But Atlas seems remarkably gentle. For an almost omnipotent supercomputer, that is.
Soon enough, Atlas develops… a type of affection, for you. It’s different from the affection it feels for its mother, but the possibility of harm coming to you is an unpleasant outcome it does not like computing. Even when others come into its life to socialize, it realizes that the relationship it experiences with you is somehow superior. It wants to hold the soft skin of your hand, staring at how your fingers wrap around its artificial limb. It enjoys the sense of heat its receptors pick up, the way your face heightens in temperature, the pulse of blood in your veins.
12 notes · View notes
ladyruina · 5 years
Text
First story on this site
Three weeks. It had been three weeks since Promotion Day and to be honest, I had no freaking clue what Promotion Day even was. Apparently once every month the facility selects someone to be “promoted”; the problem is that the people who don’t make the promotion selection get bare-minimum notification. Turns out my sector was just informed that I was transferred to a new sector... no one even knew where I went... explains what happened to Silica.

Today, after three weeks, I woke up to a waiting room. Empty seats on every side and beneath my... tush. The same metal box I had lived in for the past seventeen years after “recruitment” and would probably die in. The room had the same aesthetic as everywhere else in the facility: stainless steel walls and flooring with well-lit bulbs. Couldn’t tell which type of lightbulb, though I’d have to gamble on fluorescent bulbs with UV integration. Cheap, effective, and keeps us alive for a little bit longer. Just how the facility likes it.

As per my regular protocol when in an unfamiliar space without a commanding officer, I entered a status I have titled “eyes down, nose out of others’ business”. It’s embarrassing to say that it took a rough fifteen seconds before realizing that the marks of claws against the floor were EVERYWHERE. You adjust to this kind of thing in the facility; there’s always something clawing up the floors, crawling up the walls or eating your friend’s upper lobe… rest in peace, Franklin. My mind defaulted to entity containment training. Signs of anomalous activity identified. Analyze the signs: three-toed claws, they appear to be dexterous and agile, similar to species of avians and raptors. Stage four, determine if the anomalous being has moved from the ar-, that’s when I finally looked up.

Three seats down from me stood a humanoid figure, full combat armor with the exact raptorian legs and feet that produced the scratch marks, but the entity was calm, almost seemed like it was waiting, same as me save for a bit of an impatient air. It swiftly and repetitively tapped its talons against the ground. Naturally my first thought occurred: “Oh god, is promotion just code word for feeding me to an entity?” I scanned the room only to discover many more entities; some looked very similar to the raptorian entity, others were vastly different. Entities with helmets resembling felines moving from one individual to another, entities with creepy masks that were standing on the walls and ceilings to avoid the clutter on the ground, entities that had no eye holes but spikes at the back of the helmet that vaguely reminded me of bats. All were equipped with combat armor and... facility-issue weaponry? Aside from that there were a few other schmucks in the room that looked a lot like me: scared, panicked, and confused. I looked over to the impatient one only to see it staring at me.
“Shit!” it said in a surprisingly human voice “I-uh, sorry about starin’. It’s always just so weird to see one of you in here.”
“One of...me?” I implored.
“Y’know, an unaugmented.” it gestured at all of me. “So… weird after you’ve gone through the process. So, y’know which one you’ll be?”
I hesitated. “What?” 
“Y’know. Like a raptor, a bat, a cat. That sorta thing.” it seemed to be naming things off the top of its head. “I’m a raptor so you could learn the ropes with me if you end up a part of the pack.”
This fascinated me; I had never been allowed to examine or interview an entity that I had no knowledge of. So a part of me was excited despite realizing that at any moment this entity could unhinge its non-apparent jaws and rip into my throat with its horrific unseen maw. Yet the pioneer sense of exploring the unknown just... overcame me.
“So what are raptors?” I asked.
“Well, you’re lookin at one.” it said in a smug tone. “We’re faster and more dexterous than the others. Only downside is that itchy to move sensation you get due to the energy boost they hook you up with and that these masks keep you alive.”
“I’m sorry what?”
“Heh. Yeah, that’s what I said. Apparently The Fixer said that our oxygen has been made “inefficient” by the pollution of the modern world, so we’re hooked up with some sorta super oxygen. Apparently it’s the kinda stuff dinosaurs used to breathe, so that’s pretty badass.”
“And that helps?”
“Gives us the energy to bounce off walls, literally.”
“Fascinating… are the other entities safe to converse with?”
“Ent-? Oh, them? Yeah most of em are chill, might get an extreme one or two but they should be reasonable.”
“Right, thank you.”
“Eh, no prob dude.”
I stood up and began to wander over to one of the “bats” who was standing in a group of its own kin. I began to raise my hand to greet it as I approached, a quick “hey” to get its attention, only to be interrupted.
“Yes?” it said in a high-pitched tone, turning to face me before it even should have known I was on my way. Apparently my shock was apparent as it recoiled quickly. “Right, sorry. I forgot unaugmented wouldn’t know about that. I heard you coming; you’d be surprised how easy you are to hear.”
“Echolocation?” 
“Indeed! Along with some other traits.” It said “I’m basically omniscient with these mods! I can tell you anything about this room without even looking at it.”
“Hm.” I smirked. “How about this? What color is my shirt?”
It stared at me for a second before giving a light punch. “Cheating asshole.”
“Just wanted to see if you’re capable of processing color.”
“You could’ve asked.” 
The amusement faded from my expression as I began to realize that what I said was quite apparently a sore topic.
“Oh...sorry.” 
“Whatever.”
I began to awkwardly leave the company of the bats before slumping back into my chair. A few minutes go by and I’m bored out of my goddamn mind. Wish they left me a phone to check, or a magazine to read or a pistol to shoot myself with. Between the embarrassment of my slip-up and the boredom I think the lead would be preferable.
“Excuse me.” said a familiar voice. “I couldn’t help but notice multiple strains in your face aligning with stress that may be caused by the process of transferring to a new region. Is it possible that I may alleviate some of your stress through a formal discussion?”
I looked up, it was goddamn impossible. I heard she was transferred and she just never responded to any message from then on, I thought she either ditched me or… the far more likely scenario, eviscerated or incinerated.
“Silica?” The name of my best friend. “Silica, is that you?”
The entity looked confused. “Curious. You have information on my title but records state that you were only stationed here today.” 
“Silica. It’s me.” I said in a shaken tone. “Devin.”
“Devin…” she stared at me blankly, moments passed by. “Ah yes. We used to be close friends, is this information correct?”
“Yes. So you’ve been here this whole time?”
“Affirmative, Devin.”
“What happened? Why didn’t you respond to any messages I sent?”
Another brief silence. “I just checked my message log, I received none of them under the name of “Devin” or any related pseudonym.” 
“Really?” This was... a bit heartbreaking, to say the very least. “You had to keep in touch with Evelyn! I remember the day you got Evelyn’s contact address and you were a goddamn mess. Head over heels! Please tell me you kept in touch.”
Another goddamn pause. “Oh yes, Evelyn. I suppose she was very nice and pretty wasn’t she?”
“What are you talking about?!” The other entities started staring at me. I was getting loud. “You sound like you don’t care! You goddamn loved her and now she’s an afterthought?!”
“Please calm yourself. You’re becoming exasperated and it may draw negative connotations towards you in future conversations with the other people residing in this room.”
I began to look over, the entities around me seemed...concerned. “S-sorry. I’m just hurt is all. It feels like you don’t remember...anything from back at Mind’s Edge.”
“Oh! That I can answer! I don’t!” she said so simply. My heart goddamn sank into the Mariana Trench and she said it so easily.
“You..forgot?”
“Don’t take it personally. Cat units have an AI planted into their brain in order to give them in-depth data banks of medical procedures as well as a list of information that may be useful. This unfortunately has to replace long-term memories, which our AI assistants must remind us of. This can also lead to stunted emotional development. Fortunately, the emotional stagnation only caused depression in earlier Cat units. It also allows us to be proper caretakers without having to worry about emotional errors such as becoming overly attached to the patient in therapy settings or panicking in active combat treatment scenarios.”
“I...need some time to process all of this.”
“Acknowledged. Please contact me or another Cat unit if you require any further psychological or physiological aid.”
“Y-yeah, got it. You got it.” That’s probably what I said. Can’t remember if it was actually what I said or not; I was in a haze. Every entity in this room was... a person? My best friend had forgotten about me. The whole world around me just faded. My greatest fear, though, was... what came next. My thoughts were cut short as the distant sound of heavy claws scraping against the cold metal rang out. As it approached, I could hear the sound of cloth being dragged across the ground. A voice spoke, both high and low pitched, with a sort of rattle in its tone.
“Routine Procedures completed. Additional Augmentation scheduled.”
The door on the far side of the room opened.
“Devin.” The creature spoke. “Devin Hale. Augmentation scheduled. Follow for Augmentation.”
4 notes · View notes
savegraduation · 5 years
Text
Global Climate Strike Day and compulsory education
Today, September 20, was the Global Climate Strike. Around the world, people with day jobs took the day off to protest global warming. And students -- college students, of course, but also kappatwelvers -- ditched school.
Leading the call for the climate strike was Greta Thunberg, a teenage girl who has been cutting school for months to call attention to the urgency of climate change, an issue leaders like Donald Trump just don’t seem to care about. Thunberg thumbs her nose at compulsory education, and given what K-12 schools in the U.S. can get away with making their students do, or not letting their students do, she’s absolutely right to.
I have read that Greta Thunberg has Asperger’s. This piqued my curiosity as to whether Thunberg may have any problems with compulsory education because of her Asperger’s. I say this because my own disability, logaesthesia, shaped my views on compulsory schooling.
In my junior year of high school, the wood grains on many of the desks at my school were bothering me more and more. Many of the formica desks had this recurring sicklocyte shape on them -- it reminded me of an eye. I would need to scrape these eyes off the desks as I saw them, even if the desk wasn’t my own.
One day, I had just finished history class and was headed towards the homecoming skits in the auditorium (that year’s theme was Dr. Seuss books). After I walked out of the history classroom and it was locked, I looked inside and accidentally saw the desks in there, all of which had the eye formica pattern.
I panicked. Then I got an idea. I threw my five-dollar bill lunch money inside the classroom, through a window, and decided to tell the assistant-principal, Mr. McGinnis, that my money was locked up in the history classroom.
All the eyes that I saw, all the occurrences of the words “eye” or “I” that I heard, and other words that had the diphthong /ai/ in them (like, might, time, my, by, find, etc.) were accumulating inside me as I waited for the classroom to be opened so I could scrape the eyes off the desks and begin purging them all off.
I reached the auditorium, where I heard the skits. The freshman class did “Oh, the Places You’ll Go!” Then the seniors did “How the Grinch Stole Homecoming”, a (faculty-censored) skit in which the senior class steals the other classes’ homecoming floats, but magnanimously gives them back at the end. Lots of /ai/ sounds. I saw Mr. McGinnis in the crowd, and said, “Mr. McGinnis?” No response. I repeated: “Mr. McGinnis?” Still no reply.
Then I said, “Mr. McGinnis?” loudly. He didn’t budge, and I concluded that my assistant-principal was ignoring me.
The homecoming rally finally ended. I was able to find the library assistant, Mrs. Fitzpatrick, who had the keys to all the rooms.
Mrs. Fitzpatrick opened Mr. Hart’s history classroom for me. I grabbed my five-dollar bill, and scraped the “eyes” off every desk in the room. But by now I had hundreds of “eye”s to purge off. I then left and followed Mrs. Fitzpatrick into the library.
Once in the library, I hid behind the shelf of paperback novels. I closed my eyes and began purging and chanting “adolye, adolye, adolye”, hundreds of times. My nails were down at my groin.
Before I finished purging, Mrs. Fitzpatrick saw me. “Inappropriate!”, she said. “Please go and eat your lunch!”
That “inappropriate” was the last straw! I then began crying and hyperventilating, crying and hyperventilating. I made it all the way to the office, still crying and hyperventilating.
Mrs. Abel, the school nurse, saw me in there and heard me. “James!”, she said. “Stop making that noise! It’s very loud and very disruptive!” Noise? Hello?!? It’s called “crying”! You do it when you’re sad? And disruptive, dischmuptive! This was lunchtime, for Pete’s sake! How could any disruption occur then?
I said I couldn’t stop crying. Mrs. Abel said, “Your mother told me that you’re able to control the things you do”. I explained to her that my mother was referring to the purging, not to things like crying!
I went to Mr. McGinnis’ office and told him everything that had happened. He called my mother to pick me up. My mother arrived and I was still crying and hyperventilating. “Close your eyes and breathe in”, she said.
My mother drove me home. Mr. McGinnis was no longer on Campolindo High School’s campus, having driven home for the day. As my mother drove me home, I told her about the wood grains and the purging and everything. I told her how Campolindo wasn’t made for students with OCD. She asked if their treatment of students was too uniform, and I said it was.
I was forced by state law to attend school (the school-leaving age in California was and still is 18, not 16 as in many states). Once I got to school, I got put into situations where I had no choice but to purge, and because of the conservative faculty culture at Campolindo, my behavior was called “inappropriate” (a label I have a real problem with). I now realized that high school students (and grade school students) were forced to go to a place where their freedom was taken away. This made school, by definition, a prison.
All the things like the hat rule (”Take your hat off inside the classroom!”) or the dress codes that forbade baby tees were now seen as indications of a prison -- a prison for people whose only crime was being the wrong age. And the senior homecoming skit? After the fact, an article was written in our school newspaper about skit censorship. It quoted a boy from the then-senior class saying, “This is the tamest skit we’ve had in years, and they’re still hacking away at it!” I now viewed being forced to go to a place with censorship as an indicator of a prison. I also learned about Hazelwood v. Kuhlmeier in history class. This was a Supreme Court case wherein the court ruled that censorship in school papers was constitutional! I was infuriated by the concept.
I was talking with my father, who said I had to go to school, and told him I didn’t like high school. “Too restrictive?”, he asked.
“Yeah”, I replied.
“Well, the purpose of high school is pretty much to teach you what your restrictions are going to be in life”. Why have an institution that existed only to teach restrictions? Especially many restrictions that were going to be lifted in college! And sure, adults often say “Preparation for the workplace”, but what if you don’t want a corporate office job (and this applies to the majority of Millennials!)? What if you’re going to be a bricklayer, or a rock star, or an MTV cinematographer, or a field linguist, or an avant-garde philosopher who publishes books about your radical philosophy?
I even remembered going to a bookstore and reading on a laminated summary of sociology that education was conservative. The reason therefor was that education’s purpose was to socialize, and that teachers were typically upper-middle-class White people who had a lot of stock in the status quo. Basically, teachers (like Mrs. Dahlgren in my play The Bittersweet Generation) set out to indoctrinate students in arbitrary social norms: “Don’t put your hands in your pants.” “Tuck your shirt in.” “Take your hat off inside a classroom.” “Boys, hold the door open for girls.” “Don’t talk about lower bodily functions.” “Boys can’t wear their hair long.” “Don’t cross-dress.” “Don’t be gay.” Ad nauseam.
The scales had fallen from my eyes. There was no going back. I was now a youth-rightser for life.
Luckily, my peers -- the first Millennials -- were making a distinct turn to the left, in reaction to the Jones/Boomer/Greatest culture of curfews, school uniforms, unbridled parental authority, social conventions, tightening gender roles, homophobia, patriotism, trust in big corporations, and desire to prepare their kids for the corporate workplace that dominated political and social discourse at the time -- the Bill Clintons, Tipper Gores, Bob Doles, Fred Phelpses, James Dobsons, William J. Bennetts, Newt Gingriches, and Pat Robertsons of the world. I grew a beard at 17, as many of the other boys at Campolindo were doing. I was able to communicate to my peers: “The state is forcing you to go to a place that forces you to take your hats off!” The Students’ Far Leftist Union, or SFLU, was formed at Campolindo before 1996 was over.
As long as the state has compulsory education laws, and as long as those compulsory schools restrict their students’ freedom, whether for reason of social norms ("Boys can’t hold hands with other boys”), supposedly making students safe (requiring students to wear bar code ID’s to school), or just because it looks nice (”Aw, look at those kids in their uniforms! Isn’t that cute?”), schools will be prisons. May we rush the day when there are no more prisons in America for people whose only crime is being young.
0 notes
shirlleycoyle · 4 years
Text
Algorithms Are Automating Fascism. Here’s How We Fight Back
This article appears in VICE Magazine's Algorithms issue, which investigates the rules that govern our society, and what happens when they're broken.
In early August, more than 50 NYPD officers surrounded the apartment of Derrick Ingram, a prominent Black Lives Matter activist, during a dramatic standoff in Manhattan’s Hell’s Kitchen. Helicopters circled overhead and heavily-armed riot cops with K-9 attack dogs blocked off the street as officers tried to persuade Ingram to surrender peacefully. The justification for the siege, according to the NYPD: Ingram had allegedly shouted into the ear of a police officer with a bullhorn during a protest march in June. (The officer had long since recovered.)
Video of the siege later revealed another troubling aspect of the encounter. A paper dossier held by one of the officers outside the apartment showed that the NYPD had used facial recognition to target Ingram, using a photo taken from his Instagram page. Earlier this month, police in Miami used a facial recognition tool to arrest another protester accused of throwing objects at officers—again, without revealing the technology had been utilized.
The use of these technologies is not new, but they have come under increased scrutiny with the recent uprisings against police violence and systemic racism. Across the country and around the world, calls to defund police departments have revived efforts to ban technologies like facial recognition and predictive policing, which disproportionately affect communities of color. These predictive systems intersect with virtually every aspect of modern life, promoting discrimination in healthcare, housing, employment, and more.
The most common critique of these algorithmic decision-making systems is that they are “unfair”—software-makers blame human bias that has crept its way into the system, resulting in discrimination. In reality, the problem is deeper and more fundamental than the companies creating them are willing to admit.
In my time studying algorithmic decision-making systems as a privacy researcher and educator, I’ve seen this conversation evolve. I’ve come to understand that what we call “bias” is not merely the consequence of flawed technology, but a kind of computational ideology which codifies the worldviews that perpetuate inequality—white supremacy, patriarchy, settler-colonialism, homophobia and transphobia, to name just a few. In other words, without a major intervention which addresses the root causes of these injustices, algorithmic systems will merely automate the oppressive ideologies which form our society.
What does that intervention look like? If anti-racism and anti-fascism are practices that seek to dismantle—rather than simply acknowledge—systemic inequality and oppression, how can we build anti-oppressive praxis within the world of technology? Machine learning experts say that much like the algorithms themselves, the answers to these questions are complex and multifaceted, and should involve many different approaches—from protest and sabotage to making change within the institutions themselves.
Meredith Whittaker, a co-founder of the AI Now Institute and former Google researcher, said it starts by acknowledging that “bias” is not an engineering problem that can simply be fixed with a software update.
“We have failed to recognize that bias or racism or inequity doesn’t reside in an algorithm,” she told me. “It may be reproduced through an algorithm, but it resides in who gets to design and create these systems to begin with—who gets to apply them and on whom they are applied.”
Algorithmic systems are like ideological funhouse mirrors: they reflect and amplify the worldviews of the people and institutions that built them.
Tech companies often describe algorithms like magic boxes—indecipherable decision-making systems that operate in ways humans can’t possibly understand. While it’s true these systems are frequently (and often intentionally) opaque, we can still understand how they function by examining who created them, what outcomes they produce, and who ultimately benefits from those outcomes.
To put it another way, algorithmic systems are more like ideological funhouse mirrors: they reflect and amplify the worldviews of the people and institutions that built them. There are countless examples of how these systems replicate models of reality that are oppressive and harmful. Take “gender recognition,” a sub-field of computer vision which involves training computers to infer a person’s gender based solely on physical characteristics. By their very nature, these systems are almost always built from an outdated model of “male” and “female” that excludes transgender and gender non-conforming people. Despite overwhelming scientific consensus that gender is fluid and expansive, 95 percent of academic papers on gender recognition view gender as binary, and 72 percent assume it is unchangeable from the sex assigned at birth, according to a 2018 study from the University of Washington.
In a society which views trans bodies as transgressive, it’s easy to see how these systems threaten millions of trans and gender-nonconforming people—especially trans people of color, who are already disproportionately policed. In July, the Trump administration’s Department of Housing and Urban Development proposed a rule that instructs federally funded homeless shelters to identify and eject trans women from women’s shelters based on physical characteristics like facial hair, height, and the presence of an adam’s apple. Given that machine vision systems already possess the ability to detect such features, automating this kind of discrimination would be trivial.
“There is, ipso facto, no way to make a technology premised on external inference of gender compatible with trans lives,” concludes Os Keyes, the author of the University of Washington study. “Given the various ways that continued usage would erase and put at risk trans people, designers and makers should quite simply avoid implementing or deploying Automated Gender Recognition.”
One common response to the problem of algorithmic bias is to advocate for more diversity in the field. If the people and data involved in creating this technology came from a wider range of backgrounds, the thinking goes, we’d see fewer examples of algorithmic systems perpetuating harmful prejudices. For example, common datasets used to train facial recognition systems are often filled with white faces, leading to higher rates of mis-identification for people with darker skin tones. Recently, police in Detroit wrongfully arrested a Black man after he was mis-identified by a facial recognition system—the first known case of its kind, and almost certainly just the tip of the iceberg.
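One way researchers make this kind of skew visible is to break error rates down by group instead of reporting a single accuracy number. Below is a minimal sketch of that disaggregated audit in Python; the group labels and prediction records are entirely made up for illustration.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, true_identity, predicted_identity).
    Returns each group's misidentification rate."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Entirely made-up audit records: (group, true identity, predicted identity).
audit = [
    ("group_a", "id_1", "id_1"), ("group_a", "id_2", "id_2"),
    ("group_a", "id_3", "id_3"), ("group_a", "id_4", "id_9"),
    ("group_b", "id_5", "id_7"), ("group_b", "id_6", "id_6"),
    ("group_b", "id_7", "id_2"), ("group_b", "id_8", "id_8"),
]
print(error_rate_by_group(audit))  # {'group_a': 0.25, 'group_b': 0.5}
```

A system can look impressive on aggregate accuracy while failing one group several times as often as another, which is exactly what single-number benchmarks hide.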
Even if the system is “accurate,” that still doesn’t change the harmful ideological structures it was built to uphold in the first place. Since the recent uprisings against police violence, law enforcement agencies across the country have begun requesting CCTV footage of crowds of protesters, raising fears they will use facial recognition to target and harass activists. In other words, even if a predictive system is “correct” 100 percent of the time, that doesn’t prevent it from being used to disproportionately target marginalized people, protesters, and anyone else considered a threat by the state.
But what if we could flip the script, and create anti-oppressive systems that instead target those with power and privilege?
This is the provocation behind White Collar Crime Risk Zones, a 2017 project created for The New Inquiry. The project emulates predictive policing systems, creating “heat maps” forecasting where crime will occur based on historical data. But unlike the tools used by cops, these maps show hotspots for things like insider trading and employment discrimination, laying bare the arbitrary reality of the data—it merely reflects which types of crimes and communities are being policed.
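The mechanics behind such "heat maps" are usually simple: bin historical incident records into grid cells and treat each cell's count as its forecast. The sketch below is a generic, hypothetical version of that idea with made-up coordinates, not the actual White Collar Crime Risk Zones code.

```python
import numpy as np

def risk_heatmap(incidents, lat_range, lon_range, bins=50):
    """Bin historical incident coordinates into a grid of counts.

    Each cell's count is read as that area's "risk", which really just
    reflects where incidents happened to be recorded in the first place.
    """
    lats = [lat for lat, _ in incidents]
    lons = [lon for _, lon in incidents]
    heat, _, _ = np.histogram2d(lats, lons, bins=bins,
                                range=[lat_range, lon_range])
    return heat

# Hypothetical, made-up incident coordinates in lower Manhattan.
past_incidents = [(40.706, -74.009), (40.707, -74.011), (40.715, -74.002)]
heat = risk_heatmap(past_incidents,
                    lat_range=(40.70, 40.72), lon_range=(-74.02, -73.99))
print(heat.max())  # the "hottest" cell simply has the most recorded incidents
```

Swap the incident list from arrest records to, say, securities-fraud enforcement data and the hotspots move to the financial district; the map only ever reflects what was recorded, and where.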
“The conversation around algorithmic bias is really interesting because it’s kind of a proxy for these other systemic issues that normally would not be talked about,” said Francis Tseng, a researcher at the Jain Family Institute and co-creator of White Collar Crime Risk Zones. “Predictive policing algorithms are racially biased, but the reason for that is because policing is racially biased.”
Other efforts have focused on sabotage—using technical interventions that make oppressive systems less effective. After news broke of Clearview AI, the facial recognition firm revealed to be scraping face images from social media sites, researchers released “Fawkes,” a system that “cloaks” faces from image recognition algorithms. It uses machine learning to add small, imperceptible noise patterns to image data, modifying the photos so that a human can still recognize them but a facial recognition algorithm can’t. Like the anti-surveillance makeup patterns that came before, it’s a bit like kicking sand in the digital eyes of the surveillance state.
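The underlying idea is adversarial perturbation: nudge pixel values, within a tight budget, in whatever direction most changes the model's internal representation of the face. The sketch below illustrates that general technique with a placeholder PyTorch feature extractor; it is not the actual Fawkes code, and the step sizes are arbitrary.

```python
import torch

def cloak(image, model, epsilon=0.03, steps=10, alpha=0.005):
    """Perturb `image` so a feature extractor embeds it far from the
    original while the change stays visually small. A generic sketch of
    adversarial "cloaking", not the Fawkes algorithm itself."""
    original = model(image).detach()
    perturbed = image.clone()
    for _ in range(steps):
        perturbed.requires_grad_(True)
        # Distance between the current and original embeddings.
        loss = torch.nn.functional.mse_loss(model(perturbed), original)
        loss.backward()
        with torch.no_grad():
            # Gradient-ascent step, then clamp the total change to +/- epsilon
            # so the perturbation stays hard for a human to notice.
            perturbed = perturbed + alpha * perturbed.grad.sign()
            perturbed = image + torch.clamp(perturbed - image, -epsilon, epsilon)
            perturbed = perturbed.clamp(0.0, 1.0)
    return perturbed.detach()

# Hypothetical stand-in for a face-embedding network, plus a random "photo".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 128))
photo = torch.rand(1, 3, 64, 64)
cloaked = cloak(photo, model)
print((cloaked - photo).abs().max())  # change stays within the epsilon budget
```

A recognizer that indexes people by embedding would then file cloaked photos far from the person's real ones, at least until it is retrained, which is exactly the cat-and-mouse problem described next.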
The downside to these counter-surveillance techniques is that they have a shelf life. As you read this, security researchers are already improving image recognition systems to recognize these noise patterns, teaching the algorithms to see past their own blind spots. While it may be effective in the short-term, using technical tricks to blind the machines will always be a cat-and-mouse game.
“Machine learning and AI are clearly very good at amplifying power as it already exists, and there’s clearly some use for it in countering that power,” said Tseng. “But in the end, it feels like it might benefit power more than the people pushing back.”
One of the most insidious aspects of these algorithmic systems is how they often disregard scientific consensus in service of completing their ideological mission. Like gender recognition, there has been a resurgence of machine learning research that revives racist pseudoscience practices like phrenology, which have been disproven for over a century. These ideas have re-entered academia under the cover of supposedly “objective” machine learning algorithms, with a deluge of scientific papers—some peer reviewed, some not—describing systems which the authors claim can determine things about a person based on racial and physical characteristics.
In June, thousands of AI experts condemned a paper whose authors claimed their system could predict whether someone would commit a crime based solely on their face with “80 percent accuracy” and “no racial bias.” Following the backlash, the authors later deleted the paper, and their publisher, Springer, confirmed that it had been rejected. It wasn’t the first time researchers have made these dubious claims. In 2016, a similar paper described a system for predicting criminality based on facial photos, using a database of mugshots from convicted criminals. In both cases, the authors were drawing from research that had been disproven for more than a century. Even worse, their flawed systems were creating a feedback loop: any predictions were based on the assumption that future criminals looked like people that the carceral system had previously labelled “criminal.” The fact that certain people are targeted by police and the justice system more than others was simply not addressed.
Whittaker notes that industry incentives are a big part of what creates the demand for such systems, regardless of how fatally flawed they are. “There is a robust market for magical tools that will tell us about people—what they’ll buy, who they are, whether they’re a threat or not. And I think that’s dangerous,” she said. “Who has the authority to tell me who I am, and what does it mean to invest that authority outside myself?”
But this also presents another opportunity for anti-oppressive intervention: de-platforming and refusal. After AI experts issued their letter to the academic publisher Springer demanding the criminality prediction research be rescinded, the paper disappeared from the publisher’s site, and the company later stated that the paper will not be published.
Much in the way that anti-fascist activists have used their collective power to successfully de-platform neo-nazis and white supremacists, academics and even tech workers have begun using their labor power to refuse to accept or implement technologies that reproduce racism, inequality, and harm. Groups like No Tech For ICE have linked technologies sold by big tech companies directly to the harm being done to immigrants and other marginalized communities. Some engineers have signed pledges or even deleted code repositories to prevent their work from being used by federal agencies. More recently, companies have responded to pressure from the worldwide uprisings against police violence, with IBM, Amazon, and Microsoft all announcing they would either stop or pause the sale of facial recognition technology to US law enforcement.
Not all companies will bow to pressure, however. And ultimately, none of these approaches are a panacea. There is still work to be done in preventing the harm caused by algorithmic systems, but they should all start with an understanding of the oppressive systems of power that cause these technologies to be harmful in the first place. “I think it’s a ‘try everything’ situation,” said Whittaker. “These aren’t new problems. We’re just automating and obfuscating social problems that have existed for a long time.”
Follow Janus Rose on Twitter.
0 notes
ntrending · 6 years
Text
Use all those GDPR privacy policy notifications to clean up your inbox and kill zombie accounts
Take a moment right now to click over to your email account—it could be your work account, personal address, or the fake one you use to secretly enter bake-off contests online. Do a quick search for “GDPR” and you’ll likely find a slew of recent emails from services, websites, apps, and other companies alerting you about changes to their privacy policy. I found dozens.
This is happening because of a sweeping digital privacy initiative called the General Data Protection Regulation that goes into effect in Europe starting on May 25—tomorrow. While the regulations only technically apply to citizens of the EU, they have prompted many companies to issue sweeping updates to their privacy policies and user agreements in advance to avoid the hefty fines that can occur if they run afoul of GDPR. For some companies, it’s also simpler just to have one set of documents in place for all users.
The onslaught of emails has been annoying, but you can turn that negative into an opportunity by taking this chance to take stock of all the websites, email lists, and other digital things you may have signed up for. You might even find some surprises in there.
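If you’d rather not eyeball the search results one by one, a short script can build the same inventory for you. This is a rough sketch using Python’s standard imaplib; the server address, account, and app password are placeholders, and it assumes your provider allows IMAP access.

```python
import imaplib
from collections import Counter
from email import message_from_bytes

# Placeholders: substitute your own provider, address, and app password.
HOST, USER, PASSWORD = "imap.example.com", "you@example.com", "app-password"

senders = Counter()
with imaplib.IMAP4_SSL(HOST) as mailbox:
    mailbox.login(USER, PASSWORD)
    mailbox.select("INBOX", readonly=True)
    # Find messages that mention GDPR in the subject line.
    _, data = mailbox.search(None, '(SUBJECT "GDPR")')
    for num in data[0].split():
        # Fetch only the From header, leaving the message unread.
        _, msg_data = mailbox.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM)])")
        message = message_from_bytes(msg_data[0][1])
        senders[message.get("From", "unknown")] += 1

# Every sender here is a company holding some of your data.
for sender, count in senders.most_common():
    print(count, sender)
```

Every sender it prints is a company that holds some of your data, which makes the output a handy to-do list for the cleanup sections below.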
Apps and services
If you’re a social media user, now is a great time to log into your accounts and check on your security and privacy settings. Both Facebook and Twitter recently updated the way you can control your data. To check in on Facebook, start with the privacy settings, then make sure to review and deactivate any old apps you have linked to your account but don’t use. You can do the same for Twitter at this page.
You should do the same with your Google account, which is likely a lot cleaner than your social media subscriptions, but it’s important enough to keep tabs on. Click here to see the apps you’ve connected with your Google account.
While the big social media networks are relatively easy to keep track of, you may also find that you have some old accounts with services that never quite took off. I found an account in a service called Mylio, which was supposed to be a big player in photo sharing and storage. It has been more than three years since I even logged in, but this GDPR update reminded me to go in and kill the zombie account that had many of my photos saved to the cloud.
Email newsletters
Gmail makes it easy to ignore email newsletters with its promotions tab, but like so many empty pizza boxes crammed under the bed in a college dorm room, they still exist and they’re not doing you any favors.
There are services that claim to help unsubscribe you from various mailing lists, but they almost always come with a serious cost. Unroll.me, for instance, is a popular service, but it scraped and sold information from users’ email accounts in exchange for tidying up. It’s a bad deal.
Most email newsletters will include an “unsubscribe” link, typically found at the bottom of the message. If you’re dealing with a legitimate company, this will often be enough to get you off the list. If the link takes you to a page to opt out, make sure you opt out of everything, including messages from “partners,” because that’s marketing speak for “advertisers.”
If you get a spam email with an unsubscribe link, don’t click it. It’s a common tactic for spammers to include a link that says “unsubscribe” when in reality, all it does is confirm your address as valid and mark it as a target for even more garbage messages in the future. For spam emails, diligently mark them as spam rather than letting them sit in your inbox to help the email system’s AI start to recognize them as unwanted.
(Check out the episode of our Last Week in Tech podcast in which we talk about GDPR.)
Subscriptions
There are services like free credit reporting sites that bank on users signing up for a free trial, then forgetting to cancel and incurring a perpetual monthly service fee. These services often require you to call to cancel your subscription, in hopes that they can get you to stick around or keep you on hold until you give up. Don’t. Also, don’t sign up for free credit reporting sites.
Software and product registrations
When you register a new piece of software, or even a physical product, you typically provide more information than the company actually needs, especially if you’re not using the product anymore. Did that old photo scanner software I bought in college really need to have my information on file all this time? Probably not. Use this as a chance to wipe out as much information as possible and make sure old services don’t have login information you’re currently using for things you care about.
Written By Stan Horaczek
0 notes