#facial recognition
titleknown · 1 year ago
Text
So, this is scary as hell: Google is developing tech that scans your face as a form of age verification, and it's yet another reason why we need to stop the various bad internet bills like EARN IT, STOP CSAM, and especially KOSA.
Because that's exactly why they're doing this. This sort of invasive face scanning is what everybody's been warning will happen if those bills pass, so the fact that they're gearing up to push them through should alarm everyone.
And as of this posting on 12/27/2023, it's been reported that Chuck Schumer wants to start pushing these bills through as soon as the new year begins. There have even been whisperings of all these bad bills being merged under STOP CSAM into one deadly super-bill.
So if you live in the US, call your senators. And even if you don't, please boost this; we need to stop this now.
6K notes · View notes
allthecanadianpolitics · 1 month ago
Text
Air Canada is poised to roll out facial recognition technology at the gate, making it the first Canadian airline to deploy the software in a bid to streamline the boarding process. Starting Tuesday, customers who board most domestic Air Canada flights at Vancouver International Airport will be able to walk onto the plane without presenting any physical pieces of identification, such as a passport or driver's licence, the country's largest airline said. Participants in the program, which is voluntary, can upload a photo of their face and a scan of their passport to the airline's app.
Continue reading
Tagging: @newsfromstolenland
463 notes · View notes
mishacollins · 11 months ago
Photo
Tumblr media
Hey Apple, there’s a major bug with your latest iOS update! My iPhone’s facial recognition has stopped working!
1K notes · View notes
alcrego · 6 months ago
Photo
Tumblr media
Facial Recognition
7 October 2018.
592 notes · View notes
mostlysignssomeportents · 11 months ago
Text
Hypothetical AI election disinformation risks vs real AI harms
Tumblr media
I'm on tour with my new novel The Bezzle! Catch me TONIGHT (Feb 27) in Portland at Powell's. Then, onto Phoenix (Changing Hands, Feb 29), Tucson (Mar 9-12), and more!
Tumblr media
You can barely turn around these days without encountering a think-piece warning of the impending risk of AI disinformation in the coming elections. But a recent episode of the This Machine Kills podcast reminds us that these are hypothetical risks, while there is no shortage of real AI harms:
https://soundcloud.com/thismachinekillspod/311-selling-pickaxes-for-the-ai-gold-rush
The algorithmic decision-making systems that increasingly run the back-ends to our lives are really, truly very bad at doing their jobs, and worse, these systems constitute a form of "empiricism-washing": if the computer says it's true, it must be true. There's no such thing as racist math, you SJW snowflake!
https://slate.com/news-and-politics/2019/02/aoc-algorithms-racist-bias.html
Nearly 1,000 British subpostmasters were wrongly convicted of fraud by Horizon, the faulty AI fraud-hunting system that Fujitsu provided to the UK Post Office. They had their lives ruined by this faulty AI, many went to prison, and at least four of the AI's victims killed themselves:
https://en.wikipedia.org/wiki/British_Post_Office_scandal
Tenants across America have seen their rents skyrocket thanks to Realpage's landlord price-fixing algorithm, which deployed the time-honored defense: "It's not a crime if we commit it with an app":
https://www.propublica.org/article/doj-backs-tenants-price-fixing-case-big-landlords-real-estate-tech
Housing, you'll recall, is pretty foundational in the human hierarchy of needs. Losing your home – or being forced to choose between paying rent or buying groceries or gas for your car or clothes for your kid – is a non-hypothetical, widespread, urgent problem that can be traced straight to AI.
Then there's predictive policing: cities across America and the world have bought systems that purport to tell the cops where to look for crime. Of course, these systems are sold to forces that hope to correct the racial bias in their practices by delegating the decisions to an algorithm that will produce "fairness." You feed this algorithm a data-set of where the police detected crime in previous years, and it predicts where you'll find crime in the years to come.
But you only find crime where you look for it. If the cops only ever stop-and-frisk Black and brown kids, or pull over Black and brown drivers, then every knife, baggie or gun they find in someone's trunk or pockets will be found in a Black or brown person's trunk or pocket. A predictive policing algorithm will naively ingest this data and confidently assert that future crimes can be foiled by looking for more Black and brown people and searching them and pulling them over.
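The feedback loop is easy to demonstrate. Here's a minimal simulation (a sketch with invented numbers, not any vendor's actual model): two neighborhoods with identical true crime rates, where next year's patrols are allocated wherever this year's patrols found crime. The initial bias never washes out, because detections can only happen where police are sent to look:

```python
import random

random.seed(1)

TRUE_CRIME_RATE = 0.05            # identical in both neighborhoods
patrols = {"A": 80, "B": 20}      # "A" starts out over-policed

for year in range(5):
    detected = {}
    for hood, n_patrols in patrols.items():
        checks = n_patrols * 10   # crime is only "found" where police look
        detected[hood] = sum(random.random() < TRUE_CRIME_RATE
                             for _ in range(checks))
    total = sum(detected.values()) or 1
    # The "predictive" step: next year's patrols follow this year's detections.
    patrols = {hood: round(100 * n / total) for hood, n in detected.items()}
    print(f"year {year}: detected={detected} -> next year's patrols={patrols}")
```

Run it and neighborhood A keeps drawing roughly 80% of the patrols forever, even though its residents commit crimes at exactly the same rate as neighborhood B's.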
Obviously, this is bad for Black and brown people in low-income neighborhoods, whose baseline risk of an encounter with a cop turning violent or even lethal is already far too high. But it's also bad for affluent people in affluent neighborhoods – because they are underpoliced as a result of these algorithmic biases. For example, domestic abuse that occurs in fully detached single-family homes is systematically underrepresented in crime data, because the majority of domestic abuse calls originate with neighbors who can hear the abuse take place through a shared wall.
But the majority of algorithmic harms are inflicted on poor, racialized and/or working class people. Even if you escape a predictive policing algorithm, a facial recognition algorithm may wrongly accuse you of a crime, and even if you were far from the scene when it happened, the cops will still arrest you, because computers don't lie:
https://www.cbsnews.com/sacramento/news/texas-macys-sunglass-hut-facial-recognition-software-wrongful-arrest-sacramento-alibi/
Trying to get a low-waged service job? Be prepared for endless, nonsensical AI "personality tests" that make Scientology look like NASA:
https://futurism.com/mandatory-ai-hiring-tests
Service workers' schedules are at the mercy of shift-allocation algorithms that assign them hours that ensure that they fall just short of qualifying for health and other benefits. These algorithms push workers into "clopening" – where you close the store after midnight and then open it again the next morning before 5AM. And if you try to unionize, another algorithm – that spies on you and your fellow workers' social media activity – targets you for reprisals and your store for closure.
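The benefits-threshold trick, at least, requires no sophistication at all. A scheduler that does it can be sketched in a few lines (illustrative numbers and hypothetical code, not any real vendor's product):

```python
BENEFITS_THRESHOLD = 30        # weekly hours at which benefits would kick in
CAP = BENEFITS_THRESHOLD - 1   # so everyone tops out at 29

def allocate(demand_hours, workers):
    """Greedily cover demand while keeping every worker below the threshold."""
    hours = {w: 0 for w in workers}
    for w in workers:
        take = min(CAP, demand_hours)
        hours[w] = take
        demand_hours -= take
        if demand_hours <= 0:
            break
    return hours

print(allocate(demand_hours=115, workers=["ana", "bo", "cy", "di", "ed"]))
# {'ana': 29, 'bo': 29, 'cy': 29, 'di': 28, 'ed': 0}
```

Five workers splitting 115 hours, and not one of them qualifies for benefits. That isn't an accident of the optimizer; it's the objective.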
If you're driving an Amazon delivery van, an algorithm watches your eyeballs and tells your boss that you're a bad driver if it doesn't like what it sees. If you're working in an Amazon warehouse, an algorithm decides if you've taken too many pee-breaks and automatically dings you:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
If this disgusts you and you're hoping to use your ballot to elect lawmakers who will take up your cause, an algorithm stands in your way again. "AI" tools for purging voter rolls are especially harmful to racialized people – for example, they assume that two "Juan Gomez"es with a shared birthday in two different states must be the same person and remove one or both from the voter rolls:
https://www.cbsnews.com/news/eligible-voters-swept-up-conservative-activists-purge-voter-rolls/
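The matching logic behind these purges is alarmingly crude. Something like this hedged sketch (made-up records, hypothetical code) captures the failure mode reported for interstate matching programs:

```python
# Hypothetical sketch with made-up records: a purge that treats
# "same name + same birthdate" as proof of a duplicate registration.
state_a = [("Juan Gomez", "1980-03-14", "TX"),
           ("Mary Lee",   "1975-07-02", "TX")]
state_b = [("Juan Gomez", "1980-03-14", "CA"),
           ("Dana Wu",    "1990-11-30", "CA")]

flagged = [(a, b)
           for a in state_a
           for b in state_b
           if a[0] == b[0] and a[1] == b[1]]  # no SSN, no middle name, nothing

for a, b in flagged:
    print(f"purge candidate: {a} <-> {b}")
```

With a common enough name, the birthday paradox does the rest: among the thousands of US voters who share a name like Juan Gomez, shared birthdays are a statistical certainty, and each collision is a legitimate voter flagged for removal.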
Hoping to get a solid education, the sort that will keep you out of AI-supervised, precarious, low-waged work? Sorry, kiddo: the ed-tech system is riddled with algorithms. There's the grifty "remote invigilation" industry that watches you take tests via webcam and accuses you of cheating if your facial expressions fail its high-tech phrenology standards:
https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat
All of these are non-hypothetical, real risks from AI. The AI industry has proven itself incredibly adept at deflecting interest from real harms to hypothetical ones, like the "risk" that the spicy autocomplete will become conscious and take over the world in order to convert us all to paperclips:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Whenever you hear AI bosses talking about how seriously they're taking a hypothetical risk, that's the moment when you should check in on whether they're doing anything about all these longstanding, real risks. And even as AI bosses promise to fight hypothetical election disinformation, they continue to downplay or ignore the non-hypothetical, here-and-now harms of AI.
There's something unseemly – and even perverse – about worrying so much about AI and election disinformation. It plays into the narrative that kicked off in earnest in 2016, that the reason the electorate votes for manifestly unqualified candidates who run on a platform of bald-faced lies is that they are gullible and easily led astray.
But there's another explanation: the reason people accept conspiratorial accounts of how our institutions are run is because the institutions that are supposed to be defending us are corrupt and captured by actual conspiracies:
https://memex.craphound.com/2019/09/21/republic-of-lies-the-rise-of-conspiratorial-thinking-and-the-actual-conspiracies-that-fuel-it/
The party line on conspiratorial accounts is that these institutions are good, actually. Think of the rebuttal offered to anti-vaxxers who claimed that pharma giants were run by murderous sociopath billionaires who were in league with their regulators to kill us for a buck: "no, I think you'll find pharma companies are great and superbly regulated":
https://pluralistic.net/2023/09/05/not-that-naomi/#if-the-naomi-be-klein-youre-doing-just-fine
Institutions are profoundly important to a high-tech society. No one is capable of assessing all the life-or-death choices we make every day, from whether to trust the firmware in your car's anti-lock brakes, the alloys used in the structural members of your home, or the food-safety standards for the meal you're about to eat. We must rely on well-regulated experts to make these calls for us, and when the institutions fail us, we are thrown into a state of epistemological chaos. We must make decisions about whether to trust these technological systems, but we can't make informed choices because the one thing we're sure of is that our institutions aren't trustworthy.
Ironically, the long list of AI harms that we live with every day is the most important contributor to disinformation campaigns. It's these harms that provide the evidence for belief in conspiratorial accounts of the world, because each one is proof that the system can't be trusted. The election disinformation discourse focuses on the lies told – and not on why those lies are credible.
That's because the subtext of election disinformation concerns is usually that the electorate is credulous, fools waiting to be suckered in. By refusing to contemplate the institutional failures that sit upstream of conspiracism, we can smugly locate the blame with the peddlers of lies and assume the mantle of paternalistic protectors of the easily gulled electorate.
But the group of people who are demonstrably being tricked by AI are the ones who buy horrifically flawed AI-based algorithmic systems and put them into use despite their manifest failures.
As I've written many times, "we're nowhere near a place where bots can steal your job, but we're certainly at the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job":
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
The most visible victims of AI disinformation are the people who are putting AI in charge of the life-chances of millions of the rest of us. Tackle that AI disinformation and its harms, and we'll make conspiratorial claims about our institutions being corrupt far less credible.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/02/27/ai-conspiracies/#epistemological-collapse
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
145 notes · View notes
odinsblog · 1 year ago
Text
Tumblr media
A shocking story of wrongful arrest in Detroit has renewed scrutiny of how facial recognition software is being deployed by police departments, despite major flaws in the technology.
Porcha Woodruff was arrested in February when police showed up at her house accusing her of robbery and carjacking. Woodruff, who was eight months pregnant at the time, insisted she had nothing to do with the crime, but police detained her for 11 hours, during which time she had contractions. She was eventually released on a $100,000 bond before prosecutors dropped the case a month later, admitting that her arrest was based in part on a false facial recognition match.
Woodruff is the sixth known person to be falsely accused of a crime because of facial recognition, and all six victims have been Black. “That’s not an accident,” says Dorothy Roberts, director of the University of Pennsylvania Program on Race, Science and Society, who says new technology often reflects societal biases when built atop flawed systems. “Racism gets embedded into the technologies.”
👉🏿 https://www.nytimes.com/2023/08/06/business/facial-recognition-false-arrest.html
👉🏿 https://www.democracynow.org/2023/8/7/facial_recognition_dorothy_roberts
👉🏿 https://www.pbs.org/independentlens/documentaries/coded-bias/
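One way to see why "all six victims have been Black" is no accident: NIST's demographic testing of face recognition algorithms found false-match rates that were often one to two orders of magnitude higher for some groups, including Black faces. With illustrative numbers of that order (invented for this sketch, not measurements), the expected false matches per database search look like this:

```python
# Back-of-envelope with illustrative rates (NIST's FRVT demographic report
# found false-match-rate gaps of roughly this magnitude for many algorithms).
GALLERY_SIZE = 500_000                       # faces compared per probe photo

false_match_rate = {
    "lower-error group":  1e-6,              # 1 false match per million comparisons
    "higher-error group": 1e-5,              # 10x worse
}

for group, rate in false_match_rate.items():
    expected = GALLERY_SIZE * rate
    print(f"{group}: ~{expected:.1f} expected false matches per search")
# lower-error group:  ~0.5 expected false matches per search
# higher-error group: ~5.0 expected false matches per search
```

If investigators treat the top "hit" as a lead, nearly every wrongful accusation generated this way lands on the group the model is worst at, which is exactly the pattern in the known cases.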
338 notes · View notes
mxjackparker · 3 months ago
Text
Protecting sex workers means protecting our privacy. Protecting sex workers means putting a stop to facial recognition technology.
Facial recognition technology poses a real danger to sex workers by giving people a new way to discover our true identities. In this article, I share my personal struggle with accepting how easily I can be stalked or harmed, and the damage the rise of facial recognition image-search websites has done to sex workers in general.
43 notes · View notes
Text
I love AHS because it's like watching a theatre play on television. They change an actor's characterization just a little bit, and you have to assume that what you're seeing now is a completely different character and immerse yourself in a universe where everyone lacks facial recognition memory and go, "who is that bitch?"
21 notes · View notes
news4dzhozhar · 9 months ago
Text
[Photo set: 10 images]
77 notes · View notes
augmentedpolls · 3 months ago
Text
^^ quiz if you’re not sure (10-20 mins, it tells you your decile at the end)
28 notes · View notes
Text
Tumblr media Tumblr media
"Nineteen Eighty-Four was not supposed to be an instruction manual."
Yet so much of Orwell's writing rings true today. Big Brother Watch is determined to make Nineteen Eighty-Four fiction again.
46 notes · View notes
theculturedmarxist · 25 days ago
Text
> THREAD: The official narrative of how Luigi Mangione was apprehended doesn’t add up. Evidence suggests a deeper surveillance operation involving real-time facial recognition technology. Let’s delve into the inconsistencies and explore the implications. 🧵
> Considering Mangione’s efforts to conceal his identity, it’s improbable that a fast-food employee could identify him based solely on limited public images. We’re talking about a high-pressure, fast-paced environment where employees process hundreds of customers daily.
> But what if the real key to his capture wasn’t human recognition at all? I’ll bet you didn’t know McDonald’s kiosk cameras have facial recognition technology: pointjupiter.com/work/mcdonalds/
> Given the integration of this technology, it’s not unlikely that federal agencies can access these systems for surveillance with real-time facial recognition across multiple venues. The NSA and other agencies already have a track record of using private surveillance networks.
> Admitting the feds are running real-time facial recognition surveillance across the country would spark outrage. Instead, they sell a more "believable" narrative that a heroic employee saved the day.
> If this is true, it means federal agencies have access to live camera feeds in private businesses, and they’re using AI to scan and identify individuals in real time. Surveillance isn’t limited to fugitives – it could extend to anyone, anywhere.
> This has massive implications for privacy and civil liberties. If McDonald’s can be used as a hub for mass surveillance, what about other chains? Grocery stores? Gas stations? The infrastructure is already there.
> Kroger has faced scrutiny for using facial recognition tech too – and it’s even more dystopian. Not only are they scanning faces, but they’re linking that data to shopping profiles and potentially altering prices in real time based on your data. aclu.org/news/privacy-t…
> This isn’t just about marketing – it’s surveillance capitalism on steroids. Corporations are turning our faces into data points to manipulate our spending, while the government secretly piggybacks off that infrastructure to track civilians.
> We should demand transparency. If facial recognition is being used at this scale, we have a right to know. How much data are these companies collecting? Who else has access to this data, such as third parties and law enforcement? How are these technologies regulated, if at all?
> We’re at a crossroads. If we don’t push back now, this kind of tech will become the norm. Surveillance will be baked into every facet of our lives, from shopping to dining to simply walking down the street.
> The Mangione case and Kroger’s practices show us the future: a world where our faces are not just tracked but exploited, both by corporations seeking profit and governments seeking control.
> We need transparency and accountability: clear limits on the use of facial recognition tech, protections against price manipulation based on profiling, and strong oversight of government access to private surveillance networks.
> This isn’t just a McDonald’s or Kroger issue – it’s a systemic shift. Facial recognition is becoming the foundation of a surveillance economy, and we must demand better protections now.
17 notes · View notes
allthecanadianpolitics · 6 months ago
Text
Some police services in Canada are using facial recognition technology to help solve crimes, while other police forces say human rights and privacy concerns are holding them back from employing the powerful digital tools. It’s this uneven application of the technology — and the loose rules governing its use — that has legal and AI experts calling on the federal government to set national standards. “Until there’s a better handle on the risks involved with the use of this technology, there ought to be a moratorium or a range of prohibitions on how and where it can be used,” says Kristen Thomasen, law professor at the University of British Columbia. As well, the patchwork of regulations on emerging biometric technologies has created situations in which some citizens’ privacy rights are more protected than others. “I think the fact that we have different police forces taking different steps raises concerns (about) inequities and how people are treated across the country, but (it) also highlights the continuing importance of some kind of federal action to be taken,” she said.
Continue Reading
Tagging: @newsfromstolenland
309 notes · View notes
allthegeopolitics · 8 months ago
Text
Microsoft has reaffirmed its ban on U.S. police departments from using generative AI for facial recognition through Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI tech. Language added Wednesday to the terms of service for Azure OpenAI Service more clearly prohibits integrations with Azure OpenAI Service from being used “by or for” police departments for facial recognition in the U.S., including integrations with OpenAI’s current — and possibly future — image-analyzing models. A separate new bullet point covers “any law enforcement globally,” and explicitly bars the use of “real-time facial recognition technology” on mobile cameras, like body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.
Continue Reading.
34 notes · View notes
readandwriteclub · 1 month ago
Text
Boycott McDonald's
Tumblr media Tumblr media
16 notes · View notes
templeofshame · 2 months ago
Text
Does facial recognition login tech have trouble with people who wear different makeup on different days, or who sometimes wear makeup but not always? At least the one my work is encouraging still lets you use a password, but I'm just like: what biases and assumptions are baked in here? And if people get surgery or an injury or something, are there processes for updating it? Would you have to disclose personal medical information to an IT person?
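For context on how these systems typically behave under appearance changes: most face login stores embedding vectors at enrollment and compares a fresh capture against them with a similarity threshold. Here's a hedged sketch of that mechanism (no vendor's actual pipeline; a real system would compute the embeddings with a face-recognition model, which this sketch just takes as given):

```python
import math

THRESHOLD = 0.8   # per-system tuning: lower = laxer, higher = more lockouts

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(login_embedding, enrolled_embeddings):
    """Accept if today's face is close enough to ANY enrolled sample."""
    return any(cosine_similarity(login_embedding, e) >= THRESHOLD
               for e in enrolled_embeddings)

# Toy vectors standing in for model outputs: enrolling several looks
# (bare-faced, made-up) widens what counts as "you".
enrolled = [[0.1, 0.9, 0.4], [0.2, 0.8, 0.5]]
print(verify([0.15, 0.85, 0.45], enrolled))   # True: close to both samples
```

Makeup, surgery, or an injury lock you out exactly when they push your embedding past the threshold, and the usual remedy is re-enrollment, which is precisely where the question of explaining a changed face to whoever administers the system comes in. The biases the post asks about live in the embedding model's training data and in where that threshold is set.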
12 notes · View notes