#Tucson train
Text
#been thinking abt it for days……#the shawshank redemption#Bruce Springsteen#Tucson train#western stars
4 notes
Text
I’m working on day 3 of this migraine and it fucking came back when I was parking at the gym and so YOU BET IM GONNA FEED IT SPICY NOODS
just BLAST THAT SHIT OUT
I did 20 min of exercise, which amounted to 35 squats, and I am just SO BORED of sitting on the couch watching TV or just laying there and drinking water, and I WANT TO FUCKING DO THINGS
#someone on Reddit mentioned that triptans like. REALLY dehydrate you#and the last time I had constant low-grade migraines that weren’t attributable to anything else#it was because I didn’t have enough electrolytes#but that was also during yoga teacher training where I was doing 20 hrs of yoga in a non-air-conditioned BJJ gym in the middle of the summer#in Tucson#every weekend#so it feels like the math isn’t mathing here
3 notes
Text
This is insane
#Toxic chemicals#Environment#Environmentalism#Tucson#Toxic Tucson crash#Tucson crash#ohio train derailment#Ohio Chernobyl#Ohio train disaster#News#Truth#Politics#Biden#Buttigieg#Transportation#Health#Public safety#Nitric acid#Toxic
8 notes
Text
REMEMBER: boiling water kills BACTERIA; it does not remove contaminants like plastics or chemicals. If you are in one of the areas with contaminated water, boiling it won't do much, if anything; in fact, you might release toxic gas from the water. Please buy bottled water!
#the birds#and the fish#are always first to drop when something is wrong#mike dewine#is lying#east palestine#Ohio#train derailment#containments#Oregon#yaquina River#Ohio River#Toledo Oregon#tucson#Tucson Arizona#arizonia#pennsylvania#toxic#acid#staysafe#don’t trust the government#epa#Norfolk southern#capitalism#water#kentucky#Cincinnati#lexington#environmental#concerns
4 notes
Text
THE TRAIN IS A PHONE! I REPEAT, THE TRAIN. IS. A PHONE.
If I had a landline or the skill to make this Bluetooth I'd just *clenches fist and sobs*
Midtown Mercantile in Tucson, AZ
645 notes
Text
Conspiratorialism as a material phenomenon
I'll be in TUCSON, AZ from November 8-10: I'm the GUEST OF HONOR at the TUSCON SCIENCE FICTION CONVENTION.
I think it behooves us to be a little skeptical of stories about AI driving people to believe wrong things and commit ugly actions. Not that I like the AI slop that is filling up our social media, but when we look at the ways that AI is harming us, slop is pretty low on the list.
The real AI harms come from the actual things that AI companies sell AI to do. There's the AI gun-detector gadgets that the credulous Mayor Eric Adams put in NYC subways, which led to 2,749 invasive searches and turned up zero guns:
https://www.cbsnews.com/newyork/news/nycs-subway-weapons-detector-pilot-program-ends/
Any time AI is used to predict crime – predictive policing, bail determinations, Child Protective Services red flags – they magnify the biases already present in these systems, and, even worse, they give this bias the veneer of scientific neutrality. This process is called "empiricism-washing," and you know you're experiencing it when you hear some variation on "it's just math, math can't be racist":
https://pluralistic.net/2020/06/23/cryptocidal-maniacs/#phrenology
When AI is used to replace customer service representatives, it systematically defrauds customers, while providing an "accountability sink" that allows the company to disclaim responsibility for the thefts:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
When AI is used to perform high-velocity "decision support" that is supposed to inform a "human in the loop," it quickly overwhelms its human overseer, who takes on the role of "moral crumple zone," pressing the "OK" button as fast as they can. This is bad enough when the sacrificial victim is a human overseeing, say, proctoring software that accuses remote students of cheating on their tests:
https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat
But it's potentially lethal when the AI is a transcription engine that doctors have to use to feed notes to a data-hungry electronic health record system that is optimized to commit health insurance fraud by seeking out pretenses to "upcode" a patient's treatment. Those AIs are prone to inventing things the doctor never said, inserting them into the record that the doctor is supposed to review, but remember, the only reason the AI is there at all is that the doctor is being asked to do so much paperwork that they don't have time to treat their patients:
https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14
My point is that "worrying about AI" is a zero-sum game. When we train our fire on the stuff that isn't important to the AI stock swindlers' business-plans (like creating AI slop), we should remember that the AI companies could halt all of that activity and not lose a dime in revenue. By contrast, when we focus on AI applications that do the most direct harm – policing, health, security, customer service – we also focus on the AI applications that make the most money and drive the most investment.
AI hasn't attracted hundreds of billions in investment capital because investors love AI slop. All the money pouring into the system – from investors, from customers, from easily gulled big-city mayors – is chasing things that AI is objectively very bad at and those things also cause much more harm than AI slop. If you want to be a good AI critic, you should devote the majority of your focus to these applications. Sure, they're not as visually arresting, but discrediting them is financially arresting, and that's what really matters.
All that said: AI slop is real, there is a lot of it, and even though it doesn't warrant priority over the stuff AI companies actually sell, it still has cultural significance and is worth considering.
AI slop has turned Facebook into an anaerobic lagoon of botshit, just the laziest, grossest engagement bait, much of it the product of rise-and-grind spammers who avidly consume get rich quick "courses" and then churn out a torrent of "shrimp Jesus" and fake chainsaw sculptures:
https://www.404media.co/email/1cdf7620-2e2f-4450-9cd9-e041f4f0c27f/
For poor engagement farmers in the global south chasing the fractional pennies that Facebook shells out for successful clickbait, the actual content of the slop is beside the point. These spammers aren't necessarily tuned into the psyche of the wealthy-world Facebook users who represent Meta's top monetization subjects. They're just trying everything and doubling down on anything that moves the needle, A/B splitting their way into weird, hyper-optimized, grotesque crap:
https://www.404media.co/facebook-is-being-overrun-with-stolen-ai-generated-images-that-people-think-are-real/
In other words, Facebook's AI spammers are laying out a banquet of arbitrary possibilities, like the letters on a Ouija board, and the Facebook users' clicks and engagement are a collective ideomotor response, moving the algorithm's planchette to the options that tug hardest at our collective delights (or, more often, disgusts).
So, rather than thinking of AI spammers as creating the ideological and aesthetic trends that drive millions of confused Facebook users into condemning, praising, and arguing about surreal botshit, it's more true to say that spammers are discovering these trends within their subjects' collective yearnings and terrors, and then refining them by exploring endlessly ramified variations in search of unsuspected niches.
(If you know anything about AI, this may remind you of something: a Generative Adversarial Network, in which one bot creates variations on a theme, and another bot ranks how closely the variations approach some ideal. In this case, the spammers are the generators and the Facebook users they elicit reactions from are the discriminators.)
https://en.wikipedia.org/wiki/Generative_adversarial_network
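The dynamic is simple enough to sketch in code. The toy loop below is a hedged illustration, not anything a real spammer or platform runs: every theme, hook, and scoring rule is an invented assumption. The "spammer" throws out arbitrary combinations and keeps whatever the simulated audience engages with most, which is all the optimization the real operation needs.

```python
import random

random.seed(0)

# All of this is hypothetical: invented themes, invented hooks, invented scoring.
THEMES = ["shrimp Jesus", "fake chainsaw sculpture", "egged car", "giant vegetable"]
HOOKS = ["nobody congratulated her", "my neighbor's ridiculous reason", "why is this allowed"]

def audience_response(post):
    """Stand-in for the collective ideomotor response: the spammer never learns
    *why* a post works, only that engagement went up or down."""
    score = random.gauss(0, 1)
    if "egged car" in post:
        score += 2.0          # pretend outrage bait over-performs
    if "neighbor" in post:
        score += 1.0
    return score

def new_variation():
    """The A/B-splitting step: throw another arbitrary combination at the wall."""
    return f"{random.choice(THEMES)} / {random.choice(HOOKS)}"

best = new_variation()
best_score = audience_response(best)
for _ in range(500):          # double down on anything that moves the needle
    candidate = new_variation()
    score = audience_response(candidate)
    if score > best_score:
        best, best_score = candidate, score

print("what the feed converges on:", best)
```

The point of the sketch is that nothing here requires the spammer to understand the audience at all; selection pressure from clicks does the discovering.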
I got to thinking about this today while reading User Mag, Taylor Lorenz's superb newsletter, and her reporting on a new AI slop trend, "My neighbor’s ridiculous reason for egging my car":
https://www.usermag.co/p/my-neighbors-ridiculous-reason-for
The "egging my car" slop consists of endless variations on a story in which the poster (generally a figure of sympathy, canonically a single mother of newborn twins) complains that her awful neighbor threw dozens of eggs at her car to punish her for parking in a way that blocked his elaborate Hallowe'en display. The text is accompanied by an AI-generated image showing a modest family car that has been absolutely plastered with broken eggs, dozens upon dozens of them.
According to Lorenz, variations on this slop are topping very large Facebook discussion forums totalling millions of users, like "Movie Character…, USA Story, Volleyball Women, Top Trends, Love Style, and God Bless." These posts link to SEO sites laden with programmatic advertising.
The funnel goes:
i. Create outrage and hence broad reach;
ii. A small percentage of those who see the post will click through to the SEO site;
iii. A small fraction of those users will click a low-quality ad;
iv. The ad will pay homeopathic sub-pennies to the spammer.
The revenue per user on this kind of scam is next to nothing, so it only works if it can get very broad reach, which is why the spam is so designed for engagement maximization. The more discussion a post generates, the more users Facebook recommends it to.
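To see why reach is everything, it helps to run the funnel with some deliberately invented numbers; every rate below is a hypothetical assumption, not reported data, and the point is only the shape of the multiplication.

```python
# Hypothetical funnel rates -- illustrative assumptions only.
reach             = 1_000_000   # people who see the post
click_through     = 0.005       # fraction who visit the SEO site
ad_click_rate     = 0.01        # fraction of visitors who click a low-quality ad
revenue_per_click = 0.02        # dollars the ad network pays the spammer

payout = reach * click_through * ad_click_rate * revenue_per_click
print(f"${payout:.2f} total, or ${payout / reach:.8f} per person reached")
# -> $1.00 total, or $0.00000100 per person reached
```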
This is very effective engagement bait. Almost all AI slop gets some free engagement in the form of arguments between users who don't know they're commenting on an AI scam and people hectoring them for falling for it. This is like the free square in the middle of a bingo card.
Beyond that, there's multivalent outrage: some users are furious about food wastage; others about the poor, victimized "mother" (some users are furious about both). Not only do users get to voice their fury at both of these imaginary sins, they can also argue with one another about whether, say, food wastage even matters when compared to the petty-minded aggression of the "perpetrator." These discussions also offer lots of opportunity for violent fantasies about the bad guy getting a comeuppance, offers to travel to the imaginary AI-generated suburb to dole out a beating, etc. All in all, the spammers behind this tedious fiction have really figured out how to rope in all kinds of users' attention.
Of course, the spammers don't get much from this. There isn't such a thing as an "attention economy." You can't use attention as a unit of account, a medium of exchange or a store of value. Attention – like everything else that you can't build an economy upon, such as cryptocurrency – must be converted to money before it has economic significance. Hence that tooth-achingly trite high-tech neologism, "monetization."
The monetization of attention is very poor, but AI is heavily subsidized or even free (for now), so the largest venture capital and private equity funds in the world are pouring billions in public pension money and rich people's savings into CO2 plumes, GPUs, and botshit so that a bunch of hustle-culture weirdos in the Pacific Rim can make a few dollars by tricking people into clicking through engagement bait slop – twice.
The slop isn't the point of this, but the slop does have the useful function of making the collective ideomotor response visible and thus providing a peek into our hopes and fears. What does the "egging my car" slop say about the things that we're thinking about?
Lorenz cites Jamie Cohen, a media scholar at CUNY Queens, who points out that the subtext of this slop is "fear and distrust in people about their neighbors." Cohen predicts that "the next trend is going to be stranger and more violent."
This feels right to me. The corollary of mistrusting your neighbors, of course, is trusting only yourself and your family. Or, as Margaret Thatcher liked to say, "There is no such thing as society. There are individual men and women and there are families."
We are living in the tail end of a 40-year experiment in structuring our world as though "there is no such thing as society." We've gutted our welfare net, shut down or privatized public services, and all but abolished solidaristic institutions like unions.
This isn't mere aesthetics: an atomized society is far more hospitable to extreme wealth inequality than one in which we are all in it together. When your power comes from being a "wise consumer" who "votes with your wallet," then all you can do about the climate emergency is buy a different kind of car – you can't build the public transit system that will make cars obsolete.
When you "vote with your wallet" all you can do about animal cruelty and habitat loss is eat less meat. When you "vote with your wallet" all you can do about high drug prices is "shop around for a bargain." When you vote with your wallet, all you can do when your bank forecloses on your home is "choose your next lender more carefully."
Most importantly, when you vote with your wallet, you cast a ballot in an election that the people with the thickest wallets always win. No wonder those people have spent so long teaching us that we can't trust our neighbors, that there is no such thing as society, that we can't have nice things. That there is no alternative.
The commercial surveillance industry really wants you to believe that they're good at convincing people of things, because that's a good way to sell advertising. But claims of mind-control are pretty goddamned improbable – everyone who ever claimed to have managed the trick was lying, from Rasputin to MK-ULTRA:
https://pluralistic.net/HowToDestroySurveillanceCapitalism
Rather than seeing these platforms as convincing people of things, we should understand them as discovering and reinforcing the ideology that people have been driven to by material conditions. Platforms like Facebook show us to one another, let us form groups that can imperfectly fill in for the solidarity we're desperate for after 40 years of "no such thing as society."
The most interesting thing about "egging my car" slop is that it reveals that so many of us are convinced of two contradictory things: first, that everyone else is a monster who will turn on you for the pettiest of reasons; and second, that we're all the kind of people who would stick up for the victims of those monsters.
Tor Books has just published two new, free LITTLE BROTHER stories: VIGILANT, about creepy surveillance in distance education; and SPILL, about oil pipelines and indigenous landback.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/10/29/hobbesian-slop/#cui-bono
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#taylor lorenz#conspiratorialism#conspiracy fantasy#mind control#a paradise built in hell#solnit#ai slop#ai#disinformation#materialism#doppelganger#naomi klein
295 notes
Text
USMC CH-53 with a USAF HC-130J during joint operation training over Tucson, Arizona
#USMC#USAF#Sikorsky#CH-53#Super Stallion#Helicopter#Lockheed#HC-130#Combat King II#AAR#aerial refueling#Military aviation#combat aircraft#Military aircraft#Marines#Air Force
121 notes
Text
WIP Wednesday
Figuring out how to weave Shannon and Eddie's love story and relationship breakdown into the NHL AU has been a fun and complicated little game. Here's a little snippet of Buck and Eddie bonding on a road trip and Eddie telling Buck about Shannon.
And just as a reminder, in my house we love and respect Shannon Diaz. There will be no Shannon bashing in this fic, but there will be Buck's POV as Eddie and Shannon work to overcome a lot of hurt feelings to become co-parents and friends again.
“Shannon’s family moved to El Paso our freshman year of high school and she was devastated because they moved away from all of her friends and from her rink. I saw her at school a few times, but the first time we really talked was at the rink. She figure skated and her mom had finally managed to scrape together enough money for some ice time for her,” Eddie says and Buck can picture young Eddie watching a faceless girl spin on the ice. “I think I fell in love with her that day, but we didn’t get together until right before I left for training with the U.S. National team for U18 Worlds our junior year of high school.”

“That was 2017 right?” Buck asks because he remembers that year. It was the year before their draft and the U.S. team was stacked - they had Brady Tkachuk, Quinn Hughes, and Eddie fucking Diaz. The Canadian team hadn’t even made it out of the quarterfinals against Sweden. “I hated your guts.”

“You didn’t even play us,” Eddie laughs and it sounds so fucking nice. Buck didn’t know a laugh could sound like that, deep and rich.

“You were so good,” Buck says, voice maybe too earnest.

“Do you want me to talk about Worlds or do you want me to tell you my story?”

Buck mimes zipping his lips and gives Eddie a closed mouth smile to prove his point.

“I come back after Worlds and Shannon and I are inseparable. She’s so cool and funny and also a total dork. I learned how to do lifts so she could practice them. I- I loved her so much,” Eddie says and Buck knows it's true. He knows in his bones that Eddie loved this woman with his whole heart.

“Senior year was good, great even. I had to leave a few times for U.S. national team stuff, but I got lucky and mom found a great team for me in Tucson so I didn’t leave home like you had to. I just had to drive four hours there and four hours back every weekend.”

“And then the week before the draft we found out she was pregnant, after I’d already committed to Michigan. Shannon was torn between taking a year off to travel and going to Michigan with me to try to walk on to their figure skating team. She - she was good, but her family didn’t have the resources to pay for a great trainer like mine did so she kind of got left behind in terms of her skating,” Eddie sounds sad. He sounds like it hurts him that she didn’t get the same chances he did.

“So you decided to sign your entry level contract instead of going to school,” Buck says, the pieces falling neatly into place in his head. Eddie, 18 years old, getting ready to play for one of the best university programs in the U.S. and realizing he’s going to have bills to pay, bills that you can’t pay when you’re a full time student-athlete.
Tagged by @spotsandsocks @cal-daisies-and-briars @daffi-990
no pressure tagging @devirnis @thewolvesof1998 @spagheddiediaz @malewifediaz @eddiebabygirldiaz @eddiebuckley-diaz @rosieposiepuddingnpie @acountrygirlsfun @wildlife4life @jamespearce9-1-1 @underwater-ninja-13 @steadfastsaturnsrings @monsterrae1 @ladydorian05 @loserdiaz @wikiangela @rainbow-nerdss @jeeyuns @puppyboybuckley @watchyourbuck @jesuisici33 @butchdiaz @911-on-abc @disasterbuckdiaz and anyone else who wants to share!
55 notes
Note
What do you think about a Tucson -> Phoenix -> Flagstaff -> Williams -> Grand Canyon train? The terrain is a little rough, but surely this would be good for Arizona?
I don't really want to build on the Grand Canyon; it's holy land and some of the most important natural land in the US.
117 notes
Text
Rail Over the Border
Here's a southbound train crossing the border from Nogales, Arizona, into Nogales, Sonora, Mexico. A Border Patrol agent told me this happens six to eight times a day.
The train rolled up slowly to the gate, which is right in the center of town, stopped for a few minutes (presumably to change crews, as I had seen vans with drivers waiting), and then continued south at a slow rate. After the last car rolled through, the gate was closed.
The line south out of Tucson now belongs to the Union Pacific; the line on the other side of the border is Ferromex (short for Ferrocarril Mexicano, which sounds like a national line but is now a private consortium).

Going back in time, the route here on the US side was originally that of the Atchison, Topeka & Santa Fe (operating under the name of the New Mexico and Arizona Railroad), while the rail line on the Mexican side was originally the Sonora Railway (funded by the ATSF so that it would have access to the port of Guaymas on the Pacific).
Six images by Richard Koenig; taken May 3rd 2024.
#railroadhistory#railwayhistory#union pacific#nogalesarizona#nogalesaz#nogales#sonora#ferromex#sonorarailway#newmexicoandarizonarailroad#santacruzcounty
25 notes
Photo
SP train, engine number 5037, engine type 4-10-2 Train #962 eastbound freight train; 45 cars, 10 MPH. Photographed: Tucson, Ariz., April 22, 1933.
63 notes
Text
Cactus Country by Zoë Bossiere
A striking literary memoir of genderfluidity, class, masculinity, and the American Southwest that captures the author’s experience coming of age in a Tucson, Arizona trailer park.
Newly arrived in the Sonoran Desert, eleven-year-old Zoë’s world is one of giant beetles, thundering javelinas, and gnarled paloverde trees. With the family’s move to Cactus Country RV Park, Zoë has been given a fresh start and a new, shorter haircut. Although Zoë doesn’t have the words to express it, he experiences life as a trans boy—and in Cactus Country, others begin to see him as a boy, too. Here, Zoë spends hot days chasing shade and freight trains with an ever-rotating pack of sunburned desert kids, and nights fending off his own questions about the body underneath his baggy clothes.
As Zoë enters adolescence, he must reckon with the sexism, racism, substance abuse, and violence endemic to the working class Cactus Country men he’s grown close to, whose hard masculinity seems as embedded in the desert landscape as the cacti sprouting from parched earth. In response, Zoë adopts an androgynous style and new pronouns, but still cannot escape what it means to live in a gendered body, particularly when a fraught first love destabilizes their sense of self. But beauty flowers in this desert, too. Zoë persists in searching for answers that can’t be found in Cactus Country, dreaming of a day they might leave the park behind to embrace whatever awaits beyond.
Equal parts harsh and tender, Cactus Country is an invitation for readers to consider how we find our place in a world that insists on stark binaries, and a precisely rendered journey of self-determination that will resonate with anyone who’s ever had to fight to be themself.
#cactus country#zoë bossiere#nonbinary#genderfluid#trans book of the day#trans books#queer books#bookblr#booklr
14 notes
Text
[embedded YouTube video]
Singer Jean Redpath was born in Edinburgh on 28th April 1937.
Revered as a Scottish musical treasure for her knowledge, understanding and research into traditional music, and her uniquely sensitive interpretations of some of the great ballads, she made more than 50 records, including seven LPs of Robbie Burns’s songs, and was an authority on traditional song.
In the early Sixties she shared an apartment with Bob Dylan at the epicentre of the American folk revival in Greenwich Village.
Jean Redpath disliked the term “folk singer”, insisting: “I avoid it like the plague. In fact, I avoid putting a label on anything. I just like to sing – it’s an easier form of communication to me than talking.”
She had no formal training and said the best advice she ever received was when she sought the help of a singing coach, to be told that if she wanted to improve, the best thing she could do was go away and sing for 20 years the way she was doing already.
Jean spent a decade as a lecturer in Scottish folksong at Stirling University, performed in venues across the U.S. and Canada, and played in South America, Hong Kong, and at Australia's Sydney Opera House; she also performed often at the Edinburgh Folk Festival. In 1996 she launched the Burns International Festival, and in 2011 was appointed artist-in-residence at the University of Edinburgh's Department of Celtic and Scottish Studies.
In 2014 Jean was diagnosed with cancer and died later that year in a hospice in Tucson, Arizona.
Jean sings the Jacobite song by Lady Nairne, "Will ye no come back again?"; the video was shot around Glenfinnan.
18 notes
Text
Hypothetical AI election disinformation risks vs real AI harms
I'm on tour with my new novel The Bezzle! Catch me TONIGHT (Feb 27) in Portland at Powell's. Then, onto Phoenix (Changing Hands, Feb 29), Tucson (Mar 9-12), and more!
You can barely turn around these days without encountering a think-piece warning of the impending risk of AI disinformation in the coming elections. But a recent episode of This Machine Kills podcast reminds us that these are hypothetical risks, and there is no shortage of real AI harms:
https://soundcloud.com/thismachinekillspod/311-selling-pickaxes-for-the-ai-gold-rush
The algorithmic decision-making systems that increasingly run the back-ends to our lives are really, truly very bad at doing their jobs, and worse, these systems constitute a form of "empiricism-washing": if the computer says it's true, it must be true. There's no such thing as racist math, you SJW snowflake!
https://slate.com/news-and-politics/2019/02/aoc-algorithms-racist-bias.html
Nearly 1,000 British postmasters were wrongly convicted of fraud by Horizon, the faulty AI fraud-hunting system that Fujitsu provided to the Post Office. They had their lives ruined by this faulty AI, many went to prison, and at least four of the AI's victims killed themselves:
https://en.wikipedia.org/wiki/British_Post_Office_scandal
Tenants across America have seen their rents skyrocket thanks to Realpage's landlord price-fixing algorithm, which deployed the time-honored defense: "It's not a crime if we commit it with an app":
https://www.propublica.org/article/doj-backs-tenants-price-fixing-case-big-landlords-real-estate-tech
Housing, you'll recall, is pretty foundational in the human hierarchy of needs. Losing your home – or being forced to choose between paying rent or buying groceries or gas for your car or clothes for your kid – is a non-hypothetical, widespread, urgent problem that can be traced straight to AI.
Then there's predictive policing: cities across America and the world have bought systems that purport to tell the cops where to look for crime. Of course, these systems are trained on policing data from forces that are seeking to correct racial bias in their practices by using an algorithm to create "fairness." You feed this algorithm a data-set of where the police had detected crime in previous years, and it predicts where you'll find crime in the years to come.
But you only find crime where you look for it. If the cops only ever stop-and-frisk Black and brown kids, or pull over Black and brown drivers, then every knife, baggie or gun they find in someone's trunk or pockets will be found in a Black or brown person's trunk or pocket. A predictive policing algorithm will naively ingest this data and confidently assert that future crimes can be foiled by looking for more Black and brown people and searching them and pulling them over.
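The loop is small enough to simulate. This is a hedged toy model with invented numbers, not a model of any real department: every neighborhood has the identical underlying offense rate, but patrols are allocated in proportion to last year's detections, so the "predictions" just reproduce wherever the looking happened to start.

```python
import random

random.seed(1)

TRUE_RATE = 0.05                                  # identical underlying offense rate everywhere
neighborhoods = ["A", "B", "C", "D"]
detections = {"A": 40, "B": 5, "C": 3, "D": 2}    # biased history: A was heavily searched

for year in range(10):
    total = sum(detections.values())
    # "Predictive" step: send patrols where crime was found before.
    patrols = {n: round(1000 * detections[n] / total) for n in neighborhoods}
    # You only find crime where you look for it.
    detections = {n: sum(random.random() < TRUE_RATE for _ in range(patrols[n]))
                  for n in neighborhoods}
    print(year, patrols)
```

Run it and neighborhood A typically keeps absorbing the bulk of the patrols, and therefore the bulk of the detections, even though nothing about A is different; any neighborhood that happens to record zero finds one year drops out of the patrol plan entirely. The data only ever confirms the original pattern of searching.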
Obviously, this is bad for Black and brown people in low-income neighborhoods, whose baseline risk of an encounter with a cop turning violent or even lethal is already high. But it's also bad for affluent people in affluent neighborhoods – because they are underpoliced as a result of these algorithmic biases. For example, domestic abuse that occurs in fully detached single-family homes is systematically underrepresented in crime data, because the majority of domestic abuse calls originate with neighbors who can hear the abuse take place through a shared wall.
But the majority of algorithmic harms are inflicted on poor, racialized and/or working class people. Even if you escape a predictive policing algorithm, a facial recognition algorithm may wrongly accuse you of a crime, and even if you were far away from the site of the crime, the cops will still arrest you, because computers don't lie:
https://www.cbsnews.com/sacramento/news/texas-macys-sunglass-hut-facial-recognition-software-wrongful-arrest-sacramento-alibi/
Trying to get a low-waged service job? Be prepared for endless, nonsensical AI "personality tests" that make Scientology look like NASA:
https://futurism.com/mandatory-ai-hiring-tests
Service workers' schedules are at the mercy of shift-allocation algorithms that assign them hours that ensure that they fall just short of qualifying for health and other benefits. These algorithms push workers into "clopening" – where you close the store after midnight and then open it again the next morning before 5AM. And if you try to unionize, another algorithm – that spies on you and your fellow workers' social media activity – targets you for reprisals and your store for closure.
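As a concrete (and entirely hypothetical) sketch of the hour-capping mechanic: the threshold, the demand figure, and the names below are invented, but the logic is just "fill shifts greedily without letting anyone cross the line where benefits kick in."

```python
BENEFITS_THRESHOLD = 30       # hypothetical weekly hours at which benefits would kick in
CAP = BENEFITS_THRESHOLD - 1  # the scheduler's real target

def assign_shifts(demand_hours, workers):
    """Greedy allocation that spreads hours so nobody reaches the threshold."""
    hours = {w: 0 for w in workers}
    while demand_hours > 0:
        eligible = [w for w in workers if hours[w] < CAP]
        if not eligible:
            break             # leave demand unmet rather than trigger benefits
        w = min(eligible, key=hours.get)   # give the next hour to whoever has fewest
        hours[w] += 1
        demand_hours -= 1
    return hours

print(assign_shifts(demand_hours=150, workers=["Ana", "Bo", "Cruz", "Dee", "Eli"]))
# Everyone tops out at 29 hours; the last 5 hours of demand simply go unfilled.
```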
If you're driving an Amazon delivery van, an algorithm watches your eyeballs and tells your boss that you're a bad driver if it doesn't like what it sees. If you're working in an Amazon warehouse, an algorithm decides if you've taken too many pee-breaks and automatically dings you:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
If this disgusts you and you're hoping to use your ballot to elect lawmakers who will take up your cause, an algorithm stands in your way again. "AI" tools for purging voter rolls are especially harmful to racialized people – for example, they assume that two "Juan Gomez"es with a shared birthday in two different states must be the same person and remove one or both from the voter rolls:
https://www.cbsnews.com/news/eligible-voters-swept-up-conservative-activists-purge-voter-rolls/
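The false-match arithmetic is the whole trick, and you can run it yourself. Every figure below is an invented assumption (real voter files and matching rules differ), but the shape of the result doesn't depend on the details: match on name plus birthday across two large lists and coincidences are guaranteed.

```python
# Hypothetical counts -- illustrative assumptions only.
same_name_in_state_a = 3_000        # e.g. registered voters named "Juan Gomez" in state A
same_name_in_state_b = 3_000        # and in state B
possible_birthdates  = 365 * 60     # birthdays spread across roughly 60 birth years

# Expected number of cross-state pairs sharing a full date of birth purely by chance.
expected_false_matches = same_name_in_state_a * same_name_in_state_b / possible_birthdates
print(f"about {expected_false_matches:.0f} coincidental name+birthday matches")   # ~411
```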
Hoping to get a solid education, the sort that will keep you out of AI-supervised, precarious, low-waged work? Sorry, kiddo: the ed-tech system is riddled with algorithms. There's the grifty "remote invigilation" industry that watches you take tests via webcam and accuses you of cheating if your facial expressions fail its high-tech phrenology standards:
https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat
All of these are non-hypothetical, real risks from AI. The AI industry has proven itself incredibly adept at deflecting interest from real harms to hypothetical ones, like the "risk" that the spicy autocomplete will become conscious and take over the world in order to convert us all to paperclips:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Whenever you hear AI bosses talking about how seriously they're taking a hypothetical risk, that's the moment when you should check in on whether they're doing anything about all these longstanding, real risks. And even as AI bosses promise to fight hypothetical election disinformation, they continue to downplay or ignore the non-hypothetical, here-and-now harms of AI.
There's something unseemly – and even perverse – about worrying so much about AI and election disinformation. It plays into the narrative that kicked off in earnest in 2016, that the reason the electorate votes for manifestly unqualified candidates who run on a platform of bald-faced lies is that they are gullible and easily led astray.
But there's another explanation: the reason people accept conspiratorial accounts of how our institutions are run is because the institutions that are supposed to be defending us are corrupt and captured by actual conspiracies:
https://memex.craphound.com/2019/09/21/republic-of-lies-the-rise-of-conspiratorial-thinking-and-the-actual-conspiracies-that-fuel-it/
The party line on conspiratorial accounts is that these institutions are good, actually. Think of the rebuttal offered to anti-vaxxers who claimed that pharma giants were run by murderous sociopath billionaires who were in league with their regulators to kill us for a buck: "no, I think you'll find pharma companies are great and superbly regulated":
https://pluralistic.net/2023/09/05/not-that-naomi/#if-the-naomi-be-klein-youre-doing-just-fine
Institutions are profoundly important to a high-tech society. No one is capable of assessing all the life-or-death choices we make every day, from whether to trust the firmware in your car's anti-lock brakes, the alloys used in the structural members of your home, or the food-safety standards for the meal you're about to eat. We must rely on well-regulated experts to make these calls for us, and when the institutions fail us, we are thrown into a state of epistemological chaos. We must make decisions about whether to trust these technological systems, but we can't make informed choices because the one thing we're sure of is that our institutions aren't trustworthy.
Ironically, the long list of AI harms that we live with every day is the most important contributor to disinformation campaigns. It's these harms that provide the evidence for belief in conspiratorial accounts of the world, because each one is proof that the system can't be trusted. The election disinformation discourse focuses on the lies told – and not why those lies are credible.
That's because the subtext of election disinformation concerns is usually that the electorate is credulous, fools waiting to be suckered in. By refusing to contemplate the institutional failures that sit upstream of conspiracism, we can smugly locate the blame with the peddlers of lies and assume the mantle of paternalistic protectors of the easily gulled electorate.
But the group of people who are demonstrably being tricked by AI is the people who buy the horrifically flawed AI-based algorithmic systems and put them into use despite their manifest failures.
As I've written many times, "we're nowhere near a place where bots can steal your job, but we're certainly at the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job":
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
The most visible victims of AI disinformation are the people who are putting AI in charge of the life-chances of millions of the rest of us. Tackle that AI disinformation and its harms, and we'll make conspiratorial claims about our institutions being corrupt far less credible.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/02/27/ai-conspiracies/#epistemological-collapse
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#ai#disinformation#algorithmic bias#elections#election disinformation#conspiratorialism#paternalism#this machine kills#Horizon#the rents too damned high#weaponized shelter#predictive policing#fr#facial recognition#labor#union busting#union avoidance#standardized testing#hiring#employment#remote invigilation
145 notes
Text
USAF Major Lindsay "MAD" Johnson in her A-10 during the 2024 Heritage Flight Certification Training at Davis-Monthan AFB, Tucson, Arizona
#A-10 Pilot#Attack Pilot#A-10 Demo Pilot#Heritage Flight Pilot#Female Pilot#Military women#Women in aviation#Air Force#Hog Driver
45 notes
Text
First Ukrainian F-16 pilots will complete training as soon as May
Air National Guard director says the effort is taking a bit longer because pilots need to learn a “full range of missions.”
Audrey Decker, February 16, 2024
An F-16 assigned to Morris Air National Guard Base in Arizona breaks off after midair refueling.
AURORA, Colorado—Ukrainian pilots will start graduating from F-16 fighter jet training in May, the Air National Guard estimates.
The first four pilots are “pretty close” to the end of their training, Air National Guard director Lt. Gen. Michael Loh told reporters Tuesday at the Air & Space Forces Association Warfare Symposium.
The U.S. is training 12 Ukrainian pilots in fiscal 2024—all of whom are set to graduate between May and August, according to Arizona National Guard spokesperson Capt. Erin Hannigan.
But what the pilots do then depends on the broader Ukrainian F-16 effort and when the jets will actually arrive in Ukraine, Loh said. The U.S., which controls the export of the fighter jets, gave allies the green light to transfer F-16s to Ukraine in July.
The training effort started in late October at Morris Air National Guard base in Tucson, Arizona. Denmark is also leading an effort to train Ukrainian pilots on the fighter jet in Europe.
In September, Loh said that it would take anywhere from three months to nine months for Ukrainian pilots to finish F-16 training.
The pilots are already “flying F-16s solo every day,” but the requirements for these pilots have changed and training is taking a bit longer because the pilots need to be able to operate a “full range of missions” beyond wartime scenarios, Loh said.
However, the U.S. won’t be able to keep this effort going and train more pilots beyond the first cohort if Congress doesn’t authorize more funding.
“Training cost is tuition-based and that funding is allocated prior to student arrival. Students currently enrolled are not expected to be impacted by funding issues. If additional pilots are needing to be trained, additional funding will be necessary,” Hannigan said.
@DefenseOne.com
11 notes