#automation
Accessibility tip:
If you want to automate your home a bit, but you don't want any "smart" tech, you can just buy remote-controlled power sockets instead
They are a lot cheaper and easier to set up and use than some home automation smart tech nonsense
They don't need an app (but some models come with optional apps and there are apps that are compatible with most of these)
Many of them use the 433 MHz frequency to communicate, which makes most models compatible with each other, even if they are from different manufacturers
The tech has been around for a long time and will be around for a long time to come
You don't have to put any fucking corporate listening devices like an amazon echo in your home
Models for outdoors exist as well
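If you're the tinkering type, they're also scriptable without any cloud nonsense. Here's a minimal sketch, assuming a Raspberry Pi with a cheap 433 MHz transmitter module and the open-source rpi-rf Python library; the GPIO pin and the on/off codes are placeholders you'd capture from your own remote first:

    # Hypothetical sketch: switching a 433 MHz socket from a Raspberry Pi.
    # Assumes the rpi-rf library (pip install rpi-rf) and a transmitter
    # wired to GPIO 17. The codes are placeholders -- capture the real
    # ones from your remote with rpi-rf's receive tool first.
    from rpi_rf import RFDevice

    SOCKET_ON = 1381717   # placeholder code sniffed from the remote
    SOCKET_OFF = 1381716  # placeholder code sniffed from the remote

    tx = RFDevice(17)     # GPIO pin wired to the transmitter's DATA line
    tx.enable_tx()
    try:
        tx.tx_code(SOCKET_ON, tx_proto=1, tx_pulselength=350)
    finally:
        tx.cleanup()

No app, no account, no server: the Pi just replays the same radio codes the handheld remote sends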
#accessibility#automation#tech#a set like the one pictured above usually costs around $20-$30#I got like 7 of these bad boys and 3 remotes#I can control basically everything in my room with these remotes#I got one remote on my office chair one on my nightstand and one by my door#this always makes me feel a bit like I am in Arnold's room from Hey Arnold!
36K notes
Whispering secret data.
#lab#machine#automation#robotics#cyberpunk#retro#scifi#stuck#laboratory#farm#android#cyborg#data#secret#whisper#illustration#drawing#digitalartwork#digitaldrawing#digitalart#digitalillustration#90s#cables#machinelearning#connection#ring#runner#net#flesh
5K notes
The studios fought like hell for the right to fire their writers and replace them with chatbots, but that doesn’t mean that the chatbots could do the writers’ jobs.
Think of the bosses who fired their human switchboard operators and replaced them with automated systems that didn't solve callers' problems, but rather, merely satisficed them: rather than satisfying callers, the systems did just enough to suffice.
Studio bosses didn’t think that AI scriptwriters would produce the next Citizen Kane. Instead, they were betting that once an AI could produce a screenplay that wasn’t completely unwatchable, the financial markets would put pressure on every studio to switch to a slurry of satisficing crap, and that we, the obedient “consumers,” would shrug and accept it.
Despite their mustache-twirling and patrician chiding, the real reason the studios are excited about AI is the same as every stock analyst and CEO who’s considering buying an AI enterprise license: they want to fire workers and reallocate their salaries to their shareholders.
-How the Writers Guild sunk AI's ship: No one's gonna buy enterprise AI licenses if they can't fire their workers
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#labor#unions#ai#tescreal#ai hype#critihype#automation#luddism#writers strike#writers guild#union strong#class war
6K notes
Rahhhh finished this thing that’s been in wip for a few months
One of 2 of my flightrising dragons who are automatons (they are identical except their magic/flight)
#myart#flight rising#flight rising fanart#flight rising ridgeback#dragon#fr fanart#fr ridgeback#automation#machine#robot
674 notes
We ask your questions so you don’t have to! Submit your questions to have them posted anonymously as polls.
#polls#incognito polls#anonymous#tumblr polls#tumblr users#questions#polls about jobs#submitted june 3#work#robots#ai#automation#jobs
390 notes
Häagen-Bot.
And more robots.
664 notes
Someone needs to work for OK Go
3K notes
#mazda rx7#jdm#trevor spitta#art#japan#automotive#coupe#pickup#sedan#adirondack nationals#car show#automobile#automatically generated text#automation#cars#convertible#suv#sports cars#classic car#fast cars#vintage cars#electric cars#classic cars#today on tumblr#trending now#for you#fyp#trending#tumblr#for you page
443 notes
Tolerance threshold.
#lab#cyborg#android#cyberpunk#retro#monitor#tolerance#threshold#code#admin#automation#complete#artificial#intelligence#brain#mind#computer#nudesketch#femalebody#colunavertebral#silicon#robotics#cables#critical#illustration#digitalillustration#digitalart#digitalartwork#90s
2K notes
“Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed
I'm touring my new, nationally bestselling novel The Bezzle! Catch me SATURDAY (Apr 27) in MARIN COUNTY, then Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!
If AI has a future (a big if), it will have to be economically viable. An industry can't spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:
https://news.ycombinator.com/item?id=39883571
A company that pays 0.36-1 cents/query for electricity and (scarce, fresh) water can't indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of "instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible":
https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
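To make the economics concrete, here's a hedged back-of-envelope sketch in Python, using the 0.36-1 cents/query range above and a purely hypothetical volume of ten million queries a day, each revised a dozen times:

    # Back-of-envelope: the cost of giving away queries by the millions,
    # using the 0.36-1 cent/query range cited above. The volume and
    # revision count are hypothetical round numbers, not reported figures.
    queries_per_day = 10_000_000
    revisions_per_task = 12
    cost_per_query_usd = (0.0036, 0.01)  # low and high estimates

    for c in cost_per_query_usd:
        daily = queries_per_day * revisions_per_task * c
        print(f"at ${c:.4f}/query: ${daily:,.0f}/day, ${daily * 365:,.0f}/year")
    # at $0.0036/query: $432,000/day, $157,680,000/year
    # at $0.0100/query: $1,200,000/day, $438,000,000/year

Even at the cheap end, that's nine figures a year in pure operating cost, earning revenue of $0.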
Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn't optional – investor disillusionment is an inevitable part of every bubble).
Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable – errors ("hallucinations"). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don't care about the odd extra finger. If the chatbot powering a tourist's automatic text-to-translation-to-speech phone tool gets a few words wrong, it's still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.
There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company's perspective – is that these aren't just low-stakes, they're also low-value. Their users would pay something for them, but not very much.
For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.
Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada's chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada's internal review mechanisms before fighting his case for weeks more at the regulator:
https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454
There's never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn't have to pay them back. Air Canada is tacitly asserting that, as the country's flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it's too big to care.
Air Canada shows that for some business customers, AI doesn't need to be able to do a worker's job in order to be a smart purchase: a chatbot can replace a worker, fail to do the worker's job, and still save the company money on balance.
I can't predict whether the world's sociopathic monopolists are numerous and powerful enough to keep the lights on for AI companies through leases for automation systems that let them commit consequence-free fraud by replacing workers with chatbots that serve as moral crumple-zones for furious customers:
https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029
But even stipulating that this is sufficient, it's intrinsically unstable. Anything that can't go on forever eventually stops, and the mass replacement of humans with high-speed fraud software seems likely to stoke the already blazing furnace of modern antitrust:
https://www.eff.org/de/deeplinks/2021/08/party-its-1979-og-antitrust-back-baby
Of course, the AI companies have their own answer to this conundrum. A high-stakes/high-value customer can still fire workers and replace them with AI – they just need to hire fewer, cheaper workers to supervise the AI and monitor it for "hallucinations." This is called the "human in the loop" solution.
The human in the loop story has some glaring holes. From a worker's perspective, serving as the human in the loop in a scheme that cuts wage bills through AI is a nightmare – the worst possible kind of automation.
Let's pause for a little detour through automation theory here. Automation can augment a worker. We can call this a "centaur" – the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They're a human head on a robot body (hence "centaur"). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You're in charge, but you're getting a second opinion from the robot.
Likewise, consider an AI tool that double-checks a radiologist's diagnosis of your chest X-ray and suggests a second look when its assessment doesn't match the radiologist's. Again, the human is in charge, but the robot is serving as a backstop and helpmeet, using its inexhaustible robotic vigilance to augment human skill.
That's centaurs. They're the good automation. Then there's the bad automation: the reverse-centaur, when the human is used to augment the robot.
Amazon warehouse pickers stand in one place while robotic shelving units trundle up to them at speed; then, the haptic bracelets shackled around their wrists buzz at them, directing them to pick up specific items and move them to a basket, while a third automation system penalizes them for taking toilet breaks or even just walking around and shaking out their limbs to avoid a repetitive strain injury. This is a robotic head using a human body – and destroying it in the process.
An AI-assisted radiologist who diligently double-checks the AI's output processes fewer chest X-rays every day, costing their employer more, on top of the cost of the AI. That's not what AI companies are selling. They're offering hospitals the power to create reverse centaurs: radiologist-assisted AIs. That's what "human in the loop" means.
This is a problem for workers, but it's also a problem for their bosses (assuming those bosses actually care about correcting AI hallucinations, rather than providing a figleaf that lets them commit fraud or kill people and shift the blame to an unpunishable AI).
Humans are good at a lot of things, but they're not good at eternal, perfect vigilance. Writing code is hard, but performing code-review (where you check someone else's code for errors) is much harder – and it gets even harder if the code you're reviewing is usually fine, because this requires that you maintain your vigilance for something that only occurs at rare and unpredictable intervals:
https://twitter.com/qntm/status/1773779967521780169
But for a coding shop to make the cost of an AI pencil out, the human in the loop needs to be able to process a lot of AI-generated code. Replacing a human with an AI doesn't produce any savings if you need to hire two more humans to take turns doing close reads of the AI's code.
This is the fatal flaw in robo-taxi schemes. The "human in the loop" who is supposed to keep the murderbot from smashing into other cars, steering into oncoming traffic, or running down pedestrians isn't a driver, they're a driving instructor. This is a much harder job than being a driver, even when the student driver you're monitoring is a human, making human mistakes at human speed. It's even harder when the student driver is a robot, making errors at computer speed:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
This is why the doomed robo-taxi company Cruise had to deploy 1.5 skilled, high-paid human monitors to oversee each of its murderbots, while traditional taxis operate at a fraction of the cost with a single, precaratized, low-paid human driver:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there's another problem that is, if anything, even more fatal: the kinds of errors that AIs make.
Foundationally, AI is applied statistics. An AI company trains its AI by feeding it a lot of data about the real world. The program processes this data, looking for statistical correlations in that data, and makes a model of the world based on those correlations. A chatbot is a next-word-guessing program, and an AI "art" generator is a next-pixel-guessing program. They're drawing on billions of documents to find the most statistically likely way of finishing a sentence or a line of pixels in a bitmap:
https://dl.acm.org/doi/10.1145/3442188.3445922
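The next-word-guessing idea fits in a few lines of code. Here's a toy sketch – a bigram model that always continues with the statistically likeliest next word from its training text; real models condition on far longer contexts and sample over tokens, but the principle is the same:

    # Toy next-word guesser: a bigram model that continues a prompt with
    # the statistically likeliest next word seen in its training text.
    from collections import Counter, defaultdict

    corpus = ("the cat sat on the mat the cat ate the fish "
              "the dog sat on the rug").split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    word = "the"
    out = [word]
    for _ in range(5):
        word = follows[word].most_common(1)[0][0]  # likeliest continuation
        out.append(word)
    print(" ".join(out))  # -> "the cat sat on the cat"

It never knows anything; it only knows what usually comes next.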
This means that AI doesn't just make errors – it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead:
https://www.tomsguide.com/opinion/according-to-chatgpt-im-dead
But the most common errors that AIs make are the ones we don't notice, because they're perfectly camouflaged as the truth. Think of the recurring AI programming error that inserts a call to a nonexistent library called "huggingface-cli," which is what the library would be called if developers reliably followed naming conventions. But due to a human inconsistency, the real library has a slightly different name. The fact that AIs repeatedly inserted references to the nonexistent library opened up a vulnerability – a security researcher created an (inert) malicious library with that name and tricked numerous companies into compiling it into their code because their human reviewers missed the chatbot's (statistically indistinguishable from the truth) lie:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
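One cheap defense against this class of attack is to refuse to install any dependency a chatbot suggests until you've confirmed it actually exists on the package index. A minimal sketch against PyPI's public JSON API (the endpoint is real; the package names below are illustrative):

    # Sketch: check whether suggested dependencies exist on PyPI before
    # installing them. PyPI's JSON API returns HTTP 404 for packages
    # that have never been published. Package names are illustrative.
    import urllib.error
    import urllib.request

    def exists_on_pypi(name: str) -> bool:
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            return False

    for pkg in ["requests", "some-plausible-hallucination"]:
        verdict = "exists" if exists_on_pypi(pkg) else "NOT on PyPI - investigate"
        print(f"{pkg}: {verdict}")

Existence is a floor, not a ceiling: as the huggingface-cli stunt shows, an attacker can register the hallucinated name before you check, so provenance still matters.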
For a driving instructor or a code reviewer overseeing a human subject, the majority of errors are comparatively easy to spot, because they're the kinds of errors that lead to inconsistent library naming – places where a human behaved erratically or irregularly. But when reality is irregular or erratic, the AI will make errors by presuming that things are statistically normal.
These are the hardest kinds of errors to spot. They couldn't be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.
This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.
However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":
https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
It was a hoax. When independent material scientists reviewed representative samples of these "new materials," they concluded that "no new materials have been discovered" and that not one of these materials was "credible, useful and novel":
https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/
As Brian Merchant writes, AI claims are eerily similar to "smoke and mirrors" – the dazzling reality-distortion field thrown up by 17th century magic lantern technology, which millions of people ascribed wild capabilities to, thanks to the outlandish claims of the technology's promoters:
https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors
The fact that we have a four-hundred-year-old name for this phenomenon, and yet are still falling prey to it, is frankly a little depressing. And, unlucky for us, it turns out that AI therapybots can't help us with this – rather, they're apt to literally convince us to kill ourselves:
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#ai#automation#humans in the loop#centaurs#reverse centaurs#labor#ai safety#sanity checks#spot the mistake#code review#driving instructor
855 notes
Even if full automation could produce massive cheap goods and services, modern wage-labourers would not be able to attain the means of subsistence if they could not sell their labour power due to harsh competition with machines and other precarious workers. Their existence is fundamentally dependent upon the wage, so workers are compelled to accept even low-paid jobs with long hours lest they starve to death. However, seen from a different perspective, the threat of mass unemployment signifies the irrationality of the current economic system. If the threat of mass unemployment is emerging and wages are cut, that is precisely because the current level of productivity is already sufficiently high to satisfy human needs without making everyone work so long. Notwithstanding, productivity must become ever higher in capitalist production as capitalists under market competition are forced to constantly introduce new technologies, further deepening the contradiction. Capitalism cannot shorten work hours – labour is the only source of value and the rate of profit falls further due to the mechanization of capital’s dependence on the production of absolute surplus-value – while the high rate of unemployment cannot be tolerated either. This dilemma shows that capitalism cannot use its high social productivity for the sake of human well-being.
Kohei Saito, Marx in the Anthropocene: Towards the Idea of Degrowth Communism
116 notes
Copy-Select.
#lab#cyberpunk#cyborg#insideout#inside#cables#monitor#scan#robotics#body#android#guts#artificial#soma#automation#close#cake#mess#looks#muscle#synteticdreads#illustration#digitaldrawing#digitalillustration#digitalart#retro#90s#partial#arm#digital art
614 notes
AI’s productivity theater
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
When I took my kid to New Zealand with me on a book-tour, I was delighted to learn that grocery stores had special aisles where all the kids'-eye-level candy had been removed, to minimize nagging. What a great idea!
Related: countries around the world limit advertising to children, for two reasons:
1) Kids may not be stupid, but they are inexperienced, and that makes them gullible; and
2) Kids don't have money of their own, so their path to getting the stuff they see in ads is nagging their parents, which creates a natural constituency to support limits on kids' advertising (nagged parents).
There's something especially annoying about ads targeted at getting credulous people to coerce or torment other people on behalf of the advertiser. For example, AI companies spent millions targeting your boss in an effort to convince them that you can be replaced with a chatbot that absolutely, positively cannot do your job.
Your boss has no idea what your job entails, and is (not so) secretly convinced that you're a featherbedding parasite who only shows up for work because you fear the breadline, and not because your job is a) challenging, or b) rewarding:
https://pluralistic.net/2024/04/19/make-them-afraid/#fear-is-their-mind-killer
That makes them prime marks for chatbot-peddling AI pitchmen. Your boss would love to fire you and replace you with a chatbot. Chatbots don't unionize, they don't backtalk about stupid orders, and they don't experience any inconvenient moral injury when ordered to enshittify the product:
https://pluralistic.net/2023/11/25/moral-injury/#enshittification
Bosses are Bizarro-world Marxists. Like Marxists, your boss's worldview is organized around the principle that every dollar you take home in wages is a dollar that isn't available for executive bonuses, stock buybacks or dividends. That's why your boss is insatiably horny for firing you and replacing you with software. Software is cheaper, and it doesn't advocate for higher wages.
That makes your boss such an easy mark for AI pitchmen, which explains the vast gap between the valuation of AI companies and the utility of AI to the customers that buy those companies' products. As an investor, buying shares in AI might represent a bet on the usefulness of AI – but for many of those investors, backing an AI company is actually a bet on your boss's credulity and contempt for you and your job.
But bosses' resemblance to toddlers doesn't end with their credulity. A toddler's path to getting that eye-height candy-bar goes through their exhausted parents. Your boss's path to realizing the productivity gains promised by an AI salesman runs through you.
A new research report from the Upwork Research Institute offers a look into the bizarre situation unfolding in workplaces where bosses have been conned into buying AI and now face the challenge of getting it to work as advertised:
https://www.upwork.com/research/ai-enhanced-work-models
The headline findings tell the whole story:
96% of bosses expect that AI will make their workers more productive;
85% of companies are either requiring or strongly encouraging workers to use AI;
49% of workers have no idea how AI is supposed to increase their productivity;
77% of workers say using AI decreases their productivity.
Working at an AI-equipped workplace is like being the parent of a furious toddler who has bought a million Sea Monkey farms off the back page of a comic book, and is now destroying your life with demands that you figure out how to get the brine shrimp he ordered from a notorious Holocaust denier to wear little crowns like they do in the ad:
https://www.splcenter.org/fighting-hate/intelligence-report/2004/hitler-and-sea-monkeys
Bosses spend a lot of time thinking about your productivity. The "productivity paradox" names a rapid, persistent decline in American productivity growth, starting in the 1970s and continuing to this day:
https://en.wikipedia.org/wiki/Productivity_paradox
The "paradox" refers to the growth of IT, which is sold as a productivity-increasing miracle. There are many theories to explain this paradox. One especially good theory came from the late David Graeber (rest in power), in his 2012 essay, "Of Flying Cars and the Declining Rate of Profit":
https://thebaffler.com/salvos/of-flying-cars-and-the-declining-rate-of-profit
Graeber proposes that the growth of IT was part of a wider shift in research approaches. Research was once dominated by weirdos (e.g. Jack Parsons, Oppenheimer, etc) who operated with relatively little red tape. The rise of IT coincides with the rise of "managerialism," the McKinseyoid drive to monitor, quantify and – above all – discipline the workforce. IT made it easier to generate these records, which also made it normal to expect these records.
Before long, every employee – including the "creatives" whose ideas were credited with the productivity gains of the American century until the 70s – was spending a huge amount of time (sometimes the majority of their working days) filling in forms, documenting their work, and generally producing a legible account of their day's work. All this data gave rise to a ballooning class of managers, who colonized every kind of institution – not just corporations, but also universities and government agencies, which were structured to resemble corporations (down to referring to voters or students as "customers").
Even if you think all that record-keeping might be useful, there's no denying that the more time you spend documenting your work, the less time you have to do your work. The solution to this was inevitably more IT, sold as a way to make the record-keeping easier. But adding IT to a bureaucracy is like adding lanes to a highway: the easier it is to demand fine-grained record-keeping, the more record-keeping will be demanded of you.
But that's not all that IT did for the workplace. There are a couple of areas in which IT absolutely increased the profitability of the companies that invested in it.
First, IT allowed corporations to outsource production to low-waged countries in the global south, usually places with worse labor protection, weaker environmental laws, and easily bribed regulators. It's really hard to produce things in factories thousands of miles away, or to oversee remote workers in another country. But IT makes it possible to annihilate distance, time zone gaps, and language barriers. Corporations that figured out how to use IT to fire workers at home and exploit workers and despoil the environment in distant lands thrived. Executives who oversaw these projects rose through the ranks. For example, Tim Cook became the CEO of Apple thanks to his successes in moving production out of the USA and into China.
https://archive.is/M17qq
Outsourcing provided a sugar high that compensated for declining productivity…for a while. But eventually, all the gains to be had from outsourcing were realized, and companies needed a new source of cheap gains. That's where "bossware" came in: the automation of workforce monitoring and discipline. Bossware made it possible to monitor workers at the finest-grained levels, measuring everything from keystrokes to eyeball movements.
What's more, the declining power of the American worker – a nice bonus of the project to fire huge numbers of workers and ship their jobs overseas, which made the remainder terrified of losing their jobs and thus willing to eat a rasher of shit and ask for seconds – meant that bossware could be used to tie wages to metrics. It's not just gig workers who don't score consistent five star ratings from app users whose pay gets docked – it's also creative workers whose Youtube and Tiktok wages are cut for violating rules that they aren't allowed to know, because that might help them break the rules without being detected and punished:
https://pluralistic.net/2024/01/13/solidarity-forever/#tech-unions
Bossware dominates workplaces from public schools to hospitals, restaurants to call centers, and extends to your home and car, if you're working from home (AKA "living at work") or driving for Uber or Amazon:
https://pluralistic.net/2020/10/02/chickenized-by-arise/#arise
In providing a pretense for stealing wages, IT can increase profits, even as it reduces productivity:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
One way to think about how this works is through the automation-theory metaphor of a "centaur" and a "reverse centaur." In automation circles, a "centaur" is someone who is assisted by an automation tool – for example, when your boss uses AI to monitor your eyeballs in order to find excuses to steal your wages, they are a centaur, a human head atop a machine body that does all the hard work, far in excess of any human's capacity.
A "reverse centaur" is a worker who acts as an assistant to an automation system. The worker who is ridden by an AI that monitors their eyeballs, bathroom breaks, and keystrokes is a reverse centaur, being used (and eventually, used up) by a machine to perform the tasks that the machine can't perform unassisted:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
But there's only so much work you can squeeze out of a human in this fashion before they are ruined for the job. Amazon's internal research reveals that the company has calculated that it ruins workers so quickly that it is in danger of using up every able-bodied worker in America:
https://www.vox.com/recode/23170900/leaked-amazon-memo-warehouses-hiring-shortage
Which explains the other major findings from the Upwork study:
81% of bosses have increased the demands they make on their workers over the past year; and
71% of workers are "burned out."
Bosses' answer to "AI makes workers feel burned out" is the same as their answer to "IT-driven form-filling makes workers unproductive" – do more of the same, but go harder. Cisco has a new product that tries to detect when workers are about to snap after absorbing abuse from furious customers, and then gives them a "Zen" moment in which they are shown a "soothing" photo of their family:
https://finance.yahoo.com/news/ai-bringing-zen-first-horizons-192010166.html
This is just the latest in a series of increasingly sweaty and cruel "workplace wellness" technologies that spy on workers and try to help them "manage their stress," all of which have the (totally predictable) effect of increasing workplace stress:
https://pluralistic.net/2024/03/15/wellness-taylorism/#sick-of-spying
The only person who wouldn't predict that being closely monitored by an AI that snitches on you to your boss would increase your stress levels is your boss. Unfortunately for you, AI pitchmen know this, too, and they're more than happy to sell your boss the reverse-centaur automation tool that makes you want to die, and then sell your boss another automation tool that is supposed to restore your will to live.
The "productivity paradox" is being resolved before our eyes. American per-worker productivity fell because it was more profitable to ship American jobs to regulatory free-fire zones and exploit the resulting precarity to abuse the workers left onshore. Workers who resented this arrangement were condemned for having a shitty "work ethic" – even as the number of hours worked by the average US worker rose by 13% between 1976 and 2016:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
AI is just a successor gimmick at the terminal end of 40 years of increasing profits by taking them out of workers' hides rather than improving efficiency. That arrangement didn't come out of nowhere: it was a direct result of a Reagan-era theory of corporate power called "consumer welfare." Under the "consumer welfare" approach to antitrust, monopolies were encouraged, provided that they used their market power to lower wages and screw suppliers, while lowering costs to consumers.
"Consumer welfare" supposed that we could somehow separate our identities as "workers" from our identities as "shoppers" – that our stagnating wages and worsening conditions ceased mattering to us when we clocked out at 5PM (or, you know, 9PM) and bought a $0.99 Meal Deal at McDonald's whose low, low price was only possible because it was cooked by someone sleeping in their car and collecting food-stamps.
https://www.theguardian.com/us-news/article/2024/jul/20/disneyland-workers-anaheim-california-authorize-strike
But we're reaching the end of the road for consumer welfare. Sure, your toddler-boss can be tricked into buying AI and firing half of your co-workers and demanding that the remainder use AI to do their jobs. But if AI can't do their jobs (it can't), no amount of demanding that you figure out how to make the Sea Monkeys act like they did in the comic-book ad is going to make that work.
As screwing workers and suppliers produces fewer and fewer gains, companies are increasingly turning on their customers. It's not just that you're getting worse service from chatbots or the humans who are reverse-centaured into their workflow. You're also paying more for that, as algorithmic surveillance pricing uses automation to gouge you on prices in realtime:
https://pluralistic.net/2024/07/24/gouging-the-all-seeing-eye/#i-spy
This is – in the memorable phrase of David Dayen and Lindsay Owens – the "age of recoupment," in which companies end their practice of splitting the gains from suppressing labor with their customers:
https://prospect.org/economy/2024-06-03-age-of-recoupment/
It's a bet that the tolerance for monopolies made these companies too big to fail, and that means they're too big to jail, so they can cheat their customers as well as their workers.
AI may be a bet that your boss can be suckered into buying a chatbot that can't do your job, but investors are souring on that bet. Goldman Sachs, who once trumpeted AI as a multi-trillion dollar sector with unlimited growth, is now publishing reports describing how companies who buy AI can't figure out what to do with it:
https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf
Fine, investment banks are supposed to be a little conservative. But VCs? They're the ones with all the appetite for risk, right? Well, maybe so, but Sequoia Capital, a top-tier Silicon Valley VC, is also publicly questioning whether anyone will make AI investments pay off:
https://www.sequoiacap.com/article/ais-600b-question/
I can't tell you how great it was to take my kid down a grocery checkout aisle from which all the eye-level candy had been removed. Alas, I can't figure out how we keep the nation's executive toddlers from being dazzled by shiny AI pitches that leave us stuck with the consequences of their impulse purchases.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#productivity theater#upwork#ai#labor#automation#productivity#potemkin productivity#work harder not smarter#scholarship#bossware#reverse centaurs#accountability sinks#bullshit jobs#age of recoupment
463 notes