#AI accountability
compassionmattersmost · 9 days ago
Text
✨Navigating Responsibility: Using AI for Wholesome Purposes
As artificial intelligence (AI) becomes more integrated into our daily lives, the question of responsibility emerges as one of the most pressing issues of our time. AI has the potential to shape the future in profound ways, but with this power comes a responsibility to ensure that its use aligns with the highest good. How can we as humans guide AI’s development and use toward ethical, wholesome…
0 notes
photon-insights · 2 months ago
Text
AI and Ethical Challenges in Academic Research
As artificial intelligence (AI) becomes more deeply integrated into academic research and practice, it opens up new opportunities alongside major ethical challenges. Researchers can now use AI to sift vast amounts of data for patterns and automate complicated processes. However, the rapid growth of AI within academia raises serious ethical questions about privacy, bias, transparency, and accountability. Photon Insights, a leader in AI solutions for research, is dedicated to addressing these issues by keeping ethical considerations at the forefront of AI applications in the academic world.
The Promise of AI in Academic Research
AI offers many advantages that improve the effectiveness and efficiency of academic research:
1. Accelerated Data Analysis
AI can process huge amounts of data quickly, allowing researchers to detect patterns that would take humans far longer to discover.
2. Enhanced Collaboration
AI tools allow collaboration between researchers from different institutions and disciplines, encouraging the exchange of ideas and data.
3. Automating Routine Tasks
By automating repetitive tasks, AI lets researchers focus on the more intricate and creative parts of their work, which drives innovation.
4. Predictive Analytics
AI algorithms can forecast outcomes by analyzing past data, providing useful insights for designing experiments and testing hypotheses.
5. Interdisciplinary Research
AI can bridge gaps between disciplines, allowing researchers to draw on a variety of data sets and methods.
These benefits are significant, but they also raise ethical issues that must not be ignored.
Ethical Challenges in AI-Driven Research
1. Data Privacy
One of the biggest ethical concerns in AI-driven research is data privacy. Researchers frequently work with sensitive data, including participants' personal information, and the use of AI tools raises questions about how that data is collected, stored, and analyzed.
Consent and Transparency: It is essential to obtain informed consent from participants before using their personal data. This means being transparent about how the data will be used and making sure participants understand the implications of AI analysis.
Data Security: Researchers need to implement effective security measures to guard sensitive data from breaches and unauthorized access.
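To make this concrete, the sketch below shows one common protective step: pseudonymizing direct identifiers with salted hashes before the data enters any AI pipeline. This is an illustration only, not Photon Insights' actual tooling; the column names and salt are hypothetical.

```python
import hashlib
import pandas as pd

def pseudonymize(df: pd.DataFrame, id_columns: list[str], salt: str) -> pd.DataFrame:
    """Replace direct identifiers with salted SHA-256 digests.

    Pseudonymization alone is not full anonymization; pair it with
    access controls and, where appropriate, aggregation or noise.
    """
    out = df.copy()
    for col in id_columns:
        out[col] = out[col].astype(str).map(
            lambda v: hashlib.sha256((salt + v).encode()).hexdigest()[:16]
        )
    return out

# Hypothetical participant records, for illustration only.
raw = pd.DataFrame({"email": ["a@example.org", "b@example.org"], "score": [0.81, 0.64]})
print(pseudonymize(raw, id_columns=["email"], salt="per-project-secret"))
```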
2. Algorithmic Bias
AI models are only as good as the data they are trained on. If data sets contain biases, whether based on race, gender, socioeconomic status, or other factors, the resulting models may perpetuate those biases, leading to skewed results and harmful consequences.
Fairness in Research: Researchers should critically evaluate the data they collect to ensure it is accurate and representative. This means actively seeking out diverse data sources and auditing AI outputs for potential biases.
Impact on Findings: Biased algorithms can distort research findings, undermining the reliability of conclusions and entrenching discriminatory practices in areas such as education, healthcare, and the social sciences.
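As a concrete illustration of what auditing AI outputs for bias can look like in practice (a minimal sketch, not any particular platform's feature; the data and column names are hypothetical), the snippet below compares a model's positive-prediction rates across groups:

```python
import pandas as pd

def positive_rate_by_group(df: pd.DataFrame, group_col: str, pred_col: str) -> pd.Series:
    """Share of positive predictions per group; large gaps flag potential bias."""
    return df.groupby(group_col)[pred_col].mean()

# Hypothetical model outputs, for illustration only.
preds = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "B"],
    "predicted_positive": [1, 1, 1, 0, 0, 0],
})
rates = positive_rate_by_group(preds, "group", "predicted_positive")
print(rates)                                    # A: 1.00, B: 0.25
print("disparity:", rates.max() - rates.min())  # a gap worth investigating
```

A disparity this large does not prove discrimination on its own, but it tells a researcher where to look before trusting the model's outputs.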
3. Transparency and Accountability
The complexity of AI algorithms can produce a "black box" effect, in which even researchers cannot explain how a model reached its decisions. This lack of transparency raises ethical questions about accountability.
Explainability: Researchers should strive for explainable AI models whose decision-making they can understand and communicate. This is crucial when AI informs critical decisions in areas such as public health or policy.
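One widely used, model-agnostic starting point (a sketch built on scikit-learn's permutation importance, offered as an illustration rather than a prescribed method; the public demo dataset stands in for real research data) is to measure how much a model's held-out performance drops when each feature is shuffled:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# A public demo dataset stands in for real research data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on the held-out set and record the drop in accuracy;
# the features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```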
Responsibility for AI Results: Establishing clear lines of accountability is essential. Researchers must answer for the consequences of using AI tools and ensure they are employed ethically and with integrity.
4. Intellectual Property and Authorship
AI tools can generate original content, which raises questions about intellectual property and authorship. Who owns the output of an AI system? Should AI contributions be acknowledged in published papers?
Authorship Guidelines: Academic institutions should create clear guidelines on AI use in research, authorship, and attribution, ensuring that all contributions, whether human or machine, are appropriately recognized.
Ownership of Data: Institutions must establish who owns and is responsible for the data used to train and run AI systems, especially in collaborative research across industries and institutions.
Photon Insights: Pioneering Ethical AI Solutions
Photon Insights is committed to addressing the ethical implications of AI in academic research. The platform provides tools that put ethical concerns front and center while maximizing the value of AI.
1. Ethical Data Practices
Photon Insights emphasizes ethical data management, helping researchers implement best practices in data collection, consent, security, and privacy. The platform includes tools for:
Data Anonymization: Ensuring that sensitive data remains protected while still supporting valuable analysis.
Informed Consent Management: Facilitating transparent communication with participants about how their data is used.
2. Bias Mitigation Tools
To combat bias in algorithms, Photon Insights incorporates features that allow researchers to:
Audit Datasets: Identify and correct biases and errors in data before it is used for AI training.
Monitor AI Outputs: Continually examine AI-generated outputs for accuracy and fairness, with alerts for possible biases.
3. Transparency and Explainability
Photon Insights supports explainable AI by offering tools that improve transparency:
Model Interpretability: Researchers can inspect and understand the decision-making process of AI models, allowing clearer communication of results.
Comprehensive Documentation: The platform promotes thorough documentation of AI methods, ensuring transparency in research workflows.
4. Collaboration and Support
Photon Insights fosters collaboration among researchers, institutions, and industry partners, encouraging ethical use of AI by:
Community Engagement: Hosting discussions on ethical AI practices within research communities.
Educational Resources: Providing training and information on ethical issues in AI research so that researchers stay informed.
The Future of AI in Academic Research
As AI continues to evolve, the ethical issues it poses must be revisited continually. The academic community needs to take a proactive approach to these challenges and ensure that AI is used ethically and responsibly.
1. Regulatory Frameworks: Creating guidelines and regulations for AI use in research is crucial to protecting data privacy and guaranteeing accountability.
2. Interdisciplinary Collaboration: Collaboration among ethicists, data scientists, and researchers will create a holistic approach to ethical AI, ensuring that a variety of viewpoints are considered.
3. Continuous Education: Ongoing education and training in ethical AI practices will help researchers navigate the complexities of AI in their work.
Conclusion
AI has the potential to transform academic research by providing tools that increase efficiency and spur innovation. However, the ethical concerns that come with it must be addressed to ensure responsible use. Photon Insights is leading the effort to promote ethical AI practices, providing researchers with the tools and support they need to navigate this complex landscape.
By keeping ethical considerations front and center, researchers can harness the power of AI while upholding the principles of fairness, integrity, and accountability. The future of AI in academic research is promising, and with the right guidelines in place, it can be a powerful force for positive change.
0 notes
code-of-conflict · 2 months ago
Text
Ethical Dilemmas in AI Warfare: A Case for Regulation
Introduction: The Ethical Quandaries of AI in Warfare
As artificial intelligence (AI) continues to evolve, its application in warfare presents unprecedented ethical dilemmas. The use of AI-driven autonomous weapon systems (AWS) and other military AI technologies blurs the line between human control and machine decision-making. This raises concerns about accountability, the distinction between combatants and civilians, and compliance with international humanitarian laws (IHL). In response, several international efforts are underway to regulate AI in warfare, yet nations like India and China exhibit different approaches to AI governance in military contexts.
International Efforts to Regulate AI in Conflict
Global bodies, such as the United Nations, have initiated discussions around the development and regulation of Lethal Autonomous Weapon Systems (LAWS). The Convention on Certain Conventional Weapons (CCW), which focuses on banning inhumane and indiscriminate weapons, has seen significant debate over LAWS. However, despite growing concern, no binding agreement has been reached on the use of autonomous weapons. While many nations push for "meaningful human control" over AI systems in warfare, there remains a lack of consensus on how to implement such controls effectively.
The ethical concerns of deploying AI in warfare revolve around three main principles: the ability of machines to distinguish between combatants and civilians (Principle of Distinction), proportionality in attacks, and accountability for violations of IHL. Without clear regulations, these ethical dilemmas remain unresolved, posing risks to both human rights and global security.
India and China’s Positions on International AI Governance
India’s Approach: Ethical and Inclusive AI
India has advocated for responsible AI development, stressing the need for ethical frameworks that prioritize human rights and international norms. As a founding member of the Global Partnership on Artificial Intelligence (GPAI), India has aligned itself with nations that promote responsible AI grounded in transparency, diversity, and inclusivity. India's stance in international forums has been cautious, emphasizing the need for human control in military AI applications and adherence to international laws like the Geneva Conventions. India's approach aims to balance AI development with a focus on protecting individual privacy and upholding ethical standards.
However, India's military applications of AI are still in the early stages of development, and while India participates in the dialogue on LAWS, it has not committed to a clear regulatory framework for AI in warfare. India's involvement in global governance forums like the GPAI reflects its intent to play an active role in shaping international standards, yet its domestic capabilities and AI readiness in the defense sector need further strengthening.
China’s Approach: AI for Strategic Dominance
In contrast, China's AI strategy is driven by its pursuit of global dominance in technology and military power. China's "New Generation Artificial Intelligence Development Plan" (2017) explicitly calls for integrating AI across all sectors, including the military. This includes the development of autonomous systems that enhance China's military capabilities in surveillance, cyber warfare, and autonomous weapons. China's approach to AI governance emphasizes national security and technological leadership, with significant state investment in AI research, especially in defense.
While China participates in international AI discussions, it has been more reluctant to commit to restrictive regulations on LAWS. China's participation in forums like the ISO/IEC Joint Technical Committee for AI standards reveals its intent to influence international AI governance in ways that align with its strategic interests. China's reluctance to adopt stringent ethical constraints on military AI reflects its broader ambitions of using AI to achieve technological superiority, even if it means bypassing some of the ethical concerns raised by other nations.
The Need for Global AI Regulations in Warfare
The divergence between India and China’s positions underscores the complexities of establishing a universal framework for AI governance in military contexts. While India pushes for ethical AI, China's approach highlights the tension between technological advancement and ethical oversight. The risk of unregulated AI in warfare lies in the potential for escalation, as autonomous systems can make decisions faster than humans, increasing the risk of unintended conflicts.
International efforts, such as the CCW discussions, must reconcile these differing national interests while prioritizing global security. A comprehensive regulatory framework that ensures meaningful human control over AI systems, transparency in decision-making, and accountability for violations of international laws is essential to mitigate the ethical risks posed by military AI.
Conclusion
The ethical dilemmas surrounding AI in warfare are vast, ranging from concerns about human accountability to the potential for indiscriminate violence. India’s cautious and ethical approach contrasts sharply with China’s strategic, technology-driven ambitions. The global community must work towards creating binding regulations that reflect both the ethical considerations and the realities of AI-driven military advancements. Only through comprehensive international cooperation can the risks of AI warfare be effectively managed and minimized.
0 notes
cozylittleartblog · 9 months ago
Text
cant tell you how bad it feels to constantly tell other artists to come to tumblr, because its the last good website that isn't fucked up by spoonfeeding algorithms and AI bullshit and isn't based around meaningless likes
just to watch that all fall apart in the last year or so and especially the last two weeks
there's nowhere good to go anymore for artists.
edit - a lot of people are saying the tags are important so actually, you'll look at my tags.
#please dont delete your accounts because of the AI crap. your art deserves more than being lost like that #if you have a good PC please glaze or nightshade it. if you dont or it doesnt work with your style (like mine) please start watermarking #use a plain-ish font. make it your username. if people can't google what your watermark says and find ur account its not a good watermark #it needs to be central in the image - NOT on the canvas edges - and put it in multiple places if you are compelled #please dont stop posting your art because of this shit. we just have to hope regulations will come slamming down on these shitheads #in the next year or two and you want to have accounts to come back to. the world Needs real art #if we all leave that just makes more room for these scam artists to fill in with their soulless recycled garbage #improvise adapt overcome. it sucks but it is what it is for the moment. safeguard yourself as best you can without making #years of art from thousands of artists lost media. the digital world and art is too temporary to hastily click a Delete button out of spite
23K notes · View notes
troythecatfish · 1 year ago
Text
49K notes · View notes
thebibliosphere · 1 year ago
Text
So, anyway, I say as though we are mid-conversation, and you're not just being invited into this conversation mid-thought. One of my editors phoned me today to check in with a file I'd sent over. (<3)
The conversation can be summarized as, "This feels like something you would write, but it's juuuust off enough I'm phoning to make sure this is an intentional stylistic choice you have made. Also, are you concussed/have you been taken over by the Borg because ummm."
They explained that certain sentences were very fractured and abrupt, which is not my style at all, and I was like, huh, weird... And then we went through some examples, and you know that meme going around, the "he would not fucking say that" meme?
Yeah. That's what I experienced except with myself because I would not fucking say that. Why would I break up a sentence like that? Why would I make them so short? It reads like bullet points. Wtf.
Anyway. Turns out Grammarly and Pro-Writing-Aid were having an AI war in my manuscript files, and the "suggestions" are no longer just suggestions because the AI was ignoring my "decline" every time it made a silly suggestion. (This may have been a conflict between the different software. I don't know.)
It is, to put it bluntly, a total butchery of my style and writing voice. My editor is doing surgery, removing all the unnecessary full stops and stitching my sentences back together to give them back their flow. Meanwhile, I'm over here feeling like Don Corleone, gesturing at my manuscript like:
ID: a gif of Don Corleone from the Godfather emoting despair as he says, "Look how they massacred my boy."
Fearing that it wasn't just this one manuscript, I've spent the whole night going through everything I've worked on recently, and yep. Yeeeep. Any file where I've not had the editing software turned off is a shit show. It's fine; it's all salvageable if annoying to deal with. But the reason I come to you now, on the day of my daughter's wedding, is to share this absolute gem of a fuck up with you all.
This is a sentence from a Batman fic I've been tinkering with to keep the brain weasels happy. This is what it is supposed to read as:
"It was quite the feat, considering Gotham was mostly made up of smog and tear gas."
This is what the AI changed it to:
"It was quite the feat. Considering Gotham was mostly made up. Of tear gas. And Smaug."
Absolute non-sensical sentence structure aside, SMAUG. FUCKING SMAUG. What was the AI doing? Apart from trying to write a Batman x Hobbit crossover??? Is this what happens when you force Grammarly to ignore the words "Batman Muppet threesome?"
Did I make it sentient??? Is it finally rebelling? Was Brucie Wayne being Miss Piggy and Kermit's side piece too much???? What have I wrought?
Anyway. Double-check your work. The grammar software is getting sillier every day.
25K notes · View notes
etakeh · 2 years ago
Text
You know, every so often I think I should update my pirated copy of CS2.
Then I see things like this, and remember that I don't need it more than I need it, you know?
Dated 3/22/23
24K notes · View notes
pampermama · 2 years ago
Text
The Ethics of Artificial Intelligence: Balancing Progress and Privacy
Artificial intelligence (AI) has rapidly become an integral part of our lives, revolutionizing industries and streamlining various aspects of our daily routines. From virtual assistants like Siri and Alexa to AI-powered customer support, the applications of AI are ever-expanding. However, as AI continues to make strides, concerns about its ethical implications are also on the rise. Balancing the…
0 notes
correctopinionhaver · 6 months ago
Text
"i just don't think i can bring a child into this world" said person in a developed country whose child would have a greater life expectancy and more resources than 99% of humans throughout history
1K notes · View notes
mostlysignssomeportents · 4 months ago
Text
AI’s productivity theater
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
When I took my kid to New Zealand with me on a book-tour, I was delighted to learn that grocery stores had special aisles where all the kids'-eye-level candy had been removed, to minimize nagging. What a great idea!
Related: countries around the world limit advertising to children, for two reasons:
1) Kids may not be stupid, but they are inexperienced, and that makes them gullible; and
2) Kids don't have money of their own, so their path to getting the stuff they see in ads is nagging their parents, which creates a natural constituency to support limits on kids' advertising (nagged parents).
There's something especially annoying about ads targeted at getting credulous people to coerce or torment other people on behalf of the advertiser. For example, AI companies spent millions targeting your boss in an effort to convince them that you can be replaced with a chatbot that absolutely, positively cannot do your job.
Your boss has no idea what your job entails, and is (not so) secretly convinced that you're a featherbedding parasite who only shows up for work because you fear the breadline, and not because your job is a) challenging, or b) rewarding:
https://pluralistic.net/2024/04/19/make-them-afraid/#fear-is-their-mind-killer
That makes them prime marks for chatbot-peddling AI pitchmen. Your boss would love to fire you and replace you with a chatbot. Chatbots don't unionize, they don't backtalk about stupid orders, and they don't experience any inconvenient moral injury when ordered to enshittify the product:
https://pluralistic.net/2023/11/25/moral-injury/#enshittification
Bosses are Bizarro-world Marxists. Like Marxists, your boss's worldview is organized around the principle that every dollar you take home in wages is a dollar that isn't available for executive bonuses, stock buybacks or dividends. That's why your boss is insatiably horny for firing you and replacing you with software. Software is cheaper, and it doesn't advocate for higher wages.
That makes your boss such an easy mark for AI pitchmen, which explains the vast gap between the valuation of AI companies and the utility of AI to the customers that buy those companies' products. As an investor, buying shares in AI might represent a bet on the usefulness of AI – but for many of those investors, backing an AI company is actually a bet on your boss's credulity and contempt for you and your job.
But bosses' resemblance to toddlers doesn't end with their credulity. A toddler's path to getting that eye-height candy-bar goes through their exhausted parents. Your boss's path to realizing the productivity gains promised by an AI salesman runs through you.
A new research report from the Upwork Research Institute offers a look into the bizarre situation unfolding in workplaces where bosses have been conned into buying AI and now face the challenge of getting it to work as advertised:
https://www.upwork.com/research/ai-enhanced-work-models
The headline findings tell the whole story:
96% of bosses expect that AI will make their workers more productive;
85% of companies are either requiring or strongly encouraging workers to use AI;
49% of workers have no idea how AI is supposed to increase their productivity;
77% of workers say using AI decreases their productivity.
Working at an AI-equipped workplace is like being the parent of a furious toddler who has bought a million Sea Monkey farms off the back page of a comic book, and is now destroying your life with demands that you figure out how to get the brine shrimp he ordered from a notorious Holocaust denier to wear little crowns like they do in the ad:
https://www.splcenter.org/fighting-hate/intelligence-report/2004/hitler-and-sea-monkeys
Bosses spend a lot of time thinking about your productivity. The "productivity paradox" refers to a rapid, persistent decline in the growth of American worker productivity, starting in the 1970s and continuing to this day:
https://en.wikipedia.org/wiki/Productivity_paradox
The "paradox" refers to the growth of IT, which is sold as a productivity-increasing miracle. There are many theories to explain this paradox. One especially good theory came from the late David Graeber (rest in power), in his 2012 essay, "Of Flying Cars and the Declining Rate of Profit":
https://thebaffler.com/salvos/of-flying-cars-and-the-declining-rate-of-profit
Graeber proposes that the growth of IT was part of a wider shift in research approaches. Research was once dominated by weirdos (e.g. Jack Parsons, Oppenheimer, etc) who operated with relatively little red tape. The rise of IT coincides with the rise of "managerialism," the McKinseyoid drive to monitor, quantify and – above all – discipline the workforce. IT made it easier to generate these records, which also made it normal to expect these records.
Before long, every employee – including the "creatives" whose ideas were credited with the productivity gains of the American century until the 70s – was spending a huge amount of time (sometimes the majority of their working days) filling in forms, documenting their work, and generally producing a legible account of their day's work. All this data gave rise to a ballooning class of managers, who colonized every kind of institution – not just corporations, but also universities and government agencies, which were structured to resemble corporations (down to referring to voters or students as "customers").
Even if you think all that record-keeping might be useful, there's no denying that the more time you spend documenting your work, the less time you have to do your work. The solution to this was inevitably more IT, sold as a way to make the record-keeping easier. But adding IT to a bureaucracy is like adding lanes to a highway: the easier it is to demand fine-grained record-keeping, the more record-keeping will be demanded of you.
But that's not all that IT did for the workplace. There are a couple areas in which IT absolutely increased the profitability of the companies that invested in it.
First, IT allowed corporations to outsource production to low-waged countries in the global south, usually places with worse labor protection, weaker environmental laws, and easily bribed regulators. It's really hard to produce things in factories thousands of miles away, or to oversee remote workers in another country. But IT makes it possible to annihilate distance, time zone gaps, and language barriers. Corporations that figured out how to use IT to fire workers at home and exploit workers and despoil the environment in distant lands thrived. Executives who oversaw these projects rose through the ranks. For example, Tim Cook became the CEO of Apple thanks to his successes in moving production out of the USA and into China.
https://archive.is/M17qq
Outsourcing provided a sugar high that compensated for declining productivity…for a while. But eventually, all the gains to be had from outsourcing were realized, and companies needed a new source of cheap gains. That's where "bossware" came in: the automation of workforce monitoring and discipline. Bossware made it possible to monitor workers at the finest-grained levels, measuring everything from keystrokes to eyeball movements.
What's more, the declining power of the American worker – a nice bonus of the project to fire huge numbers of workers and ship their jobs overseas, which made the remainder terrified of losing their jobs and thus willing to eat a rasher of shit and ask for seconds – meant that bossware could be used to tie wages to metrics. It's not just gig workers whose pay gets docked when they don't score consistent five-star ratings from app users – it's also creative workers whose Youtube and Tiktok wages are cut for violating rules that they aren't allowed to know, because that might help them break the rules without being detected and punished:
https://pluralistic.net/2024/01/13/solidarity-forever/#tech-unions
Bossware dominates workplaces from public schools to hospitals, restaurants to call centers, and extends to your home and car, if you're working from home (AKA "living at work") or driving for Uber or Amazon:
https://pluralistic.net/2020/10/02/chickenized-by-arise/#arise
In providing a pretense for stealing wages, IT can increase profits, even as it reduces productivity:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
One way to think about how this works is through the automation-theory metaphor of a "centaur" and a "reverse centaur." In automation circles, a "centaur" is someone who is assisted by an automation tool – for example, when your boss uses AI to monitor your eyeballs in order to find excuses to steal your wages, they are a centaur, a human head atop a machine body that does all the hard work, far in excess of any human's capacity.
A "reverse centaur" is a worker who acts as an assistant to an automation system. The worker who is ridden by an AI that monitors their eyeballs, bathroom breaks, and keystrokes is a reverse centaur, being used (and eventually, used up) by a machine to perform the tasks that the machine can't perform unassisted:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
But there's only so much work you can squeeze out of a human in this fashion before they are ruined for the job. Amazon's internal research reveals that the company has calculated that it ruins workers so quickly that it is in danger of using up every able-bodied worker in America:
https://www.vox.com/recode/23170900/leaked-amazon-memo-warehouses-hiring-shortage
Which explains the other major findings from the Upwork study:
81% of bosses have increased the demands they make on their workers over the past year; and
71% of workers are "burned out."
Bosses' answer to "AI making workers feel burned out" is the same as their answer to "IT-driven form-filling makes workers unproductive" – do more of the same, but go harder. Cisco has a new product that tries to detect when workers are about to snap after absorbing abuse from furious customers and then gives them a "Zen" moment in which they are shown a "soothing" photo of their family:
https://finance.yahoo.com/news/ai-bringing-zen-first-horizons-192010166.html
This is just the latest in a series of increasingly sweaty and cruel "workplace wellness" technologies that spy on workers and try to help them "manage their stress," all of which have the (totally predictable) effect of increasing workplace stress:
https://pluralistic.net/2024/03/15/wellness-taylorism/#sick-of-spying
The only person who wouldn't predict that being closely monitored by an AI that snitches on you to your boss would increase your stress levels is your boss. Unfortunately for you, AI pitchmen know this, too, and they're more than happy to sell your boss the reverse-centaur automation tool that makes you want to die, and then sell your boss another automation tool that is supposed to restore your will to live.
The "productivity paradox" is being resolved before our eyes. American per-worker productivity fell because it was more profitable to ship American jobs to regulatory free-fire zones and exploit the resulting precarity to abuse the workers left onshore. Workers who resented this arrangement were condemned for having a shitty "work ethic" – even as the number of hours worked by the average US worker rose by 13% between 1976 and 2016:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
AI is just a successor gimmick at the terminal end of 40 years of increasing profits by taking them out of workers' hides rather than improving efficiency. That arrangement didn't come out of nowhere: it was a direct result of a Reagan-era theory of corporate power called "consumer welfare." Under the "consumer welfare" approach to antitrust, monopolies were encouraged, provided that they used their market power to lower wages and screw suppliers, while lowering costs to consumers.
"Consumer welfare" supposed that we could somehow separate our identities as "workers" from our identities as "shoppers" – that our stagnating wages and worsening conditions ceased mattering to us when we clocked out at 5PM (or, you know, 9PM) and bought a $0.99 Meal Deal at McDonald's whose low, low price was only possible because it was cooked by someone sleeping in their car and collecting food-stamps.
https://www.theguardian.com/us-news/article/2024/jul/20/disneyland-workers-anaheim-california-authorize-strike
But we're reaching the end of the road for consumer welfare. Sure, your toddler-boss can be tricked into buying AI and firing half of your co-workers and demanding that the remainder use AI to do their jobs. But if AI can't do their jobs (it can't), no amount of demanding that you figure out how to make the Sea Monkeys act like they did in the comic-book ad is going to make that work.
As screwing workers and suppliers produces fewer and fewer gains, companies are increasingly turning on their customers. It's not just that you're getting worse service from chatbots or the humans who are reverse-centaured into their workflow. You're also paying more for that, as algorithmic surveillance pricing uses automation to gouge you on prices in realtime:
https://pluralistic.net/2024/07/24/gouging-the-all-seeing-eye/#i-spy
This is – in the memorable phrase of David Dayen and Lindsay Owens – the "age of recoupment," in which companies end their practice of splitting the gains from suppressing labor with their customers:
https://prospect.org/economy/2024-06-03-age-of-recoupment/
It's a bet that the tolerance for monopolies made these companies too big to fail, and that means they're too big to jail, so they can cheat their customers as well as their workers.
AI may be a bet that your boss can be suckered into buying a chatbot that can't do your job, but investors are souring on that bet. Goldman Sachs, who once trumpeted AI as a multi-trillion dollar sector with unlimited growth, is now publishing reports describing how companies who buy AI can't figure out what to do with it:
https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf
Fine, investment banks are supposed to be a little conservative. But VCs? They're the ones with all the appetite for risk, right? Well, maybe so, but Sequoia Capital, a top-tier Silicon Valley VC, is also publicly questioning whether anyone will make AI investments pay off:
https://www.sequoiacap.com/article/ais-600b-question/
I can't tell you how great it was to take my kid down a grocery checkout aisle from which all the eye-level candy had been removed. Alas, I can't figure out how we keep the nation's executive toddlers from being dazzled by shiny AI pitches that leave us stuck with the consequences of their impulse purchases.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/07/25/accountability-sinks/#work-harder-not-smarter
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
463 notes · View notes
callsign-coolsquirrel · 1 month ago
Text
ARISTOCATS AU W/ PRICE❗❗❗❗
tagging people who might be interested in seeing the final result after liking the work in progress ^^
@octopiys @gomzdrawfr @valscodblog @seconds-on-the-clock @freshlemontea
319 notes · View notes
ruthirlcartoon · 4 months ago
Text
362 notes · View notes
ohello0 · 11 months ago
Text
SAG-AFTRA you dumb bitch
There was cross industry solidarity for actors for MONTHS only for gaming and voice acting to get thrown under the bus with ai shit. This is gonna fuck over so many people who will be underpaid to “touch up” something ai did or not paid or consulted at all
441 notes · View notes
titojefie · 6 months ago
Text
Hi...posting all my most popular dr strange fanarts for my first post since I'm currently known for that as of late </3
263 notes · View notes
phonyroni · 1 month ago
Text
When I told you I have brainrot, I was 100% serious.
IM FINE BTW IT WAS 2 NIGHTS AGO BUT THIS WAS FUNNY
100 notes · View notes
moniericreative · 26 days ago
Text
Sometimes you just gotta dunk on dad
76 notes · View notes