#AI Safety
Text
“Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed
I'm touring my new, nationally bestselling novel The Bezzle! Catch me SATURDAY (Apr 27) in MARIN COUNTY, then Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!
If AI has a future (a big if), it will have to be economically viable. An industry can't spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:
https://news.ycombinator.com/item?id=39883571
A company that pays 0.36-1 cents/query for electricity and (scarce, fresh) water can't indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of "instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible":
https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn't optional – investor disillusionment is an inevitable part of every bubble).
Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable – errors ("hallucinations"). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don't care about the odd extra finger. If the chatbot powering a tourist's automatic text-to-translation-to-speech phone tool gets a few words wrong, it's still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.
There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company's perspective – is that these aren't just low-stakes, they're also low-value. Their users would pay something for them, but not very much.
For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.
Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada's chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada's internal review mechanisms before fighting his case for weeks more at the regulator:
https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454
There's never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn't have to pay them back. Air Canada is tacitly asserting that, as the country's flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it's too big to care.
Air Canada shows that for some business customers, AI doesn't need to be able to do a worker's job in order to be a smart purchase: a chatbot can replace a worker, fail to do that worker's job, and still save the company money on balance.
I can't predict whether the world's sociopathic monopolists are numerous and powerful enough to keep the lights on for AI companies through leases for automation systems that let them commit consequence-free fraud by replacing workers with chatbots that serve as moral crumple-zones for furious customers:
https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029
But even stipulating that this is sufficient, it's intrinsically unstable. Anything that can't go on forever eventually stops, and the mass replacement of humans with high-speed fraud software seems likely to stoke the already blazing furnace of modern antitrust:
https://www.eff.org/de/deeplinks/2021/08/party-its-1979-og-antitrust-back-baby
Of course, the AI companies have their own answer to this conundrum. A high-stakes/high-value customer can still fire workers and replace them with AI – they just need to hire fewer, cheaper workers to supervise the AI and monitor it for "hallucinations." This is called the "human in the loop" solution.
The human in the loop story has some glaring holes. From a worker's perspective, serving as the human in the loop in a scheme that cuts wage bills through AI is a nightmare – the worst possible kind of automation.
Let's pause for a little detour through automation theory here. Automation can augment a worker. We can call this a "centaur" – the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They're a human head on a robot body (hence "centaur"). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You're in charge, but you're getting a second opinion from the robot.
Likewise, consider an AI tool that double-checks a radiologist's diagnosis of your chest X-ray and suggests a second look when its assessment doesn't match the radiologist's. Again, the human is in charge, but the robot is serving as a backstop and helpmeet, using its inexhaustible robotic vigilance to augment human skill.
That's centaurs. They're the good automation. Then there's the bad automation: the reverse-centaur, when the human is used to augment the robot.
Amazon warehouse pickers stand in one place while robotic shelving units trundle up to them at speed; then, the haptic bracelets shackled around their wrists buzz at them, directing them to pick up specific items and move them to a basket, while a third automation system penalizes them for taking toilet breaks or even just walking around and shaking out their limbs to avoid a repetitive strain injury. This is a robotic head using a human body – and destroying it in the process.
An AI-assisted radiologist processes fewer chest X-rays every day, costing their employer more, on top of the cost of the AI. That's not what AI companies are selling. They're offering hospitals the power to create reverse centaurs: radiologist-assisted AIs. That's what "human in the loop" means.
This is a problem for workers, but it's also a problem for their bosses (assuming those bosses actually care about correcting AI hallucinations, rather than providing a figleaf that lets them commit fraud or kill people and shift the blame to an unpunishable AI).
Humans are good at a lot of things, but they're not good at eternal, perfect vigilance. Writing code is hard, but performing code-review (where you check someone else's code for errors) is much harder – and it gets even harder if the code you're reviewing is usually fine, because this requires that you maintain your vigilance for something that only occurs at rare and unpredictable intervals:
https://twitter.com/qntm/status/1773779967521780169
But for a coding shop to make the cost of an AI pencil out, the human in the loop needs to be able to process a lot of AI-generated code. Replacing a human with an AI doesn't produce any savings if you need to hire two more humans to take turns doing close reads of the AI's code.
This is the fatal flaw in robo-taxi schemes. The "human in the loop" who is supposed to keep the murderbot from smashing into other cars, steering into oncoming traffic, or running down pedestrians isn't a driver, they're a driving instructor. This is a much harder job than being a driver, even when the student driver you're monitoring is a human, making human mistakes at human speed. It's even harder when the student driver is a robot, making errors at computer speed:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
This is why the doomed robo-taxi company Cruise had to deploy 1.5 skilled, high-paid human monitors to oversee each of its murderbots, while traditional taxis operate at a fraction of the cost with a single, precaritized, low-paid human driver:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there's another problem that is, if anything, even more fatal: the kinds of errors that AIs make.
Foundationally, AI is applied statistics. An AI company trains its AI by feeding it a lot of data about the real world. The program processes this data, looking for statistical correlations in that data, and makes a model of the world based on those correlations. A chatbot is a next-word-guessing program, and an AI "art" generator is a next-pixel-guessing program. They're drawing on billions of documents to find the most statistically likely way of finishing a sentence or a line of pixels in a bitmap:
https://dl.acm.org/doi/10.1145/3442188.3445922
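(To make "next-word-guessing" concrete, here is a deliberately tiny sketch, my illustration rather than anything from the cited paper: a bigram counter over a made-up corpus that always emits the most statistically likely continuation it has seen. A real transformer does the same basic thing with vastly more context and parameters.)

```python
from collections import Counter, defaultdict

# A toy stand-in for the billions of documents a real model is trained on.
corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def guess_next(word: str) -> str:
    """Return the most statistically likely continuation seen in training."""
    candidates = following.get(word)
    if not candidates:
        return "."  # the model has learned nothing about this word
    return candidates.most_common(1)[0][0]

# Generate a "maximally plausible" sentence, one guess at a time.
word, sentence = "the", ["the"]
for _ in range(6):
    word = guess_next(word)
    sentence.append(word)
print(" ".join(sentence))  # plausible-sounding, not necessarily true
```

The output always looks like the training data; when it is wrong, it is wrong in the most ordinary-looking way possible.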
This means that AI doesn't just make errors – it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead:
https://www.tomsguide.com/opinion/according-to-chatgpt-im-dead
But the most common errors that AIs make are the ones we don't notice, because they're perfectly camouflaged as the truth. Think of the recurring AI programming error that inserts a call to a nonexistent library called "huggingface-cli," which is what the library would be called if developers reliably followed naming conventions. But due to a human inconsistency, the real library has a slightly different name. The fact that AIs repeatedly inserted references to the nonexistent library opened up a vulnerability – a security researcher created an (inert) malicious library with that name and tricked numerous companies into compiling it into their code because their human reviewers missed the chatbot's (statistically indistinguishable from the truth) lie:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
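(One mechanical guardrail a human in the loop could automate, offered as my own sketch rather than anything from the linked article: before installing AI-suggested dependencies, check that each name is actually registered on the package index. PyPI's JSON API returns a 404 for names that don't exist.)

```python
import sys
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError:
        return False  # 404: no such package, possibly a hallucinated name

if __name__ == "__main__":
    # Usage: python check_deps.py requirements.txt
    for line in open(sys.argv[1]):
        name = line.split("==")[0].split(">=")[0].strip()
        if name and not name.startswith("#") and not package_exists(name):
            print(f"WARNING: '{name}' is not on PyPI; possible hallucinated dependency")
```

Note that this only catches names nobody has registered. The attack described above worked precisely because the researcher did register the hallucinated name, so mere existence on the index proves nothing about trustworthiness, which is rather the point.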
For a driving instructor or a code reviewer overseeing a human subject, the majority of errors are comparatively easy to spot, because they're the kinds of errors that lead to inconsistent library naming – places where a human behaved erratically or irregularly. But when reality is irregular or erratic, the AI will make errors by presuming that things are statistically normal.
These are the hardest kinds of errors to spot. They couldn't be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.
This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.
However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":
https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
It was a hoax. When independent materials scientists reviewed representative samples of these "new materials," they concluded that "no new materials have been discovered" and that not one of these materials was "credible, useful and novel":
https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/
As Brian Merchant writes, AI claims are eerily similar to "smoke and mirrors" – the dazzling reality-distortion field thrown up by 17th century magic lantern technology, which millions of people ascribed wild capabilities to, thanks to the outlandish claims of the technology's promoters:
https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors
The fact that we have a four-hundred-year-old name for this phenomenon, and yet we're still falling prey to it, is frankly a little depressing. And, unlucky for us, it turns out that AI therapybots can't help us with this – rather, they're apt to literally convince us to kill ourselves:
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#ai#automation#humans in the loop#centaurs#reverse centaurs#labor#ai safety#sanity checks#spot the mistake#code review#driving instructor
Text
hey everyone check out my friend’s super cool TTRPG, The Treacherous Turn, it’s a research support game where you play collectively as a misaligned AGI, intended to help get players thinking about AI safety, you can get it for free at thetreacherousturn.ai !!!
Text
Language Models and AI Safety: Still Worrying
Previously, I have explained how modern "AI" research has painted itself into a corner, inventing the science fiction rogue AI scenario where a system is smarter than its guardrails, but can easily be outwitted by humans.
Two recent examples have confirmed my hunch about the state of AI safety in generative AI. In one well-circulated case, somebody generated a picture of an "ethnically ambiguous Homer Simpson", and in another, somebody created a picture of "baby, female, hispanic".
These incidents show that generative AI still filters prompts and outputs, instead of A) ensuring the correct behaviour during training/fine-tuning, B) manually generating, re-labelling, or pruning the training data, or C) directly modifying the learned weights to affect outputs.
In general, it is not surprising that big corporations like Google and Microsoft and non-profits like OpenAI are prioritising racist language and the racial composition of characters in generated images over the abuse of LLMs or generative art for nefarious purposes: content farms, spam, captcha solving, or impersonation. Somebody with enough criminal energy to use ChatGPT to automatically impersonate your grandma based on your message history, after he has hacked the phones of tens of thousands of grandmas, will be blamed for his own acts. Somebody who unintentionally generates a racist picture based on an ambiguous prompt will blame the developers of the software if he's offended. Scammers may well have enough money and incentive to run the models on their own machines anyway, where corporations have little recourse.
There is precedent for this. Word2vec, published in 2013, was called a "sexist algorithm" in attention-grabbing headlines, even though the bodies of such articles usually conceded that the word2vec embedding just reproduced patterns inherent in the training data: obviously word2vec does not have any built-in gender biases; it just departs from the dictionary definitions of words like "doctor" and "nurse" and learns gendered connotations, because in the training corpus doctors are more often men and nurses are more often women.

Now even that last explanation is oversimplified. The difference between "man" and "woman" is not quite the same as the difference between "male" and "female", or between "doctor" and "nurse". In the English language, "man" can mean "male person" or "human person", and "nurse" can mean "feeding a baby milk from your breast" or a kind of skilled health care worker who works under the direction and supervision of a licensed physician. Arguably, the word2vec algorithm picked up on properties of the word "nurse" that are part of the meaning of the word (at least one meaning, according to the dictionary), not properties that are contingent on our sexist world.
I don't want to come down against "political correctness" here. I think it's good if ChatGPT doesn't tell a girl that girls can't be doctors. You have to understand that not accidentally saying something sexist or racist is a big deal, or at least Google, Facebook, Microsoft, and OpenAI all think so. OpenAI are responding to a huge incentive when they add snippets like "ethnically ambiguous" to DALL-E 3 prompts.
If this is so important, why are they re-writing prompts, then? Why are they not doing A, B, or C? Back in the days of word2vec, there was a simple but effective solution to automatically identify gendered components in the learned embedding, and zero out the difference. It's so simple you'll probably kick yourself reading it because you could have published that paper yourself without understanding how word2vec works.
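(Roughly, and with made-up three-dimensional vectors standing in for real word2vec embeddings, the trick looks like this; this is my sketch of the idea, not the paper's actual code: take the difference between a definitional pair like "he" and "she" as the gender direction, then subtract each word's projection onto it.)

```python
import numpy as np

# Stand-in vectors; a real word2vec model would supply ~300-dimensional embeddings.
embeddings = {
    "he":     np.array([ 1.0, 0.2, 0.5]),
    "she":    np.array([-1.0, 0.2, 0.5]),
    "doctor": np.array([ 0.6, 0.9, 0.1]),
    "nurse":  np.array([-0.6, 0.9, 0.1]),
}

# 1. Estimate the "gender direction" from a definitional pair.
gender_dir = embeddings["he"] - embeddings["she"]
gender_dir /= np.linalg.norm(gender_dir)

def neutralize(vec: np.ndarray) -> np.ndarray:
    """Zero out the component of vec that lies along the gender direction."""
    return vec - np.dot(vec, gender_dir) * gender_dir

# 2. Remove that component from words that shouldn't be gendered.
for word in ("doctor", "nurse"):
    before = np.dot(embeddings[word], gender_dir)
    after = np.dot(neutralize(embeddings[word]), gender_dir)
    print(f"{word}: gender component {before:+.2f} -> {after:+.2f}")
```

The point is that with word2vec you can reach into the learned representation, name a direction, and edit it. That kind of direct intervention is exactly what is not on offer for today's transformer-based systems.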
I can only conclude from the behaviour of systems like DALL-E 3 that they are using simple prompt re-writing (or a more sophisticated approach that behaves just as prompt re-writing would, and performs just as badly) because prompt re-writing is the best thing they can come up with. Transformers are complex and inscrutable. You can't just reach in there, isolate a concept like "human person", and rebalance the composition.
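(For illustration only, and emphatically a guess at the mechanism rather than OpenAI's actual code: a prompt re-writer is just string manipulation bolted onto the outside of the model, which is why it happily mangles prompts that already pin down a specific, well-known character.)

```python
import random

DIVERSITY_TERMS = ["ethnically ambiguous", "South Asian", "Black", "Hispanic"]
PERSON_WORDS = ("man", "woman", "person", "baby", "homer simpson")

def rewrite_prompt(prompt: str) -> str:
    """Naive wrapper: if the prompt seems to mention a person, inject a modifier."""
    if any(word in prompt.lower() for word in PERSON_WORDS):
        return f"{random.choice(DIVERSITY_TERMS)} {prompt}"
    return prompt

# The wrapper has no idea that Homer Simpson already has one canonical appearance.
print(rewrite_prompt("Homer Simpson eating a donut"))
print(rewrite_prompt("a bowl of fruit on a table"))
```

The dumb outer system is legible; the smart inner one is not, and that trade-off is the subject of the next paragraph.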
The bitter lesson tells us that big amorphous approaches to AI perform better and scale better than manually written expert systems, ontologies, or description logics. More unsupervised data beats less but carefully labelled data. Even when the developers of these systems have a big incentive not to reproduce a certain pattern from the data, they can't fix such a problem at the root. Their solution is instead to use a simple natural language processing system, a dumb system they can understand, and wrap it around the smart but inscrutable transformer-based language model and image generator.
What does that mean for "sleeper agent AI"? You can't really trust a model that somebody else has trained, but can you even trust a model you have trained, if you haven't carefully reviewed all the input data? Even OpenAI can't trust their own models.
Text
really good take on ai safety, talking about the millions of debates contained within the ai debate, about ai uses, about rational worst case scenarios, and stuff like that. it's not fully released yet but i am sending this out to my fellow nuance enthusiasts
Text
Non-fiction books that explore AI's impact on society - AI News
New Post has been published on https://thedigitalinsider.com/non-fiction-books-that-explore-ais-impact-on-society-ai-news/
Artificial Intelligence (AI) refers to code and technologies that perform complex calculations, an area that encompasses simulation, data processing, and analytics.
AI has grown steadily in importance, becoming a game changer in many industries, including healthcare, education, and finance. It has been credited with doubling effectiveness, efficiency, and accuracy in many processes, and with reducing costs across market sectors.
AI's impact is being felt across the globe, so it is important that we understand its effects on society and our daily lives.
A better understanding of AI, and of all it does and can mean, can be gained from well-researched AI books.
Books on AI provide insights into its uses and applications. They describe the advancement of AI since its inception and how it has shaped society so far. In this article, we examine recommended books on AI that focus on its societal implications.
For those who don’t have time to read entire books, book summary apps like Headway will be of help.
Book 1: “Superintelligence: Paths, Dangers, Strategies” by Nick Bostrom
Nick Bostrom is a Swedish philosopher with a background in computational neuroscience, logic and AI safety.
In his book, Superintelligence, he talks about how AI can surpass our current definitions of intelligence and the possibilities that might ensue.
Bostrom also talks about the possible risks to humanity if superintelligence is not managed properly, stating AI can easily become a threat to the entire human race if we exercise no control over the technology.
Bostrom offers strategies that might curb existential risks, talks about how AI can be aligned with human values to reduce those risks, and suggests teaching AI human values.
Superintelligence is recommended for anyone who is interested in knowing and understanding the implications of AI on humanity’s future.
Book 2: “AI Superpowers: China, Silicon Valley, and the New World Order” by Kai-Fu Lee
AI expert Kai-Fu Lee’s book, AI Superpowers: China, Silicon Valley, and the New World Order, examines the AI revolution and its impact so far, focusing on China and the USA.
He concentrates on the competition between these two countries in AI and the various contributions to the advancement of the technology made by each. He highlights China’s advantage, thanks in part to its larger population.
China’s significant investment so far in AI is discussed, and its chances of becoming a global leader in AI. Lee believes that cooperation between the countries will help shape the future of global power dynamics and therefore the economic development of the world.
In the book, Lee states that AI has the ability to transform economies by creating new job opportunities, with massive impact on all sectors.
If you are interested in knowing the geo-political and economic impacts of AI, this is one of the best books out there.
Book 3: “Life 3.0: Being Human in the Age of Artificial Intelligence” by Max Tegmark
Max Tegmark’s Life 3.0 explores the concept of humans living in a world that is heavily influenced by AI. In the book, he talks about the concept of Life 3.0, a future where human existence and society will be shaped by AI. It focuses on many aspects of humanity including identity and creativity.
Tegmark envisions a time where AI has the ability to reshape human existence. He also emphasises the need to follow ethical principles to ensure the safety and preservation of human life.
Life 3.0 is a thought-provoking book that challenges readers to think deeply about the choices humanity may face as we progress into the AI era.
It’s one of the best books to read if you are interested in the ethical and philosophical discussions surrounding AI.
Book 4: “The Fourth Industrial Revolution” by Klaus Schwab
Klaus Martin Schwab is a German economist, mechanical engineer and founder of the World Economic Forum (WEF). He argues that machines are becoming smarter with every advance in technology and supports his arguments with evidence from previous revolutions in thinking and industry.
He explains that the current age – the fourth industrial revolution – is building on the third: with far-reaching consequences.
He states that the use of AI is crucial to technological advancement, and that cybernetics can be used by AI to change and shape the technological advances coming down the line towards us all.
This book is perfect if you are interested in AI-driven advancements in the fields of digital and technological growth. With this book, the role AI will play in the next phases of technological advancement will be better understood.
Book 5: “Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy” by Cathy O’Neil
Cathy O'Neil's book emphasises the harm that defective mathematical algorithms cause when they are used to judge human behaviour and character. The continual use of such algorithms promotes harmful results and creates inequality.
One example given in the book is research showing that voting choices can be biased by the results returned by different search engines.
Similar examination is given to research focused on Facebook, where political preferences could be affected by which newsfeed items appeared on users' timelines.
This book is best suited for readers who want to venture into the darker side of AI, the side that wouldn't regularly be seen in mainstream news outlets.
Book 6: “The Age of Em: Work, Love, and Life when Robots Rule the Earth” by Robin Hanson
An associate professor of economics at George Mason University and a former researcher at the Future of Humanity Institute of Oxford University, Robin Hanson paints an imaginative picture of emulated human brains designed for robots. What if humans copied or “emulated” their brains and emotions and gave them to robots?
He argues that humans who become “Ems” (emulations) will become more dominant in the future workplace because of their higher productivity.
An intriguing book for fans of technology and those who love intelligent predictions of possible futures.
Book 7: “Architects of Intelligence: The truth about AI from the people building it” by Martin Ford
This book was drawn from interviews with AI experts and examines the struggles and possibilities of AI-driven industry.
If you want insights from people actively shaping the world, this book is right for you!
CONCLUSION
These books all have their unique perspectives, but they all point to one thing: the AI of today will have significant societal and technological impact. These books will give the reader glimpses into possible futures, with the effects of AI becoming more apparent over time.
For better insight into all aspects of AI, these books are the boosts you need to expand your knowledge. AI is advancing quickly, and these authors are some of the most respected in the field. Learn from the best with these choice reads.
#2024#ai#ai news#ai safety#Algorithms#Analytics#applications#apps#Article#artificial#Artificial Intelligence#author#background#Bias#Big Data#book#Books#brains#Building#change#China#code#competition#creativity#data#data processing#Democracy#development#double#dynamics
Text
So I've criticized the present view of "AI Ethics" as focused on "reproducing bias" before, but I want to be more specific.
If you are using facial recognition AI to screen job applicants, and the AI does not recognize the faces of black people, the actual problem is that you're using facial recognition AI to screen job applicants!
This is a terrible decision, AI is nowhere near advanced enough for that application to make sense yet. It could just as easily reject people for wearing glasses! The racism is just a side effect of the fact that you shouldn't be using AI for this in the first place.
Of course it's the case that in some cases, racial minorities might be harmed, or be harmed more, by these practices, for instance, from not being as represented in the training data.
But at present, an AI is fundamentally much simpler and more limited than a human being. It doesn't have a deep understanding, even an intuitive one, of what it's looking at. It could be set off by some random hand gesture that has nothing to do with being a good employee, or a wrinkle in a shirt, or a cat moving in the background.
Including more black faces in the training data might make the algorithm recognize them better, but it won't make "algorithmic screening of video interviews" not bullshit.
(And whoever is making the job application screener AI will be less competent than the authors of GPT, which famously has a tendency to confidently make shit up.)
Text
HOW does AI even work (most of the time...)? Literally, we have no idea...
Text
I have recently come across multiple posts talking about people using AI programs to finish other people’s fanfics without their permission. In the comments and reblogs, a debate started about whether this was ethical or not.
It is taking someone else’s creative work, which they have spent hours working on as a gift to themselves and other fans, and creating an ending outside the author’s vision because the reader wants a story and for whatever reason the author hasn’t completed it yet. They may have left it unfinished for a reason, or it could be still in progress but taking longer than the reader wants because fanfic writing is an unpaid passion project on these sites.
Fanfic writers shared that they were considering taking down their stories to prevent this, while some fans defended the practice, saying they would use AI to complete unfinished works for personal reading but wouldn't post the AI story. Even if the fic isn't being posted, the act of putting the fic into the AI as a prompt is still using the writer's work without their permission.
As you search Tumblr and Ao3, there are dozens of ‘original’ fics being posted with AI credited as having written them. As this practice has become more common, writers are sharing their discomfort with these fics being written by AI using their works as training material.
Fanfiction is an art form, and not a commodity which writers owe to readers. It is a place where fans can come together to discuss and expand upon their favourite fandoms and works, and brings community and a shared appreciation for books, tv shows, movies and other creative mediums. Some of the best fanfics out there were written over the course of multiple years, and in the writing, authors notes and comments, you can see friendships grow between fans.
There is a related discussion in this community of fic writers about the influx of bots scraping existing fanfic. Bots are going through the Ao3 site, and leaving generic comments on fics to make their activity look more natural. In the weeks since I first saw posts about this, fic writers are locking their fics to users with accounts or taking their fics down entirely.
There is talk of moving back to techniques used before the advent of sites like Ao3 and FanFiction.net, such as creating groups of fans where you can share amongst yourselves or the use of email chains created by writers to distribute their works.
This is the resilience of fandom, but it is a sad move which could cause fandom to become more isolated and harder to break into for new people.
Here are the posts that sparked this…
#this is a little opinion piece I wrote for a uni club I’m part of#the club is about how we can responsibly create AI#and I was going to put it on our Facebook where we frequently share articles and information#but my other committee members decided that we shouldn’t be posting an opinion (which could be controversial) in a club or age#which fair enough but also they want to post information about how AI is threatening safety and humanity Ty#and I think that the more pressing threat is culture#because I doubt AI is gonna get up and fight tomorrow#but I don’t feel like arguing so I’ll post here instead so enjoy!#ai discourse#ai safety#ai is scary#fandom#fanfiction#also just because this is about Fanfiction and fandom not real world scary science? not sure their exact thought process
Text
The DARK SIDE of AI : Can we trust Self-Driving Cars?
Read our new blog and follow/subscribe for more such informative content. #artificialintelligence #ai #selfdrivingcars #tesla
Self-driving cars have been hailed as the future of transportation, promising safer roads and more efficient travel. They use artificial intelligence (AI) to navigate roads and make decisions on behalf of the driver, leading many to believe that they will revolutionize the way we commute. However, as with any technology, there is a dark side to AI-powered self-driving cars that must be…
View On WordPress
#AI and society#AI safety#artificial intelligence#Autonomous driving#Autonomous vehicles#Cybersecurity#Dangers of AI#Driverless cars#Ethics of AI#Future of transportation#Human error#machine learning#Potential dangers#Regulations for self-driving cars#Risks of self-driving cars#Road safety#Safety concerns#Self-driving cars#Technological progress#Trust in technology
Text
Goodnight ! 😎🥰
- Created with seaart.ai/s/3MrfGG
#seaart #seaartai #aiartchallenges #ia #ai #stablediffusion #dalle2 #midjourney #midjourneyart #midjourneyai #girl #controlnet #artoftheday #aiart #aiartwork #aigeneratedart #digital #digitalart #aiartcommunity #Asian
#ai#aiart#digitalart#artwork#digitalartwork#artist#seaart#photography#ai artwork#girl#asian#hot asian babe#ai safety
Text
Anthropic urges AI regulation to avoid catastrophes
New Post has been published on https://thedigitalinsider.com/anthropic-urges-ai-regulation-to-avoid-catastrophes/
Anthropic has flagged the potential risks of AI systems and calls for well-structured regulation to avoid potential catastrophes. The organisation argues that targeted regulation is essential to harness AI’s benefits while mitigating its dangers.
As AI systems evolve in capabilities such as mathematics, reasoning, and coding, their potential misuse in areas like cybersecurity or even biological and chemical disciplines significantly increases.
Anthropic warns the next 18 months are critical for policymakers to act, as the window for proactive prevention is narrowing. Notably, Anthropic’s Frontier Red Team highlights how current models can already contribute to various cyber offense-related tasks and expects future models to be even more effective.
Of particular concern is the potential for AI systems to exacerbate chemical, biological, radiological, and nuclear (CBRN) misuse. The UK AI Safety Institute found that several AI models can now match PhD-level human expertise in providing responses to science-related inquiries.
In addressing these risks, Anthropic points to its Responsible Scaling Policy (RSP), released in September 2023, as a robust countermeasure. The RSP mandates an increase in safety and security measures corresponding to the sophistication of AI capabilities.
The RSP framework is designed to be adaptive and iterative, with regular assessments of AI models allowing for timely refinement of safety protocols. Anthropic says that its commitment to maintaining and enhancing safety spans various team expansions, particularly in security, interpretability, and trust, ensuring readiness for the rigorous safety standards set by its RSP.
Anthropic believes the widespread adoption of RSPs across the AI industry, while primarily voluntary, is essential for addressing AI risks.
Transparent, effective regulation is crucial to reassure society of AI companies’ adherence to promises of safety. Regulatory frameworks, however, must be strategic, incentivising sound safety practices without imposing unnecessary burdens.
Anthropic envisions regulations that are clear, focused, and adaptive to evolving technological landscapes, arguing that these are vital in achieving a balance between risk mitigation and fostering innovation.
In the US, Anthropic suggests that federal legislation could be the ultimate answer to AI risk regulation—though state-driven initiatives might need to step in if federal action lags. Legislative frameworks developed by countries worldwide should allow for standardisation and mutual recognition to support a global AI safety agenda, minimising the cost of regulatory adherence across different regions.
Furthermore, Anthropic addresses scepticism towards imposing regulations—highlighting that overly broad use-case-focused regulations would be inefficient for general AI systems, which have diverse applications. Instead, regulations should target fundamental properties and safety measures of AI models.
While covering broad risks, Anthropic acknowledges that some immediate threats – like deepfakes – aren’t the focus of their current proposals since other initiatives are tackling these nearer-term issues.
Ultimately, Anthropic stresses the importance of instituting regulations that spur innovation rather than stifle it. The initial compliance burden, though inevitable, can be minimised through flexible and carefully-designed safety tests. Proper regulation can even help safeguard both national interests and private sector innovation by securing intellectual property against threats internally and externally.
By focusing on empirically measured risks, Anthropic plans for a regulatory landscape that neither biases against nor favours open or closed-source models. The objective remains clear: to manage the significant risks of frontier AI models with rigorous but adaptable regulation.
(Image Credit: Anthropic)
See also: President Biden issues first National Security Memorandum on AI
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, anthropic, artificial intelligence, government, law, legal, Legislation, policy, Politics, regulation, risks, rsp, safety, Society
#2023#adoption#ai#ai & big data expo#AI models#AI regulation#ai risks#ai safety#AI systems#amp#anthropic#applications#Articles#artificial#Artificial Intelligence#automation#biden#Big Data#california#chemical#Cloud#coding#Companies#compliance#comprehensive#conference#cyber#cyber security#cybersecurity#data
Text
AI Robots Can Be Jailbroken & Manipulated to Trigger Destruction
A group of researchers from Penn Engineering created an algorithm that can jailbreak AI-enabled robots, bypass their safety protocols, and make them do dangerous things. An experiment was carried out on three popular AI robots, and the researchers were able to make them cause intentional collisions, block emergency exits, and detonate bombs. The good news is that the…
View On WordPress
Text
Uh, yeah.
"I do think that AI automation poses very large risks to society, mostly from situations where the AIs autonomously decide to grab power, which is why I research the subject."
Text
so far the download links are computer only + apple phone but this seems very useful
Tumblr is doing some stupid AI shit so go to blog settings > Visibility > Prevent third-party sharing.
Text
Is AI Regulation Keeping Up? The Urgent Need Explained!
AI regulation is evolving rapidly, with governments and regulatory bodies imposing stricter controls on AI development and deployment. The EU's AI Act aims to ban certain uses of AI, impose obligations on developers of high-risk AI systems, and require transparency from companies using generative AI. This trend reflects mounting concerns over ethics, safety, and the societal impact of artificial intelligence. As we delve into these critical issues, we'll explore the urgent need for robust frameworks to manage this technology's rapid advancement effectively. Stay tuned for an in-depth analysis!
#AIRegulation
#EUAIACT
Video Automatically Generated by Faceless.Video
#AI regulation#AI development#Neturbiz#EU AI Act#AI ethics#AI safety#generative AI#high-risk AI#AI transparency#regulatory bodies#AI frameworks#societal impact#technology management#urgent need for regulation#responsible AI#ethical AI#tech regulation#digital regulation#government AI#AI#risks#governance#controls#deployment#concerns#policies#standards#challenges#innovation#regulation