#hyperbolic ai
Text
#python#news#news summarizer#hyperbolic#hyperbolic ai#llm#inference#deepseek#deepseek v3#daily digest
0 notes
Text
That already rich people are willing to Accelerate how fast their class is ALREADY Burning Down the World for the POSSIBILITY of cornering the market for Book Report Cheating Robots really tells you everything you need to fucking know.
#~AI~#Text Generators#Rich Folks#Global Warming#Capitalism#Tech Feudalism#Politics#Our Staff#zA Opinions#zA Posts#zA Writes#cantankerous posts#zA's Inveterate Politicism#zA's Hyperbolic Moralizing
13 notes
Text
A quip about AI I was thinking about yesterday:
AI ingests human input (words, art, etc.) much the same way a jet engine ingests flocks of birds, breaking everything down into a slurry of its former parts. And much like an engine ingesting a bird, everything falls out of the sky as a result.
#maybe hyperbolic but probably not#the only use case I've seen for AI so far is that it's really good at clogging business inboxes with shit#and search engines too#as one of the people who can supposedly benefit from AI I have not found it any real use day to day#it's faster and more trustworthy to just ask a sighted person#I'm not a fan so far#it's in the way at best and a scam in almost every other case#looking forward to this bubble bursting#also it's absolute hell on the environment and nobody talks about that#fuck ai#razz rambles
8 notes
Text
I’m in the final stretch of my first playthrough of Persona 5 Strikers and it feels even more resonant in 2023 than it probably was when it first came out a few years ago. The way NPCs talk about the EMMA AI is exactly the kind of language I hear all over about ChatGPT and OpenAI.
#the difference of course is in Persona it’s due in part to mind control#I don’t actually hate AI as a tool but I sure do hate the hyperbolic praise it gets and the weird obsession everyone has with it nowadays#persona 5#persona 5 strikers#p5s#reclass.txt
16 notes
Text
hate what AI has done to like social culture as a whole. like I see some post about a very complex detailed game made in 2010 and somebody's in the comments like "It's crazy how they could make this all without AI" SHUT THE FUCK UP PLEASE. or somebody sees a clearly edited video and is like "This is obvs AI" NO YOU LOSER SOMEBODY JUST PUT A LOT OF FUCKING EFFORT INTO THIS. If I could wish for one thing from a genie I'd probably be solving problems or something important BUT if I could choose for nobody to be able to say "AI" ever again i genuinely would. If i could wipe it from the collective vocabulary this instant i would do it
#hyperbole#but the effect that ''AI'' has had on society in a purely ''how people see things and interact with media'' way#fucking makes my blood boil
2 notes
Text
My stomach is still acting up, and that's definitely contributing to me making fewer posts. As I said in a post yesterday: my overthinking, stomach issues, and seeming addiction to AI character chatbots have caused me to not post as often as I used to. But I've stopped venting about the stomach issues because I don't really know how many more ways I can articulate posts about them without just saying that I hate it, because pain (even non-fatal pain like this) can really hurt.
But I feel like it could be seen as good practice for next semester of school. Because if I'm at school the whole day next year, I may not be making many new posts then either. But even in that case: I still pray this stomach pain clears up. Because it's annoying to not know the definitive cause of the pain yet, and to just have to deal with it. So I hope it clears up before too long.
#it may be hyperbolic to say I'm addicted to the chatbots#since i could live without them#but i can't break the habit of using them#so i simply chose the word addicted#i still pray all this stomach stuff clears up#my thoughts#autism#asd#stomach issues#stomach pain#character ai#ai chatbot#ai chatbots#school#high school
4 notes
Text
if only there was a name for the kind of aesthetic criticism that labels its targets as necessarily derivative, all-powerful yet fundamentally inferior, essentially fraudulent, dangerously contagious, and an existential threat to be exterminated.
3 notes
Text
I will use AI for writing on February 38th, 2695. I will use AI when my fallopian tubes grow back. I'll use it when I can close my eyes and wish for a triple-decker Jupiter-raised Wagyu beef cheeseburger with gold-flecked grass-fed T-rex bacon on a sourdough bun from Mars, and it appears before me on an unobtainium platter served by an automaton who looks SUSPICIOUSLY like Pedro Pascal in a white top hat and tails. And the lettuce can talk. And even THEN I'll bitch about the moon-tomatoes being cut too thin.
Not one word of this was AI, this all came straight from the wrinkly old fingerling potato from the back of the crisper which is my brain. I don't need AI to come up with wild ideas and evocative imagery, I can do that all by myself and that should scare the living Hell out of everyone. And this is me when I'm medicated.
*struggles while writing* i suck and writing is hard
*remembers some ppl use ai* i am a creative force. i am uncorrupted by theft and indolence. i am on a journey to excellence. it is my duty to keep taking joy in creating.
#writers on tumblr#fuck ai#anti ai#artificial not so intelligent#i don't need ai#i have the power of god and anime on my side#also adhd and autism#writeblr#creative writing#authors#author#if this freaks you out#you should see me NOT medicated#my brain is melting#and that's not including the stuff i DIDN'T leave in the post#hyperbolic?#not in the slightest#this is the hill i will die on
79K notes
Text
🐞
#just an hour after rebloggin that rebuttal to that ai post i hadn't seen yet#the original post ended up on my dash 😔#ah well#i get that op of that post is probably just being hyperbolic out of humour#and I don't know if they meant for that post to escape containment or not#but it is a pretty nothing post#as the person i reblogged from said it's vacuous platitudes#ai art is already out there#it's been out there for a couple years now#people are using it and at this point just don't share that fact#and i can see this weird reactivity turning into people attacking anyone whose art doesn't look 'authentic' enough#it feels like wasting too much energy i could be spending making my own art to concern myself about who is using ai to make theirs?#update: op of that original post is a minor and was just shooting their thoughts out in the open after a nap#i feel for them and their notes rn even though my own thoughts about ai art remain the same#🐞
1 note
Text
AI that "makes" art, that you can chat with (including the idea of character AIs), or that writes stories/scripts makes my skin crawl. I hate it.
0 notes
Text
Something I don't think we talk about enough is the absolute poison capitalism has been for morality and ethics.
Like: when "profit" becomes THE organizing principle for action, anything can be justified. If you take as your fundamental rule "anything which might create profit or capital is Good, and likely Necessary", then what limits are there to your behavior? Literally ANYTHING can be SAID to POTENTIALLY create profit, or increase the number of workers in hock to your capital and thus fair game for wage theft, so ANYTHING can be done. A moral system which allows anything to be done is, by definition, no moral system at all: it's License; it's Self-indulgence; it's Amorality and the Death of Virtue. To become a capitalist is to throw away your soul.
for the mario movie???? the mario bros movie?? the actors can't know the plot of the mario movie??? they're scared of plot leaks for mario the movie?? the movie mario???? we're getting not just chris pratt but chris pratt acting BLIND????????
#antidisneyinc#rottenbrainstuff#Hollywood#Voice Acting#Capitalism#~AI~#Applied Statistics#Pattern Amplification#Plagiarism Engines#Automated Mimicry#Ethics#Philosophy#Politics#appreciative reblogs#reblog replies#zA Opinions#zA Writes#zA's Pompous Moralizing#zA's Hyperbolic Moralizing#zA's Inveterate Politicism
81K notes
Text
Cord Jefferson on the Writers’ Strike: “This Is an Existential Threat to All of Us” | GQ
0 notes
Text
Things I blogged about this week:
The call is coming from inside the house. - "Nazis and their ilk are like cockroaches. If you’re not careful, they get into everything." (May 15)
AI, Microsoft, & ‘Signs of Human Reasoning’ - "I am physically incapable of rolling my eyes as hard as this idea deserves." (May 16)
Minnesota sushi, Hank Green, debt ceiling talks - "Okay, so I stumbled across this “Minnesota sushi” thing today (spoiler, it’s just a ham roll), and I definitely remember eating these at holiday dinners when I was a kid." (May 19)
#Fucking Nazis#Paul Gosar#House of Representatives#AI#Microsoft#Hyperbole#Minnesota Sushi#Hank Green#Debt Ceiling Talks
0 notes
Text
What kind of bubble is AI?
My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Super Bowl ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, Perl, and Python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Super Bowl ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2×2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low-value and very risk-tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk-tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk-tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
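The 2×2 framing above can be sketched in code. This is purely illustrative: the applications and their placements are the examples from the text, while the class and field names are my own invention, not anything from the column.

```python
# A minimal sketch of the value / risk-tolerance 2x2 grid described above.
# The applications and their quadrant placements come straight from the text;
# the data structure itself is just an illustration.
from dataclasses import dataclass


@dataclass
class AIApplication:
    name: str
    high_value: bool       # will customers pay a lot for it?
    risk_tolerant: bool    # can the product afford to be imperfect?


applications = [
    AIApplication("D&D character art generator", high_value=False, risk_tolerant=True),
    AIApplication("SEO spamfarm text generator", high_value=False, risk_tolerant=True),
    AIApplication("scene-description app for blind users", high_value=False, risk_tolerant=True),
    AIApplication("self-driving cars", high_value=True, risk_tolerant=False),
    AIApplication("radiology bots", high_value=True, risk_tolerant=False),
]


def quadrant(app: AIApplication) -> str:
    """Name the 2x2 quadrant an application falls into."""
    value = "high-value" if app.high_value else "low-value"
    risk = "risk-tolerant" if app.risk_tolerant else "risk-intolerant"
    return f"{value} / {risk}"


for app in applications:
    print(f"{app.name}: {quadrant(app)}")
```

Notice that the high-value / risk-tolerant quadrant stays empty, which is exactly the essay's point: the applications that could pay the bills are the ones that can't tolerate mistakes.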
Morgan Stanley doesn't talk about the trillions the AI industry will be worth someday because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals that reduce the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that Federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes
Text
AI isn't a threat to creative professions because it can actually make passable art that humans enjoy (it can't). It's a threat because in a capitalist system, employers would do literally anything to not have to pay humans living wages (or any wages, let's be real).
We've been in a productivity boom for the past 60 years, but the one area where production cannot become more efficient is the arts. It takes the same amount of time to write a novel or compose a symphony now as it did a hundred years ago. That's just the creative process.
AI represents a shortcut to making art that has had executives salivating since LLMs and AI art generators hit the internet. It means more content faster with the benefit of not having to provide salaries, sick days, parental leave, time off, or healthcare. It means not having to deal with unions and labour laws. It means cutting humans out of the most fundamentally human activity we do – making art.
All those headlines and clickbait articles about AI annihilating the human race are a hyperbolic distraction from the actual problem we may soon be facing: people no longer having the possibility of supporting themselves by making art (not that it's particularly easy to do as it stands).
If making art becomes a luxury only for the affluent, we will stop hearing the voices, stories, and perspectives of marginalized people. And our cultural tapestry will stop being so vibrant, diverse, and vital.
#ai#cyberpunk#dystopia#lol capitalism is bad#this was supposed to be short#lol oops#art makes us better humans#anti ai art#anti ai
5K notes
Text
Well you see, Ai and Ran are fragile, emotional women, and we can't have them involved in Serious™️ dangerous business. They get their time to shine every now and then, but them being involved in the main plot?! Can't have that!
How long does the whole "hey, I found information that may or may not be about the black organization, don't tell Ai about it!" thing last?
Like, I am at chapter 500 right now, will he still be doing that for the next 1000 chapters or does he realize that sharing information with her would be more productive? Especially since she usually learns pretty quickly anyway.
I am already annoyed enough at the Ran situation, I don't need something similar with Ai as well.
#dcmk#yeah this is hyperbole but seriously that's what it feels like#i kinda get it? with Ran he's doing the whole 'secret identity to protect my loved ones' thing#(ignore that he's in the news all the time and already a confirmed Kudo relative)#and Haibara is clearly traumatized and paranoid#BUT STILL#LET AI MAKE HER OWN FUCKING DECISIONS DAMMIT#STOP GASLIGHTING RAN SHE IS YOUR FUCKING GIRLFRIEND BE HONEST WITH HER!!!!#ugh#i love Shinichi i really do but god the writing is sometimes so misogynistic
14 notes