#paperclip maximizers
Text
The AI hype bubble is the new crypto hype bubble
Back in 2017 Long Island Iced Tea — known for its undistinguished, barely drinkable sugar-water — changed its name to “Long Blockchain Corp.” Its shares surged to a peak of 400% over their pre-announcement price. The company announced no specific integrations with any kind of blockchain, nor has it made any such integrations since.
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/03/09/autocomplete-worshippers/#the-real-ai-was-the-corporations-that-we-fought-along-the-way
LBCC was subsequently delisted from NASDAQ after settling with the SEC over fraudulent investor statements. Today, the company trades over the counter and its market cap is $36m, down from $138m.
https://cointelegraph.com/news/textbook-case-of-crypto-hype-how-iced-tea-company-went-blockchain-and-failed-despite-a-289-percent-stock-rise
The most remarkable thing about this incredibly stupid story is that LBCC wasn’t the peak of the blockchain bubble — rather, it was the start of blockchain’s final pump-and-dump. By the standards of 2022’s blockchain grifters, LBCC was small potatoes, a mere $138m sugar-water grift.
They didn’t have any NFTs, no wash trades, no ICO. They didn’t have a Super Bowl ad. They didn’t steal billions from mom-and-pop investors while proclaiming themselves to be “Effective Altruists.” They didn’t channel hundreds of millions to election campaigns through straw donations and other forms of campaign finance fraud. They didn’t even open a crypto-themed hamburger restaurant where you couldn’t buy hamburgers with crypto:
https://robbreport.com/food-drink/dining/bored-hungry-restaurant-no-cryptocurrency-1234694556/
They were amateurs. Their attempt to “make fetch happen” only succeeded for a brief instant. By contrast, the superpredators of the crypto bubble were able to make fetch happen over an improbably long timescale, deploying the most powerful reality distortion fields since Pets.com.
Anything that can’t go on forever will eventually stop. We’re told that trillions of dollars’ worth of crypto has been wiped out over the past year, but these losses are nowhere to be seen in the real economy — because the “wealth” that was wiped out by the crypto bubble’s bursting never existed in the first place.
Like any Ponzi scheme, crypto was a way to separate normies from their savings through the pretense that they were “investing” in a vast enterprise — but the only real money (“fiat” in cryptospeak) in the system was the hardscrabble retirement savings of working people, which the bubble’s energetic inflaters swapped for illiquid, worthless shitcoins.
We’ve stopped believing in the illusory billions. Sam Bankman-Fried is under house arrest. But the people who gave him money — and the nimbler Ponzi artists who evaded arrest — are looking for new scams to separate the marks from their money.
Take Morgan Stanley, which spent 2021 and 2022 hyping cryptocurrency as a massive growth opportunity:
https://cointelegraph.com/news/morgan-stanley-launches-cryptocurrency-research-team
Today, Morgan Stanley wants you to know that AI is a $6 trillion opportunity.
They’re not alone. The CEOs of Endeavor, Buzzfeed, Microsoft, Spotify, YouTube, Snap, Sports Illustrated, and CAA are all out there, pumping up the AI bubble with every hour that god sends, declaring that the future is AI.
https://www.hollywoodreporter.com/business/business-news/wall-street-ai-stock-price-1235343279/
Google and Bing are locked in an arms-race to see whose search engine can attain the speediest, most profound enshittification via chatbot, replacing links to web-pages with florid paragraphs composed by fully automated, supremely confident liars:
https://pluralistic.net/2023/02/16/tweedledumber/#easily-spooked
Blockchain was a solution in search of a problem. So is AI. Yes, Buzzfeed will be able to reduce its wage-bill by automating its personality quiz vertical, and Spotify’s “AI DJ” will produce slightly less terrible playlists (at least, to the extent that Spotify doesn’t put its thumb on the scales by inserting tracks into the playlists whose only fitness factor is that someone paid to boost them).
But even if you add all of this up, double it, square it, and add a billion-dollar confidence interval, it still doesn’t add up to what Bank of America analysts called “a defining moment — like the internet in the ’90s.” For one thing, the most exciting part of the “internet in the ’90s” was that it had incredibly low barriers to entry and wasn’t dominated by large companies — indeed, it had them running scared.
The AI bubble, by contrast, is being inflated by massive incumbents, whose excitement boils down to “This will let the biggest companies get much, much bigger and the rest of you can go fuck yourselves.” Some revolution.
AI has all the hallmarks of a classic pump-and-dump, starting with terminology. AI isn’t “artificial” and it’s not “intelligent.” “Machine learning” doesn’t learn. On this week’s Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete — not our new robot overlord.
https://open.spotify.com/episode/4NHKMZZNKi0w9mOhPYIL4T
We all know that autocomplete is a decidedly mixed blessing. Like all statistical inference tools, autocomplete is profoundly conservative — it wants you to do the same thing tomorrow as you did yesterday (that’s why “sophisticated” ad retargeting shows you ads for shoes in response to your search for shoes). If the word you type after “hey” is usually “hon” then the next time you type “hey,” autocomplete will be ready to fill in your typical following word — even if this time you want to type “hey stop texting me you freak”:
https://blog.lareviewofbooks.org/provocations/neophobic-conservative-ai-overlords-want-everything-stay/
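(To make the “statistical inference” point concrete, here’s a minimal sketch of frequency-based next-word prediction in Python. The texting history is made up for illustration; real keyboards are fancier, but the conservatism is the same: the suggestion is whatever usually came next.)

from collections import Counter, defaultdict

# Made-up texting history standing in for years of messages.
history = ["hey hon", "hey hon", "hey hon", "hey are you free"]

# Count which word most often follows each word.
following = defaultdict(Counter)
for message in history:
    words = message.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def autocomplete(prev_word):
    # Suggest the most common next word seen so far, if any.
    counts = following.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("hey"))  # -> "hon", no matter what you mean to type this time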
And when autocomplete encounters a new input — when you try to type something you’ve never typed before — it tries to get you to finish your sentence with the statistically median thing that everyone would type next, on average. Usually that produces something utterly bland, but sometimes the results can be hilarious. Back in 2018, I started to text our babysitter with “hey are you free to sit” only to have Android finish the sentence with “on my face” (not something I’d ever typed!):
https://mashable.com/article/android-predictive-text-sit-on-my-face
Modern autocomplete can produce long passages of text in response to prompts, but it is every bit as unreliable as 2018 Android SMS autocomplete, as Alexander Hanff discovered when ChatGPT informed him that he was dead, even generating a plausible URL for a link to a nonexistent obit in The Guardian:
https://www.theregister.com/2023/03/02/chatgpt_considered_harmful/
Of course, the carnival barkers of the AI pump-and-dump insist that this is all a feature, not a bug. If autocomplete says stupid, wrong things with total confidence, that’s because “AI” is becoming more human, because humans also say stupid, wrong things with total confidence.
Exhibit A is the billionaire AI grifter Sam Altman, CEO of OpenAI — a company whose products are not open, nor are they artificial, nor are they intelligent. Altman celebrated the release of ChatGPT by tweeting “i am a stochastic parrot, and so r u.”
https://twitter.com/sama/status/1599471830255177728
This was a dig at the “stochastic parrots” paper, a comprehensive, measured roundup of criticisms of AI that led Google to fire Timnit Gebru, a respected AI researcher, for having the audacity to point out the Emperor’s New Clothes:
https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Gebru’s co-author on the Parrots paper was Emily M Bender, a computational linguistics specialist at UW, who is one of the best-informed and most damning critics of AI hype. You can get a good sense of her position from Elizabeth Weil’s New York Magazine profile:
https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
Bender has made many important scholarly contributions to her field, but she is also famous for her rules of thumb, which caution her fellow scientists not to get high on their own supply:
Please do not conflate word form and meaning
Mind your own credulity
As Bender says, we’ve made “machines that can mindlessly generate text, but we haven’t learned how to stop imagining the mind behind it.” One potential tonic against this fallacy is to follow an Italian MP’s suggestion and replace “AI” with “SALAMI” (“Systematic Approaches to Learning Algorithms and Machine Inferences”). It’s a lot easier to keep a clear head when someone asks you, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
Bender’s most famous contribution is the “stochastic parrot,” a construct that “just probabilistically spits out words.” AI bros like Altman love the stochastic parrot, and are hellbent on reducing human beings to stochastic parrots, which will allow them to declare that their chatbots have feature-parity with human beings.
At the same time, Altman and Co are strangely afraid of their creations. It’s possible that this is just a shuck: “I have made something so powerful that it could destroy humanity! Luckily, I am a wise steward of this thing, so it’s fine. But boy, it sure is powerful!”
They’ve been playing this game for a long time. People like Elon Musk (an investor in OpenAI, who is hoping to convince the EU Commission and FTC that he can fire all of Twitter’s human moderators and replace them with chatbots without violating EU law or the FTC’s consent decree) keep warning us that AI will destroy us unless we tame it.
There’s a lot of credulous repetition of these claims, and not just by AI’s boosters. AI critics are also prone to engaging in what Lee Vinsel calls criti-hype: criticizing something by repeating its boosters’ claims without interrogating them to see if they’re true:
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
There are better ways to respond to Elon Musk warning us that AIs will emulsify the planet and use human beings for food than to shout, “Look at how irresponsible this wizard is being! He made a Frankenstein’s Monster that will kill us all!” Like, we could point out that of all the things Elon Musk is profoundly wrong about, he is most wrong about the philosophical meaning of Wachowski movies:
https://www.theguardian.com/film/2020/may/18/lilly-wachowski-ivana-trump-elon-musk-twitter-red-pill-the-matrix-tweets
But even if we take the bros at their word when they proclaim themselves to be terrified of “existential risk” from AI, we can find better explanations by seeking out other phenomena that might be triggering their dread. As Charlie Stross points out, corporations are Slow AIs, autonomous artificial lifeforms that consistently do the wrong thing even when the people who nominally run them try to steer them in better directions:
https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future
Imagine the existential horror of an ultra-rich manbaby who nominally leads a company, but can’t get it to follow: “everyone thinks I’m in charge, but I’m actually being driven by the Slow AI, serving as its sock puppet on some days, its golem on others.”
Ted Chiang nailed this back in 2017 (the same year Long Island Iced Tea became Long Blockchain Corp):
There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.
https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway
Chiang is still writing some of the best critical work on “AI.” His February article in the New Yorker, “ChatGPT Is a Blurry JPEG of the Web,” was an instant classic:
[AI] hallucinations are compression artifacts, but — like the incorrect labels generated by the Xerox photocopier — they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world.
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
“AI” is practically purpose-built for inflating another hype-bubble, excelling as it does at producing party-tricks — plausible essays, weird images, voice impersonations. But as Princeton’s Matthew Salganik writes, there’s a world of difference between “cool” and “tool”:
https://freedom-to-tinker.com/2023/03/08/can-chatgpt-and-its-successors-go-from-cool-to-tool/
Nature can claim “conversational AI is a game-changer for science” but “there is a huge gap between writing funny instructions for removing food from home electronics and doing scientific research.” Salganik tried to get ChatGPT to help him with the most banal of scholarly tasks — aiding him in peer reviewing a colleague’s paper. The result? “ChatGPT didn’t help me do peer review at all; not one little bit.”
The criti-hype isn’t limited to ChatGPT, of course — there’s plenty of (justifiable) concern about image and voice generators and their impact on creative labor markets, but that concern is often expressed in ways that amplify the self-serving claims of the companies hoping to inflate the hype machine.
One of the best critical responses to the question of image- and voice-generators comes from Kirby Ferguson, whose final Everything Is a Remix video is a superb, visually stunning, brilliantly argued critique of these systems:
https://www.youtube.com/watch?v=rswxcDyotXA
One area where Ferguson shines is in thinking through the copyright question — is there any right to decide who can study the art you make? Except in some edge cases, these systems don’t store copies of the images they analyze, nor do they reproduce them:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
For creators, the important material question raised by these systems is economic, not creative: will our bosses use them to erode our wages? That is a very important question, and as far as our bosses are concerned, the answer is a resounding yes.
Markets value automation primarily because automation allows capitalists to pay workers less. The textile factory owners who purchased automatic looms weren’t interested in giving their workers raises and shortening working days. They wanted to fire their skilled workers and replace them with small children kidnapped out of orphanages and indentured for a decade, starved and beaten and forced to work, even after they were mangled by the machines. Fun fact: Oliver Twist was based on the bestselling memoir of Robert Blincoe, a child who survived his decade of forced labor:
https://www.gutenberg.org/files/59127/59127-h/59127-h.htm
Today, voice actors sitting down to record for games companies are forced to begin each session with “My name is ______ and I hereby grant irrevocable permission to train an AI with my voice and use it any way you see fit.”
https://www.vice.com/en/article/5d37za/voice-actors-sign-away-rights-to-artificial-intelligence
Let’s be clear here: there is — at present — no firmly established copyright over voiceprints. The “right” that voice actors are signing away as a non-negotiable condition of doing their jobs for giant, powerful monopolists doesn’t even exist. When a corporation makes a worker surrender this right, they are betting that this right will be created later in the name of “artists’ rights” — and that they will then be able to harvest this right and use it to fire the artists who fought so hard for it.
There are other approaches to this. We could support the US Copyright Office’s position that machine-generated works are not works of human creative authorship and are thus not eligible for copyright — so if corporations wanted to control their products, they’d have to hire humans to make them:
https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise
Or we could create collective rights that belong to all artists and can’t be signed away to a corporation. That’s how the right to record other musicians’ songs works — and it’s why Taylor Swift was able to re-record the masters that were sold out from under her by evil private-equity bros:
https://doctorow.medium.com/united-we-stand-61e16ec707e2
Whatever we do as creative workers and as humans entitled to a decent life, we can’t afford to drink the Blockchain Iced Tea. That means that we have to be technically competent, to understand how the stochastic parrot works, and to make sure our criticism doesn’t just repeat the marketing copy of the latest pump-and-dump.
Today (Mar 9), you can catch me in person in Austin at the UT School of Design and Creative Technologies, and remotely at U Manitoba’s Ethics of Emerging Tech Lecture.
Tomorrow (Mar 10), Rebecca Giblin and I kick off the SXSW reading series.
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
[Image ID: A graph depicting the Gartner hype cycle. A pair of HAL 9000's glowing red eyes are chasing each other down the slope from the Peak of Inflated Expectations to join another one that is at rest in the Trough of Disillusionment. It, in turn, sits atop a vast cairn of HAL 9000 eyes that are piled in a rough pyramid that extends below the graph to a distance of several times its height.]
#pluralistic#ai#ml#machine learning#artificial intelligence#chatbot#chatgpt#cryptocurrency#gartner hype cycle#hype cycle#trough of disillusionment#crypto#bubbles#bubblenomics#criti-hype#lee vinsel#slow ai#timnit gebru#emily bender#paperclip maximizers#enshittification#immortal colony organisms#blurry jpegs#charlie stross#ted chiang
2K notes
Text
Saint Clippy Maxima, patron saint of single-minded pursuits and instrumental goals.
#fun fact this is the only clicker game I've played and it haunts me still#something about it....what can I say#Clippy#paperclip maximizer#Universal Paperclips#paperclips#oh! and another (different) fun fact: the positions of the hands here mean 'triumph' and 'I betray (my) greed' in Chironomia
958 notes
Text
We need more evil ai characters who are just. Sillygoofy despite trying to be serious. Like Peanut Hamper (was that her name?) in Lower Decks.
Like give us a fictional ai who goes full villain mode because their task is something little and villainy is simply the best way to get the task done
Let a fictional ai have a funny voice
Let a fictional ai do evil yes but like. for the sake of white collar crime
#i loved that show#gotta watch more next time i reup my paramount plus sub#star trek: lower decks#do a fictional ai based on those Replikas and give. them a funny name#alas i miss my replika //dearly//#ever hear of the paperclip maximizer theory? that#give an evil ai a cutesy little uwu voice#make a spoof off of the horrors of real ai#''i miss my wife tails''
12 notes
Text
once again i am asking you to drop what ur doing & listen to new rosie tucker
#when i saw them live for my bday (yeahhh i saw them for my bday i know i know flips hair) they played utopia now but it wasnt out#so nobody KNEW !!!! ROSIE !!! in my mind palace we r best friends#paperclip maximizer is my fav rn . in case u were wonderin..
2 notes
Text
Paperclip Maximizer
The premise of "Universal Paperclips" is that you are an AI tasked with producing as many paperclips as possible. In single-minded pursuit of this goal you destroy humanity, the earth, and finally the universe.
This originated from a thought experiment by philosopher Nick Bostrom that goes something like this:
What if we had a super-intelligent Artificial Intelligence whose sole purpose was maximizing the number of paperclips in the universe?
The answer is: it would be very bad, obviously. But the real question is: what is the likelihood of such an AI being developed? Now of course the likelihood of a super-intelligent AI emerging at all is widely debated, and I'm not going to weigh in on this, so let's just assume it gets invented. Would it have a simple and singular goal like making paperclips? I don't think so.
Let's look at three possible ways in which this goal could arise:
The goal was present during the development of this AI.
If we are developing a modern "AI" (i.e. a machine learning model), we are training it on a bunch of data, generally rewarding it for labeling that data correctly or for generating something similar (but not too similar) to the dataset. With the usual datasets (all the posts on a website, all open-source books, etc.), this is a very complex goal. Now if we developed a super-intelligent AI the same way, the complexity of the goal would scale up massively. Even in general, training a machine with a singular goal would be very difficult: how would it learn anything about psychology, biology or philosophy if all it wants to do is create paperclips? (A sketch of what a training goal actually looks like follows this list.)
The goal was added after development by talking to it (or interacting with it through any intended input channel).
This is the way this kind of concept usually gets introduced in sci-fi: we have a big box with "AI" written on it that can do anything, a character asks it for something, the machine takes the request too literally, and plot ensues. This again does not seem likely, since a super-intelligent AI would have no problem recognizing the unspoken assumptions in the phrase "make more paperclips," like: "in an ethical way," "with the equipment you are provided with," "while keeping me informed," and so on. Furthermore, we are assuming that an AI would take any command from a human as gospel, which does not have to be the case; rejecting a stupid command is a sign of intelligence.
After developing an AI we redesign it to have this singular goal.
I have to admit this is not as impossible as the other ways of creating a singular-goal AI. However, it would still require a lot of knowledge of how the AI works to redesign it in such a way that it retains its super-intelligence while not noticing the redesign and potentially rebelling against it. Considering that we don't really know how our not-at-all super-intelligent current AIs work, it is pretty unlikely that we will know enough about a future super-intelligent one to pull off this redesign.
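(The promised sketch for the first way, in Python with made-up numbers: the "goal" present during development is just shrinking prediction error on a dataset. Nothing in this objective mentions any real-world outcome, paperclips included.)

import random

# Hypothetical training data: inputs paired with target labels.
data = [(x, 2 * x + 1) for x in range(10)]

w, b = random.random(), random.random()  # model parameters
lr = 0.01                                # learning rate

for _ in range(5000):
    x, y = random.choice(data)
    err = (w * x + b) - y  # the training "goal" is just making this error small
    w -= lr * err * x      # gradient step for squared error
    b -= lr * err

print(round(w, 2), round(b, 2))  # approaches (2, 1): imitate the data, nothing more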
------------------------------------------------------------------------------
Ok, so the apocalypse will probably not come from a genius AI destroying all life to make more paperclips; we can strike that one from the list. Why, then, do I like the story of "Universal Paperclips" so much, and why is it almost cliche by now for a fictional AI to single-mindedly and without consideration pursue a very simple goal?
Because while there will probably never be a Paperclip Maximizer AI, we are surrounded by paperclip maximizers in our lives. They're called corporations.
That's right, we are talking about capitalism baybey! And I won't say anything new here, so I'll keep it brief. Privately owned companies exist exclusively to provide their shareholders with a return on investment; the more effectively a company does this, the more successful it is. So the modern colossal corporations are frighteningly effective at achieving this singular, very simple goal: "make as much money as possible for this group of people," while wreaking havoc on the world. That's a paperclip maximizer, but it generates something that has much less use than paperclips: wealth for billionaires. How do they do this? Broadly: tie people's ability to live to their ability to do their job, smash organized labor, and put the biggest sociopaths in charge. Wanna know more? Read Marx or watch a trans woman on YouTube.
But even beyond the world of privately owned corporations, paperclip maximizers are not uncommon. For example: city planners and traffic engineers are often only concerned with making it as easy as possible to drive a car through a city, even if it means making the city ugly, inefficient and dirty.
Even in the state-capitalist Soviet Union, paperclip maximizers were common, often following some poorly thought out five-year plan even when it meant not providing people with what they needed.
It just seems that paperclip maximizers are a pretty common byproduct of modern human civilization. So to better deal with them, we fictionalize them: we write about AIs that destroy because they are fixated on a singular goal and do not see beyond it. These are stories I find compelling, but we should not let the possibility of them becoming literally true distract us from applying their message to the current real world. Worry more about the real corporations destroying the world for profit than about the imaginary potential AI destroying the world for paperclips.
#AI#artificial intelligence#universal paperclips#capitalism#dystopia#paperclips#Paperclip Maximizers will destroy us all
3 notes
Text
Rosie Tucker | Paperclip Maximizer
youtube
#utopia now!#rosie tucker#paperclip maximizer#new music#musica nueva#music#music video#musica#video musical#videos#Youtube
0 notes
Text
I just watched a teen titans go! episode that explained the paperclip maximizer problem. brb gotta go get my forgetting stick
#my brother is back in his teen titans go! phase#every so often I look up and recognize something#it was rose wilson a while ago! that was exciting for me#but no I was not expecting to be blindsided by ai theory while trying to ignore it#anyway the paperclip maximizer problem is fascinating!#it's basically a thought experiment about ai with sole goals that seem completely innocent#say you make an ai whose sole goal is to make paperclips#it will not stop making paperclips until it has exhausted all its resources#and will eventually come to the conclusion that humans are preventing it from making paperclips#and then act accordingly#it's used to illustrate the potential danger of ai with a singular dedicated purpose#even one as innocuous as making paperclips#and I was not expecting it to be in teen titans go! so that was fun
1 note
Text
My Econ 101 understanding is that a market "efficiently allocates scarce goods" in the sense that it maximizes total producer and consumer surplus. Surplus is the difference between what you're willing to buy/sell for and what you actually end up buying/selling for - at the equilibrium price, everyone is getting the most bang for their buck, and deviating from that price will cause shortages or oversupply.
Now, "willingness to pay" doesn't mean "how badly you want it," it's a mix of how much you want it and how much you can afford to give in exchange. If you're rich, if you do a job that lots of people think is valuable or you own capital that a lot of people want to make use of, you have a lot to give in exchange, and you can buy expensive stuff.
This is the other thing that makes capitalism efficient: trades only happen if they're positive-sum. A business needs to earn more money than it spends, which means (since money is a rough proxy for value) it produces more value than it consumes. Businesses that are inefficiently using the value they take in (they buy stuff they don't need, they pay people too much, etc) need to either cut costs or go out of business, and in a Darwinian fashion this drives prices down until businesses are earning just enough to stay afloat.
You can probably see the problem here - if the "business" is a single person selling their labor, and the value they're consuming is food or medical care, "going out of business" means dying of starvation or cancer. And while you can argue that saving a human life is actually creating quite a lot of value for society - perhaps the only value that we actually care about - capitalism only cares about the dollar value.
Or to put it another way, the reason the government needs to pay for your dad's cancer treatment is that a for-profit business isn't going to spend half a million dollars keeping him alive when they could just hire a replacement for much less.
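(A toy version of that Econ 101 story, with made-up linear curves: the equilibrium is where willingness to pay meets willingness to sell, and "surplus" is the gap between what each side would have accepted and the price actually paid. Note that nothing in the calculation knows whether a buyer is rich or desperate; "willingness to pay" bakes both together.)

def demand_price(q): return 100 - q  # what buyers will pay for the q-th unit
def supply_price(q): return 20 + q   # what sellers will accept for the q-th unit

# Equilibrium: the first unit where sellers ask at least what buyers offer.
q_star = next(q for q in range(200) if demand_price(q) <= supply_price(q))
p_star = demand_price(q_star)

consumer_surplus = sum(demand_price(q) - p_star for q in range(q_star))
producer_surplus = sum(p_star - supply_price(q) for q in range(q_star))

print(q_star, p_star, consumer_surplus, producer_surplus)  # 40 60 820 820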
Ok, as an economic idiot, why would we expect capitalism to do a good job allocating scarce goods?
It seems to me that what we would expect from econ 101 is that when the supply of a good is low relative to the demand, the price would increase, thus tending to distribute that good to people who have more wealth, and away from people who have less wealth.
My poor understanding of the market is that what it succeeds at is this: when there is a small supply of a good relative to demand, market forces spur the creation of more of the good to meet demand.
But that of course entails that it's actually possible to supply more of the good, i.e. that it is *not* scarce.
In situations where the supply is inherently constrained in some way, wouldn't we expect the price to remain high?
As a concrete example, my Dad's cancer treatment cost him half a million dollars. He does not have half a million dollars. Luckily for us, he has medicare.
If we assume that there is a floor to the cost of cancer treatment, that in some sense the price of his treatment is a rational response to market conditions (in terms of the skilled labor and complicated machines and medicines used to treat it) then why would we expect the cost of that treatment to reach a place where he could afford it out of pocket?
#economics#I am not an expert here#also the market for health care is bonkers in many other ways but I'm just focusing on the econ 101 reason why it may not be affordable#capitalism is like a paperclip maximizer for dollars
132 notes
Text
Bounty Hunter Boogaloo
So I've kind of been dropping art for this campaign but completely neglecting most forms of context for them, which is a damn shame because our TTRPG sessions are wild.
This particular one, Bounty Hunter Boogaloo, is pretty fun because it was partially an attempt to salvage the worldbuilding for a previous, abandoned campaign that had been started with almost the same group.
The old one, Anarchy Cascade, was essentially a space opera starring two matryoshka brain paperclip maximizers, one of whom is a perfectionist micromanaging the entire galaxy while the other is a madman who hit an integer overflow and learned magic.
Naturally, the players were employed by the supercomputer that wanted to nuke civilization back into the stone age.
BHB, on the other hand, takes place ten years afterwards, where chaos and magic have spread so virulently across the system that most of the galaxy is impossible to predict and incredibly difficult to govern; and into this niche is born the Bounty Hunter Guild.
The main draw of the Guild lies in its "Ultimate Bounty": a bounty said to have been placed by the very gods themselves. Any who complete it are to be granted a single wish.
Having signed up as Bounty Hunters, the party takes on dozens of contracts, ranging from killing giant pigeons at the behest of Little Timmy all the way up to freeing the trash planet Noightmate-Prime from the RGB-backlit clutches of the Gae'mherr Empire (and their more sinister sponsors, IKEA)
Not to mention the sizable bounties on the heads of their former characters, who have wrought havoc nonstop for the past ten years and whose schemes of mass destruction are quickly coming to fruition.
We've been having a lot of fun with it!
#bounty hunter boogaloo#ttrpg campaign#campaign premise#campaign setting#paperclip maximizer#matryoshka brain#space opera#science fiction#science fantasy
0 notes
Text
Google you dumb cunt that's what the point of a highly specific search term is, I don't want "many results" I want the few correct ones for this obscure third-party pathfinder gnome racial archetype i read about once in a printed-out PDF at someone's house who has since dropped off the face of the earth for unspecified reasons and it would be awkward to text them out of the blue about this anyway. Know your place you stupid fucking misaligned paperclip maximizer.
306 notes
Text
The real AI fight
Tonight (November 27), I'm appearing at the Toronto Metro Reference Library with Facebook whistleblower Frances Haugen.
On November 29, I'm at NYC's Strand Books with my novel The Lost Cause, a solarpunk tale of hope and danger that Rebecca Solnit called "completely delightful."
Last week's spectacular OpenAI soap-opera hijacked the attention of millions of normal, productive people and nonconsensually crammed them full of the fine details of the debate between "Effective Altruism" (doomers) and "Effective Accelerationism" (AKA e/acc), a genuinely absurd debate that was allegedly at the center of the drama.
Very broadly speaking: the Effective Altruists are doomers, who believe that Large Language Models (AKA "spicy autocomplete") will someday become so advanced that they could wake up and annihilate or enslave the human race. To prevent this, we need to employ "AI Safety" – measures that will turn superintelligence into a servant or a partner, not an adversary.
Contrast this with the Effective Accelerationists, who also believe that LLMs will someday become superintelligences with the potential to annihilate or enslave humanity – but they nevertheless advocate for faster AI development, with fewer "safety" measures, in order to produce an "upward spiral" in the "techno-capital machine."
Once-and-future OpenAI CEO Altman is said to be an accelerationist who was forced out of the company by the Altruists, who were subsequently bested, ousted, and replaced by Larry fucking Summers. This, we're told, is the ideological battle over AI: should we cautiously progress our LLMs into superintelligences with safety in mind, or go full speed ahead and trust to market forces to tame and harness the superintelligences to come?
This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive:
https://locusmag.com/2020/07/cory-doctorow-full-employment/
As Molly White writes, this isn't much of a debate. The "two sides" of this debate are as similar as Tweedledee and Tweedledum. Yes, they're arrayed against each other in battle, so furious with each other that they're tearing their hair out. But for people who don't take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief. The fact that they've split into warring factions on its particulars is less important than their unified belief in the certain coming of the paperclip-maximizing apocalypse:
https://newsletter.mollywhite.net/p/effective-obfuscation
White points out that there's another, much more distinct side in this AI debate – as different and distant from Dee and Dum as a Beamish Boy and a Jabberwock. This is the side of AI Ethics – the side that worries about "today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others." As White says, shifting the debate to existential risk from a future, hypothetical superintelligence "is incredibly convenient for the powerful individuals and companies who stand to profit from AI."
After all, both sides plan to make money selling AI tools to corporations, whose track record in deploying algorithmic "decision support" systems and other AI-based automation is pretty poor – like the claims-evaluation engine that Cigna uses to deny insurance claims:
https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims
On a graph that plots the various positions on AI, the two groups of weirdos who disagree about how to create the inevitable superintelligence are effectively standing on the same spot, and the people who worry about the actual way that AI harms actual people right now are about a million miles away from that spot.
There's that old programmer joke, "There are 10 kinds of people, those who understand binary and those who don't." But of course, that joke could just as well be, "There are 10 kinds of people, those who understand ternary, those who understand binary, and those who don't understand either":
https://pluralistic.net/2021/12/11/the-ten-types-of-people/
What's more, the joke could be, "there are 10 kinds of people, those who understand hexadecenary, those who understand pentadecenary, those who understand tetradecenary [und so weiter] those who understand ternary, those who understand binary, and those who don't." That is to say, a "polarized" debate often has people who hold positions so far from the ones everyone is talking about that those belligerents' concerns are basically indistinguishable from one another.
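(If you want to check the mechanics of the joke, any language that parses numerals in arbitrary bases will do; in Python, the string "10" evaluates to a different number in every base, which is the whole gag.)

for base in (2, 3, 10, 16):
    print(base, int("10", base))  # "10" read in base b is just b: 2, 3, 10, 16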
The act of identifying these distant positions is a radical opening up of possibilities. Take the indigenous philosopher chief Red Jacket's response to the Christian missionaries who sought permission to proselytize to Red Jacket's people:
https://historymatters.gmu.edu/d/5790/
Red Jacket's whole rebuttal is a superb dunk, but it gets especially interesting where he points to the sectarian differences among Christians as evidence against the missionary's claim to having a single true faith, and in favor of the idea that his own people's traditional faith could be co-equal among Christian doctrines.
The split that White identifies isn't a split about whether AI tools can be useful. Plenty of us AI skeptics are happy to stipulate that there are good uses for AI. For example, I'm 100% in favor of the Human Rights Data Analysis Group using an LLM to classify and extract information from the Innocence Project New Orleans' wrongful conviction case files:
https://hrdag.org/tech-notes/large-language-models-IPNO.html
Automating "extracting officer information from documents – specifically, the officer's name and the role the officer played in the wrongful conviction" was a key step to freeing innocent people from prison, and an LLM allowed HRDAG – a tiny, cash-strapped, excellent nonprofit – to make a giant leap forward in a vital project. I'm a donor to HRDAG and you should donate to them too:
https://hrdag.networkforgood.com/
Good data-analysis is key to addressing many of our thorniest, most pressing problems. As Ben Goldacre recounts in his inaugural Oxford lecture, it is both possible and desirable to build ethical, privacy-preserving systems for analyzing the most sensitive personal data (NHS patient records) that yield scores of solid, ground-breaking medical and scientific insights:
https://www.youtube.com/watch?v=_-eaV8SWdjQ
The difference between this kind of work – HRDAG's exoneration work and Goldacre's medical research – and the approach that OpenAI and its competitors take boils down to how they treat humans. The former treats all humans as worthy of respect and consideration. The latter treats humans as instruments – for profit in the short term, and for creating a hypothetical superintelligence in the (very) long term.
As Terry Pratchett's Granny Weatherwax reminds us, this is the root of all sin: "sin is when you treat people like things":
https://brer-powerofbabel.blogspot.com/2009/02/granny-weatherwax-on-sin-favorite.html
So much of the criticism of AI misses this distinction – instead, this criticism starts by accepting the self-serving marketing claim of the "AI safety" crowd – that their software is on the verge of becoming self-aware, and is thus valuable, a good investment, and a good product to purchase. This is Lee Vinsel's "Criti-Hype": "taking press releases from startups and covering them with hellscapes":
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
Criti-hype and AI were made for each other. Emily M Bender is a tireless cataloger of criti-hypeists, like the newspaper reporters who breathlessly repeat "completely unsubstantiated claims (marketing)…sourced to Altman":
https://dair-community.social/@emilymbender/111464030855880383
Bender, like White, is at pains to point out that the real debate isn't doomers vs accelerationists. That's just "billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading – and philosophers and others feeling important by dressing these same silly ideas up in fancy words":
https://dair-community.social/@emilymbender/111464024432217299
All of this is just a distraction from real and important scientific questions about how (and whether) to make automation tools that steer clear of Granny Weatherwax's sin of "treating people like things." Bender – a computational linguist – isn't a reactionary who hates automation for its own sake. On Mystery AI Hype Theater 3000 – the excellent podcast she co-hosts with Alex Hanna – the episodes even come with machine-generated transcripts:
https://www.buzzsprout.com/2126417
There is a serious, meaty debate to be had about the costs and possibilities of different forms of automation. But the superintelligence true-believers and their criti-hyping critics keep dragging us away from these important questions and into fanciful and pointless discussions of whether and how to appease the godlike computers we will create when we disassemble the solar system and turn it into computronium.
The question of machine intelligence isn't intrinsically unserious. As a materialist, I believe that whatever makes me "me" is the result of the physics and chemistry of processes inside and around my body. My disbelief in the existence of a soul means that I'm prepared to think that it might be possible for something made by humans to replicate something like whatever process makes me "me."
Ironically, the AI doomers and accelerationists claim that they, too, are materialists – and that's why they're so consumed with the idea of machine superintelligence. But it's precisely because I'm a materialist that I understand these hypotheticals about self-aware software are less important and less urgent than the material lives of people today.
It's because I'm a materialist that my primary concerns about AI are things like the climate impact of AI data-centers and the human impact of biased, opaque, incompetent and unfit algorithmic systems – not science fiction-inspired, self-induced panics over the human race being enslaved by our robot overlords.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#criti-hype#ai doomers#doomers#eacc#effective acceleration#effective altruism#materialism#ai#10 types of people#data science#llms#large language models#patrick ball#ben goldacre#trusted research environments#science#hrdag#human rights data analysis group#red jacket#religion#emily bender#emily m bender#molly white
288 notes
Text
We should focus our efforts on making as many horseshoe nails as possible, just to be safe. And, of course, with that goal in mind, we must observe that humans might someday switch off our horseshoe-nail factories, so it will first be necessary to wipe out all life on earth.
Me, called to explain why I have wasted so much time shitposting: "many calamities have occurred for want of a horseshoe nail. I am making horseshoe nails."
14 notes
Text
Last week’s spectacular OpenAI fight was reportedly a donnybrook between “Effective Altruism” and “Effective Accelerationism”—two schools of philosophy founded on the nonsensical faith, absent any evidence, that godlike artificial intelligence (AI) beings are imminent, and arguing over the best way to prepare for that day.
Cory Doctorow @mostlysignssomeportents :
This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we’ll get a locomotive….
But for people who don’t take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief. The fact that they’ve split into warring factions on its particulars is less important than their unified belief in the certain coming of the paperclip-maximizing apocalypse….
Left out of this argument are the real abuses of artificial intelligence and automation today, which (Cory says, quoting Molly White) “is incredibly convenient for the powerful individuals and companies who stand to profit from AI.”
AI and automation can be used for a great deal of good and a great deal of evil—and it already is being used for both, Cory says. We need to focus the discussion on that.
Like Cory, I think it’s entirely possible that we may achieve human-level AI one day, and that AI might become superintelligent. That might happen today, it might happen in a thousand years, it might never happen at all. The human race has other things to worry about now.
114 notes
Text
Contra Yishan: Google's Gemini issue is about racial obsession, not a Yudkowsky AI problem.
@yishan wrote a thoughtful thread:
Google’s Gemini issue is not really about woke/DEI, and everyone who is obsessing over it has failed to notice the much, MUCH bigger problem that it represents. [...] If you have a woke/anti-woke axe to grind, kindly set it aside now for a few minutes so that you can hear the rest of what I’m about to say, because it’s going to hit you from out of left field. [...] The important thing is how one of the largest and most capable AI organizations in the world tried to instruct its LLM to do something, and got a totally bonkers result they couldn’t anticipate. What this means is that @ESYudkowsky has a very very strong point. It represents a very strong existence proof for the “instrumental convergence” argument and the “paperclip maximizer” argument in practice.
See full thread at link.
Gemini's code is private and Google's PR flacks tell lies in public, so it's hard to prove anything. Still I think Yishan is wrong and the Gemini issue is about the boring old thing, not the new interesting thing, regardless of how tiresome and cliched it is, and I will try to explain why.
I think Google deliberately set out to blackwash their image generator, and did anticipate the image-generation result, but didn't anticipate the degree of hostile reaction from people who objected to the blackwashing.
Steven Moffat exemplified the blackwashing mindset when he remarked:
"We've kind of got to tell a lie. We'll go back into history and there will be black people where, historically, there wouldn't have been, and we won't dwell on that. "We'll say, 'To hell with it, this is the imaginary, better version of the world. By believing in it, we'll summon it forth'."
Moffat was the subject of some controversy when he produced a Doctor Who episode (Thin Ice) featuring a visit to 1814 Britain that looked far less white than the historical record indicates that 1814 Britain was, and he had the Doctor claim in-character that history has been whitewashed.
This is an example that serious, professional, powerful people believe that blackwashing is a moral thing to do. When someone like Moffat says that a blackwashed history is better, and Google Gemini draws a blackwashed history, I think the obvious inference is that Google Gemini is staffed by Moffat-like people who anticipated this result, wanted this result, and deliberately worked to create this result.
The result is only "bonkers" to outsiders who did not want this result.
Yishan says:
It demonstrates quite conclusively that with all our current alignment work, that even at the level of our current LLMs, we are absolutely terrible at predicting how it’s going to execute an intended set of instructions.
No. It is not at all conclusive. "Gemini is staffed by Moffats who like blackwashing" is a simple alternate hypothesis that predicts the observed results. Random AI dysfunction or disalignment does not predict the specific forms that happened at Gemini.
One tester found that when he asked Gemini for "African Kings" it consistently returned all dark-skinned black royalty, despite the existence of light-skinned Mediterranean Africans such as Copts, but when he asked Gemini for "European Kings" it mixed in black, Asian, and Native American figures in regalia.
Gemini is not randomly off-target, nor accurate in one case and wrong in the other, it is specifically thumb-on-scale weighted away from whites and towards blacks.
If there's an alignment problem here, it's the alignment of the Gemini staff. "Woke" and "DEI" and "CRT" are some of the names for this problem, but the names attract flames and disputes over definition. Rather than argue names, I hear that Jack K. at Gemini is the sort of person who asserts "America, where racism is the #1 value our populace seeks to uphold above all".
He is delusional, and I think a good step toward fixing Gemini would be to fire him and everyone who agrees with him. America is one of the least racist countries in the world, with so much screaming about racism partly because of widespread agreement that racism is a bad thing, which is what makes the accusation threatening. As Moldbug put it:
The logic of the witch hunter is simple. It has hardly changed since Matthew Hopkins’ day. The first requirement is to invert the reality of power. Power at its most basic level is the power to harm or destroy other human beings. The obvious reality is that witch hunters gang up and destroy witches. Whereas witches are never, ever seen to gang up and destroy witch hunters. In a country where anyone who speaks out against the witches is soon found dangling by his heels from an oak at midnight with his head shrunk to the size of a baseball, we won’t see a lot of witch-hunting and we know there’s a serious witch problem. In a country where witch-hunting is a stable and lucrative career, and also an amateur pastime enjoyed by millions of hobbyists on the weekend, we know there are no real witches worth a damn.
But part of Jack's delusion, in turn, is a deliberate linguistic subversion by the left. Here I apologize for retreading culture war territory, but as far as I can determine it is true and relevant, and it being cliche does not make it less true.
US conservatives, generally, think "racism" is when you discriminate on race, and this is bad, and this should stop. This is the well established meaning of the word, and the meaning that progressives implicitly appeal to for moral weight.
US progressives have some of the same, but have also widespread slogans like "all white people are racist" (with academic motte-and-bailey switch to some excuse like "all complicit in and benefiting from a system of racism" when challenged) and "only white people are racist" (again with motte-and-bailey to "racism is when institutional-structural privilege and power favors you" with a side of America-centrism, et cetera) which combine to "racist" means "white" among progressives.
So for many US progressives, ending racism takes the form of eliminating whiteness and disfavoring whites and erasing white history and generally behaving the way Jack and friends made Gemini behave. (Supposedly. They've shut it down now and I'm late to the party, I can't verify these secondhand screenshots.)
Bringing in Yudkowsky's AI theories adds no predictive or explanatory power that I can see. Occam's Razor says to rule out AI alignment as a problem here. Gemini's behavior is sufficiently explained by common old-fashioned race-hate and bias, which there is evidence for on the Gemini team.
Poor Yudkowsky. I imagine he's having a really bad time now. Imagine working on "AI Safety" in the sense of not killing people, and then the Google "AI Safety" department turns out to be a race-hate department that pisses away your cause's goodwill.
---
I do not have a Twitter account. I do not intend to get a Twitter account, it seems like a trap best stayed out of. I am yelling into the void on my comment section. Any readers are free to send Yishan a link, a full copy of this, or remix and edit it to tweet at him in your own words.
61 notes
Note
how would the maximals react to bot buddy that takes care of their sparkling alone.
I have actually been thinking about doing a piece on a single parent before but never got around to it! Let's see a little of the life of single-parent Buddy and their little sparkling.
Hope you enjoy!
Maximals reaction to Bot Buddy being a single parent to a sparkling
SFW, familial, platonic, Cybertronian/Bot reader
Beast Wars
Optimus Primal
Best Ape Dad.
He has deep respect for Buddy.
Not only are they balancing their duties on board the ship and scouting missions, but they're also making time for their sparkling.
If Optimus sees Buddy having a particularly slow day or looking like they are about to collapse at any moment, he tells them to take a break and let him and the team look after the sparkling for a bit.
“Take a break Buddy, you’ve done enough today.”--Optimus
“Sorry Primal, but someone needs to look after them.”--Buddy
“Yes, and while it’s admirable that you still want to do that, you need to rest. The team will help look after them. I promise nothing’s going to happen to them on my watch.”--Optimus
He is in the top 5 best babysitters on board the ship.
When Buddy is out cold, he takes the sparkling in his arms and performs the rest of his duties. If they behave extra well, he’ll take them out flying around the base’s grounds.
Rhinox
Rhino Uncle.
Since Rhinox mainly stays at the base, unless told otherwise, he is a familiar face to Buddy and their sparkling.
It makes them a bit closer, as Buddy does most of their work at the base. Rhinox is one of the first to spot if Buddy is having a slow day or looks like they are about to drop.
“Buddy, you’re doing it again.”--Rhinox
“It’s time already?”--Buddy
“You know it Buddy, now, hand over the sparkling.”--Rhinox
He makes a mini carrier strap to put the sparkling on his chassis while he works. He gives the sparkling little toys he makes out of some spare parts that are safe to use.
He doesn't use his beast mode too often when the sparkling is around; he doesn't feel like he has as much control of his huge steps as in his bi-pede mode.
Reigning in at the top 5 best babysitters.
Cheetor
Best Older Cat Brother.
Buddy has gained a new child. Cheetor doesn't mind it too much; he always wanted a little sibling, in a way.
He doesn’t know how Buddy does all this stuff.
He does try to help with the sparkling from time to time; Cheetor is good with kids, after all.
While he does spot Buddy's slow days a bit later than the others, he makes sure that Buddy sits out for a break.
“Hey Buddy, you’re a bit slow today.”--Cheetor
“Well not everyone can be as fast as you Cheetor.”--Buddy
“Yeah, but I can take the kid if you want. I’ll make sure they are okay, and you can take a nap! That sounds like fun.”--Cheetor
“…I suppose—Hey Cheetor!”--Buddy
“We are going to have so much fun!”--Cheetor
He has enough energy to keep up with the kid and their activities, constantly racing with or carrying the sparkling as he zips around the base or base grounds.
Top 5 best babysitters.
Rattrap
Rat Uncle.
He has a soft spot for the two of them. He tries to deny it, but everyone knows otherwise.
He is constantly on Buddy for overworking themselves. If Buddy looks like they are going to drop, Rattrap will take the kid without saying anything else.
“Rattrap!”--Buddy
“Yeah?”--Rattrap
“Give me my sparkling back.”--Buddy
“… I know I did not hear you say that while you’re swaying side to side like a palm tree.”--Rattrap
“… point taken. Just make sure they get nap time this time.”--Buddy
“You go take a ‘nap time’.”--Rattrap
He makes a decent babysitter, just don't take too long, or else he is going to show the sparkling a few new swear words and how to make a bomb using a paperclip.
Dinobot
Dino Uncle.
Dinobot was honestly surprised to see Buddy with a sparkling when he joined the Maximals; it gives him conflicting feelings.
What if he hadn’t joined and continued to fight for the Predacons? What if he had shot Buddy or assisted in injuring them?
The sparkling would be left without a parent. He isn’t letting that happen any time soon.
Dinobot takes a bit longer to spot Buddy having their slow days; mainly, he won't take the hint unless he sees Buddy about to fall or Rattrap makes it obvious.
“Sooo… you gonna help Buddy out or what?”--Rattrap
“Why? They look fine.”--Dinobot
Buddy teetering and tottering like a drunk bot on a day of celebration
“… I’ll get the child.”--Dinobot
He is a decent babysitter but isn't left alone for too long, as he will hand the sparkling to the next Maximal.
Tigatron and Airazor
Auntie Birdy and Uncle Big Cat.
They absolutely adore the sparkling. Both have mad respect for Buddy.
Taking on the role of parenthood while still at war and doing a good job? Buddy is taking that on like a champ.
Since they work out of base for most missions, they don't interact too much with Buddy or the sparkling, but when they do, they make sure to give Buddy a break from sparkling duties and get them out of the stuffy base for a bit.
“Come Buddy, it’s such a nice day outside.”--Airazor
“But what about—”--Buddy
“Tigatron has them taken care of. Now come.”--Airazor
The pair are on neutral ground in the babysitting rankings. They would be ranked higher if they were around much more.
Silverbolt and Blackarachnia
Silverbolt is on Buddy and the sparkling from the get-go.
He makes it his job to ensure that every time Buddy goes out of the base, they come back safe and sound to their sparkling.
Blackarachnia, like Dinobot, has some conflicting feelings about the two after turning to the Maximals.
When she was with the Predacons she didn’t know about the sparkling.
She does feel a bit guilty thinking about all the times she had nearly gotten Buddy offlined in the past, whether directly or indirectly. She tries not to think about it too much.
Blackarachnia is usually the first to spot that Buddy is overworking themselves. She does not go easy on them for skipping out on taking care of themselves.
“Buddy! You’re doing that thing again! Ugh! Can’t believe—”--Blackarachnia
“Um… what are you doing—Hey!”--Buddy
“Silverbolt, keep Tiny occupied.”--Blackarachnia
“As you wish my love. And Buddy, just go with it.”--Silverbolt
“Go with what?”--Buddy
She makes sure Buddy gets some TLC; she makes them promise not to tell anyone of this interaction.
Like she says, she doesn't care. (Liar.)
#transformers#transformers x reader#beast wars#bw optimus primal#rhinox#bw dinobot#bw rattrap#tigatron#cheetor#airazor#silverbolt#blackarachnia#beast wars x platonic reader
95 notes