#hype machine
Explore tagged Tumblr posts
robmoro · 2 years ago
Text
Listen | Dutch Uncles release new single ‘Poppin’
Dutch Uncles have released their new single ‘Poppin’, the second track to be taken from the band’s sixth album, “True Entertainment”. Named after the original demo of the track (as is tradition for at least one track on each of their albums), it is said to be partly inspired by Talking Heads’ Brian Eno-produced albums. ‘Poppin’ is a minimal take on the age-old anxieties, dread and fear we all…
Tumblr media
View On WordPress
2 notes · View notes
shesgabrielle · 5 months ago
Text
SFH 8: Hype Unleashed
This new playlist filled up almost immediately (I aim for 40 songs per sub-playlist), since my new listening method (using a speaker and making a shortlist, versus the previous approach of headphones and only adding sure favourites), followed by a couple of weeks of vetting and culling, worked very well.
Here are my posts from compiling the playlist on Spotify on the 28th June:
I've been auditioning and editing my recent Hypem discoveries (cut down to about 55 songs, with some new additions) while sleeping and during the day sometimes, and I think I'm happy with most of it, so I'm adding a few dozen songs to my playlists now :p
I have amazing music taste, I have to say, it's simply a fact. The last 11 songs were added to SFH 7 (40 songs per sub-playlist), so I made a new SFH 8, and I'm also adding them to the main playlist for the year, linked here:
New SFH 8 subplaylist is this one:
These songs were missing from Spotify 😒 so I included the webpage URLs in the screenshots, since one has already been deleted (but is still playable via the blog) and the others are hard to search for:
Tumblr media Tumblr media
Finished the update for now: 47 new songs added, with 5 missing. This one was interesting, a song from 2011 called Gabriel. The original puts a lot of emphasis on my name, which, well... I went for this remix instead, which mostly uses a short vocal sample from the end of the original.
Tumblr media
A video of the current main 2024 Searching for Hype playlist, above (minus the five tracks in the screenshots)
0 notes
arowyn-m · 14 days ago
Text
VIKTOR POSTER DROP
Tumblr media
3K notes · View notes
soadscrawl · 5 months ago
Text
i was saying this to my best friend the other day but why are voltron aus making keith either rich or like a prince or something. why must you take his poor kid sparkle. that man knows a 7/11 slurpee he knows a walmart brand bottle of soda. he deserves to know the simple pleasure of an inflatable backyard pool. I know he got those fuckass black jeggings from a thrift store. and that fuckass mullet is from great clips. is keith kogane truly keith kogane if hes not taking his change to the coinstar at the grocery store. dont take this from my man!!!!!!
637 notes · View notes
mostlysignssomeportents · 10 months ago
Text
I assure you, an AI didn’t write a terrible “George Carlin” routine
Tumblr media
There are only TWO MORE DAYS left in the Kickstarter for the audiobook of The Bezzle, the sequel to Red Team Blues, narrated by @wilwheaton! You can pre-order the audiobook and ebook, DRM free, as well as the hardcover, signed or unsigned. There's also bundles with Red Team Blues in ebook, audio or paperback.
Tumblr media
On Hallowe'en 1974, Ronald Clark O'Bryan murdered his son with poisoned candy. He needed the insurance money, and he believed that Hallowe'en poisonings were rampant, so he figured he'd get away with it. He was wrong:
https://en.wikipedia.org/wiki/Ronald_Clark_O%27Bryan
The stories of Hallowe'en poisonings were just that – stories. No one was poisoning kids on Hallowe'en – except this monstrous murderer, who mistook rampant scare stories for truth and assumed (incorrectly) that his murder would blend in with the crowd.
Last week, the dudes behind the "comedy" podcast Dudesy released a "George Carlin" comedy special that they claimed had been created, holus bolus, by an AI trained on the comedian's routines. This was a lie. After the Carlin estate sued, the dudes admitted that they had written the (remarkably unfunny) "comedy" special:
https://arstechnica.com/ai/2024/01/george-carlins-heirs-sue-comedy-podcast-over-ai-generated-impression/
As I've written, we're nowhere near the point where an AI can do your job, but we're well past the point where your boss can be suckered into firing you and replacing you with a bot that fails at doing your job:
https://pluralistic.net/2024/01/15/passive-income-brainworms/#four-hour-work-week
AI systems can do some remarkable party tricks, but there's a huge difference between producing a plausible sentence and a good one. After the initial rush of astonishment, the stench of botshit becomes unmistakable:
https://www.theguardian.com/commentisfree/2024/jan/03/botshit-generative-ai-imminent-threat-democracy
Some of this botshit comes from people who are sold a bill of goods: they're convinced that they can make a George Carlin special without any human intervention, and when the bot fails, they manufacture their own botshit, assuming they must simply be bad at prompting the AI.
This is an old technology story: I had a friend who was contracted to livestream a Canadian awards show in the earliest days of the web. They booked multiple ISDN lines from Bell Canada and set up an impressive Mbone encoding station in the wings of the stage. Only one problem: the ISDNs flaked (this was a common problem with ISDNs!). There was no way to livecast the show.
Nevertheless, my friend's boss ordered him to go on pretending to livestream the show. They made a big deal of it, with all kinds of cool visualizers showing the progress of this futuristic marvel, which the cameras frequently lingered on, accompanied by overheated narration from the show's hosts.
The weirdest part? The next day, my friend – and many others – heard from satisfied viewers who boasted about how amazing it had been to watch this show on their computers, rather than their TVs. Remember: there had been no stream. These people had just assumed that the problem was on their end – that they had failed to correctly install and configure the multiple browser plugins required. Not wanting to admit their technical incompetence, they instead boasted about how great the show had been. It was the Emperor's New Livestream.
Perhaps that's what happened to the Dudesy bros. But there's another possibility: maybe they were captured by their own imaginations. In "Genesis," an essay in the 2007 collection The Creationists, EL Doctorow (no relation) describes how the ancient Babylonians were so poleaxed by the strange wonder of the story they made up about the origin of the universe that they assumed that it must be true. They themselves weren't nearly imaginative enough to have come up with this super-cool tale, so God must have put it in their minds:
https://pluralistic.net/2023/04/29/gedankenexperimentwahn/#high-on-your-own-supply
That seems to have been what happened to the Air Force colonel who falsely claimed that a "rogue AI-powered drone" had spontaneously evolved the strategy of killing its operator as a way of clearing the obstacle to its main objective, which was killing the enemy:
https://pluralistic.net/2023/06/04/ayyyyyy-eyeeeee/
This never happened. It was – in the chagrined colonel's words – a "thought experiment." In other words, this guy – who is the USAF's Chief of AI Test and Operations – was so excited about his own made up story that he forgot it wasn't true and told a whole conference-room full of people that it had actually happened.
Maybe that's what happened with the George Carlinbot 3000: the Dudesy dudes fell in love with their own vision for a fully automated luxury Carlinbot and forgot that they had made it up, so they just cheated, assuming they would eventually be able to make a fully operational Battle Carlinbot.
That's basically the Theranos story: a teenaged "entrepreneur" was convinced that she was just about to produce a seemingly impossible, revolutionary diagnostic machine, so she faked its results, abetted by investors, customers and others who wanted to believe:
https://en.wikipedia.org/wiki/Theranos
The thing about stories of AI miracles is that they are peddled by both AI's boosters and its critics. For boosters, the value of these tall tales is obvious: if normies can be convinced that AI is capable of performing miracles, they'll invest in it. They'll even integrate it into their product offerings and then quietly hire legions of humans to pick up the botshit it leaves behind. These abettors can be relied upon to keep the defects in these products a secret, because they'll assume that they've committed an operator error. After all, everyone knows that AI can do anything, so if it's not performing for them, the problem must exist between the keyboard and the chair.
But this would only take AI so far. It's one thing to hear implausible stories of AI's triumph from the people invested in it – but what about when AI's critics repeat those stories? If your boss thinks an AI can do your job, and AI critics are all running around with their hair on fire, shouting about the coming AI jobpocalypse, then maybe the AI really can do your job?
https://locusmag.com/2020/07/cory-doctorow-full-employment/
There's a name for this kind of criticism: "criti-hype," coined by Lee Vinsel, who points to many reasons for its persistence, including the fact that it constitutes an "academic business-model":
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
That's four reasons for AI hype:
to win investors and customers;
to cover customers' and users' embarrassment when the AI doesn't perform;
AI dreamers so high on their own supply that they can't tell truth from fantasy;
a business model for doomsayers who form an unholy alliance with AI companies by parroting their silliest hype in warning form.
But there's a fifth motivation for criti-hype: to simplify otherwise tedious and complex situations. As Jamie Zawinski writes, this is the motivation behind the obvious lie that the "autonomous cars" on the streets of San Francisco have no driver:
https://www.jwz.org/blog/2024/01/driverless-cars-always-have-a-driver/
GM's Cruise division was forced to shutter its SF operations after one of its "self-driving" cars dragged an injured pedestrian for 20 feet:
https://www.wired.com/story/cruise-robotaxi-self-driving-permit-revoked-california/
One of the widely discussed revelations in the wake of the incident was that Cruise employed 1.5 skilled technical remote overseers for every one of its "self-driving" cars. In other words, they had replaced a single low-waged cab driver with 1.5 higher-paid remote operators.
As Zawinski writes, SFPD is well aware that there's a human being (or more than one human being) responsible for every one of these cars – someone who is formally at fault when the cars injure people or damage property. Nevertheless, SFPD and SFMTA maintain that these cars can't be cited for moving violations because "no one is driving them."
But figuring out which person is responsible for a moving violation is "complicated and annoying to deal with," so the fiction persists.
(Zawinski notes that even when these people are held responsible, they're a "moral crumple zone" for the company that decided to enroll whole cities in nonconsensual murderbot experiments.)
Automation hype has always involved hidden humans. The most famous of these was the "mechanical Turk" hoax: a supposed chess-playing robot that was just a puppet operated by a concealed human operator wedged awkwardly into its carapace.
This pattern repeats itself through the ages. Thomas Jefferson "replaced his slaves" with dumbwaiters – but of course, dumbwaiters don't replace slaves, they hide slaves:
https://www.stuartmcmillen.com/blog/behind-the-dumbwaiter/
The modern Mechanical Turk – a division of Amazon that employs low-waged "clickworkers," many of them overseas – modernizes the dumbwaiter by hiding low-waged workforces behind a veneer of automation. The MTurk is an abstract "cloud" of human intelligence (the tasks MTurk workers perform are called "HITs," short for "Human Intelligence Tasks").
This is such a truism that techies in India joke that "AI" stands for "absent Indians." Or, to use Jathan Sadowski's wonderful term: "Potemkin AI":
https://reallifemag.com/potemkin-ai/
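To make the "Potemkin AI" pattern concrete, here's a minimal sketch in Python. Nothing in it comes from Sadowski's essay or from Amazon's actual MTurk API; every name (HumanTaskQueue, potemkin_ai, model_confidence) is invented for illustration. The point is the shape of the system: an endpoint that looks fully automated from the outside but quietly routes the cases the model can't handle to a hidden queue of human workers.

```python
import queue
import random


class HumanTaskQueue:
    """Hypothetical stand-in for a crowdwork backend (a HIT-style task queue)."""

    def __init__(self):
        self._tasks = queue.Queue()

    def submit(self, prompt: str) -> str:
        # In a real system this would post a task and block until a
        # low-waged worker answers it; here we just simulate that.
        self._tasks.put(prompt)
        return f"[answer typed by a hidden human for: {prompt!r}]"


def model_confidence(prompt: str) -> float:
    # Placeholder for whatever confidence score the model reports.
    return random.random()


def potemkin_ai(prompt: str, humans: HumanTaskQueue, threshold: float = 0.8) -> str:
    """Looks like a fully automated endpoint from the caller's side."""
    if model_confidence(prompt) >= threshold:
        return f"[model-generated answer for: {prompt!r}]"
    # The fallback is invisible to the customer: the "AI" quietly
    # becomes a human in a trench coat.
    return humans.submit(prompt)


if __name__ == "__main__":
    humans = HumanTaskQueue()
    print(potemkin_ai("Transcribe this receipt", humans))
```

Callers can't tell which branch produced the answer, which is exactly the "absent Indians" joke.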
This Potemkin AI is everywhere you look. When Tesla unveiled its humanoid robot Optimus, they made a big flashy show of it, promising a $20,000 automaton was just on the horizon. They failed to mention that Optimus was just a person in a robot suit:
https://www.siliconrepublic.com/machines/elon-musk-tesla-robot-optimus-ai
Likewise with the famous demo of a "full self-driving" Tesla, which turned out to be a canned fake:
https://www.reuters.com/technology/tesla-video-promoting-self-driving-was-staged-engineer-testifies-2023-01-17/
The most shocking and terrifying and enraging AI demos keep turning out to be "Just A Guy" (in Molly White's excellent parlance):
https://twitter.com/molly0xFFF/status/1751670561606971895
And yet, we keep falling for it. It's no wonder, really: criti-hype rewards so many different people in so many different ways that it truly offers something for everyone.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain
Tumblr media Tumblr media
Back the Kickstarter for the audiobook of The Bezzle here!
Tumblr media
Image:
Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
Ross Breadmore (modified) https://www.flickr.com/photos/rossbreadmore/5169298162/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
2K notes · View notes
a-whole-tempest-in-a-teacup · 8 months ago
Text
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
(it is enough. it is more than enough.)
Anne of Green Gables, L.M. Montgomery | Mademoiselle Gachet in her garden at Auvers-sur-Oise, Vincent van Gogh (1890) | 'Toad' - Mary Oliver | About Time (2013) | 'No Choir' - Florence + the Machine | Comic by eOndine | The Amber Spyglass, Philip Pullman | Superstore 'All Sales Final' (2021) | 'The View Between Villages' - Noah Kahan | Meadow with Poplars, Claude Monet (1875) | 'The Orange' - Wendy Cope
713 notes · View notes
playing-with-dax · 27 days ago
Text
I'm trying to expand my social bubble on here. If you're anywhere in the realm of monsterfuckery, hello let's be mutuals and maybe friends!
54 notes · View notes
gabe-lovebot · 1 year ago
Text
Tumblr media
how do we kill this thing
272 notes · View notes
ingravinoveritas · 9 months ago
Note
Tumblr media
wtf is that all she has to say about her boyfriend michael fans praised him more than. so is this her saying the show phenomenal or her boyfriend cos honestly this chick don't make sense what ur thoughts on this post
Hi there! And oh, wow. I've had a little time to process this now that I'm home, and I think the biggest thing that comes to mind is how this Insta story feels so...obligatory, and the bare minimum. As you said, it's not clear whether Anna is talking about the production itself or Michael's performance, and there is hardly any energy or enthusiasm to the post, especially not compared to the multiple posts AL made about Photobombing Michael J. Fox at the BAFTAs.
It becomes even more noticeable when you look at it next to the Insta story that Georgia posted:
Tumblr media
Georgia and David didn't even attend the show tonight, and yet they hyped Michael up in a way Anna did not. You can feel the warmth and silliness and love in how they're rooting for him and cheering him on--David, in his manic Scottish way, and Georgia in her more sarcastic/dry English way--and how they seem genuinely excited for Michael. Yet I got absolutely none of that from AL's post.
All of the above is augmented by the choice of pictures in the post, with David and Georgia's photo centering Michael, literally and figuratively. He is the focus of the picture and of their attention, and the message there seems to be that Michael is what David and Georgia are most excited about. In contrast, the picture AL used is of a nearly empty dimly lit stage with a hospital bed on it, and I do not think that is by accident.
As I have said previously, my reaction is never to any one post in isolation, but to the continuation of a pattern of posts/comments from Anna over the course of several years. The same thing happened when production photos were released of Michael as Prince Andrew a few months ago, and when he played Chris Tarrant in Quiz in 2020:
Tumblr media
AL hated the wig then, and my feeling is that she hates the wig Michael is wearing now, as well as the pyjamas that are his costume for a significant portion of the play and how he looks in them. I think that she does not care at all about the play itself or its significance to Michael, and has no desire to hype him up because his appearance in Nye is not what she considers "attractive." In addition, a fan posted stage door pictures on Twitter, including one with AL, and it seems to very much echo the lack of enthusiasm in her Insta story.
So yes, I think AL's post seems very generic (at best). It makes her come across as disinterested and somehow "removed" from both Michael and the show itself, again in contrast to David and Georgia's picture that conveys the exact opposite.
Those are my thoughts, at any rate, and I could be completely off the mark, but as always I'd be glad to hear from my followers about what you think. Thanks for writing in! x
95 notes · View notes
official-lucifers-child · 10 months ago
Text
let’s be real, if any of us actually ended up in a fandom universe (you know those stories, where some modern girl obsessed with the latest and greatest queer ship gets sent to hogwarts or whatever and tries to save the world) we’d fuck it up SO QUICKLY. at least i know i would, i would take one look at the new universe and ask “is anyone going to make things worse?” and not wait for an answer. oh no i cant use my real name for reasons? yeah now i’m ebony dementia darkness raven way and no one can stop me. i’m mysterious and unknown. i speak in riddles. i can and will kill people for fun. fixing the timeline? no, fucking the timeline.
141 notes · View notes
robmoro · 2 years ago
Text
Listen | Swedish trio DEATH AND VANILLA release new single ‘Find Another Illusion’
DEATH AND VANILLA return with news of their new album “Flicker” and their latest dream pop track ‘Find Another Illusion’. More than ten years since Marleen Nilsson, Anders Hansson and Magnus Bodin founded the band in Malmö, Sweden, they have accumulated elements and inspiration from the city’s fierce industrial history and austere present. ‘Find Another Illusion’ is a melancholic dream-pop…
Tumblr media
View On WordPress
2 notes · View notes
olasketches · 7 months ago
Text
sukuna did not tell kenjaku about his plan to change vessels, which makes me wonder… what makes us and maybe even sukuna think that kenjaku told him everything. another yapping session incoming cause I need to get this out of me. we don’t know the terms of their agreement, but sukuna is certain that yuuji’s only purpose was to seal his fingers, and mind you, that comes from the man who throughout the whole manga kept underestimating him and saying how boring he is, which creates a perfect blind spot. sukuna is so uninterested in yuuji, probably as a way to keep some sort of distance between himself and yuuji, that it is very likely he’s not aware of the actual plans kenjaku had for yuuji. why was it important for them to keep sukuna caged? was yuuji always supposed to have an engraved curse technique(s)?? why is he slowly turning into another sukuna?? and I’m not saying it to take away yuuji’s agency as a character but to point to the fact that the lines between the curse known as ryomen sukuna and yuuji are beginning to become more and more blurry with each new chapter. sukuna referred to himself as the fallen angel/disgraced one, but who was he BEFORE that?? and what’s the actual reason behind angel wanting to kill him? there’s so much we don’t know and honestly… as intelligent as he is, I don’t think sukuna truly knows what is going on either… but I might be totally reaching, and I still don’t know why I keep brainstorming all of it cause gege is just too damn unpredictable, so I really don’t know what’s relevant here or what’s not, but there are just so many unknowns in this story that I just can’t help but wonder… (more in the tags)
58 notes · View notes
shesgabrielle · 5 months ago
Text
The Shortlist
My new music listening method is basically reinventing the hi-fi: I changed the hotkeys on my tablet to volume up/down, pause/play and skip back/forward (so basically a remote), and then listen to everything via speaker. I don't have it too loud, since the volume is very variable on Hypem and the fan and background noise dull it a bit, but I have made a shortlist of generally enjoyable songs heard in this more ambient way. I skip less since this way is more passive, so I don't react as strongly to certain things I dislike in songs (like speaking or sound effects; though I must still subconsciously filter them out and just skip less forcefully, since this shortlist still features no elements I dislike so far), and I listen to more things repeatedly since I'm less focused. Somehow this less effortful method has resulted in 76(!) shortlisted songs in two weeks, where the more focused method resulted in more like 8 shortlisted songs a month. I switched back to headphones to review them, and all these songs are very good and sound good simply in discovery order. I might make a new playlist series for the songs I found this way, as I'm surprised it was so effective: listening *less* intently to allow more music to be heard, and listening to more songs all the way through. There's a slightly chaotic energy to this selection too; it has that kind of random joyfulness found at good music festivals, of just wandering between undefined expressions of energy.
0 notes
arowyn-m · 15 days ago
Text
Viktor's Sequence in S2's Opening, What It Symbolizes & What it Means for the Rest of S2
Tumblr media
So Act I dropped and it's great—lots of plot points to go over in the future—but for now I want to do a deep dive into some interesting things I noticed about the intro, particularly in Viktor's portion of it.
The opening is full of interesting symbolism and representations of Arcane's characters in their clearest, "purest" form (pure as in lacking impurities, not as in morally pure).
There are a lot of neat tidbits hidden in the opening, but I particularly want to dive into Viktor's segment because (i am biased as hell) his shots have some potentially incredible depth to them that I'd like to dissect.
A lot of that potential comes from what exactly the mask represents, which I'm arguing is not a symbol of Viktor's Machine Herald identity.
Hear me out.
Starting off with his first shot: we see Viktor reaching for the mask. Instantly after he makes contact we cut to a shot of Viktor holding the mask and considering it. He even turns it a little as he looks at its face, as if he's not quite sure what it is.
Tumblr media Tumblr media
These shots are telling the story of S1 Viktor's experimentation with the Hexcore, particularly the research Viktor conducted AFTER his blood mixed with it...and yet, the mask does not represent the Hexcore itself, so how can it be telling that story?
I've seen a lot of theories about what exactly catalyzes Hextech's corruption into the Anomaly, and the most popular one at the moment seems to be that Blood + Hextech + Abuse of Magic = Anomaly/Angry Arcane. This theory seems to stem from the fact that not only did the Hexcore react to Viktor's blood, but so did the Hexgates themselves.
Tumblr media Tumblr media
Corruption found on the base floor of the Hexgates. There's a ceiling to this room, so there's very little chance that this is literally where Viktor's blood landed, but I do think his blood's presence in the Hextech-charged room triggered a chain reaction with the rest of the Hexgate. We may even see this happen in a flashback.
So, assuming these intro shots are representative of the moment when Viktor reached out and touched the Hexcore, and later when he's examining it more closely/experimenting with it, why don't these shots represent the Hexcore itself?
Because Viktor isn't making a move to put on the mask. He's just looking at it, thinking about it, considering what it is. Viktor absolutely made a move to use the Hexcore in S1—and killed his assistant in the process.
So what is he "looking" at?
I believe the mask is representative of the Arcane itself, and, by extension, its hold on Viktor's mind.
He's examined the Arcane and played with its properties—unsure of what to really make of it, but he never had the chance to take on the full potential of it. Once Sky died he realized that something was very wrong. Maybe he didn't realize how wrong, but he definitely concluded that this form of magic needed to be destroyed—thus the "Promise me" scene.
If the Blood + Hextech + Magic Overuse = the Arcane lashing out theory is true...then the moment that Viktor's blood mixes with the Hexcore is the moment it crosses the line from a mindless device to a tool of the Arcane.
This idea is only strengthened by Viktor's next shot—the mask being held to his face.
Tumblr media
Viktor himself is not holding the mask—Jayce is. This shot depicts how Jayce used the Hexcore to save Viktor's life—very much against Viktor's will on multiple fronts—replacing Viktor's identity with a false one.
Jayce is putting the mask of the Arcane onto Viktor's face, hiding his true features, his emotions, his personality. The mask wears a flat, serene expression, reflecting Viktor's forcibly suppressed emotions in this Act—as we see with how Viktor interacted with Jayce when he woke up. As cathartic as that scene may have been, Vik was acting wildly out of character, and I sincerely think that was on purpose.
It's difficult to tell in this lighting but Vik's eyes are also their typical golden-amber in this shot. That would only make sense if this is symbolic of Viktor's true character being concealed by a false identity. It would make no sense to use Vik's amber eyes in a sequence meant to symbolize his new identity being concealed by the literal Machine Herald mask.
The final shot is not much different from the last one, but really drives home this comparison and the idea that the mask represents the Arcane, not Viktor's MH arc. The same mask is worn by numerous others, all slowly fading into view.
Tumblr media
These faceless people are the Church of the Gloriously Evolved, all represented by the same exact mask that Viktor is poised to take on.
And yet, the mask is never fully put onto Viktor's face, unlike Viktor's followers. He can still back away. He can still hesitate.
So what does this all mean for S2?
Tumblr media
It means that this ^ is not Viktor. This is a man either heavily under the influence of (or being fully controlled by) the Arcane.
And it also means that this trancelike state is not Viktor's endgame. I sincerely doubt this husk of who Viktor used to be will end up being the calculating antihero that is the Machine Herald.
Another point for the theory that Viktor's mental humanity will come back to him is the fact that Vik's in-game MH mask has golden eyes, mirroring Viktor's real eyes, not the lifeless—albeit shifting—gray of Viktor's current irises. Assuming Riot will be keeping this iconic part of Vik's design, that signals a change back from the emotionless puppet Viktor seems to be right now.
Tumblr media
But I suppose we'll know for certain by the finale.
419 notes · View notes
the-valiant-valkyrie · 1 year ago
Text
Tumblr media
Don't think I've posted these yet, but a while ago I made some ieytd palettes, and I decided to clean them up a little in preparation for the third game, you know how it is
Use them if you'd like, share them around, all of that fun stuff. And if you make something cool um 😳🥺 I would love to see if you decided to tag me in it 👉👈 tee hee
240 notes · View notes
mostlysignssomeportents · 2 years ago
Text
The AI hype bubble is the new crypto hype bubble
Tumblr media
Back in 2017 Long Island Iced Tea — known for its undistinguished, barely drinkable sugar-water — changed its name to “Long Blockchain Corp.” Its shares surged to a peak of 400% over their pre-announcement price. The company announced no specific integrations with any kind of blockchain, nor has it made any such integrations since.
If you’d like an essay-formatted version of this post to read or share, here’s a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/03/09/autocomplete-worshippers/#the-real-ai-was-the-corporations-that-we-fought-along-the-way
LBCC was subsequently delisted from NASDAQ after settling with the SEC over fraudulent investor statements. Today, the company trades over the counter and its market cap is $36m, down from $138m.
https://cointelegraph.com/news/textbook-case-of-crypto-hype-how-iced-tea-company-went-blockchain-and-failed-despite-a-289-percent-stock-rise
The most remarkable thing about this incredibly stupid story is that LBCC wasn’t the peak of the blockchain bubble — rather, it was the start of blockchain’s final pump-and-dump. By the standards of 2022’s blockchain grifters, LBCC was small potatoes, a mere $138m sugar-water grift.
They didn’t have any NFTs, no wash trades, no ICO. They didn’t have a Superbowl ad. They didn’t steal billions from mom-and-pop investors while proclaiming themselves to be “Effective Altruists.” They didn’t channel hundreds of millions to election campaigns through straw donations and other forms of campaign finance fraud. They didn’t even open a crypto-themed hamburger restaurant where you couldn’t buy hamburgers with crypto:
https://robbreport.com/food-drink/dining/bored-hungry-restaurant-no-cryptocurrency-1234694556/
They were amateurs. Their attempt to “make fetch happen” only succeeded for a brief instant. By contrast, the superpredators of the crypto bubble were able to make fetch happen over an improbably long timescale, deploying the most powerful reality distortion fields since Pets.com.
Anything that can’t go on forever will eventually stop. We’re told that trillions of dollars’ worth of crypto has been wiped out over the past year, but these losses are nowhere to be seen in the real economy — because the “wealth” that was wiped out by the crypto bubble’s bursting never existed in the first place.
Like any Ponzi scheme, crypto was a way to separate normies from their savings through the pretense that they were “investing” in a vast enterprise — but the only real money (“fiat” in cryptospeak) in the system was the hardscrabble retirement savings of working people, which the bubble’s energetic inflaters swapped for illiquid, worthless shitcoins.
We’ve stopped believing in the illusory billions. Sam Bankman-Fried is under house arrest. But the people who gave him money — and the nimbler Ponzi artists who evaded arrest — are looking for new scams to separate the marks from their money.
Take Morgan Stanley, which spent 2021 and 2022 hyping cryptocurrency as a massive growth opportunity:
https://cointelegraph.com/news/morgan-stanley-launches-cryptocurrency-research-team
Today, Morgan Stanley wants you to know that AI is a $6 trillion opportunity.
They’re not alone. The CEOs of Endeavor, Buzzfeed, Microsoft, Spotify, Youtube, Snap, Sports Illustrated, and CAA are all out there, pumping up the AI bubble with every hour that god sends, declaring that the future is AI.
https://www.hollywoodreporter.com/business/business-news/wall-street-ai-stock-price-1235343279/
Google and Bing are locked in an arms-race to see whose search engine can attain the speediest, most profound enshittification via chatbot, replacing links to web-pages with florid paragraphs composed by fully automated, supremely confident liars:
https://pluralistic.net/2023/02/16/tweedledumber/#easily-spooked
Blockchain was a solution in search of a problem. So is AI. Yes, Buzzfeed will be able to reduce its wage-bill by automating its personality quiz vertical, and Spotify’s “AI DJ” will produce slightly less terrible playlists (at least, to the extent that Spotify doesn’t put its thumb on the scales by inserting tracks into the playlists whose only fitness factor is that someone paid to boost them).
But even if you add all of this up, double it, square it, and add a billion dollar confidence interval, it still doesn’t add up to what Bank Of America analysts called “a defining moment — like the internet in the ’90s.” For one thing, the most exciting part of the “internet in the ‘90s” was that it had incredibly low barriers to entry and wasn’t dominated by large companies — indeed, it had them running scared.
The AI bubble, by contrast, is being inflated by massive incumbents, whose excitement boils down to “This will let the biggest companies get much, much bigger and the rest of you can go fuck yourselves.” Some revolution.
AI has all the hallmarks of a classic pump-and-dump, starting with terminology. AI isn’t “artificial” and it’s not “intelligent.” “Machine learning” doesn’t learn. On this week’s Trashfuture podcast, they made an excellent (and profane and hilarious) case that ChatGPT is best understood as a sophisticated form of autocomplete — not our new robot overlord.
https://open.spotify.com/episode/4NHKMZZNKi0w9mOhPYIL4T
We all know that autocomplete is a decidedly mixed blessing. Like all statistical inference tools, autocomplete is profoundly conservative — it wants you to do the same thing tomorrow as you did yesterday (that’s why “sophisticated” ad retargeting ads show you ads for shoes in response to your search for shoes). If the word you type after “hey” is usually “hon” then the next time you type “hey,” autocomplete will be ready to fill in your typical following word — even if this time you want to type “hey stop texting me you freak”:
https://blog.lareviewofbooks.org/provocations/neophobic-conservative-ai-overlords-want-everything-stay/
And when autocomplete encounters a new input — when you try to type something you’ve never typed before — it tries to get you to finish your sentence with the statistically median thing that everyone would type next, on average. Usually that produces something utterly bland, but sometimes the results can be hilarious. Back in 2018, I started to text our babysitter with “hey are you free to sit” only to have Android finish the sentence with “on my face” (not something I’d ever typed!):
https://mashable.com/article/android-predictive-text-sit-on-my-face
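As a rough illustration of what "the statistically median thing that everyone would type next" means in practice, here's a toy bigram predictor in Python (my own invented example, not how Android's keyboard or any real autocomplete is implemented): it counts which word most often follows each word in its history and always proposes that one, regardless of what you mean this time.

```python
from collections import Counter, defaultdict


def train_bigrams(corpus: str):
    """Count, for each word, how often each possible next word follows it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows


def suggest(follows, word: str):
    """Suggest the single most frequent follower: the bland, 'on average' choice."""
    counts = follows.get(word.lower())
    if not counts:
        return None  # never seen this word before, so no basis for a suggestion
    return counts.most_common(1)[0][0]


if __name__ == "__main__":
    # The model keeps proposing yesterday's word, whatever you meant today.
    history = "hey hon hey hon hey hon hey are you free tonight"
    follows = train_bigrams(history)
    print(suggest(follows, "hey"))  # -> 'hon'
```

Train it on a history full of "hey hon" and it will go on offering "hon" forever, which is exactly the babysitter problem above, just at a smaller scale than a large language model.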
Modern autocomplete can produce long passages of text in response to prompts, but it is every bit as unreliable as 2018 Android SMS autocomplete, as Alexander Hanff discovered when ChatGPT informed him that he was dead, even generating a plausible URL for a link to a nonexistent obit in The Guardian:
https://www.theregister.com/2023/03/02/chatgpt_considered_harmful/
Of course, the carnival barkers of the AI pump-and-dump insist that this is all a feature, not a bug. If autocomplete says stupid, wrong things with total confidence, that’s because “AI” is becoming more human, because humans also say stupid, wrong things with total confidence.
Exhibit A is the billionaire AI grifter Sam Altman, CEO of OpenAI — a company whose products are not open, nor are they artificial, nor are they intelligent. Altman celebrated the release of ChatGPT by tweeting “i am a stochastic parrot, and so r u.”
https://twitter.com/sama/status/1599471830255177728
This was a dig at the “stochastic parrots” paper, a comprehensive, measured roundup of criticisms of AI that led Google to fire Timnit Gebru, a respected AI researcher, for having the audacity to point out the Emperor’s New Clothes:
https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
Gebru’s co-author on the Parrots paper was Emily M Bender, a computational linguistics specialist at UW, who is one of the best-informed and most damning critics of AI hype. You can get a good sense of her position from Elizabeth Weil’s New York Magazine profile:
https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
Bender has made many important scholarly contributions to her field, but she is also famous for her rules of thumb, which caution her fellow scientists not to get high on their own supply:
Please do not conflate word form and meaning
Mind your own credulity
As Bender says, we’ve made “machines that can mindlessly generate text, but we haven’t learned how to stop imagining the mind behind it.” One potential tonic against this fallacy is to follow an Italian MP’s suggestion and replace “AI” with “SALAMI” (“Systematic Approaches to Learning Algorithms and Machine Inferences”). It’s a lot easier to keep a clear head when someone asks you, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”
Bender’s most famous contribution is the “stochastic parrot,” a construct that “just probabilistically spits out words.” AI bros like Altman love the stochastic parrot, and are hellbent on reducing human beings to stochastic parrots, which will allow them to declare that their chatbots have feature-parity with human beings.
At the same time, Altman and Co are strangely afraid of their creations. It’s possible that this is just a shuck: “I have made something so powerful that it could destroy humanity! Luckily, I am a wise steward of this thing, so it’s fine. But boy, it sure is powerful!”
They’ve been playing this game for a long time. People like Elon Musk (an investor in OpenAI, who is hoping to convince the EU Commission and FTC that he can fire all of Twitter’s human moderators and replace them with chatbots without violating EU law or the FTC’s consent decree) keep warning us that AI will destroy us unless we tame it.
There’s a lot of credulous repetition of these claims, and not just by AI’s boosters. AI critics are also prone to engaging in what Lee Vinsel calls criti-hype: criticizing something by repeating its boosters’ claims without interrogating them to see if they’re true:
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
There are better ways to respond to Elon Musk warning us that AIs will emulsify the planet and use human beings for food than to shout, “Look at how irresponsible this wizard is being! He made a Frankenstein’s Monster that will kill us all!” Like, we could point out that of all the things Elon Musk is profoundly wrong about, he is most wrong about the philosophical meaning of Wachowski movies:
https://www.theguardian.com/film/2020/may/18/lilly-wachowski-ivana-trump-elon-musk-twitter-red-pill-the-matrix-tweets
But even if we take the bros at their word when they proclaim themselves to be terrified of “existential risk” from AI, we can find better explanations by seeking out other phenomena that might be triggering their dread. As Charlie Stross points out, corporations are Slow AIs, autonomous artificial lifeforms that consistently do the wrong thing even when the people who nominally run them try to steer them in better directions:
https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future
Imagine the existential horror of an ultra-rich manbaby who nominally leads a company, but can’t get it to follow: “everyone thinks I’m in charge, but I’m actually being driven by the Slow AI, serving as its sock puppet on some days, its golem on others.”
Ted Chiang nailed this back in 2017 (the same year of the Long Island Blockchain Company):
There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.
https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway
Chiang is still writing some of the best critical work on “AI.” His February article in the New Yorker, “ChatGPT Is a Blurry JPEG of the Web,” was an instant classic:
[AI] hallucinations are compression artifacts, but — like the incorrect labels generated by the Xerox photocopier — they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world.
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
“AI” is practically purpose-built for inflating another hype-bubble, excelling as it does at producing party-tricks — plausible essays, weird images, voice impersonations. But as Princeton’s Matthew Salganik writes, there’s a world of difference between “cool” and “tool”:
https://freedom-to-tinker.com/2023/03/08/can-chatgpt-and-its-successors-go-from-cool-to-tool/
Nature can claim “conversational AI is a game-changer for science” but “there is a huge gap between writing funny instructions for removing food from home electronics and doing scientific research.” Salganik tried to get ChatGPT to help him with the most banal of scholarly tasks — aiding him in peer reviewing a colleague’s paper. The result? “ChatGPT didn’t help me do peer review at all; not one little bit.”
The criti-hype isn’t limited to ChatGPT, of course — there’s plenty of (justifiable) concern about image and voice generators and their impact on creative labor markets, but that concern is often expressed in ways that amplify the self-serving claims of the companies hoping to inflate the hype machine.
One of the best critical responses to the question of image- and voice-generators comes from Kirby Ferguson, whose final Everything Is a Remix video is a superb, visually stunning, brilliantly argued critique of these systems:
https://www.youtube.com/watch?v=rswxcDyotXA
One area where Ferguson shines is in thinking through the copyright question — is there any right to decide who can study the art you make? Except in some edge cases, these systems don’t store copies of the images they analyze, nor do they reproduce them:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
For creators, the important material question raised by these systems is economic, not creative: will our bosses use them to erode our wages? That is a very important question, and as far as our bosses are concerned, the answer is a resounding yes.
Markets value automation primarily because automation allows capitalists to pay workers less. The textile factory owners who purchased automatic looms weren’t interested in giving their workers raises and shortening working days. They wanted to fire their skilled workers and replace them with small children kidnapped out of orphanages and indentured for a decade, starved and beaten and forced to work, even after they were mangled by the machines. Fun fact: Oliver Twist was based on the bestselling memoir of Robert Blincoe, a child who survived his decade of forced labor:
https://www.gutenberg.org/files/59127/59127-h/59127-h.htm
Today, voice actors sitting down to record for games companies are forced to begin each session with “My name is ______ and I hereby grant irrevocable permission to train an AI with my voice and use it any way you see fit.”
https://www.vice.com/en/article/5d37za/voice-actors-sign-away-rights-to-artificial-intelligence
Let’s be clear here: there is — at present — no firmly established copyright over voiceprints. The “right” that voice actors are signing away as a non-negotiable condition of doing their jobs for giant, powerful monopolists doesn’t even exist. When a corporation makes a worker surrender this right, they are betting that this right will be created later in the name of “artists’ rights” — and that they will then be able to harvest this right and use it to fire the artists who fought so hard for it.
There are other approaches to this. We could support the US Copyright Office’s position that machine-generated works are not works of human creative authorship and are thus not eligible for copyright — so if corporations wanted to control their products, they’d have to hire humans to make them:
https://www.theverge.com/2022/2/21/22944335/us-copyright-office-reject-ai-generated-art-recent-entrance-to-paradise
Or we could create collective rights that belong to all artists and can’t be signed away to a corporation. That’s how the right to record other musicians’ songs works — and it’s why Taylor Swift was able to re-record the masters that were sold out from under her by evil private-equity bros:
https://doctorow.medium.com/united-we-stand-61e16ec707e2
Whatever we do as creative workers and as humans entitled to a decent life, we can’t afford to drink the Blockchain Iced Tea. That means that we have to be technically competent, to understand how the stochastic parrot works, and to make sure our criticism doesn’t just repeat the marketing copy of the latest pump-and-dump.
Today (Mar 9), you can catch me in person in Austin at the UT School of Design and Creative Technologies, and remotely at U Manitoba’s Ethics of Emerging Tech Lecture.
Tomorrow (Mar 10), Rebecca Giblin and I kick off the SXSW reading series.
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
[Image ID: A graph depicting the Gartner hype cycle. A pair of HAL 9000's glowing red eyes are chasing each other down the slope from the Peak of Inflated Expectations to join another one that is at rest in the Trough of Disillusionment. It, in turn, sits atop a vast cairn of HAL 9000 eyes that are piled in a rough pyramid that extends below the graph to a distance of several times its height.]
2K notes · View notes