# Model Group
Explore tagged Tumblr posts
angelofdumpsterfires · 1 month ago
Text
how i feel about all the changes in s3
6K notes · View notes
modelsgroup · 1 year ago
Text
model group, model club, global-jobs.com, https://www.global-jobs.com
1 note · View note
hisbodycorpse · 21 days ago
Text
The abandoned fire has gone out, its light extinguished.
IV. Birds in love sing.
Peonies.
194 notes · View notes
chlobody · 1 year ago
Text
Held back
2K notes · View notes
idliketobeatree · 3 months ago
Text
you know who is THE character ever? Niko Sasaki.
she's 16. she rents a single room above a butcher shop in Townwhoknowswhere, half a globe away from home. she hasn't spoken to her mum back in Tokyo in years. she has no friends. everyone in the gang would wrestle demons in knee-deep mud for her. nobody truly gets her. she's an outstanding listener. she stands up for her boundaries, maintaining an inquisitive spirit. she keeps the parasites who wanted to explode through every orifice in her body in a little mason jar; they are nothing but explicitly mean to her. she tries to reason with them every time. she shows the ghost boy from the 1900s Scooby Doo and bonds with him over mutual Kindness and Autism. being in her proximity makes people inherently better and more understanding. she arrives at the scene of the crime 45 minutes late banging pots and pans. she's a fashion colour-coding icon. she loves love. she's never kissed anyone. she's the wingwoman. the date she sets up for Jenny and Maxine ends in the serial killer woman getting her head split in half. she outtricks the Night Nurse with the power of excellent media comprehension trained on yaoi manga, and saves the day. she acquired a small bear charm before she died and now she will live forever. nobody, nobody does it like her
365 notes · View notes
mostlysignssomeportents · 5 months ago
Text
Neither the devil you know nor the devil you don’t
TONIGHT (June 21) I'm doing an ONLINE READING for the LOCUS AWARDS at 16hPT. On SATURDAY (June 22) I'll be in OAKLAND, CA for a panel (13hPT) and a keynote (18hPT) at the LOCUS AWARDS.
Spotify's relationship to artists can be kind of confusing. On the one hand, they pay a laughably low per-stream rate, as in homeopathic residues of a penny. On the other hand, the Big Three labels get a fortune from Spotify. And on the other other hand, it makes sense that the rate for a stream heard by one person should be less than the rate for a song broadcast to thousands or millions of listeners.
But the whole thing makes sense once you understand the corporate history of Spotify. There's a whole chapter about this in Rebecca Giblin's and my 2022 book, Chokepoint Capitalism; we even made the audio for it a "Spotify exclusive" (it's the only part of the audiobook you can hear on Spotify, natch):
https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing
Unlike online music predecessors like Napster, Spotify sought licenses from the labels for the music it made available. This gave those labels a lot of power over Spotify, but not all the labels, just three of them. Universal, Warner and Sony, the Big Three, control more than 70% of all music recordings, and more than 60% of all music compositions. These three companies are remarkably inbred. Their execs routinely hop from one to the other, and they regularly cross-license samples and other rights to each other.
The Big Three told Spotify that the price of licensing their catalogs would be high. First of all, Spotify had to give significant ownership stakes to all three labels. This put the labels in an unresolvable conflict of interest: as owners of Spotify, it was in their interests for licensing payments for music to be as low as possible. But as labels representing creative workers – musicians – it was in their interests for these payments to be as high as possible.
As it turns out, it wasn't hard to resolve that conflict after all. You see, the money the Big Three got in the form of dividends, stock sales, etc was theirs to spend as they saw fit. They could share some, all, or none of it with musicians. But the Big Three's contracts with musicians gave those workers a guaranteed share of Spotify's licensing payments.
Accordingly, the Big Three demanded those rock-bottom per-stream rates that Spotify is notorious for. Yeah, it's true that a streaming per-listener payment should be lower than a radio per-play payment (which reaches thousands or millions of listeners), but even accounting for that, the math doesn't add up. Multiply the per-listener stream rate by the number of listeners for, say, a typical satellite radio broadcast, and Spotify is clearly getting a massive discount relative to other services that didn't make the Big Three into co-owners when they were kicking off.
But there's still something awry: the Big Three take in gigantic fortunes from Spotify in licensing payments. How can the per-stream rate be so low but the licensing payments be so large? And why are artists seeing so little?
Again, it's not hard to understand once you see the structure of Spotify's deal with the Big Three. The Big Three are each guaranteed a monthly minimum payment, irrespective of the number of Spotify streams from their catalog that month. So Sony might be guaranteed, say, $30m a month from Spotify, but the ultra-low per-stream rate Sony insisted on means that all the Sony streams in a typical month add up to $10m. That means that Sony still gets $30m from Spotify, but only $10m is "attributable" to a specific recording artist who can make a claim on it. The rest of the money is Sony's to play with: they can spread it around all their artists, some of their artists, or none of their artists. They can spend it on "artist development" (which might mean sending top execs on luxury junkets to big music festivals). It's theirs. The lower the per-stream rate is, the more of that minimum monthly payment is unattributable, meaning that Sony can line its pockets with it.
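The arithmetic in that paragraph can be sketched in a few lines. The $30m/$10m split is the post's own illustration; the per-stream rate and stream count below are made-up numbers chosen only to reproduce those figures:

```python
def split_monthly_payment(minimum_usd, per_stream_usd, streams):
    """Split a label's guaranteed monthly payment into the part
    attributable to specific artists' streams and the unattributable
    remainder the label keeps to spend as it sees fit."""
    attributable = per_stream_usd * streams
    unattributable = max(minimum_usd - attributable, 0.0)
    return attributable, unattributable

# Illustrative figures from the post: a $30m monthly minimum against
# roughly $10m of attributable streams.
attributable, unattributable = split_monthly_payment(
    30_000_000, 0.004, 2_500_000_000)
print(round(attributable))    # portion artists can make a claim on
print(round(unattributable))  # portion the label pockets outright
```

Note that lowering `per_stream_usd` while the minimum stays fixed shifts money from the attributable bucket to the unattributable one, which is exactly the incentive the post describes.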
But these monthly minimums are just part of the goodies that the Big Three negotiated for themselves when they were designing Spotify. They also get free promo, advertising, and inclusion on Spotify's top playlists. Best (worst!) of all, the Big Three have "most favored nation" status, which means that every other label – the indies that rep the 30% of music not controlled by the Big Three – have to eat shit and take the ultra-low per-stream rate. Only those indies don't get billions in stock, they don't get monthly minimum guarantees, and they have to pay for promo, advertising, and inclusion on hot playlists.
When you understand the business mechanics of Spotify, all the contradictions resolve themselves. It is simultaneously true that Spotify pays a very low per-stream rate, that it pays the Big Three labels gigantic sums every month, and that artists are grotesquely underpaid by this system.
There are many lessons to take from this little scam, but for me, the top takeaway here is that artists are the class enemies of both Big Tech and Big Content. The Napster Wars demanded that artists ally themselves with either the tech sector or the entertainment sector, nominating one or the other to be their champion.
But for a creative worker, it doesn't matter who makes a meal out of you, tech or content – all that matters is that you're being devoured.
This brings me to the debate over AI training and copyright. A lot of creative workers are justifiably angry and afraid that the AI companies want to destroy creative jobs. The CTO of OpenAI literally just said as much onstage: "Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place":
https://bgr.com/tech/openai-cto-thinks-ai-will-kill-some-jobs-that-shouldnt-have-existed-in-the-first-place/
Many of these workers are accordingly cheering on the entertainment industry's lawsuits over AI training. In these lawsuits, companies like the New York Times and Getty Images claim that the steps associated with training an AI model infringe copyright. This isn't a great copyright theory based on current copyright precedents, and if the suits succeed, they'll narrow fair use in ways that will impact all kinds of socially beneficial activities, like scraping the web to make the Internet Archive's Wayback Machine:
https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
But you can't make an omelet without breaking eggs, right? For some creative workers, legal uncertainty for computational linguists, search engines, and archiving projects is a small price to pay if it means keeping AI from destroying their livelihoods.
Here's the problem: establishing that AI training requires a copyright license will not stop AI from being used to erode the wages and working conditions of creative workers. The companies suing over AI training are also notorious exploiters of creative workers, union-busters and wage-stealers. They don't want to get rid of generative AI, they just want to get paid for the content used to create it. Their use-case for gen AI is the same as OpenAI's CTO's use-case: get rid of creative jobs and pay less for creative labor.
This isn't hypothetical. Remember last summer's actor strike? The sticking point was that the studios wanted to pay actors a single fee to scan their bodies and faces, and then use those scans instead of hiring those actors, forever, without ever paying them again. Does it matter to an actor whether the AI that replaces you at Warner, Sony, Universal, Disney or Paramount (yes, three of the Big Five studios are also the Big Three labels!) was made by OpenAI without paying the studios for the training material, or whether OpenAI paid a license fee that the studios kept?
This is true across the board. The Big Five publishers categorically refuse to include contractual language promising not to train an LLM with the books they acquire from writers. The game studios require all their voice actors to start every recording session with an on-tape assignment of the training rights to the session:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
And now, with total predictability, Universal – the largest music company in the world – has announced that it will start training voice-clones with the music in its catalog:
https://www.rollingstone.com/music/music-news/umg-startsai-voice-clone-partnership-with-soundlabs-1235041808/
This comes hot on the heels of a massive blow-up between Universal and Tiktok, in which Universal professed its outrage that Tiktok was going to train voice-clones with the music Universal licensed to it. In other words: Universal's copyright claims over AI training cash out to this: "If anyone is going to profit from immiserating musicians, it's going to be us, not Tiktok."
I understand why Universal would like this idea. I just don't understand why any musician would root for Universal to defeat Tiktok, or Getty Images to trounce Stable Diffusion. Do you really think that Getty Images likes paying photographers and wants to give them a single penny more than they absolutely have to?
As we learned from George Orwell's avant-garde animated agricultural documentary Animal Farm, the problem isn't who holds the whip, the problem is the whip itself:
The creatures outside looked from pig to man, and from man to pig, and from pig to man again; but already it was impossible to say which was which.
Entertainment execs and tech execs alike are obsessed with AI because they view the future of "content" as fundamentally passive. Here's Ryan Broderick putting it better than I ever could:
At a certain audience size, you just assume those people are locked in and will consume anything you throw at them. Then it just becomes a game of lowering your production costs and increasing your prices to increase your margins. This is why executives love AI and why the average American can’t afford to eat at McDonald’s anymore.
https://www.garbageday.email/p/ceo-passive-content-obsession
Here's a rule of thumb for tech policy prescriptions. Any time you find yourself, as a worker, rooting for the same policy as your boss, you should check and make sure you're on the right side of history. The fact that creative bosses are so obsessed with making copyright cover more kinds of works, restrict more activities, last longer and generate higher damages should make creative workers look askance at these proposals.
After 40 years of expanded copyright, we have a creative industry that's larger and more profitable than ever, and yet the share of income going to creative workers has been in steady decline over that entire period. Every year, the share of creative income that creative workers can lay claim to declines, both proportionally and in real terms.
As with the mystery of Spotify's payments, this isn't a mystery at all. You just need to understand that when creators are stuck bargaining with a tiny, powerful cartel of movie, TV, music, publishing, streaming, games or app companies, it doesn't matter how much copyright they have to bargain with. Giving a creative worker more copyright is like giving a bullied schoolkid more lunch-money. There's no amount of money that will satisfy the bullies and leave enough left over for the kid to buy lunch. They just take everything.
Telling creative workers that they can solve their declining wages with more copyright is a denial that creative workers are workers at all. It treats us as entrepreneurial small businesses, LLCs with MFAs negotiating B2B with other companies. That's how we lose.
On the other hand, if we address the problems of AI and labor as workers, and insist on labor rights – like the Writers Guild did when it struck last summer – then we ally ourselves with every other worker whose wages and working conditions are being attacked with AI:
https://pluralistic.net/2023/10/01/how-the-writers-guild-sunk-ais-ship/
Our path to better working conditions lies through organizing and striking, not through helping our bosses sue other giant multinational corporations for the right to bleed us out.
The US Copyright Office has repeatedly stated that AI-generated works don't qualify for copyright, meaning anything an AI generates can be freely copied and distributed, and the companies that make these models can't stop anyone. This is fantastic news, because the only thing our bosses hate more than paying us is not being able to stop other people from copying the things we make for them. We should be shouting this from the rooftops, not demanding more copyright for AI.
Here's a thing: FTC chair Lina Khan recently told an audience that she was thinking of using her Section 5 powers (to regulate "unfair and deceptive" conduct) to go after AI training:
https://www.youtube.com/watch?v=3mh8Z5pcJpg
Khan has already used these Section 5 powers to secure labor rights, for example, by banning noncompetes:
https://pluralistic.net/2024/04/25/capri-v-tapestry/#aiming-at-dollars-not-men
Creative workers should be banding together with other labor advocates to propose ways for the FTC to prevent all AI-based labor exploitation, like the "reverse-centaur" arrangement in which a human serves as an AI's body, working at breakneck pace until they are psychologically and physically ruined:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
As workers standing with other workers, we can demand the things that help us, even (especially) when that means less for our bosses. On the other hand, if we confine ourselves to backing our bosses' plays, we only stand to gain whatever crumbs they choose to drop at their feet for us.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/06/21/off-the-menu/#universally-loathed
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
305 notes · View notes
hottiesbooted · 2 months ago
Text
Dancer & Model: Tag Garwood (@taggarwood_)
Black Leather Jacket: Bershka
Black Leather On Knee Boots By Luxe To Kill.
158 notes · View notes
ddeck · 5 months ago
Text
you know. just like with specific terms and nicknames like clanker or shinie, clones must've come up with unique meanings for their armor paint. like with different meanings assigned to colors of mandalorian armor except since the choice of color is out of their control, all the importance lies in shapes and placement
215 notes · View notes
sergle · 5 months ago
Text
I'm listening to a lot of Maintenance Phase (bc I love it) and this comes up sometimes, so I'll just be sat here thinking about how common it is for little kids to grow up watching their moms and other women in their life jump from diet to diet. Just as ambient background noise in your childhood, the adults around you obsess over calories aloud, express guilt over eating enjoyable food, frame exercise as a form of punishment for eating, and so on.
186 notes · View notes
miumiu-s · 4 months ago
Text
YAAAAS & SLAY & TWINK
149 notes · View notes
alchemicwl · 7 days ago
Text
FLO icons.
stream Access All Areas for a happy life 🗝️
55 notes · View notes
7goodangel · 6 months ago
Text
Me, several years ago: "I'll never try digital 3D art... just have no interest to learn it. Already have learned other art forms that I barely use anyway... so why add to that ?..."
Me, currently: [Is attempting to model a donut in Blender] "... uh... I can explain..."
144 notes · View notes
fisheito · 4 months ago
Text
found a baby yaku amidst the Sketchbook-glitch-corruption wreckage..... wondering if he flipped skin tones between black and red and everything in between until he saw his to-be-grandparents (and started mimicking THEIR skin tone....... )
#thinking about yakumo having weird lil homunculus proportions or other such variations#what if he just always had massive hands compared to body size. yaoi hands from birth-transformation#he was so anti-snake that he looked at hands and said YES. THIS IS THE LEAST SNAKEY I CAN BE. I WILL GO 600% ON THIS FEATURE SPECIFICALLY#changing forms from entirely obsidian... or red in patches.... or striped... or other combinations...#because he only had murals to base his human form off of? at least at first?#were the murals in colour? shaded with gradients and lighting oh so conveniently?#then how was he to know what skin tone humans are supposed to have???#imagining the first few times he encountered his grandparents in his cave#maybe they only saw a shadow with eyes darting back into the darkness#just a really long black noodle with semisnake semihuman eyes (just a hint of sclera)#and every time they visited#yakumo observed more of their features#and took on something similar to their proportions...? or hair colour? or skin colour?#and maybe even when he's first adopted into the family and leaves the cave#he's still a vibrant pink and everyone thinks he somehow got sunburnt inside a cave or smth#but then he starts seeing all the other people in the village#including diff age groups and kids who are supposedly around his age#so he starts to slowly morph his body toward those characteristics#his skin gets beige-r. reshapes his eyes a bit.... grows a bit of nose.....lengthens his limbs a bit...#(the big humans seem to treat me the same as that speCIFIC group of smaller humans... so maybe i should use them as a Model)#like... how do you even age in a human body when you have no reference for how humans age?!??!#did yakumo stare at several children in the village and watch their growth year by year#and match his body to their changes just to fit in?#did nature just know what to do?? and he just naturally grew like a human without manual manipulation?#I DEMAND ANSWERS#nu carnival yakumo
102 notes · View notes
mostlysignssomeportents · 1 year ago
Text
The real AI fight
Tonight (November 27), I'm appearing at the Toronto Metro Reference Library with Facebook whistleblower Frances Haugen.
On November 29, I'm at NYC's Strand Books with my novel The Lost Cause, a solarpunk tale of hope and danger that Rebecca Solnit called "completely delightful."
Last week's spectacular OpenAI soap-opera hijacked the attention of millions of normal, productive people and nonconsensually crammed them full of the fine details of the debate between "Effective Altruism" (doomers) and "Effective Accelerationism" (AKA e/acc), a genuinely absurd debate that was allegedly at the center of the drama.
Very broadly speaking: the Effective Altruists are doomers, who believe that Large Language Models (AKA "spicy autocomplete") will someday become so advanced that they could wake up and annihilate or enslave the human race. To prevent this, we need to employ "AI Safety" – measures that will turn superintelligence into a servant or a partner, not an adversary.
Contrast this with the Effective Accelerationists, who also believe that LLMs will someday become superintelligences with the potential to annihilate or enslave humanity – but they nevertheless advocate for faster AI development, with fewer "safety" measures, in order to produce an "upward spiral" in the "techno-capital machine."
Once-and-future OpenAI CEO Altman is said to be an accelerationist who was forced out of the company by the Altruists, who were subsequently bested, ousted, and replaced by Larry fucking Summers. This, we're told, is the ideological battle over AI: should we cautiously progress our LLMs into superintelligences with safety in mind, or go full speed ahead and trust to market forces to tame and harness the superintelligences to come?
This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive:
https://locusmag.com/2020/07/cory-doctorow-full-employment/
As Molly White writes, this isn't much of a debate. The "two sides" of this debate are as similar as Tweedledee and Tweedledum. Yes, they're arrayed against each other in battle, so furious with each other that they're tearing their hair out. But for people who don't take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief. The fact that they've split into warring factions on its particulars is less important than their unified belief in the certain coming of the paperclip-maximizing apocalypse:
https://newsletter.mollywhite.net/p/effective-obfuscation
White points out that there's another, much more distinct side in this AI debate – as different and distant from Dee and Dum as a Beamish Boy and a Jabberwock. This is the side of AI Ethics – the side that worries about "today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others." As White says, shifting the debate to existential risk from a future, hypothetical superintelligence "is incredibly convenient for the powerful individuals and companies who stand to profit from AI."
After all, both sides plan to make money selling AI tools to corporations, whose track record in deploying algorithmic "decision support" systems and other AI-based automation is pretty poor – like the claims-evaluation engine that Cigna uses to deny insurance claims:
https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims
On a graph that plots the various positions on AI, the two groups of weirdos who disagree about how to create the inevitable superintelligence are effectively standing on the same spot, and the people who worry about the actual way that AI harms actual people right now are about a million miles away from that spot.
There's that old programmer joke, "There are 10 kinds of people, those who understand binary and those who don't." But of course, that joke could just as well be, "There are 10 kinds of people, those who understand ternary, those who understand binary, and those who don't understand either":
https://pluralistic.net/2021/12/11/the-ten-types-of-people/
What's more, the joke could be, "there are 10 kinds of people, those who understand hexadecenary, those who understand pentadecenary, those who understand tetradecenary [and so on] those who understand ternary, those who understand binary, and those who don't." That is to say, a "polarized" debate often has people who hold positions so far from the ones everyone is talking about that those belligerents' concerns are basically indistinguishable from one another.
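The joke scales to any base because the numeral "10" always denotes the base itself (1×b + 0). Python's built-in `int` accepts an explicit base, which makes this a one-loop check:

```python
# "10" read in base b is 1*b + 0 = b, for every base Python supports.
for base in range(2, 17):
    assert int("10", base) == base

print(int("10", 2), int("10", 3), int("10", 16))  # 2 3 16
```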
The act of identifying these distant positions is a radical opening up of possibilities. Take the indigenous philosopher chief Red Jacket's response to the Christian missionaries who sought permission to proselytize to Red Jacket's people:
https://historymatters.gmu.edu/d/5790/
Red Jacket's whole rebuttal is a superb dunk, but it gets especially interesting where he points to the sectarian differences among Christians as evidence against the missionary's claim to having a single true faith, and in favor of the idea that his own people's traditional faith could be co-equal among Christian doctrines.
The split that White identifies isn't a split about whether AI tools can be useful. Plenty of us AI skeptics are happy to stipulate that there are good uses for AI. For example, I'm 100% in favor of the Human Rights Data Analysis Group using an LLM to classify and extract information from the Innocence Project New Orleans' wrongful conviction case files:
https://hrdag.org/tech-notes/large-language-models-IPNO.html
Automating "extracting officer information from documents – specifically, the officer's name and the role the officer played in the wrongful conviction" was a key step to freeing innocent people from prison, and an LLM allowed HRDAG – a tiny, cash-strapped, excellent nonprofit – to make a giant leap forward in a vital project. I'm a donor to HRDAG and you should donate to them too:
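The post doesn't describe HRDAG's actual pipeline, but the general shape of the task – prompting a model for a tightly constrained answer, then parsing it into structured records – can be sketched as below. The `NAME: ... | ROLE: ...` line format and the field names are hypothetical, for illustration only:

```python
import re

# Hypothetical output format we might ask an LLM to emit for each case
# file: one "NAME: ... | ROLE: ..." line per officer mentioned.
LINE = re.compile(r"NAME:\s*(.+?)\s*\|\s*ROLE:\s*(.+)")

def parse_officer_extraction(llm_output: str):
    """Parse a model's constrained answer into structured records,
    skipping any line that doesn't match the expected format.
    (This schema is an assumption, not HRDAG's actual one.)"""
    records = []
    for line in llm_output.splitlines():
        m = LINE.match(line.strip())
        if m:
            records.append({"officer": m.group(1), "role": m.group(2)})
    return records

sample = ("NAME: J. Doe | ROLE: arresting officer\n"
          "NAME: R. Roe | ROLE: testified at trial")
print(parse_officer_extraction(sample))
```

Keeping the model's job narrow (emit lines in a fixed format) and validating every line before it enters the dataset is part of what makes this kind of automation auditable rather than a black box.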
https://hrdag.networkforgood.com/
Good data-analysis is key to addressing many of our thorniest, most pressing problems. As Ben Goldacre recounts in his inaugural Oxford lecture, it is both possible and desirable to build ethical, privacy-preserving systems for analyzing the most sensitive personal data (NHS patient records) that yield scores of solid, ground-breaking medical and scientific insights:
https://www.youtube.com/watch?v=_-eaV8SWdjQ
The difference between this kind of work – HRDAG's exoneration work and Goldacre's medical research – and the approach that OpenAI and its competitors take boils down to how they treat humans. The former treats all humans as worthy of respect and consideration. The latter treats humans as instruments – for profit in the short term, and for creating a hypothetical superintelligence in the (very) long term.
As Terry Pratchett's Granny Weatherwax reminds us, this is the root of all sin: "sin is when you treat people like things":
https://brer-powerofbabel.blogspot.com/2009/02/granny-weatherwax-on-sin-favorite.html
So much of the criticism of AI misses this distinction – instead, this criticism starts by accepting the self-serving marketing claim of the "AI safety" crowd – that their software is on the verge of becoming self-aware, and is thus valuable, a good investment, and a good product to purchase. This is Lee Vinsel's "Criti-Hype": "taking press releases from startups and covering them with hellscapes":
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
Criti-hype and AI were made for each other. Emily M Bender is a tireless cataloger of criti-hypeists, like the newspaper reporters who breathlessly repeat "completely unsubstantiated claims (marketing)…sourced to Altman":
https://dair-community.social/@emilymbender/111464030855880383
Bender, like White, is at pains to point out that the real debate isn't doomers vs accelerationists. That's just "billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading – and philosophers and others feeling important by dressing these same silly ideas up in fancy words":
https://dair-community.social/@emilymbender/111464024432217299
All of this is just a distraction from real and important scientific questions about how (and whether) to make automation tools that steer clear of Granny Weatherwax's sin of "treating people like things." Bender – a computational linguist – isn't a reactionary who hates automation for its own sake. On Mystery AI Hype Theater 3000 – the excellent podcast she co-hosts with Alex Hanna – every episode comes with a machine-generated transcript:
https://www.buzzsprout.com/2126417
There is a serious, meaty debate to be had about the costs and possibilities of different forms of automation. But the superintelligence true-believers and their criti-hyping critics keep dragging us away from these important questions and into fanciful and pointless discussions of whether and how to appease the godlike computers we will create when we disassemble the solar system and turn it into computronium.
The question of machine intelligence isn't intrinsically unserious. As a materialist, I believe that whatever makes me "me" is the result of the physics and chemistry of processes inside and around my body. My disbelief in the existence of a soul means that I'm prepared to think that it might be possible for something made by humans to replicate something like whatever process makes me "me."
Ironically, the AI doomers and accelerationists claim that they, too, are materialists – and that's why they're so consumed with the idea of machine superintelligence. But it's precisely because I'm a materialist that I understand these hypotheticals about self-aware software are less important and less urgent than the material lives of people today.
It's because I'm a materialist that my primary concerns about AI are things like the climate impact of AI data-centers and the human impact of biased, opaque, incompetent and unfit algorithmic systems – not science fiction-inspired, self-induced panics over the human race being enslaved by our robot overlords.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
289 notes · View notes
777jjk · 11 months ago
Text
2014 𝙫𝙞𝙗𝙚𝙨 ✨🎀
206 notes · View notes
socratezeuxis · 4 months ago
Photo
(via 七菜乃@🇯🇵にいます。 (@nananano_2021) / X)
76 notes · View notes