#business use cases for generative AI
Text
How Can Generative AI Enhance Your Business's Organisational and Operational Processes?
Experience the future of business transformation with Generative AI. From customer support to content creation, sales optimization, and HR process automation, this cutting-edge technology is your key to enhanced efficiency and innovation. Learn how Generative AI can automate and elevate your operations while delivering personalized experiences and data-driven insights.
At Webclues Infotech, we're your partners in harnessing the true potential of Generative AI. Join us in this technological revolution and take your business to new heights in a fast-paced world. Read the blog for more insights.
#generative AI for customer support#Generative AI for sales#generative AI for marketing#Generating AI for HR#generative AI services#Generative AI chatbots#generative ai in business#business use cases for generative AI
0 notes
Text
I have been on a Willy Wonkified journey today and I need y'all to come with me
It started so innocently. Scrolling Google News I come across this article on Ars Technica:
At first glance I thought what happened was parents saw AI-generated images of an event their kids were at and became concerned, then realized it was fake. The reality? Oh so much better.
On Saturday, event organizers shut down a Glasgow-based "Willy's Chocolate Experience" after customers complained that the unofficial Wonka-inspired event, which took place in a sparsely decorated venue, did not match the lush AI-generated images listed on its official website.... According to Sky News, police were called to the event, and "advice was given."
Thing is, the people who paid to go were obviously not expecting exactly this:
But I can see how they'd be a bit pissed upon arriving to this:
It gets worse.
"Tempest, how could it possibly--"
source of this video that also includes this charming description:
Made up a villain called The Unknown — 'an evil chocolate maker who lives in the walls'
There is already a meme.
Oh yes, the Wish.com Oompa Loompa:
Who has already done an interview!
As bad (and hilarious) as this all is, I got curious about the company that put on this event. Did they somehow overreach? Did the actors they hired back out at the last minute? (Or after they saw the script...) Oddly enough, it doesn't seem so!
Given what I found when poking around I'm legit surprised there was an event at all. Cuz this outfit seems to be 100% a scam.
The website for this specific event is here and it has many AI generated images on it, as stated. I don't think anyone who bought tickets looked very closely at these images, otherwise they might have been concerned about how much Catgacating their children would be exposed to.
Yes, Catgacating. You know, CATgacating!
I personally don't think anyone should serve exarserdray flavored lollipops in public spaces given how many people are allergic to it. And the sweet teats might not have been age appropriate.
Though the Twilight Tunnel looks pretty cool:
I'm not sure that Dim Tight Twdrding is safe. I've also been warned that Vivue Sounds are in that weird frequency range that makes you poop your pants upon hearing them.
Yes, Virginia, these folks used an AI image generator for everything on the website and used Chat GPT for some of the text! From the FAQ:
Q: I cannot go on the available days. Will you have more dates in the future? A: Should there be capacity when you arrive, then you will be able to enter without any problems. In the event that this is not the case, we may ask you to wait a bit.
Fear not, for this question is asked again a few lines down and the answer makes more sense.
Curious about the events company behind this disaster, I took myself over to the homepage of House of Illuminati and I was not disappointed.
I would 100% trust these people to plan my wedding.
This abomination of a website is a badly edited WordPress blog filled with AI art and just enough blog posts to make the casual viewer think that it's a legit business for about 0.0004 seconds.
Their attention to detail is stunning, from how they left up the default first post every WP blog gets to how they didn't bother changing the name on several images, thus revealing where they came from. Like this one:
With the lovely and compact filename "DALL·E-2024-01-30-09.50.54-Imagine-a-scene-where-fantasy-and-reality-merge-seamlessly.-In-the-foreground-a-grand-interactive-gala-is-taking-place-filled-with-elegant-guests-i.png"
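That filename is doing a lot of work: DALL·E's web downloads embed a timestamp and a truncated copy of the prompt right in the name, which is exactly how this image gives the game away. As a sketch of how little forensics this takes, here's a parser for the naming pattern shown above (the pattern is inferred from this one filename, not an officially documented format):

```python
import re

def parse_dalle_filename(filename):
    """Split a 'DALL·E-<date>-<time>-<prompt>.png' style filename
    into its date, time, and the prompt text baked into the name."""
    m = re.match(
        r"DALL·E-(\d{4}-\d{2}-\d{2})-(\d{2}\.\d{2}\.\d{2})-(.+)\.png$",
        filename,
    )
    if not m:
        return None
    date, time, prompt = m.groups()
    # Hyphens stand in for the spaces of the original prompt.
    return date, time.replace(".", ":"), prompt.replace("-", " ")

name = ("DALL·E-2024-01-30-09.50.54-Imagine-a-scene-where-fantasy-and-reality-"
        "merge-seamlessly.-In-the-foreground-a-grand-interactive-gala-is-taking-"
        "place-filled-with-elegant-guests-i.png")
print(parse_dalle_filename(name)[0])  # prints 2024-01-30
```

The recovered prompt even keeps its mid-word truncation ("elegant guests i"), because DALL·E chops the name at a fixed length.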
"Concept.png" came from the same AI generator that gets text almost, but not quiiiiiite right:
There are a suspicious number of .webp images in the uploads, which makes me think they either stole them from other sites where AI "art" was uploaded or they didn't want to pay for the hi-res versions of some and just grabbed the preview image.
The real fun came when I noticed this filename: Before-and-After-Eventologists-Transformation-Edgbaston-Cricket-Ground-1024x1024-1.jpg and decided to do a Google image search. Friends, you will be shocked to hear that the image in question, found on this post touting how they can transform a boring warehouse into a fun event space, was stolen from this actual event planner.
Even better, this weirdly grainy image?
From a post that claims to be about the preparations for a "Willy Wonka" experience (we'll get to this in a minute), is not only NOT an actual image of anyone preparing anything for Illuminati's event, it is stolen from a YouTube thumbnail that's been chopped to remove the name of the company that actually made this. Here's the video.
If you actually read the blog posts they're all copypasta or some AI generated crap. To the point where this seems like not a real business at all. There's very specific business information at the bottom, but nothing else seems real.
As I said, I'm kinda surprised they put on an event at all. This has, "And then they ran off with all our money!" written all over it. I'm perplexed.
And also wondering when the copyright lawyers are gonna start calling, because...
This post explicitly says they're putting together a "Willy Wonka’s Chocolate Factory Experience" complete with golden tickets.
Somewhere along the line someone must have wised up, because the actual event was called "Willys Chocolate Experience" (note the lack of apostrophe) and the script they handed to the actors about 10 minutes before they were supposed to "perform" was about a "Willy McDuff" and his chocolate factory.
As I was going through this madness with friends in a chat, one pointed out that it took very little prompting to get the free Chat GPT to spit out an event description and such very similar to all this while avoiding copyrighted phrases. But he couldn't figure out where the McDuff came from since it wasn't the type of thing GPT would usually spit out...
Until he altered the prompt to include it would be happening in Glasgow, Scotland.
You cannot make this stuff up.
But truly, honestly, I do not even understand why they didn't take the money and run. Clearly this was all set up to be a scam. A lazy, AI generated scam.
Everything from the website to the event images to the copy to the "script" to the names of things was either stolen or AI generated (aka stolen). Hell, I'd be looking for some poor Japanese visitor wandering the streets of Glasgow, confused, after being jacked for his mascot costume.
HE LIVES IN THE WALLS, Y'ALL.
#long post#Willy Wonka#Wonka#Willy Wonka Experience#Willy Wonka Experience disaster#Willy's Chocolate Experience#Willys Chocolate Experience#THE UNKNOWN#Wish.com Oompa Loompa#House of Illuminati#AI#ai generated
8K notes
Text
With Agoge AI, users can expect a tailored approach to their communication needs. From perfecting negotiation techniques to enhancing presentation skills, this AI tool provides training to excel in business communication. 🗣️👥
#ai business#ai update#ai#ai community#ai developers#ai tools#ai discussion#ai development#ai generated#ai marketing#ai engineer#ai expert#ai revolution#ai technology#ai use cases#ai in digital marketing#ai integration#ai in ecommerce
0 notes
Text
“Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed
I'm touring my new, nationally bestselling novel The Bezzle! Catch me SATURDAY (Apr 27) in MARIN COUNTY, then Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!
If AI has a future (a big if), it will have to be economically viable. An industry can't spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:
https://news.ycombinator.com/item?id=39883571
A company that pays 0.36-1 cents/query for electricity and (scarce, fresh) water can't indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of "instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible":
https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn't optional – investor disillusionment is an inevitable part of every bubble).
Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable – errors ("hallucinations"). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don't care about the odd extra finger. If the chatbot powering a tourist's automatic text-to-translation-to-speech phone tool gets a few words wrong, it's still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.
There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company's perspective – is that these aren't just low-stakes, they're also low-value. Their users would pay something for them, but not very much.
For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.
Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada's chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada's internal review mechanisms before fighting his case for weeks more at the regulator:
https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454
There's never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn't have to pay them back. Air Canada is tacitly asserting that, as the country's flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it's too big to care.
Air Canada shows that for some business customers, AI doesn't need to be able to do a worker's job in order to be a smart purchase: a chatbot can replace a worker, fail to do that worker's job, and still save the company money on balance.
I can't predict whether the world's sociopathic monopolists are numerous and powerful enough to keep the lights on for AI companies through leases for automation systems that let them commit consequence-free fraud by replacing workers with chatbots that serve as moral crumple-zones for furious customers:
https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029
But even stipulating that this is sufficient, it's intrinsically unstable. Anything that can't go on forever eventually stops, and the mass replacement of humans with high-speed fraud software seems likely to stoke the already blazing furnace of modern antitrust:
https://www.eff.org/de/deeplinks/2021/08/party-its-1979-og-antitrust-back-baby
Of course, the AI companies have their own answer to this conundrum. A high-stakes/high-value customer can still fire workers and replace them with AI – they just need to hire fewer, cheaper workers to supervise the AI and monitor it for "hallucinations." This is called the "human in the loop" solution.
The human in the loop story has some glaring holes. From a worker's perspective, serving as the human in the loop in a scheme that cuts wage bills through AI is a nightmare – the worst possible kind of automation.
Let's pause for a little detour through automation theory here. Automation can augment a worker. We can call this a "centaur" – the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They're a human head on a robot body (hence "centaur"). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You're in charge, but you're getting a second opinion from the robot.
Likewise, consider an AI tool that double-checks a radiologist's diagnosis of your chest X-ray and suggests a second look when its assessment doesn't match the radiologist's. Again, the human is in charge, but the robot is serving as a backstop and helpmeet, using its inexhaustible robotic vigilance to augment human skill.
That's centaurs. They're the good automation. Then there's the bad automation: the reverse-centaur, when the human is used to augment the robot.
Amazon warehouse pickers stand in one place while robotic shelving units trundle up to them at speed; then, the haptic bracelets shackled around their wrists buzz at them, directing them to pick up specific items and move them to a basket, while a third automation system penalizes them for taking toilet breaks or even just walking around and shaking out their limbs to avoid a repetitive strain injury. This is a robotic head using a human body – and destroying it in the process.
A centaur radiologist, double-checking with an AI backstop, processes fewer chest X-rays every day, costing their employer more, on top of the cost of the AI. That's not what AI companies are selling. They're offering hospitals the power to create reverse centaurs: radiologist-assisted AIs. That's what "human in the loop" means.
This is a problem for workers, but it's also a problem for their bosses (assuming those bosses actually care about correcting AI hallucinations, rather than providing a figleaf that lets them commit fraud or kill people and shift the blame to an unpunishable AI).
Humans are good at a lot of things, but they're not good at eternal, perfect vigilance. Writing code is hard, but performing code-review (where you check someone else's code for errors) is much harder – and it gets even harder if the code you're reviewing is usually fine, because this requires that you maintain your vigilance for something that only occurs at rare and unpredictable intervals:
https://twitter.com/qntm/status/1773779967521780169
But for a coding shop to make the cost of an AI pencil out, the human in the loop needs to be able to process a lot of AI-generated code. Replacing a human with an AI doesn't produce any savings if you need to hire two more humans to take turns doing close reads of the AI's code.
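The economics here are just arithmetic. A back-of-envelope sketch makes the trap concrete (every wage, headcount, and cost figure below is invented for illustration, not drawn from any real coding shop):

```python
def net_savings(workers_replaced, worker_wage,
                reviewers_needed, reviewer_wage, ai_cost):
    """Annual savings from replacing workers with an AI plus human reviewers.
    A negative result means the 'automation' costs more than the humans it replaced."""
    saved = workers_replaced * worker_wage
    spent = reviewers_needed * reviewer_wage + ai_cost
    return saved - spent

# Illustrative numbers only: fire 3 coders, but reviewing a firehose of
# machine-generated code is harder than writing it, so it takes 2 senior
# reviewers at a higher wage, plus the AI subscription itself.
print(net_savings(3, 100_000, 2, 150_000, 50_000))  # prints -50000
```

The sign of that number is the whole business case: if the reviewers needed to catch the AI's errors cost as much as the workers they replaced, there is no pencil that makes the deal pencil out.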
This is the fatal flaw in robo-taxi schemes. The "human in the loop" who is supposed to keep the murderbot from smashing into other cars, steering into oncoming traffic, or running down pedestrians isn't a driver, they're a driving instructor. This is a much harder job than being a driver, even when the student driver you're monitoring is a human, making human mistakes at human speed. It's even harder when the student driver is a robot, making errors at computer speed:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
This is why the doomed robo-taxi company Cruise had to deploy 1.5 skilled, high-paid human monitors to oversee each of its murderbots, while traditional taxis operate at a fraction of the cost with a single, precaritized, low-paid human driver:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there's another problem that is, if anything, even more fatal: the kinds of errors that AIs make.
Foundationally, AI is applied statistics. An AI company trains its AI by feeding it a lot of data about the real world. The program processes this data, looking for statistical correlations in that data, and makes a model of the world based on those correlations. A chatbot is a next-word-guessing program, and an AI "art" generator is a next-pixel-guessing program. They're drawing on billions of documents to find the most statistically likely way of finishing a sentence or a line of pixels in a bitmap:
https://dl.acm.org/doi/10.1145/3442188.3445922
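The "next-word-guessing" idea can be sketched in miniature with a bigram model – a toy stand-in for what LLMs do with billions of documents and far longer contexts (the three-sentence corpus below is made up for the example):

```python
from collections import Counter, defaultdict

# Toy training corpus; a real model ingests billions of documents.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

print(guess_next("the"))  # prints cat -- "cat" follows "the" most often
```

Note what this tiny model gets you: fluent-looking continuations with no model of cats, mats, or truth – only of which tokens tend to follow which. Scale changes the fluency, not the nature of the guess.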
This means that AI doesn't just make errors – it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead:
https://www.tomsguide.com/opinion/according-to-chatgpt-im-dead
But the most common errors that AIs make are the ones we don't notice, because they're perfectly camouflaged as the truth. Think of the recurring AI programming error that inserts a call to a nonexistent library called "huggingface-cli," which is what the library would be called if developers reliably followed naming conventions. But due to a human inconsistency, the real library has a slightly different name. The fact that AIs repeatedly inserted references to the nonexistent library opened up a vulnerability – a security researcher created an (inert) malicious library with that name and tricked numerous companies into compiling it into their code because their human reviewers missed the chatbot's (statistically indistinguishable from the truth) lie:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
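One low-tech defense against this kind of attack is simply to check that a dependency actually exists on the package index before anything installs it. A sketch against PyPI's real JSON endpoint (`https://pypi.org/pypi/<name>/json` returns 404 for unregistered names; the helper name and the injectable `opener` parameter are my own invention, the latter just so the function can be tested without network access):

```python
import urllib.request
import urllib.error

def package_exists(name, opener=urllib.request.urlopen):
    """Return True if `name` is registered on PyPI, False on a 404.

    A hallucinated dependency like the "huggingface-cli" described
    above would come back False -- a cheap sanity check to run over
    a requirements file before pip ever sees it."""
    try:
        opener(f"https://pypi.org/pypi/{name}/json")
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors are real failures, not "no such package"
```

Of course, this only tells you a name is registered, not that it's trustworthy – the researcher's attack worked precisely by registering the name the AI kept hallucinating. Existence checks catch the unclaimed hallucinations; the claimed ones need human review, which is the whole problem.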
For a driving instructor or a code reviewer overseeing a human subject, the majority of errors are comparatively easy to spot, because they're the kinds of errors that lead to inconsistent library naming – places where a human behaved erratically or irregularly. But when reality is irregular or erratic, the AI will make errors by presuming that things are statistically normal.
These are the hardest kinds of errors to spot. They couldn't be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.
This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.
However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":
https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
It was a hoax. When independent material scientists reviewed representative samples of these "new materials," they concluded that "no new materials have been discovered" and that not one of these materials was "credible, useful and novel":
https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/
As Brian Merchant writes, AI claims are eerily similar to "smoke and mirrors" – the dazzling reality-distortion field thrown up by 17th century magic lantern technology, which millions of people ascribed wild capabilities to, thanks to the outlandish claims of the technology's promoters:
https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors
The fact that we have a four-hundred-year-old name for this phenomenon, and yet we're still falling prey to it, is frankly a little depressing. And, unlucky for us, it turns out that AI therapybots can't help us with this – rather, they're apt to literally convince us to kill ourselves:
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#ai#automation#humans in the loop#centaurs#reverse centaurs#labor#ai safety#sanity checks#spot the mistake#code review#driving instructor
853 notes
Text
The reason I took interest in AI as an art medium is that I've always been interested in experimenting with novel and unconventional art media - I started incorporating power tools into a lot of my physical processes younger than most people were even allowed to breathe near them, and I took to digital art like a duck to water when it was the big, relatively new, controversial thing too, so really this just seems like the logical next step. More than that, it's exciting - it's not every day that we just invent an entirely new never-before-seen art medium! I have always been one to go fucking wild for that shit.
Which is, ironically, a huge part of why I almost reflexively recoil at how it's used in the corporate world: because the world of business, particularly the entertainment industry, has what often seems like less than zero interest in appreciating it as a novel medium.
And I often wonder how much less that would be the case - and, by extension, how much less vitriolic the discussion around it would be, and how many fewer well-meaning people would be falling for reactionary mythologies about where exactly the problems lie - if it hadn't reached the point of...at least an illusion of commercial viability, at exactly the moment it did.
See, the groundwork was laid in 2020, back during covid lockdowns, when we saw a massive spike in people relying on TV, games, books, movies, etc. to compensate for the lack of outdoor, physical, social entertainment. This was, seemingly, wonderful for the whole industry - but under late-stage capitalism, it was as much of a curse as it was a gift. When industries are run by people whose sole brain process is "line-go-up", tiny factors like "we're not going to be in lockdown forever" don't matter. CEOs got dollar signs in their eyes. Shareholders demanded not only perpetual growth, but perpetual growth at this rate or better. Even though everyone with an ounce of common sense was screaming "this is an aberration, this is not sustainable" - it didn't matter. The business bros refused to believe it. This was their new normal, they were determined to prove -
And they, predictably, failed to prove it.
So now the business bros are in a pickle. They're beholden to the shareholders to do everything within their power to maintain the infinite growth they promised, in a world with finite resources. In fact, by precedent, they're beholden to this by law. Fiduciary duty has been interpreted in court to mean that, given the choice between offering a better product and ensuring maximum returns for shareholders, the latter MUST be a higher priority; reinvesting too much in the business instead of trying to make the share value increase as much as possible, as fast as possible, can result in a lawsuit - that a board member or CEO can lose, and have lost before - because it's not acting in the best interest of shareholders. If that unsustainable explosive growth was promised forever, all the more so.
And now, 2-3-4 years on, that impossibility hangs like a sword of Damocles over the heads of these media company CEOs. The market is fully saturated; the number of new potential customers left to onboard is negligible. Some companies began trying to "solve" this "problem" by violating consumer privacy and charging per household member, which (also predictably) backfired because those of us who live in reality and not statsland were not exactly thrilled about the concept of being told we couldn't watch TV with our own families. Shareholders are getting antsy, because their (however predictably impossible) infinite lockdown-level profits...aren't coming, and someone's gotta make up for that, right? So they had already started enshittifying, making excuses for layoffs, for cutting employee pay, for duty creep, for increasing crunch, for lean-staffing, for tightening turnarounds-
And that was when we got the first iterations of AI image generation that were actually somewhat useful for things like rapid first drafts, moodboards, and conceptualizing.
Lo! A savior! It might as well have been the digital messiah to the business bros, and their eyes turned back into dollar signs. More than that, they were being promised that this...both was, and wasn't art at the same time. It was good enough for their final product, or if not it would be within a year or two, but it required no skill whatsoever to make! Soon, you could fire ALL your creatives and just have Susan from accounting write your scripts and make your concept art with all the effort that it takes to get lunch from a Star Trek replicator!
This is every bit as much bullshit as the promise of infinite lockdown-level growth, of course, but with shareholders clamoring for the money they were recklessly promised, executives are looking for anything, even the slightest glimmer of a new possibility, that just might work as a life raft from this sinking ship.
So where are we now? Well, we're exiting the "fucking around" phase and entering "finding out". According to anecdotes I've read, companies are, allegedly, already hiring prompt engineers (or "prompters" - can't give them a job title that implies there's skill or thought involved, now can we, that just might imply they deserve enough money to survive!)...and most of them not only lack the skill to manually post-process their works, but don't even know how (or perhaps aren't given access) to fully use the software they specialize in, being blissfully unaware of (or perhaps not able/allowed to use) features such as inpainting or img2img. It has been observed many times that LLMs are being used to flood once-reputable information outlets with hallucinated garbage. I can verify - as can nearly everyone who was online in the aftermath of the Glasgow Willy Wonka Dashcon Experience - that the results are often outright comically bad.
To anyone who was paying attention to anything other than please-line-go-up-faster-please-line-go-please (or buying so heavily into reactionary mythologies about why AI can be dangerous in industry that they bought the tech companies' false promises too and just thought it was a bad thing), this was entirely predictable. Unfortunately for everyone in the blast radius, common sense has never been an executive's strong suit when so much money is on the line.
Much like CGI before it, what we have here is a whole new medium that is seldom being treated as a new medium with its own unique strengths, but more often being used as a replacement for more expensive labor, no matter how bad the result may be - nor, for that matter, how unjust it may be that the labor is so much cheaper.
And it's all because of timing. It's all because it came about in the perfect moment to look like a life raft in a moment of late-stage capitalist panic. Any port in a storm, after all - even if that port is a non-Euclidean labyrinth of soggy, rotten botshit garbage.
Any port in a storm, right? ...right?
All images generated using Simple Stable, under the Code of Ethics of Are We Art Yet?
#ai art#generated art#generated artwork#essays#about ai#worth a whole 'nother essay is how the tech side exists in a state that is both thriving and floundering at the same time#because the money theyre operating with is in schrodinger's box#at the same time it exists and it doesnt#theyre highly valued but usually operating at a loss#that is another MASSIVE can of worms and deserves its own deep dive
442 notes
Text
Fascinated by what the business case could be for Coca-Cola using generative AI.
400 notes
Text
marta svetek -- voice of gregory, vanny, roxanne wolf in fnaf: sb and its dlc, ruin -- has joined kellen goff in voicing her thoughts on ai reproductions of her voice. again: if you want voice actors to maintain a living wage (and their own mental health), please respect her wishes and ensure that this kind of behaviour is heavily discouraged in the future.
full text of thread below the cut:
Been getting tagged in all kinds of posts and popular TikToks of people using AI voice clones of my FNAF characters to have them sing or say all kinds of random stuff. You might think it's fun. But really it's contributing to the problem all VAs are facing in the wake of AI. The more these AI voice clones get used, the more they're normalized, and the more businesses see it as a viable alternative to real VAs when making content for you. Slowly but surely, most of your favourite VAs will be out of work. Not to mention what this will cost in terms of future generations of VAs. Incredible performances we'll never get to experience because most businesses will more often than not go for the cheapest, quickest option as soon as the technology is up to scratch. This is made even worse by the fact that we currently have very few legal protections against the unauthorized use of our voices using AI. And we're being pressured to waive those too, for a fraction of their worth. This is one of the big reasons the current strikes are a thing. If you enjoyed my work in FNAF and have any respect for the amount of time and effort VAs put into bringing characters to life, don't use AI voice clones in your fanart. I can't express how violated I feel every time I hear my voice say words that aren't my own. And just in case it wasn't clear - I have never consented to having my voice, likeness or any of my performances synthesized into AI voices/avatars etc.
#fnaf#five nights at freddy's#roxanne wolf#fnaf gregory#fnaf roxy#vanny#discourse#speaking!#no your funnie tiktoks and fangames#are not worth violating the bodies of other human beings#no not even if you're broke.#ch/rist alive guys.#vas are human beings who deserve basic respect#as well as the ability to freely live their life#and create their art#support workers. support artists. support strikes.#your actions have consequences. let them be good ones.
1K notes
·
View notes
Text
gonna start this post upfront by saying tumblr's fuckin up bad with moderation right now, regarding the wave of trans people being targeted. but i'm not here to discuss that issue, i'm going to talk about the nature of large and small social spaces on the internet
as this post rightly points out, examining our existing social network structure reveals the crux of the problem: we are tenants on someone else's service. extrapolating from that, we're the source of revenue for someone's business. under that model, there is no incentive whatsoever for a social network to apply a "fair" or "just" moderation scheme. their goal is to maximize the number of people using the service and minimize blowback from advertisers regarding "what goes on" on the site
there will not be an alternative social network that gets this right at scale, unless it meets the following criteria:
1. Has ample moderators to thoughtfully deal with user moderation cases
2. Has terms of service that you agree with
3. Has a moderation team that understands how to apply moderation according to the terms of service, and amends it when necessary
4. Does not rely on external income source to pay for the site
Number 1: An ideal social network is one that has numerous, well-treated moderators who are adept at resolving conflict. Under capitalism, this is a non-starter, as moderation is seen as a money sink that just needs to be barely enough to make the site usable.
Number 2: An ideal social network has terms of service you agree with. Unfortunately there's no set of rules everyone will find fair. While this is not a problem for the people who want to use the site, it will inevitably create an outgroup who are pushed away from the site. The obvious bad actors (nazis, terfs, etc) are pretty straightforward, but there are groups that do things you might find "unpleasant" even if you support their right to do it. Inevitably this turns into lines drawn in the sand about how visible should that content be.
Number 3: An ideal social network has moderators who have internalized the terms of service and consistently make decisions based on the TOS. If a situation comes up where there's no clear ruling in the TOS, but users need a moderation decision regarding it, the moderation team must choose how to act and then, potentially, amend the TOS if the case warrants it. Humans, though, are not robots, and no, AI is not the solution here jesus christ. There will always be variance in moderation decisions. And when it comes to amending the TOS, who's the decision maker? The sites' owners? The moderation team? Users as a whole?
Number 4: An ideal social network does not rely on an external income source to pay for the site. The site pays for itself, and its income flow covers the costs necessary with reserves for unexpected situations. Again, under capitalism this is a no-go, because a corporate social network's only goal is to maximize money. Infinite growth, not stasis. A private social network paid by members requires enough paying members to be sustainable, and costs will generally go up over time, not down. A social network that has some lump sum of cash just generating wealth is also unreliable because, first you need a large lump sum to begin with, and that mechanism is tied to the whims of the investment market. And, again, costs of the site will go up, not down.
As you've read through these you're probably reaching the conclusion: making a large-scale social network that is fair and sustainable is very, very difficult, if not impossible with our current culture and economic systems. There might be a scale where you can reach "almost fair" and "barely sustainable", but then you have to cap its growth.
So the "town square" social network is rife with problems and we need to abandon its model as the ideal network. Should we go small instead? We have a model already for that with message boards and forums. Though they weren't without their problems, they didn't have the scale that exacerbated those problems to crisis levels. Most of the time.
If you're thinking maybe you need a small network like this, free from a corporate owner (like Discord), the tools are out there for you to accomplish it. However, before you try, keep the above points in mind. Even if you're not out to create a large-scale social network, an open network will run away from you. And all of those points above are guidelines for a good online community.
You and your network of 50 friends and friends of friends might all get along together, but every single person you add increases the risk of creating moderation problems. People also change, or simply have episodes of irrational behavior. You need a dedicated team of moderators who are acting coherently for and agreeably to the community.
And you absolutely must keep this in mind: inevitably, as you add more people, someone will do vile shit. CSAM and violence type shit. You have to be prepared to encounter it. You have to have a plan to see and handle that, and the moderators who are part of your moderation team must be prepared to see and handle it too.
There's been a steady trickle of new alternative social networks (or social media networks) popping up, but you cannot expect those to be perfect havens. Tumblr was once the haven for weirdos on the internet. Now it's hostile to its core members. This is not trying to rationalize staying here because "hey, it could be worse". This is just trying to warn you to temper your expectations, especially because new networks that suddenly get a huge influx of new members hit a critical point where many falter, change, or fail.
Examine who's running those networks closely. Think critically about what they're touting as the benefits of those networks. And if you decide to join them, do not, under any case, expect those new homes to be permanent.
206 notes
·
View notes
Text
Hey, you know how I said there was nothing ethical about Adobe's approach to AI? Well whaddya know?
Adobe wants your team lead to contact their customer service to not have your private documents scraped!
This isn't the first of Adobe's always-online subscription-based products (which should not have been allowed in the first place) to have sneaky little scraping permissions auto-set to on and hidden away, but this is the first one (I'm aware of) where you have to contact customer service to turn it off for a whole team.
Now, I'm on record for saying I see scraping as fair use, and it is. But there's an aspect of that that is very essential to it being fair use: The material must be A) public facing and B) fixed published work.
All public facing published work is subject to transformative work and academic study, and the use of mechanical apparatus to improve/accelerate that process does not change that principle. It's the difference between looking through someone's public instagram posts and reading through their drafts folder and DMs.
But that's not the kind of work that Adobe's interested in. See, they already have access to that work just like everyone else. But the in-progress work that Creative Cloud gives them access to, and the private work that's never published that's stored there, isn't in LAION. They want that advantage.
And that's valuable data. For an example: having a ton of snapshots of images in the process of being completed would be very handy for making an AI that takes incomplete work/sketches and 'finishes' it. That's on top of just being general dataset grist.
But that work is, definitionally, not published. There's no avenue to a fair use argument for scraping it, so they have to ask. And because they know it will be an unpopular ask, they make it a quiet op-out.
This was sinister enough when it was Photoshop, but PDF is mainly used for official documents and forms. That's tax documents, medical records, college applications, insurance documents, business records, legal documents. And because this is a server-side scrape, even if you opt-out, you have no guarantee that anyone you're sending those documents to has done so.
So, in case you weren't keeping score, corps like Adobe, Disney, Universal, Nintendo, etc all have the resources to make generative AI systems entirely with work they 'own' or can otherwise claim rights to, and no copyright argument can stop them because they own the copyrights.
They just don't want you to have access to it as a small creator to compete with them, and if they can expand copyright to cover styles and destroy fanworks they will. Here's a pic of Adobe trying to do just that:
If you want to know more about fair use and why it applies in this circumstance, I recommend the Electronic Frontier Foundation over the Copyright Alliance.
181 notes
·
View notes
Text
Auto-Generated Junk Web Sites
I don't know if you heard the complaints about Google getting worse since 2018, or about Amazon getting worse. Some people think Google got worse at search. I think Google got worse because the web got worse. Amazon got worse because the supply side on Amazon got worse, but ultimately Amazon is to blame for incentivising the sale of more and cheaper products on its platform.
In any case, if you search something on Google, you get a lot of junk, and if you search for a specific product on Amazon, you get a lot of junk, even though the process that led to the junk is very different.
I don't subscribe to the "Dead Internet Theory", the idea that most online content is social media and that most social media is bots. I think Google search has gotten worse because a lot of content from as recently as 2018 got deleted, and a lot of web 1.0 and the blogosphere got deleted, comment sections got deleted, and content in the style of web 1.0 and the blogosphere is no longer produced. Furthermore, many links are now broken because they don't directly link to web pages, but to social media accounts and tweets that used to aggregate links.
I don't think going back to web 1.0 will help discoverability, and it probably won't be as profitable or even monetiseable to maintain a useful web 1.0 page compared to an entertaining but ephemeral YouTube channel. Going back to Web 1.0 means more long-term after-hours labour of love site maintenance, and less social media posting as a career.
Anyway, Google has gotten noticeably worse since GPT-3 and ChatGPT were made available to the general public, and many people blame content farms with language models and image synthesis for this. I am not sure. If Google had started to show users meaningless AI generated content from large content farms, that means Google has finally lost the SEO war, and Google is worse at AI/language models than fly-by-night operations whose whole business model is skimming clicks off Google.
I just don't think that's true. I think the reality is worse.
Real web sites run by real people are getting overrun by AI-generated junk, and human editors can't stop it. Real people whose job it is to generate content are increasingly turning in AI junk at their jobs.
Furthermore, even people who are setting up a web site for a local business or an online presence for their personal brand/CV are using auto-generated text.
I have seen at least two different TV commercials by web hosting and web design companies that promoted this. Are you starting your own business? Do you run a small business? A business needs a web site. With our AI-powered tools, you don't have to worry about the content of your web site. We generate it for you.
There are companies out there today, selling something that's probably a re-labelled ChatGPT or LLaMA plus Stable Diffusion to somebody who is just setting up a bicycle repair shop. All the pictures and written copy on the web presence for that repair shop will be automatically generated.
We would be living in a much better world if there was a small number of large content farms and bot operators poisoning our search results. Instead, we are living in a world where many real people are individually doing their part.
165 notes
·
View notes
Text
Tokyo Debunker WickChat Icons
as of posting this no chat with the Mortkranken ghouls has been released, so their icons are not here. If I forget to update when they come out, send me an ask!
Jin's is black, but not the default icon. An icon choice that says "do not perceive me."
Tohma's is the Frostheim crest. Very official, he probably sends out a lot of official Frostheim business group texts.
Kaito's is a doodle astronaut! He has the same astronaut on his phone case! He canonically likes stars, but I wonder if this is a doodle he made himself and put it on his phone case or something?
Luca's is maybe a family crest?
Alan's is the default icon. He doesn't know how to set one up, if I were to take an educated guess. . . .
Leo's is himself, looking cute and innocent. Pretty sure this is an altered version of the 'DATA DELETED' panel from Episode 2 Chapter 2.
Sho's is Bonnie!!! Fun fact, in Episode 2 Chapter 2 you can see that Bonnie has her name spraypainted/on a decal on her side!
Haru's is Peekaboo! Such a mommy blogger choice.
Towa's is some sort of flowers! I don't know flowers well enough to guess what kind though.
Ren's is the "NAW" poster! "NAW" is the in-world version of Jaws that Ren likes, and you can see the same poster over his bed.
Taiga's is a somewhat simplified, greyscale version of the Sinostra crest with a knife stabbing through it and a chain looping behind it. There are also roses growing behind it. Basically says "I Am The Boss Of Sinostra."
Romeo's is likely a brand logo. It looks like it's loosely inspired by the Gucci logo? I don't follow things like this, this honestly could be his family's business logo now that I think about it.
Ritsu's is just himself. Very professional.
Subaru's is hydrangeas I think! Hydrangeas in Japan represent a lot of things apparently, like fidelity, sincerity, remorse, and forgiveness, which all fit Subaru pretty well I think lol. . . .
Haku's is a riverside? I wonder if this is near where his family is from? It looks familiar, but a quick search isn't bringing anything up that would tell me where it is. . .
Zenji's is his professional logo I guess? The kanji used is 善 "Zen" from his first name! It means "good" or "right" or "virtue"!
Edward's appears to be a night sky full of stars. Not sure if the big glowing one is the moon or what. . . .
Rui's is a mixed drink! Assuming this is an actual cocktail of some sort, somebody else can probably figure out what it is. Given the AI generated nature of several images in the game, it's probably not real lol.
Lyca's is his blankie! Do not wash it. Or touch it. It's all he's got.
Yuri's is his signature! Simple and professional, but a little unique.
Jiro's is a winged asklepian/Rod of Asclepius in front of a blue cross? ⚕ Not sure what's at the top of the rod. A fancy syringe plunger maybe? It's very much a symbol of medicine and healing, so his is also very professional. Considering he sends you texts regarding your appointments, it might be the symbol of Mortkranken's medical office?
These are from the NPCs the PC was in the "Concert Buds" group chat with! The icons are pretty generic: a cat silhouette staring at a starry sky (SickleMoon), a pink, blue, and yellow gradient swirl (Pickles), a cute panda (Corby), and spider lilies (Mina). Red spider lilies in Japan are a symbol of death--and Mina of course cursed the PC, allegedly cursing them to death in a year.
#tokyo debunker#danie yells at tokyo debunker#not sure if anon meant these or not but here they are anyway lol#tdb ref
142 notes
·
View notes
Text
Anyone who has spent even 15 minutes on TikTok over the past two months will have stumbled across more than one creator talking about Project 2025, a nearly thousand-page policy blueprint from the Heritage Foundation that outlines a radical overhaul of the government under a second Trump administration. Some of the plan’s most alarming elements—including severely restricting abortion and rolling back the rights of LGBTQ+ people—have already become major talking points in the presidential race.
But according to a new analysis from the Technology Oversight Project, Project 2025 includes hefty handouts and deregulation for big business, and the tech industry is no exception. The plan would roll back environmental regulation to the benefit of the AI and crypto industries, quash labor rights, and scrap whole regulatory agencies, handing a massive win to big companies and billionaires—including many of Trump’s own supporters in tech and Silicon Valley.
“Their desire to eliminate whole agencies that are the enforcers of antitrust, of consumer protection is a huge, huge gift to the tech industry in general,” says Sacha Haworth, executive director at the Tech Oversight Project.
One of the most drastic proposals in Project 2025 suggests abolishing the Federal Reserve altogether, which would allow banks to back their money using cryptocurrencies, if they so choose. And though some conservatives have railed against the dominance of Big Tech, Project 2025 also suggests that a second Trump administration could abolish the Federal Trade Commission (FTC), which currently has the power to enforce antitrust laws.
Project 2025 would also drastically shrink the role of the National Labor Relations Board, the independent agency that protects employees' ability to organize and enforces fair labor practices. This could have a major knock-on effect for tech companies: In January, Musk's SpaceX filed a lawsuit in a Texas federal court claiming that the National Labor Relations Board (NLRB) was unconstitutional after the agency said the company had illegally fired eight employees who sent a letter to the company's board saying that Musk was a "distraction and embarrassment." Last week, a Texas judge ruled that the structure of the NLRB—which includes a director that can't be fired by the president—was unconstitutional, and experts believe the case may wind its way to the Supreme Court.
This proposal from Project 2025 could help quash the nascent unionization efforts within the tech sector, says Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation. “Tech, of course, relies a lot on independent contractors,” says West. “They have a lot of jobs that don't offer benefits. It's really an important part of the tech sector. And this document seems to reward those types of business.”
For emerging technologies like AI and crypto, a rollback in environmental regulations proposed by Project 2025 would mean that companies would not be accountable for the massive energy and environmental costs associated with bitcoin mining and running and cooling the data centers that make AI possible. “The tech industry can then backtrack on emission pledges, especially given that they are all in on developing AI technology,” says Haworth.
The Republican Party’s official platform for the 2024 elections is even more explicit, promising to roll back the Biden administration’s early efforts to ensure AI safety and “defend the right to mine Bitcoin.”
All of these changes would conveniently benefit some of Trump’s most vocal and important backers in Silicon Valley. Trump’s running mate, Republican senator J.D. Vance of Ohio, has long had connections to the tech industry, particularly through his former employer, billionaire founder of Palantir and longtime Trump backer Peter Thiel. (Thiel’s venture capital firm, Founder’s Fund, invested $200 million in crypto earlier this year.)
Thiel is one of several other Silicon Valley heavyweights who have recently thrown their support behind Trump. In the past month, Elon Musk and David Sacks have both been vocal about backing the former president. Venture capitalists Marc Andreessen and Ben Horowitz, whose firm a16z has invested in several crypto and AI startups, have also said they will be donating to the Trump campaign.
“They see this as their chance to prevent future regulation,” says Haworth. “They are buying the ability to avoid oversight.”
Reporting from Bloomberg found that sections of Project 2025 were written by people who have worked or lobbied for companies like Meta, Amazon, and undisclosed bitcoin companies. Both Trump and independent candidate Robert F. Kennedy Jr. have courted donors in the crypto space, and in May, the Trump campaign announced it would accept donations in cryptocurrency.
But Project 2025 wouldn’t necessarily favor all tech companies. In the document, the authors accuse Big Tech companies of attempting “to drive diverse political viewpoints from the digital town square.” The plan supports legislation that would eliminate the immunities granted to social media platforms by Section 230, which protects companies from being legally held responsible for user-generated content on their sites, and pushes for “anti-discrimination” policies that “prohibit discrimination against core political viewpoints.”
It would also seek to impose transparency rules on social platforms, saying that the Federal Communications Commission (FCC) “could require these platforms to provide greater specificity regarding their terms of service, and it could hold them accountable by prohibiting actions that are inconsistent with those plain and particular terms.”
And despite Trump’s own promise to bring back TikTok, Project 2025 suggests the administration “ban all Chinese social media apps such as TikTok and WeChat, which pose significant national security risks and expose American consumers to data and identity theft.”
West says the plan is full of contradictions when it comes to its approach to regulation. It’s also, he says, notably soft on industries where tech billionaires and venture capitalists have put a significant amount of money, namely AI and cryptocurrency. “Project 2025 is not just to be a policy statement, but to be a fundraising vehicle,” he says. “So, I think the money angle is important in terms of helping to resolve some of the seemingly inconsistencies in the regulatory approach.”
It remains to be seen how impactful Project 2025 could be on a future Republican administration. On Tuesday, Paul Dans, the director of the Heritage Foundation’s Project 2025, stepped down. Though Trump himself has sought to distance himself from the plan, reporting from the Wall Street Journal indicates that while the project may be lower profile, it’s not going away. Instead, the Heritage Foundation is shifting its focus to making a list of conservative personnel who could be hired into a Republican administration to execute the party’s vision.
64 notes
·
View notes
Note
I feel like some people can't be/refuse to be educated, or they're deliberately being obtuse because they're trolls, psyops, or they just fell for the trolls and psyops. But its still good to point out where they're wrong and to give actual, you know, facts, for the benefit of other people reading who might actually be reachable.
yeah, I mean I usually ignore them because usually it's bad faith and when a post is getting hundreds even thousands of notes in a day you just can't keep up with the 10-20-ish people who say something, particularly if it's in the tags because thats just hard or fighting in the replies which always feels weird
But I was in a bad mood and in general seeing the same either bad faith or straight up don't know comment over and over and over again is very annoying
the "lol Joe Biden didn't do anything about Student loans!" one is pretty annoying since Biden has forgiven well over 100 BILLION dollars worth of student loan debt, so like he has done a lot on student loan debt. I'm not a big deal but I remember I did one of my "what Biden did this week" posts and it had the student loan debt forgiveness for people who got defrauded by the Art Institutes, and a few people added their stories of being defrauded and being in debt to AI for years and the one that'll stay with me was an older guy who went to try to get a new degree to get a job in a different field kinda late in the game, his 50s or 60s and of course didn't get the jobs he hoped for because scam college and saying how he thought he'd die in debt and it was all gone, all forgiven. So just like people flippantly dismissing a very real life changing thing is very annoying
there are a few other very common annoying ones "why didn't he do this when he controlled congress before!" well he was busy passing the biggest climate change bill any government on earth has ever done, investing in our Infrastructure for the first time since before Reagan was President (Reagan 😒) listen Biden passed 4 of the biggest most transformationally progressive bills the US has seen since LBJ
American Rescue Plan
Bipartisan Infrastructure Law
CHIPS and Science Act
Inflation Reduction Act
on top of which he passed the first gun control law out of congress in 30 years, and other things, like the Respect for Marriage Act to protect gay marriage, or making Juneteenth a federal holiday (the first new federal holiday since MLK day in 1983)
SO! thats why he didn't do the things he wants to do in his next term he was busy doing equally (and in the case of climate change more important) things and thats why we should all be hopeful if Joe Biden is President with a Democratic Congress he'll get most if not ALL the things on his agenda done, because he's fucking good at this, we haven't had a President this good at pushing bills through Congress and using every switch and lever of the federal government to make major progressive change since LBJ or FDR, I guess his big mistake was naming it something boring like "Inflation Reduction Act" and not something sexy like "New Deal" or "Great Society"
sorry to go off on a tear there, but its just frustrating to see 40 (out of tens of thousands really) posts saying the same dumb shit and having no real way to respond
92 notes
·
View notes
Note
hey, same person who requested Medium/Psychic Reader. I've noticed you wrote only for 2 characters but is it okay if u do the same for Optimus and Ratchet please?
[ Please do not repost, plagiarize, or use my writing for AI! Translating my work with proper credit is acceptable, but please ask first! ]
Optimus
The rides back to base from the church are always haunted by this eerie and tense silence, and it's impossible to miss the stark difference in their demeanor before and after the church congregation. He's unsure whether it's normal for humans to seem exhausted and on edge after attending a religious congregation, and though he momentarily considers this may be part of their religious culture, he's far too concerned to simply leave it be.
He asks the question up front and directly; it's clear he's not beating around the bush, and he wants an answer because he's concerned for their well-being not only as their guardian but as a friend. If they choose to lie to him, it'll be hard persuading him to leave it at that unless they're a very--and I mean very--good liar. He is by no means a lie detector, but he's good at picking up on the signs and interpreting them.
If they're honest with him, he believes them rather quickly, given the general belief of the existence of an afterlife in cybertronian culture. However he finds it fascinating, given how it's not exactly clear that humans have a "spark" or a "soul" that would be reborn, and yet some persist after death and take the form of a ghost with the potential to grow malevolent and harmful towards the living.
When he goes to pick them up, he'll offer to escort them back home early and so they can rest and relax from whatever it was they had to do that day. However, if they don't wish to be alone in that moment, he'll bring them back to base and they can sit near the rails as he works, or they can join him on a brief patrol around the area.
Ratchet
He's not one to poke and prod at business that doesn't concern him, no matter the rumors he hears about it, yet the only exception is when said business ends up hurting others. In this case it has to do with his charge. He's heard the kids gossip about the local church in town, and they talk about ghosts and the paranormal, and of course he doesn't believe in such things... Or so he thinks.
His eyes are sharper than they'd expect, and he tends to notice a lot of things that they think they're hiding well as he drives them back to base. He doesn't miss the haunted look on their face, or the tense periods of silence after attending the "assembly" at church, or the way they jump at the creaking of the metal pipes that run along the walls once they're back inside.
And one day he'll ask how "church" went after picking them up. If they're clearly reluctant to talk, he'll push, insisting that it's obvious that they don't look well after whatever they had just done, and as their guardian, he needs to know what's happened because he's concerned that it's causing them some sort of harm.
But if they're honest, he'll hesitate to respond. He's skeptical that ghosts and spirits exist, seeing as humans don't seem to have any sparks that persist after their bodies die, but if humans are the spawn of Unicron, then perhaps this is a result of that? If this is genuinely what is causing them all this stress then he'll take it at that. From then on, every time they finish "church", he'll offer to take them for a drive around the town to get their mind off all the stress they no doubt endured during the exorcism or whatever else it was they had to do that day.
#tfp imagines#tfp headcanons#tfp x reader#tfp optimus prime#optimus prime x reader#tfp ratchet#ratchet x reader#x reader#reader insert#self insert#weenwrites
52 notes
·
View notes
Text
"Why is [Social Media CEO] destroying their website?" Because their internal reports suggest that their power-users (die hard, addicted, cost of time fallacy drained users) will not leave their platform even for the dumbest changes. The people who will leave? The ones who don't have reach, the ones who aren't advertiser friendly, the ones with literally nothing to lose on their sites. They are the ones who "waste" their bandwidth and resources; they post images, videos, etc at a much higher rate than the deigned social-fantasy gods hitting those 1000x multipliers of social engagement, and they won't receive any money from them because they don't have reach at that level. [Social Media CEO] is openly trying to drive the 'common' person off their platforms, the ones who bring little to the table that an AI bot can just replace to satiate and give the 'influencer/marketer/shill' their respective analytics.

This is why platforms actively fail at solving "The Bot Problem" in most cases. Sure it's a relatively hard problem to solve on a systems level, but it's not impossible to severely reduce the bot population. Bots help daily active user counts, engagement, and pushing the power-users to continue using the platform as nudges (and often inflate those wonderful financial reports for their stock price to go ever higher).

Talking of stock prices, there is the concept of cost-per-user, or revenue-per-user. If you reduce the 'common' user and focus on power-users, you can change the more important analytics for the stock price for internal or external investors. Fewer people, but with more (or the same, or even less) revenue looks better. Think of it as "Less Effort but for the same money". A reduction of "staff", for the same monetary output.

Social Media platforms used to try hard for everyone to be on them. They don't want that anymore. Dealing with everyone is a nightmare. But those power users? The ones who generate revenue from a smaller set of users? That's the goal.
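The revenue-per-user arithmetic here can be sketched with a toy calculation. All the numbers below are made up for illustration (they come from no real platform's figures); the point is just that culling low-revenue users raises the average-revenue-per-user metric even when total revenue barely moves:

```python
# Toy model: 900 "common" users worth $0.01/month each, 100 power-users
# worth $10/month each. Dropping the common users barely dents total
# revenue but makes the per-user average look far better.
users = [0.01] * 900 + [10.0] * 100

def arpu(revenues):
    """Average revenue per user."""
    return sum(revenues) / len(revenues)

before = arpu(users)                        # ~$1.01 across 1000 users
after = arpu([r for r in users if r >= 1])  # $10.00 across 100 power-users

print(f"ARPU before: ${before:.2f}, after: ${after:.2f}")
```

Total revenue falls by under 1% (the $9 the common users brought in), while the headline per-user metric roughly tenfolds.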
This is the "private club" of yesteryear, brought to the internet. This is the new internet business plan, different in form from the one everyone thinks of, the one that has dominated the last thirty years of the internet. This is the new way to do business online in the 2020s.
Note
I think your post about AI Doom doesn't really acknowledge the fact that, generally speaking, people enjoy being alive for its own sake and prefer it to being dead. Unless I'm misinterpreting, the conclusion of the post is essentially saying that not wanting people to be killed is "out of step with human values" which is obviously not true. Most people do not want to be killed. Killing people is bad. It would not be OK for AI to kill everyone even if it made something else afterwards.
(Pt 2) this all seems extremely obvious to me but I could not come up with an interpretation of that post which isn’t just broadly in favour of people being killed, which seems sort of like. The most evil thing anyone could ever possibly believe. So I am hoping that I misinterpreted
You're not alone, this aspect of yesterday's post was confusing to a lot of people.
FWIW I'm mostly tapped out on discussing the subject matter of that post for the moment, but this does deserve some kind of further explanation, so here goes.
----
First, to address something you didn't mention, but which was broadly confusing:
I am not saying: "when the doomers say AI will kill us all, they don't mean the natural reading of that phrase, they don't mean it will literally kill all the individual humans, they mean some weird other thing instead."
No, they really do just mean it will kill everyone. Sorry that wasn't clear.
----
What I did mean, when I talked about doomers vs. average Joe here, is that the idea of human extinction hits different if you're an anti-deathist transhumanist, versus if you aren't one.
If you're an anti-deathist, what's bad about extinction is, in part, the same thing that's bad about ordinary death. The anti-deathist looks around them and sees, in some sense, a slow-motion and staggered extinction already happening.
Even without extinction, we are all gonna die. Our great-great-great-great-great-grandparents' generation did not die out in an extinction event, but all the same, they are in fact extinct. Dead. 100% fatality rate, for those guys.
Sure, it was spread out over time, and "natural," but -- the anti-deathist argues, quite reasonably -- why should any of that matter to them, the dead ones? Those distinctions don't change any of what it is that's intuitively bad about dying in the first place.
The horror you express at "people being killed"? For the anti-deathist, that horror gets generalized to include the case of people being killed "by death," as it were. By just, dying, of old age or whatever, rather than by the hand of some other creature.
----
Now sure, even for the anti-deathist, there are important ways that extinction is worse than business as usual. Most obviously, extinction not only stops all the lives of people around now, but prevents the lives of any future people from getting created later on. (Plus of course, all else being equal, death sooner is worse than death later.)
If you're not an anti-deathist, though -- and most people aren't -- these special factors that make extinction worse (for the anti-deathist) are in fact your only objections to extinction.
That is not to say that they aren't extremely strong objections. Of course normal people do not want human extinction!
But for the normal person, there is this hard line between "extinction" and "business as usual." For such a person, there is a horror in the former that just isn't there in the latter, even though (as the anti-deathist likes to point out) business as usual still means a 100% fatality rate, on a long enough timeline.
For the anti-deathist, there is not this hard line. Extinction is bad. Getting killed by a person or a machine is bad. Dying of natural causes is bad. And a lot of the badness -- though by no means all of it -- comes from what is shared across all these cases, not what is special to each case alone.
----
OK, now let's talk more directly about your question.
Unless I'm misinterpreting, the conclusion of the post is essentially saying that not wanting people to be killed is "out of step with human values" which is obviously not true.
I mean, yeah, that's obviously not true.
But there are things sort of superficially similar to it that might be true.
And when something is true, but on the surface sounds bizarre and backwards and staggeringly wrong, I often like to play around with the way it sounds -- to just have a bit of fun with the way I can say things that seem so outrageous, and yet might not actually be wrong. Or even really outrageous, when properly understood.
And maybe I get carried away with this, sometimes, at the expense of clarity. Sorry about that. (But also, it's my blog, where I write the kind of stuff I like writing. And I do like writing in this way. Them's the breaks.)
Anyway.
If we want to understand ordinary human values, then we need to cope with the "average Joe's" simultaneous belief in the following two things:
1. I really do not want to die. As a particular case, I really really do not want to die right now, today. But also, come to think of it, dying tomorrow would be super bad too. And you know what, the day after tomorrow? Same deal. And I guess I could go on like this.
2. I do not, at all, actively want to "live forever." In fact I kind of don't want this. If you directly ask me, I'll say the idea is sort of creepy and weird and bad. Or, even if I don't think that, I don't find the idea motivating at all. It might be acceptable, if it were forced on me, but none of my actions are driven by a desire to make it more likely.
(I am hand-waving away the concept of the afterlife here, which is involved in the typical Joe's actual beliefs in a way that annoyingly complicates the analysis while being tangential to my point. Let's say we're talking about the average atheist/agnostic but non-transhumanist Joe. I think the point can be generalized further, but I'm trying and failing to be brief here, so you'll just have to trust me.)
Now, together, these two beliefs are nearly a paradox.
Maybe they are just a paradox. Maybe you can't, really, think both of these at the same time without, on some level, kidding yourself. This is what the anti-deathist alleges, about the average Joe.
Maybe you agree. If so: congratulations, you're an anti-deathist too. Which is a perfectly valid point of view. Despite all I said in my post, I have quite a lot of sympathy for it, myself.
But the average Joe is really not an anti-deathist. This is just a fact about the world. Average Joe really does think both of the 2 things, at once. Maybe he does so inconsistently, or wrongly. Still, he does.
I think you essentially have two choices here. You can take the road less traveled, fully bite the "death is bad" bullet, and be an anti-deathist. Or, you can do what most do, and be like average Joe.
But if you are doing what average Joe does, and you go on to say things like...
being in favour of people being killed [is the] most evil thing anyone could ever possibly believe
...then you have some explaining to do. You have to spell out what this means, if it doesn't just mean full anti-deathism. Which is kinda what it sounds like.
A lot of things "kinda sound like" full anti-deathism. That view is very amenable to being phrased in terms that make it sound utterly obvious.
But we can't let this lull us into thinking that -- because anti-deathism sounds obvious, and average Joe often believes things that sound obvious -- average Joe believes in anti-deathism. Somehow, despite all that obviousness, he just doesn't.
Somehow, despite all that obviousness, anti-deathism is a fringe position. And if we're not on the fringe, then we have to spell out just what it is that we believe instead.
Now OK, let's be real. You didn't say "being in favour of death" was the evil thing. What you wrote was "people being killed," not "people dying."
And that's what makes the distinction for you, right? I imagine? That when some entity actively kills a person, there's a badness that goes beyond the badness of death per se?
----
That does sound pretty intuitive! But what exactly is it that makes killing worse, here?
I didn't answer that question, in my post. I answered a bunch of other questions, instead. There are still more questions, which no one has asked me, but which I kind of feel I ought to answer, when talking about this topic. Nonetheless, I have to stop myself at some point, or I'll never do anything else. Hence these kinds of glaring lacunae.
I won't answer it here, either, in full. I have some other things to do today, and this is no longer just explicating what I meant earlier, this is new stuff. I'll just make some gestures, now, towards the kind of answer that would make sense of how I treated the topic in my earlier post.
----
So, there are some pretty obvious answers to "why is killing especially bad?"
Say, that it reflects poorly on the killer: an AI that would kill us all is probably an AI that's just plain bad morally.
Or, that we have a norm against it. It's a part of our ethics, the stuff we agree on as part of the social contract.
But you know what we don't have a norm against? If we're average Joe, and not on the fringe?
Killing chickens.
Or torturing chickens, and then killing them. Or breeding lots of them, specifically to be tortured, and then killed.
Sorry for the sudden swerve into vegan talking points! But this is kind of a big deal.
I've heard this cited, multiple times, by doomer types as a motivating case for being worried about how superintelligent AIs might treat us.
Just look at how we treat creatures that can very evidently feel pain -- but just happen to be different from us, not constituted the way we are, and in particular much less smart than we are!
And I, personally, find this argument pretty motivating. This is one of those arguments where even I have to hand it to the doomers.
But once we've allowed this much, we are in danger of conceding some really wild shit, if we don't tread carefully. Maybe we even should concede the wild shit, in the last analysis. Still, we should tread carefully.
Say you take the chicken argument seriously.
You've conceded that human values contain some really fucked-up things about how to treat other, dumber, "more primitive" beings. Beings of the kind that prevailed before the new, "super"-intelligent, sparkly, world-dominating species stepped onto the scene and changed everything.
You've conceded that humans are basically misaligned AIs, of the evil killeveryone Torment Nexus sort.
Remember, that was the whole substance of the argument: to make such awful AIs seem more plausible, by pointing out that such a thing already exists. Namely, us.
But now, what standing do we have to object to the AIs, without it rebounding back on us? Must we oppose ourselves just as fervently as we oppose the evil AIs, for the same reason?
"An AI that kills all humans" sounds pretty bad. Sounds like an evil thing, that we would not want to exist. But by the same token, we're evil, and we shouldn't exist.
(We might have wiped out chickens, if they weren't so tasty. There are plenty of non-tasty things which we did, in fact, wipe out. I and the doomers focus on chickens and the like, here, because what we did to them is arguably even worse.)
Would we really accept an AI that's only "aligned with human values," and treats us about as well as we treat other beings when we are placed in an analogous scenario? Or do we hold AI to a higher standard -- one we can't possibly apply to ourselves, for that way lies madness?
Well, I don't know. These are tough questions.
But I would like to leave open some room to imagine, at least, that the advent of humanity was not (or not only) a catastrophe. That it was not, in fact, "the most evil thing possible."
Despite all the evil that we do, I'd like to imagine that.
And I'd like to imagine that, if there is such a thing as "human values," it contains this affirmation of the value of the advent of humanity.
And the value of things like the advent of humanity.
And the golden rule, and the rule of law. Which means, among other things: not holding you to a higher standard than I hold myself.
Even though the apparent implications of this are pretty nasty.
Philosophy is like that. Often you are between a rock and a hard place. Saying "that's a rock, don't you know that rocks cannot be walked through??" in an alarmed tone does not really get at the heart of the dilemma, or point the way to a solution.
----
All else being equal, of course, I would prefer not to be killed.
So would the chickens, I imagine.
We must not pretend there are easy answers, when there aren't.