#business use cases for generative AI
Text
How Can Generative AI Enhance Your Business's Organisational and Operational Processes?
Experience the future of business transformation with Generative AI. From customer support to content creation, sales optimization, and HR process automation, this cutting-edge technology is your key to enhanced efficiency and innovation. Learn how Generative AI can automate and elevate your operations while delivering personalized experiences and data-driven insights.
At Webclues Infotech, we're your partners in harnessing the true potential of Generative AI. Join us in this technological revolution and take your business to new heights in a fast-paced world. Read the blog for more insights.
#generative AI for customer support#Generative AI for sales#generative AI for marketing#Generating AI for HR#generative AI services#Generative AI chatbots#generative ai in business#business use cases for generative AI
0 notes
Text
I have been on a Willy Wonkified journey today and I need y'all to come with me
It started so innocently. Scrolling Google News I come across this article on Ars Technica:
At first glance I thought what happened was parents saw AI-generated images of an event their kids were at and became concerned, then realized it was fake. The reality? Oh so much better.
On Saturday, event organizers shut down a Glasgow-based "Willy's Chocolate Experience" after customers complained that the unofficial Wonka-inspired event, which took place in a sparsely decorated venue, did not match the lush AI-generated images listed on its official website.... According to Sky News, police were called to the event, and "advice was given."
Thing is, the people who paid to go were obviously not expecting exactly this:
But I can see how they'd be a bit pissed upon arriving to this:
It gets worse.
"Tempest, how could it possibly--"
Here's the source of this video, which also includes this charming description:
Made up a villain called The Unknown — 'an evil chocolate maker who lives in the walls'
There is already a meme.
Oh yes, the Wish.com Oompa Loompa:
Who has already done an interview!
As bad (and hilarious) as this all is, I got curious about the company that put on this event. Did they somehow overreach? Did the actors they hired back out at the last minute? (Or after they saw the script...) Oddly enough, it doesn't seem so!
Given what I found when poking around I'm legit surprised there was an event at all. Cuz this outfit seems to be 100% a scam.
The website for this specific event is here and it has many AI generated images on it, as stated. I don't think anyone who bought tickets looked very closely at these images, otherwise they might have been concerned about how much Catgacating their children would be exposed to.
Yes, Catgacating. You know, CATgacating!
I personally don't think anyone should serve exarserdray flavored lollipops in public spaces given how many people are allergic to it. And the sweet teats might not have been age appropriate.
Though the Twilight Tunnel looks pretty cool:
I'm not sure that Dim Tight Twdrding is safe. I've also been warned that Vivue Sounds are in that weird frequency range that makes you poop your pants upon hearing them.
Yes, Virginia, these folks used an AI image generator for everything on the website and used Chat GPT for some of the text! From the FAQ:
Q: I cannot go on the available days. Will you have more dates in the future? A: Should there be capacity when you arrive, then you will be able to enter without any problems. In the event that this is not the case, we may ask you to wait a bit.
Fear not, for this question is asked again a few lines down and the answer makes more sense.
Curious about the events company behind this disaster, I took myself over to the homepage of House of Illuminati and I was not disappointed.
I would 100% trust these people to plan my wedding.
This abomination of a website is a badly edited WordPress blog filled with AI art and just enough blog posts to make the casual viewer think that it's a legit business for about 0.0004 seconds.
Their attention to detail is stunning, from how they left up the default first post every WP blog gets to how they didn't bother changing the name on several images, thus revealing where they came from. Like this one:
With the lovely and compact filename "DALL·E-2024-01-30-09.50.54-Imagine-a-scene-where-fantasy-and-reality-merge-seamlessly.-In-the-foreground-a-grand-interactive-gala-is-taking-place-filled-with-elegant-guests-i.png"
"Concept.png" came from the same AI generator that gets text almost, but not quiiiiiite right:
There are a suspicious number of .webp images in the uploads, which makes me think they either stole them from other sites where AI "art" was uploaded or they didn't want to pay for the hi-res versions of some and just grabbed the preview image.
The real fun came when I noticed this filename: Before-and-After-Eventologists-Transformation-Edgbaston-Cricket-Ground-1024x1024-1.jpg and decided to do a Google image search. Friends, you will be shocked to hear that the image in question, found on this post touting how they can transform a boring warehouse into a fun event space, was stolen from this actual event planner.
Even better, this weirdly grainy image?
From a post that claims to be about the preparations for a "Willy Wonka" experience (we'll get to this in a minute), is not only NOT an actual image of anyone preparing anything for Illuminati's event, it is stolen from a YouTube thumbnail that's been chopped to remove the name of the company that actually made this. Here's the video.
If you actually read the blog posts they're all copypasta or some AI generated crap. To the point where this seems like not a real business at all. There's very specific business information at the bottom, but nothing else seems real.
As I said, I'm kinda surprised they put on an event at all. This has, "And then they ran off with all our money!" written all over it. I'm perplexed.
And also wondering when the copyright lawyers are gonna start calling, because...
This post explicitly says they're putting together a "Willy Wonka’s Chocolate Factory Experience" complete with golden tickets.
Somewhere along the line someone must have wised up, because the actual event was called "Willys Chocolate Experience" (note the lack of apostrophe) and the script they handed to the actors about 10 minutes before they were supposed to "perform" was about a "Willy McDuff" and his chocolate factory.
As I was going through this madness with friends in a chat, one pointed out that it took very little prompting to get the free Chat GPT to spit out an event description and such very similar to all this while avoiding copyrighted phrases. But he couldn't figure out where the McDuff came from since it wasn't the type of thing GPT would usually spit out...
Until he altered the prompt to include it would be happening in Glasgow, Scotland.
You cannot make this stuff up.
But truly, honestly, I do not even understand why they didn't take the money and run. Clearly this was all set up to be a scam. A lazy, AI generated scam.
Everything from the website to the event images to the copy to the "script" to the names of things was either stolen or AI generated (aka stolen). Hell, I'd be looking for some poor Japanese visitor wandering the streets of Glasgow, confused, after being jacked for his mascot costume.
HE LIVES IN THE WALLS, Y'ALL.
#long post#Willy Wonka#Wonka#Willy Wonka Experience#Willy Wonka Experience disaster#Willy's Chocolate Experience#Willys Chocolate Experience#THE UNKNOWN#Wish.com Oompa Loompa#House of Illuminati#AI#ai generated
8K notes
Text
With Agoge AI, users can expect a tailored approach to their communication needs. From perfecting negotiation techniques to enhancing presentation skills, this AI tool provides training to excel in business communication. 🗣️👥
#ai business#ai update#ai#ai community#ai developers#ai tools#ai discussion#ai development#ai generated#ai marketing#ai engineer#ai expert#ai revolution#ai technology#ai use cases#ai in digital marketing#ai integration#ai in ecommerce
0 notes
Text
“Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed
I'm touring my new, nationally bestselling novel The Bezzle! Catch me SATURDAY (Apr 27) in MARIN COUNTY, then Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!
If AI has a future (a big if), it will have to be economically viable. An industry can't spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:
https://news.ycombinator.com/item?id=39883571
A company that pays 0.36-1 cents/query for electricity and (scarce, fresh) water can't indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of "instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible":
https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn't optional – investor disillusionment is an inevitable part of every bubble).
Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable - errors ("hallucinations"). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don't care about the odd extra finger. If the chatbot powering a tourist's automatic text-to-translation-to-speech phone tool gets a few words wrong, it's still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.
There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company's perspective – is that these aren't just low-stakes, they're also low-value. Their users would pay something for them, but not very much.
For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.
Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada's chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada's internal review mechanisms before fighting his case for weeks more at the regulator:
https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454
There's never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn't have to pay them back. Air Canada is tacitly asserting that, as the country's flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it's too big to care.
Air Canada shows that for some business customers, AI doesn't need to be able to do a worker's job in order to be a smart purchase: a chatbot can replace a worker, fail to do that worker's job, and still save the company money on balance.
I can't predict whether the world's sociopathic monopolists are numerous and powerful enough to keep the lights on for AI companies through leases for automation systems that let them commit consequence-free fraud by replacing workers with chatbots that serve as moral crumple-zones for furious customers:
https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029
But even stipulating that this is sufficient, it's intrinsically unstable. Anything that can't go on forever eventually stops, and the mass replacement of humans with high-speed fraud software seems likely to stoke the already blazing furnace of modern antitrust:
https://www.eff.org/de/deeplinks/2021/08/party-its-1979-og-antitrust-back-baby
Of course, the AI companies have their own answer to this conundrum. A high-stakes/high-value customer can still fire workers and replace them with AI – they just need to hire fewer, cheaper workers to supervise the AI and monitor it for "hallucinations." This is called the "human in the loop" solution.
The human in the loop story has some glaring holes. From a worker's perspective, serving as the human in the loop in a scheme that cuts wage bills through AI is a nightmare – the worst possible kind of automation.
Let's pause for a little detour through automation theory here. Automation can augment a worker. We can call this a "centaur" – the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They're a human head on a robot body (hence "centaur"). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You're in charge, but you're getting a second opinion from the robot.
Likewise, consider an AI tool that double-checks a radiologist's diagnosis of your chest X-ray and suggests a second look when its assessment doesn't match the radiologist's. Again, the human is in charge, but the robot is serving as a backstop and helpmeet, using its inexhaustible robotic vigilance to augment human skill.
That's centaurs. They're the good automation. Then there's the bad automation: the reverse-centaur, when the human is used to augment the robot.
Amazon warehouse pickers stand in one place while robotic shelving units trundle up to them at speed; then, the haptic bracelets shackled around their wrists buzz at them, directing them to pick up specific items and move them to a basket, while a third automation system penalizes them for taking toilet breaks or even just walking around and shaking out their limbs to avoid a repetitive strain injury. This is a robotic head using a human body – and destroying it in the process.
An AI-assisted radiologist processes fewer chest X-rays every day, costing their employer more, on top of the cost of the AI. That's not what AI companies are selling. They're offering hospitals the power to create reverse centaurs: radiologist-assisted AIs. That's what "human in the loop" means.
This is a problem for workers, but it's also a problem for their bosses (assuming those bosses actually care about correcting AI hallucinations, rather than providing a figleaf that lets them commit fraud or kill people and shift the blame to an unpunishable AI).
Humans are good at a lot of things, but they're not good at eternal, perfect vigilance. Writing code is hard, but performing code-review (where you check someone else's code for errors) is much harder – and it gets even harder if the code you're reviewing is usually fine, because this requires that you maintain your vigilance for something that only occurs at rare and unpredictable intervals:
https://twitter.com/qntm/status/1773779967521780169
But for a coding shop to make the cost of an AI pencil out, the human in the loop needs to be able to process a lot of AI-generated code. Replacing a human with an AI doesn't produce any savings if you need to hire two more humans to take turns doing close reads of the AI's code.
This is the fatal flaw in robo-taxi schemes. The "human in the loop" who is supposed to keep the murderbot from smashing into other cars, steering into oncoming traffic, or running down pedestrians isn't a driver, they're a driving instructor. This is a much harder job than being a driver, even when the student driver you're monitoring is a human, making human mistakes at human speed. It's even harder when the student driver is a robot, making errors at computer speed:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
This is why the doomed robo-taxi company Cruise had to deploy 1.5 skilled, high-paid human monitors to oversee each of its murderbots, while traditional taxis operate at a fraction of the cost with a single, precaritized, low-paid human driver:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there's another problem that is, if anything, even more fatal: the kinds of errors that AIs make.
Foundationally, AI is applied statistics. An AI company trains its AI by feeding it a lot of data about the real world. The program processes this data, looking for statistical correlations in that data, and makes a model of the world based on those correlations. A chatbot is a next-word-guessing program, and an AI "art" generator is a next-pixel-guessing program. They're drawing on billions of documents to find the most statistically likely way of finishing a sentence or a line of pixels in a bitmap:
https://dl.acm.org/doi/10.1145/3442188.3445922
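To make the "applied statistics" point concrete, here is a deliberately tiny, purely illustrative sketch in Python (my own toy example, not anything from the linked paper): a bigram model that "writes" by always choosing the most statistically common next word in its training text. Real chatbots predict tokens with enormous neural networks rather than a lookup table, but the underlying move is the same: continue with whatever is most probable.

# Toy "next-word-guessing program": learn word-pair statistics from a tiny corpus,
# then generate text by always picking the most probable continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(word, length=5):
    # Keep appending the statistically most likely next word.
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # prints "the cat sat on the cat": plausible-looking, not "true"

By construction, the toy's output is always the most plausible-looking continuation rather than a correct one, which is exactly why the errors of the scaled-up versions are so well camouflaged.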
This means that AI doesn't just make errors – it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead:
https://www.tomsguide.com/opinion/according-to-chatgpt-im-dead
But the most common errors that AIs make are the ones we don't notice, because they're perfectly camouflaged as the truth. Think of the recurring AI programming error that inserts a call to a nonexistent library called "huggingface-cli," which is what the library would be called if developers reliably followed naming conventions. But due to a human inconsistency, the real library has a slightly different name. The fact that AIs repeatedly inserted references to the nonexistent library opened up a vulnerability – a security researcher created an (inert) malicious library with that name and tricked numerous companies into compiling it into their code because their human reviewers missed the chatbot's (statistically indistinguishable from the truth) lie:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
For a driving instructor or a code reviewer overseeing a human subject, the majority of errors are comparatively easy to spot, because they're the kinds of errors that lead to inconsistent library naming – places where a human behaved erratically or irregularly. But when reality is irregular or erratic, the AI will make errors by presuming that things are statistically normal.
These are the hardest kinds of errors to spot. They couldn't be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.
This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.
However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":
https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
It was a hoax. When independent material scientists reviewed representative samples of these "new materials," they concluded that "no new materials have been discovered" and that not one of these materials was "credible, useful and novel":
https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/
As Brian Merchant writes, AI claims are eerily similar to "smoke and mirrors" – the dazzling reality-distortion field thrown up by 17th century magic lantern technology, which millions of people ascribed wild capabilities to, thanks to the outlandish claims of the technology's promoters:
https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors
The fact that we have a four-hundred-year-old name for this phenomenon, and yet we're still falling prey to it, is frankly a little depressing. And, unlucky for us, it turns out that AI therapybots can't help us with this – rather, they're apt to literally convince us to kill ourselves:
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#ai#automation#humans in the loop#centaurs#reverse centaurs#labor#ai safety#sanity checks#spot the mistake#code review#driving instructor
855 notes
Text
The reason I took interest in AI as an art medium is that I've always been interested in experimenting with novel and unconventional art media - I started incorporating power tools into a lot of my physical processes younger than most people were even allowed to breathe near them, and I took to digital art like a duck to water when it was the big, relatively new, controversial thing too, so really this just seems like the logical next step. More than that, it's exciting - it's not every day that we just invent an entirely new never-before-seen art medium! I have always been one to go fucking wild for that shit.
Which is, ironically, a huge part of why I almost reflexively recoil at how it's used in the corporate world: because the world of business, particularly the entertainment industry, has what often seems like less than zero interest in appreciating it as a novel medium.
And I often wonder how much less that would be the case - and, by extension, how much less vitriolic the discussion around it would be, and how many fewer well-meaning people would be falling for reactionary mythologies about where exactly the problems lie - if it hadn't reached the point of...at least an illusion of commercial viability, at exactly the moment it did.
See, the groundwork was laid in 2020, back during covid lockdowns, when we saw a massive spike in people relying on TV, games, books, movies, etc. to compensate for the lack of outdoor, physical, social entertainment. This was, seemingly, wonderful for the whole industry - but under late-stage capitalism, it was as much of a curse as it was a gift. When industries are run by people whose sole brain process is "line-go-up", tiny factors like "we're not going to be in lockdown forever" don't matter. CEOs got dollar signs in their eyes. Shareholders demanded not only perpetual growth, but perpetual growth at this rate or better. Even though everyone with an ounce of common sense was screaming "this is an aberration, this is not sustainable" - it didn't matter. The business bros refused to believe it. This was their new normal, they were determined to prove -
And they, predictably, failed to prove it.
So now the business bros are in a pickle. They're beholden to the shareholders to do everything within their power to maintain the infinite growth they promised, in a world with finite resources. In fact, by precedent, they're beholden to this by law. Fiduciary duty has been interpreted in court to mean that, given the choice between offering a better product and ensuring maximum returns for shareholders, the latter MUST be a higher priority; reinvesting too much in the business instead of trying to make the share value increase as much as possible, as fast as possible, can result in a lawsuit - that a board member or CEO can lose, and have lost before - because it's not acting in the best interest of shareholders. If that unsustainable explosive growth was promised forever, all the more so.
And now, 2-3-4 years on, that impossibility hangs like a sword of Damocles over the heads of these media company CEOs. The market is fully saturated; the number of new potential customers left to onboard is negligible. Some companies began trying to "solve" this "problem" by violating consumer privacy and charging per household member, which (also predictably) backfired because those of us who live in reality and not statsland were not exactly thrilled about the concept of being told we couldn't watch TV with our own families. Shareholders are getting antsy, because their (however predictably impossible) infinite lockdown-level profits...aren't coming, and someone's gotta make up for that, right? So they had already started enshittifying, making excuses for layoffs, for cutting employee pay, for duty creep, for increasing crunch, for lean-staffing, for tightening turnarounds-
And that was when we got the first iterations of AI image generation that were actually somewhat useful for things like rapid first drafts, moodboards, and conceptualizing.
Lo! A savior! It might as well have been the digital messiah to the business bros, and their eyes turned back into dollar signs. More than that, they were being promised that this...both was, and wasn't art at the same time. It was good enough for their final product, or if not it would be within a year or two, but it required no skill whatsoever to make! Soon, you could fire ALL your creatives and just have Susan from accounting write your scripts and make your concept art with all the effort that it takes to get lunch from a Star Trek replicator!
This is every bit as much bullshit as the promise of infinite lockdown-level growth, of course, but with shareholders clamoring for the money they were recklessly promised, executives are looking for anything, even the slightest glimmer of a new possibility, that just might work as a life raft from this sinking ship.
So where are we now? Well, we're exiting the "fucking around" phase and entering "finding out". According to anecdotes I've read, companies are, allegedly, already hiring prompt engineers (or "prompters" - can't give them a job title that implies there's skill or thought involved, now can we, that just might imply they deserve enough money to survive!)...and most of them not only lack the skill to manually post-process their works, but don't even know how (or perhaps aren't given access) to fully use the software they specialize in, being blissfully unaware of (or perhaps not able/allowed to use) features such as inpainting or img2img. It has been observed many times that LLMs are being used to flood once-reputable information outlets with hallucinated garbage. I can verify - as can nearly everyone who was online in the aftermath of the Glasgow Willy Wonka Dashcon Experience - that the results are often outright comically bad.
To anyone who was paying attention to anything other than please-line-go-up-faster-please-line-go-please (or buying so heavily into reactionary mythologies about why AI can be dangerous in industry that they bought the tech companies' false promises too and just thought it was a bad thing), this was entirely predictable. Unfortunately for everyone in the blast radius, common sense has never been an executive's strong suit when so much money is on the line.
Much like CGI before it, what we have here is a whole new medium that is seldom being treated as a new medium with its own unique strengths, but more often being used as a replacement for more expensive labor, no matter how bad the result may be - nor, for that matter, how unjust it may be that the labor is so much cheaper.
And it's all because of timing. It's all because it came about in the perfect moment to look like a life raft in a moment of late-stage capitalist panic. Any port in a storm, after all - even if that port is a non-Euclidean labyrinth of soggy, rotten botshit garbage.
Any port in a storm, right? ...right?
All images generated using Simple Stable, under the Code of Ethics of Are We Art Yet?
#ai art#generated art#generated artwork#essays#about ai#worth a whole 'nother essay is how the tech side exists in a state that is both thriving and floundering at the same time#because the money theyre operating with is in schrodinger's box#at the same time it exists and it doesnt#theyre highly valued but usually operating at a loss#that is another MASSIVE can of worms and deserves its own deep dive
443 notes
Text
Fascinated by what the business case could be for Coca-Cola using generative AI.
400 notes
Text
marta svetek -- voice of gregory, vanny, roxanne wolf in fnaf: sb and its dlc, ruin -- has joined kellen goff in voicing her thoughts on ai reproductions of her voice. again: if you want voice actors to maintain a living wage (and their own mental health), please respect her wishes and ensure that this kind of behaviour is heavily discouraged in the future.
full text of thread below the cut for brevity:
Been getting tagged in all kinds of posts and popular TikToks of people using AI voice clones of my FNAF characters to have them sing or say all kinds of random stuff. You might think it's fun. But really it's contributing to the problem all VAs are facing in the wake of AI. The more these AI voice clones get used, the more they're normalized, and the more businesses see it as a viable alternative to real VAs when making content for you. Slowly but surely, most of your favourite VAs will be out of work. Not to mention what this will cost in terms of future generations of VAs. Incredible performances we'll never get to experience because most businesses will more often than not go for the cheapest, quickest option as soon as the technology is up to scratch. This is made even worse by the fact that we currently have very few legal protections against the unauthorized use of our voices using AI. And we're being pressured to waive those too, for a fraction of their worth. This is one of the big reasons the current strikes are a thing. If you enjoyed my work in FNAF and have any respect for the amount of time and effort VAs put into bringing characters to life, don't use AI voice clones in your fanart. I can't express how violated I feel every time I hear my voice say words that aren't my own. And just in case it wasn't clear - I have never consented to having my voice, likeness or any of my performances synthesized into AI voices/avatars etc.
#fnaf#five nights at freddy's#roxanne wolf#fnaf gregory#fnaf roxy#vanny#discourse#speaking!#no your funnie tiktoks and fangames#are not worth violating the bodies of other human beings#no not even if you're broke.#ch/rist alive guys.#vas are human beings who deserve basic respect#as well as the ability to freely live their life#and create their art#support workers. support artists. support strikes.#your actions have consequences. let them be good ones.
1K notes
Text
Tokyo Debunker WickChat Icons
as of posting this no chat with the Mortkranken ghouls has been released, so their icons are not here. If I forget to update when they come out, send me an ask!
Jin's is black, but not the default icon. An icon choice that says "do not perceive me."
Tohma's is the Frostheim crest. Very official, he probably sends out a lot of official Frostheim business group texts.
Kaito's is a doodle astronaut! He has the same astronaut on his phone case! He canonically likes stars, but I wonder if this is a doodle he made himself and put it on his phone case or something?
Luca's is maybe a family crest?
Alan's is the default icon. He doesn't know how to set one up, if I were to take an educated guess. . . .
Leo's is himself, looking cute and innocent. Pretty sure this is an altered version of the 'DATA DELETED' panel from Episode 2 Chapter 2.
Sho's is Bonnie!!! Fun fact, in Episode 2 Chapter 2 you can see that Bonnie has her name spraypainted/on a decal on her side!
Haru's is Peekaboo! Such a mommy blogger choice.
Towa's is some sort of flowers! I don't know flowers well enough to guess what kind though.
Ren's is the "NAW" poster! "NAW" is the in-world version of Jaws that Ren likes, and you can see the same poster over his bed.
Taiga's is a somewhat simplified, greyscale version of the Sinostra crest with a knife stabbing through it and a chain looping behind it. There are also roses growing behind it. Basically says "I Am The Boss Of Sinostra."
Romeo's is likely a brand logo. It looks like it's loosely inspired by the Gucci logo? I don't follow things like this, this honestly could be his family's business logo now that I think about it.
Ritsu's is just himself. Very professional.
Subaru's is hydrangeas I think! Hydrangeas in Japan represent a lot of things apparently, like fidelity, sincerity, remorse, and forgiveness, which all fit Subaru pretty well I think lol. . . .
Haku's is a riverside? I wonder if this is near where his family is from? It looks familiar, but a quick search isn't bringing anything up that would tell me where it is. . .
Zenji's is his professional logo I guess? The kanji used is 善 "Zen" from his first name! It means "good" or "right" or "virtue"!
Edward's appears to be a night sky full of stars. Not sure if the big glowing one is the moon or what. . . .
Rui's is a mixed drink! Assuming this is an actual cocktail of some sort, somebody else can probably figure out what it is. Given the AI generated nature of several images in the game, it's probably not real lol.
Lyca's is his blankie! Do not wash it. Or touch it. It's all he's got.
Yuri's is his signature! Simple and professional, but a little unique.
Jiro's is a winged asklepian/Rod of Asclepius in front of a blue cross? ⚕ Not sure what's at the top of the rod. A fancy syringe plunger maybe? It's very much a symbol of medicine and healing, so his is also very professional. Considering he sends you texts regarding your appointments, it might be the symbol of Mortkranken's medical office?
These are from the NPCs the PC was in the "Concert Buds" group chat with! The icons are pretty generic: a cat silhouette staring at a starry sky (SickleMoon), a pink, blue, and yellow gradient swirl (Pickles), a cute panda (Corby), and spider lilies (Mina). Red spider lilies in Japan are a symbol of death--and Mina of course cursed the PC, allegedly cursing them to death in a year.
#tokyo debunker#danie yells at tokyo debunker#not sure if anon meant these or not but here they are anyway lol#tdb ref
163 notes
Text
gonna start this post upfront by saying tumblr's fuckin up bad with moderation right now, regarding the wave of trans people being targeted. but i'm not here to discuss that issue, i'm going to talk about the nature of large and small social spaces on the internet
as this post rightly points out, examining our existing social network structure reveals the crux of the problem: we are tenants on someone else's service. extrapolating from that, we're the source of revenue for someone's business. under that model, there is no incentive whatsoever for a social network to apply a "fair" or "just" moderation scheme. their goal is to maximize the number of people using the service and minimize blowback from advertisers regarding "what goes on" on the site
there will not be an alternative social network that gets this right at scale, unless it meets the following criteria:
1. Has ample moderators to thoughtfully deal with user moderation cases
2. Has terms of service that you agree with
3. Has a moderation team that understands how to apply moderation according to the terms of service, and amends it when necessary
4. Does not rely on external income source to pay for the site
Number 1: An ideal social network is one that has numerous, well-treated moderators who are adept at resolving conflict. Under capitalism, this is a non-starter, as moderation is seen as a money sink that only needs to be just barely sufficient to keep the site usable.
Number 2: An ideal social network has terms of service you agree with. Unfortunately there's no set of rules everyone will find fair. While this is not a problem for the people who want to use the site, it will inevitably create an outgroup who are pushed away from the site. The obvious bad actors (nazis, terfs, etc) are pretty straightforward, but there are groups that do things you might find "unpleasant" even if you support their right to do it. Inevitably this turns into lines drawn in the sand about how visible should that content be.
Number 3: An ideal social network has moderators who have internalized the terms of service and consistently make decisions based on the TOS. If a situation comes up where there's no clear ruling in the TOS, but users need a moderation decision regarding it, the moderation team must choose how to act and then, potentially, amend the TOS if the case warrants it. Humans, though, are not robots, and no, AI is not the solution here jesus christ. There will always be variance in moderation decisions. And when it comes to amending the TOS, who's the decision maker? The sites' owners? The moderation team? Users as a whole?
Number 4: An ideal social network does not rely on an external income source to pay for the site. The site pays for itself, and its income flow covers the costs necessary with reserves for unexpected situations. Again, under capitalism this is a no-go, because a corporate social network's only goal is to maximize money. Infinite growth, not stasis. A private social network paid by members requires enough paying members to be sustainable, and costs will generally go up over time, not down. A social network that has some lump sum of cash just generating wealth is also unreliable because, first you need a large lump sum to begin with, and that mechanism is tied to the whims of the investment market. And, again, costs of the site will go up, not down.
As you've read through these you're probably reaching the conclusion: making a large-scale social network that is fair and sustainable is very, very difficult, if not impossible with our current culture and economic systems. There might be a scale where you can reach "almost fair" and "barely sustainable", but then you have to cap its growth.
So the "town square" social network is rife with problems and we need to abandon its model as the ideal network. Should we go small instead? We have a model already for that with message boards and forums. Though they weren't without their problems, they didn't have the scale that exacerbated those problems to crisis levels. Most of the time.
If you're thinking maybe you need a small network like this, free from a corporate owner (like Discord), the tools are out there for you to accomplish it. However, before you try, keep the above points in mind. Even if you're not out to create a large-scale social network, an open network will run away from you. And all of those points above are guidelines for a good online community.
You and your network of 50 friends and friends of friends might all get along together, but every single person you add increases the risk of creating moderation problems. People also change, or simply have episodes of irrational behavior. You need a dedicated team of moderators who are acting coherently for and agreeably to the community.
And you absolutely must keep this in mind: inevitably, as you add more people, someone will do vile shit. CSAM and violence type shit. You have to be prepared to encounter it. You have to have a plan to see and handle that, and the moderators who are part of your moderation team must be prepared to see and handle it too.
There's been a steady trickle of new alternative social networks (or social media networks) popping up, but you cannot expect those to be perfect havens. Tumblr was once the haven for weirdos on the internet. Now it's hostile to its core members. This is not trying to rationalize staying here because "hey, it could be worse". This is just trying to warn you to temper your expectations, especially because new networks that suddenly get a huge influx of new members hit a critical point where many falter, change, or fail.
Examine who's running those networks closely. Think critically about what they're touting as the benefits of those networks. And if you decide to join them, do not, under any case, expect those new homes to be permanent.
206 notes
Text
Hey guys~ Sorry for my late post, I was super busy today and just came home and only now was able to take a closer look at the new merch and the post that OldXian made. So, first things first - I stand corrected, lol. The leaked merch turned out to be real after all. For me personally, quite surprising because it's a LOT at once. (I mean, 58[!!] different cards/buttons/tickets/plates plus 4 special extras…. WOW!!) Also what I mentioned in my last post already - it's quite a bold move to release merch with those old motifs from early manga chapters and calling it "time mosaic" lmao.
Who knows what went on when these decisions were made at mosspaca headquarters, lol
It's safe to say the images definitely got leaked by either a hacker or a person working there. And a lot of people on xiaohongshu were able to produce replicas quickly and sell them to unsuspecting fans. Which brings me to my next point:
The quality of the merch and the quality of the drawings itself. I promised you to address this 'issue' should there ever be an official announcement about these new items and that happened today.
So. First of all - if you saw the posts on taobao or XHS yourself, where people sold fakes, or even if you only saw screenshots of them, you can tell the image quality definitely seemed off. This is most likely attributable to two things - producing merch from a small, low-quality image will make it look blurry and distorted, sometimes pixel-y. And the other reason could be upscaling. If you use shitty programs to make images bigger, it'll look blurry and unfocused. You can go back to my previous post and take a close look at the parts that I circled and highlighted to point out these issues.
Now. About the thing I initially didn't wanna address because I know some people won't like it. If you look closely at the images posted by OldXian herself today, even there some things still seem a little bit 'off' or 'rushed'. There has been speculation in the past that OX uses an AI model (probably fed/trained with her own works) to generate new images quickly and then she'd just draw over them to fix minor issues etc. Please keep in mind, this is just speculation and rumors. I am NOT saying that this is the case. But it might be a possibility. Personally, I can see quite a few artists using these methods to save time, especially when they're under high pressure. (And if they use their own models, trained with their own works only, there's nothing immoral about it, if you ask me. But that's just my personal opinion.)
So there. This might be an explanation for some of her illustrations or panels looking a bit funky sometimes. The other possibility is simply that she's rushing it when working on these things and heavy time pressure makes it a bit messy. Once again - NOT saying she definitely uses AI, just telling you about the rumors that sometimes surface on the net. That's all.
Anyway. About the merch itself. It drops in about 12h from the time I'm posting this blog. (8pm Hangzhou time)
The taobao link for the items is this for now: https://item.taobao.com/item.htm?ft=t&id=792490172782
There are 4 different options and all of them are blind boxes, meaning you'll receive totally random motifs, unless you order a whole box, which will guarantee you 1 of each regular motif. However, all 4 lots have 1-3 limited pictures, which you might be lucky enough to receive, the chance is small though. (In case you order a complete box and there's 1 or more of the limited motifs inside, it'll lack a regular motif in its place. Example: if you order a full box of 8 buttons and one of them is a limited edition button, one of the regular 8 motifs will be missing in its place. There won't be 9 buttons in the box. It will always be 8 for a full box!)
Option 1: (18 Yuan | ca. 2,70 USD each) Button badges. There are 8 regular badges and 2 limited edition badges. If you order a total of 8 pieces you will not only receive the display box, but also an acrylic standee with Tianshan riding a scooter as a special extra.
Option 2: (10 Yuan | ca. 1,50 USD each) Laser Tickets. There are 17 regular tickets and 2 limited edition tickets. If you order a total of 17 pieces you will not only receive the display box, but also a Shikishi board with Mo from the metamorphosis series as a special extra.
Option 3: (18 Yuan | ca. 2,70 USD each) Tinplates. There are 10 regular plates and 1 limited edition plate. If you order a total of 10 pieces you will not only receive the display box, but also an acrylic standee with Zhanyi cooking/cleaning as a special extra.
Option 4: (15 Yuan | ca. 2,25 USD each) Acrylic Cards. There are 16 regular cards and 3 limited edition cards. If you order a total of 16 pieces you will not only receive the display box, but also an acrylic standee with all 4 boys as chibis as a special extra. [Note about the acrylic cards: The Mo Guanshan card will be the same that was already given as a limited extra during the last round of blind box button badges!]
If you live in the US or Asia, you will most likely be able to use taobao and order directly from the mosspaca shop via the app with the link I gave you above. If you live in a country that's not covered on taobao's shipping list, you can use an agent to order the new merch. Please refer to THIS POST here where I previously explained how to use superbuy and similar shopping agents for buying things from taobao. In case you use superbuy, please keep in mind: They don't offer paypal anymore, so you'll need a credit card or bank transfer or apple pay/google pay.
Also, think carefully if you really want ALL of the merch, even if you're a die-hard fan. You saw I have put the rough amount of US Dollar with each item, so if you buy all 4 boxes, you'll have to pay over 110 USD for the merch alone, plus domestic shipping from mosspaca to the warehouse and then international shipping, which can be as high as 40 USD, depending on where you live. (And perhaps even customs fees on top of it.)
If you have any questions, please drop them below and I'll try my best to answer them~
#19 days#old xian#tianshan#mo guan shan#he tian#zhanyi#jian yi#zhan zheng xi#qiucheng#he cheng#brother qiu#she li#buzzcut#cun tou#merchandise#mosspaca
64 notes
Text
Hey, you know how I said there was nothing ethical about Adobe's approach to AI? Well whaddya know?
Adobe wants your team lead to contact their customer service to not have your private documents scraped!
This isn't the first of Adobe's always-online subscription-based products (which should not have been allowed in the first place) to have sneaky little scraping permissions auto-set to on and hidden away, but this is the first one (I'm aware of) where you have to contact customer service to turn it off for a whole team.
Now, I'm on record for saying I see scraping as fair use, and it is. But there's an aspect of that that is very essential to it being fair use: The material must be A) public facing and B) fixed published work.
All public facing published work is subject to transformative work and academic study; the use of mechanical apparatus to improve/accelerate that process does not change that principle. It's the difference between looking through someone's public instagram posts and reading through their drafts folder and DMs.
But that's not the kind of work that Adobe's interested in. See, they already have access to that work just like everyone else. But the in-progress work that Creative Cloud gives them access to, and the private work that's never published that's stored there, isn't in LAION. They want that advantage.
And that's valuable data. For an example: having a ton of snapshots of images in the process of being completed would be very handy for making an AI that takes incomplete work/sketches and 'finishes' it. That's on top of just being general dataset grist.
But that work is, definitionally, not published. There's no avenue to a fair use argument for scraping it, so they have to ask. And because they know it will be an unpopular ask, they make it a quiet opt-out.
This was sinister enough when it was Photoshop, but PDF is mainly used for official documents and forms. That's tax documents, medical records, college applications, insurance documents, business records, legal documents. And because this is a server-side scrape, even if you opt-out, you have no guarantee that anyone you're sending those documents to has done so.
So, in case you weren't keeping score, corps like Adobe, Disney, Universal, Nintendo, etc all have the resources to make generative AI systems entirely with work they 'own' or can otherwise claim rights to, and no copyright argument can stop them because they own the copyrights.
They just don't want you to have access to it as a small creator to compete with them, and if they can expand copyright to cover styles and destroy fanworks they will. Here's a pic of Adobe trying to do just that:
If you want to know more about fair use and why it applies in this circumstance, I recommend the Electronic Frontier Foundation over the Copyright Alliance.
182 notes
Text
Auto-Generated Junk Web Sites
I don't know if you heard the complaints about Google getting worse since 2018, or about Amazon getting worse. Some people think Google got worse at search. I think Google got worse because the web got worse. Amazon got worse because the supply side on Amazon got worse, but ultimately Amazon is to blame for incentivising the sale of more and cheaper products on its platform.
In any case, if you search something on Google, you get a lot of junk, and if you search for a specific product on Amazon, you get a lot of junk, even though the process that led to the junk is very different.
I don't subscribe to the "Dead Internet Theory", the idea that most online content is social media and that most social media is bots. I think Google search has gotten worse because a lot of content from as recently as 2018 got deleted, and a lot of web 1.0 and the blogosphere got deleted, comment sections got deleted, and content in the style of web 1.0 and the blogosphere is no longer produced. Furthermore, many links are now broken because they don't directly link to web pages, but to social media accounts and tweets that used to aggregate links.
I don't think going back to web 1.0 will help discoverability, and it probably won't be as profitable or even monetisable to maintain a useful web 1.0 page compared to an entertaining but ephemeral YouTube channel. Going back to Web 1.0 means more long-term after-hours labour-of-love site maintenance, and less social media posting as a career.
Anyway, Google has gotten noticeably worse since GPT-3 and ChatGPT were made available to the general public, and many people blame content farms with language models and image synthesis for this. I am not sure. If Google had started to show users meaningless AI generated content from large content farms, that means Google has finally lost the SEO war, and Google is worse at AI/language models than fly-by-night operations whose whole business model is skimming clicks off Google.
I just don't think that's true. I think the reality is worse.
Real web sites run by real people are getting overrun by AI-generated junk, and human editors can't stop it. Real people whose job it is to generate content are increasingly turning in AI junk at their jobs.
Furthermore, even people who are setting up a web site for a local business or an online presence for their personal brand/CV are using auto-generated text.
I have seen at least two different TV commercials by web hosting and web design companies that promoted this. Are you starting your own business? Do you run a small business? A business needs a web site. With our AI-powered tools, you don't have to worry about the content of your web site. We generate it for you.
There are companies out there today, selling something that's probably a re-labelled ChatGPT or LLaMA plus Stable Diffusion to somebody who is just setting up a bicycle repair shop. All the pictures and written copy on the web presence for that repair shop will be automatically generated.
We would be living in a much better world if there was a small number of large content farms and bot operators poisoning our search results. Instead, we are living in a world where many real people are individually doing their part.
165 notes
Text
A slight opportunity missed.
So, there’s not a whole lot that I’d change about Hank and Connor’s storyline in Detroit Become Human, but that being said.
The story takes place in 2038, and Hank is a Millennial.
Millennials don’t have that Gen Z knowledge of new technology, but they do have a pretty decent understanding of it and some have healthy skepticism. They aren’t like Boomers, who struggle to adjust to using new tech and fall for more scams. Like AI generated photos and scam emails.
There are Millennial parents that buy iPads for their literal infants and let them get brain rotted, but Hank doesn't strike me as the type to do that.
I think there's a missed opportunity to make Lieutenant Anderson the type of Millennial who doesn't blindly trust new products in tech. He's like the sensible Millennial who thinks linking your house up to an Alexa to control the lights, appliances, and doors is dystopian. Literally does not see a point in doing all that.
Bro probably took one look at the Metaverse trailer, knew it was gonna be dog water, and laughed at its failure. Hank probably used to mess with phone scammers like this Officer:
youtube
Another change I talked about previously is having Connor be in use before the first deviant case, helping with unrelated cases. That way it feels like the Police have a reason to trust Connor enough to include him in the Cyberlife-related cases. It's highly suspicious for them to insert a police Android during an investigation that could make or break their company.
I would write Hank as still having reservations about using Connor, since he’s skeptical of Cyberlife’s intentions. He thinks Cyberlife is using this walking, talking recording device to mine information from the Police department [Which is true].
You know that scene where Connor scans Anderson’s desk to figure out his interests and break the ice? That would literally just make Hank feel like he’s right about the data mining. I’d have him sit down, not stoked about the android but resigned to deal with it, then get progressively more frustrated by Connor’s attempts to act friendly.
Then Hank stomps to the chief’s office and starts refusing to work with the android. Only to be told he has no choice. Lieutenant Anderson disliking Connor, not just because of what happened to Cole, but because he’s smart enough to think Cyberlife is using him as spyware, would be an interesting factor in their relationship.
I think the turning point where you can actually befriend Hank would be when you show up at his house and sober him up. Because a regular machine would probably just stand in one spot and call an ambulance. But Connor very stubbornly moves Anderson to his bathroom and starts briefing him on the mission once he’s sober.
One would assume this android is programmed to wait for an ambulance and confirmation that Hank’s okay, then request a different human cop to help with the investigation that night. But Connor’s actions are much more human and “illogical” than that.
He’s impatient and stubborn, two traits that Cyberlife androids aren’t programmed with. Maybe the Traci Models, but 9 times out of 10, impatient and stubborn androids are bad for business. Any adult should know that, Hank included.
The meaner interrogation could have been written off as Cyberlife programming a bunch of dialogue into Connor based on cases and movie scenes. That happened at work, and for all Anderson knows, Connor was always programmed to be able to intimidate criminals. But it's a lot harder to write off an android dragging you to the bathtub and refusing to take no for an answer about investigating that night.
That’s human. Illogical, stubborn, overstepping his bounds… and human. Leaving the car at the murder scene, despite being commanded to stay, could have been written off as Connor’s spyware programming too. Not attitude or impatience. But in retrospect, it would make sense as part of his personality too.
#Youtube#Detroit become human#connor rk800#dbh#hank anderson#not shipping#ramblings#Hank the type of guy to never allow Siri to use his microphone#he will type out a misspelled google search if he has to
37 notes
·
View notes
Text
Anyone who has spent even 15 minutes on TikTok over the past two months will have stumbled across more than one creator talking about Project 2025, a nearly thousand-page policy blueprint from the Heritage Foundation that outlines a radical overhaul of the government under a second Trump administration. Some of the plan’s most alarming elements—including severely restricting abortion and rolling back the rights of LGBTQ+ people—have already become major talking points in the presidential race.
But according to a new analysis from the Technology Oversight Project, Project 2025 includes hefty handouts and deregulation for big business, and the tech industry is no exception. The plan would roll back environmental regulation to the benefit of the AI and crypto industries, quash labor rights, and scrap whole regulatory agencies, handing a massive win to big companies and billionaires—including many of Trump’s own supporters in tech and Silicon Valley.
“Their desire to eliminate whole agencies that are the enforcers of antitrust, of consumer protection is a huge, huge gift to the tech industry in general,” says Sacha Haworth, executive director at the Tech Oversight Project.
One of the most drastic proposals in Project 2025 suggests abolishing the Federal Reserve altogether, which would allow banks to back their money using cryptocurrencies, if they so choose. And though some conservatives have railed against the dominance of Big Tech, Project 2025 also suggests that a second Trump administration could abolish the Federal Trade Commission (FTC), which currently has the power to enforce antitrust laws.
Project 2025 would also drastically shrink the role of the National Labor Relations Board (NLRB), the independent agency that protects employees' ability to organize and enforces fair labor practices. This could have a major knock-on effect for tech companies: In January, Musk's SpaceX filed a lawsuit in a Texas federal court claiming that the NLRB was unconstitutional after the agency said the company had illegally fired eight employees who sent a letter to the company's board saying that Musk was a "distraction and embarrassment." Last week, a Texas judge ruled that the structure of the NLRB—which includes a director that can't be fired by the president—was unconstitutional, and experts believe the case may wind its way to the Supreme Court.
This proposal from Project 2025 could help quash the nascent unionization efforts within the tech sector, says Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation. “Tech, of course, relies a lot on independent contractors,” says West. “They have a lot of jobs that don't offer benefits. It's really an important part of the tech sector. And this document seems to reward those types of business.”
For emerging technologies like AI and crypto, a rollback in environmental regulations proposed by Project 2025 would mean that companies would not be accountable for the massive energy and environmental costs associated with bitcoin mining and running and cooling the data centers that make AI possible. “The tech industry can then backtrack on emission pledges, especially given that they are all in on developing AI technology,” says Haworth.
The Republican Party’s official platform for the 2024 elections is even more explicit, promising to roll back the Biden administration’s early efforts to ensure AI safety and “defend the right to mine Bitcoin.”
All of these changes would conveniently benefit some of Trump's most vocal and important backers in Silicon Valley. Trump's running mate, Republican senator J.D. Vance of Ohio, has long had connections to the tech industry, particularly through his former employer, billionaire founder of Palantir and longtime Trump backer Peter Thiel. (Thiel's venture capital firm, Founders Fund, invested $200 million in crypto earlier this year.)
Thiel is one of several other Silicon Valley heavyweights who have recently thrown their support behind Trump. In the past month, Elon Musk and David Sacks have both been vocal about backing the former president. Venture capitalists Marc Andreessen and Ben Horowitz, whose firm a16z has invested in several crypto and AI startups, have also said they will be donating to the Trump campaign.
“They see this as their chance to prevent future regulation,” says Haworth. “They are buying the ability to avoid oversight.”
Reporting from Bloomberg found that sections of Project 2025 were written by people who have worked or lobbied for companies like Meta, Amazon, and undisclosed bitcoin companies. Both Trump and independent candidate Robert F. Kennedy Jr. have courted donors in the crypto space, and in May, the Trump campaign announced it would accept donations in cryptocurrency.
But Project 2025 wouldn’t necessarily favor all tech companies. In the document, the authors accuse Big Tech companies of attempting “to drive diverse political viewpoints from the digital town square.” The plan supports legislation that would eliminate the immunities granted to social media platforms by Section 230, which protects companies from being legally held responsible for user-generated content on their sites, and pushes for “anti-discrimination” policies that “prohibit discrimination against core political viewpoints.”
It would also seek to impose transparency rules on social platforms, saying that the Federal Communications Commission (FCC) “could require these platforms to provide greater specificity regarding their terms of service, and it could hold them accountable by prohibiting actions that are inconsistent with those plain and particular terms.”
And despite Trump’s own promise to bring back TikTok, Project 2025 suggests the administration “ban all Chinese social media apps such as TikTok and WeChat, which pose significant national security risks and expose American consumers to data and identity theft.”
West says the plan is full of contradictions when it comes to its approach to regulation. It's also, he says, notably soft on industries where tech billionaires and venture capitalists have put a significant amount of money, namely AI and cryptocurrency. "Project 2025 is not just to be a policy statement, but to be a fundraising vehicle," he says. "So, I think the money angle is important in terms of helping to resolve some of the seeming inconsistencies in the regulatory approach."
It remains to be seen how impactful Project 2025 could be on a future Republican administration. On Tuesday, Paul Dans, the director of the Heritage Foundation’s Project 2025, stepped down. Though Trump himself has sought to distance himself from the plan, reporting from the Wall Street Journal indicates that while the project may be lower profile, it’s not going away. Instead, the Heritage Foundation is shifting its focus to making a list of conservative personnel who could be hired into a Republican administration to execute the party’s vision.
64 notes
·
View notes
Note
hey, same person who requested Medium/Psychic Reader. I've noticed you wrote only for 2 characters but is it okay if u do the same for Optimus and Ratchet please?
[ Please do not repost, plagiarize, or use my writing for AI! Translating my work with proper credit is acceptable, but please ask first! ]
Optimus
The rides back to base from the church are always haunted by this eerie and tense silence, and it's impossible to miss the stark difference in their demeanor before and after the church congregation. He's unsure whether it's normal for humans to seem exhausted and on edge after attending a religious congregation, and though he momentarily considers this may be part of their religious culture, he's far too concerned to simply leave it be.
He asks the question up front and directly; it's clear he's not beating around the bush, and he wants an answer because he's concerned for their wellbeing, not only as their guardian but as a friend. If they choose to lie to him, it'll be hard persuading him to leave it at that unless they're a very--and I mean very--good liar. He is by no means a lie detector, but he's good at picking up on the signs and interpreting them.
If they're honest with him, he believes them rather quickly, given the general belief in the existence of an afterlife in cybertronian culture. However, he finds it fascinating, given how it's not exactly clear that humans have a "spark" or a "soul" that would be reborn, and yet some persist after death and take the form of a ghost with the potential to grow malevolent and harmful towards the living.
When he goes to pick them up, he'll offer to escort them back home early so they can rest and relax from whatever it was they had to do that day. However, if they don't wish to be alone in that moment, he'll bring them back to base and they can sit near the rails as he works, or they can join him on a brief patrol around the area.
Ratchet
He's not one to poke and prod at business that doesn't concern him, no matter the rumors he hears about it, yet the only exception is when said business ends up hurting others. In this case it has to do with his charge. He's heard the kids gossip about the local church in town, and they talk about ghosts and the paranormal, and of course he doesn't believe in such things... Or so he thinks.
His eyes are sharper than they'd expect, and he tends to notice a lot of things that they think they're hiding well as he drives them back to base. He doesn't miss the haunted look on their face, or the tense periods of silence after attending the "assembly" at church, or the way they jump at the creaking of the metal pipes that run along the walls once they're back inside.
And one day he'll ask how "church" went after picking them up. If they're clearly reluctant to talk, he'll push, insisting that it's obvious that they don't look well after whatever they had just done, and as their guardian, he needs to know what's happened because he's concerned that it's causing them some sort of harm.
But if they're honest, he'll hesitate to respond. He's skeptical that ghosts and spirits exist, seeing as humans don't seem to have any sparks that persist after their bodies die, but if humans are the spawn of Unicron, then perhaps this is a result of that? If this is genuinely what is causing them all this stress, then he'll take their word for it. From then on, every time they finish "church", he'll offer to take them for a drive around the town to get their mind off all the stress they no doubt endured during the exorcism or whatever else it was they had to do that day.
#tfp imagines#tfp headcanons#tfp x reader#tfp optimus prime#optimus prime x reader#tfp ratchet#ratchet x reader#x reader#reader insert#self insert#weenwrites
55 notes
·
View notes
Note
I feel like some people can't be/refuse to be educated, or they're deliberately being obtuse because they're trolls, psyops, or they just fell for the trolls and psyops. But it's still good to point out where they're wrong and to give actual, you know, facts, for the benefit of other people reading who might actually be reachable.
yeah, I mean I usually ignore them because usually it's bad faith, and when a post is getting hundreds, even thousands, of notes in a day you just can't keep up with the 10-20-ish people who say something, particularly if it's in the tags (because that's just hard) or fighting in the replies, which always feels weird
But I was in a bad mood, and in general, seeing the same either bad-faith or straight-up-don't-know comment over and over and over again is very annoying
the "lol Joe Biden didn't do anything about student loans!" one is pretty annoying, since Biden has forgiven well over 100 BILLION dollars' worth of student loan debt, so like, he has done a lot on student loan debt. I'm not a big deal, but I remember I did one of my "what Biden did this week" posts and it included the student loan debt forgiveness for people who got defrauded by the Art Institutes, and a few people added their stories of being defrauded and being in debt to AI for years. The one that'll stay with me was an older guy who went back to school for a new degree to get a job in a different field kinda late in the game, in his 50s or 60s, and of course didn't get the jobs he hoped for because scam college, and he said he thought he'd die in that debt, and now it's all gone, all forgiven. So people flippantly dismissing a very real, life-changing thing is very annoying
there are a few other very common annoying ones, like "why didn't he do this when he controlled congress before!" Well, he was busy passing the biggest climate change bill any government on earth has ever passed and investing in our infrastructure for the first time since before Reagan was President (Reagan 😒). Listen, Biden passed 4 of the biggest, most transformationally progressive bills the US has seen since LBJ:
American Rescue Plan
Bipartisan Infrastructure Law
CHIPS and Science Act
Inflation Reduction Act
on top of which he passed the first gun control law out of congress in 30 years, and other things, like the Respect for Marriage Act to protect gay marriage, or making Juneteenth a federal holiday (the first new federal holiday since MLK day in 1983)
SO! That's why he didn't already do the things he wants to do in his next term: he was busy doing equally important (and, in the case of climate change, more important) things. And that's why we should all be hopeful that if Joe Biden is President with a Democratic Congress, he'll get most if not ALL the things on his agenda done, because he's fucking good at this. We haven't had a President this good at pushing bills through Congress and using every switch and lever of the federal government to make major progressive change since LBJ or FDR. I guess his big mistake was naming it something boring like "Inflation Reduction Act" and not something sexy like "New Deal" or "Great Society"
sorry to go off on a tear there, but it's just frustrating to see 40 (out of tens of thousands, really) posts saying the same dumb shit and having no real way to respond
92 notes
·
View notes