# DeepFake News Updates
Text
Noida News: To rein in the country's rising IT crimes and crack down on deepfakes, the central government will soon bring new IT rules into force. This was stated by Rajeev Chandrasekhar, Union Minister of State for Skill Development and Entrepreneurship, Electronics and Information Technology, and Jal Shakti.
On National Startup Day, during a visit to boAt's manufacturing unit in Noida, Union Minister Rajeev Chandrasekhar said that every benefit of innovation also brings challenges and harms. Our policies, our rules, and our approach, he said, stand for an open, safe, and accountable internet. It is our duty, he added, to ensure that every Indian experiences safety and trust on the internet, and the government will frame rules and laws to that end. On the deepfake issue, he said that an advisory has been notified and that new IT rules will also be notified in the coming days.
0 notes
Text
Bombay High Court Orders Removal of Deepfake Videos - July 2024 Legal Update
The Bombay High Court has ordered social media platforms to promptly remove deepfake videos concerning the NSE and its Managing Director, highlighting the judiciary's proactive stance against digital manipulations.
Bombay High Court Orders on Deepfake Videos: The Bombay High Court has issued a significant ruling mandating social media platforms to remove deepfake videos and related infringing content concerning the National Stock Exchange (NSE) and its Managing Director, Ashishkumar Chauhan. This decision comes in light of complaints regarding the dissemination of manipulated videos that falsely depict…
#ashishkumar chauhan#Bombay High Court#deepfake videos#digital rights#India#Judiciary#july 2024#Legal News#Legal Updates#nse#social media
0 notes
Text
I keep being told to "adapt" to this new AI world.
Okay.
Well first of all, I've been training myself more and more how to spot fake images. I've been reading every article with a more critical eye to see if it's full of ChatGPT's nonsense. I've been ignoring half the comments on stuff just assuming it's now mostly bots trying to make people angry enough to comment.
When it comes to the news and social issues, I've started to focus on and look for specific journalists and essayists whose work I trust. I've been working on getting better at double-checking and verifying things.
I have been working on the biggest part, and this one is a hurdle: PEOPLE. People whose names and faces I actually know. TALKING to people. Being USED to talking to people. Actual conversations with give and take that a chat bot can't emulate even if their creators insist they can.
All of this combined is helping me survive an AI-poisoned internet, because here's what's been on my mind:
What if the internet was this poisoned in 2020?
Would we have protested after George Floyd?
A HUGE number of people followed updates about it via places like Twitter and TikTok. Twitter is now a bot-hell filled with nazis and owned by a petulant anti-facts weirdo, and TikTok is embracing AI so hard that it gave up music so that its users can create deepfakes of each other.
Would information have traveled as well as it did? Now?
The answer is no. Half the people would have called the video of Floyd's death a deepfake, AI versions of it would be everywhere to sow doubt about the original, bots would be pushing hard for people to do nothing about it, half the articles written about it would be useless ChatGPT garbage, and the protests themselves… might just NOT have happened. Or at least, they'd be smaller - AND more dangerous when it comes to showing your face in a photo or video - because NOW what can people DO with that photo and video? The things I mentioned earlier will help going forward. Discernment. Studying how the images look, how the fake audio sounds, how the articles often talk in circles and are littered with contradictory misinformation. And PEOPLE.
PEOPLE is the biggest one here, because if another 2020-level event happens where we want to be protesting on the streets by the thousands, our ONLY recourse right now is to actually connect with people. Carefully of course, it's still a protest, don't use Discord or something, they'll turn your chats over to cops.
But what USED to theoretically be "simple" when it came to leftist organizing ("well my tweet about it went viral, I helped!") is just going to require more WORK now, and actual personal communication and connection and community. I know if you're reading this and you're American, you barely know what that feels like, and I get it. We're deprived of it very much on purpose, but the internet is becoming more and more hostile to humanity itself. When it comes to connecting to other humans… we now have to REALLY connect to other humans.
I'm sorry. This all sucks. But adapting usually does.
484 notes
·
View notes
Text
In early 2020, deepfake expert Henry Ajder uncovered one of the first Telegram bots built to “undress” photos of women using artificial intelligence. At the time, Ajder recalls, the bot had been used to generate more than 100,000 explicit photos—including those of children—and its development marked a “watershed” moment for the horrors deepfakes could create. Since then, deepfakes have become more prevalent, more damaging, and easier to produce.
Now, a WIRED review of Telegram communities involved with the explicit nonconsensual content has identified at least 50 bots that claim to create explicit photos or videos of people with only a couple of clicks. The bots vary in capabilities, with many suggesting they can “remove clothes” from photos while others claim to create images depicting people in various sexual acts.
The 50 bots list more than 4 million “monthly users” combined, according to WIRED's review of the statistics presented by each bot. Two bots listed more than 400,000 monthly users each, while another 14 listed more than 100,000 members each. The findings illustrate how widespread explicit deepfake creation tools have become and reinforce Telegram’s place as one of the most prominent locations where they can be found. However, the snapshot, which largely encompasses English-language bots, is likely a small portion of the overall deepfake bots on Telegram.
“We’re talking about a significant, orders-of-magnitude increase in the number of people who are clearly actively using and creating this kind of content,” Ajder says of the Telegram bots. “It is really concerning that these tools—which are really ruining lives and creating a very nightmarish scenario primarily for young girls and for women—are still so easy to access and to find on the surface web, on one of the biggest apps in the world.”
Explicit nonconsensual deepfake content, which is often referred to as nonconsensual intimate image abuse (NCII), has exploded since it first emerged at the end of 2017, with generative AI advancements helping fuel recent growth. Across the internet, a slurry of “nudify” and “undress” websites sit alongside more sophisticated tools and Telegram bots, and are being used to target thousands of women and girls around the world—from Italy’s prime minister to schoolgirls in South Korea. In one recent survey, a reported 40 percent of US students were aware of deepfakes linked to their K-12 schools in the last year.
The Telegram bots identified by WIRED are supported by at least 25 associated Telegram channels—where people can subscribe to newsfeed-style updates—that have more than 3 million combined members. The Telegram channels alert people about new features provided by the bots and special offers on “tokens” that can be purchased to operate them, and often act as places where people using the bots can find links to new ones if they are removed by Telegram.
After WIRED contacted Telegram with questions about whether it allows explicit deepfake content creation on its platform, the company deleted the 75 bots and channels WIRED identified. The company did not respond to a series of questions or comment on why it had removed the channels.
Additional nonconsensual deepfake Telegram channels and bots later identified by WIRED show the scale of the problem. Several channel owners posted that their bots had been taken down, with one saying, “We will make another bot tomorrow.” Those accounts were also later deleted.
Hiding in Plain Sight
Telegram bots are, essentially, small apps that run inside of Telegram. They sit alongside the app’s channels, which can broadcast messages to an unlimited number of subscribers; groups where up to 200,000 people can interact; and one-to-one messages. Developers have created bots where people take trivia quizzes, translate messages, create alerts, or start Zoom meetings. They’ve also been co-opted for creating abusive deepfakes.
Due to the harmful nature of the deepfake tools, WIRED did not test the Telegram bots and is not naming specific bots or channels. While the bots had millions of monthly users, according to Telegram’s statistics, it is unclear how many images the bots may have been used to create. Some users, who could be in multiple channels and bots, may have created zero images; others could have created hundreds.
Many of the deepfake bots viewed by WIRED are clear about what they have been created to do. The bots’ names and descriptions refer to nudity and removing women’s clothes. “I can do anything you want about the face or clothes of the photo you give me,” the creators of one bot wrote. “Experience the shock brought by AI,” another says. Telegram can also show “similar channels” in its recommendation tool, helping potential users bounce between channels and bots.
Almost all of the bots require people to buy “tokens” to create images, and it is unclear if they operate in the ways they claim. As the ecosystem around deepfake generation has flourished in recent years, it has become a potentially lucrative source of income for those who create websites, apps, and bots. So many people are trying to use “nudify” websites that Russian cybercriminals, as reported by 404Media, have started creating fake websites to infect people with malware.
While the first Telegram bots, identified several years ago, were relatively rudimentary, the technology needed to create more realistic AI-generated images has improved—and some of the bots are hiding in plain sight.
One bot with more than 300,000 monthly users did not reference any explicit material in its name or landing page. However, once a user clicks to use the bot, it claims it has more than 40 options for images, many of which are highly sexual in nature. That same bot has a user guide, hosted on the web outside of Telegram, describing how to create the highest-quality images. Bot developers can require users to accept terms of service, which may forbid users from uploading images without the consent of the person depicted or images of children, but there appears to be little or no enforcement of these rules.
Another bot, which had more than 38,000 users, claimed people could send six images of the same man or woman—it is one of a small number that claims to create images of men—to “train” an AI model, which could then create new deepfake images of that individual. Once users joined one bot, it would present a menu of 11 “other bots” from the creators, likely to keep systems online and try to avoid removals.
“These types of fake images can harm a person’s health and well-being by causing psychological trauma and feelings of humiliation, fear, embarrassment, and shame,” says Emma Pickering, the head of technology-facilitated abuse and economic empowerment at Refuge, the UK’s largest domestic abuse organization. “While this form of abuse is common, perpetrators are rarely held to account, and we know this type of abuse is becoming increasingly common in intimate partner relationships.”
As explicit deepfakes have become easier to create and more prevalent, lawmakers and tech companies have been slow to stem the tide. Across the US, 23 states have passed laws to address nonconsensual deepfakes, and tech companies have bolstered some policies. However, apps that can create explicit deepfakes have been found in Apple and Google’s app stores, explicit deepfakes of Taylor Swift were widely shared on X in January, and Big Tech sign-in infrastructure has allowed people to easily create accounts on deepfake websites.
Kate Ruane, director of the Center for Democracy and Technology’s free expression project, says most major technology platforms now have policies prohibiting nonconsensual distribution of intimate images, with many of the biggest agreeing to principles to tackle deepfakes. “I would say that it’s actually not clear whether nonconsensual intimate image creation or distribution is prohibited on the platform,” Ruane says of Telegram’s terms of service, which are less detailed than other major tech platforms.
Telegram’s approach to removing harmful content has long been criticized by civil society groups, with the platform historically hosting scammers, extreme right-wing groups, and terrorism-related content. Since Telegram CEO and founder Pavel Durov was arrested and charged in France in August relating to a range of potential offenses, Telegram has started to make some changes to its terms of service and provide data to law enforcement agencies. The company did not respond to WIRED’s questions about whether it specifically prohibits explicit deepfakes.
Execute the Harm
Ajder, the researcher who discovered deepfake Telegram bots four years ago, says the app is almost uniquely positioned for deepfake abuse. “Telegram provides you with the search functionality, so it allows you to identify communities, chats, and bots,” Ajder says. “It provides the bot-hosting functionality, so it's somewhere that provides the tooling in effect. Then it’s also the place where you can share it and actually execute the harm in terms of the end result.”
In late September, several deepfake channels started posting that Telegram had removed their bots. It is unclear what prompted the removals. On September 30, a channel with 295,000 subscribers posted that Telegram had “banned” its bots, but it posted a new bot link for users to use. (The channel was removed after WIRED sent questions to Telegram.)
“One of the things that’s really concerning about apps like Telegram is that it is so difficult to track and monitor, particularly from the perspective of survivors,” says Elena Michael, the cofounder and director of #NotYourPorn, a campaign group working to protect people from image-based sexual abuse.
Michael says Telegram has been “notoriously difficult” to discuss safety issues with, but notes there has been some progress from the company in recent years. However, she says the company should be more proactive in moderating and filtering out content itself.
“Imagine if you were a survivor who’s having to do that themselves, surely the burden shouldn't be on an individual,” Michael says. “Surely the burden should be on the company to put something in place that's proactive rather than reactive.”
56 notes
·
View notes
Text
We knew this was coming, and it's here...
Teen Girls Confront an Epidemic of Deepfake Nudes in Schools
Using artificial intelligence, middle and high school students have fabricated explicit images of female classmates and shared the doctored pictures.
April 8, 2024
After boys at Francesca Mani’s high school fabricated and shared explicit images of girls last year, she and her mother, Dorota, began urging schools and legislators to enact tough safeguards. (Photo: Shuran Huang)
Westfield Public Schools held a regular board meeting in late March at the local high school, a red brick complex in Westfield, N.J., with a scoreboard outside proudly welcoming visitors to the “Home of the Blue Devils” sports teams.
But it was not business as usual for Dorota Mani.
In October, some 10th-grade girls at Westfield High School — including Ms. Mani’s 14-year-old daughter, Francesca — alerted administrators that boys in their class had used artificial intelligence software to fabricate sexually explicit images of them and were circulating the faked pictures. Five months later, the Manis and other families say, the district has done little to publicly address the doctored images or update school policies to hinder exploitative A.I. use.
“It seems as though the Westfield High School administration and the district are engaging in a master class of making this incident vanish into thin air,” Ms. Mani, the founder of a local preschool, admonished board members during the meeting.
In a statement, the school district said it had opened an “immediate investigation” upon learning about the incident, had immediately notified and consulted with the police, and had provided group counseling to the sophomore class.
Tenth-grade girls at Westfield High School in New Jersey learned last fall that male classmates had fabricated sexually explicit images of them and shared them. (Photo: Peter K. Afriyie/Associated Press)
“All school districts are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere,” Raymond González, the superintendent of Westfield Public Schools, said in the statement.
Blindsided last year by the sudden popularity of A.I.-powered chatbots like ChatGPT, schools across the United States scurried to contain the text-generating bots in an effort to forestall student cheating. Now a more alarming A.I. image-generating phenomenon is shaking schools.
Boys in several states have used widely available “nudification” apps to pervert real, identifiable photos of their clothed female classmates, shown attending events like school proms, into graphic, convincing-looking images of the girls with exposed A.I.-generated breasts and genitalia. In some cases, boys shared the faked images in the school lunchroom, on the school bus or through group chats on platforms like Snapchat and Instagram, according to school and police reports.
Such digitally altered images — known as “deepfakes” or “deepnudes” — can have devastating consequences. Child sexual exploitation experts say the use of nonconsensual, A.I.-generated images to harass, humiliate and bully young women can harm their mental health, reputations and physical safety as well as pose risks to their college and career prospects. Last month, the Federal Bureau of Investigation warned that it is illegal to distribute computer-generated child sexual abuse material, including realistic-looking A.I.-generated images of identifiable minors engaging in sexually explicit conduct.
Yet the student use of exploitative A.I. apps in schools is so new that some districts seem less prepared to address it than others. That can make safeguards precarious for students.
“This phenomenon has come on very suddenly and may be catching a lot of school districts unprepared and unsure what to do,” said Riana Pfefferkorn, a research scholar at the Stanford Internet Observatory, who writes about legal issues related to computer-generated child sexual abuse imagery.
At Issaquah High School near Seattle last fall, a police detective investigating complaints from parents about explicit A.I.-generated images of their 14- and 15-year-old daughters asked an assistant principal why the school had not reported the incident to the police, according to a report from the Issaquah Police Department. The school official then asked “what was she supposed to report,” the police document said, prompting the detective to inform her that schools are required by law to report sexual abuse, including possible child sexual abuse material. The school subsequently reported the incident to Child Protective Services, the police report said. (The New York Times obtained the police report through a public-records request.)
In a statement, the Issaquah School District said it had talked with students, families and the police as part of its investigation into the deepfakes. The district also “shared our empathy,” the statement said, and provided support to students who were affected.
The statement added that the district had reported the “fake, artificial-intelligence-generated images to Child Protective Services out of an abundance of caution,” noting that “per our legal team, we are not required to report fake images to the police.”
At Beverly Vista Middle School in Beverly Hills, Calif., administrators contacted the police in February after learning that five boys had created and shared A.I.-generated explicit images of female classmates. Two weeks later, the school board approved the expulsion of five students, according to district documents. (The district said California’s education code prohibited it from confirming whether the expelled students were the students who had manufactured the images.)
Michael Bregy, superintendent of the Beverly Hills Unified School District, said he and other school leaders wanted to set a national precedent that schools must not permit pupils to create and circulate sexually explicit images of their peers.
“That’s extreme bullying when it comes to schools,” Dr. Bregy said, noting that the explicit images were “disturbing and violative” to girls and their families. “It’s something we will absolutely not tolerate here.”
Schools in the small, affluent communities of Beverly Hills and Westfield were among the first to publicly acknowledge deepfake incidents. The details of the cases — described in district communications with parents, school board meetings, legislative hearings and court filings — illustrate the variability of school responses.
The Westfield incident began last summer when a male high school student asked to friend a 15-year-old female classmate on Instagram who had a private account, according to a lawsuit against the boy and his parents brought by the young woman and her family. (The Manis said they are not involved with the lawsuit.)
After she accepted the request, the male student copied photos of her and several other female schoolmates from their social media accounts, court documents say. Then he used an A.I. app to fabricate sexually explicit, “fully identifiable” images of the girls and shared them with schoolmates via a Snapchat group, court documents say.
Westfield High began to investigate in late October. While administrators quietly took some boys aside to question them, Francesca Mani said, they called her and other 10th-grade girls who had been subjected to the deepfakes to the school office by announcing their names over the school intercom.
That week, Mary Asfendis, the principal of Westfield High, sent an email to parents alerting them to “a situation that resulted in widespread misinformation.” The email went on to describe the deepfakes as a “very serious incident.” It also said that, despite student concern about possible image-sharing, the school believed that “any created images have been deleted and are not being circulated.”
Dorota Mani said Westfield administrators had told her that the district suspended the male student accused of fabricating the images for one or two days.
Soon after, she and her daughter began publicly speaking out about the incident, urging school districts, state lawmakers and Congress to enact laws and policies specifically prohibiting explicit deepfakes.
“We have to start updating our school policy,” Francesca Mani, now 15, said in a recent interview. “Because if the school had A.I. policies, then students like me would have been protected.”
Parents including Dorota Mani also lodged harassment complaints with Westfield High last fall over the explicit images. During the March meeting, however, Ms. Mani told school board members that the high school had yet to provide parents with an official report on the incident.
Westfield Public Schools said it could not comment on any disciplinary actions for reasons of student confidentiality. In a statement, Dr. González, the superintendent, said the district was strengthening its efforts “by educating our students and establishing clear guidelines to ensure that these new technologies are used responsibly.”
Beverly Hills schools have taken a stauncher public stance.
When administrators learned in February that eighth-grade boys at Beverly Vista Middle School had created explicit images of 12- and 13-year-old female classmates, they quickly sent a message — subject line: “Appalling Misuse of Artificial Intelligence” — to all district parents, staff, and middle and high school students. The message urged community members to share information with the school to help ensure that students’ “disturbing and inappropriate” use of A.I. “stops immediately.”
It also warned that the district was prepared to institute severe punishment. “Any student found to be creating, disseminating, or in possession of AI-generated images of this nature will face disciplinary actions,” including a recommendation for expulsion, the message said.
Dr. Bregy, the superintendent, said schools and lawmakers needed to act quickly because the abuse of A.I. was making students feel unsafe in schools.
“You hear a lot about physical safety in schools,” he said. “But what you’re not hearing about is this invasion of students’ personal, emotional safety.”
Natasha Singer writes about technology, business and society. She is currently reporting on the far-reaching ways that tech companies and their tools are reshaping public schools, higher education and job opportunities.
A version of this article appears in print on April 11, 2024, Section B, Page 1 of the New York edition with the headline: Fake A.I. Nudes Create Crisis in Schools.
64 notes
·
View notes
Text
if you’re wondering who the Koharu part of “koharumatingpress” is, it is of course a 15 year old female character from Nexon’s notorious gacha Blue Archive.
20k+ morons on this “antifeminist discord”: your “leader” writes shit like this and names himself after a sex act with a teenage slot-machine character; the other “New Men's Solidarity” leader dresses up like another cartoon character, the Joker. None of them start these groups to help or better themselves; they just constantly attack women, often violently. They want to start a “blitzkrieg” because laws are being made against deepfake sexual exploitation.
Korean women get the word out online about the (again, often violent) rampant misogyny in South Korea. They campaign tirelessly for laws against deepfakes, they set up and attend protests both online and offline, they boycott misogynistic companies, they form groups to better handle situations, they take the time to constantly post updates in both Korean and English regarding news there, and so much more. And this is the male reaction.
#south korea#Korean feminism#Misogyny#korean incel#Far right#idk how to search for this post…I’ll add the discord name cuz I’m not writing that username as a tag lol#Antifeminist discord channel coalition#Nexon
35 notes
·
View notes
Text
The website https://whichfaceisreal.com tests your ability to detect A.I.-generated faces. In the pictures above, at least 4 of the faces are real. But which ones?
The website first surfaced about 5 years ago, but it’s new to me, so I thought I’d give it a try. I was doing pretty well at guessing the real faces vs. the A.I.-generated fakes … until I started getting most of them wrong and doing really badly.
Anyway, it’s on all of us now. Confirmation bias is real, and you are not immune to propaganda (me neither). Don’t believe everything you see or hear. Slow down before sharing a viral social media post, because it just might be fake. Look for reliable sources; sources that cite sources and quickly post retractions if they’ve made a mistake.
We are going to have to step up our media literacy game (and laws) regarding deepfakes. I recently posted about deepfakes of Richard Nixon and Joe Biden, but those were illustrative; intentionally made with poor quality to drive home a point. There have been more insidious deepfakes, like the one with Bella Hadid saying she supports Israel, even though she’s known for having a history of outspoken support of Palestine. 🇵🇸
With so many important democratic elections coming up this year, I think that we’re going to be missing “the good old days” when all you had to do was be able to detect if something was photoshopped or not. We are going to need a revised + updated version of media literacy for the new century.
This post is not interactive, but if you’re curious about the faces shown above, the answers (real or fake) are beneath the page break. But before you look at the answers, did you guess correctly, or were you fooled?
#politics#deepfakes#misinformation#media literacy#disinformation#chatgpt#palestine#gaza#sudan#congo#election 2024
40 notes
·
View notes
Text
Wrights and Wrongs
WHAT IS GOING ON IN THE HOUSE OF COMMONS?
Guys, new update...
I've just been reading the PBS promotional stuff for Miss Scarlet and the Deepfake Duke.
This story has done a backflip with Eliza's character. She never, ever, ever gave William a real chance and always struggled to show her softer side around him.
Now, the writers and actors are saying that Eliza is going to show her "softer side", have a "blossoming relationship" with Alexander Flake, and "go on a date".
Seriously?!
I have no words...
Let me know what y'all think!
9 notes
·
View notes
Text
youtube
So, I recently saw a new Critical Drinker video about how major gaming studios are actively making female characters uglier out of some collective DEI delusion that anything else is "non-inclusive", and honestly? I wrote it off. I assumed that he was combining a couple of big instances of this (Lara Croft was the big one, and I know Tifa in FF7 Remake got something like it as well) with the masses of dumb shit on Twitter, and assuming that meant this was a wider trend when in fact that's not enough to suppose a connection.
Literally the day afterwards I see this, which, if you don't know, was a stealth-drop update to Pokemon Go that basically ruined the characters. Assuming the account shown in this video isn't some sort of deepfake troll, there apparently really are DEI groups out there pushing for this exact thing, just like Drinker said. It does suck that this had to happen to a Nintendo series when Nintendo has been one of the relatively few big developers who haven't gone in for the culture-war cash-grab attempts, but I'd say this still isn't the biggest issue with Pokemon and third-party devs, given that their MOBA is made by a Chinese firm that's legally required under its home country's law to give the Party your data at any time, for any reason.
#pokemon go#niantic#the critical drinker#not exaggerating in the slightest about Tencent btw#if you play Pokemon Unite your data must be turned over to the Chinese government should it ever ask Tencent to do so#Youtube
20 notes
·
View notes
Text
Site Update - 8/3/2023
Hi Pillowfolks!
How has your summer (or winter) been? Our team is back with a new update! As always, we will be monitoring closely for any unexpected bugs after this release, so please let us know if you run into any.
New Features/Improvements
Premium Subscription Updates - Per the request of many users, we’ve made a number of updates to creating & editing Premium Subscriptions.
Users can now make credit-only Subscriptions without needing to enter payment information, if your credit balance can fully cover at least one payment of the subscription's feature fees.
Users can also now apply a custom portion of their available accrued credit each month; for example, if the cost of features is $5.97 every month, you can choose to cover only part of that cost with your credit.
Users who have recently canceled a Subscription no longer have to wait until the payment period expires to create a new Subscription.
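The credit-splitting behavior described above can be sketched as a small function. This is purely a hypothetical illustration of the arithmetic: the function name, cent-based amounts, and clamping rules are our assumptions, not Pillowfort's actual implementation.

```python
def split_payment(monthly_fee_cents, credit_balance_cents, credit_to_apply_cents):
    """Return (credit_used, card_charge, remaining_balance) in cents."""
    # You can't apply more credit than you have, or more than the fee itself.
    credit_used = min(credit_to_apply_cents, credit_balance_cents, monthly_fee_cents)
    card_charge = monthly_fee_cents - credit_used
    remaining_balance = credit_balance_cents - credit_used
    return credit_used, card_charge, remaining_balance

# A $5.97 fee with $10.00 of credit, choosing to apply $3.00:
print(split_payment(597, 1000, 300))  # (300, 297, 700)

# A credit-only payment: $10.00 of credit fully covers the $5.97 fee.
print(split_payment(597, 1000, 597))  # (597, 0, 403)
```

In this sketch, a "credit-only Subscription" is simply the case where `card_charge` comes out to zero, which matches the rule that the balance must cover at least one full payment.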
To access Pillowfort Premium, click on the “PF Premium” icon located in the left-hand sidebar. This page will allow you to convert your legacy donations to Pillowfort premium, review & edit your subscriptions, and more.
Premium Image Upload Limit Increase: Good news! We’ve raised the limit for Premium Image Uploads to 6MB (formerly 4MB), at no extra cost! We may raise the limit further depending on how the subscriptions service performs and how our data fees fare.
Premium Subscription Landing Page & Frames Preview: We improved what users who do not have a Premium Subscription see on the Subscription management page to provide more information about the Premium features available. This includes the ability to preview all premium frames available.
New Premium Frame: We’ve released a new premium avatar frame! We hope you like it. We also have more premium avatar frames in the works that will be released later this month.
Modless / Abandoned Communities Update - Our Developers have made changes to our admin tools to allow our Customer Service Team to be able to change Community Ownership and add/remove Moderators to help revitalize abandoned and modless Communities. We will make a post soon explaining the process for requesting to become a Mod and/or Owner of a Community.
Bug Fixes/Misc Improvements
Some users were not receiving confirmation e-mails when their Pillowfort Premium Subscription was successfully charged. This should now be fixed. Please let us know if you are still not receiving those e-mails.
Related to the above bug, some users who were using their credit balance in Subscriptions were not seeing their credit balance being properly updated to reflect the credit used in those Subscriptions. We have now synced these Subscriptions, so you should see a decrease in your account’s credit balance if you are using that credit in a Subscription.
Fixed a bug where users were unable to delete their accounts in certain scenarios.
Fixed bug that displayed errors on the log-in page incorrectly.
Made improvements to how post images load on Pillowfort, to reduce image loading errors and improve efficiency for users with slow web connections.
Fixed a bug causing the checkmark on avatar frame selection in Settings to display improperly.
Terms of Service Update
We have made a small update to our ToS to specify that “deepfakes” and other digitally-altered pornographic images of real people are considered involuntary pornography and thus prohibited.
And that’s all for today! With this update out, our team will now be working full steam on post drafts, post scheduling & queuing, and the progressive web app! Be sure to keep checking back on our Pillowfort Dev Blog for further status updates on upcoming features.
Best,
Staff
#pillowfort.social#site update#pfstaffalert#pillowfort blogging#pillowfort premium#communities#bug fixes#long post
56 notes
Text
https://www.reuters.com/world/israel-hezbollah-live-updates-israeli-attacks-lebanon-overnight-2024-09-26/
5 notes
Note
isn't pixiv banning AI art and deepfakes a good thing?
Agreed, but the other rules mentioned in the new update are just... ridiculous. You are not even allowed to depict death
YOU ARE NOT EVEN ALLOWED TO WRITE about death :/ also, number 5 is the same bs reasoning that any totalitarian government uses to ban art it deems 'immoral'. For god's sake, Hitler banned Picasso's work because he deemed cubism degenerate...
8 notes
Text
GJ and ZZH Updates — September 10-16
previous week || all posts || following week
This is part of a weekly series collecting updates from and relating to Gong Jun and Zhang Zhehan.
This post is not wholly comprehensive and is intended as an overview, links provided lead to further details. Dates are in accordance with China Standard Time, the organization is chronological. My own biases on some things are reflected here. Anything I include that is not concretely known is indicated as such, and you’re welcome to do your own research and draw your own conclusions as you see fit. Please let me know if you have any questions, comments, concerns, or additions. :)
[Glossary of names and terms] [Masterlist of my posts about the situation with Zhang Zhehan]
09-10 → Gong Jun Outdoor Office posted a photo of him from the Jay Chou concert he had attended on 09-07.
→ 361° posted two photo ads featuring Gong Jun.
→ Tissot posted a promotional video spoken by Gong Jun. (1129 kadian) Caption includes: "Because of love, we keep moving forward and race against time. Those who cherish time will also be loved by time."
→ The Instagram posted seven photos of "Zhang Zhehan" at a river.
→ Zhenguoli posted a promotional video with a voiceover by Gong Jun.
09-11 → Tissot posted four photo ads featuring Gong Jun. (1129 kadian)
→ The Instagram posted ten photos of "Zhang Zhehan" and scenery.
→ Gong Jun Outdoor Office posted a photo of him at the airport. Caption: "Let's go!"
09-12 → Gong Jun arrived at JFK airport in New York, where he was welcomed by quite a large group of fans. Fan Observations: - He turned around for several seconds to look at a pair of wenzhou cosplayers. [video] [stills] He reportedly turned to look at them again as his car was driving away. 🥺 - Throughout the week fans bought a number of screens in Times Square to display videos of Gong Jun, many also including Zhang Zhehan. Videos: [1] [2] [3] [4] [5] [6] There were some incidents of solo fans trying to report CPF-made videos to get them taken down; as far as I've seen, none succeeded in anything more than coming across as bitter.
→ The Instagram posted eight photos of "Zhang Zhehan" and scenery.
→ Gong Jun's studio posted a photo ad of him for the Asian Games and 361°. Caption: "The grand event is approaching, let us preface the past events and focus on the future. Let’s walk through the Asian Games time with @ Gong Jun Simon , retrieve the love in our memories, and light up the flames of the Hangzhou Asian Games. Cheer for the Hangzhou Asian Games!"
09-13 → Chriselle Lim, an American influencer, posted several videos to her Instagram story that included Gong Jun, referring to him. The two were reportedly filming something together for Net-A-Porter.
→ Tissot posted a photo ad featuring Gong Jun. (1129 kadian)
→ The Instagram posted ten photos of "Zhang Zhehan".
09-14 → Tissot posted a photo ad featuring Gong Jun. (1129 kadian)
→ One of the scam gang's body doubles was filmed at airports flying to Kuala Lumpur. MYFM posted several photos from this.
→ #ZhangZhehanKLConcert trended on Twitter and would continue to do so periodically throughout the week.
09-15 → Tissot posted a photo ad featuring Gong Jun. (1129 kadian) They later posted a promotional video spoken by him.
→ The Instagram posted eight photos of "Zhang Zhehan".
→ Zhang Sanjian held a livestream on TikTok, which included him behaving quite rudely on multiple fronts. [clip of an example] The platform allows "livestreams" to be prerecorded, though that doesn't seem to have been the case here; as has been mentioned numerous times by now, deepfaking (both faces and voices) can be done in real time. (Photos of ZSJ at the bottom of this post; those with sensitive stomachs, avert thine eyes.)
09-16 → Zhenguoli posted a photo ad featuring Gong Jun.
→ Net-A-Porter posted two photos of Gong Jun in New York.
→ Zhang Sanjian held another livestream, this one on Instagram. The same comments apply as with the TikTok one, except this time he somehow looked even less like Zhang Zhehan than usual (different body double, maybe).
I usually avoid including Zhang Sanjian pictures in these posts, but just to make a point, here are various people the scam has tried to pass off as Zhang Zhehan; far left is from the Instagram livestream. Addition 09-29: Clip from the livestream where you can clearly hear a voice change as the deepfake glitches.
→ The Instagram posted six photos of "Zhang Zhehan" at a stage rehearsal.
→ #ZhangZhehan trended on Twitter.
Additional Reading: → A reminder of the upcoming Mid-Autumn fandom event on 09-29! The deadline for participation in the vlog part is 09-20.
previous week || all posts || following week
This post was last edited 2023-09-29.
23 notes
Text
Everyone, it seems, wants a piece of Moo Deng. The baby pygmy hippo is barely two months old and already famous. So beloved on TikTok, Instagram, and X is Moo Deng that workers at Khao Kheow Open Zoo, the place in Thailand where she was born, are doing all they can to keep up with her fans’ appetite for more. They post videos, photos, updates. They also welcome thousands of visitors a day and find themselves having to defend Moo Deng when tourists throw shells at her while she’s just trying to chill.
Moo Deng, a name that means “bouncy pig,” has probably been all over your timeline lately—on Sephora makeup tutorials, on X’s main feed. She was born in July and in the past few weeks has become the Internet’s New Favorite Animal. A tradition almost as old as the internet itself, Favorite Animals—Maru, any of the dogs on the shiba inu puppy cam, those two llamas who just happened to run free the same day everyone was trying to decide what color The Dress was—come into the public consciousness seemingly out of nowhere. Some, like Doge, stick around; others disappear, or simply outgrow their cuteness, within a matter of weeks.
All of which makes capitalizing on their fame a matter of some urgency. It seems heartless to think of animals this way, but if their owners don’t, someone else will. Perhaps that’s why zoo director Narongwit Chodchoi told the Associated Press this week that the zoo has begun the process of trying to trademark and patent the hippo to avoid her likeness getting used by anyone else—a smart move considering Moo Deng mugs, T-shirts, and other merch are already popping up online. Income from these efforts, Chodchoi told the wire service, could “support activities that will make the animals’ lives better.”
Moo Deng might need it. Fandom is getting a bit out of control these days. As pop stars like Chappell Roan have amassed online and offline fame, they’ve also had to use their platforms to ask for space from boundary-less fans and stalkers. Social media celebs like Drew Afualo, on whose podcast Roan appeared to talk about the subject, also tell stories of being approached in public by people who simply know them from the internet.
It may seem odd to compare them to Favorite Animals, but the ways in which people feel entitled to their time aren’t that far apart. Everyone wants something for the ’gram, even if that something is a living being with its own sense of agency. One of Moo Deng’s most popular TikToks has 34 million views, and zoo staff have had to limit her visiting time to five minutes on Saturdays and Sundays to keep too many people from trying to get content of their own.
Trademark protections may be the best way for Moo Deng’s caretakers to ensure others don’t cash in on her viral fame. When Jools Lebron made efforts to trademark her “very demure, very mindful” meme, one of the hurdles that emerged was that it’s hard to claim ownership of a phrase. As Kate Miltner, a lecturer in data, AI, and society at the University of Sheffield’s Information School, told me at the time, memes with audiovisual elements, like Nyan Cat or Grumpy Cat, are easier to register. “People will invariably try to make money off of viral or memetic content, as we've seen time and again,” Miltner says when asked about trademarking the baby hippo, adding that the Cincinnati Zoo has already done this with Fiona the Hippo. “It's smart of the Khao Kheow Open Zoo to (at least try to) ensure that they’re the ones that do so.”
Lebron seems to be figuring out how to market her moment, and Moo Deng’s keepers seem to be doing the same. Being online in 2024 means living in a state of near-constant vulnerability. You could get hacked or turned into a humiliating deepfake. Having a public opinion on a video game or The Acolyte could turn your mentions into a hellscape. And that’s what happens to filthy casuals. When you’re an ascending pop star or a baby hippo the weak spots multiply, because the world always seems hungry for more of you. It’s possible to protect yourself, even monetize yourself, but you can lose yourself, too.
Like Boaty McBoatface, Moo Deng was named by the internet. The zoo held a poll on social media. Unlike McBoatface, Moo Deng is a living thing; she’s a member of an endangered species and needs protecting. Moo Deng is more than a meme.
20 notes
Text
I saw my first AI "fic" today.
The user who posted the AI creation added "goes to show we still need writers", which I took to mean they were uploading it to show how bad the AI creation was. Because of that, I felt more okay opening it for investigatory purposes (I made a post a while ago saying that AI would produce a facsimile of creativity, and I was curious how correct that would prove to be).
In a word, I was extremely correct. The fic had no or minimal scene intros or transitions, banal dialogue, characters flat enough to feel OOC, silly-to-nonexistent conflicts, and it "tells" everything (no showing, no extended metaphors or insightful similes).
Scarily, though, each snippet was coherent. Like, sure, the characters were flat, the conflicts (when they existed) were one-dimensional, and the turns of phrase often felt cliche (which is exactly what one would expect from an AI, which formulates sentences based on how often words appear next to each other). But the scenes themselves made sense. If "AI" wasn't in the title, I wouldn't have been able to tell the difference between this and a 15-year-old's first fic.
The danger, of course, is that this is how AI starts out. The more people use these tools and input content into them, the better they will get at producing this facsimile of a creative work. AI could probably learn to produce some or all of the common components of storytelling that I've noticed this and other AI products lack. The scarier danger for me is when it becomes good enough to justify publishing houses and studios attempting to replace expensive human writers. (And, to a degree, clogging up AO3... although I hope enough people in fandom are here for the human connections that that wouldn't be a concern...)
Look, AI will never keep human writers out of fandom. There's no finite number of spots on AO3. But AI could keep human writers out of screenwriting and book writing (which do have finite spots/finite funds for written works), and then... we lose something beautiful and precious.
The beauty of human writers is that they constantly have new things to say, because every lived experience is so unique and precious. No two people will write a character's emotions the same way or capture the tension of a plot with the same words. I have read and will keep reading thousands of fanfics about the same canon characters, because each person captures their pain and love and failures and triumphs in new and exciting ways. Humans always bring something new to the table. Humans always keep learning. Experimenting. Changing the conversation. An AI will never be able to say anything new. It is actively doing the opposite of human thinking and creating. It's not doing research or looking up new words or playing with new turns of phrase. Instead it's cycling through a fixed set of applicable word choices and choosing the one with the highest percent match for what should come after the previous word.
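(To make that last point concrete, here's a deliberately toy sketch of "pick the highest percent match for the next word", using simple bigram counts. Real AI text generators use trained neural networks rather than a lookup table like this, so treat it purely as an illustration of the idea; the sample corpus is made up.)

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word is immediately followed by each other word."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def generate(following, start, length=5):
    """Always pick the single most frequent next word -- zero novelty, by design."""
    out = [start]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:  # never seen this word: nothing to predict
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat slept on the mat")
print(generate(model, "the"))  # always the same continuation for the same input
```

Everything this "writer" can ever say is already in its counts; it only reshuffles what came before, which is the whole point above.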
So three bad things happen if you all start using AI as a fic production shortcut:
The AI improves its deepfakes of creativity. Your requests, inputted content, and feedback all help it update the word-selection math it is doing in the background. This makes it more likely that canon content stops being created by real people and instead starts being replaced by AI products.
You learn and feel nothing. I mean it. Writing is a joy because every creation is a new research rabbit hole, a new word looked up in the thesaurus, a new way to make your readers sob from the feels. AI robs you of that process. So you think less. You explore characters less. You are deprived of the joy of marinating in your blorbos' angst and pain and love and joy while you decide exactly how to put that into words. You're deprived of the satisfaction of your own work. (And by being deprived of the process above, you also don't learn how to write, because the AI is not writing.)
Your readers lose out. Sure, the AI has produced something perfectly spelled and grammared... but nothing new has been said. The grand conversation we are constantly having by writing and reading fic and exclaiming about characters stagnates and then festers. No new stories get told. AI only produces things that look like what came before. Your readers don't get any joy from new thoughts, new ideas, changed minds, changed perspectives... none of that happens. (Again, I want to go back to point one: I'm not worried about AO3 drowning in AI fics as much as I am worried that canon content will become overrun with it.)
20 notes
Text
Alien: Romulus
It wasn’t perfect, but I liked this quite a bit. Structurally it’s very similar to the original Alien, which would put it in danger of feeling like merely a nostalgic franchise retread, à la The Force Awakens, except… they actually pulled everything off well here. I loved getting to see the bleak, hellish state of the colony world; the classic analog set design that felt functional and lived-in; the bone-rattling breakthrough into orbit; the pure claustrophobic dread of exploring the derelict station; all these gorgeous shots of the planet’s rings… it’s just some solid fuckin sci-fi.
I also think it’s a noteworthy accomplishment to have maintained the same tone and aesthetic as the original Alien movies despite being a modern film - here they used the budget to expand the scope, rather than change up the style like Prometheus did. A lot of the setpieces are creative as hell - rather than feel the need to make a dozen new monsters, they found new and interesting situations to put the classics in. The facehuggers stalking through the water, the xenomorph’s cocoon, the zero-G acid blood obstacle course - they’re all clever twists on Alien without having to further convolute the lore.
And then… the last 15 minutes happen. Was this the creators feeling the need to further differentiate the movie? As if they preemptively heard fanboys complaining that they didn’t ‘do anything new’, and took it as a challenge? Either way, I found it simultaneously exploitative (the shock value ‘birth’ scene) and dumb as hell (the humalien). Plus, it draws even more attention to Romulus’ most distracting plot hole: the way-too-rapid gestation of its xenomorphs.
The other weird failing here was the usage of AI to try and deepfake the android - it looked and sounded like shit. Look, just trust an actor to recreate Ash’s line delivery, okay? And no offense to Sir Ian Holm, but it’s not like his face is irreplaceably unique - walk down the street and find another white guy that bears a passing resemblance. It’s not a big deal, guys.
Our other android, fortunately, gets by far the best writing and character arc in Romulus, and David Jonsson acts the shit out of the role. From a pure visual standpoint, Andy’s OS update in the cryo room - with all its weird timelapse jitters - may have been my favorite moment in the movie. When I zoom out, it doesn’t sit especially well with me that the only Black actor’s role ended up being a subservient android whose mind was ultimately downgraded again because it’s ‘better for him’, but when the character is this well written and well acted, it’s hard for me to be too upset about it.
“How could I do what? Leave someone behind?” coldest line in the gd movie
2 notes