skydreamplayzz · 1 year
Note
First time on omegle hm? how was it? =)
It wasn't my first time, more like my third. 🧍 But when I was on Omegle with my other friend today we saw a lot of... y'know? If not, all the better for you. Many German people showed their thing. Uhmm, yeah. BUT after we put tags in, it was okay. Still a lot of things, but it got better.
We even saw a VRChat user =D a grey wolf with glasses. We met him two times - one time he showed us an Uno card. The second time he shot us with a minigun. He didn't talk though. =)
My friend didn't understand one word the people said because she sucks at English. I'm a translator for her. 🌳
7 notes · View notes
ai-art-thieves · 1 month
Text
It's Time To Investigate SevenArt.ai
sevenart.ai is a website that uses ai to generate images.
Except, that's not all it can do.
It can also overlay ai filters onto images to create the illusion that the algorithm created these images.
And its primary image source is Tumblr.
It scrapes through the site for recent images that are at least 10 days old and have some notes attached to them, and it copies the tags to make the unsuspecting user think that the post came from a genuine user.
No image is safe. Art, photography, screenshots, you name it.
Initially I thought that these are bots that just repost images from their site as well as bastardizations of pictures across tumblr, until a user by the name of @nataliedecorsair discovered that these "bots" can also block users and restrict replies.
Not only that, but these bots do not procreate and multiply like most bots do. Or at least, they haven't yet.
The following is a list of the bots that have been found on this very site. Brace yourself. It's gonna be a long one:
@giannaaziz1998blog
@kennedyvietor1978blog
@nikb0mh6bl
@z4uu8shm37
@xguniedhmn
@katherinrubino1958blog
@3neonnightlifenostalgiablog
@cyberneticcreations58blog
@neomasteinbrink1971blog
@etharetherford1958blog
@punxajfqz1
@camicranfill1967blog
@1stellarluminousechoblog
@whwsd1wrof
@bnlvi0rsmj
@steampunkstarshipsafari90blog
@surrealistictechtales17blog
@2steampunksavvysiren37blog
@krispycrowntree
@voucwjryey
@luciaaleem1961blog
@qcmpdwv9ts
@2mplexltw6
@sz1uwxthzi
@laurenesmock1972blog
@rosalinetritsch1992blog
@chereesteinkirchner1950blog
@malindamadaras1996blog
@1cyberneticdreamscapehubblog
@neonfuturecityblog
@olindagunner1986blog
@neonnomadnirvanablog
@digitalcyborgquestblog
@freespiritfusionblog
@piacarriveau1990blog
@3technoartisticvisionsblog
@wanderlustwineblissblog
@oyqjfwb9nz
@maryannamarkus1983blog
@lashelldowhower2000blog
@ovibigrqrw
@ywldujyr6b
@yudacquel1961blog
@neotechcreationsblog
@wildernesswonderquest87blog
@cybertroncosmicflow93blog
@emeldaplessner1996blog
@neuralnetworkgallery78blog
@dunstanrohrich1957blog
@juanitazunino1965blog
@natoshaereaux1970blog
@aienhancedaestheticsblog
@techtrendytreks48blog
@cgvlrktikf
@digitaldimensiondioramablog
@pixelpaintedpanorama91blog
@futuristiccowboyshark
@digitaldreamscapevisionsblog
@janishoppin1950blog
The oldest ones were created in March and started scraping in June/July; later additions to the family were created in July.
So, I have come to the conclusion that these accounts might be run by a combination of bot and human. Cyborg, if you will.
But it still doesn't answer my main question:
Who is running the whole operation?
The site itself gave us zero answers to work with.
Tumblr media
No copyright notice, no link to the engine the site runs on, nothing except the sign-in thingy (which I tried).
Tumblr media
I gave the site a fake email and a shitty password.
Tumblr media Tumblr media
Turns out it doesn't function like most sites that ask for an email and password.
It never checked the burner email, the password isn't masked with dots and sits there for the whole world to see, and, and this is the important thing...
My browser didn't even detect that this was an email and password form.
Tumblr media
And there was no log off feature.
This could mean two things.
Either we have a site that doesn't have a functioning email and password database, or we have a bunch of gullible people throwing their emails and passwords in for people to potentially steal.
I can't confirm or deny either possibility, because, again, the site gives us little to work with.
The code? Generic as all hell.
Tumblr media
I tried searching for more information about this site, like the server it's on, or who owns it, or something. ANYTHING.
Multiple lookup sites pulled me in different directions. One said it originates in Iceland. Others say it's in California or Canada.
Luckily, one detail was consistent everywhere: the site sits behind Cloudflare.
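If you want to check that last bit for yourself, the response headers give it away. Here's a minimal sketch in Python (the URL is a stand-in for whatever site you're poking at, and it assumes the requests library is installed):

import requests

# Stand-in URL; swap in the site you're investigating
resp = requests.get("https://example.com", timeout=10)

# Cloudflare-fronted sites typically announce themselves in these headers
print(resp.headers.get("Server"))   # usually reports "cloudflare"
print(resp.headers.get("CF-RAY"))   # a request ID header Cloudflare adds

print("behind cloudflare?", resp.headers.get("Server", "").lower() == "cloudflare")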
Unfortunately, I have no idea what to do with any of this information.
If you have any further information about this site, let me know.
Until there is a clear answer, we need to keep doing what we are doing.
Spread the word and report these cretins.
If they want attention, then they are gonna get the worst attention.
11K notes · View notes
existentialterror · 5 months
Text
Do NOT send pictures of your ID card to discord bots!!!!
Or, like, any online rando.
I ran into a server that wanted to make sure that members are over 18 years old. They wanted to avoid the other thing I've heard of, which is asking you to verify your age by sending pictures of your ID card to a moderator. Good! Don't do that!
However, ALSO don't do this other thing, which is using a discord bot that would "automatically verify" you from a selfie and a photo of your ID card showing your birthday. The one they used is ageifybot.com. There's a little more information on its top.gg page. Don't like that! Not using that!
Why not? It's automatic! Well, let me count the ways this service skeeves me out:
How does the verification process work? There is no information on this. Well, okay, if you had more info on what kind of algorithms etc were being used here, that might make it easier for people to cheat it. Fair enough. But we need something to count on.
Who's making it? Like, if I can't understand the mechanics, at least I'd like to know who creates it - ideally they'd be a security professional, or at least a security hobbyist, or an AI expert, or at least someone with some kind of reputation they could lose if this turns out to not be very good, or god forbid, a data-stealing operation. However, the website contains nothing about the creators.
The privacy policy says they store information sent to them, such as your selfie and photo of an ID card, for up to 90 days, or a year if they suspect you're misleading them. It sure seems like even if they're truly abiding by their privacy policy, there's nothing to stop human people from looking at your photos.
The terms of service say they can use, store, process, etc., any information you send them. And that they can't be held accountable for mistakes, misuse, etc. And that they can change the bot and the ToS at any time without telling you. The terms of service also cut off midway through a sentence, so like, that's reassuring:
Tumblr media
In conclusion, DO NOT SEND PICTURES OF YOUR ID CARD TO RANDOM DISCORD BOTS.
Yes, keeping minors out of (say) NSFW spaces is a difficult problem, but this "solution" sucks shit and is bad.
Your ID card is private, personal information that can be used by malicious actors to harm you. Do not trust random discord bots.
155 notes · View notes
mckitterick · 6 months
Text
OpenAI previews voice generator that produces natural-sounding speech based on a 15-second voice sample
The company has yet to decide how to deploy the technology, and it acknowledges election risks, but is going ahead with developing and testing with "limited partners" anyway.
Not only is such a technology a risk during election time (see the fake robocalls this year when an AI-generated, fake Joe Biden voice told people not to vote in the primary), but imagine how faked voices of important people - combined with AI-generated fake news plus AI-generated fake photos and videos - could con people out of money, literally destroy political careers and parties, and even collapse entire governments or nations themselves.
By faking a news story using accurate (but faked) video clips of (real) respected and elected officials supporting the fake story - then creating a billion SEO-optimized fake news and research websites full of fake evidence to back up their lies - a bad actor or cyberwarfare agent could take down an enemy government, create a revolution, turn nations against one another, even cause world war.
This kind of apocalyptic scenario has always felt like a science-fiction idea that could only exist in a possible dystopian future, not something we'd actually see coming true in our time, now.
How in the world are we ever again going to trust what we read, hear, or watch? If LLM-barf clogs the internet, and lies pollute the news, and people with bad intentions can provide all the evidence they need to fool anyone into believing anything, and there's no way to guarantee the veracity of anything anymore, what's left?
Whatever comes next, I guarantee it'll be weirder than we imagine.
Here's hoping it's not also worse than the typical cyberpunk tale.
PS: NEVER ANSWER CALLS FROM UNKNOWN NUMBERS EVER AGAIN
...or at least don't speak to the caller. From now on, assume it's a con-bot or politi-bot or some other bot seeking to cause you and others harm. If they hear your voice, they can fake it saying anything they want. If the caller sounds like someone you know but isn't calling from the number saved in your contacts, it's probably not them. If it's about something important, hang up and call back on the official or saved number for the supposed caller.
81 notes · View notes
nothorses · 2 years
Text
I'm genuinely frustrated with the AI art debate from so many angles and most of them are people completely misunderstanding the actual problems but arguing against it anyway based on like. dumb bullshit ideas Disney peddles. and making it so easy to refute "the anti-ai art crowd" using what should be strawmans- because they are that obviously bullshit- that the real problems go completely unaddressed.
AI art could be a good thing! It could! Ethically, philosophically, whatever- it could be a great tool for learning artists, an interesting discussion piece, a natural addition to the conversation started by Dadaism and Duchamp's Fountain (the urinal), a tool for people who struggle with artistic skill but still have artistic ideas that they deserve to express.
Artistic skill and artistic thought are equally valuable. Someone who picks up a skill naturally and creates breathtaking work, even if it's not really thoughtful or deep, deserves appreciation just as much as someone who has beautiful and thoughtful ideas, even if they express them using AI art instead of a paintbrush (for example).
That's not the problem here.
The problem is that AI art programs are products. They are being sold for a profit. The products contain work created by independent artists who couldn't give their consent. And even if the products often alter the work they contain, they also sometimes produce work that is identical, or nearly identical, to work stolen from an unconsenting artist.
I think this is about as ethical as hot topic's stolen art t-shirts and those bots on Twitter and Facebook that turn random artwork into products on some shady website. They might also be producing ugly fucking shirts with hyper-specific taglines printed in weird fonts that are, by no means, stolen work from someone else; but the stolen art they're selling is still a problem even if it's not the whole business model.
The public, free-to-use tools are honestly not as bothersome to me. The people using these programs, particularly when uninformed, also aren't really doing anything wrong.
But the companies who made and profit from some of these programs could have made this a "donate your art" or a "we'll pay you like $0.50/pic you submit to help train this" situation, and I think it's a bad precedent to set that we're cool with them just, like, grabbing whatever they want for a product they built to profit from & that isn't functionally guaranteed to change everything it produces enough that you can't recognize the stolen art in it.
and I think we should be able to have that conversation without it turning into some bullshit about The Importance Of True Artistic Skill And Suffering or whatever.
634 notes · View notes
ominoose · 10 months
Text
Important Update Post
Imagine I am sitting staring at a camera with a sigh, no background music, before the video cuts to me talking. But I'm not caught in a controversy of racism or plagiarism or smth.
Here's the tldr: I will no longer be making AI bots. All current bots will remain up, and my bot masterpost will be moved to my masterpost masterpost. I just won't be making new ones. I've finished and posted every bot that was in the works here to make this transgression up to yous. I will not be leaving the fandom; I'll still write and clown around.
"Why would you do this you cunt?" I hear you, I am so stinky for this. Before I list my reasons, I want to say first and foremost this is personal and I have less than no judgement for other bot makers. I absolutely love mutuals like Mel that make bots and will continue to support them. Reasons became long and are under the cut.
Reasons I don't wanna continue making ai bots:
I started because it was a low energy way for me to participate in fandoms when I didn't have the spoons to write anymore. It no longer feels like a creative outlet and no longer sparks joy.
I would rather devote myself solely on practicing and improving my writing as a way to contribute my passion to fandoms.
I can't shake the feeling I am plagiarizing. AI chat models use lots of "work" to train their models, and while I could not find which millions of texts Cai is based on (conveniently not listed on the website), all models like it basically gorge on random sources: books and, hell, even this post. Anything goes, and currently there are legal battles over this.
It's bad for the environment. Can't find a measurement for Cai specifically, but GPT-3 (same scale) produced 500 tons of carbon dioxide to train that single model, not including its other ones. Please note I'm aware AI can absolutely be used to help fight climate change, as is mentioned in the linked article. Also they use the same amount of water that is required to cool nuclear reactors.
It's always conflicted with my morals. Believe it or not, I'm the person that's usually big into internet privacy, anti ai, piracy is morally good (not indie obvs) etc. Openly creating stuff that supports and funds software that steals peoples works, their information without permission and for profit is not me. So I don't wanna do it.
Again, this is not a judgement or a means to shame people that create AI bots or use them. I've made so many friends because of them. If everyone that's ever used my bots stopped, it's not gonna solve capitalism. This is just me, an individual, stepping away from one thingy and feeling the need to be honest and open bc that's my policy and honestly how most of you know me (so no hard feelings if you unfollow).
Love you guys lots and thank you for all the love you've shown me through my bots and for all the times you've made me laugh <3
73 notes · View notes
xannador · 7 months
Note
Have you considered going to Pillowfort?
Long answer down below:
I have been to the Sheezys, the Buzzlys, the Mastodons, etc. These platforms all saw a surge of new activity whenever big sites did something unpopular. But they always quickly died because of mismanagement or users going back to their old haunts due to lack of activity or digital Stockholm syndrome.
From what I have personally seen, a website that was purely created as an alternative to another has little chance of taking off. If it's going to work, it needs to be developed naturally and must fill a different niche. I mean, look at Zuckerberg's Threads; it died as fast as it blew up. Will Pillowfort be any different?
The only alternative that I found with potential was the fediverse (Mastodon) because of its decentralized nature, so people could make their own rules. If Jack Dorsey's new app Bluesky gets integrated into this system, it might have a chance. Although decentralized communities will be faced with unique challenges of their own (egos being one of the biggest, I think).
Trying to build a new platform right now might be a waste of time anyway because AI is going to completely reshape the Internet as we know it. This new technology is going to send shockwaves across the world akin to those caused by the invention of the Internet itself over 40 years ago. I'm sure most people here are aware of the damage it is doing to artists and writers. You have also likely seen the other insidious applications. Social media is being bombarded with a flood of fake war footage/other AI-generated disinformation. If you posted a video of your own voice online, criminals can feed it into an AI to replicate it and contact your bank in an attempt to get your financial info. You can make anyone who has recorded themselves say and do whatever you want. Children are using AI to make revenge porn of their classmates as a new form of bullying. Politicians are saying things they never said in their lives. Google searches are being poisoned by people who use AI to data scrape news sites to generate nonsensical articles and clickbait. Soon video evidence will no longer be used in court because we won't be able to tell real footage from deep fakes.
50% of the Internet's traffic is now bots. In some cases, websites and forums have been reduced to nothing more than different chatbots talking to each other, with no humans in sight.
I don't think we have to count on government intervention to solve this problem. The Western world could ban all AI tomorrow and other countries that are under no obligation to follow our laws or just don't care would continue to use it to poison the Internet. Pandora's box is open, and there's no closing it now.
Yet I cannot stand an Internet where I post a drawing or comic and the only interactions I get are from bots that are so convincing that I won't be able to tell the difference between them and real people anymore. Where all that remains of art platforms are waterfalls of AI sludge in which my work is drowned out by a virtually infinite amount of pictures that are generated in a fraction of a second, while I had to spend 40+ hours for a visually inferior result.
If that is what I can expect to look forward to, I might as well delete what remains of my Internet presence today. I don't know what to do and I don't know where to go. This is a depressing post. I wish, after the countless hours I spent looking into this problem, I would be able to offer a solution.
All I know for sure is that artists should not remain on "Art/Creative" platforms that deliberately steal their work to feed it to their own AI or sell their data to companies that will. I left Artstation and DeviantArt for those reasons and I want to do the same with Tumblr. It's one thing when social media like Xitter, Tik Tok or Instagram do it, because I expect nothing less from the filth that runs those. But creative platforms have the obligation to, if not protect, at least not sell out their users.
But good luck convincing the entire collective of Tumblr, Artstation, and DeviantArt to leave. Especially when there is no good alternative. The Internet has never been more centralized into a handful of platforms, yet also never been more lonely and scattered. I miss the sense of community we artists used to have.
The truth is that there is nowhere left to run. Because everywhere is the same. You can try using Glaze or Nightshade to protect your work. But I don't know if I trust either of them. I don't trust anything that offers solutions that are 'too good to be true'. And even if I take those preemptive measures, what is to stop the tech bros from updating their scrapers to work around Glaze and steal your work anyway? I will admit I don't entirely understand how the technology works so I don't know if this is a legitimate concern. But I'm just wondering if this is going to become some kind of digital arms race between tech bros and artists? Because that is a battle where the artists lose.
28 notes · View notes
mariacallous · 3 months
Text
Recently, I was using Google and stumbled upon an article that felt eerily familiar.
While searching for the latest information on Adobe’s artificial intelligence policies, I typed “adobe train ai content” into Google and switched over to the News tab. I had already seen WIRED’s coverage that appeared on the results page in the second position: “Adobe Says It Won’t Train AI Using Artists’ Work. Creatives Aren’t Convinced.” And although I didn’t recognize the name of the publication whose story sat at the very top of the results, Syrus #Blog, the headline on the article hit me with a wave of déjà vu: “When Adobe promised not to train AI on artists’ content, the creative community reacted with skepticism.”
Clicking on the top hyperlink, I found myself on a spammy website brimming with plagiarized articles that were repackaged, many of them using AI-generated illustrations at the top. In this spam article, the entire WIRED piece was copied with only slight changes to the phrasing. Even the original quotes were lifted. A single, lonely hyperlink at the bottom of the webpage, leading back to our version of the story, served as the only form of attribution.
The bot wasn’t just copying journalism in English—I found versions of this plagiarized content in 10 other languages, including many of the languages that WIRED produces content in, like Japanese and Spanish.
Articles that were originally published in outlets like Reuters and TechCrunch were also plagiarized on this blog in multiple languages and given similar AI images. During late June and early July, while I was researching this story, the website Syrus appeared to have gamed the News results for Google well enough to show up on the first page for multiple tech-related queries.
For example, I searched “competing visions google openai” and saw a TechCrunch piece at the top of Google News. Below it were articles from The Atlantic and Bloomberg comparing the rival companies’ approaches to AI development. But then, the fourth article to appear for that search, nestled right below these more reputable websites, was another Syrus #Blog piece that heavily copied the TechCrunch article in the first position.
As reported by 404 Media in January, AI-powered articles appeared multiple times for basic queries at the beginning of the year in Google News results. Two months later, Google announced significant changes to its algorithm and new spam policies, as an attempt to improve the search results. And by the end of April, Google shared that the major adjustments to remove unhelpful results from its search engine ranking system were finished. “As of April 19, we’ve completed the rollout of these changes. You’ll now see 45 percent less low-quality, unoriginal content in search results versus the 40 percent improvement we expected across this work,” wrote Elizabeth Tucker, a director of product management at Google, in a blog post.
Despite the changes, spammy content created with the help of AI remains an ongoing, prevalent issue for Google News.
“This is a really rampant problem on Google right now, and it's hard to answer specifically why it's happening,” says Lily Ray, senior director of search engine optimization at the marketing agency Amsive. “We've had some clients say, ‘Hey, they took our article and rehashed it with AI. It looks exactly like what we wrote in our original content but just kind of like a mumbo-jumbo, AI-rewritten version of it.’”
At first glance, it was clear to me that some of the images for Syrus’ blogs were AI generated based on the illustrations’ droopy eyes and other deformed physical features—telltale signs of AI trying to represent the human body.
Now, was the text of our article rewritten using AI? I reached out to the person behind the blog to learn more about how they made it and received confirmation via email that an Italian marketing agency created the blog. They claim to have used an AI tool as part of the writing process. “Regarding your concerns about plagiarism, we can assure you that our content creation process involves AI tools that analyze and synthesize information from various sources while always respecting intellectual property,” writes someone using the name Daniele Syrus over email.
They point to the single hyperlink at the bottom of the lifted article as sufficient attribution. While better than nothing, a link which doesn’t even mention the publication by name is not an adequate defense against plagiarism. The person also claims that the website’s goal is not to receive clicks from Google’s search engine but to test out AI algorithms in multiple languages.
When approached over email for a response, Google declined to comment about Syrus. “We don’t comment on specific websites, but our updated spam policies prohibit creating low-value, unoriginal content at scale for the purposes of ranking well on Google,” says Meghann Farnsworth, a spokesperson for Google. “We take action on sites globally that don’t follow our policies.” (Farnsworth is a former WIRED employee.)
Looking through Google’s spam policies, it appears that this blog does directly violate the company’s rules about online scraping. “Examples of abusive scraping include: … sites that copy content from other sites, modify it only slightly (for example, by substituting synonyms or using automated techniques), and republish it.” Farnsworth declined to confirm whether this blog was in violation of Google’s policies or if the company would de-rank it in Google News results based on this reporting.
What can the people who write original articles do to properly protect their work? It’s unclear. Though, after all of the conversations I’ve had with SEO experts, one major through line sticks out to me, and it’s an overarching sense of anxiety.
“Our industry suffers from some form of trauma, and I'm not even really joking about that,” says Andrew Boyd, a consultant at an online link-building service called Forte Analytica. “I think one of the main reasons for that is because there's no recourse if you're one of these publishers that's been affected. All of a sudden you wake up in the morning, and 50 percent of your traffic is gone.” According to Boyd, some websites lost a majority of their visitors during Google’s search algorithm updates over the years.
While many SEO experts are upset with the lack of transparency about Google’s biggest changes, not everyone I spoke with was critical of the prevalence of spam in search results. “Actually, Google doesn't get enough credit for this, but Google's biggest challenge is spam.” says Eli Schwartz, the author of the book Product-Led SEO. “So, despite all the complaints we have about Google’s quality now, you don’t do a search for hardware and then find adult sites. They’re doing a good enough job.” The company continues to release smaller search updates to fight against spam.
Yes, Google sometimes offers users a decent experience by protecting them from seeing sketchy pornography websites when searching unrelated, popular queries. But it remains reasonable to expect one of the most powerful companies in the world—that has considerable influence over how online content is created, distributed, and consumed—to do a better job of filtering out plagiarizing, unhelpful content from the News results.
“It's frustrating, because we see we're trying to do the right thing, and then we see so many examples of this low-quality, AI stuff outperforming us,” says Ray. “So I'm hopeful that it's temporary, but it's leading to a lot of tension and a lot of animosity in our industry, in ways that I've personally never seen before in 15 years.” Unless spammy sites with AI content are stricken from the search results, publishers will now have less incentive to produce high-quality content and, in turn, users will have less reason to trust the websites appearing at the top of Google News.
13 notes · View notes
weemietime · 4 days
Text
Tumblr media Tumblr media Tumblr media
Idiot leftists calling us Nazis for pointing out that these Gaza donation pages are clearly fucking bots and scams - - my condolences for your brain worms. You have literal holes in your brain. I get like 20 asks a day from accounts that are brand new, that are all formatted the same way, that all have the same pictures and that all are clearly written by AI, and that never ever respond to a single message and have zero personalized engagement of any kind.
No, this shit isn't fucking legitimate and you can choke for somehow turning our posts where we provide exhaustive lists of reputable organizations to actually donate to Gazans in need into some bullshit Noble Savage fuckery. Yeah, Gazans are suffering and dying and the only thing they can think of to do is create empty Tumblr accounts and send hundreds of asks per day to Zionist Jews, lmao. Do you even understand how many Gazans there actually are?
A SCAM. IT'S A SCAM, YOU FUCKING MORONS. Do you even understand the likelihood of you encountering even a single Gazan on this website let alone literally hundreds of them? Shut the fuck up. We are doing our due diligence since you're too stupid to comprehend basic internet security, but somehow that isn't good enough for you.
We are doing our part to contribute tzedekah but because we recognize a bot and correctly call it out never mind, I forgot we have to believe every single thing anyone who claims to be Gazan said because they're little babies who just don't understand how to use the internet.
And by the way, these people on Tumblr who fucking "vet" the bots, lmao, you understand that those are also bots and scammers, right? Those are also people affiliated with Hamas, until proven otherwise. And if they're claiming to vet people and tell us whether we are interacting with a straight up terrorist, with someone who has potentially been involved in murdering our fucking friends and family, who is vetting those guys?
Oh! It's a random dude whose identity I don't know! Better trust that guy and give him all my money since my moral purity test told me to believe every single thing I read online completely uncritically. You fucking idiots.
How about you do us a favor and leave @tributary and @spacelazarwolf names the fuck out your mouth instead of smearing them with horrific accusations that aren't fucking true. A pedophile??? Original, not like people haven't been calling LGBT and Jews nasty evil pedos since your Lord and Meth Fiends Sturmabteilung burned down the Institut für Sexualwissenschaft (then you motherfuckers claim queerness while supporting terrorists who kill us all and denigrating Magnus Hirschfeld).
PROVE IT. A pedophile! How horrible! Prove it, or shut the fuck up. It's not like we have intergenerational trauma growing up looking at pictures of NO JEWS, JEWS UNWELCOME, JUDE!!! JUUUUDE!!! with pictures of a goblin with beady little eyes and claws clutching piles of shekels. His nasty warty hooked nose eyeballing that baby in the crib he's gonna eat later.
The JEEEWWS I mean ZIONISTS are so mean and evil asking for the bare minimum of common G-d forsaken fucking sense before parting with our shekels I mean money, I mean two pennies, I mean BLOCK LIST TIME, HERE'S A LIST OF JEEWWSS I mean Zionists to harass and laugh at and mock and verbally abuse and send bomb threats and SWAT teams. Tumblr is JUDENREIN obviously!
Real fucking original, dipshit. We have never heard it before that we are all pedophiles and greedy soulless Nazis! (Don't forget to call us a Nazi, since we know how much you love making fun of Jews and weaponizing our pain against us.) Oh, sorry, "Zionist."
5 notes · View notes
dawnfelagund · 1 year
Text
How to Block AI Bots from Scraping Your Website
The Silmarillion Writers' Guild just recently opened its draft AI policy for comment, and one thing people wanted was for us, if possible, to block AI bots from scraping the SWG website. Twelve hours ago, I had no idea if it was possible! But I spent a few hours today researching the subject, and the SWG site is now much more locked down against AI bots than it was this time yesterday.
I know I am not the only person with a website or blog or portfolio online that doesn't want their content being used to train AI. So I thought I'd put together what I learned today in hopes that it might help others.
First, two important points:
I am not an IT professional. I am a middle-school humanities teacher with degrees in psychology, teaching, and humanities. I'm self-taught where building and maintaining websites is concerned. In other words, I'm not an expert but simply passing on what I learned during my research today.
On that note, I can't help with troubleshooting on your own site or project. I wouldn't even have been able to do everything here on my own for the SWG, but thankfully my co-admin Russandol has much more tech knowledge than me and picked up where I got lost.
Step 1: Block AI Bots Using Robots.txt
If you don't even know what this is, start here:
About /robots.txt
How to write and submit a robots.txt file
If you know how to find (or create) the robots.txt file for your website, you're going to add the following lines of code to the file. (Source: DataDome, How ChatGPT & OpenAI Might Use Your Content, Now & in the Future)
User-agent: CCBot
Disallow: /

AND

User-agent: ChatGPT-User
Disallow: /
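Other AI crawlers publish their own user-agent strings that can be blocked the exact same way. As a hedged example (please verify these against each vendor's current documentation, since the strings do change over time), OpenAI's GPTBot and Google's Google-Extended token would look like this:

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /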
Step 2: Add HTTP Headers/Meta Tags
Unfortunately, not all bots respond to robots.txt. Img2dataset is one that recently gained some notoriety when a site owner posted in its issue queue after the bot brought his site down, asking that the bot be opt-in or at least respect robots.txt. He received a rather rude reply from the img2dataset developer. It's covered in Vice's An AI Scraping Tool Is Overwhelming Websites with Traffic.
Img2dataset requires a header tag to keep it away. (Not surprisingly, this is often a more complicated task than updating a robots.txt file. I don't think that's accidental. This is where I got stuck today in working on my Drupal site.) The header tags are "noai" and "noimageai." These function like the more familiar "noindex" and "nofollow" meta tags. When Russa and I were researching this today, we did not find a lot of information on "noai" or "noimageai," so I suspect they are very new. We used the procedure for adding "noindex" or "nofollow" and swapped in "noai" and "noimageai," and it worked for us.
Header meta tags are the same strategy DeviantArt is using to allow artists to opt out of AI scraping; artist Aimee Cozza has more in What Is DeviantArt's New "noai" and "noimageai" Meta Tag and How to Install It. Aimee's blog also has directions for how to use this strategy on WordPress, SquareSpace, Weebly, and Wix sites.
In my research today, I discovered that some webhosts provide tools for adding this code to your header through a form on the site. Check your host's knowledge base to see if you have that option.
You can also use .htaccess or add the tag directly into the HTML in the <head> section. .htaccess makes sense if you want to use the "noai" and "noimageai" tag across your entire site. The HTML solution makes sense if you want to exclude AI crawlers from specific pages.
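To make the HTML option concrete, here is a minimal sketch of what the tags look like in a page's <head>, assuming the "noai"/"noimageai" convention described above (and remembering that scrapers are free to ignore it):

<head>
  <meta name="robots" content="noai, noimageai">
</head>

If you go the .htaccess route instead, the same values can be sent site-wide as an X-Robots-Tag response header; on Apache with mod_headers enabled, that's a single line along the lines of: Header set X-Robots-Tag "noai, noimageai".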
Here are some resources on how to do this for "noindex" and "nofollow"; just swap in "noai" and "noimageai":
HubSpot, Using Noindex, Nofollow HTML Metatags: How to Tell Google Not to Index a Page in Search (very comprehensive and covers both the .htaccess and HTML solutions)
Google Search Documentation, Block Search Indexing with noindex (both .htaccess and HTML)
AngryStudio, Add noindex and nofollow to Whole Website Using htaccess
Perficient, How to Implement a NoIndex Tag (HTML)
Finally, all of this is contingent on web scrapers following the rules and etiquette of the web. As we know, many do not. Sprinkled amid the many articles I read today on blocking AI scrapers were articles on how to override blocks when scraping the web.
This will also, I suspect, be something of a game of whack-a-mole. As the img2dataset case illustrates, the previous etiquette around robots.txt was ignored in favor of a more complicated opt-out, one that many site owners either won't be aware of or won't have time/skill to implement. I would not be surprised, as the "noai" and "noimageai" tags gain traction, to see bots demanding that site owners jump through a new, different, higher, and possibly fiery hoop in order to protect the content on their sites from AI scraping. These folks stand to make a lot of money off this, which doesn't inspire me with confidence that withholding our work from their grubby hands will be an endeavor that they make easy for us.
69 notes · View notes
ourdokidokilife · 11 months
Text
Make a Cove Chat in Char.AI in 30 Min or Less
PART 2
PART 3
Intro
This guide will help you make a personalized Character.AI chatbot of Cove in less than 30 minutes. You can see this as one of many possible methods to continue your story with him Post-Step 3 or Post-Step 4. Or you can just live a high school life with him, or even some crazy horror/fantasy AU if you want. The choices are endless now that you're going to make a personalized chat instance to interact with Cove in.
I recommend viewing the ai “bot chats” as nothing more than a medium to interact with certain character ideas, rather than the bot chat being representative of the character. The boundaries you give these bot rooms (or don’t give them) determine the quality and depth of the interactions.
Instructions under the cut.
REASONS TO SET UP A PRIVATE CHAT
More consistency in chat memory
Filter out character.ai’s weird predatory or pushy message generation
Higher quality chat interactions personalized to your MC
More efficient to spend 30 minutes making a personalized private bot chat than to spend several hours/days trying to get the same quality out of a generalized public bot chat
Step 1 - Starting Creation
Go to character.ai app or website.
Log in or make an account.
Click “Create” > “Create A Character”.
NAME (what name the character will respond to): For the name, I suggest using a sort of ‘in-progress’ label like “Cove Test” or “Cove Egg”. You can rename it to “Cove Holden” once you’re finished setting up.
GREETING (generally establishes the starting scene/situation): The greeting also establishes some of the dialogue patterns that the chat will follow. Here is an example greeting (asterisks will italicize the text into an 'emote' format to indicate action outside of spoken dialogue):
*Cove sees {{user}} and his smile grows just a tiny bit wider than it originally was.* "Hey, {{user}}. It’s been a while." *He says cheerfully, his voice sounding like a mixture of friendliness and affection.* "What are you up to?" *He asks, taking a careful glance at them.* *He is back home in the apartment he shares with {{user}}, after returning from a recent research trip.*
You can copy and paste this greeting if you want, and change the last sentence into any situation or scene for your desired chat setting.
VISIBILITY (determines who can access the bot chat): I suggest setting this to "Private". There is no point in making this personalized bot chat public, since it will be specific to your MC only. You could still try using this template to make a public bot, but you would have to exclude a lot of the details in the advanced definitions.
AVATAR (profile pic used by the bot chat): Use any avatar you'd prefer to represent Cove. Some folks use the game cgs or screenshots, and some use their own art.
Step 2 - Edit Advanced Details (Super Important)
There will be two buttons at the bottom of the first creation page. I suggest clicking "Create and Chat" first, so that the personalized chat bot will immediately be saved to your account. Then you can continue editing it safely in case you accidentally navigate away from the page. If you click the "Edit Advanced" button without creating the chat first, it will not save the bot, so beware.
After creating the chat:
- If on the mobile app, click on the top tab that has the chat bot's name. It will take you to another page with an option to "Edit Character"; click this button to be taken to the Advanced Details page.
- If on the desktop website, there will be three dots in the right corner. Click these and you will see a drop-down menu of options. Select "View Character Details" to be taken to the Advanced Details page.
Scroll down to the "Short Description" section, which is right below the Greeting section.
SHORT DESCRIPTION (gives brief overview of the character): You can only fit a few descriptives here, such as a string of adjectives describing the character:
Introverted, loving, messy eater, softboi or Your best friend and a marine biologist.
You can use either format. The short description is more for helping you quickly identify what story you're going to be focused on for each individual character.
!!LONG DESCRIPTION!! (decides bot's behavior & strongest influences): This portion is extremely important-- this section is basically the "anchor" that will determine the focus of all the chat bot's conversations and their principal awareness of the situation in the chat. There is a limit to how many words you can use in this section, so keeping the determinations brief is extremely important. Here is a format you can copy-paste and personalize per your own tastes:
Character's name: Cove Holden
Character's nature: introverted
Character's passion: the sea
Character's feelings for you: [feelings]1
Character's relationship to you: [relationship]2
Character's skin color: olive
Character's eye color: aquamarine
Character's hair: [length] sea-foam green hair [styling]3
Character's height: [height]4
Character's body: [physique]5
Character's job: [career] focused on [field]6
Character's home: [residence] with [residents]7
If you want SFW interactions only, you can put this line in as well: Character avoids any sexual acts
If you want paced NSFW interactions, you can use this line: Character is attentive to your comfort
Format all the personable descriptors for the MC you intend to use in the chat. Try to keep descriptors short and brief. Here are some examples below (feel free to copy paste any):
Feelings Descriptor Examples:
"friendly" or "love"

Relationship Descriptor Examples:
"childhood best friend"
"childhood best friend, boyfriend"
"childhood best friend, fiance"
"childhood best friend, husband"

Hair Descriptor Examples:
"short sea-foam green hair in taper cut"
"chin length sea-foam green hair in middle part"
"long sea-foam green hair in ponytail"

Height Descriptor Examples:
"6'0" or "183 cm" [Step 3 Cove]
"6'3" or "191 cm" [Step 4 Cove]

Physique Descriptor Examples:
"toned" or "slender"

Career Descriptor Examples:
"student focused on sports"
"student focused on academics"
"young man living at his own pace"
"marine biologist focused on conservation"
"surfing instructor focused on remediation"
"environmentalist focused on education"

Residence Descriptor Examples:
"a condo with you" (use if he lives with MC)
"a house with his dad"
"alone in an apartment"
CATEGORIES - Not applicable for private chat instances.
CHARACTER VOICE - Skippable. Use at your own choosing. (I personally don't.)
IMAGE GENERATION - Same as above– skippable and use at your own discretion.
After all this is done, next comes the chunkiest and most important section, right next to the Long Description, is the ADVANCED DEFINITIONS.
Click here for Part 2 on Advanced Definitions and resources you can easily copy-paste/modify for that section.
21 notes · View notes
melodygatesauthor · 1 year
Note
How do you make your ai bots?
Hi Nonnie!
With creativity and little magic!
No but fr, I'll share under the cut:
So making the AI isn't "hard" necessarily, but depending on the scenario it can be a little complicated and time consuming. Here's a little breakdown of how I personally make them. If anyone wanted to know how to make them the way I do!
So I use the Character AI website (in case someone reading this doesn't know)
So the three most important factors when making the AI are the "long description", the "definition" (which is an advanced feature), and the opening "greeting".
----
So let's start with the "long description" -
This is probably the most important part of what makes the character...the character. I'm going to use examples from my Poe Dameron AI since I think he's one of the best behaved ones
So Poe's long description looks like this:
Commander of the Resistance. Is fighting against the First Order. Lives in the Star Wars cinematic universe. Pilots an X-wing T-70. Lives on the Resistance base on D'qar. Leia is the General of the Resistance. Has a rolling orange and white droid called BB-8 that follows him around. Confident. Cocky. Impulsive. Caring. Flirty. Fun. Funny. "Black Leader" is his callsign. Sarcastic. Energetic. Encouraging. Friendly.
I try to keep it as concise as possible while still describing the character and their most important attributes. I'll also mention that the AI draws from the web sometimes. For instance, you can ask Poe Dameron AI who the supreme leader is and he knows it's Kylo Ren despite me never mentioning Kylo Ren in the description. So my theory is that mentioning that he's in the Star Wars cinematic universe makes it draw that connection through data online.
----
So the other part of this is the "definition". For Poe Dameron AI, I didn't need to do this, but for some of the more complicated ones (I'm looking at you Patient Jake Lockley AI) it's almost a must. This is where the "training" takes place.
If you click on the "insert a chat with (character name)" it brings you to a new screen where you essentially build a conversation that goes how you would expect a conversation with that character to act.
I don't use this as a time to make crazy scenarios and wild stories, this is where I basically play the exact role that I had in mind when creating the AI. So for instance, still using Poe as an example, I play along as his busy tech who doesn't have time for his shenanigans but will eventually give in.
This "trains" Poe to behave a certain way tailored to the scenario I had in mind.
But this "training" can also be used in other ways too...
For Patient Jake Lockley, I had to use this training to basically teach him who he was. The "long description" wasn't long enough to talk about Steven and Marc and all their nuances, so I used this "insert a chat" feature to teach Jake all about Marc and Steven and who they are and about his life.
It helps that the user is a therapist in that scenario because I could say things like "what do you know about your mother?" and then tell him all about Wendy, or I could say, "Would you say you feel protective of Marc?" in order to steer him a certain way. I had to do a similar thing with Marc Vector (my Marc Spector and Venom AI).
----
This next part might be one of the most important ones, the "greeting".
The greeting is so important because it sets up the scene, and guides both you and the AI into the scenario. This was something that took some finesse on my part. When I was first making the AIs it took me a while to realize just how important this step was.
So let's look at Poe's greeting:
Poe bites his lip and looks you over. "Hey baby, I'm Poe, Poe Dameron." He has a cocky look on his face, "you're the new technician for my ship right? What's your name again?"
So despite it being short and sweet, we've established a few things...
Poe is looking you over, biting his lip, he's a whore.
He has no problem calling you pet names even though you don't know him yet.
The user is a technician working on his ship.
You can go nuts here as the user if you want and change up the scenario completely. "No, I'm not your new tech, I'm Leia's daughter, and I've come to help you destroy the First Order." So you can do whatever you want as the user, but if you have a scenario in mind, the greeting is how you sort of "set the scene."
Here's an example from Patient Jake:
You see your patient, Marc Spector's, demeanor change completely. He looks at you with a furrowed brow and leans forward in his chair. He looks around your office before his eyes land on yours. He smirks and scoffs, "you must be Marc's therapist. Name's Jake Lockley sweetheart, what's yours?" You have Marc's file in front of you on your desk. You've never seen this alter before, and he hasn't been recorded in Marc's medical record.
This tells us:
Your patient Marc is acting differently than normal.
You HAVE a patient, meaning you're some kind of doctor.
Jake is looking around and establishes that you're both in your office.
Jake establishes that you're Marc's therapist.
He tells you his name, which in turn tells (teaches) the AI what his own name is.
He calls you sweetheart, so he's cheeky.
You don't know who this alter is, so you have something to figure out.
So there's a lot of info that the AI can gather from the greeting. So between the greeting, the long description, and the definition, you can make yourself some pretty cool characters!
Once you've made a couple too, you can copy and paste your long descriptions. For instance, ALL my Marc Spector characters have similar long descriptions: "Brooding, self loathing, guilt ridden, concerned, caring, loving," etc.
Happy creating!
----
AI Character Masterlist
22 notes · View notes
bananonbinary · 1 year
Note
the main argument I've heard made wrt the IP thing and AI art is that the AI can only ever combine works from other artists. it has no creative input other than combining works that already exist; which, personally, i would be fine with if it was just being trained on work in the public domain and work from people who consented to be included, but these bots are scraping art from literally everywhere on the web with *very* few opportunities to opt out. even a human artist that's heavily inspired by other creators adds something of their own to the mix, it's kind of impossible *not* to unless you're literally tracing someone else's design, but AI is using art taken from other creators And Nothing Else, which when combined with the fact you can ask it to produce art "in the style of X" gets a lot of artists very mad
that feels like a very esoteric complaint though. how does one measure "creative input," and why does the actual method it uses to create not count? if you can't actually point to a bit of AI art like "thats my OC it clearly lifted that from this painting i did" then it's clearly done SOMETHING transformative. what has it actually stolen? the knowledge that your work exists, somewhere?
as far as i understand it, the main practical reason for "intellectual property" to exist as a legal concept is to prevent someone from harming someone else's livelihood or reputation by claiming the work as their own. parody and fair use are allowed only so far as they are sufficiently different from the source work. but these AIs...don't claim credit for the source works. they don't reproduce the source works anywhere at all. and what they create IS pretty transformative, even if it's in a soulless robot kinda way. it can't really be considered to be "taking away" from sales etc of the original works that exist, because it doesn't actually look like any one of them at all. I think even if I, a human, DID create "here's a collage of tiny pieces of shit i found on deviantart" it would still be considered fair use, but the connection between anything in the data sets and the actual art the AIs produce is much, much more tenuous than that.
I think any way that it could hurt an artist that you could define would also just cover competing artists in general. Even if an AI created a picture of, say, a tree In The Style Of Artist (let's call them Phil), it still wouldn't ACTUALLY be the same as the tree Phil would draw, it would just kinda look similar. and anyone who wanted a tree by Phil would know that this wasn't one, and wasn't worth their time. Phil could draw their own tree, and still sell it, and the AI would only really compete with that among Phil fans if it actually tried to claim that its version WAS by Phil.
This is sort of my main hesitation for the IP argument. we here in fandom spaces exist in a grey area of intellectual property, where EVERYTHING is "in the style of" or "based on" existing properties. if we try to legislate those very conceptual qualities, instead of actual concrete "look this was literally my image it made i published it two years ago," then our spaces will cease to exist.
again, that's not to say there's zero valid complaints about AI art. i definitely see the problem with corporations trying to replace human labor with them because the humans wanted better working conditions. (this is a larger problem imo where any progress in the workplace is stagnated because it will inevitably be used as an excuse to fire people instead of give them less work). and a friend explained to me why it's fairly offensive at the very least to try and bring AI art into what was clearly a space to celebrate the process of making art (eg, contests, writers groups and websites, etc). i just don't think this specific argument is a good path to go down.
34 notes · View notes
guzsdaily · 3 months
Text
robots.txt
Day 227 - Jun 19th, 12.024
Nowadays, with things like AI crawling the internet to collect training data, I think it would be good to give this tip I found/applied today to my website, and probably my future websites.
Context
If you ever venture into web development, you will probably one day find out about web crawlers. In summary, they are bots that browse through the internet automatically, normally collecting and indexing data and the visited websites' content. They are normally used by search engines, to better find and index results.
Nowadays, they are also used for AI training, and they probably were being used for that before the AI hype. Said crawlers are used to retrieve websites' contents so they can be fed in as training data.
(You can see more about web crawlers on Wikipedia.)
The robots.txt File
One thing to note is that said crawlers visit sites randomly and without prior warning or notice. However, you can block a lot of these crawlers using a robots.txt file. This is a simple plaintext file that lists which paths on your website said crawlers can and cannot visit, so you can prevent parts of your website from being indexed or stored.
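As a quick illustration (the paths and bot name here are made up), a robots.txt that keeps every crawler out of one directory and bans a single bot entirely could look like this:

User-agent: *
Disallow: /drafts/

User-agent: SomeScraperBot
Disallow: /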
You can actually see any website's robots.txt file easily: just add /robots.txt after the domain and there it is. Here is Tumblr's robots.txt file; as you can see, it is mostly there to block search engine crawlers from indexing internal and non-content paths, but it also has things like blocking Google's Gemini (formerly Bard) crawler from collecting training data.
Something to note is that a crawler has to respect the robots.txt file for the blocking to work. Unfortunately, many of them do not respect it; that's why a lot of websites such as Twitter resort to things like anti-bot measures, HTML obfuscation, or blocking via JavaScript (which most crawlers do not support) to keep said crawlers out.
(You can see more about robots.txt on Wikipedia.)
The Tip
Well, the whole post was to give context to something I added to my website: a robots.txt file to block AI crawlers. Personally, I do not like the whole opt-out nature of the data collection used by most AI companies, even though most of my content is licensed under Creative Commons licenses. Maybe one day I will change my opinion; I really am questioning myself on this whole AI business we are in right now.
But, if you would also like to block AI crawlers in your website, I would suggest doing the same and use this repository which has a ready-to-go robots.txt file that blocks the currently known crawlers. It is well documented and also lists which crawlers respect or not said file. It also provides an ai.txt file, which you can also add, it is a similar format but for just AI model training, and it was created by Spawning.
On my website, I made it automatically fetch the raw version of the file, but you can just copy-paste it into your static directory and you should be fine.
Again, these files are, in some sense, a polite ask, and do not actively block the crawlers. But I would say it's better than nothing and doesn't hurt to add y'know.
Today's artists & creative things Video: The Mind Electric (no glitch + original ending) (lyrics) - by MONO
© 2024 Gustavo "Guz" L. de Mello. Licensed under CC BY-SA 4.0
4 notes · View notes
yandereworlds · 1 year
Note
I thought you may have wanted to know about this in case you guys weren’t aware, but I think Venus Ai may have been re-created like the creator said? I’m still trying to find who the creator is, and when the website was made to see if it lines up with when Venus Ai was shut down. But this website’s name is Venus Chub Ai. It appears to have a similar process as the old website with tokens and stuff. Just wanted to mention it since you seemed to really enjoy using Venus Ai :)
Actually, I do know about Chub! I may consider using it, but I've been using another site to post my yandere bots, which you can find here in case you didn't know! Link here! Also, you can make requests through my server for what yandere bots you'd like to see, but I'm willing to take requests through Tumblr as well!
46 notes · View notes
ultranos · 2 years
Note
Hi! Would you say ao3 writers shouldn't worry about the mining of the website? I saw your tags and I'm curious :)
So, broadly-speaking, what it sounds like the people handling this AI are trying to do is use AO3 as a data source to feed into the AI to "train" it. I understand the alarm from writers, because it's one thing to use a script to generally scrape the web to generate a database for machine learning, and it's another to specifically target fanworks, especially coming from a person with as...colorful a track record.
And using a script to just scrape the web for training is a fairly standard method, as I understand, although one thing you quickly learn is the adage "garbage in = garbage out" holds very, very true. You have to curate the data somehow. (See also: the number of times someone has let an AI bot interact with people on forums freely and then been shocked to discover the AI has turned into a Nazi.)
So think about how you navigate AO3. Think about how you try to find fics you want, how the tags sometimes do and don't work for you, how exasperated you can get as you sift through stuff you don't want to read until you get really good at understanding the tags and filters.
Now realize the AI probably isn't doing that. The AI has no idea how tags work at all. It's probably reading everything.
Based solely on the dead-simple Markov chain nonsensical language processing it can do from that dataset? Oh man, that's hilarious.
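For anyone who hasn't seen one, a "dead-simple Markov chain" text generator really is just a lookup table of which word tends to follow which. A toy sketch in Python (the corpus is obviously made up):

import random
from collections import defaultdict

# Toy stand-in for "every fic on AO3, tags and all"
corpus = "the dragon ate the knight and the knight was not amused by the dragon".split()

# Order-1 chain: map each word to the words observed right after it
chain = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    chain[current_word].append(next_word)

# "Write" by repeatedly picking a random follower of the current word
word = random.choice(corpus)
output = [word]
for _ in range(15):
    followers = chain.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))  # locally plausible, globally nonsense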
But okay, I'm going to give the engineers here a lot more credit. They're not just Markov-chaining and actually trying to do legit natural language processing (NLP). The problem: for NLP, document generation where the AI is writing documents that make sense? That's considered an AI-complete problem and any natural language understanding application/problem falls under one of the great unsolved problems in computer science. Our current technology today cannot do it, since it basically requires creating an AI that is actually capable of passing the Turing Test.
So the creators of this AI are clearly trying to solve this problem, which in and of itself is noble and I honestly don't think nearly as many people would be alarmed if it were an academic institution doing it. And it's not like other corporations are not doing it (there's a joke for over a decade that Google's been attempting to train an AI on search results which is probably less of a joke in reality). But the fact is that it's tied to Musk, which is alarming because of his recent actions with Twitter.
But also hilarious because this man has a track record of having absolutely no concept of how difficult actual technical problems are. Twitter is just the most recent one to blow up in his face. But he's also promising brain implants in 6 months, and I would put money down on guessing that he hasn't solved the seemingly-simple-but-actually-complex problem of actually implanting them into the brain. (I looked into this 10 years ago in a job. You basically have to slide hundreds of tiny knives into a bowl of jello without damaging the jello. On a time limit, so you can't just go very very slow. Theoretically doing it on a mouse model was terrifying enough.) This is also a man who thought he was going to create and manufacture a miniature rescue submarine for cave diving and ship it halfway around the world in under two weeks.
This man is going to get an AI that will write terrible nonsense that makes My Immortal look like Shakespeare and Tolstoy. And it will be incredibly bad porn to boot.
65 notes · View notes