All the things that consume my time and energy, just like black holes ~ 26 ~ Norway ~ photography
Note
Did your school days start at 7:30am and end at 2:30pm? (This excludes after-school activities like sports, detention, etc.)
#elementary school was like 7:05 to 2:15 #middle was slightly later #and high was later i dont remember the exact start but it was a bit after 8 and ended at 3:30 #they were staggered because of the buses #drivers delivered the elementary school kids then went and got the middle and high schoolers
3K notes
Text
The exception is cheesy local commercials. Those should be the only ads. I will listen to someone who runs a store in my city doing an awkward rap. We once had a furniture store with these awful CGI ads and the slogan "where the deals are so low, it's almost criminal!" and then they got shut down, by the cops, because it turned out. It turned out the deals were so low because. You're not going to believe this but the prices were so low it was in fact
106K notes
Text
Software developer Xe Iaso reached a breaking point earlier this year when aggressive AI crawler traffic from Amazon overwhelmed their Git repository service, repeatedly causing instability and downtime. Despite configuring standard defensive measures—adjusting robots.txt, blocking known crawler user-agents, and filtering suspicious traffic—Iaso found that AI crawlers continued evading all attempts to stop them, spoofing user-agents and cycling through residential IP addresses as proxies.
Desperate for a solution, Iaso eventually resorted to moving their server behind a VPN and creating "Anubis," a custom-built proof-of-work challenge system that forces web browsers to solve computational puzzles before accessing the site. "It's futile to block AI crawler bots because they lie, change their user agent, use residential IP addresses as proxies, and more," Iaso wrote in a blog post titled "a desperate cry for help." "I don't want to have to close off my Gitea server to the public, but I will if I have to."
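The core idea behind a proof-of-work gate like Anubis can be sketched in a few lines. What follows is a minimal Python illustration of the general technique, not Anubis's actual code: the server hands out a random seed, the visiting browser must find a nonce whose SHA-256 hash meets a difficulty target, and the server verifies the answer with a single hash. The difficulty setting here is made up for the example.

import hashlib
import os

DIFFICULTY = 4  # leading zero hex digits required; illustrative, not Anubis's setting

def issue_challenge():
    # Server side: hand the visitor a random seed to work on.
    return os.urandom(16).hex()

def solve(seed):
    # Client side: brute-force a nonce until the hash meets the target.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{seed}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce
        nonce += 1

def verify(seed, nonce):
    # Server side: checking a claimed solution costs a single hash.
    digest = hashlib.sha256(f"{seed}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)

seed = issue_challenge()
nonce = solve(seed)          # expensive for whoever requests the page
print(verify(seed, nonce))   # cheap for the server: prints True

The asymmetry is the point: a human loads one page and pays the cost once, while a crawler hammering thousands of URLs pays it on every request.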
Iaso's story highlights a broader crisis rapidly spreading across the open source community, as what appear to be aggressive AI crawlers increasingly overload community-maintained infrastructure, causing what amounts to persistent distributed denial-of-service (DDoS) attacks on vital public resources. According to a comprehensive recent report from LibreNews, some open source projects now see as much as 97 percent of their traffic originating from AI companies' bots, dramatically increasing bandwidth costs, destabilizing services, and burdening already stretched-thin maintainers.
Kevin Fenzi, a member of the Fedora Pagure project's sysadmin team, reported on his blog that the project had to block all traffic from Brazil after repeated attempts to mitigate bot traffic failed. GNOME GitLab implemented Iaso's "Anubis" system, requiring browsers to solve computational puzzles before accessing content. GNOME sysadmin Bart Piotrowski shared on Mastodon that only about 3.2 percent of requests (2,690 out of 84,056) passed their challenge system, suggesting the vast majority of traffic was automated. KDE's GitLab infrastructure was temporarily knocked offline by crawler traffic originating from Alibaba IP ranges, according to LibreNews, citing a KDE Development chat. [...]
Tarpits and labyrinths: The growing resistance
In response to these attacks, new defensive tools have emerged to protect websites from unwanted AI crawlers. As Ars reported in January, an anonymous creator identified only as "Aaron" designed a tool called "Nepenthes" to trap crawlers in endless mazes of fake content. Aaron explicitly describes it as "aggressive malware" intended to waste AI companies' resources and potentially poison their training data.
"Any time one of these crawlers pulls from my tarpit, it's resources they've consumed and will have to pay hard cash for," Aaron explained to Ars. "It effectively raises their costs. And seeing how none of them have turned a profit yet, that's a big problem for them."
On Friday, Cloudflare announced "AI Labyrinth," a similar but more commercially polished approach. Unlike Nepenthes, which is designed as an offensive weapon against AI companies, Cloudflare positions its tool as a legitimate security feature to protect website owners from unauthorized scraping, as we reported at the time.
"When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them," Cloudflare explained in its announcement. The company reported that AI crawlers generate over 50 billion requests to their network daily, accounting for nearly 1 percent of all web traffic they process.
The community is also developing collaborative tools to help protect against these crawlers. The "ai.robots.txt" project offers an open list of web crawlers associated with AI companies and provides premade robots.txt files that implement the Robots Exclusion Protocol, as well as .htaccess files that return error pages when detecting AI crawler requests.
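For reference, an entry under the Robots Exclusion Protocol simply pairs a User-agent line with Disallow rules. The lines below are a small sample in the spirit of what ai.robots.txt provides; the project's maintained list of AI-associated user agents is much longer, and these names are only commonly cited examples.

# illustrative robots.txt excerpt, not the ai.robots.txt project's full list
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

As the article notes, though, robots.txt is advisory: crawlers that spoof their user agent or ignore the file entirely are exactly why the proof-of-work and tarpit approaches above exist.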
596 notes
Audio
3K notes
Text
i love when people share music on tumblr. It's like we're all sitting on the bus passing around an iPod and i have one headphone in my ear and u have one in yours
3K notes
Text

Don’t make me put your ass in the sealbarrow
46K notes
Text
i just think it says a lot about the person. my favorite is a bear named theodore
please reblog
13K notes
Text
made a list of my statistically top 50 artists!
!!! I MISSPELLED CONSCIOUSLY AND NOW I CANT CHANGE IT FUCK
46 notes
Text
i need to learn every language in the world so i can know if the subtitles are right or wrong
3K notes