#anti cloud ai
melyzard · 1 year ago
Text
Time for a new edition of my ongoing vendetta against Google fuckery!
Hey friends, did you know that Google is now using Google Docs to train its AI, whether you like it or not? (link goes to: zdnet.com, July 5, 2023). Oh, and on Monday, Google updated its privacy policy to say that it can train its two AIs (Bard and Cloud AI) on any data it scrapes from its users, period. (link goes to: The Verge, 5 July 2023). Here is Digital Trends also mentioning this new policy change (link goes to: Digital Trends, 5 July 2023). There are a lot more; these are just the most succinct articles that might explain what's happening.
FURTHER REASONS GOOGLE AND GOOGLE CHROME SUCK TODAY:
Stop using Google Analytics, warns Sweden’s privacy watchdog, as it issues over $1M in fines (link goes to: TechCrunch, 3 July 2023) [TLDR: google got caught exporting european users' data to the US to be 'processed' by 'US government surveillance,' which is HELLA ILLEGAL. I'm not going into the Five Eyes, Fourteen Eyes, etc agreements, but you should read up on those to understand why the 'US government surveillance' people might ask Google to do this for countries that are not a part of the various Eyes agreements - and before anyone jumps in with "the US sucks!" YES but they are 100% not the only government buying foreign citizens' data, this is just the one the Swedes caught. Today.]
PwC Australia ties Google to tax leak scandal (link goes to: Reuters, 5 July 2023). [TLDR: the accounting firm PwC slipped Google "confidential information about the start date of a new tax law leaked from Australian government tax briefings." Gosh, why would Google want to spy on governments about tax laws? Can't think of any reason they would want to be able to clean house/change policy/update their user agreement to get around new restrictions before those restrictions or fines hit. Can you?]
SO - here is a very detailed list of browsers, updated on 28 June, 2023 on slant.com, that are NOT based on Google Chrome (note: any browser that says 'Chromium-based' is just Google wearing a party mask. It means that Google AND that other party have access to all your data). This is an excellent list that shows pros and cons for each browser, including who the creator is and what kinds of policies they have (for example, one con for Pale Moon is that the creator doesn't like Tor and thinks all websites should be hostile to it).
97 notes · View notes
dougielombax · 6 months ago
Text
“But, but ChatGPT told me-
No!
Shut!
Stop!
Be quiet for SEVERAL days.
And let me explain why you are very very very very very very very very very very wrong wrong wrong wrong WRONG!
Now!
ChatGPT told you what you WANTED to hear!
It’s a digital bullshit artist that passes off nonsense masquerading as fact!
(That is when it isn’t fuelling plagiarism or spreading mis/disinformation!)
Do NOT cite its word as though it were some wise all-knowing sage!
You absolute child!
Even children would know better than to do that!
Probably…
38 notes · View notes
Note
something something grow a pair and state thoughts on ai?
So, funny story, I made a post about this before, whenever the topic tag for it was trending. And like, I still stand by that, sans the part where I call the AI itself a form of art under my definition. A little bit after that, I saw a post that, while definitely not in response to my own, made the point that while we should hate AI art for the rampant theft of jobs and content, it's somehow bad to dislike it as Bad Art or Not Art because "gatekeeping art is baddd". Which like, in the context of someone drawing stick figures or painting giant blocks of color, is valid; we shouldn't gatekeep art from people. I still think AI doesn't deserve that privilege. Like, not to try and define art again, but, like hold on let me grab something.
Tumblr media
This is an AI-generated adoptable from DeviantArt. Now, I have to ask, what's being expressed here- besides "cute girl in big hoodie (despite the one on the left not having a hoodie)"? Like it's easy to take these apart mechanically, but conceptually? It's somehow easier. Like, part of character design is visually communicating stuff about the character. There's nothing here besides anime girl in big outfit with minor armor details maybe? Like nothing else here is coherent! Like she looks sampled off of Genshin and Honkai characters but that's it. Like the curtains are just blue, and it's dull and boring because of it. Why is the jacket neon green? The prompter wanted it that way. Why does she have the shoulder pieces and the case she's holding? Because the prompter likely put "battle girl" and/or "solarpunk" into the prompt. And it's not bad to have design elements for the sake of it, but the AI can't do anything but that, and the content it generates suffers because of it. There's no artistic value there, imo.
Now, not to toot my own horn, but here's my take on this design:
Tumblr media
This is still a "cute girl in a big lime green jacket", but there's more to it. It's a high visibility jacket, with stripes reminiscent of construction vests. In the other doodles on the page, this high visibility theme is expanded to a theme of her being some kind of rescue personnel, and/or an angel (see: the halo in the bottom right). While it's fairly easy for me to point these themes out- it is what I intended- I'd still argue an observer would be able to point out similar, or other, themes and motifs that bring this character together.
No amount of prompts and generation models can recreate that. Even if the prompter of the og AI piece had the exact same intent I had, that intent doesn't come across whatsoever. Because AI cannot replicate human intent and artistic processes.
These image generators register to me as the miserable end point of the sad, art-illiterate belief that art only is, and is only meant to, "look pretty". Every time modern art is decried as "ugly and pointless", another prompter gets validated in their shameless attempts to assert their narrow-as-fuck vision of what art is.
Art is human. Art is messy, art is intricate, art is sloppy, art is beautiful and art is ugly.
No machine on earth can comprehend or replicate that. And the ceaseless attempts to commodify and capitalize on art have made some people forget that fact. The kinds of people who prompt really only see art as a gimmick product, pretty knickknacks that will make them rich quick.
For lack of better terms, the dehumanization of art itself is disgusting, and so like hell am I going to consider AI's mass-produced, slot machine-esque drivel as art.
And I will not be guilted by other people on this hellsite who think it's a moral failure to call mindless content what it is because it's dressed up in distorted frills and anime girl boobs.
Art is human, and AI is not human. And what a sad world it is, that we're automating and strangling human creation, instead of letting it thrive.
Thank you for reminding me to share my thoughts.
38 notes · View notes
sunnyskies281 · 3 months ago
Text
My mom keeps using Meta AI to write her poems.
Mom….
I’m a poet.
Just ask me please.
2 notes · View notes
muppetminge · 1 year ago
Text
look i'm not going to pretend like my generation didn't have models that weighed less than a bag of sand and airbrushing in the magazines and all that shit because we did and that was fucked up too
but i get so like. genuinely freaked out by like filters on social media and those kinds of things. it makes me worry for the girls who are growing up with these things as normal. i just can't help but feel like a filter that tries to *correct* your fucking face in real time must be so so so much worse than what we had? even just the "silly/fun" ones still smooth out your skin and shave off half your nose and reshape your face. so many phones have magic smoothing as an automatic feature on the front cams. so it's like not even an active choice or something you're aware of. and so much of this world is based on selfies and videos so you're gonna be seeing it *constantly*. you take a selfie for fun but the photo is unrecognisable. it's not you. if that's not a breeding ground for body dysmorphia i don't know what is.
and we knew that those "model standards" were unrealistic and unattainable and they still fucked us up! but today you're seeing your peers all made up like that online and logically that must connect into a feeling of like. that should be attainable? but it's still not! and idk but that can't be fucking healthy.
it just feels like to me there's a difference between seeing heidi klum or whoever and then your classmate maria posting pictures with perfect skin, straightened nose, whitened teeth. it's like the insane otherworldly standards we grew up with have been pulled down into everyday life. idk i just don't think it's coincidence that today we have 15 year olds sharing anti-aging routines and wearing 5 layers of makeup just to leave the house. the standards for a normal face have been digitally altered
7 notes · View notes
tthomusic · 9 months ago
Text
Reminds me of the album art to GHOSTEMANE’s “ANTI-ICON”
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Antony Gormley: 'Quantum Cloud' Series (2000) medium: steel
7K notes · View notes
seastarryclouds · 3 months ago
Text
Pheww that’s all the artfight! Now I gotta upload them all to insta and twitter
0 notes
inkskinned · 5 months ago
Text
it is hard to explain but there is something so unwell about the cultural fear of ugliness. the strange quiet eradication of any imperfect sight. the pores and the stomachs and the legs displaced into a digital trashbin. somehow this effect spilling over - the removal of grinning strangers in the back of a picture. of placing more-photogenic clouds into a frame. of cleaning up and arranging breakfast plates so the final image is of a table overflowing with surplus - while nobody eats, and instead mimes food moving towards their mouth like tantalus.
ever-thinner ever-more-muscled ever-prettier. your landlord's sticky white paint sprayed over every surface. girlchildren with get-ready-with-me accounts and skincare routines. beige walls and beige floors and beige toys in toddler hands. AI-generated "imagined prettier" birds and bugs and bees.
pretty! fuckable! impossible! straighten teeth. use facetune and lightroom and four other products. remove the cars along the street from the video remove the spraypaint from the garden wall remove the native plants from their home, welcome grass. welcome pretty. let the lot that walmart-still-owns lay fallow and rotting. don't touch that, it's ugly! close your eyes.
erect anti-homelessness spikes. erect anti-bird spikes. now it looks defensive, which is better than protective. put the ramp at the back of the building, you don't want to ruin the aesthetic of anything.
you are a single person in this world, and in this photo! don't let the lives of other people ruin what would otherwise be a shared moment! erase each person from in front of the tourist trap. erase your comfortable shoes and AI generate platforms. you weren't smiling perfectly, smile again. no matter if you had been genuinely enjoying a moment. you are not in a meadow with friends, you're in a catalogue of your own life! smile again! you know what, forget it.
we will just edit the right face in.
8K notes · View notes
mostlysignssomeportents · 4 months ago
Text
Unpersoned
Tumblr media
Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers' Workshop!
Tumblr media
My latest Locus Magazine column is "Unpersoned." It's about the implications of putting critical infrastructure into the private, unaccountable hands of tech giants:
https://locusmag.com/2024/07/cory-doctorow-unpersoned/
The column opens with the story of romance writer K Renee, as reported by Madeline Ashby for Wired:
https://www.wired.com/story/what-happens-when-a-romance-author-gets-locked-out-of-google-docs/
Renee is a prolific writer who used Google Docs to compose her books, and share them among early readers for feedback and revisions. Last March, Renee's Google account was locked, and she was no longer able to access ten manuscripts for her unfinished books, totaling over 220,000 words. Google's famously opaque customer service – a mix of indifferently monitored forums, AI chatbots, and buck-passing subcontractors – would not explain to her what rule she had violated, merely that her work had been deemed "inappropriate."
Renee discovered that she wasn't being singled out. Many of her peers had also seen their accounts frozen and their documents locked, and none of them were able to get an explanation out of Google. Renee and her similarly situated victims of Google lockouts were reduced to developing folk-theories of what they had done to be expelled from Google's walled garden; Renee came to believe that she had tripped an anti-spam system by inviting her community of early readers to access the books she was working on.
There's a normal way that these stories resolve themselves: a reporter like Ashby, writing for a widely read publication like Wired, contacts the company and triggers a review by one of the vanishingly small number of people with the authority to undo the determinations of the Kafka-as-a-service systems that underpin the big platforms. The system's victim gets their data back and the company mouths a few empty phrases about how they take something-or-other "very seriously" and so forth.
But in this case, Google broke the script. When Ashby contacted Google about Renee's situation, Google spokesperson Jenny Thomson insisted that the policies for Google accounts were "clear": "we may review and take action on any content that violates our policies." If Renee believed that she'd been wrongly flagged, she could "request an appeal."
But Renee didn't even know what policy she was meant to have broken, and the "appeals" went nowhere.
This is an underappreciated aspect of "software as a service" and "the cloud." As companies from Microsoft to Adobe to Google withdraw the option to use software that runs on your own computer to create files that live on that computer, control over our own lives is quietly slipping away. Sure, it's great to have all your legal documents scanned, encrypted and hosted on GDrive, where they can't be burned up in a house-fire. But if a Google subcontractor decides you've broken some unwritten rule, you can lose access to those docs forever, without appeal or recourse.
That's what happened to "Mark," a San Francisco tech worker whose toddler developed a UTI during the early covid lockdowns. The pediatrician's office told Mark to take a picture of his son's infected penis and transmit it to the practice using a secure medical app. However, Mark's phone was also set up to sync all his pictures to Google Photos (this is a default setting), and when the picture of Mark's son's penis hit Google's cloud, it was automatically scanned and flagged as Child Sex Abuse Material (CSAM, better known as "child porn"):
https://pluralistic.net/2022/08/22/allopathic-risk/#snitches-get-stitches
Without contacting Mark, Google sent a copy of all of his data – searches, emails, photos, cloud files, location history and more – to the SFPD, and then terminated his account. Mark lost his phone number (he was a Google Fi customer), his email archives, all the household and professional files he kept on GDrive, his stored passwords, his two-factor authentication via Google Authenticator, and every photo he'd ever taken of his young son.
The SFPD concluded that Mark hadn't done anything wrong, but it was too late. Google had permanently deleted all of Mark's data. The SFPD had to mail a physical letter to Mark telling him he wasn't in trouble, because he had no email and no phone.
Mark's not the only person this happened to. Writing about Mark for the New York Times, Kashmir Hill described other parents, like a Houston father identified as "Cassio," who also lost their accounts and found themselves blocked from fundamental participation in modern life:
https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html
Note that in none of these cases did the problem arise from the fact that Google services are advertising-supported, and that because these people weren't paying for the product, they were the product. Buying an $800 Pixel phone or paying more than $100/year for a Google Drive account means that you're definitely paying for the product, and you're still the product.
What do we do about this? One answer would be to force the platforms to provide service to users who, in their judgment, might be engaged in fraud, or trafficking in CSAM, or arranging terrorist attacks. This is not my preferred solution, for reasons that I hope are obvious!
We can try to improve the decision-making processes at these giant platforms so that they catch fewer dolphins in their tuna-nets. The "first wave" of content moderation appeals focused on the establishment of oversight and review boards that wronged users could appeal their cases to. The idea was to establish these "paradigm cases" that would clarify the tricky aspects of content moderation decisions, like whether uploading a Nazi atrocity video in order to criticize it violated a rule against showing gore, Nazi paraphernalia, etc.
This hasn't worked very well. A proposal for "second wave" moderation oversight based on arms-length semi-employees at the platforms who gather and report statistics on moderation calls and complaints hasn't gelled either:
https://pluralistic.net/2022/03/12/move-slow-and-fix-things/#second-wave
Both the EU and California have privacy rules that allow users to demand their data back from platforms, but neither has proven very useful (yet) in situations where users have their accounts terminated because they are accused of committing gross violations of platform policy. You can see why this would be: if someone is accused of trafficking in child porn or running a pig-butchering scam, it would be perverse to shut down their account but give them all the data they need to go on committing these crimes elsewhere.
But even where you can invoke the EU's GDPR or California's CCPA to get your data, the platforms deliver that data in the most useless, complex blobs imaginable. For example, I recently used the CCPA to force Mailchimp to give me all the data they held on me. Mailchimp – a division of the monopolist and serial fraudster Intuit – is a favored platform for spammers, and I have been added to thousands of Mailchimp lists that bombard me with unsolicited press pitches and come-ons for scam products.
Mailchimp has spent a decade ignoring calls to allow users to see what mailing lists they've been added to, as a prelude to mass unsubscribing from those lists (for Mailchimp, the fact that spammers can pay it to send spam that users can't easily opt out of is a feature, not a bug). I thought that the CCPA might finally let me see the lists I'm on, but instead, Mailchimp sent me more than 5,900 files, scattered through which were the internal serial numbers of the lists my name had been added to – but without the names of those lists or any contact information for their owners. I can see that I'm on more than 1,000 mailing lists, but I can't do anything about it.
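To make the problem concrete, here is a hedged sketch of what trawling such a dump looks like. The file layout and the "list_id" field name are my inventions for illustration – the real Mailchimp export has its own (far messier) formats – but the shape of the task is the same: scan thousands of files, fish out opaque serial numbers, end up with identifiers you can't act on.

```python
import json
import pathlib

def collect_list_ids(dump_dir: str) -> set[str]:
    """Scan a data-dump directory for records carrying a (hypothetical)
    'list_id' field and return the set of unique list serial numbers."""
    ids = set()
    for path in pathlib.Path(dump_dir).rglob("*.json"):
        try:
            record = json.loads(path.read_text())
        except (json.JSONDecodeError, UnicodeDecodeError):
            continue  # many files in a real dump aren't parseable JSON at all
        if isinstance(record, dict) and "list_id" in record:
            ids.add(str(record["list_id"]))
    return ids
```

Even with a scraper like this, a bare serial number is a dead end without the list's name or its owner's contact details – which is exactly the subversion described above.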
Mailchimp shows how a rule requiring platforms to furnish data-dumps can be easily subverted, and its conduct goes a long way to explaining why a decade of EU policy requiring these dumps has failed to make a dent in the market power of the Big Tech platforms.
The EU has a new solution to this problem. With its 2024 Digital Markets Act, the EU is requiring platforms to furnish APIs – programmatic ways for rivals to connect to their services. With the DMA, we might finally get something parallel to the cellular industry's "number portability" for other kinds of platforms.
If you've ever changed cellular platforms, you know how smooth this can be. When you get sick of your carrier, you set up an account with a new one and get a one-time code. Then you call your old carrier, endure their pathetic begging not to switch, give them that number and within a short time (sometimes only minutes), your phone is now on the new carrier's network, with your old phone-number intact.
This is a much better answer than forcing platforms to provide service to users whom they judge to be criminals or otherwise undesirable, but the platforms hate it. They say they hate it because it makes them complicit in crimes ("if we have to let an accused fraudster transfer their address book to a rival service, we abet the fraud"), but it's obvious that their objection is really about being forced to reduce the pain of switching to a rival.
There's a superficial reasonableness to the platforms' position, but only until you think about Mark, or K Renee, or the other people who've been "unpersonned" by the platforms with no explanation or appeal.
The platforms have rigged things so that you must have an account with them in order to function, but they also want to have the unilateral right to kick people off their systems. The combination of these demands represents more power than any company should have, and Big Tech has repeatedly demonstrated its unfitness to wield this kind of power.
This week, I lost an argument with my accountants about this. They provide me with my tax forms as links to a Microsoft Cloud file, and I need to have a Microsoft login in order to retrieve these files. This policy – and a prohibition on sending customer files as email attachments – came from their IT team, and it was in response to a requirement imposed by their insurer.
The problem here isn't merely that I must now enter into a contractual arrangement with Microsoft in order to do my taxes. It isn't just that Microsoft's terms of service are ghastly. It's not even that they could change those terms at any time, for example, to ingest my sensitive tax documents in order to train a large language model.
It's that Microsoft – like Google, Apple, Facebook and the other giants – routinely disconnects users for reasons it refuses to explain, and offers no meaningful appeal. Microsoft tells its business customers, "force your clients to get a Microsoft account in order to maintain communications security" but also reserves the right to unilaterally ban those clients from having a Microsoft account.
There are examples of this all over. Google recently flipped a switch so that you can't complete a Google Form without being logged into a Google account. Now, my ability to pursue all kinds of matters, both consequential and trivial, turns on Google's good graces, which can change suddenly and arbitrarily. If I were like Mark, permanently banned from Google, I wouldn't have been able to complete Google Forms this week telling a conference organizer what size t-shirt I wear, but also telling a friend that I could attend their wedding.
Now, perhaps some people really should be locked out of digital life. Maybe people who traffick in CSAM should be locked out of the cloud. But the entity that should make that determination is a court, not a Big Tech content moderator. It's fine for a platform to decide it doesn't want your business – but it shouldn't be up to the platform to decide that no one should be able to provide you with service.
This is especially salient in light of the chaos caused by Crowdstrike's catastrophic software update last week. Crowdstrike demonstrated what happens to users when a cloud provider accidentally terminates their account, but while we're thinking about reducing the likelihood of such accidents, we should really be thinking about what happens when you get Crowdstruck on purpose.
The wholesale chaos that Windows users and their clients, employees, users and stakeholders underwent last week could have been pieced out retail. It could have come as a court order (either by a US court or a foreign court) to disconnect a user and/or brick their computer. It could have come as an insider attack, undertaken by a vengeful employee, or one who was on the take from criminals or a foreign government. The ability to give anyone in the world a Blue Screen of Death could be a feature and not a bug.
It's not that companies are sadistic. When they mistreat us, it's nothing personal. They've just calculated that it would cost them more to run a good process than our business is worth to them. If they know we can't leave for a competitor, if they know we can't sue them, if they know that a tech rival can't give us a tool to get our data out of their silos, then the expected cost of mistreating us goes down. That makes it economically rational to seek out ever-more trivial sources of income that impose ever-more miserable conditions on us. When we can't leave without paying a very steep price, there's practically a fiduciary duty to find ways to upcharge, downgrade, scam, screw and enshittify us, right up to the point where we're so pissed that we quit.
Google could pay competent decision-makers to review every complaint about an account disconnection, but the cost of employing that large, skilled workforce vastly exceeds their expected lifetime revenue from a user like Mark. The fact that this results in the ruination of Mark's life isn't Google's problem – it's Mark's problem.
The cloud is many things, but most of all, it's a trap. When software is delivered as a service, when your data and the programs you use to read and write it live on computers that you don't control, your switching costs skyrocket. Think of Adobe, which no longer lets you buy programs at all, but instead insists that you run its software via the cloud. Adobe used the fact that you no longer own the tools you rely upon to cancel its Pantone color-matching license. One day, every Adobe customer in the world woke up to discover that the colors in their career-spanning file collections had all turned black, and would remain black until they paid an upcharge:
https://pluralistic.net/2022/10/28/fade-to-black/#trust-the-process
The cloud allows the companies whose products you rely on to alter the functioning and cost of those products unilaterally. Like mobile apps – which can't be reverse-engineered and modified without risking legal liability – cloud apps are built for enshittification. They are designed to shift power away from users to software companies. An app is just a web-page wrapped in enough IP to make it a felony to add an ad-blocker to it. A cloud app is some Javascript wrapped in enough terms of service clickthroughs to make it a felony to restore old features that the company now wants to upcharge you for.
Google's defenestration of K Renee, Mark and Cassio may have been accidental, but Google's capacity to defenestrate all of us, and the enormous cost we all bear if Google does so, has been carefully engineered into the system. Same goes for Apple, Microsoft, Adobe and anyone else who traps us in their silos. The lesson of the Crowdstrike catastrophe isn't merely that our IT systems are brittle and riddled with single points of failure: it's that these failure-points can be tripped deliberately, and that doing so could be in a company's best interests, no matter how devastating it would be to you or me.
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/07/22/degoogled/#kafka-as-a-service
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
521 notes · View notes
onemillionfurries · 8 months ago
Text
"Your art isn't even good enough to be fed into AI" GOOD! I don't WANT it to be! I want people to come to my art because they see something unique and personal, a part of my own personality and taste. Not some pretty-yet-generic slop these generators spit out.
42 notes · View notes
tangibletechnomancy · 8 months ago
Text
Doing It Wrong On Purpose: Episode 1 - The Un-Ship
Today's experiment: What happens if I prompt for something, and then negative prompt all the main keywords, plus various synonyms and related words?
The answer: Some gloriously weird stuff.
For example, let's look at a negative cat:
Positive prompt: A cat on a windowsill during a storm
Negative prompt: Cat, feline, felidae, kitty, kitten, animal, pet, windowsill, window, glass, pane, house, storm, rain, water, lightning, thunder, clouds, torrent, downpour, snow, blizzard, wind, windy
Tumblr media Tumblr media Tumblr media Tumblr media
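The recipe is mechanical enough to script: take every content word in the positive prompt, fold in a hand-curated synonym table, and feed the result back as the negative prompt. A minimal sketch – the stopword and synonym tables here are mine, not anything built into Simple Stable:

```python
# "Negate your own prompt": every content word in the positive prompt,
# plus curated synonyms, becomes the negative prompt.
STOPWORDS = {"a", "an", "the", "on", "in", "of", "during", "with", "and"}

# Hypothetical synonym table -- in practice this is curated by hand,
# as the post does ("cat, feline, felidae, kitty...").
SYNONYMS = {
    "cat": ["feline", "felidae", "kitty", "kitten"],
    "windowsill": ["window", "glass", "pane"],
    "storm": ["rain", "lightning", "thunder", "clouds"],
}

def build_negative_prompt(positive: str) -> str:
    terms = []
    for word in positive.lower().replace(",", " ").split():
        if word in STOPWORDS:
            continue
        if word not in terms:
            terms.append(word)
        for syn in SYNONYMS.get(word, []):
            if syn not in terms:
                terms.append(syn)
    return ", ".join(terms)

print(build_negative_prompt("A cat on a windowsill during a storm"))
# -> cat, feline, felidae, kitty, kitten, windowsill, window, glass, pane, storm, rain, lightning, thunder, clouds
```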
Interesting! Let's get a little more fantasy with it and try for an anti-deer:
Positive prompt: A deer in a peaceful flowery meadow, crystals, midnight, fantasy, colorful
Negative prompt: Deer, cervidae, animal, elk, moose, stag, doe, fawn, reindeer, antelope, cervid, antlers, flowers, night, dark, trees, foliage, bloom, stars, night, tranquil, fantastic, vibrant, cool, magic, blue, moon, sky, crystal, stone, statue, topiary, floral, blossom
Tumblr media Tumblr media Tumblr media Tumblr media
Between these two experiments, including a few dozen other generations that remain unposted, one thing I can say for sure is that for living subjects, it's a great way to get the kind of anatomical wonk that older models are (in)famous for - and it makes sense why: the model is trying to make something that looks like a certain subject... but once it starts to look too much like it, well, shit, we told it NOT to do that! Break something up! Given that I love that kind of wonk, I think I've found a useful tool for myself.
One more living subject, and let's get even more abstract with our direction here:
Positive prompt: mind horse
Negative prompt: horse, equine, colt, filly, mare, stallion, bronco, pony, mind, brain, thought, essence, psyche, intelligence, consciousness, imagination, dream, soul, visualization, intellect, wit, cognizance
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Now let's try something that isn't alive. One thing I love AI for is surreal settings and landscapes - let's try one now!
Positive prompt: A magic palace garden made of crystal and gold
Negative prompt: Palace, magic, crystal, gold, fantasy, castle, estate, stronghold, temple, garden, flowers, plants, blossoms, bloom, blooms, trees, grass, stems, foliage, leaves, greenery, branches, bush, bushes, hedge, hedges, metal, luxury, stone, glass, brass, rose, polished, jewel, prism, courtyard
Tumblr media Tumblr media Tumblr media Tumblr media
I then tried to see if, learning from the animal subjects, I could make it more likely to return one of my favorite "mistakes" - making it impossible to discern the point where a water area ends and a sky area begins. I wasn't immediately successful, but I came up with some results I found pleasing regardless:
Positive prompt: Secret hideout in a cave behind a waterfall in the foggy forest on a floating sky island in fluffy clouds
Negative prompt: hideout, camp, campsite, home, abode, house, dwelling, rest, shelter, waterfall, water, cave, grotto, forest, woods, woodland, trees, fountain, cascade, pond, stream, lake, river, brook, puddle, creek, pool, beach, ocean, sea, cloud, clouds, sky, cumulus, cirrus, nimbus, fog, storm, rain, sunshower, falls
Tumblr media Tumblr media Tumblr media Tumblr media
It seems that with landscapes it's got a much clearer and more specific "idea" of what a [SUBJECT] without [SUBJECT] looks like; it's more inclined to invent very specific, very consistent, unasked-for related elements. With the animals, I was tweaking the weight on the positive prompt to avoid getting straightforwardly just what I had positive- (and negative-) prompted, but with landscapes, I just get... almost something else entirely.
So how about inanimate objects? Let's try a ship, perhaps?
Positive prompt: A huge sailing ship with brilliant prismatic crystal sails on a stormy, turbulent sea of sunset clouds
Negative prompt: ship, boat, sailboat, sailing ship, pirate ship, galleon, ketch, schooner, sloop, cutter, sail, sea, ocean, storm, wind, rain, water, waves, cloudy, clouds, fog, sunset, dusk, dawn, sunrise, twilight, evening
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
...okay, I'm in love with the un-ship. It truly does manage to consistently give me results that look like, yet entirely unlike, a ship. It is everything I love about AI as a medium. More than that, it is my friend.
Tumblr media Tumblr media Tumblr media Tumblr media
At lower positive prompt weights, they only get even more beautifully chaotic.
I want to live on one of these (in an alternate universe where they're geometrically possible and structurally sound, that is).
Failing that, I will be featuring them a lot from now on.
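For anyone wanting to reproduce the weight-tweaking mentioned above: many frontends accept an inline `(text:weight)` emphasis syntax, in the style popularized by the AUTOMATIC1111 web UI. Whether Simple Stable uses the same markup is an assumption on my part, so treat this tiny helper as a sketch of the convention rather than a recipe for any particular tool:

```python
# Down-weight the positive prompt so the negatives "win" more often.
# The (text:weight) syntax is the AUTOMATIC1111-style convention;
# other frontends may use different markup or a separate weight slider.
def weight(prompt: str, w: float) -> str:
    return f"({prompt}:{w})"

positive = weight("A huge sailing ship with prismatic crystal sails", 0.6)
print(positive)  # -> (A huge sailing ship with prismatic crystal sails:0.6)
```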
All images generated using Simple Stable, under the Code of Ethics of Are We Art Yet?
334 notes · View notes
pro-sipper · 9 months ago
Text
#there is a difference between someone drawing out their csa to cope#and someone making fictional cp
Antis once again proving that they have no idea what they are talking about.
"Fictional cp" is not a thing. CSEM is material that harms an actual child. Or in the case of AI (which would be the only "fictional" aspect you could argue) it would have to be indistinguishable from an actual child. Cartoon drawings or words on a page are not a real child. No real child is being harmed. There is no victim for you to defend here. If drawings or words are indistinguishable from real CSEM to you then that sounds like a problem you need to work out on your own.
Furthermore it's ridiculous to think you'd be able to tell at a glance what art is made to cope and what is made purely for pleasure. You think the Good, Acceptable drawings paint their characters with appropriate frowns on their faces and storm clouds overhead, while the Bad, Unacceptable drawings feature big smiles and floating hearts?
One person could draw something to cope and another could draw something to get off and they could still end up with remarkably similar drawings. Or two people could draw to cope and end up with wildly different interpretations of their trauma. Or two people could draw to get off and also have two very different pieces of art. There is no One Way to do fiction.
And assuming you can tell the difference based on a completely arbitrary set of standards that, let's face it, only you adhere to is incredibly myopic and downright stupid.
276 notes · View notes
nando161mando · 3 months ago
Text
In a leaked recording, Amazon Cloud Chief tells employees that most developers could stop coding soon as A.I. takes over
https://www.businessinsider.com/aws-ceo-developers-stop-coding-ai-takes-over-2024-8
0 notes