Text
okay, what am i doing here
#so many tools and factors i dont know#man i just want to read books and do some research#not make ai-based databases or whatever#maybe im not gonna be a good phd student#domi talks
4 notes
Text
I saw a horrible AI Tam and Lucy this morning in animal onesies and had to use my actual human hands to make a better version.
After drawing the whole thing I was like damn....I should have made s*xy pin ups with little ears, so if you want to tell me to do that, consider joining the patreon
Edited because tumblr absolutely will not allow me to reply to messages, so I'm trying to reply to @booksnwriting:
The difference is that the other person put their prompt into a computer and the computer program took a bunch of artists' hard work and skill without permission to fill that prompt. I'm taking human skill, time and effort and using it to fill a prompt. What I'm doing is no different than any art challenge, or draw your OTP like this meme or whatever. Do I have their permission? No, but frankly if you're out here stealing other people's skills and calling it art, then I think it's only fair that your ideas can be turned around and used as prompts by people with those skills to produce actual drawings.

Furthermore, what I'm doing is not hurting them, but if they didn't have access to a database of stolen work, maybe they would have given money to an actual human artist to draw their prompt, maybe they would have held a little drawing prompt contest and shared the art and gained real artists exposure which could then allow those artists to find work doing other commissions.

Even if they didn't do either of those things, even if not having AI meant their idea just stayed in their head never to see the light of day, the existence of AI art in general devalues skills people had to work to develop and takes jobs away from those people by taking their existing work. That actively hurts artists. I'm "bashing" them because what they're doing is actively harming me and people like me.
Based on your user name I assume you are probably a writer, and I would like to ask you: is there more value in an actual human being writing things like fanfic (stories using other people's ideas as their jumping off point) or original books that include genre tropes than there is in typing prompts into a text generating AI?
Would you be annoyed if someone chose to write a retelling of Dracula using their own brain and hands in response to hearing that someone else was marketing a Dracula retelling that they'd "written" using a text autofill program?
And just so we're clear, the thing that makes the AI horrible is that it's AI, not whether or not it's nice to look at.
248 notes
Text
Robotsss
remember this post
well i drew more stuff for the au and completely forgot about it- (;-_-)
the CQ brothers :D they are all (or are at least inhabiting) rogue Scrapper bots.
Geno used to be an entirely red Scrapper bot. He was on a team with Fresh (who was disguised at the time), with Cq being the sort of handler/trainer person for the bots. One day on a mission Geno gets kind of wrecked. Like REALLY damaged (his core got damaged), and the people in charge tried to transfer Geno's AI to a newer model body. It can be hard to train a Scrapper bot AI, so it's best to try and save pre-existing ones. Things were thought to have gone well... But Geno was in fact still in his old body, and the people just ended up making an anomalous consciousness in the new body. An Error.
Error acted sort of like Geno at first. But if a person knew Geno beforehand, they could definitely tell something was off. Error would also randomly glitch and reboot. It got to be such a problem that management was just going to Scrap him. Of course Error did not like that... So he ended up destroying the entire facility and half a city block.
after that fiasco Error went to try and find Geno. Error kind of didn't really know what to do with himself, so he thought that if he could find the consciousness he glitched off of, he might have a better idea. tbh he was also bored.
Error did end up finding Geno, and someone he may or may not have terrified into fixing him (at least as much as possible). So Error and Geno start hiding out in an abandoned warehouse, just sort of doing whatever catches their interest. and also Fresh shows up.
Fresh is a Virus. He used to be a sort of mascot for some kids' brand that eventually shut down, but his AI was never properly disposed of, so Fresh has just been robot-body-hopping ever since. He decided to infect a Scrapper bot in the first place mostly because he was curious, but also because, the body being higher quality, Fresh could inhabit it longer without its code being completely scrambled and unusable.
Fresh had still been with the Scrapper bot agency place when Error went rogue. He thought that he would probably have the best chance of finding Error if he had access to the Scrappers' tracking database. Fresh was mostly curious about Error.
After finding them, Fresh basically scrambled Error's (and also, surprisingly, Geno's) still-active signals, making them untraceable (the perks of being a virus, i suppose). He leaves and finds the warehouse Geno and Error were hiding out in. Geno was able to convince Error not to destroy Fresh (barely). All three have been a sort of trio ever since and are wanted on Three different planets :D (they were able to hijack a ship) and currently have a remote hangar/base. Geno is currently trying to find Cq, who disappeared after he got damaged.
#herrings rambles#undertale au#utmv#herrings art#Cq brothers#geno sans#fresh sans#error sans#Robot au
72 notes
Note
AITA for not being entirely negative about AI?
05/16/2024
Just before anyone scrolls down just to vote YTA, please hear me out: I'm not an AI bro, I am a hobbyist artist, I do not use generative AI, I know that it's all mostly based off stolen work and that's obviously Bad.
That being said, I am also an IT major, so I understand the technology behind it as well as the industry using it. Because of this I understand that at this point it is very, very unlikely that AI art will ever go away. I feel like the best deal actual artists can get out of it is a compromise on what is and isn't allowed to be used for machine learning. I would love to be proven wrong though, and I'm still hoping the lawsuits against OpenAI and others will set a precedent favouring artists over the technology.
Now, to the meat of this ask: I was talking in a discord server with my other artist friends, some of whom are actually professionals (all around the same age as me), and the topic of discussion was just how much AI art sucks, mostly concerning the fact that another artist we like (but don't know personally) had their works stolen and used in AI. The conversation then developed into talking about how hard it is to get a job in the industry where we live, and how AI is now going to make that even worse. That's when I said something along the lines of: "In an ideal world, artists would get paid for all the works of theirs that are in AI learning databases so they can have easy passive income and not have to worry about getting jobs at shitty companies that wouldn't appreciate them anyway." To me that seemed like a pretty sensible take. I mean, if I could just get free money every month for (consensually) putting a few dozen of my pieces in some database one time, I honestly would probably leave IT and just focus on art full time, since that's always been my passion, whereas programming is more of a "I'm good at it but not that excited about doing it, but it pays well so whatever".
My friends on the other hand did not share the sentiment, saying that in an ideal world AI art would be outlawed and the companies hiring them would not be shitty. I did agree about the companies being less shitty, but disagreed about AI being outlawed. I said that the major issue with AI is copyright, so if tech companies were forced to get artists' full permission to use their work first, as well as provide monetary compensation, there really wouldn't be anything wrong with using the technology (concerning stylized AI art, that is, not deepfakes or realistic AI images, as those have a completely different slew of moral issues).
This really pissed a few of them off and they accused me of defending AI art. I had to explain that I wasn't defending AI art as it works NOW, because I know that the way it works NOW is very harmful; I was describing an IDEAL scenario, not even something I think is particularly realistic, just something I think would be cool if it were actually possible. The rest of the argument was honestly just spinning in circles, with me trying to explain the same points and them being outraged that I'm not 100% wholeheartedly bashing even the mere concept of AI, until I just got frustrated and left the conversation.
It's been about a week and I haven't spoken to the friends I had that argument with since then. I still interact on the server and I see them interacting there too but we just kinda avoid each other. It's making me rethink the whole situation and wonder if I really was in the wrong for saying that and if I should just apologize.
134 notes
Text
READ THIS BEFORE INTERACTING
Alright, I know I said I wasn't going to touch this topic again, but my inbox is filling up with asks from people who clearly didn't read everything I said, so I'm making a pinned post to explain my stance on AI in full, but especially in the context of disability. Read this post in its entirety before interacting with me on this topic, lest you make a fool of yourself.
AI Doesn't Steal
Before I address people's misinterpretations of what I've said, there is something I need to preface with. The overwhelming majority of AI discourse on social media is argued based on a faulty premise: that generative AI models "steal" from artists. There are several problems with this premise. The first and most important one is that this simply isn't how AI works. Contrary to popular misinformation, generative AI does not simply take pieces of existing works and paste them together to produce its output. Not a single byte of pre-existing material is stored anywhere in an AI's system. What's really going on is honestly a lot more sinister.
How It Actually Works
In reality, AI models are made by initializing and then training something called a neural network. Initializing the network simply consists of setting up a multitude of nodes arranged in "layers," with each node in each layer being connected to every node in the next layer. When prompted with input, a neural network will propagate the input data through itself, layer by layer, transforming it along the way until the final layer yields the network's output. This is directly based on the way organic nervous systems work, hence the name "neural network."

The process of training a network consists of giving it an example prompt, comparing the resulting output with an expected correct answer, and tweaking the strengths of the network's connections so that its output is closer to what is expected. This is repeated until the network can adequately provide output for all prompts. This is exactly how your brain learns; upon detecting stimuli, neurons will propagate signals from one to the next in order to enact a response, and the connections between those neurons will be adjusted based on how close the outcome was to whatever was anticipated.

In the case of both organic and artificial neural networks, you'll notice that no part of the process involves directly storing anything that was shown to it. It is possible, especially in the case of organic brains, for a neural network to be configured such that it can produce a decently close approximation of something it was trained on; however, it is crucial to note that this behavior is extremely undesirable in generative AI, since that would just be using a wasteful amount of computational resources for a very simple task. It's called "overfitting" in this context, and it's avoided like the plague.
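If you want to see that loop with your own eyes, here it is stripped down to a toy: a tiny two-layer network learning XOR by gradient descent. Every choice here (layer sizes, learning rate, the XOR task itself) is purely illustrative and has nothing to do with how any real image or text model is configured; it just shows the propagate-compare-tweak cycle described above.

```python
import math
import random

random.seed(1)
# Four example prompts and their expected correct answers (XOR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden nodes
# "Initializing the network": random connection strengths between layers.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Propagate the input layer by layer until the final layer yields output.
    h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
    out = sig(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, out

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, out = forward(x)
        # Compare the output with the expected answer...
        d_out = (out - t) * out * (1 - out)
        # ...and tweak each connection so the output moves closer to it.
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

after = loss()
print(f"loss before training: {before:.3f}, after: {after:.3f}")
```

Notice what the network keeps at the end: only the connection strengths (`w1`, `b1`, `w2`, `b2`), never the training examples themselves.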
The sinister part lies in where the training data comes from. Companies which make generative AI models are held to a very low standard of accountability when it comes to sourcing and handling training data, and it shows. These companies usually just scrape data from the internet indiscriminately, which inevitably results in the collection of people's personal information. This sensitive data is not kept very secure once it's been scraped and placed in easy-to-parse centralized databases. Fortunately, these issues could be solved with the most basic of regulations. The only reason we haven't already solved them is that people are demonizing the products rather than the companies behind them. Getting up in arms over a type of computer program does nothing, and this diversion is being taken advantage of by bad actors, who could be rendered impotent with basic accountability. Other issues surrounding AI are exactly the same way. For example, attempts to replace artists in their jobs are the result of under-regulated businesses and weak workers' rights protections, and we're already seeing very promising efforts to combat this just by holding the bad actors accountable. Generative AI is a tool, not an agent, and the sooner people realize this, the sooner and more effectively they can combat its abuse.
Y'all Are Being Snobs
Now I've debunked the idea that generative AI just pastes together pieces of existing works. But what if that were how it worked? Putting together pieces of existing works... hmm, why does that sound familiar? Ah, yes, because it is, verbatim, the definition of collage. For over a century, collage has been recognized as a perfectly valid art form, and not plagiarism. Furthermore, in collage, crediting sources is not viewed as a requirement, only a courtesy. Therefore, if generative AI worked how most people think it works, it would simply be a form of collage. Not theft.
Some might not be satisfied with that reasoning. Some may claim that AI cannot be artistic because the AI has no intent, no creative vision, and nothing to express. There is a metaphysical argument to be made against this, but I won't bother making it. I don't need to, because the AI is not the artist. Maybe someday an artificial general intelligence could have the autonomy and ostensible sentience to make art on its own, but such things are mere science fiction in the present day. Currently, generative AI completely lacks autonomy—it is only capable of making whatever it is told to, as accurate to the prompt as it can manage. Generative AI is a tool. A sculpture made by 3D printing a digital model is no less a sculpture just because an automatic machine gave it physical form. An artist designed the sculpture, and used a tool to make it real. Likewise, a digital artist is completely valid in having an AI realize the image they designed.
Some may claim that AI isn't artistic because it doesn't require effort. By that logic, photography isn't art, since all you do is point a camera at something that already looks nice, fiddle with some dials, and press a button. This argument has never been anything more than snobbish gatekeeping, and I won't entertain it any further. All art is art. Besides, getting an AI to make something that looks how you want can be quite the ordeal, involving a great amount of trial and error. I don't speak from experience on that, but you've probably seen what AI image generators' first drafts tend to look like.
AI art is art.
Disability and Accessibility
Now that that's out of the way, I can finally move on to clarifying what people keep misinterpreting.
I Never Said That
First of all, despite what people keep claiming, I have never said that disabled people need AI in order to make art. In fact, I specifically said the opposite several times. What I have said is that AI can better enable some people to make the art they want to in the way they want to. Second of all, also despite what people keep claiming, I never said that AI is anyone's only option. Again, I specifically said the opposite multiple times. I am well aware that there are myriad tools available to aid the physically disabled in all manner of artistic pursuits. What I have argued is that AI is just as valid a tool as those other, longer-established ones.
In case anyone doubts me, here are all the posts I made in the discussion in question: Reblog chain 1 Reblog chain 2 Reblog chain 3 Reblog chain 4 Potentially relevant ask
I acknowledge that some of my earlier responses in that conversation were poorly worded and could potentially lead to a little confusion. However, I ended up clarifying everything so many times that the only good faith explanation I can think of for these wild misinterpretations is that people were seeing my arguments largely out of context. Now, though, I don't want to see any more straw men around here. You have no excuse, there's a convenient list of links to everything I said. As of posting this, I will ridicule anyone who ignores it and sends more hate mail. You have no one to blame but yourself for your poor reading comprehension.
What Prompted Me to Start Arguing in the First Place
There is one more thing that people kept misinterpreting, and it saddens me far more than anything else in this situation. It was sort of a culmination of both the things I already mentioned. Several people, notably including the one I was arguing with, have insisted that I'm trying to talk over physically disabled people.
Read the posts again. Notice how the original post was speaking for "everyone" in saying that AI isn't helpful. It doesn't take clairvoyance to realize that someone will find it helpful. That someone was being spoken over, before I ever said a word.
So I stepped in, and tried to oppose the OP on their universal claim. Lo and behold, they ended up saying that I'm the one talking over people.
Along the way, people started posting straight-up inspiration porn.
I hope you can understand where my uncharacteristic hostility came from in that argument.
160 notes
Text
My speculations on Indigo Park
I'm putting this post under a read-more in case it finds someone who hasn't played Indigo Park yet and wants to experience it blind.
(BTW, it's free and takes about an hour to finish so just go play it. The horror value's kinda tame overall, but trigger warning for blood splatter at the end.)
Why Rambley doesn't recognize Ed/the Player: The collectable notes make it obvious that our character, Ed, used to be a regular guest at Indigo Park as a kid. Yet, when Rambley goes to register them at the beginning, he says he doesn't recognize Ed's face. I've seen speculation that this might be due either to Ed's age or to the facial data database being wiped or corrupted after the park's closure. However, I think there's another possibility.
The Rambley AI Guide was a relatively new addition to the Park. Indigo Park is essentially Disneyland; it's been around for a long time, and I rather doubt that the technology for a sentient AI park guide was available on opening day. Rambley mostly appears on modern-looking flat-screens, but in the queue for the railroad he pops up on small CRTs, so technology has advanced over the park's lifetime. I suspect that Rambley as an AI was implemented a short time before whatever caused the park to be shut down, and the reason that Ed's face isn't already in the system is that Ed just never went to the Park during the time between Rambley's implementation and the closure.
Rambley needs Ed just to move around. Rambley claims he'd been stuck in the entrance area since the closure. That might imply that as an AI guide he's not permitted to move around inside the Park unless he's attached to a guest, and he has to stick close to them. He's probably linked to the Critter Cuff we wear, which would explain why he insists we get it and doesn't just override the turnstile or something. He still needs cameras to see us and TVs to communicate, but it's the Critter Cuff that determines which devices he's able to use at a given moment.
There are other AI Guides. Rambley's limitations on where in the park he can be seen seem inconvenient for an AI that's meant to assist all the park's guests. Perhaps during normal operations he was less limited because every guest had a Critter Cuff on, but that might have put too much strain on his processing if he was the only AI avatar. Ergo, some or all of the other Indigo characters could have been used as AI guides as well; either a guest would be assigned to one character through the whole park, or the others would take over for Rambley in their themed areas while the raccoon managed the main street. Due to the sudden closure, the other AIs may be stuck in certain sections of the Park like Rambley was stuck at the entrance, and we'll interact with them and/or free them as part of the efforts to fix the place up.
The "mascots" are unrelated to the AI. But Rambley believes they are linked. The official music video for Rambley Review has garnered a lot of speculation over how different Rambley's perception of the Mollie Macaw chase's ending is from what we saw in the game. I'm not 100% sold on the idea that Rambley flat out doesn't know that the Mollie mascot got killed. His decision to drop his act and acknowledge the park's decayed state comes because he sees how freaked out Ed is by the Mollie chase, and he seems to glance down toward Mollie's severed head when he trails off without describing the mascots. HOWEVER, I don't think he sees Mollie as being truly dead. He's possibly come to the conclusion (or rationalization) that the AI guides, based on the actual characters, are stuck inside the feral fleshy mascots, and that the mascot's death has led to Mollie's AI being liberated. This idea will stick with him until such time as we encounter an AI character before dealing with the associated mascot (likely Lloyd).
Salem is central to the park's closure. All we really know about Salem the Skunk is what we see in the Rambley's Rush arcade game, where Salem uses a potion to turn Mollie into a boss for us to fight. This reflects real world events, although whether Salem instigated the disaster due to over-committing to their characterization or was merely a catalyst that unwittingly turned the already dubious new mascots into outright dangers remains to be seen.
Rambley's disdain for Lloyd is unwarranted. Collectables commentary indicates that Lloyd's popularity may have been eclipsing Rambley's, and that ticks Rambley off. That's not the fault of the Lloyd(s) we're going to interact with, however. That's on Indigo's marketing for emphasizing Lloyd so much. And who knows, maybe there were plans for other retro-style plushies, but the Park got shut down before those could come out. Either way, while Lloydford L. Lion may be a bit of an arrogant overdramatic actor, the AI Guide version of him isn't going to come across as deserving Rambley's vitriol, and that's going to be the cause of one chapter's main conflict.
38 notes
Text
“dont irl artists base their art on other artists how is ai different” i mean there’s a million answers but most crucial is that real artists don’t train on actual child sexual abuse material and ai databases have been found to have been trained on those! like am i going crazy why are people not fucking mentioning that. it's immoral to use a program that uses actual child abuse images to make art for you and bc of the way ai is currently trained It Almost Certainly Is. csam is not exclusive to the “dark web” or whatever if you’ve talked to a mod of anything ever you’ll know idiots try and put the most awful graphic images of children being harmed on places like Reddit and Twitter all the fucking time. places ai Is Trained On. and an ai can scrape an image before a human can delete it. i don’t want an image that is in any way associated with a child being molested and that is not an acceptable way for any technology to act.
#like ignore ai art discourse#You Are Talking About A CSAM Trained Machine#Ai needs to be trained differently Else It Will Download And Train Off Of Images Of Children Being Abused#And I believe it’s not ethical to use it until thats done bc I don’t want an image potentially influenced by the ai seeing actual csam
16 notes
Text
Okay, I just can't. I need to rant about a very specific thing from 7.0 main story. Spoilers, ofc.
.
.
.
Playing through Stormblood and post-Stormblood, I thought that Lyse's decision to invite a very openly and well-known tempered snake into the city without any supervision (despite working with Scions for years and knowing about tempering from experience) would be THE stupidest moment of the entire game. And it was, right until the Living Memory from Dawntrail.
Immediately after entering, the gang just... go oh so slowly through this Disneyland, talking with locals, tasting the food, playing with gondolas and whatever else. All under the shoehorned excuse of "this is an expansion about knowing others' lives and cultures, so we must learn about these people, and therefore their deaths won't be in vain".
Guys. Those are not real people. They are not even souls of the deceased locked in limbo. This is literally a simulation made by a generative AI of sorts, which calculates what a person should look like, do, say, etc. based on a database of extracted memories. And each second of this Disneyland's existence is fueled by burning through very real souls, including the souls of the very same people whose memories are used for the simulation. It's not even "saving the dead at the cost of the living" or "saving the past at the cost of the future"; it's akin to saving fucking memory NFTs at the cost of everything and everyone.
Okay, Wuk Lamat is dumb as fuck and learned about the whole reflections-souls-aether mumbo-jumbo like yesterday, but what the fuck is wrong with the rest? Are they nuts? Did they lose the last brain cell somewhere in the previous slog? They should have turned the whole thing off ASAP, not taken a leisurely stroll.
And it's so clear that the writers wanted to make the same dramatic plot twist as with the recreated Amaurot in Shadowbringers, but just like everything else in this MSQ, it flopped horribly. Boo hoo, dead Namikka's memory is here too, it's so sad, Alexa, play Despacito, while Namikka's very soul is slowly disintegrated to fuel the illusion, completely unaware of her reunion.
I'm so fucking done, I hate DT's MSQ so much--
14 notes
Text
Been playing Mass Effect lately and have to say it's so interesting how paragon Shepard is the definition of a "good cop". You're upholding a racially hierarchical regime where some aliens are explicitly stated to be seen as lesser and incapable of self-governance, despite being literal spacefarers with their own governments (and despite the actual, emphasized incompetence of those supposedly "capable of governing"). The council seemingly allows all sorts of excesses and brutality among its guard, and chooses on whims whether or not to aid certain species in their struggles based on favoritism. There is, from the council's perspective, *literal* slave labor used on the citadel that they're indifferent to because, again, lesser species (they don't know that the keepers are designed to upkeep the citadel; they just see them as an alien race to take advantage of at 0 cost). There is seemingly overt misogyny present among most races that is in no way tackled or challenged, limitations on free speech, genocide apologia from the highest ranks and ingrained into educational databases. And throughout all of this, Shepard can't offer any institutional critique, despite being the good-guy hero jesus person, because she's incapable of analyzing the system she exists in and actively serves and furthers. Sure, she criticizes individual actions of the council and can be rude to them, but ultimately she remains beholden to them and carries out their missions, choosing to resolve them as a good cop or bad cop, which again maybe individually means saving a life or committing police brutality. But she still ultimately reinforces a system built upon extremely blatant oppression and never seriously questions this, not even when she leaves and joins Cerberus briefly.
And then there's the crew, barring Liara (who incidentally is the crewmate least linked to the military, and who,, is less excluded from this list in ME2,, but i wanna focus on 1). Mass Effect 1 feels like Bad Apple fixer simulator. you start with:
Garrus: genocide apologist (thinks the genophage was justified) who LOVES extrajudicial murder
Ashley: groomed into being a would-be klan member
Tali: zionist who hated AI before it was cool (in a genocidal way)
Wrex: war culture mercenary super chill on war crimes
Kaidan: shown as the other "good cop" and generally the most reasonable person barring Liara, but also he did just murder someone in boot camp in a fit of rage
Through your actions, you can fix them! You can make the bad apples good apples (kinda) but like,,,,
2 of course moves away from this theme a bit while still never properly tackling corrupt institutions in a way that undoes the actions of the first game, but its focus is elsewhere and the crew is more diverse in its outlook
Ultimately i just find it interesting how Mass Effect is a game showcasing how a good apple or whatever is capable of making individual changes for the better but is ultimately still a tool of an oppressive system and can't do anything to fundamentally change that, even if they're the most important good apple in said system.
Worth noting maybe this'll change in Mass Effect 3, which i have yet to play as im in the process of finishing 2 currently (im a dragon age girl), but idk, i like how it's handled. at first i was iffy on it but no, it's actually pretty cool.
Also sorry if this is super retreaded ground im new to mass effect discourse this is just my takeaways from it lol
14 notes
Note
Tbh the reason AI can't replicate reality in a realistic way is simply because you can't recreate reality. You can simulate reality, sure. But to properly recreate reality isn't possible. The reason is that there are no lines in real life. Images are made up of pixels and real life is made up of billions of hyper-complicated things. I can very easily see the distance of the doll on my desk to the wall. Can I tell you *what* the distance is? No, but I can see how far the doll is from the wall. Computers can't do that. They think in numbers as they are forced to decipher *flat* images. To get an AI to create an even semi-realistic reality would be ung_dly expensive because you would have to teach the AI using real-life distance and form, not just letting it calculate how far or what form is what from a flat image. But again, cameras can't see distance and form in the same way human eyes do. Cameras can only capture a flat image and have to decipher through said flat image. AI doesn't understand the complexities of how things move, look, or even sound because it can't look at them the way a pair of eyes could. I *know* how much I have to extend my arm to touch something, because I can see how far away it is. I know where my posters begin and end, not because of *just* a color difference or hue change, but because I can *see* exactly where they end. Computers will never be able to replace artists. Maybe in the mainstream industry, BUT they are still going to have to hire real artists to make their content, because AI can't produce exactly what you want; it can't think like a human brain. There are companies who've tried to use AI to replace certain aspects, and it's proved to be so frustrating that these animators are forced to reanimate the AI's work, because it just *isn't* what they wanted or needed for the project.
AI assistant tools can certainly be helpful to artists, especially in the industry. But the fat cats in Hollywood already know they can't *actually* get rid of us, because their silly robots just don't do it right.
all of this is true yes and I think even more so, without questioning the reality of human perception, there is just the fact that ai doesn’t think in the same way a conscious being does. text algorithms don’t generate compelling (or, let’s be real, comprehensible) narratives because they work by stringing together words one by one — every single word is followed by the most likely next word based on whatever database the model is using. ai can’t write unique characters or dialogue or even navigate most plot holes because it doesn’t have a memory of what it’s said beforehand, and even if it did, it wouldn’t have a larger context to place its writing within
the same goes for image algorithms. sure, an ai can give you an approximation of a knight, but the armor is going to be completely nonfunctional if you examine it even a bit. an ai can give you a room with the prettiest color palette in the world, but there’s also going to be a hole in the ceiling with a branch going through it because it doesn’t understand the concept of skylights beyond knowing vaguely what they look like. regardless of whether or not what it’s doing counts as “thinking” (though I do think there’s a pretty clear answer to that), what ultimately matters is that an ai is incapable of thinking critically. you can give an image algorithm a prompt like “add flowers in foreground” yes but you’re never going to succeed with a prompt like “follow the laws of physics”
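for what it’s worth, the “stringing together words one by one” idea above can be sketched as a toy next-word model. this is a deliberately tiny, hypothetical illustration (real systems use learned probabilities over long contexts and enormous training sets), but the core generate-one-word-at-a-time loop is the same:

```python
import random

# Toy "language model": for each word, record which words have
# followed it in a tiny corpus. Generation then just picks a
# plausible next word, appends it, and repeats.
corpus = "the knight rode the bear and the knight held the candy sword".split()

follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, n_words):
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word never appeared before another word
        out.append(random.choice(options))  # sample a likely next word
    return " ".join(out)

print(generate("the", 8))
```

note how nothing in the loop ever looks further back than the single previous word, which is the cartoon version of the "no larger context" problem described above.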
9 notes
·
View notes
Text
become your own librarian
i think we are coming to a time where the former promise of the internet-- namely its function as a vast library of information previously out of reach-- is fading out. for multiple reasons, the search engines are losing their ability to actually find anything based on your keywords, and whatever search results do appear are bogged down with a swarm of dead, useless AI-driven webpages or sites made by the various corporate overlords & their institutions, who squeeze their questionable and meager info between the main event: advertisements. those weirder, more esoteric independently-curated websites are much harder to find.
i think we take it for granted that the internet is sort of this permanent lifeline to information that makes things like books or other more physical media obsolete. we would like to believe that search engines and databases are some kind of naturally-produced phenomena which will effortlessly evolve into the best possible form to suit progressive human needs. No, it’s very possible that these systems will fail, and they already are starting to.
the same goes for access to music, art, films, etcetera. we shouldn’t assume it will always be available to stream and view online because you have a membership. the shift from owning music to streaming promised a superior convenience under the false notion (a fantasy, really) that the current state of media technology will go on, as it is, forever. it doesn’t take much for a server to crash, and it also doesn’t take much for the whole system to be bought out by some other company and suddenly you are locked out of all of “your” music until you pay their increased monthly service charge.
another thing, it has become harder to discover good information and good art compared to the earlier days of the internet because there’s just so much of everything already, most of which is a complete pile of waste-- waste that clogs the pipes so nothing of value can get past. in conspiracy theory circles, they might also call this disinformation or misinformation-- a type of “psy-op” that creates the illusion of abundant & diverse sources of information, all while the real knowledge is covered up, distorted, or slandered.
we will have to consider all of this soon, i think. how do we learn new things in a way which is reliable and connected to something other than the dominant tech corporations? how do we source our knowledge, what private databases are we creating in our own private space and how are we giving life to it? a life of its own, so that it cannot be erased either suddenly, or perhaps more perniciously, in a slow way, so gradual that we don’t even realize, for example, how when we search for something now, there are seemingly hundreds of pages of results, but each and every page has the same 10 websites repeated over and over again...
20 notes
·
View notes
Text
How AI art is made and how it can be a problem
Lucas Melo
Art made by AI (Artificial Intelligence) is really popular nowadays, everywhere on the internet. The idea behind AI that makes art is to teach an algorithm how to draw, then use that algorithm to make the drawing for you. The way the tool is made can also lead to some problems involving image rights.
You can ask the AI to draw anything; for example, you can ask it to draw a knight riding a bear while holding a candy weapon (that says a lot about how powerful AIs can be), and the AI will make the drawing based on its database. There are a lot of websites, like “crayon.com”, “creator.nightcafe.studio“ and many others, that provide you an AI to make whatever art you ask for. Today the tool is not fully developed; it still has some problems drawing hands, elbows and other things that are being fixed as time passes.
To develop an AI, you take an algorithm and feed it data, and that data is used to teach the algorithm how to perform a task. Sometimes the data is supervised by humans, sometimes it’s not; it depends on the way the algorithm is built. As the AI does the same task multiple times, the technique develops and improves every time it is used.
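The "feed it data and it improves with repetition" idea can be sketched in a few lines. This is a toy example with a single made-up parameter, nothing like how real image generators are trained, but it shows the basic loop of starting out wrong and getting a little less wrong on every pass over the data:

```python
# Toy "training loop": the model starts out wrong and improves a
# little on every pass over the data. Real generative models do
# the same thing at a massively larger scale.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x and targets y = 2x

w = 0.0      # the model's single learned parameter
lr = 0.05    # learning rate: how big each correction is

for epoch in range(200):
    for x, y in data:
        pred = w * x
        error = pred - y
        w -= lr * error * x  # nudge w to shrink the error

print(round(w, 2))  # ends up close to 2.0 after training
```

The important part is that the "knowledge" the model ends up with comes entirely from the data it was fed, which is exactly why the question of where that data comes from matters so much.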
That means that when an AI is made to do art, it needs to be fed art made by actual artists, by people. Learning from the data it received, the AI also becomes able to copy the style of artists. It wouldn’t be a problem if they had the consent of the artists to do that, but that obviously isn’t the case, since the data is automatically taken from the internet most of the time.
Artists’ drawing and painting styles are being copied by AI. Not only does it use artworks without consent, it can also make some artists lose their jobs. Since the AI is able to draw, for free (or cheaper), in the same style as the artist, there’s no reason to hire the artist to do it. That isn’t happening yet, because these AIs are still in development and lack the quality and precision to do exactly what the user is asking, but if things keep going like this, it’s just a matter of time.
In conclusion, AI itself is not the problem. The problem is how it is made. To make an AI that does art, you have to use other people's art, and that is done without any consent, and that may affect artists' jobs.
3 notes
·
View notes
Text
Actually, I’ve noticed that Tumblr, and probably the internet at large too, has a problem of people being ‘anti’ something without actually knowing what they are ‘anti’.
I’m just thinking about that post about an AI thing that was helping to prevent art theft and people were going ‘I can’t believe they made an anti AI AI!’ As if AI stands for Art Itheft not Artificial Intelligence.
Also the blame that was suddenly being leveled against people who used AI art generators, as if they were personally responsible for the art theft. After ages of nothing but ‘Please fucking tag it unreality thanks.’ Like, you can think that it’s not ethical or whatever without acting as if someone who uses a generator is personally creating the database it uses.
Also like… ‘Nobody deserves art so if you couldn’t pay an artist originally you don’t deserve to use an AI art generator’ Fuck you. Somebody using a generator who couldn’t pay you to begin with is not stealing your wages. Someone with no money is not a potential employer who has decided to use an AI instead, there are different types of people in the world.
I too think that a database should be made ethically. And I think some of the AI artist guys are just… NFT people moving on to the next thing that will definitely make them money this time.
The problem you have is with the datasets. And the regulations around money and AI art. You are anti databases based on art theft and corporations using computers instead of people. You are not anti AI art, and you are definitely not anti AI. And if you are, there’s something wrong with you.
The technology is not the problem. Every person using it is not the problem. The people behind it and the lack of ethics involved in constructing the datasets are the problem. A 13 year old generating AI fanart of their favourite character is not. Fucking. Stealing. A poor 21 year old with little money who couldn’t afford a commission who uses a generator to make an image of their OC is not. Fucking. Stealing.
Be mad at the fucking corporations. Say that you don’t like it when people use the generators due to their issues. I don’t give a fuck. Just stop telling people they’re solely responsible for your hardships when they wouldn’t have given you money to begin with.
3 notes
·
View notes
Note
On anon because this is an unpopular opinion but like I need to tell SOMEONE and ur posting about how people are fear mongering ai too much... Like... As much as it sucks ai are using artists and writing as references without permission.... It's fair use. It's the legal definition of fair use. If people manage to make that stuff not allowed we are very few steps away from not allowing fanart or music remixes you know!?? Like I know I know it SUCKS to have your art used to train an AI without permission and trying to replace real designers with it is disgusting but GUYS IT'S FAIR USE!!!!!! ITS CREATING SOMETHING NEW. I'm literally dying here dude.
im not entirely sure whether it falls under fair use bc i dont know for sure that most ai like.. even uses writing/art from any singular person to a degree where itd even come into conflict w copyright? these databases are so so massive that one piece of art put into it is a drop in a gigantic bucket, and also people kind of imagine that the ai stores all of that data and uses specific images, but it doesnt - its not like, 'keeping' the images and directly referencing from them.
in any case tho i dont rly base my view on the subject on whether or not its legal, and i do think it is generally scummy to use the work of individual artists online without their consent even if its legal. itd also be a huge benefit if the companies training ai had to use only public domain works, because a lot of the public domain is not properly digitized and accessible, and companies needing to do that work to train ai would benefit everyones access to art! but yeah, its cheaper to just scrape a bunch of art from google images and whatnot, and capital always wins, so!
ig this is all part of the point for me tho - like, even just the training of ai art machines (not even the finished tech itself but the process of making it) could have benefits for all of us if it was used in a way that privileges that benefit rather than being primarily motivated by money making. the issue is always always always the capitalist machine or whatever.. sorry lol im jetlagged so i may not make senseeee
#97#ask#not rbable bc im all over the place so im probably not v eloquent + have a lot of thots on the subject#im v much of two minds on the issue of copyright and such and its a difficult thing to detangle for me
2 notes
·
View notes
Text
The growing popularity of AI art should worry anyone who even remotely cares about the future of creative expression as we know it. Sure, the AI can’t draw fingers right now, but eventually, as more and more people feed into the algorithms for these things, they will get more accurate.
Right now, the biggest problem with them is that the only way they work is that the actual work of real, living artists is fed into a database without the artists’ permission, generating theoretically infinite pieces of “art” based on stolen art. If you’ve ever seen AI art before, you should know that you aren’t looking at an original work, but a facsimile of multiple real artists’ existing work shoved into a blender (again, without consent and often against the artists’ express wishes). If you actually want to see what you’d look like as an anime character or a DnD monster or whatever, I’d recommend commissioning an actual digital artist. You’ll get a better picture and you’ll be helping an artist make a living, which is dreadfully difficult to do as an artist.
But what will this sort of thing look like in the future? Well, the translation industry went through a similar sort of change when AI was introduced. Now, AI is still pretty stupid, so any translations done by it are wildly inaccurate. The problem is, that’s usually “good enough” for companies that don’t have a lot of money and are just meeting the bare minimum of language accessibility. Well, if an AI can do it for free, the companies who want actual, real translators will be willing to pay significantly less for those translations because, well, the competition does it for free. And this is basically what happened.
This is undoubtedly going to creep into other creative fields such as freelance writing. Now, we as everyday people generally would say an actual human creating the thing is better than AI creating the thing, because all forms of art are meant to be about human expression. IMO, it’s not really art if an AI does it, because an inherent piece of what makes art, well, art is that it’s coming from the imagination of a person. Even photography has that human element, as you are viewing things from a carefully thought out perspective.
So okay, giving humanity the benefit of the doubt, most people would rather have the content they consume to be stuff made by humans. But what about corporations? You know, the ones funding all the mainstream content? Well, frankly, they don’t give a damn, at least not the suits at the top making all the decisions. They’ll make whatever decision nets them the biggest profit, and all the numbers and metrics in the world will choose the artist that charges nothing over the artist that charges anything. See, art isn’t cheap to make. Beyond the physical resources it requires, it takes a ton of time and energy. AI can do it instantly, and it does it all by copying the work of real humans who have already put in the time.
So what we’ll end up with is a slow descent into algorithms, AI art feeding AI art, until the human element is barely a layer of sediment.
Now, okay, you might say, well Casey, surely people will still want to make real art themselves, right? And, well, yeah! I’d hope so! But we live in an economic system that requires us to make $$ to survive, and to continue making art. If artists aren’t getting paid, they aren’t going to make art. And this issue will keep compounding upon itself until the industry is dominated by AI.
Art is meant to speak to the human experience, and it comes from the real emotions, thoughts, and ideals of the artist. That's what makes it beautiful, and that's what makes it speak to us on a deep, intimate level. I don’t want a future where that beauty and truth is replaced by an algorithm.
5 notes
·
View notes
Text
holy fuck
if you/someone you know use UHC, makes sure whoever pays for your/their healthcare - you/them, a spouse, family, whatever - knows about this because to quote a content creator i follow called liv pearsall, "what in, and i do not say this lightly, fresh hell"
also, if you're like me (in my case it's autism but it could be something else for you) and struggle with understanding big, fancy words (like in the article), i ran the article through this informalizer to make a simpler version. it's under the cut! :)
Yo, so here's the deal. UnitedHealthcare, the biggest health insurance company in the US, is supposedly using this totally messed up AI algorithm to screw over old folks and deny them the coverage they really need. It's a total mess. These elderly patients are getting booted out of rehab and care facilities way too early, and they're being forced to drain their savings just to get the care that should be covered by their Medicare Advantage Plan.
And get this, someone actually filed a lawsuit about it. It's going down in the US District Court for the District of Minnesota. The lawsuit is all about how UnitedHealth denied health coverage to two people who eventually died. But it's not just about them — there could be thousands of other people in similar situations.
This lawsuit lines up with an investigation by Stat News that basically supports all the claims. Stat got their hands on internal documents and talked to former employees of NaviHealth, which is a subsidiary of UnitedHealth that created this messed up AI algorithm called nH Predict.
One former employee, Amber Lynch, spilled the tea to Stat. She used to work for NaviHealth and said that the whole company cared more about money than actually helping patients. She hated how they treated patients like data points instead of human beings.
Here's the deal with nH Predict. It started being used by UnitedHealth back in November 2019, and they're still using it. Basically, this algorithm tries to guess how much care a patient on Medicare Advantage will need after they have, like, a big injury or illness. Stuff like therapy and skilled care in hospitals and nursing homes.
No one really knows how nH Predict works exactly, but it apparently takes info from a database with cases from 6 million patients. The case managers at NaviHealth put in some details about a patient, like age and living situation, and the algorithm spits out estimates based on similar patients in the database. It tries to figure out stuff like how long the patient will need care and when they can be discharged.
But here's the problem — the algorithm doesn't take a bunch of important factors into account, like other medical issues the patient might have or if they catch something like pneumonia or COVID-19 during their stay. It's a mess.
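nH Predict's actual internals are not public, so purely as a hypothetical illustration of the "estimates based on similar patients in the database" mechanism described above, a crude similar-cases lookup might look something like this. All the field names and numbers here are made up; note how a model like this also has no way to account for the complicating factors (comorbidities, infections) the post calls out:

```python
# Hypothetical sketch of a "similar patients" estimator. Each past
# record: (age, lives_alone as 0/1, days of covered care used).
# Purely illustrative; this is NOT the real nH Predict algorithm.
past_cases = [
    (78, 1, 21), (81, 0, 14), (85, 1, 40), (79, 0, 12), (90, 1, 55),
]

def estimate_stay(age, lives_alone, k=3):
    # Rank past cases by a crude similarity score, then average
    # the covered-care days of the k closest matches.
    ranked = sorted(
        past_cases,
        key=lambda c: abs(c[0] - age) + 10 * abs(c[1] - lives_alone),
    )
    closest = ranked[:k]
    return sum(c[2] for c in closest) / k

print(estimate_stay(80, 1))
```

Even in this toy version you can see the problem: the estimate is driven entirely by whichever handful of fields the database happens to record, and everything else about the patient simply doesn't exist as far as the number is concerned.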
According to the investigation and the lawsuit, the estimates the algorithm gives are usually way off. With nH Predict, patients are hardly ever getting the full 100 days of covered care they're supposed to get in a nursing home. Usually, they only get like 14 days before UnitedHealth denies payment.
And get this, when patients or their doctors ask to see the algorithm's reports, UnitedHealth just says no and claims it's top secret. And if the doctors try to disagree with UnitedHealth's decision, tough luck. UnitedHealth just overrides them.
This whole thing is messed up, but sadly it's not the first time the healthcare industry has screwed up with AI. They've been doing this racist algorithm stuff for a while now. But what makes this situation even worse is that it seems like UnitedHealth is deliberately denying coverage to save money.
Ever since UnitedHealth bought NaviHealth, former employees say the company cares more about hitting targets and making post-acute care as short and cheap as possible. They even have requirements for case managers to follow the algorithm's predictions. If they don't, they could get in trouble or even lose their jobs.
Apparently, case managers are trained to defend the algorithm's estimates to patients and their caregivers. They use all these tactics to shut down any objections. Like, if a nursing home doesn't want to let a patient leave, the case manager will say the patient doesn't need certain care because of some calorie rule. It's like they don't even care about the patients.
And even if patients manage to win an appeal and get the denial overturned, UnitedHealth just comes right back with another denial a few days later. It's a never-ending cycle.
The lawsuit is fighting back against UnitedHealth and NaviHealth for all this shady behavior. They're accusing them of breaking contracts, not acting in good faith, unfairly getting rich, and violating insurance laws in a bunch of states. They want damages, emotional distress compensation, and for UnitedHealth to stop using the AI-based claims denials.
We don't know exactly how much money UnitedHealth saves with nH Predict, but it's gotta be a crazy amount. Last year, the CEO made almost $21 million, and other execs made millions too.
In the end, it's just a messed up situation. These elderly people are getting screwed over by an AI algorithm that values money over their health. It's not cool, and something needs to change.
UnitedHealthcare, the largest health insurance company in the US, is allegedly using a deeply flawed AI algorithm to override doctors' judgments and wrongfully deny critical health coverage to elderly patients. This has resulted in patients being kicked out of rehabilitation programs and care facilities far too early, forcing them to drain their life savings to obtain needed care that should be covered under their government-funded Medicare Advantage Plan.
It's not just flawed, it's flawed in UnitedHealthcare's favor.
That's not a flaw... that's fraud.
45K notes
·
View notes