#not make ai-based databases or whatever
aceteling · 1 year
Text
okay, what am i doing here
4 notes · View notes
Robotsss
remember this post
well i drew more stuff for the au and completely forgot about it- (;-_-)
the CQ brothers :D they are all (or are at least inhabiting) rogue Scrapper bots.
Tumblr media
Geno used to be an entirely red Scrapper bot. He was on a team with Fresh (who was disguised at the time), with CQ being the sort of handler/trainer person for the bots. One day on a mission Geno gets kind of wrecked. Like REALLY damaged (his core got damaged), and the people in charge tried to transfer Geno's AI to a newer model body. It can be hard to train a Scrapper bot AI, so it's best to try and save pre-existing ones. Things were thought to have gone well... But Geno was in fact still in his old body, and the people just ended up making an anomalous consciousness in the new body. An Error.
Tumblr media
Error acted sort of like Geno at first. But if a person knew Geno beforehand, they could definitely tell something was off. Error would also randomly glitch and reboot. It got to be such a problem that management was just going to scrap him. Of course Error did not like that... So he ended up destroying the entire facility and half a city block.
after that fiasco Error went to try and find Geno. Error kind of didn't really know what to do with himself, so he figured that if he could find the consciousness he glitched off of, he might have a better idea. tbh he was also bored.
Error DID end up finding Geno, and someone he may or may not have terrified into fixing him (at least as much as possible). So Error and Geno start hiding out in an abandoned warehouse and just sort of doing whatever catches their interest. And also Fresh shows up.
Tumblr media
Fresh is a Virus. He used to be a sort of mascot for some kids' brand that eventually shut down, but his AI was never properly disposed of, so Fresh has just been robot-body-hopping ever since. He decided to infect a Scrapper bot in the first place mostly because he was curious, but also because, the body being higher quality, Fresh could inhabit it longer without its code being completely scrambled and unusable.
Fresh had still been with the Scrapper bot agency place when Error went rogue. He figured he would have the best chance of finding Error if he had access to the Scrappers' tracking database. Fresh was mostly curious about Error.
After finding them, Fresh basically scrambled Error's (and also, surprisingly, Geno's) still-active signals, making them untraceable (the perks of being a virus, i suppose). He leaves and finds the warehouse Geno and Error were hiding out in. Geno was able to convince Error not to destroy Fresh (barely). All three have been a sort of trio ever since and are wanted on three different planets :D (they were able to hijack a ship) and currently have a remote hangar/base. Geno is currently trying to find CQ, who disappeared after he got damaged.
64 notes · View notes
Note
AITA for not being entirely negative about AI?
05/16/2024
Just before anyone scrolls down just to vote YTA, please hear me out: I'm not an AI bro, I am a hobbyist artist, I do not use generative AI, I know that it's all mostly based off stolen work and that's obviously Bad.
That being said, I am also an IT major, so I understand the technology behind it as well as the industry using it. Because of this, I understand that at this point it is very, very unlikely that AI art will ever go away. I feel like the best deal actual artists can get out of it is a compromise on what is and isn't allowed to be used for machine learning. I would love to be proven wrong though, and I'm still hoping the lawsuits against OpenAI and others will set a precedent favouring artists over the technology.
Now, to the meat of this ask: I was talking in a discord server with my other artist friends, some of whom are actually professionals (all around the same age as me), and the topic of discussion was just how much AI art sucks, mostly concerning the fact that another artist we like (but don't know personally) had their works stolen and used in AI. The conversation then developed into talking about how hard it is to get a job in the industry where we live and how AI is now going to make that even worse. That's when I said something along the lines of: "In an ideal world, artists would get paid for all the works of theirs that are in AI learning databases so they can have easy passive income and not have to worry about getting jobs at shitty companies that wouldn't appreciate them anyway." To me that seemed like a pretty sensible take. I mean, if I could just get free money every month for (consensually) putting a few dozen of my pieces in some database one time, I honestly would probably leave IT and just focus on art full time, since that's always been my passion, whereas programming is more of a "I'm good at it but not that excited about doing it, but it pays well so whatever".
My friends on the other hand did not share the sentiment, saying that in an ideal world AI art would be outlawed and the companies hiring them would not be shitty. I did agree about the companies being less shitty, but disagreed about AI being outlawed. I said that the major issue with AI is the copyright concerns, so if tech companies were just forced to get artists' full permission to use their work first, as well as provide monetary compensation, there really wouldn't be anything wrong with using the technology (when concerning stylized AI art, not deepfakes or realistic AI images, as those have a completely different slew of moral issues).
This really pissed a few of them off and they accused me of defending AI art. I had to explain to them that I wasn't defending AI art as it was NOW, because I know that the way it works NOW is very harmful, I was just saying that as an IDEAL scenario, not even something I think is particularly realistic, but something I think would be cool if it were actually possible. The rest of the argument was honestly just spinning in circles with me trying to explain the same points and them being outraged at the fact that I'm not 100% wholeheartedly bashing even the mere concept of AI until I just got frustrated and left the conversation.
It's been about a week and I haven't spoken to the friends I had that argument with since then. I still interact on the server and I see them interacting there too but we just kinda avoid each other. It's making me rethink the whole situation and wonder if I really was in the wrong for saying that and if I should just apologize.
134 notes · View notes
pizzaronipasta · 1 year
Text
READ THIS BEFORE INTERACTING
Alright, I know I said I wasn't going to touch this topic again, but my inbox is filling up with asks from people who clearly didn't read everything I said, so I'm making a pinned post to explain my stance on AI in full, but especially in the context of disability. Read this post in its entirety before interacting with me on this topic, lest you make a fool of yourself.
AI Doesn't Steal
Before I address people's misinterpretations of what I've said, there is something I need to preface with. The overwhelming majority of AI discourse on social media is argued based on a faulty premise: that generative AI models "steal" from artists. There are several problems with this premise. The first and most important one is that this simply isn't how AI works. Contrary to popular misinformation, generative AI does not simply take pieces of existing works and paste them together to produce its output. Not a single byte of pre-existing material is stored anywhere in an AI's system. What's really going on is honestly a lot more sinister.
How It Actually Works
In reality, AI models are made by initializing and then training something called a neural network. Initializing the network simply consists of setting up a multitude of nodes arranged in "layers," with each node in each layer being connected to every node in the next layer. When prompted with input, a neural network will propagate the input data through itself, layer by layer, transforming it along the way until the final layer yields the network's output. This is directly based on the way organic nervous systems work, hence the name "neural network."

The process of training a network consists of giving it an example prompt, comparing the resulting output with an expected correct answer, and tweaking the strengths of the network's connections so that its output is closer to what is expected. This is repeated until the network can adequately provide output for all prompts. This is exactly how your brain learns; upon detecting stimuli, neurons will propagate signals from one to the next in order to enact a response, and the connections between those neurons will be adjusted based on how close the outcome was to whatever was anticipated.

In the case of both organic and artificial neural networks, you'll notice that no part of the process involves directly storing anything that was shown to it. It is possible, especially in the case of organic brains, for a neural network to be configured such that it can produce a decently close approximation of something it was trained on; however, it is crucial to note that this behavior is extremely undesirable in generative AI, since that would just be using a wasteful amount of computational resources for a very simple task. It's called "overfitting" in this context, and it's avoided like the plague.
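As a rough illustration of the training loop described above, here is a toy network learning XOR, a standard minimal example. The layer sizes, learning rate, and iteration count are arbitrary choices for the sketch, nothing like what a real image model uses, but the shape of the process is the same: propagate, compare, tweak.

```python
import numpy as np

# Toy neural network: two layers of connections, trained by nudging
# connection strengths toward the expected answers. Note that no training
# example is ever stored inside the network -- only the weights change.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example prompts
y = np.array([[0], [1], [1], [0]], dtype=float)              # expected correct answers (XOR)

W1 = rng.normal(size=(2, 8))  # connections: input layer -> hidden layer
W2 = rng.normal(size=(8, 1))  # connections: hidden layer -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1)            # propagate layer by layer...
    return h, sigmoid(h @ W2)      # ...until the final layer yields the output

_, out = forward(X)
initial_loss = np.mean((out - y) ** 2)

for _ in range(5000):
    h, out = forward(X)
    delta = (out - y) * out * (1 - out)
    # tweak connection strengths so the output moves closer to what's expected
    W2 -= 1.0 * (h.T @ delta)
    W1 -= 1.0 * (X.T @ ((delta @ W2.T) * h * (1 - h)))

_, out = forward(X)
final_loss = np.mean((out - y) ** 2)
```

After training, the network answers the prompts far more accurately than at initialization, yet nowhere in `W1` or `W2` is any training example stored.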
The sinister part lies in where the training data comes from. Companies which make generative AI models are held to a very low standard of accountability when it comes to sourcing and handling training data, and it shows. These companies usually just scrape data from the internet indiscriminately, which inevitably results in the collection of people's personal information. This sensitive data is not kept very secure once it's been scraped and placed in easy-to-parse centralized databases. Fortunately, these issues could be solved with the most basic of regulations. The only reason we haven't already solved them is because people are demonizing the products rather than the companies behind them. Getting up in arms over a type of computer program does nothing, and this diversion is being taken advantage of by bad actors, who could be rendered impotent with basic accountability. Other issues surrounding AI are exactly the same way. For example, attempts to replace artists in their jobs are the result of under-regulated businesses and weak workers' rights protections, and we're already seeing very promising efforts to combat this just by holding the bad actors accountable. Generative AI is a tool, not an agent, and the sooner people realize this, the sooner and more effectively they can combat its abuse.
Y'all Are Being Snobs
Now I've debunked the idea that generative AI just pastes together pieces of existing works. But what if that were how it worked? Putting together pieces of existing works... hmm, why does that sound familiar? Ah, yes, because it is, verbatim, the definition of collage. For over a century, collage has been recognized as a perfectly valid art form, and not plagiarism. Furthermore, in collage, crediting sources is not viewed as a requirement, only a courtesy. Therefore, if generative AI worked how most people think it works, it would simply be a form of collage. Not theft.
Some might not be satisfied with that reasoning. Some may claim that AI cannot be artistic because the AI has no intent, no creative vision, and nothing to express. There is a metaphysical argument to be made against this, but I won't bother making it. I don't need to, because the AI is not the artist. Maybe someday an artificial general intelligence could have the autonomy and ostensible sentience to make art on its own, but such things are mere science fiction in the present day. Currently, generative AI completely lacks autonomy—it is only capable of making whatever it is told to, as accurate to the prompt as it can manage. Generative AI is a tool. A sculpture made by 3D printing a digital model is no less a sculpture just because an automatic machine gave it physical form. An artist designed the sculpture, and used a tool to make it real. Likewise, a digital artist is completely valid in having an AI realize the image they designed.
Some may claim that AI isn't artistic because it doesn't require effort. By that logic, photography isn't art, since all you do is point a camera at something that already looks nice, fiddle with some dials, and press a button. This argument has never been anything more than snobbish gatekeeping, and I won't entertain it any further. All art is art. Besides, getting an AI to make something that looks how you want can be quite the ordeal, involving a great amount of trial and error. I don't speak from experience on that, but you've probably seen what AI image generators' first drafts tend to look like.
AI art is art.
Disability and Accessibility
Now that that's out of the way, I can finally move on to clarifying what people keep misinterpreting.
I Never Said That
First of all, despite what people keep claiming, I have never said that disabled people need AI in order to make art. In fact, I specifically said the opposite several times. What I have said is that AI can better enable some people to make the art they want to in the way they want to. Second of all, also despite what people keep claiming, I never said that AI is anyone's only option. Again, I specifically said the opposite multiple times. I am well aware that there are myriad tools available to aid the physically disabled in all manner of artistic pursuits. What I have argued is that AI is just as valid a tool as those other, longer-established ones.
In case anyone doubts me, here are all the posts I made in the discussion in question: Reblog chain 1 Reblog chain 2 Reblog chain 3 Reblog chain 4 Potentially relevant ask
I acknowledge that some of my earlier responses in that conversation were poorly worded and could potentially lead to a little confusion. However, I ended up clarifying everything so many times that the only good faith explanation I can think of for these wild misinterpretations is that people were seeing my arguments largely out of context. Now, though, I don't want to see any more straw men around here. You have no excuse, there's a convenient list of links to everything I said. As of posting this, I will ridicule anyone who ignores it and sends more hate mail. You have no one to blame but yourself for your poor reading comprehension.
What Prompted Me to Start Arguing in the First Place
There is one more thing that people kept misinterpreting, and it saddens me far more than anything else in this situation. It was sort of a culmination of both the things I already mentioned. Several people, notably including the one I was arguing with, have insisted that I'm trying to talk over physically disabled people.
Read the posts again. Notice how the original post was speaking for "everyone" in saying that AI isn't helpful. It doesn't take clairvoyance to realize that someone will find it helpful. That someone was being spoken over, before I ever said a word.
So I stepped in, and tried to oppose the OP on their universal claim. Lo and behold, they ended up saying that I'm the one talking over people.
Along the way, people started posting straight-up inspiration porn.
I hope you can understand where my uncharacteristic hostility came from in that argument.
160 notes · View notes
vulpinmusings · 4 months
Text
My speculations on Indigo Park
I'm putting this post under a read-more in case it finds someone who hasn't played Indigo Park yet and wants to experience it blind.
(BTW, it's free and takes about an hour to finish so just go play it. The horror value's kinda tame overall, but trigger warning for blood splatter at the end.)
Why Rambley doesn't recognize Ed/the Player: The collectable notes make it obvious that our character, Ed, used to be a regular guest at Indigo Park as a kid. Yet, when Rambley goes to register them at the beginning, he says he doesn't recognize Ed's face. I've seen speculation that this might be due either to Ed's age or to the facial data database being wiped or corrupted after the park's closure. However, I think there's another possibility.
The Rambley AI Guide was a relatively new addition to the Park. Indigo Park is essentially Disneyland; it's been around for a long time, and I rather doubt that the technology for a sentient AI park guide was available on opening day. Rambley mostly appears on modern-looking flat-screens, but in the queue for the railroad he pops up on small CRTs, so technology has advanced over the park's lifetime. I suspect that Rambley as an AI was implemented a short time before whatever caused the park to be shut down, and the reason Ed's face isn't already in the system is that Ed just never went to the Park between Rambley's implementation and the closure.
Rambley needs Ed just to move around. Rambley claims he'd been stuck in the entrance area since the closure. That might imply that as an AI guide he's not permitted to move around inside the Park unless he's attached to a guest, and he has to stick close to them. He's probably linked to the Critter Cuff we wear, which would explain why he insists we get it and doesn't just override the turnstile or something. He still needs cameras to see us and TVs to communicate, but it's the Critter Cuff that determines which devices he's able to use at a given moment.
There are other AI Guides. Rambley's limitations on where in the park he can appear seem inconvenient for an AI that's meant to assist all the park's guests. Perhaps during normal operations he was less limited because every guest had a Critter Cuff on, but that might have put too much strain on his processing if he was the only AI avatar. Ergo, some or all of the other Indigo characters could have been used as AI guides as well; either a guest would be assigned to one character through the whole park, or the others would take over for Rambley in their themed areas while the raccoon managed the main street. Due to the sudden closure, the other AIs may be stuck in certain sections of the Park like Rambley was stuck at the entrance, and we'll interact with them and/or free them as part of the efforts to fix the place up.
The "mascots" are unrelated to the AI. But Rambley believes they are linked. The official music video for Rambley Review has garnered a lot of speculation about how different Rambley's perception of how the Mollie Macaw chase ended is from what we saw in the game. I'm not 100% sold on the idea that Rambley flat out doesn't know that the Mollie mascot got killed. His decision to drop his act and acknowledge the park's decayed state comes because he sees how freaked out Ed is by the Mollie chase, and he seems to glance down toward Mollie's severed head when he trails off without describing the mascots. HOWEVER, I don't think he sees Mollie as being truly dead. He's possibly come to the conclusion (or rationalization) that the AI guides, based on the actual characters, are stuck inside the feral fleshy mascots, and the mascot's death has led to Mollie's AI being liberated. This idea will stick with him until such time as we encounter an AI character before dealing with the associated mascot (likely Lloyd).
Salem is central to the park's closure. All we really know about Salem the Skunk is what we see in the Rambley's Rush arcade game, where Salem uses a potion to turn Mollie into a boss for us to fight. This reflects real world events, although whether Salem instigated the disaster due to over-committing to their characterization or was merely a catalyst that unwittingly turned the already dubious new mascots into outright dangers remains to be seen.
Rambley's disdain for Lloyd is unwarranted. Collectables commentary indicates that Lloyd's popularity may have been eclipsing Rambley's, and that ticks Rambley off. That's not the fault of the Lloyd(s) we're going to interact with, however. That's on Indigo's marketing for emphasizing Lloyd so much. And who knows, maybe there were plans for other retro-style plushies, but the Park got shut down before those could come out. Either way, while Lloydford L. Lion may be a bit of an arrogant overdramatic actor, the AI Guide version of him isn't going to come across as deserving Rambley's vitriol, and that's going to be the cause of one chapter's main conflict.
38 notes · View notes
Text
“dont irl artists base their art on other artists how is ai different” i mean there’s a million answers but most crucial is that real artists don’t train on actual child sexual abuse material and ai databases have been found to have been trained on those! like am i going crazy why are people not fucking mentioning that. it's immoral to use a program that uses actual child abuse images to make art for you and bc of the way ai is currently trained It Almost Certainly Is. csam is not exclusive to the “dark web” or whatever if you’ve talked to a mod of anything ever you’ll know idiots try and put the most awful graphic images of children being harmed on places like Reddit and Twitter all the fucking time. places ai Is Trained On. and an ai can scrape an image before a human can delete it. i don’t want an image that is in any way associated with a child being molested and that is not an acceptable way for any technology to act.
16 notes · View notes
radigalde · 2 months
Text
Okay, I just can't. I need to rant about a very specific thing from the 7.0 main story. Spoilers, ofc.
.
.
.
Playing through Stormblood and post-Stormblood, I thought that Lyse's decision to invite a very openly and well-known tempered snake into the city without any supervision (despite working with Scions for years and knowing about tempering from experience) would be THE stupidest moment of the entire game. And it was, right until the Living Memory from Dawntrail.
Immediately after entering, the gang just... goes oh so slowly through this Disneyland, talking with locals, tasting the food, playing with gondolas and whatever else. All under the excuse of "this is an expansion about knowing others' lives and cultures, so we must learn about these people, and therefore their deaths won't be in vain".
Guys. Those are not real people. They are not even souls of the deceased locked in limbo. This is literally a simulation made by a generative AI of sorts, which calculates what the person should look like, do, say, etc. based on the database of extracted memories. And each second of this Disneyland's existence is fueled by burning through very real souls. Including those of the very same people whose memories are used for the simulation. It's not even "saving the dead at the cost of the living" or "saving the past at the cost of the future", it's akin to saving fucking memory NFTs at the cost of everything and everyone.
Okay, Wuk Lamat is dumb as fuck and learned about the whole reflections-souls-aether mumbo-jumbo like yesterday, but what the fuck is wrong with the rest? Are they nuts? Did they lose their last brain cell somewhere in the previous slog? They should have turned the whole thing off ASAP, not taken a leisurely stroll.
And it's so clear that the writers wanted to make the same dramatic plot twist as with the recreated Amaurot in Shadowbringers, but just like everything else in this MSQ it flopped horribly. Boo hoo, dead Namikka's memory is here too, it's so sad, Alexa, play Despacito, while Namikka's very soul is being slowly disintegrated to fuel the illusion, completely unaware of her reunion.
I'm so fucking done, I hate DT's MSQ so much--
14 notes · View notes
Note
Tbh the reason AI can't replicate reality in a realistic way is simply because you can't recreate reality. You can simulate reality, sure. But to properly recreate reality isn't possible. The reason is that there are no lines in real life. Images are made up of pixels and real life is made up of billions of hyper-complicated things. I can very easily see the distance of the doll on my desk to the wall. Can I tell you *what* the distance is? No, but I can see how far the doll is from the wall. Computers can't do that. They think in numbers as they are forced to decipher *flat* images. To get an AI to create an even semi-realistic reality would be ung_dly expensive because you would have to teach the AI using real-life distance and form, not just letting it calculate how far or what form is what from a flat image. But again, cameras can't see distance and form in the same way human eyes do. Cameras can only capture a flat image and have to decipher through said flat image. AI doesn't understand the complexities of how things move, look, or even sound because it can't look at them the way a pair of eyes could. I *know* how much I have to extend my arm to touch something, because I can see how far away it is. I know where my posters begin and end, not because of *just* a color difference or hue change, but because I can *see* exactly where they end. Computers will never be able to replace artists. Maybe in the mainstream industry, BUT they are still going to have to hire real artists to make their content, because AI can't produce exactly what you want, since it can't think like a human brain. There are companies who've tried to use AI to replace certain aspects, and it's proven to be so frustrating that the animators are forced to reanimate the AI work, because it just *isn't* what they wanted or needed for the project.
AI assistant tools can certainly be helpful to artists, especially in the industry. But the fat cats in Hollywood already know they can't *actually* get rid of us, because their silly robots just don't do it right.
all of this is true yes and I think moreso even without questioning the reality of human perception there is just the fact that ai doesn’t think in the same way a conscious being does. text algorithms don’t generate compelling (or, let’s be real, comprehensible) narratives because they work by stringing together words one by one — every singular word is followed by the most likely next singular word based on whatever database the model is using. ai can’t write unique characters or dialogue or even navigate most plot holes because it doesn’t have a memory of what it’s said beforehand and even if it did it wouldn’t have a larger context to place its writing within
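The "stringing together words one by one" idea above can be sketched with the most stripped-down possible version: a bigram model that always picks the most frequent next word seen in its training text. (Real language models use neural networks over much longer contexts, and the corpus here is made up for illustration, but the word-by-word generation loop is the same shape.)

```python
import collections

# Tiny made-up training text
corpus = "the knight rode the horse and the knight drew the sword".split()

# The "database": for each word, count which words followed it
following = collections.defaultdict(collections.Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length):
    words = [start]
    for _ in range(length - 1):
        counts = following[words[-1]]
        if not counts:
            break
        # always append the single most likely next word -- no memory
        # of the whole passage, no plan, no larger context
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)
```

Asking it for five words starting from "the" yields "the knight rode the knight": locally plausible pairs, no coherent whole, which is the failure mode described above in miniature.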
the same goes for image algorithms. sure, an ai can give you an approximation of a knight, but the armor is going to be completely nonfunctional if you examine it even a bit. an ai can give you a room with the prettiest color palette in the world, but there’s also going to be a hole in the ceiling with a branch going through it because it doesn’t understand the concept of skylights beyond knowing vaguely what they look like. regardless of whether or not what it’s doing counts as “thinking” (though I do think there’s a pretty clear answer to that), what ultimately matters is that an ai is incapable of thinking critically. you can give an image algorithm a prompt like “add flowers in foreground” yes but you’re never going to succeed with a prompt like “follow the laws of physics”
9 notes · View notes
faif-girlcel · 9 months
Text
Been playing Mass Effect lately and have to say it's so interesting how paragon Shepard is the definition of a "good cop". You're upholding a racially hierarchical regime where some aliens are explicitly stated to be seen as lesser and incapable of self-governance, despite being literal spacefarers with their own governments (and despite the actual, emphasized incompetence of those supposedly "capable of governing"). The council seemingly allows for all sorts of excesses and brutality among its guard, and chooses on whims whether or not to aid certain species in their struggles based on favoritism. There is, from the council's perspective, *literal* slave labor used on the Citadel that they're indifferent to because, again, lesser species (they don't know that the keepers are designed to upkeep the Citadel; they just see them as an alien race to take advantage of at zero cost). There is seemingly overt misogyny present among most races that is in no way tackled or challenged, limitations on free speech, genocide apologia from the highest ranks engrained into educational databases. And throughout all of this, Shepard can't offer any institutional critique, despite being the good-guy hero jesus person, because she's incapable of analyzing the system she exists in and actively serves and furthers. Sure, she criticizes individual actions of the council and can be rude to them, but ultimately she remains beholden to them and carries out their missions, choosing to resolve them as a good cop or bad cop, which again maybe individually means saving a life or committing police brutality. But she still ultimately reinforces a system built upon extremely blatant oppression and never seriously questions this, not even when she leaves and joins Cerberus briefly.
And then there's the crew, barring Liara (who incidentally is the crewmate least linked to the military, and who,, is less excluded from this list in ME2,, but i wanna focus on 1) Mass Effect 1 feels like Bad Apple fixer simulator, you start with
Garrus: genocide apologist (thinks the genophage was justified) who LOVES extrajudicial murder
Ashley: groomed into being a would-be klan member
Tali: zionist who hated AI before it was cool (in a genocidal way)
Wrex: war culture mercenary super chill on war crimes
Kaidan: shown as the other "good cop" and generally the most reasonable person barring Liara, but also he did just murder someone in boot camp in a fit of rage
Through your actions, you can fix them! You can make the bad apples good apples (kinda) but like,,,,
2 of course moves away from this theme a bit while still never properly tackling corrupt institutions in a way that undoes the actions of the first game, but its focus is elsewhere and the crew is more diverse in its outlook
Ultimately i just find it interesting how Mass Effect is a game showcasing how a good apple or whatever is capable of making individual changes for the better but is ultimately still a tool of an oppressive system and can't do anything to fundamentally change that, even if they're the most important good apple in said system.
Worth noting maybe this'll change in Mass Effect 3, which i have yet to play as im in the process of finishing 2 currently (im a dragon age girl) but idk i like how it's handled at first i was iffy on it but no it's actually pretty cool.
Also sorry if this is super retreaded ground im new to mass effect discourse this is just my takeaways from it lol
12 notes · View notes
realbeeing · 2 years
Text
become your own librarian
i think we are coming to a time where the former promise of the internet-- namely its function as a vast library of information previously out of reach-- is fading out. for multiple reasons, the search engines are losing their ability to actually find anything based on your keywords, and whatever search results do appear are bogged down with a swarm of dead, useless AI-driven webpages or sites made by the various corporate overlords & their institutions, who squeeze their questionable and meager info between the main event: advertisements. those weirder, more esoteric independently-curated websites are much harder to find. 
i think we take it for granted that the internet is sort of this permanent lifeline to information that makes things like books or other more physical media obsolete. we would like to believe that search engines and databases are some kind of naturally-produced phenomena which will effortlessly evolve into the best possible form to suit progressive human needs. No, it’s very possible that these systems will fail, and they already are starting to. 
the same goes for access to music, art, films, etcetera. we shouldn’t assume it will always be available to stream and view online because you have a membership. the shift from owning music to streaming promised a superior convenience under the false notion -a fantasy really- that the current state of media technology will go on, as it is, forever. it doesn’t take much for a server to crash, and it also doesn’t take much for the whole system to be bought out by some other company and suddenly you are locked out of all of “your” music until you pay their increased monthly service charge. 
another thing, it has become harder to discover good information and good art compared to the earlier days of the internet because there’s just so much of everything already, most of which is a complete pile of waste-- waste that clogs the pipes so nothing of value can get past. in conspiracy theory circles, they might also call this disinformation or misinformation-- a type of “psy-op” that creates the illusion of abundant & diverse sources of information, all while the real knowledge is covered up, distorted, or slandered. 
we will have to consider all of this soon, i think. how do we learn new things in a way which is reliable and connected to something other than the dominant tech corporations? how do we source our knowledge, what private databases are we creating in our own private space and how are we giving life to it? a life of its own, so that it cannot be erased either suddenly, or perhaps more perniciously, in a slow way, so gradual that we don’t even realize, for example, how when we search for something now, there are seemingly hundreds of pages of results, but each and every page has the same 10 websites repeated over and over again...
20 notes · View notes
cyberart1a · 11 months
Text
How AI art is made and how it can be a problem
Lucas Melo
Art made by AI (Artificial Intelligence) is really popular nowadays, all over the internet. The idea behind AI that makes art is teaching an algorithm how to draw, then using that algorithm to make the drawing for you. However, the way the tool is made can lead to some problems involving image rights. 
You can ask the AI for almost anything; for example, you can ask it to draw a knight riding a bear while holding a candy weapon (that says a lot about how powerful AIs can be), and the AI will make the drawing based on its database. There are a lot of websites, like “craiyon.com”, “creator.nightcafe.studio“ and many others, that provide an AI to make whatever art you ask for. Today the tool is not fully developed; it still has some problems drawing hands, elbows and other things, which are being fixed as time passes. 
To develop an AI, you take an algorithm and feed it data, and that data is used to teach the algorithm how to perform a task. Sometimes the data is supervised by humans, sometimes it’s not; it depends on how the algorithm is built. As the AI does the same task multiple times, the technique develops and improves every time it is used. 
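The feed-it-data-and-repeat idea can be sketched as a toy. This is a deliberately tiny illustration, not how a real art model works: the "model" here is a single adjustable number and the "data" is just pairs of numbers, but the loop of guess, measure the error, adjust is the same basic shape:

```python
# Toy "training" loop: show the algorithm examples over and over,
# nudging its one internal parameter until its guesses match the data.
examples = [(1, 2), (2, 4), (3, 6)]  # inputs paired with desired outputs

weight = 0.0           # the model's single adjustable parameter
learning_rate = 0.05

for step in range(200):                      # repeat the task many times
    for x, target in examples:
        guess = weight * x                   # the model's current attempt
        error = guess - target               # how far off it was
        weight -= learning_rate * error * x  # nudge toward the data

print(round(weight, 2))  # converges to 2.0, the rule hidden in the data
```

Real image generators train billions of parameters on millions of images, but the principle the post describes, improving by repetition over data, is this loop at enormous scale.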
That means that when an AI is made to do art, it needs to be fed art made by actual artists, by people. By learning from the data it received, the AI also becomes able to copy the styles of artists. It wouldn’t be a problem if they had the consent of the artists to do that, but that obviously isn’t the case, since the data is usually taken from the internet automatically.
Artists’ drawing and painting styles are being copied by AI. Not only does it use artworks without consent, it can also make some artists lose their jobs. Since the AI is able to draw in the same style as the artist for free (or cheaper), there’s no reason to hire the artist to do it. It’s not happening yet, because these AIs are still in development and lack the quality and precision to do exactly what the user is asking, but if things keep going like this, it’s just a matter of time. 
In conclusion, AI itself is not the problem. The problem is how it is made. To make an AI that does art, you have to use other people's art, and that is done without any consent, and that may affect artists' jobs.
3 notes · View notes
clueingforbeggs · 1 year
Text
Actually, I’ve noticed that Tumblr, the internet at large, probably, too, has a problem of people being ‘anti’ something, without actually knowing what they are ‘anti’
I’m just thinking about that post about an AI thing that was helping to prevent art theft and people were going ‘I can’t believe they made an anti AI AI!’ As if AI stands for Art Itheft not Artificial Intelligence.
Also the sudden blame that was being leveled against people who used AI art generators, as if they were personally responsible for the art theft. After ages of nothing but ‘Please fucking tag it unreality thanks.’ Like, you can think that it’s not ethical or whatever without acting as if someone who uses a generator is personally creating the database it uses.
Also like… ‘Nobody deserves art so if you couldn’t pay an artist originally you don’t deserve to use an AI art generator’ Fuck you. Somebody using a generator who couldn’t pay you to begin with is not stealing your wages. Someone with no money is not a potential employer who has decided to use an AI instead, there are different types of people in the world.
I too think that a database should be made ethically. And I think some of the AI artist guys are just… NFT people moving on to the next thing that will definitely make them money this time.
The problem you have is with the datasets. And the regulations around money and AI art. You are anti databases based on art theft and corporations using computers instead of people. You are not anti AI art, and you are definitely not anti AI. And if you are, there’s something wrong with you.
The technology is not the problem. Every person using it is not the problem. The people behind it and the lack of ethics involved in constructing the datasets is the problem. A 13 year old generating AI fanart of their favourite character is not. Fucking. Stealing. A poor 21 year old with little money who couldn’t afford a commission who uses a generator to make an image of their OC is not. Fucking Stealing.
Be mad at the fucking corporations. Say that you don’t like it when people use the generators due to their issues. I don’t give a fuck. Just stop telling people they’re solely responsible for your hardships when they wouldn’t have given you money to begin with.
3 notes · View notes
moodr1ng · 1 year
Note
On anon because this is an unpopular opinion but like I need to tell SOMEONE and ur posting about how people are fear mongering ai too much... Like... As much as it sucks ai are using artists and writing as references without permission.... It's fair use. It's the legal definition of fair use. If people manage to make that stuff not allowed we are very few steps away from not allowing fanart or music remixes you know!?? Like I know I know it SUCKS to have your art used to train an AI without permission and trying to replace real designers with it is disgusting but GUYS IT'S FAIR USE!!!!!! ITS CREATING SOMETHING NEW. I'm literally dying here dude.
im not entirely sure whether it falls under fair use bc i dont know for sure that most ai like.. even uses writing/art from any singular person to a degree where itd even come into conflict w copyright? these databases are so so massive that one piece of art put into it is a drop in a gigantic bucket, and also people kind of imagine that the ai stores all of that data and uses specific images, but it doesnt - its not like, 'keeping' the images and directly referencing from them.
in any case tho i dont rly base my view on the subject on whether or not its legal, and i do think it is generally scummy to use the work of individual artists online without their consent even if its legal. itd also be a huge benefit if the companies training ai had to use only public domain works, because a lot of the public domain is not properly digitized and accessible, and companies needing to do that work to train ai would benefit everyones access to art! but yeah, its cheaper to just scrape a bunch of art from google images and whatnot, and capital always wins, so!
ig this is all part of the point for me tho - like, even just the training of ai art machines (not even the finished tech itself but the process of making it) could have benefits for all of us if it was used in a way that privileges that benefit rather than being primarily motivated by money making. the issue is always always always the capitalist machine or whatever.. sorry lol im jetlagged so i may not make senseeee
2 notes · View notes
gokukazoo · 2 years
Text
The growing popularity of AI art should worry anyone who even remotely cares about the future of creative expression as we know it. Sure, the AI can’t draw fingers right now, but eventually, as more and more people feed into the algorithms for these things, they will get more accurate.
Right now, the biggest problem with them is that the only way they work is that the actual work of real, living artists is fed into a database without the artists’ permission, generating theoretically infinite pieces of “art” based on stolen art. If you’ve ever seen AI art before, you should know that you aren’t looking at an original work, but a facsimile of multiple real artists’ existing work shoved into a blender (again, without consent and often against the artist's express wishes). If you actually want to see what you’d look like as an anime character or a DnD monster or whatever, I’d recommend commissioning an actual digital artist. You’ll get a better picture and you’ll be helping an artist make a living, which is dreadfully difficult to do as an artist.
But what will this sort of thing look like in the future? Well, the translation industry went through a similar sort of change when AI was introduced. Now, AI is still pretty stupid, so any translations done by it are wildly inaccurate. The problem is, that’s usually “good enough” for companies that don’t have a lot of money and are just meeting the bare minimum of language accessibility. Well, if an AI can do it for free, the companies who want actual, real translators will be willing to pay significantly less for those translations because, well, the competition does it for free. And this is basically what happened.
This is undoubtedly going to creep into other creative fields such as freelance writing. Now, we as everyday people generally would say an actual human creating the thing is better than AI creating the thing, because all forms of art are meant to be about human expression. IMO, it’s not really art if an AI does it, because an inherent piece of what makes art, well, art is that it’s coming from the imagination of a person. Even photography has that human element, as you are viewing things from a carefully thought out perspective.
So okay, giving humanity the benefit of the doubt, most people would rather have the content they consume to be stuff made by humans. But what about corporations? You know, the ones funding all the mainstream content? Well, frankly, they don’t give a damn, at least not the suits at the top making all the decisions. They’ll make whatever decision nets them the biggest profit, and all the numbers and metrics in the world will choose the artist that charges nothing over the artist that charges anything. See, art isn’t cheap to make. Beyond the physical resources it requires, it takes a ton of time and energy. AI can do it instantly, and it does it all by copying the work of real humans who have already put in the time.
So what we’ll end up with is a slow descent into algorithms, AI art feeding AI art, until the human element is barely a layer of sediment.
Now, okay, you might say, well Casey, surely people will still want to make real art themselves, right? And, well, yeah! I’d hope so! But we live in an economic system that requires us to make $$ to survive, and to continue making art. If artists aren’t getting paid, they aren’t going to make art. And this issue will keep compounding upon itself until the industry is dominated by AI.
Art is meant to speak to the human experience, and it comes from the real emotions, thoughts, and ideals of the artist. That's what makes it beautiful, and that's what makes it speak to us on a deep, intimate level. I don’t want a future where that beauty and truth is replaced by an algorithm.
5 notes · View notes
darkmaga-retard · 9 days
Text
When Apple introduced Siri (a personal assistant) in October 2011, the world got its first taste of AI. Since then, Siri has improved considerably, and now Apple, after spending billions over the years, is about to introduce Apple Intelligence. Who cares? The top uses for Siri are checking the weather and playing music, while 85% of users prefer to type search queries.
CNN Business says that “Apple is wedging AI into its phones like a new U2 album no one asked for.” At what price? What will it do to produce additional revenue?
At the end of the day, when the AI bubble pops, it will make the “dot-com bubble” look like a bump in the road.
Forgetting the consumer market, the AI industry is lusting for control over a constant stream of real-time data from financial markets, IoT sensors embedded everywhere and location data. These applications are driving the stampede to establish humongous, energy-gobbling data centers around the world. ⁃ Patrick Wood, TN Editor.
Since early 2022, the big buzz in the tech industry, and among laymen in the general public, has been “artificial intelligence.” While the concept isn’t new—AI has been the term used to describe how computers play games since at least the 1980s—it’s once again captured the public’s imagination.
Before getting into the meat of the article, a brief primer is necessary. When talking about AI, it’s important to understand what is meant. AI can be broken down into seven broad categories. Most of the seven are, at best, hypothetical and do not exist. The type of AI everyone is interested in falls under the category of Limited Memory AI. These are where large language models (LLMs) reside. Since this isn’t a paper on the details, think of LLMs as complex statistical guessing machines. You type in a sentence and it will output something based on the loaded training data that statistically lines up with what you requested.
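As a toy version of that "statistical guessing", the sketch below builds a word-level bigram table: it counts which word followed which in a tiny invented training text, then "predicts" the most frequent follower. Real LLMs operate over vastly larger data and far richer statistics on sub-word tokens; this only illustrates the guessing-machine idea:

```python
from collections import Counter, defaultdict

# Toy "statistical guessing machine": count which word follows which
# in the training text, then predict the most frequent follower.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
)

follower_counts = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follower_counts[current][nxt] += 1

def guess_next(word):
    # Pick whatever most often followed this word in the training data.
    return follower_counts[word].most_common(1)[0][0]

print(guess_next("the"))  # prints "cat": it followed "the" most often
```

Like the LLM in the examples that follow, this machine has no idea what a party or a jumping bean is; it only knows what usually came next in its data.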
Based on this technology, LLMs can produce (at least on the surface) impressive results. For example, ask ChatGPT 4.0 (the latest version at the time of writing) the following logic puzzle:
This is a party: {}
This is a jumping bean: B
The jumping bean wants to go to the party.
It will output, with some word flair, {B}. Impressive, right? It can do this same thing no matter what two characters you use in the party and whatever character you desire to go to the party. This has been used as a demonstration of the power of artificial intelligence.
However, do this:
This is a party: B
This is a jumping bean: {}
The jumping bean wants to go to the party.
When I asked this, I was expecting the system to, at minimum, give me a similar answer as above, however, what I got was two answers: B{} and {}B. This is not the correct answer since the logic puzzle is unsolvable, at least in terms of how computers operate. The correct answer, to a human, would be I{}3.
To understand what’s going on under the hood, here’s the next example:
Dis be ah pahtah: []
Messa wanna boogie woogie: M
Meesa be da boom chicka boom.
This silly Jar Jar Binks-phrased statement, if given to a human, makes no sense, since the three statements aren’t related and there isn’t a logic puzzle present. Yet, GPT-4 went through the motions and said that I’m now the party. This is because, for all its complexity, the system is still algorithmically driven. It sees the phrasing, looks in its database, sees what a ton of people previously typed with similar phrasing (because OpenAI prompted a ton of people to try), and pumps out the same format. It’s a result similar to what a first-year programming student could produce.
0 notes
skillyards · 2 months
Text
Exploring the Integration of Full-Stack Web Development and AI
Dive into the fusion of Full-Stack Web Development and AI with us! Discover cutting-edge insights, tools, future opportunities, and more that bridge coding and artificial intelligence, shaping the future of tech.
Demystifying Full-Stack Web Development
Full-stack development is the practice of building both the front-end and the back-end of an application. Whatever the application is, it has both a frontend (user-facing) and a backend (database and logic) component. The front-end contains the user interface and the code for user interactions with the application. The backend holds all the code necessary for the program to work, such as integrating data systems, calling other applications, and processing data.
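As a rough sketch of that split (everything here, the route, the in-memory `USERS` dict, is invented for illustration; a real backend would sit behind a web framework and a real database), the backend half boils down to: take a request, consult the data layer, return JSON for the frontend to render:

```python
import json

# Hypothetical in-memory "database" standing in for the real data layer.
USERS = {1: {"name": "Ada"}, 2: {"name": "Lin"}}

def handle_request(path):
    """Backend logic: map a URL path to data and return a status code
    plus a JSON body that the frontend can render."""
    if path.startswith("/api/users/"):
        user_id = int(path.rsplit("/", 1)[1])
        user = USERS.get(user_id)
        if user is None:
            return 404, json.dumps({"error": "not found"})
        return 200, json.dumps(user)
    return 404, json.dumps({"error": "unknown route"})

status, body = handle_request("/api/users/1")
print(status, body)  # 200 {"name": "Ada"}
```

The frontend's half of the contract is simply to request that path and turn the JSON into something a user can see and click.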
How does AI contribute to innovation in Full-Stack Web Development?
Automating routine tasks: AI can take on boring, time-consuming tasks like writing small functions or setting up boilerplate code, giving developers more time to focus on the more interesting parts of their jobs.
Testing made easier: Artificial intelligence can use predictive analysis to foresee possible failure scenarios, automate testing procedures, and help developers find and fix bugs more quickly.
Improvement of User Interface/User Experience (UI/UX): AI tools can suggest design ideas based on user-behavior data, improving the overall user experience.
Enhancing Full-Stack Web Development with AI Tools
Codota: An AI-enabled code-completion tool whose machine-learning models learn common coding patterns and suggest code that fits the user's habits and the surrounding context.
Rookout: An AI-assisted debugging tool that enables real-time data collection and pipelining, helping developers find and fix bugs quickly by understanding code flow and tracing the source of errors more accurately.
Testim: An AI-based testing tool that makes developers' lives easier by helping them plan, run, and monitor tests.
Applitools: An AI-powered visual testing and monitoring tool designed for mobile application testing and monitoring.
The Impact of AI on Full-Stack Web Development
Full-stack developers are at the forefront of applying AI's capabilities to build feature-rich, intelligent web apps, combining back-end development practices with user-experience design. AI chatbots and virtual assistants have become some of the most widely used tools on websites, because they adapt to each user and improve the overall experience. Full-stack developers can now apply machine learning models and Natural Language Processing (NLP), making it possible to build intelligent, targeted user interfaces with these AI technologies.
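In an extremely reduced form (the rules and replies below are invented; production assistants use trained NLP models, not keyword lists), a website "chatbot" is just: inspect the user's text, pick a response:

```python
# Toy rule-based "chatbot", far simpler than the NLP models mentioned
# above, but the same basic shape: match the message, return a reply.
RULES = [
    ("price", "Our plans start at $10/month."),
    ("hours", "We're open 9am to 5pm, Monday to Friday."),
]

def chatbot_reply(message):
    text = message.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    return "Sorry, could you rephrase that?"

print(chatbot_reply("What are your hours?"))
```

Swapping the keyword loop for an NLP model changes how the match is made, not where the chatbot sits in the stack: it is still backend logic answering frontend requests.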
In what ways can AI improve Full-Stack Web Development?
Enhanced User Experience: AI in full-stack development personalizes interactions, increasing user engagement and satisfaction by serving each user the content and features that suit them best.
Efficient Automation: AI can carry out repetitive tasks like code generation and testing, freeing developers to concentrate on innovation; this means faster development cycles and better use of resources.
Scalability and Performance: AI can allocate resources intelligently and spot performance bottlenecks, keeping applications running smoothly as workloads grow.
How does AI pose challenges in Full-Stack Web Development?
Complexity in Implementation: Blending AI into full-stack web development requires knowledge of AI algorithms and technologies, which complicates building and maintaining the system.
Data Privacy and Security Concerns: AI-powered systems run on data, which raises serious concerns about privacy, security, and the ethical use of that data; data-protection measures and regulatory compliance become essential.
Skill Gap and Training Needs: AI-capable full-stack developers are in demand, but developing them requires specialized training and upskilling, so talent acquisition and retention become serious challenges in a fast-changing field.
Key Takeaways
Summing up, Full-Stack Web Development with AI represents a transformative leap into the future of technology education. This integration not only equips learners with comprehensive web development skills but also empowers them with the capabilities of artificial intelligence. This fusion is reshaping the way we live by enabling better user experiences. Embrace the future: a period where innovation has no limits, all because of the unstoppable fusion of coding and artificial intelligence.
0 notes