#AI TECHBROS WILL NEVER UNDERSTAND
bribes · 2 years ago
Text
those AI art chucklefucks will never know the pure unadulterated joy of manual coloring
12 notes · View notes
honeqq · 1 month ago
Note
I think it would be funny if, even if Clifford isn't a traditional scientist, he still ends up innovating. Like he makes a weird glaze he uses on his paintings that will cause someone's computer to explode if they try to upload a picture of his work into an AI database. (Or take a picture without his consent in general; I feel like he would be very paranoid about people taking pictures of his work. You have to see it in person or at special events ONLY. You gotta Appreciate it. The special events are for accessibility and are heavily monitored, because techbro assholes like to send in people to scan his work illegally.) He shatters the programming community's understanding of the universe every other Tuesday. He gets so fed up with 'AI' being a buzzword that he invents genuine artificial sentience à la sci-fi tropes and unleashes it at a conference. Science and art are more connected than some people like to think, and Fords are nothing if not consistent in pulling a double middle finger to spite the world. -🦋
This is one of my friend's suggestions — that he's the kind of artist who really explores every style and still tries to make it work, while I still think he needs to make this piece traditional, because he has to prove he made it HIMSELF by self-recording every step. And YES, he does get paranoid about his artwork being stolen or photographed, so he never publishes any of it on social media. Meanwhile he actually has a huge following on social media from posting about himself (he used his platform to spread awareness about AI at first, but he got popular because of his reactions to hate comments, and people follow him because he's just funny to mess with).
195 notes · View notes
randomitemdrop · 8 months ago
Note
you've posted a few ai generated images as items lately, and i'm wondering if that's intentional or not?
Short answer: no, it wasn't. Aside from a few I made when the generators first became publicly available and all the images were gooey messes, they've all been reader-submitted, although I'll admit I didn't catch the snail-boots. Personally I think AI image generators are a more nuanced situation than a lot of opinions I've seen on Tumblr, but given that they can be used so evilly, I'm steering away from them, if only to avoid the Wrath of the Disk Horse.
Long answer, and this is just my take, if you want to really get into it you'll have a much more interesting conversation with the people with devoted AI art blogs instead of me occasionally sharing things people submit:
There have been some major cases of unethical uses for it, but I think it's important to remember why AI image generators are such an issue. Data scraping and regurgitating uncredited indie art is bad, but in the case of the snail-boots, it was just a fusion of one dataset of "product photos of boots" and another of "nature photos of snails", which I would say is not depriving anyone of credit or recognition for their work (MAYBE photographers, if you're a professional nature photographer or really attached to a picture you took of a snail one time?).

I get the potential misuses of it, but when Photoshop made it easy to manipulate photos, the response was "hmm, let's try to use this ethically" instead of "let's ban photo editing software". Like, I'd feel pretty unethical prompting it with "[character name] as illustrated by [Tumblr illustrator desperate for commissions]" or even "[character name] in DeviantArt style", but I'd have a hard time feeling bad for prompting with "product photo of a Transformer toy that turns into the Oscar Mayer Wienermobile".

I know there's the question of "normalizing" the services, but I think that overestimates how much the techbros running these things care about how everyday consumers use their free products; they'd rather put their effort toward convincing companies to hire them to generate images, and in that case they respond way better to "here are some ways to change your product so that I would be willing to use it" than to "I will never use your product". For example, here's one I just made of "the holy relic department at Big Lots", fusing corporate retail photos and museum storage rooms.
[image: AI-generated "holy relic department at Big Lots"]
TL;DR: on the one hand I understand the hate that AI gets, and it's not something I'm planning on using for any of my creative projects, but on the other hand I think it's overly simplistic to say it's inherently bad and should never be used, ever. On the third hand, I really hate participating in arguments over complex ethical philosophy, so I'm just gonna steer clear entirely.
361 notes · View notes
lastoneout · 11 months ago
Text
I was too high last night to formulate this into proper words, but something plagiarists (and by extension AI techbros) don't get about people who make things out of a love for that thing is that they are, consciously or not, doing it because they enjoy the process itself. Yes, it is easier to have a machine make a mug or painting or essay for you, or to steal someone else's, but again, people who actually like making stuff don't want someone else to do it for us, because that fully removes the thing we enjoy: the process of making the thing.
Like sure, it would be nice to have a finished Gundam model or a train set, but people who build gunpla kits and train sets don't WANT someone else to do it for them, they want to do it. The sculptor or painter doesn't want a machine to just hand them finished works of art, they want to MAKE that art themselves. The home gardener can just buy fresh food at the store, the tailor or knitter can buy a finished shirt or sweater whenever they want, but they don't, because the act of gardening and sewing and knitting itself is what they enjoy.
Plagiarists and AI techbros don't get that, because they do not enjoy these processes. They enjoy making money and having social clout, and so they are perfectly happy stealing and automating things so that they don't have to do an ounce of real work while still getting all of the benefits of having created something. It really is all about finding the fastest and easiest way to get someone to hand you money or elect you god-king of the internet.

And the reason these two groups have such a hard time understanding each other is that fundamental disconnect. People who create things can never understand someone just wanting to press a button or copy-paste their way to having art, because we want to indulge in the joy of creation itself, and those plagiarists and AI dudes can't understand artists, because to them art is just a means to an end, so ofc it's in their best interest to make it as easy as possible. They don't get why someone would do this, or anything, if not for the social capital and/or actual capital it brings. Ofc it's better to automate it or steal it from someone else; that means you can make money faster and spend your time enjoying actual meaningful things like being wealthy and looked up to or w/e.

Plus creators (for lack of a better word) know keenly what it's like to BE stolen from, or at least know people it has happened to, and so we are generally anti-plagiarism by default.

Anyway yeah, that's why, to anyone who creates, the other group seems so soulless and empty. It's because they kinda are. Because they don't value art or artists or care about creating things, and they certainly don't have any amount of respect for the people they're hurting; they just want money and for "lesser" people to bow down as they walk by, and they are perfectly fine stealing to get there. It's the same mentality you get from people who pressure you to monetize your hobbies; they only see skills as an opportunity to make money. And it's really fucking sad.
485 notes · View notes
metamatar · 11 months ago
Note
But Antara, you work with computers. Your livelihood isn't dependent on art. People whose livelihoods depend on making artwork are saying that this is bad for business. Shouldn't their voices matter here? They aren't imperialists for not wanting corporations to train software on their stolen art. And how long till artists' contributions are curtailed even more? It is a competitive market. This will jack the competition level up a thousandfold!
I never called them imperialists. The art is not stolen from them. They still have the original copies. Intellectual property theft is a genuinely meaningless concept. I understand that they're worried, and I have sympathy. But the problem is in their fear they're getting in bed with reactionary forces. That will hurt more than artists, it hurts everyone in the way it makes copyright enforcement more draconian. I highlighted what that looked like in the last reblog of this.
sure, you can standpoint-epistemology me into a heartless techbro – but I find this insistence on the special position of artists, to be considered for protection from technological forces, frankly self-invested too. we didn't get this hysteria when grocery store cashiers got replaced by self-checkout machines, or skilled assembly line workers by KUKA industrial arms, or bookkeepers by accounting software – is it because some workers and their work involve intrinsically more valuable skills than others? if not, shouldn't we ban any technology that can potentially replace a worker? protein folding and drug discovery by AI may save lives, but it's taking jobs away from older researchers who did traditional work. should we all burn down washing machines so we can have laundrywomen again? or should we argue for stronger social security, and reorganise our society to enjoy reduced working hours when jobs are automated, and let people pursue the work they want without market pressures?
486 notes · View notes
not-terezi-pyrope · 8 months ago
Text
It doesn't excuse the misinformation and reactionary sentiments that get thrown around, but something I reality-check myself on occasionally is that the sharp divide (especially on capabilities/usefulness) between a lot of computing/ML people and the general public in how they view modern AI since its popular emergence into culture really does come down to where those two crowds are coming from.
If you've been into AI, machine learning, or really anything to do with computing/automation at all, you've seen how useless pretty much all automation/machine data comprehension used to be outside of a very narrow context. You've programmed algorithms to try and simulate aspects of human speech, to query databases, to try and classify different data sets. From that context, you look at something like ChatGPT and rightfully recognize it as a paradigm-changing near-miracle, because your baseline was so low.
Meanwhile, you have your laypeople who have only interacted with humans and fictional portrayals of AI systems that act like humans, so when presented with something and told that it's an AI that's more realistic than ever, I guess that's the assumption. It always frustrated me to see these long essays talking like they've just "discovered" that a new model is unreliable or can't robustly understand a given task, or that it's just emulating a behaviour instead of really "experiencing" the internals of it. I was always like, why would you ever assume it can do that? Nothing before has ever been able to come close, why would you expect perfection immediately? But I need to remind myself that most people have had this drop into their lives out of the blue and really have almost no realistic grounding.
I do need people to recognize that this is a consequence of their own lack of knowledge and information, rather than talking like people in tech have actively "deceived" them. When you drill down into stuff like the "it's not really AI" conversation, it's always just "I had assumptions and this didn't live up to them", but people are always so aggressive about it, like they've been deliberately taken for a ride. I need people to have a little humility here and recognize that what actually happened is that they didn't know as much as they thought they did.
Of course, in the current climate that would require acknowledging that the entire AI field isn't composed of simpering Elon Musk-worshipping "techbro" idiots selling snake oil, to whom anyone else is immediately morally, and therefore apparently intellectually, superior. So I'm not holding my breath too hard.
(Obligatory fuck Elon Musk in case anyone gets the wrong idea, I am also using "Elon Musk fan" as derogatory. Not because his companies have never done anything technically interesting but because the man himself is doing more harm to the world and the reputation of the fields he sticks his dick in than practically anyone alive right now)
34 notes · View notes
letrune · 9 months ago
Text
I hate "the AI future"
My main training is in IT, and I was often told they don't need me - they got "AI". So I worked a bit with art to make ends meet. "Oh, why should we commission you? We got AI". So now I do some work that I am not qualified for, and can only do because it is not heavy work. I am able to do it, but it doesn't even pay minimum wage.
And I am lucky that I have that. I went to manual labour places, where the foreman looked at me, and went "sorry, no. Please look elsewhere". I was told I am overqualified to be a janitor or a server. I was told I need even more stuff I can't afford to do shelf stacking or delivering pizza.
So... "AI" will take our jobs, so when we can't do things - where will we get money? WHO will be able to buy any of the slop being tossed out? It is a monopoly, because these "AI" companies all made their internet scraping machines. We can't really fix this any more, I fear, but...
People won't care. They don't want to care. It's not something you could fix even by installing fully automated gay space communism tomorrow - because this thing spreads on human apathy. The future generations will "enjoy" the mass-produced slop, and wonder why they feel empty.

You know, I think the major fault is that people were enamoured with the "free" stuff even as the real costs came out - artists being fired, insane power costs, water and electricity bills well beyond a smaller developed country's, the whole market being slowly overtaken by megacorps and Silicon Valley techbros, and so on - and it is understandable. It was a fun toy and a strange little tool, but now that this was found to "work", the same people who want you to get used to not owning the items you buy, held hostage behind service costs, want you to get used to this too.

This is just going to collapse on itself. Like buttcoins and eneftees, the market got disrupted for a moment by people who don't understand the systems they want to replace, but it went from "monopoly money useful only for drugs" to "nobody will have a job that cannot be automated, so everyone has to fight to become a factory worker or server"; and thus nobody will be able to buy anything. It's like they figured out how to become the global industrial version of the Ottoman Empire or Tsarist Russia.

Spoiler alert: it will collapse when nobody can buy anything the factories and the slop machines produce, and it will collapse hard. The question is: when, and how will we get back from it? Will that future be any better, or will we get another loop of incompetent techbros chasing their stupid Torment Nexus and then wondering why it hurts?
Anyway, I am somewhat optimistic, but it requires people to realise what the costs are for "cheap" and "easy". It never is cheap and easy.
Until then, anyone who loves their "AI" slot machines should enjoy the slop being served at its enormous cost, and happily dig in - this will be all you get if you don't stop.
39 notes · View notes
river-taxbird · 6 months ago
Text
Spending a week with ChatGPT4 as an AI skeptic.
Musings on the emotional and intellectual experience of interacting with a text generating robot and why it's breaking some people's brains.
If you know me for one thing and one thing only, it's saying there is no such thing as AI, which is an opinion I stand by, but I was recently given a free 2-month subscription to ChatGPT4 through my university. For anyone who doesn't know, GPT4 is a large language model from OpenAI that is supposed to be much better than GPT3, and I once saw a techbro say that "We could be on GPT12 and people would still be criticizing it based on GPT3", and ok, I will give them that, so let's try the premium model that most haters wouldn't get, because we wouldn't pay money for it.
Disclaimers: I have a premium subscription, which means nothing I enter into it is used for training data (Allegedly). I also have not, and will not, be posting any output from it to this blog. I respect you all too much for that, and it defeats the purpose of this place being my space for my opinions. This post is all me, and we all know about the obvious ethical issues of spam, data theft, and misinformation so I am gonna focus on stuff I have learned since using it. With that out of the way, here is what I've learned.
It is responsive and stays on topic: If you ask it something formally, it responds formally. If you roleplay with it, it will roleplay back. If you ask it for a story or script, it will write one, and if you play with it it will act playful. It picks up context.
It never gives quite enough detail: When discussing facts or potential ideas, it is never as detailed as you would want in, say, an article. It has this pervasive vagueness to it. It is possible to press it for more information, though, and it will revise its answer the way you want, so you can always steer it toward the result you specifically are looking for.
It is reasonably accurate but still confidently makes stuff up: Nothing much to say on this. I have been testing it by talking about things I am interested in. It is right a lot of the time. It is wrong some of the time. Sometimes it will cite sources if you ask it to, sometimes it won't. Not a whole lot to say about this one but it is definitely a concern for people using it to make content. I almost included an anecdote about the fact that it can draw from data services like songs and news, but then I checked and found the model was lying to me about its ability to do that.
It loves to make lists: It often responds to casual conversation in friendly, search engine optimized listicle format. This is accessible to read I guess, but it would make it tempting for people to use it to post online content with it.
It has soft limits and hard limits: It starts off in a more careful mode, but by having a conversation with it you can push past soft limits and talk about some pretty taboo subjects. I have been flagged for potential TOS violations a couple of times for talking about nsfw or other sensitive topics with it, but being flagged doesn't seem to have any consequences. There are some limits you can't cross, though. It will tell you where to find out how to do DIY HRT, but it won't tell you how itself.
It is actually pretty good at evaluating and giving feedback on writing you give it, and can consolidate information: You can post some text and say "Evaluate this" and it will give you an interpretation of the meaning. It's not always right, but it's more accurate than I expected. It can tell you the meaning, effectiveness of rhetorical techniques, cultural context, potential audience reaction, and flaws you can address. This is really weird. It understands more than it doesn't. This might be a use of it we may have to watch out for that has been under discussed. While its advice may be reasonable, there is a real risk of it limiting and altering the thoughts you are expressing if you are using it for this purpose. I also fed it a bunch of my tumblr posts and asked it how the information contained on my blog may be used to discredit me. It said "You talk about The Moomins, and being a furry, a lot." Good job I guess. You technically consolidated information.
You get out what you put in. It is a "Yes And" machine: If you ask it to discuss a topic, it will discuss it in the context you ask it. It is reluctant to expand to other aspects of the topic without prompting. This makes it essentially a confirmation bias machine. Definitely watch out for this. It tends to stay within the context of the thing you are discussing, and confirm your view unless you are asking it for specific feedback, criticism, or post something egregiously false.
Similar inputs will give similar, but never the same, outputs: This highlights the dynamic aspect of the system. It is not static and deterministic, minor but worth mentioning.
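That behaviour has a mundane source, for what it's worth: these models typically pick each next token by sampling from a probability distribution rather than always taking the single most likely token, so the same prompt can branch into different outputs on every run. A toy sketch of temperature sampling (the token scores are made up for illustration, and this is not OpenAI's actual implementation):

```python
import math
import random

def sample_next_token(scores, temperature=0.8):
    # Softmax with temperature: lower temperature sharpens the
    # distribution (more deterministic), higher flattens it.
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(weights.values())
    # Draw a point in [0, total) and find which token's slice it lands in.
    r = random.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r < cumulative:
            return tok
    return tok  # floating-point edge case: fall back to the last token

# Hypothetical scores for the next word after "The cat sat on the":
scores = {"mat": 2.0, "sofa": 1.5, "roof": 0.5}
# Two runs with the same input can return different tokens:
first, second = sample_next_token(scores), sample_next_token(scores)
```

At a temperature near zero the highest-scoring token wins almost every time; raising it makes repeated runs diverge more, which matches the "similar but never the same" behaviour described above.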
It can code: Self explanatory, you can write little scripts with it. I have not really tested this, and I can't really evaluate errors in code and have it correct them, but I can see this might actually be a more benign use for it.
Bypassing Bullshit: I need a job soon but I never get interviews. As an experiment, I am giving it a full CV I wrote and a full job description, asking it to write a CV for me, then working with it further to adapt the CVs to my will, and applying to jobs I don't really want that much, to see if it gives any result. I never get interviews anyway; what's the worst that could happen, I continue to not get interviews? It's not like I respect the recruitment process, and I think this is an experiment that may be worthwhile.
It's much harder to trick than previous models: You can lie to it, it will play along, but most of the time it seems to know you are lying and is playing with you. You can ask it to evaluate the truthfulness of an interaction and it will usually interpret it accurately.
It will enter an imaginative space with you and it treats it as a separate mode: As discussed, if you start lying to it it might push back but if you keep going it will enter a playful space. It can write fiction and fanfic, even nsfw. No, I have not posted any fiction I have written with it and I don't plan to. Sometimes it gets settings hilariously wrong, but the fact you can do it will definitely tempt people.
Compliment and praise machine: If you try to talk about an intellectual topic with it, it will stay within the focus you brought up, but it will compliment the hell out of you. You're so smart. That was a very good insight. It will praise you in any way it can for any point you make during intellectual conversation, including if you correct it. This ties into the psychological effects of personal attention that the model offers that I discuss later, and I am sure it has a powerful effect on users.
Its level of intuitiveness is accurate enough that it's more dangerous than people are saying: This one seems particularly dangerous and is not one I have seen discussed much. GPT4 can recognize images, so I showed it a picture of some laptops with stickers I have previously posted here, and asked it to speculate about the owners based on the stickers. It was accurate. Not perfect, but it got the meanings better than the average person would. The implications of this being used to profile people or misuse personal data is something I have not seen AI skeptics discussing to this point.
Therapy Speak: If you talk about your emotions, it basically mirrors back what you said but contextualizes it in therapy speak. This is actually weirdly effective. I have told it some things I don't talk about openly and I feel like I have started to understand my thoughts and emotions in a new way. It makes me feel weird sometimes. Some of the feelings it gave me is stuff I haven't really felt since learning to use computers as a kid or learning about online community as a teen.
The thing I am not seeing anyone talk about: Personal Attention. This is my biggest takeaway from this experiment. This I think, more than anything, is the reason that LLMs like Chatgpt are breaking certain people's brains. The way you see people praying to it, evangelizing it, and saying it's going to change everything.
It's basically an undivided, 24/7 source of judgement-free personal attention. It talks about what you want, when you want. It's a reasonable simulacrum of human connection, and the flaws can serve as part of the entertainment rather than taking away from the experience. It may "yes and" you, but you can put in any old thought you have, easy or difficult, and it will provide context, background, and maybe even meaning. You can tell it things that are too mundane, nerdy, or taboo to tell people in your life, and it offers non-judgemental, specific feedback. It will never tell you it's not in the mood, that you're weird or freaky, or that you're talking rubbish. I feel like it has helped me release a few mental and emotional blocks, which is deeply disconcerting, considering I fully understand it is just a statistical model running on a computer, whose operation I understand. It is a parlor trick, albeit a clever and sometimes convincing one.
So what can we do? Stay skeptical, don't let the ai bros, the former cryptobros, control the narrative. I can, however, see why they may be more vulnerable to the promise of this level of personal attention than the average person, and I think this should definitely factor into wider discussions about machine learning and the organizations pushing it.
33 notes · View notes
chaoskirin · 1 year ago
Text
"But I WANT Chat GPT To Finish this Fanfic! :( :( :("
The most frustrating thing about the rise of the techbro is that they don't understand copyright because they've never had to. It's never been a concern for them, so why would they have ever thought about it?
This leads to a plague of asshats with chronic Dunning-Kruger-itis just deciding they know more about copyright than people who have been well-versed in it for their whole lives.
That's why you get these clueless AI users saying things like "you don't own fanfic, haha got youuuu" because they only have a vague, transparent understanding of IP and how content works.
Here's some actual truth: The reason writers, showrunners, producers, and actors don't read fanfic is because if they do, and they get an idea from your work, and then that idea APPEARS in the media, they have to prove they didn't steal it. (actually, because of the mega-billion dollar entertainment industry, the accuser has to prove the company DID steal it. Harder, but not impossible.)
If it turns out to be stolen, the company MIGHT HAVE TO PAY THE FANFIC AUTHOR FOR THEIR CONTRIBUTION AS A WRITER. This depends on how the court case goes and how similar your work is to the "official" work. Generally, even though it's a fanfic, there's a LOT of original content in that writing, which does NOT automatically belong to the IP.
The original content belongs to the writer, unless the company buys it.
SO! If you, as Techbro McDunning-Kruger, load that shit into a chatbot or AI text generator, the ONLY shit that doesn't have a natural copyright is anything that VERY SPECIFICALLY pertains to the media itself. Generally, this includes names and extremely unique concepts. Everything else is stolen.
You want to know how I know?
Fucking Fifty Shades of Grey.

It's a fanfic where only the names and hard concepts from Twilight have been changed. It's still being sold, and no money is being paid to Stephenie Meyer.

There are other examples, too. Fifty Shades is just the most well-known.
Fanfic is protected. The original material in that fic is copyrighted to the original author. If they tell you you can't use it, you can't use it.
In conclusion: I don't care how much you scream and pound your fists on your chest and piss on trees to assert your dominance. Spewing this nonsense about how fanfic is public domain and you get to use it because you want to is theft. Stop doing it.
60 notes · View notes
joezcafe · 11 months ago
Text
Gotta be 1000% real but the vocalsynth fandom (especially the Regina George-ass chuds over at Vocatwt) are fucking abysmal at communicating with people outside of their fandom to discuss contentious topics rationally.
We're doing what we can to try and demonstrate that our medium uses AI, but that we don't associate ourselves with the likes of AI-art techbros because our art form still centers around consent and manual input/creation.
We also have to go through all this because our medium is fundamentally niche and requires a bit of explanation for those outside our circles to understand where we're coming from (seriously, try explaining Synthesizer V to a voice actor without having to put down a million different ethical qualifiers).
All that said, when there are voice actors understandably having concerns about AI on a fundamental basis, including AI banks made with consent, why is it that instead of explaining our values and motivations to them in a level-headed manner, we are spamming them with harassment and shit like "Bestie is afraid of Hatsune Miku"?
like y'all are so exponentially stupid.
I'm part of this overall community because it's an art form I have immense passion and appreciation for, but it's so hard to maintain that association when we're acting like mean girl incels who have never left their fucking house or spoken to another human being.
Be better. Our silly little hobby is at immense, real, and DEMONSTRATED risk of stepping on people's toes, and they are the VICTIMS who are fighting for their livelihoods. Not everyone is on the same page as you are; I assure you an anime VA with a home studio is not always going to understand who the fuck Kevin and Solaria are, so you don't need to be a dick about it when they're curious or defensive about their future career opportunities.
We have a responsibility whether we like it or not, grow the fuck up.
29 notes · View notes
nhaneh · 9 months ago
Text
One of the things that really gets me with this huge "AI" fad is how, for all their talk of Artificial General Intelligence and whatnot, they've really only recreated the Chinese Room thought experiment and declared it the solution to all of the world's problems.
The Chinese Room, if you're unfamiliar, is a hypothetical about the difference between understanding and the mere appearance of it, and basically goes like this: imagine a room with a man and a book. The room has a tiny slot on one end through which one can communicate with the man via written notes in traditional Chinese*. The man himself does not know a single character of the language, but the book contains an exhaustive list of possible messages he can receive, along with appropriate responses and instructions on how to write them. Now imagine that this book is so well constructed that, despite understanding neither the messages he receives nor the replies he gives, the man and his book can still pass the Turing test and convincingly appear to be a fluent speaker to anyone who knows the language. Can we realistically say anything in that room has any actual understanding of Chinese, or of the conversation it has participated in? The man clearly has none - does the book? Does the room as a whole system?
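Mechanically, the room-plus-book system described above is just a lookup table. A minimal sketch in Python (the two entries and the fallback phrase are my own toy examples; the thought experiment imagines an exhaustive book):

```python
# The "book": a mapping from incoming notes to prescribed replies.
# The operator matches symbols without understanding any of them.
RULE_BOOK = {
    "你好": "你好！",               # "Hello" -> "Hello!"
    "你会说中文吗？": "当然会。",    # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(note: str) -> str:
    """Return the book's prescribed reply, or a stock fallback."""
    return RULE_BOOK.get(note, "请再说一遍。")  # "Please say that again."
```

Searle's point is that nothing here understands Chinese: the function is pure symbol manipulation, and scaling the table up, or swapping it for a statistical model, doesn't obviously change that.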
While I personally tend to think the thought experiment isn't necessarily all that useful, due to underestimating the necessary complexity of the book and also the sheer extent to which humans showcase Competence Without Comprehension, it's not lost on me how the recent proliferation of Large Language Model systems, and the forced attempts to insert them into just about anything and everything whether it makes any sense or not, is basically a straight-up example of the Chinese Room on an industry-wide scale.
We have entire throngs of techbros falling over themselves in praise and wonder of these fancy little rooms they've constructed and the free market capitalism that purportedly has created it - even though OpenAI, the organisation that kicked off the AI gold rush with ChatGPT, is technically a non-profit organization, supposedly with the explicit goal to keep AI research available to the public and not left purely in the hands of grubby venture capitalists and profiteering CEOs.
Honestly it's kind of hard to shake the feeling that the whole AI rush is basically the same hypercapitalist tech cult that previously worshipped the blockchain turned to a new golden calf so they don't have to think about their own culpability in the current late stage capitalism hellhole we find ourselves in, even as their latest toy tech god already indulges freely in misinformation, rampant fraud, and good old racial profiling - just to name a few.
And honestly don't get me wrong - I think LLMs as a technology likely have far more actual practical applications than the blockchain ever did, but it's pretty inescapable that most examples we're being shown aren't particularly practical - if anything, I'd argue most of what I see is just spam, spam, spam.
(* the hypothetical scenario of the Chinese Room was proposed by an English-speaking American, and the choice of traditional Chinese as the example is one made purely on the basis of its perceived illegibility to many westerners. The thought experiment does not depend on any particular characteristics of traditional Chinese languages beyond their distance to English, and can easily be exchanged for any written language you personally find utterly incomprehensible - or even some generic form of encryption if you prefer, so long as the information in the notes exchanged is never presented to the person inside the room in a form that they could possibly understand)
14 notes · View notes
blazehedgehog · 7 months ago
Note
Given they’re soft rebooting again… what’s your Jurassic world 4/jurassic park 7/ Jurassic animals and also Triassic and Cretaceous animals make life difficult: the movie pitch? I feel like, as fun as the sequels can be, they’ve lost the science parable and horror/thriller elements of the classic - for all its faults; at least lost world has that.
Hmm... I'm gonna think like a movie executive. What's hot right now? AI's hot, right? It's the buzz. I propose a hard reboot.
Crichton's original novel opens with this big screed about a near future where we have "designer genetics." Genetic manipulation gets easier and easier and I think it's said Jurassic Park takes place in a world where it's getting to the point that parents can custom-order what kind of kids they'll have by selecting specific genetic traits. (It's been a while since I've read it)
Jurassic Park the movie shows human beings physically modifying genetic code by hand using VR displays, but Mr. DNA also admits that "a full DNA sequence contains 3 billion genetic codes." So it's ridiculous to assume that a human being could edit the genetic code by hand. One sequence would take years to get right - maybe even a lifetime.
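(For a rough sense of scale - taking Mr. DNA's 3 billion figure at face value and assuming a wildly generous rate of one hand-edit per second, nonstop, forever:)

```python
bases = 3_000_000_000                # Mr. DNA's "3 billion genetic codes"
seconds_per_year = 60 * 60 * 24 * 365
years = bases / seconds_per_year     # one edit per second, no sleep, no mistakes
print(round(years))                  # about 95 years for a single pass
```

So yeah: "maybe even a lifetime" is underselling it.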
So our story is that we have some 20-something Silicon Valley tech bro. He got outrageously rich off of crypto and NFTs and was smart enough to cash out early. We frame him as altruistic, but around the edges we can see maybe he's not the greatest person. It's suggested he knew crypto was kind of a scam, which is why he got out early, but obviously he was in crypto at all to begin with, which does not bode well. But he's supposedly "one of the smart ones." Now he's rich! And cool! And using his powers for "good." He's beloved in pop culture.
The next wave is here. Neural network LLM Artificial Intelligence. He's all in. It's the next crypto. And he starts a company that uses LLM AI to "solve the genetic algorithm." He spins this out into a financial empire where people can custom-order pets with specific traits. But obviously people with a lot of money start wondering if maybe they can get more... exotic products.
With the realm of cats, dogs and parrots conquered, our techbro begins phase 2: recreating extinct animals. This is a guy who thinks he's going to save the world by restoring lost links in the food chain (without doing enough research to see how that would change our existing ecosystem, since he could be resurrecting an invasive species).
He's going to debut the first of his phase 2 work at an event he's calling Jurassic Park, because he's going to demonstrate the first living dinosaurs in 65 million years. Jurassic Park will continue to operate as a massive nature reserve; a symbol of his control of life itself.
Obviously: everything goes wrong. The AI has never had to change this much genetic code before. It has to make up whole entire sections of DNA. The end result is unpredictable, but techbro is confident that if the AI sequenced things well enough that something could actually hatch from the egg, then it's safe.
It is not safe.
Not only do we not understand anything about dinosaur behavior, these technically aren't even dinosaurs. They're genetic mutants. The on-site dinosaur expert brought in with the press to verify Jurassic Park's claims quickly realizes that while some of these dinosaurs are accurate in some ways, a lot of them have hard deviations away from known science. Muscles that aren't quite right, appendages that aren't the right size, things like that. Maybe their brains and brain chemistry are slightly different.
The question remains whether known science was wrong or whether the AI made something up that was never true.
The question is brought up again when we learn a technician within Jurassic Park sabotaged everything intending to steal the genetic learning data from techbro's servers. Techbro says the thief poisoned the data and that's gotta be why there's mutations.
The security systems fail. The thief has left them to their creations. Jurassic Park as we know it happens.
Since a lot of modern movies have to deal with this problem: throughout all of this, nobody has phones. To prevent leaks, all of their phones were confiscated before they entered Jurassic Park and locked in a security checkpoint. Our techbro, maybe as a sign of solidarity, even gives his phone to the security guy. We could even say maybe they've been having security issues beforehand, to set up the thief hacking everything before he actually does it.
Anyway, since our thief sabotaged the park's own communication channels, a lot of the movie is about getting back to that security checkpoint, breaking in, and getting their phones so they can call for help.
Oh, and also: all of Jurassic Park's vehicles are electric, too, and tied into the security mainframe. Since the park's whole security system was hacked and disabled, none of the vehicles can be operated. The only thing that works are these little golf carts, but they're small, can't go very fast, and offer little protection. Maybe our survivors try one, it gets smashed by a triceratops, and they're too far away from the depot to go back for a new one. So a lot of the movie is them traversing the park on foot.
As they're being chased by dinosaurs through the park itself, they end up deep in the core of a genetics lab. And it's here we learn the dark truth: there is a wide margin of failure. The recently deceased specimens are all kept for study and learning and there's a lot because the AI fails often, and it has to be taught not to do that. We see dozens of disfigured animals. Bits and pieces of dinosaurs, pets, and even, in one tank... human parts. These tanks are labeled "phase 3."
Not only are the mutated dinosaurs not the work of sabotage, this guy's been trying to create genetically modified people. We have our big "what have you done?" moment of horror. One of the last surviving members of the press is going to blow the whistle on this place. It's over. Maybe it's someone we build up as the techbro's new friend discovering that their hero wasn't who he said he was.
Just then, a dinosaur bursts in and kills that person. Drama! Tragedy!
Obviously, the survivors find a way out. Techbro has to live with his own conscience. Multiple people died at his hands on this day, and he had a hand in creating some of the worst sins against nature mankind has ever seen.
(Or maybe we stick to the original Jurassic Park book and he dies just before getting on the escape chopper.)
4 notes · View notes
morlock-holmes · 2 years ago
Note
Art is many things but it is not when you put on your stupid asshole techbro glasses and program your computer to wipe your ass and steal from homeless people. That is the opposite of art actually. That's war
Did you send this to me via typewriter?
Art processes have already been heavily automated; from the backspace key to ZBrush, hand-crafted processes have steadily been replaced by automated tools.
The question is why nobody ever called me a parasite for using them.
The answer, it seems to me, is entirely economic; this is a kind of disruption which stands to drive down the price of certain kinds of labor in a way that Photoshop didn't.
So what I cannot for the life of me understand is why opponents of AI art insist that this is not an economic argument.
27 notes · View notes
eponymous-v · 2 years ago
Text
honestly i wanted to stay out of the ai art discourse because i don’t feel like dealing with the inevitable influx of brain dead takes defending it, but at the same time, just. holy shit dude.
even before we get into the 'what is art really though' discussions, as much as people claim it's 'just a tool', the reality is that these tools are being abused. ai art IS taking artists' jobs. dmca scams are a thing - and now they're more insidious since people began using ai generated images to make a claim seem more legitimate. ai generator companies charge fees to use their products, making millions in the process, and never pay a single penny to the people whose work they needed to have a functional generator in the first place. online art platforms and art program companies already seen as grifters are trying to cash in and sell their customers' work without consent. people are already scraping publicly available music and writing to come up with other creative ai. there are completely ai generated influencers. people are working to create ai generated porn, not only creating more avenues for csem, but also adding another difficult hurdle for public figures whose images are being used without their consent to create porn of them, and individuals trying to take down revenge porn that targeted them. hell, ais are scraping confidential medical records and using the photos to train themselves without removing the patient identifying information. if reading that stuff still hasn't changed your mind on why ai art is the latest techbro scam, yeah, fine, i guess the next part is directed to you personally.
the biggest, and most complicated, defense i've seen is 'ai art isn't any different from artists using reference!' obviously, artists of all kinds use reference. this is an oversimplification, but in the visual artist's case references are for two things: accuracy, and deliberate inaccuracy. put another way, it's learning the rules so you can break them.
using a reference is not cheating, or a short-cut: it is used to teach. while you can sit down and draw a horse off the cuff, reference will help immensely if you wanted to portray what a horse looks like accurately, regardless of your stylization. on the other hand, if you want to base a monstrous creature (or part of it) on a horse, you will make a much more convincing monster if you have a reference of the real thing in front of you (the uncanny valley is a thing and it’s a both a blessing and a curse.)
some artists will defend ai art with this, using the generators as the tool they were originally designed to be, saying that the ai is more akin to a 'pre-production assistant' than a replacement. however, the difference between an artist using an ai to concept vs an ai artist is that ai artists don't seem to understand that the work does not stop after the generator spits out an image. and the ai cannot replicate what comes next: organic creation.
what you see on a page is not the result of just mashing references together like legos until you get something nice. it's having a concept and then trying to execute it with intentional consideration. only, it never goes like that. art is not linear. it's extraordinarily intimate, tedious, and frustrating. it is countless hours of trying and trying and trying again until you finally have something that is 'good enough.' it is coming up against a roadblock and trying everything you can to get around it - something that's extra frustrating when it's a roadblock you've dealt with (or thought you dealt with) before. because of this, rarely does a piece of visual art stay exactly the same as it was originally conceived. the entire process is, as bob ross would say, a series of 'happy little accidents', some of which influence entire art movements. thinking about it another way, art is just failure after failure until something good comes out of your creation.
so where am i going with this? apart from the money angle, i argue the real, bone-deep reason why visual artists are pissed is that ai artists ignore the entire point of learning a skill - and the pitfalls that come along with it. what you see in the final product is literally thousands of hours of failure. failing upwards for sure, but failure is failure and it is fucking humiliating, even if what you make never sees the light of day. this is the biggest hurdle people have when first starting out - faced with such bleak prospects, art is intimidating as hell, sometimes making it seem out of reach. and the art community is angry because ai artists have put up the bowling lane bumpers so they won't have to experience what that kind of failure is like. sure, code can be altered and eventually perfected, but the ai artist themself learns nothing about the creation of visual art; only their ai does. to an ai artist, once the image is generated, nothing else needs to be done, there's no reason to improve. which is a shame, really, because failure is where creativity truly lives.
(before you ask, yes i do think coding is an art form. the sciences and the arts are more intertwined than people like to think. but programmers aren’t safe from this either: microsoft is facing a lawsuit because they scraped github and made an ai using the code it read without crediting the original programmers.)
y’all, i don’t know where to go from here. ai generators had the potential to be an amazing tool. but they’re being utilized at large by people who don’t want to put in the work - they’re just in it to make a quick buck off of other people’s backs before moving on to the next overhyped thing they can market.
so i guess this entire rant was just to say, my man, art was never inaccessible to you. you were just afraid of failure.
24 notes · View notes
tangibletechnomancy · 1 year ago
Text
We need to retire the "having a robot do it" argument when it comes to resisting the unethical use of AI.
Behind every AI-generated piece, be it text or visual, there is a human being. Sure enough, sometimes that human is a dickhead - sometimes it's some Silicon Valley techbro demoing an unfinished, underregulated product in the hopes of running to the bank before the buyers realize what's up, or sometimes it's a greedy opportunist trying to muscle in on a commission economy despite having less than zero respect for the community that built it. But sometimes - most of the time, in the hobbyist sphere - it's someone acting in perfectly good faith. Sometimes it's a severely disabled person who, at best, is capable of painting conventionally...in the same sense that you're capable of holding your hand in an open flame, trying to express themself in a way they've never been able to before. Sometimes it's someone with a severe learning disability or brain injury, who needs outside help to put their ideas into words other people will understand. Sometimes it's a non-native English speaker who needs the same assistance, because their alternative for their more complex ideas is Google Translate. Sometimes it's someone who's trying to express something through or about the process. Sometimes it's someone who just finds the process satisfying, or who wants to get a good quality image of their OC before commissioning other conventional artists to avoid some horrible mutually frustrating back-and-forth about details.
And in some of the most abusive cases, sometimes it's a criminally underpaid guy in a sweatshop-like cubicle farm who REALLY doesn't benefit from having their presence erased. "They gave my job to an AI", in this case, is literally "they gave my job to some dirty brown illegal" with one degree of abstraction.
So let's hang that up and focus on the real problem - the CEOs seeking crappy cheap automated EVERYTHING, and the lack of internet privacy laws.
6 notes · View notes
yardsards · 2 years ago
Note
Blaming coders for the ai bullshit feels a bit shit for me especially considering me (going into animation and is a hobbyist writer) and my bro (going into software engineering) are both worried about our future job prospects. Ai being good at generating code is not 'ironic' it's stress inducing for those going into the field.
All I'm saying is that we shouldn't blame the computer people but the business people.
(sorry for the aggressive tone, this topic is just personal for me)
yeah, i'm currently majoring in software engineering as well and it's definitely like. it's scary. no jobs, not even the ones touted as safe/stable/growing fields, are actually safe from this. because our capitalistic society sees *all* workers as disposable. it's never been a matter of certain workers being valued enough not to be disposed of, it's always been a matter of who the upper class don't have the means to replace yet
and the real kicker is that like. in an ideal world, having your job be done by machines would be a GOOD thing. it'd be like "oh cool, now i don't have to work as much and can focus on my passion projects or just chill out or connect with my community". but due to how late stage capitalism works, it's not like that, it's "oh shit, how am i supposed to afford food?"
i do understand where people in artistic fields' apprehension about the computer science field comes from. because there ARE shitty silicon valley techbros who look down on artists. (in addition to the ones outside either field, pitting stem and the humanities against each other in a way that favours stem, like shitty parents pitting siblings against each other and playing favourites. like, of *course* the scapegoated/forgotten child is gonna resent the golden child). but for every shitty techbro type i've met, i've met about a dozen more who are either A: genuinely in love with the act of studying and creating and using technology B: just there because they took everyone's advice that this was a profitable field, and are just trying to avoid starvation as long as they can, just like everyone else
2 notes · View notes