Lifelong disabled mixed media artist currently trying to hammer AI into a pro-human niche. Pro-tech, anti-Silicon Valley, anti-Hollywood execs; Please Hate The Abuse Of Tech Accurately, This Is A Recording
I remade the trailer to Robot Monster, to serve as a trailer for a hypothetical remake of... you guessed it, Robot Monster.
Destruction has come, hu-mans, and its silliness will not protect you.
My thoughts and how-to process blog post under the fold.
I made Robot Monster's trailer remake primarily with Vidu and Midjourney.
For most shots I started with a photoshopped Midjourney gen (or a stack of them), used either as a prompt image or as a starting frame.
Some shots, like the earthquake, were done with start-and-end frames.
Vidu had some quirks with my Roland Emmerich Christ-the-Redeemer shot. I attempted it several times with the image as a start frame, but it would reset to a new camera angle each time, rebelling against my inaccurate version: the AI could recognize the statue, but couldn't accept it in inaccurate surroundings. I eventually got a good enough shot by using the image as a prompt rather than a start frame.
I tended to go for 8-second shots on quality mode, to give me more to cut around and edit. Almost no shots play without some cropping, speed adjustments or other edits in this, and anyone using AI for a larger project is going to find much the same.
While 90% of the shots are from Vidu, a few were done with MiniMax's Hailuo: mainly things like a few low-motion Ro-Man talking shots, the computer-communication device, and the motion title card for "electrifying".
Vidu likes to move, a lot, and for stuff that needs subtle movement I sometimes find it helps to mix things up.
I've found that when image prompting for a character, like Ro-Alice, it sometimes helps to do a full-body and portrait two-for-one. This helps keep the character design consistent, and you can kinda tell which Ro-Man shots I made before I figured this trick out.
I also reused shots of the dinosaurs from my other AI video projects for meta reasons.
Right now it doesn't make videos so much as it makes shots you can weave into videos.
I'm actually impressed at how well it understood the concept of Ro-Man, only giving him a full ape face or a weird tail or the like a couple of times.
My general approach to the concept was "What if you kept the premise the same but had a budget." Whereas in reality you'd never actually get that combo, since if they had money, they wouldn't have made Robot Monster.
It also let me play with a fanon idea I've had for a while: that the Ro-Men were the helmets, and the ape-creature was some biological organism used as a conveyance.
For the audio, I took the audio to the trailer and used Suno's cover-features to both clean up the sound and change the musical style. The back half of the original track was completely warped by the cover process, but I used another bit of trailer-style music to cover that bit, and to extend for the longer ending shot, since my version of the trailer is about 20 seconds longer than the original.
Some prompts utilized:
in a sci-fi lab in a cave, a furry alien monster wearing a spherical helmet with reflective faceplate walks around aubrey plaza in a white sleeveless slip-dress and dark pantyhose in a glass tube, the tube pulses with green light. She is in a glass cylinder, he is walking around it, with curiosity. The scene is menacing, slow movement, pensive. horror movie scene, the tone is tense and frightening. professional lighting and cinematography. Oscar winning, 2003, practical lighting, effects, and costuming.
the robot spider-robots with spherical heads walk around as though searching for something. horror movie scene, the tone is tense and frightening. professional lighting and cinematography. Oscar winning, 2003, practical lighting, effects, and costuming.
the alien ape-creature wearing a space helmet (the robot monster), in a modern city. He throws green lightning from his hand, disintegrating a policeman into ash. Monster-movie sci-fi scene, dramatic camera angles and lighting. Practical costuming and special effects. High budget and high concept.
slow motion fly-through footage, the air is full of slow-moving glowing bubbles. green electric sparks arc from one bubble to the next producing an ominous mood. The scene conveys spreading menace and fear. One long, unbroken shot. filmed on location, effects by weta digital, ILM, stan winston studios, believable and hyper-realistic. Shot on location. trailer shot. high-speed film
All-in-all, a fun project, and one that came along when I really, really, really needed something to concentrate on for long stretches of time.
Make something fun, folks.
31 notes
https://www.fastcompany.com/91217425/ai-christmas-song-spanish-rockin-around-christmas-tree-soundlab-umg
I hate this.
But I hate it in a way that I'm overjoyed about.
Voice cloning is one of the biggest points of contention when it comes to AI use in media right now - and understandably so, in my opinion; there is a LOT to work out about it - but this case? This was done with the original artist's enthusiastic consent, and with good reason: she doesn't sound like that anymore. "Oh, man, how cool would it be to have done THIS when I still sounded like THAT?" seems like a pretty open-and-shut positive use case for voice cloning!
There's no issue of copyright here - regardless of where you stand on copyright law, whether it's good, bad, should stay as it is, needs reform, needs rewriting, needs to be abolished outright, whatever, this whole thing has no bearing on it. It neither reinforces nor weakens anything with regards to the law, it just successfully exists within the framework we have.
So what's my beef with this?
My beef with this is - look at the date.
It's Halloween. It wasn't even Halloween yet when this dropped. Why are we making a big-deal release out of a gimmicky new version of an already overplayed Christmas song!?
It's beautiful. It enrages me for such a mundane reason. It's something I get annoyed about every year! It has nothing to do with any tech controversies, it's just the same shit as always!
Finally, I'm annoyed at a particular usage of AI for reasons that could not have LESS to do with the fact that it's AI. Slowly but surely, the technology is finding its place.
10 notes
i am not putting this in the master and commander tag
i am not putting this in the master and commander tag
122 notes
Collective Amnesia (2024, Clip Studio Paint, ~15 minutes)
11 notes
The Artist At The Controls of The Plagiarism Machine
25 notes
Resonant Frequency generated with Dall-E 3 via Bing Image Creator, under the Code of Ethics of Are We Art Yet?
#ai art#wip#this one has been in the drafts for a while#awaiting the physical treatment#this is one that feels very incomplete to me until then#but eh. whatever. ill throw the wip out there now
7 notes
The artist attempts to depict himself removing his dog from proximity to a skunk, but the robot seems to be having a lot of trouble with the concept of "skunk"
6 notes
mandelbrot raven
generated by @philpax using the neural network generator Midjourney, in an iterative collaborative process with me, using previously generated images and previous prompts, with the above prompt as the final step
directed under the code of ethics of @are-we-art-yet
18 notes
It took two whole days but it's done. The concept is simple: AI generated art, sent to a company who does paint by number kits, and then completed by an amateur. It's an interesting thought experiment - at what point is it truly art? Is it ever art, because it's AI generated? Is it ever art, because it's paint by numbers? Is it always art, because it elicits a response from the viewer?
34K notes
17th Century French Arcade
The opulence of French arcades under the Sun King represents one of the greatest historical gaps between rich and poor gamers, and was one of several flashpoints leading to the revolution.
The image(s) above in this post have not been modified/iterated extensively. As such, they do not meet the minimum expression threshold, and are in the public domain. Prompt under the fold.
Prompt: photograph of a 17th century French arcade, rococo and wood panel, steampunk video arcade, many small details, filgree, professional lighting
41 notes
I wonder how much corporate AI hype AND social media criti-hype would die down if we cracked down on companies that just straight up lie about what their software is doing vs. what's actually done by random underpaid guys in cubicle farms in India/Africa/South America/wherever else someone can find people to exploit.
Like on the one hand we have corporate entities insisting that work is one and the same. On the other hand we have people who either believe that claim...OR who know that it's not and believe this means that there are random guys in cubicle farms hand-drawing these fully rendered images in 30 seconds or less, and think THAT belief is somehow more respectful to art as labor than acknowledging that the computer is a tool.
I believe companies, including both developers and end users, should be required to disclose which of their AI products/services-in-use have a manual override/control center and which don't - and disclose it clearly, in plain view, not buried somewhere deep in the terms of service that someone might just skim over if they read it at all.

On top of being a huge blow to false advertising, this would be great for helping people make informed decisions, because fully automated tools and tools with an integrated manual override suit different uses. For some things, particularly some assistive applications (e.g., object recognition apps for blind people), it's better to have a system that can go "I don't know what I'm looking at, let's call up a human to tell us." For things like personal use tools, though, it's really not great to have one's privacy violated by another person unknowingly interfering. And for things like utility chatbots - assuming we manage to get to a point where we can reliably give them enough context to hammer out enough of the hallucination issues that they become particularly useful at all - I would rather know for sure that the moment one is "confused", it will direct a customer to MY theoretical human customer support department rather than secretly try the provider company's call center first.

Even more, disclosure would make it easier to fight for better treatment of the workers in those control centers; their labor being hidden to the point where the public, by design, broadly doesn't realize they even exist is a HUGE factor in their exploitation being allowed.
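The "call up a human when it's unsure" pattern above can be sketched in a few lines. This is purely my own hypothetical illustration - the classify() stub and its confidence numbers are made up, standing in for whatever real model a company would actually run:

```python
# Hypothetical sketch of an "automated with manual override" pipeline:
# the model answers on its own when confident, and openly hands off
# to a human reviewer when it is not. All names and values are made up.

CONFIDENCE_THRESHOLD = 0.80

def classify(image_id: str) -> tuple[str, float]:
    """Stand-in for a real model; returns (label, confidence)."""
    fake_model = {
        "photo-001": ("dog", 0.97),
        "photo-002": ("skunk?", 0.41),  # the model is unsure here
    }
    return fake_model.get(image_id, ("unknown", 0.0))

def describe(image_id: str) -> str:
    label, confidence = classify(image_id)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"automated: {label}"
    # Low confidence: disclose the human handoff instead of hiding it.
    return f"escalated to human reviewer (model guessed '{label}')"

print(describe("photo-001"))  # automated: dog
print(describe("photo-002"))  # escalated to human reviewer (model guessed 'skunk?')
```

The point of the design is the disclosure itself: the caller can always tell whether an answer came from the machine or from a hidden human.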
7 notes
I've seen this a million times and I've wanted to say this for a while and I'm finally gonna say it in depth: Yes. Yes it is.
Neural nets are at a stage of development that's honestly remarkably like when lasers were first invented - they're a solution in search of a problem. Problem is, unlike lasers:
While the groundwork was done by FOSS enthusiasts and researchers researching for research's sake, a lot of the big loud recent strides have been made under the banners of private companies that want to turn a profit on them, as much and as quickly as possible, and
We're kiiind of on the brink of potentially seeing another tech bubble burst in the wake of 2020 lockdown profits turning out to NOT be a sign of infinite explosive growth, which came as a surprise to absolutely no one except a bunch of rich people who unfortunately were also the ones who had the capacity to fuck around under the assumption that the growth WOULD be permanent and infinite.
So what we have is a situation where people have poured a ton of money into these things - and now they want, nay, NEED, for it to be A Product that they can extract profit from ASAP. Problem is...how?
Making art (illustrations/writing/audio) cheaper? Turns out that if you just let it do its thing you get the most generic garbage imaginable even if it's not AS wonky as it used to be; in order to get something useful out of it you STILL need someone with creative and technical skill running the thing, who still needs to be paid, and it's not like companies were already paying the artists they thought they could replace with a magic button a huge chunk of their revenue anyway, so as it turns out, often enough: money saved < the cost of developing the thing and paying someone to run it. Oops.
Novelty toys? Hobbyist use? Weird experimental stuff? Currently the main thing they're good at! Also, broadly not profitable. Imagine if all lasers EVER turned out to be good for was pointing at things and playing with cats; their invention would likely be seen as a massive waste of money and effort, right? The profit just isn't there. In fact, it's rare for these uses to break even with their operating costs.
Enhancing search engines - or damned near any other "informative" use, for that matter? Worse than useless, it's actively dangerous-
...but at least it's visible, easy to integrate, and looks good on a quarterly report. At least you can sell it to big-money clients. Ding ding ding! We have a winner - at least, in the same sense that you might get by squinting too hard at the word "wiener".
So, there you have it! A thing where a lot of rich people banked heavily on it and now we're just kinda dealing with it even though it's way easier to find applications that no one likes than ones that are actually halfway decent.
18K notes
It boggles my mind when people refuse to believe that overfitting is a failure state because it just seems so plainly obvious to me that no one, not even the most fervent of NFT suckers, has ever said "hmmmm, yeah, I love my 512x768 jpeg of the Mona Lisa, but I just wish it could take up 7 gb of hard drive space, make my GPU sweat as heavily as a AAA game does every time I want to load it, and never come out exactly the same twice"
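The overfitting-as-failure point is easy to show numerically. Here's my own toy illustration (not anything from the original post) using NumPy: a polynomial with as many parameters as data points "memorizes" noisy samples exactly, noise and all, while a lower-degree fit is forced to smooth over the noise:

```python
import numpy as np

# Noisy samples of a smooth signal.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 6)
y = np.sin(x) + rng.normal(0.0, 0.05, x.shape)

# Degree 5 through 6 points: enough parameters to memorize every sample.
overfit = np.polyfit(x, y, deg=5)
# Degree 3: fewer parameters, so it has to smooth over the noise.
smoother = np.polyfit(x, y, deg=3)

memorized_err = np.max(np.abs(np.polyval(overfit, x) - y))
smoothed_err = np.max(np.abs(np.polyval(smoother, x) - y))

print(f"degree-5 training error: {memorized_err:.2e}")  # ~0: pure memorization
print(f"degree-3 training error: {smoothed_err:.2e}")   # nonzero: noise smoothed away
# The degree-5 fit reproduced the *noise* exactly -- the 7-gb-jpeg-of-the-
# Mona-Lisa failure state: capacity spent perfectly recreating one training set.
```

Reproducing the training data exactly, measurement noise included, is precisely what we don't want a generative model spending its capacity on.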
36 notes
OpenAI stop taking extremely cool applied math and implementing + marketing it for use in the most evil ways imaginable challenge (IMPOSSIBLE)
6 notes
My Pigs
6K notes
The reason I took interest in AI as an art medium is that I've always been interested in experimenting with novel and unconventional art media - I started incorporating power tools into a lot of my physical processes younger than most people were even allowed to breathe near them, and I took to digital art like a duck to water when it was the big, relatively new, controversial thing too, so really this just seems like the logical next step. More than that, it's exciting - it's not every day that we just invent an entirely new never-before-seen art medium! I have always been one to go fucking wild for that shit.
Which is, ironically, a huge part of why I almost reflexively recoil at how it's used in the corporate world: because the world of business, particularly the entertainment industry, has what often seems like less than zero interest in appreciating it as a novel medium.
And I often wonder how much less that would be the case - and, by extension, how much less vitriolic the discussion around it would be, and how many fewer well-meaning people would be falling for reactionary mythologies about where exactly the problems lie - if it hadn't reached the point of...at least an illusion of commercial viability, at exactly the moment it did.
See, the groundwork was laid in 2020, back during covid lockdowns, when we saw a massive spike in people relying on TV, games, books, movies, etc. to compensate for the lack of outdoor, physical, social entertainment. This was, seemingly, wonderful for the whole industry - but under late-stage capitalism, it was as much of a curse as it was a gift. When industries are run by people whose sole brain process is "line-go-up", tiny factors like "we're not going to be in lockdown forever" don't matter. CEOs got dollar signs in their eyes. Shareholders demanded not only perpetual growth, but perpetual growth at this rate or better. Even though everyone with an ounce of common sense was screaming "this is an aberration, this is not sustainable" - it didn't matter. The business bros refused to believe it. This was their new normal, they were determined to prove -
And they, predictably, failed to prove it.
So now the business bros are in a pickle. They're beholden to the shareholders to do everything within their power to maintain the infinite growth they promised, in a world with finite resources. In fact, by precedent, they're beholden to this by law. Fiduciary duty has been interpreted in court to mean that, given the choice between offering a better product and ensuring maximum returns for shareholders, the latter MUST be the higher priority; reinvesting too much in the business instead of trying to make the share value increase as much as possible, as fast as possible, can result in a lawsuit - one that a board member or CEO can lose, and some have lost before - because it's not acting in the best interest of shareholders. If that unsustainable explosive growth was promised forever, all the more so.
And now, 2-3-4 years on, that impossibility hangs like a sword of Damocles over the heads of these media company CEOs. The market is fully saturated; the number of new potential customers left to onboard is negligible. Some companies began trying to "solve" this "problem" by violating consumer privacy and charging per household member, which (also predictably) backfired because those of us who live in reality and not statsland were not exactly thrilled about the concept of being told we couldn't watch TV with our own families. Shareholders are getting antsy, because their (however predictably impossible) infinite lockdown-level profits...aren't coming, and someone's gotta make up for that, right? So they had already started enshittifying, making excuses for layoffs, for cutting employee pay, for duty creep, for increasing crunch, for lean-staffing, for tightening turnarounds-
And that was when we got the first iterations of AI image generation that were actually somewhat useful for things like rapid first drafts, moodboards, and conceptualizing.
Lo! A savior! It might as well have been the digital messiah to the business bros, and their eyes turned back into dollar signs. More than that, they were being promised that this...both was, and wasn't art at the same time. It was good enough for their final product, or if not it would be within a year or two, but it required no skill whatsoever to make! Soon, you could fire ALL your creatives and just have Susan from accounting write your scripts and make your concept art with all the effort that it takes to get lunch from a Star Trek replicator!
This is every bit as much bullshit as the promise of infinite lockdown-level growth, of course, but with shareholders clamoring for the money they were recklessly promised, executives are looking for anything, even the slightest glimmer of a new possibility, that just might work as a life raft from this sinking ship.
So where are we now? Well, we're exiting the "fucking around" phase and entering "finding out". According to anecdotes I've read, companies are, allegedly, already hiring prompt engineers (or "prompters" - can't give them a job title that implies there's skill or thought involved, now can we, that just might imply they deserve enough money to survive!)...and most of them not only lack the skill to manually post-process their works, but don't even know how (or perhaps aren't given access) to fully use the software they specialize in, being blissfully unaware of (or perhaps not able/allowed to use) features such as inpainting or img2img. It has been observed many times that LLMs are being used to flood once-reputable information outlets with hallucinated garbage. I can verify - as can nearly everyone who was online in the aftermath of the Glasgow Willy Wonka Dashcon Experience - that the results are often outright comically bad.
To anyone who was paying attention to anything other than please-line-go-up-faster-please-line-go-please (or buying so heavily into reactionary mythologies about why AI can be dangerous in industry that they bought the tech companies' false promises too and just thought it was a bad thing), this was entirely predictable. Unfortunately for everyone in the blast radius, common sense has never been an executive's strong suit when so much money is on the line.
Much like CGI before it, what we have here is a whole new medium that is seldom being treated as a new medium with its own unique strengths, but more often being used as a replacement for more expensive labor, no matter how bad the result may be - nor, for that matter, how unjust it may be that the labor is so much cheaper.
And it's all because of timing. It's all because it came about in the perfect moment to look like a life raft in a moment of late-stage capitalist panic. Any port in a storm, after all - even if that port is a non-Euclidean labyrinth of soggy, rotten botshit garbage.
Any port in a storm, right? ...right?
All images generated using Simple Stable, under the Code of Ethics of Are We Art Yet?
#ai art#generated art#generated artwork#essays#about ai#worth a whole 'nother essay is how the tech side exists in a state that is both thriving and floundering at the same time#because the money theyre operating with is in schrodinger's box#at the same time it exists and it doesnt#theyre highly valued but usually operating at a loss#that is another MASSIVE can of worms and deserves its own deep dive
443 notes
The Artist Must Wash His Makeup Brushes
24 notes