#generative neural net
Note
frank, my wife left me, she took the kids and i don't know what to do... can you help me out in any way, like how i can make amends or get over it? much appreciated, thanks, buddy
One of the many joys of having a large following is the fact that sometimes some of your followers send you extremely sad anons that you are able to help by harnessing the power of a known tumblr bot.
Tragically, you cannot "make amends" with your wife, as marriage in the United States is a purely legal arrangement, and you can only re-marry if she has died or divorced you by completing certain required legal processes.
7 notes
·
View notes
Note
Sup love?
ThedubberfromMars did a VO of your cute band shirt comic.
O wow it has big numbers. It's wild to me that ppl are so into consuming tiktokified dubbed slideshow comics, because watching content like that tends to activate my fight or flight reflex
I hope there's like, a warning that if anyone googles "beespaceprogram" they're gonna find porn...
#bee talks#it's like neural net generated voices right? I wonder if that's part of why it sets me off so much
17 notes
·
View notes
Text
#ai generated#net art#aesthetic#glitchcore#ai art#neural fusion#portraits#grim#dark#psychadelic#painting#alt aesthetic#alternative#alt girl
3 notes
·
View notes
Text
character analysis of rick deckard at about 73% thru the novel: hes a sad little man who is about to cheat on his wife
also android rights NOW im not joking synthetic life, once it does truly exist, should be given the same rights as organic life. we obviously are very far from the androids in the book or commander data from star trek or even star wars droids but like. one of these days someone will make a robot that is qualitatively alive and able to think for itself and then some jerkass ceo will be like woohoo time for slave labour part 3!
#can you tell i care about potential REAL artificial intelligence#sorry chatgpt youre just a predictive text bot#sorry ibm watson youre also a predictive text bot#one day we will have true artificial intelligence and god i hope they will have rights#like. right now we dont have real ai#we have deep learning and “neural nets” and computers chained together but theyre still all predictive generation models. they dont think#for themselves#i think thats one of the problems with the current “AI” wave#its not real ai#like yes its artificial its... “intelligent”#ugh#ive got work to do. ask me about robots.
2 notes
·
View notes
Text
hey its too early for "because GPT includes scraped data from AO3, using it to generate fanfiction is plagiarism (and moreover is disrespectful to the sanctity of the fanfic artform) (and you are harming people if you trick them into reading machine generated text)"
can I have like one more hour of existing before we tear into this shit
#'ai art' as a broad concept is something I straddle a line on in terms of like. the crowds I'm in and what they think#but if you're bending over backwards to push some of the most inane & misguided criticism of neural net image generation#onto fucking GPT and stomp around like machine text is some new scary threat to the sacred process of making dolls kiss on AO3#this post is flippant bc I'm flippant you've cultivated outrage and are sowing it. grats.
4 notes
·
View notes
Text
Street in Cuba.
#neural networks#artificial intelligence#net art#ai art#ai artwork#digital illustration#digital arwork#fine art#ai image#ai generated#watercolor#cuba#street art
1 note
·
View note
Text
"there is no such thing as unskilled labor" doesn't begin and end at people in "menial" jobs doing nifty little tricks for you to gawk at on tiktok fyi
#crapitalism#this is an extremely general statement btw#but because it's the number 1 place i see people getting hypocritical on this i must point out this includes neural net operation#the problem with neural nets/''ai'' UNDER CAPITALISM isn't ~unskilled labor displacing skilled workers~#it's moving more work into the realm the ceos can CONVINCE THE AVERAGE PERSON is ''unskilled''#so they can outsource it to people in the global south they can pay $1.50 an hour or less#because hey according to popular sentiment and the way the software is sold it's ~just pushing buttons~ why should we pay you any more#to spend 12 hours of your day in an overcrowded overheated basement#wrangling algorithms into creating a cohesive end result#in what is very likely your second or third language; if you want to be paid more get a REAL job right??#in the process making everything more difficult for people elsewhere on the chain - both because there are fewer of them#and because there's less ability to communicate between steps of the process#but ehhhh who cares about THAT dealing with those complications is what we pay people the BIG bucks of 75% livable wages for!#they can deal! what's more important is making line go up!#please hate the problem accurately#because a lot have correctly identified a problem but are falling quickly into ''WARGHL DIRTY UNSKILLED BROWN PEOPLE TOOK MY JOB'' about it#please care about the people they want to outsource that work to they're ALREADY exploited badly enough#that said - again - this is an EXTREMELY general statement#FAR from exclusive to neural nets#i see every time you go off about how fast food work is difficult and skilled because hot oil#but shit on window cleaners#i see every time you say sanitation is skilled UNTIL it can make you a low-end-of-6-figures salary#then they're just bougie oppressors#or UNTIL it needs to be done in a post-forced-poverty world#then it can just be a rotated community chore#don't think you're off the hook just because neural nets are topical
0 notes
Text
Got hit by a Mecha AU Swerve angst idea in the middle of the night, and I had to put it down on a page. Based on the @keferon Mecha AU and inspired by all the amazing Swerve/Blurr art I see around (seriously, yall are giving me so many ideas and I love it).
More often than not, nowadays, Swerve feels like an imposter in his own frame. His time spent as a human was so short, just an insignificant speck compared to the eons of his real life, his real lifespan, and yet...
Those few scant human years are the realest he can remember feeling.
The medics said it took fifteen cycles for anyone to knock on his door, to even notice his absence. And when someone eventually did, it was just- his boss. One of the engines was giving them trouble, and they needed all servos on deck. That's all.
None of the bots he talked to every day, the ones he’d worked side by side with for years, noticed he was gone. None of the people who would laugh at his jokes and drink with him at the bar had a single thought to spare for him. Nobody missed him, until they needed him for something.
Glum thoughts in the dead of night are one thing. It’s another thing entirely to know, without a shadow of a doubt, that it’s all true.
So of course Swerve figured out the holoform thing again. Sure, it’s still kind of risky, but now that he’s actually doing it on purpose, he’s been taking a few precautions – a good recharge, a full fuel tank, and an automated message to be sent off to the medics after a set period of time, in case he knocks himself out again. Actually, he nearly managed just that, the first time he tried it, overtaxing himself almost to the point of shutdown. The keyword being nearly, though! It did little to weaken his resolve, and after a few more tries, he now has a whole system figured out, one that won’t damage his processor.
Or, it probably won’t, anyway. He’s not about to go ask; someone higher up might order him to stop, which-
Yeah, he’s not doing that.
On this ship, Swerve’s got nothing. He might as well be nothing - he’s a trained metallurgist working as a common mechanic, amongst people who barely even know he exists. On Earth, he’s- well. It’s not like he was exactly a social butterfly, but people invited him for shitty cafeteria coffee, a few pilots liked to stop by for a chat sometimes, and if he fell asleep at his desk, someone would come shake him awake within an hour or two.
On Earth, he has Blurr. And that’s not something he’s willing to give up.
Swerve shutters his optics in his tiny room on the ship, and surrenders gladly to the pulling sensation overtaking his processor as his holomatter generator struggles to cross such a vast distance. Then, with a crackle and a fizz of static across his neural net, he’s gone.
When he opens his eyes, it’s to the sight of Blurr’s expansive private hospital suite, with the man nowhere to be seen. He’s been hoping for that, though- as a general rule, he tries to catch the pilot between press conferences and physical therapy sessions, so nobody starts asking questions about the dead man loitering around a celebrity’s rooms. Blurr has enough problems as it is.
Luckily, he doesn’t have to wait for long. Soon enough, Swerve hears several pairs of footsteps approaching the door, and he ducks into the bedroom, keeping out of sight. “Again, thank you so much for the well-wishes,” carries through the walls, barely loud enough to be audible – Blurr’s voice, he thinks. The ‘business’ voice. “But I really have to go now. The doctor will be visiting soon, you understand.”
There are polite sounds of assent, an exchange of a few more pleasantries before the steps retreat back down the hallway, followed by the quiet whoosh of the front door opening. Cautiously, Swerve peeks out of the bedroom.
Blurr stands in the doorway, back straight, with a bright, practiced smile on the visible half of his face. The other, the one with scars and still healing skin grafts, is covered by an elaborate mask, shaped to look like his mech’s helm. He gives the people outside one final wave, and clicks the door shut.
Then he turns around, notices Swerve and slumps.
Now wobbling slightly, the injured pilot leans his back against a wall, gingerly peeling the mask off of his face to reveal reddened, irritated skin. The smile he turns on Swerve is completely different from before, small and tired and slightly pained.
To anyone else, it would look like an insult. To Swerve, it’s a precious thing, a gift the star shares with very few people in his life - honesty.
“Swerve, hello!” Blurr greets him, sounding slightly out of breath. He’s getting the best care money can buy, but even that only goes so far- recovery will be slow and painful, and not everything will go back to how it was. There are some scars the pilot will carry for the rest of his life, and just the thought makes Swerve’s holographic heart ache.
“Hi,” he answers enthusiastically, crossing the room to go help the injured man, only to get waved off.
“Thanks, but I’m good. I need to build up my stamina again.”
Swerve frowns a little, but steps away again. “Alright, if you’re sure. Just be careful! You can lean on me if you need to, yeah? I don’t want you to hurt yourself, so if-“
“Swerve!” Blurr laughs, interrupting his awkward rambling, and he can feel his holoform’s cheeks going red. “It’s fine, really. I’ll ask you if I need help, alright?”
“Alright,” he mutters into the collar of his shirt and follows after the man, ready to support him if he stumbles. Blurr leads them to his bedroom, laying down on the mattress with a pained grimace, once again waving off any of Swerve’s offers to help. Instead, the man pats one side of the bed in clear invitation, and Swerve does his best to pretend his face isn’t looking like an overripe tomato as he sits, their hands almost touching. Judging by Blurr’s teasing little grin, he fails miserably, but- it made Blurr smile. He’d say that more than makes up for it.
They talk, for as long as Swerve’s holoform generator allows and perhaps a little bit beyond that. He asks after Blurr’s recovery, listens to the pilot bemoan the weakness of his atrophied muscles and endless physical therapy sessions. Learns more about the constant press releases, the pressure from command to return back to duty and perform his star pilot act once again. They talk about anything and everything the man wants to share, from the important to the mundane.
In turn, Blurr asks him about his life, his day, his work on the ship. Which, here’s the thing- he didn’t really notice it much before his coma, but nobody else actually asks about him. Swerve talks a lot, and sometimes, other bots will even listen, but they never ask.
Except for Blurr. Blurr always asks now, and Swerve always talks and talks and talks, and the pilot never seems to mind. Sometimes, he wishes he knew how to express it, to show the man just how much it means to him, but- in a rare twist of events, the words never manage to leave his mouth.
Doesn’t make it any less true, though.
Every small, honest smile, every real, slightly ugly laugh he gets out of the man makes Swerve’s holographic heart beat overtime. He feels so happy, so at peace when by the man’s side, and he never wants to leave.
But he has to. Eventually, it’s always time to go, his systems warning him of impending shutdown and he hates it, he hates it so much, but he says his goodbyes. Blurr’s understanding about it, of course, and the pilot’s cheeky little wave is the last thing Swerve sees before he closes his eyes and disappears.
When he unshutters his optics, it’s to the sight of his empty, windowless habsuite. Getting up from his berth, he feels a fleeting stab of vertigo – some echo of his human self’s instinct, warning him of a dangerous height, which, huh. That’s been happening more and more often. Something to ask the medics about, perhaps.
Then again, why bother. It’s not like he doesn’t know what the answer would be.
He misses Blurr already. Misses the warmth of Earth’s sun and the warmth of companionship, the warmth of a soft human touch. Misses his false life and false body, and the very real joy it brings him.
Sometimes, he wishes he never woke up, instead living out his fake human existence in blissful ignorance until his spark eventually guttered from the strain. Occasionally, he wishes he was human. Actually human, not just the holoform- muscle and bone and sinew, just like the rest of them, just like Blurr. It’s clear he doesn’t belong amongst his own kind, so… maybe it’d be better that way.
Most of the time though, he just wants to be on Earth; true frame, fake body, it doesn’t matter. He wants to hold Blurr in his servos, wants to feel like he matters to somebody, wants to-
He’s not really sure what he wants, exactly. He just knows it’s not this.
320 notes
·
View notes
Note
you said you think gay sex cats is the new duchamp's fountain. i dont disagree and i kinda see what you mean already but please elaborate
it was a silly and tongue in cheek way to say that a lot of people are getting mad about it in a way that implies reactionary views on art, and that there's no way to say gay sex cats isn't art that wouldn't also imply that the fountain isn't art. a funny meme image is a funny meme image, but it is also funny to overthink and recontextualize them as art.
and the reaction makes the comparison even more apt. neural net generated artworks are anonymized, mass produced images, the vast majority having no more artistic pretension or meaningful content than a thomas kinkade painting. gay sex cats was made with no intent to be art, but the discourse it has with audience reaction and its appropriation in derivative works make it so. why is gay sex cats not art if people talking about it negatively allow it to be called art? is art only things you find beautiful and valuable? if so, what is value and beauty, and how do you draw the line? if gay sex cats was still ai generated but had more "aesthetic qualities" would it be art? if someone copies the original image by hand with all its ai generated faults, where is the value generated? does the original still have no merit of its own, even after appropriation as a digital ready-made?
but the main reason as to why gay sex cats is comparable to the fountain still is because it made a lot of people with bad takes on art really really mad. and that the pissed off tags wouldn't look out of place as reaction to modern art in the 1920s. art is a flat circle
EDIT: well. putting an addendum because in retrospect more people took either or both the op and image at face value, and much more self-seriously, than ever intended. a lot of people understood the tone i was getting at, and i still stand by the questionings i added on, but still, for clarification: the original comparison is not serious. it's self evidently ridiculous to compare a meme image to a historically significant artwork. the comparison was only drawn because they were both controversial to an audience, who reacted by denying their status as, respectively, an image and art, and it was funny that the negative reaction to the original image explicitly denied its status as art even though the meme never had any pretension to be art, so it was funny to draw the comparison and iterate on it.
i did think it was valid to bring in questionings about art and meaning because that's the reaction i saw most and wanted to make people think about the whys, and that also i do not think it's valid to base your dislike on ai art on either grounds of questioning its position and value as artwork, or even as a question of ip theft. regular degular handmade art can be soulless, repetitive, thoughtless, derivative, unethical, open and blatant theft, and much more, and that does not make it any less of an artwork. neural nets are tools that generate images by statistic correlation through human input.
the unambiguous issue with neural nets in art is their use as a tool by capital, to threaten already underpaid and overworked working artists and to keep their labor hostage under threat of total automation. in hindsight i regretted not adding the paragraph above, as it was a way in which people could either misinterpret or assume things about me, but hindsight is hindsight and there's no way to predict how posts will blow up. so, shrugs. i had written more posts on my blog that elaborated on that because asks would not stop coming. and i think my takeaway is that people will reblog anything with a funny image without reading the words around it, or even closely looking at the image.
1K notes
·
View notes
Text
so is every advancement in AI art just them putting more shit in the prompts for you? like is there a mandatory layer of "intricate, artstation, digital art" on the prompts now?
they "solved" the problem of training set bias by just adding race modifiers to your prompts without telling you, generate enough pictures of Naruto and one of them will be black. It's hilarious, but like, can they even do anything else?? They can't modify the neural net, it's unknowable by nature. Bing's new image generation thing craps out on you if you don't give it enough to work with, when previous gens would just roll with it and stimulate the prompt with wind like it's a resonant tube of weighted averages. Could it be that giving it "the" or something else generic would make the modifiers too easy to identify? They couldn't stop black homer simpson from having an "ethnically ambiguous" nametag. I feel like im seeing behind disneyland right now.
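for what it's worth, the hidden rewriting layer being described could be as thin as a string-munging function bolted on in front of the model. a made-up sketch, not anyone's actual pipeline (every name and modifier list here is invented for illustration):

```python
import random

# Hypothetical sketch of silent server-side prompt rewriting, as described above.
# Nothing here reflects any real provider's code; it's just the shape of the idea.

STYLE_SUFFIX = "intricate, artstation, digital art"        # the "mandatory layer"
DIVERSITY_MODIFIERS = ["", "Black", "South Asian", "ethnically ambiguous"]

def rewrite_prompt(user_prompt: str) -> str:
    """Append house-style tags and randomly inject a demographic modifier."""
    modifier = random.choice(DIVERSITY_MODIFIERS)
    prompt = f"{modifier} {user_prompt}" if modifier else user_prompt
    return f"{prompt}, {STYLE_SUFFIX}"

print(rewrite_prompt("Naruto eating ramen"))
# e.g. "ethnically ambiguous Naruto eating ramen, intricate, artstation, digital art"
```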
811 notes
·
View notes
Text
data experiencing the thrill of being Diagnosed™
[Video Description: A thirty second clip from Star Trek: The Next Generation episode "Phantasms." Troi and Data sit in her office. She says, "And it makes sense that as your neural net becomes more complex, more human, that you might experience the same psychological complexities as a human." Data shows some excitement as he replies, "Do you really think that is possible?" Troi smiles. "Data, you must be the first person who has come into my office and been excited at the prospect of a new neurosis. But yes, I do think it's possible, and I'd like to start counseling you on a regular basis." Data looks keen. "Daily?" Troi tries to temper his eagerness, "No, we'll start weekly-" /End Description.]
#the next generation#data soong#deanna troi#star trek#star trek: tng#star trek clips#video description#described#im still nailing down giving necessary info while being concise with VDs! tips feedback and corrections are all welcome#autistic data: a tag#phantasms#tng: 7x06
104 notes
·
View notes
Note
hey! a few months ago you (? i hope it was you) mentioned an album by an artist/band that had created an album with ai as like a tool. i remember being interested in it but a little busy and forgot to save the name… any chance you can point me towards that album? thank you so much, it’s ok if not tho ❤️
raw data feel by everything everything! they used a neural net (not a large language model, something much more narrowly trained) to write some lyrics and to title the song "software greatman", as well as (iirc) to algorithmically generate some chord progressions. it's a really great album, you should definitely check it out
97 notes
·
View notes
Text
Mech Pregnancy, Cybertronian biology and the gestation system, and what I like to call the Gestational Protocols
(A sparkling has two parts: The spark and the birth metal).
I have written about mech pregnancy before and that actually went really well! It got over a hundred notes, my most popular post ever.
So I thought, why stop? I love reproductive science. I love science fiction. I want to develop this more.
I spent more than five hours drawing and labeling and I am not fully pleased with it, but I am just pleased enough and tired enough to show you all what I am thinking.
If mech pregnancy, breeding, world building and/or messy hand drawings bring you joy, check below the cut!
(When I say I drew these by hand, I mean I drew them by hand).
(Note: When I mention a CPU, I am referring to a Central Processing Unit, otherwise known as the brain module.)
The codpiece: A goddamn problem. They can transform into transportation, though, so moving a codpiece out of the way surely has to be doable for them.
The valve: It has very large and noticeable exterior energy nodes, and the reason for this is to indicate charge. We see the portus majora, or the larger port, from the outside. If we spread these folds, we'll see the portus minora, or the smaller port. The portus minora is where the interior node system begins. Within the portus minora is the valve entrance, which gives way to the valve sleeve.
The spike: It can be modified or replaced, but the design has to be such that it can collapse in on itself and fit inside of the housing. Whatever your personal preference, the plug (the head of the spike) should expand outward in some way for reasons I will explain shortly. The plug is densely populated with small interior nodes while the cord or cable (the shaft of the spike) is sparsely populated with large exterior nodes. This makes the plug more sensitive. When the cable drains of its gel (which is recycled back into the system via a pressurizer fluid reservoir), these exterior nodes sink into depressions within the interlocking segments so that they don't snag on the housing rim when depressurized.
Note: In the diagrams, I call the nodes "energy nodes". There is a reason for that, but it's not necessarily necessary to the system.
Let me explain: I wrote a story where the nodes captured energy from the friction of the spike's external nodes striking against the valve's internal nodes and then that energy was sent to the spark chamber as a backup source of power during spark merging as spark merging dispersed energy and thereby diminished the sparks.
They don't have to be energy nodes, though. Those fun little goodie spots that create so much pleasure don't have to have a dual purpose. They can just be sensory nodes connected to the sensory net, a subsystem of the neural net.
When it comes to spark merging, I use stellar collision to visualize it. Here is a Youtube video that shows the collapse of a binary stellar system that pretty much sums up what I think happens, but on a much smaller scale: https://www.youtube.com/watch?v=zsIMDKMKUWw
The result of the spark merging, however, is that a third body is generated from the collision. This third body is created from the intense heat and energy of the spark merging, and from the fragmented copies of life codes duplicated during the spark merge. This is the sparkling. When its creators' sparks retreat to their own chambers, the sparkling will attach to the creator that is receiving transfluid (I will explain shortly).
A form of gestation control includes putting a shunt on the spark chamber to disperse the foreign energy body.
2. The birthing conduit is what it says it is. Once the sparkling has created its own life code, it will descend down the conduit and join with its birth metal, or sentio metallico, in the gestation tank.
3. The gestation tank is where the birth metal is produced from the metal alloy particles carried in transfluid and the energon provided by the carrier. You can also think of it as a crucible furnace, which is used for melting metals in small quantities within a foundry. The crucible is the innermost cavity where the birth metal is made. That crucible is lined with a layer of refractory material, which withstands high heat. That refractory material is going to keep that crucible hot enough to maintain the birth metal as a liquid without melting the protoform layer between the refractory material and the outer shell of the tank.
So the layers from outermost to innermost are:
Outer shell -> protoform layer -> refractory material -> crucible
Also, I move to call the carrier creator a foundry now because I love that word so much. The Google definition for a foundry is a workshop or factory for casting metal. It just sounds so good.
"Hey, First Aid, is Ratchet your foundry?"
"No, but I get that a lot."
I can't think of an equally cool word to replace the term "sire".
4. The valve sleeve is a semi-permeable layer of elastic protoform that can stretch to a certain degree. The interior nodes are within this protoform layer and create a bumpy texture. As already discussed, the sleeve is self-lubricating. I am starting to realize that I labeled this diagram horribly, but please bear with me.
5. Calipers! They're in all the sticky sexual interface stories. I just imagine them as these segmented, arm-like extensions that squeeze and relax depending on stimulation. In fanfiction, they have a habit of "cycling down" whenever stimulated. What I love about calipers is that they do set a minimum and maximum range of flexibility for the sleeve. With calipers, there is such a thing as being too small (the calipers can only tighten so much) or too big (the calipers can only loosen so much). They are analogous to the pelvic floor muscles that make a human vagina contract and relax, but they just make me think of pussy bones. You have to be careful not to break them.
6. THIS IS MY FAVORITE PART. Here is where the valve sleeve meets the gestation tank. There are two orifices: The tank cap and the lockring. The tank cap is where your mech is going to put some kind of seal as a form of gestation control. If a spike can't get into the gestation tank, then there is no birth metal. If there is no birth metal, a signal will be sent to the mech's CPU and then to the spark chamber to disperse the potential sparkling. How the tank cap is removed depends on how you want it removed. If you want a screw-in cap, then that cap will have to be removed via an invasive procedure (otherwise known as we're going to have to stick this instrument up your valve and twist the cap open and then we have to pull out the cap). If you want almost any other kind of seal or door, you can hypothetically just send a signal from the CPU to the neural net attached to the gestation system and have that seal slide out of the way into a depression within the rim of the gestation tank.
BUT THAT LOCKRING, THOUGH. This is why your spike needs to have a plug that expands to some degree.
Once that cap is out of the way, the mech's spike is going to pop through that lockring, sticking their plug directly in their partner's gestation tank. I like to call this "plugging the tank". Once that plug is in that tank, a signal is going to hit the CPU to start up GESTATIONAL PROTOCOLS. More on that at the end.
That lockring is going to cycle down just behind the plug, tight enough that the spike can't pull out, but without being too tight.
The purpose of this is to ensure that the gestation tank is filled up with transfluid. The lockring will only cycle open once the tank is full or once sensors within the tank indicate that the flow of transfluid has stopped for a certain amount of time (meaning that there is no more transfluid to be had, even if the tank isn't full yet).
It's a reverse knot! Instead of having a spike that knots, we get a valve that locks! I love it so.
7. The energy - or sensory - nodes are part of a positive feedback loop, meaning that "the product of a reaction leads to an increase in that reaction" (https://www.albert.io/blog/positive-negative-feedback-loops-biology/). In this case, pleasure created from stimulating those nodes (such as friction) encourages more stimulation, which creates more pleasure, which encourages more stimulation, until the loop breaks. What breaks this loop is overloading the sensory net or removing the friction.
When we state that the valve is self-lubricating, you can decide for yourself how it does that. The trick is making sure that the mech can replace their own lubricant when necessary. One system is to have lubricant be a by-product of energon circulation.
Humans self-lubricate their vaginas in several different ways and one of them is that the vagina is somewhat permeable. Plasma (the liquid part of blood) is able to discharge from the bloodstream through the walls of the vagina.
Or perhaps your lubricant comes from the same reservoir as the transfluid for your spike. Since the valve sleeve is only somewhat permeable, the metal alloy particles in your transfluid can't get through. What does leak through is the fluid medium that the metal alloy particles reside in.
The plug is itself not an interlocking segment because the plug, as explained, has to expand so that the lockring can tighten between the plug and the topmost interlocking segment. If the plug is smaller than the interlocking segment behind it, then the lockring will either not tighten enough or will tighten too much. Instead, the plug has an outer protoform layer that is expanded with the same pressurizer fluid that fills the spike. In the diagram above, we see the spike, the spike housing that the spike has to depressurize to fit inside of, and at least three different connections at the bottom. One of these connections bundles the wires for the sensory net and attaches to the neural net.
The bottommost connection is to the pressurizer fluid reservoir. When the spike is pressurized, the reservoir compresses and fills the matrix within the spike to give it its form and rigidity. When the spike depressurizes, the reservoir decompresses as it fills with fluid.
The connection that has a dashed line going all the way up the spike connects the transfluid reservoir to the transfluid line (signified by the dashed line) and out the plug. The transfluid reservoir is actually pressed against the outside of the valve!
So it is possible to bang a mech's valve so good that they leak transfluid all over themselves because you are more or less hitting their reservoir with every thrust. You just have to get the angle right or else you're hitting the sleeve calipers and that might not be as fun.
The Gestational Protocols:
This has turned into a very, very long post. I have been working on it for nine hours now between drawing the diagrams, writing the post, and checking with Google to make sure my science isn't horrifically, unforgivably wrong (I could be using the positive feedback loop wrong, but I don't think I am).
So let me wrap this up with the Gestational Protocols. It's like a mech heat fic, actually, except the heat is very short and starts toward the end of sticky sexual interfacing.
For this scenario, Ratchet and Drift want to produce a sparkling. Because Drift is concerned about Ratchet's health, they decide that Drift should be the foundry. Drift has his tank cap removed beforehand.
They're having a great time, creating all the good friction, lighting up their sensory net like a growing fire. Drift is charged up, Ratchet is charged up, and they're about to hit that overload.
Drift's lockring is cycled all the way open. His calipers are trying to pull Ratchet closer. When Ratchet knows he can't hold on any longer, he pushes as deep as he can go. There's a small moment of resistance when his plug meets the lockring and then he pops through. The lockring cycles down and he's stuck. There's no pulling out now.
Ratchet told Drift what to expect from the gestational protocols, but it wasn't enough. The moment Ratchet is locked in place, a signal is sent from Drift's gestation system to his CPU: Gestational protocols initiated...
His cache memory crashes. He has no past or present or future. He has no idea there was a war lasting millions of years. He doesn't even know what a Cybertron is. Programs are halted, tasks are paused, processing units block external input. Hydraulics fall to the lowest power possible. His frame goes completely limp.
Drift no longer exists. He is now a foundry. He is the function of his gestation system. His CPU has a primary and secondary task: Primary is to maintain the protocols and secondary is to reward Drift for maintaining the protocols.
As long as he lays there and lets Ratchet fill him up, he's fulfilling his primary task. Because it's so easy to let Ratchet fill him up, his neural net rewards him with pleasure and feel-good signals. He is riding a type of euphoria that is thoughtless bliss from the tips of his pedes to the tops of his finials.
A task pops up in his CPU, but he doesn't have the processing power to interpret it. He accepts without caring. He experiences his chest plates cracking open without actually seeing it or hearing it. His system rewards him for accepting the prompt, so he still doesn't care. His spark chamber opens next and he is thrown into the intense, beautiful pleasure-agony of having his spark collide with another mech's spark.
He doesn't remember who this other mech is, but Drift loves them. They're filling Drift up so well, both his tank and his spark. He's so full. He's being such a good foundry. He's receiving all those good neural and sensory signals and he's fuzzy/fizzy with joy.
The spark merge ends after several collisions and spirals. Drift loves every moment of it, and also loves it when his spark returns to its chamber. Now his spark feels swollen and his CPU registers a foreign body. There is a potential sparkling attached to his core. Chances are very good that this potential sparkling will not disperse.
His CPU rewards him with another rush of emotional glee and pride. He's sparked! He did so well, laying there and letting himself get sparked. He's a great foundry. He's the best foundry to ever get sparked. No one has ever been or will ever be as well-behaved as he was.
A notification hits his CPU and he doesn't even try to understand it. Apparently, it's the notification for his tank being full. A second notification and his lockring relaxes. He is deliciously, fully aware of a thick spike dragging across his oversensitive interior nodes, sending one last wave of hot, crackling pleasure through his frame.
Another notification. He doesn't read it. A task pops up. He accepts lazily.
The notification was that the gestational protocols had been completed. The task was to enter a soft reboot. Drift slips into recharge feeling like his only purpose in life is to embody pleasure and creation.
He wakes up feeling swollen and sloshy.
Ratchet is smiling down at him.
"Am I...? Are we...?"
Ratchet strokes a servo across his chest plates. "It's early still. The spark might disperse. But chances are looking good. We're sparked, kid."
And that is how I imagine the Gestational Protocols going: You get your tank plugged and then nothing matters but getting filled up with a sparkling.
Thank you for reading my book-length discussion! Please feel free to interact with me.
I have been working on this for ten hours now. I should proofread, but I am not going to at this time.
EDIT: I was in the shower when I realized I forgot something important - where does the protoform's first colony of nanites come from?
@earthstellar explains here (https://www.tumblr.com/earthstellar/659541951144738816/transformers-medical-analysis-essay-what-are?source=share) what Cybertronians use nanites for, including construction and self-repair. So we can readily assume that the protoform needs a nanites colony.
I'll tell you where the new spark's nanites came from: Their foundry's valve.
Humans do the same thing. We pick up friendly bacteria from the vagina we come out of.
That is all I had to add. Remember to start your protoform off right with a healthy nanite colony.
#mech preg#Mech pregnancy#cybertronian biology#transformers#macaddam#tf comics#world building#valveplug
134 notes
·
View notes
Text
actually one more mabel thought before i sleep. a really fucking wonderful potential use for generative AI that would actually be cool (instead of like. infinite hentai slop generator or The Machine That Lies About Everything) would be a 3d retopology tool
like. nobody on earth who does 3d stuff enjoys retopology. it's literally the least interesting part of making a character since you've already done the Art part of it but now you have to trudge through a bunch of extremely tedious extra bullshit just so you can then Also do the slightly less extremely tedious step of rigging. i don't think anyone would miss retopo if there was some neural net tool integrated with blender that you could feed a high poly mesh + some guide geometry ("loops should be oriented like this here and here" etc) and have it fill in the gaps
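to make the wish concrete, the interface might look something like the sketch below. every name here is made up; nothing like this ships with blender, it's just the shape of the tool being wished for:

```python
from dataclasses import dataclass

# Entirely hypothetical interface for the wished-for retopo tool; the names and the
# "model" are invented for illustration, and no such Blender integration exists.

@dataclass
class Mesh:
    vertices: list   # [(x, y, z), ...] from the high-poly sculpt
    faces: list      # tuples of vertex indices

@dataclass
class GuideLoop:
    points: list     # a sparse user stroke: "edge loops should flow like this here"

def auto_retopologize(high_poly: Mesh, guides: list[GuideLoop],
                      target_faces: int = 5000) -> Mesh:
    """Stand-in for a learned model that would emit clean quad topology conditioned
    on the sculpt surface plus the artist's guide loops, ready for rigging."""
    raise NotImplementedError("the dream tool, not an implementation")
```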
72 notes
·
View notes
Note
thoughts on xDOTcom/CorralSummer/status/1823504569097175056 tumblrDOTcom/antinegationism/758845963819450368 ?
I mostly try to ignore AI art debates, and as a result I feel like I don't have enough context to make sense of that twitter exchange. That said...
It's about generative image models, and whether they "are compression." Which seems to mean something like "whether they contain compressed representations of their training images."
I can see two reasons why partisans in the AI art wars might care about this question:
1. If a training image is actually "written down" inside the model, in some compressed form that can be "read off" of the weights, it would then be easier to argue that a copyright on the image applies to the model weights themselves. Or to make similar claims about art theft, etc. that aren't about copyright per se.
2. If the model "merely" consists of a bunch of compressed images, together with some comparatively simple procedure for mixing/combining their elements together (so that most of the complexity is in the images, not the "combining procedure"), this would support the contention that the model is not "creative," is not "intelligent," is "merely copying art by humans," etc.
I think the stronger claim in #2 is clearly false, and this in turn has implications for #1.
(For simplicity I'll just use "#2", below, as a shorthand for "the stronger claim in #2," i.e. the thing about compressed images + simple combination procedure)
I can't present, or even summarize, the full range of the evidence against #2 in this brief post. There's simply too much of it. Virtually everything we know about neural networks militates against #2, in one way or another.
The whole of NN interpretability conflicts with #2. When we actually look at the internals of neural nets and what is being "represented" there, we rarely find anything that is specialized to a single training example, like a single image. We find things that are more generally applicable, across many different images: representations that mean "there's a curved line here" or "there's a floppy ear here" or "there's a dog's head here."
The linked post is about an image classifier (and a relatively primitive one), not an image generator, but we've also found similar things inside of generative models (e.g.).
I also find it difficult to understand how anyone could seriously believe #2 after actually using these models for any significant span of time, in any nontrivial way. The experience is just... not anything like what you would expect, if you thought they were "pasting together" elements from specific artworks in some simplistic, collage-like way. You can ask them for wild conjunctions of many different elements and styles, which have definitely never been represented before in any image, and the resulting synthesis will happen at a very high, humanlike level of abstraction.
And it is noteworthy that, even in the most damning cases where a model reliably generates images that are highly similar to some obviously copyrighted ones, it doesn't actually produce exact duplicates of those images. The linked article includes many pairs of the form (copyrighted image, MidJourney generation), but the generations are vastly different from the copyrighted images on the pixel level -- they just feel "basically the same" to us, because they have the same content in terms of humanlike abstract concepts, differing only in "inessential minor details."
If the model worked by memorizing a bunch of images and then recombining elements of them, it should be easy for it to very precisely reproduce just one of the memorized images, as a special case. Whereas it would presumably be difficult for such a system to produce something "essentially the same as" a single memorized image, but differing slightly in the inessential details -- what kind of "mixture," with some other image(s), would produce this effect?
Yet it's the latter that we see in practice -- as we'd expect from a generator that works in humanlike abstractions.
And this, in turn, helps us understand what's going on in the twitter dispute about "it's either compression or magic" vs. "how could you compress so much down to so few GB?"
Say you want to make a computer display some particular picture. Of, I dunno, a bird. (The important thing is that it's a specific picture, the kind that could be copyrighted.)
The simplest way to do this is just to have the computer store the image as a bitmap of pixels, without any compression.
In this case, it's unambiguous that the image itself is being represented in the computer, with all the attendant copyright (etc.) implications. It's right there. You can read it off, pixel by pixel.
But maybe this takes up too much computer memory. So you try using a simple form of compression, like JPEG compression.
JPEG compression is pretty simple. It doesn't "know" much about what images tend to look like in practice; effectively, it just "knows" that they tend to be sort of "smooth" at the small scale, so that one tiny region often has similar colors/intensities to neighboring tiny regions.
Just knowing this one simple fact gets you a significant reduction in file size, though. (The size of this reduction is a typical reference point for people's intuitions about what "compression" can, and can't, do.)
And here, again, it's relatively clear that the image is represented in the computer. You have to do some work to "unpack" it, but it's simple work, using an algorithm simple enough that a human can hold the whole thing in their mind at once. (There is probably at least one person in existence, I imagine, who can visualize what the encoded image looks like when they look at the raw bytes of a JPEG file, like those guys in The Matrix watching the green text fall across their terminal screens.)
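To put rough, purely illustrative numbers on that (these are ballpark figures, not measurements of any particular codec):

```python
# Back-of-the-envelope storage arithmetic for the bird photo (illustrative numbers only).
width, height = 3000, 2000             # a ~6-megapixel photo
raw_bytes = width * height * 3         # uncompressed 8-bit RGB: 3 bytes per pixel
jpeg_bytes = raw_bytes // 10           # ~10:1 is a common ballpark for photographic JPEG

print(f"raw bitmap: {raw_bytes / 1e6:.0f} MB")    # ~18 MB
print(f"JPEG:       {jpeg_bytes / 1e6:.1f} MB")   # ~1.8 MB
```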
But now, what if you had a system that had a whole elaborate library of general visual concepts, and could ably draw these concepts if asked, and could draw them in any combination?
You no longer need to lay out anything like a bitmap, a "copy" of the image arranged in space, tile by tile, color/intensity unit by color/intensity unit.
It's a bird? Great, the system knows what birds look like. This particular bird is an oriole? The system knows orioles. It's in profile? The system knows the general concept of "human or animal seen in profile," and how to apply it to an oriole.
Your encoding of the image, thus far, is a noting-down of these concepts. It takes very little space, just a few bits of information: "Oriole? YES. In profile? YES."
The picture is a close-up photograph? One more bit. Under bright, more-white-than-yellow light? One more bit. There's shallow depth of field, and the background is mostly a bright green blur, some indistinct mass of vegetation? Zero bits: the system's already guessed all that, from what images of this sort tend to be like. (You'd have to spend bits to get anything except the green blur.)
Eventually, we come to the less essential details -- all the things that make your image really this one specific image, and not any of the other close-up shots of orioles that exist in the world. The exact way the head is tilted. The way the branch, that it sits on, is slightly bent at its tip.
This is where most of the bits are spent. You have to spend bits to get all of these details right, and the more "arbitrary" the details are -- the less easy they are to guess, on the basis of everything else -- the more bits you have to spend on them.
But, because your first and most abstract bits bought you so much, you can express your image quite precisely, and still use far less room than JPEG compression would use, or any other algorithm that comes to mind when people say the word "compression."
It is easy to "compress" many specific images inside a system that understands general visual concepts, because most of the content of an image is generic, not unique to that image alone.
The ability to convey all of the non-unique content very briefly is precisely what provides us enough room to write down all the unique content, alongside it.
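There is a standard way to make the "spending bits" talk literal: under a model that assigns probability p to an attribute, writing that attribute down costs about -log2(p) bits, so anything the model already expects is nearly free. A toy calculation, with probabilities invented purely for illustration:

```python
import math

# Toy description-length arithmetic for the oriole photo. The probabilities are
# made up; the point is just that expected attributes cost almost nothing, while
# the arbitrary fine details are where the budget actually goes.

def bits(p: float) -> float:
    """Ideal code length, in bits, for an event the model assigns probability p."""
    return -math.log2(p)

attributes = {
    "it's an oriole (given 'bird photo')":    0.02,   # a handful of bits
    "bird shown in profile":                  0.5,    # one bit
    "close-up with shallow depth of field":   0.9,    # nearly free
    "background is a bright green blur":      0.95,   # nearly free: the default guess
}
for name, p in attributes.items():
    print(f"{name:40s} {bits(p):5.2f} bits")

# The inessential details -- exact head tilt, bend of the branch, and so on -- are
# hard to guess from everything else, so each one costs real bits; that is where
# most of the encoding ends up being spent.
```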
This is basically the way in which specific images are "represented" inside Stable Diffusion and MidJourney and the like, insofar as they are. Which they are, not as a general rule, but occasionally, in the case of certain specific images -- due to their ubiquity in the real world and hence in the training data, or due to some deliberate over-sampling of them in that data.
(In the case of MidJourney and the copyrighted images, I suspect the model was [over-?]heavily trained on those specific images -- perhaps because they were thought to exemplify the "epic," cinematic MidJourney house style -- and it has thus stored more of their less-essential details than it has with most training images. Typical regurgitations from image generators are less precise than those examples, more "abstract" in their resemblance to the originals -- just the easy, early bits, with fewer of the expensive less-essential details.)
But now -- is your image of the oriole "represented" in computer memory, in this last case? Is the system "compressing" it, "storing" it in a way that can be "read off"?
In some sense, yes. In some sense, no.
This is a philosophical question, really, about what makes your image itself, and not any of the other images of orioles in profile against blurred green backgrounds.
Remember that even MidJourney can't reproduce those copyrighted images exactly. It just makes images that are "basically the same."
Whatever is "stored" there is not, actually, a copy of each copyrighted image. It's something else, something that isn't the original, but which we deem too close to the original for our comfort. Something of which we say: "it's different, yes, but only in the inessential details."
But what, exactly, counts as an "inessential detail"? How specific is too specific? How precise is too precise?
If the oriole is positioned just a bit differently on the branch... if there is a splash of pink amid the green blur, a flower, in the original but not the copy, or vice versa...
When does it stop being a copy of your image, and start being merely an image that shares a lot in common with yours? It is not obvious where to draw the line. "Details" seem to span a whole continuous range of general-to-specific, with no obvious cutoff point.
And if we could, somehow, strip out all memory of all the "sufficiently specific details" from one of these models -- which might be an interesting research direction! -- so that what remains is only the model's capacity to depict "abstract concepts" in conjunction?
If we could? It's not clear how far that would get us, actually.
If you can draw a man with all of Super Mario's abstract attributes, then you can draw Super Mario. (And if you cannot, then you are missing some important concept or concepts about people and pictures, and this will hinder you in your attempts to draw other, non-copyrighted entities.)
If you can draw an oriole, in profile, and a branch, and a green blur, then you can draw an oriole in profile on a branch against a green blur. And all the finer details? If one wants them, the right prompt should produce them.
There is no way to stop a sufficiently capable artist from imitating what you have done, if it can imitate all of the elements of which your creation is made, in any imaginable combination.
111 notes
·
View notes