#it doesn’t even look like ai. it’s just digitally rendered
Charbroiled Basilisk
“Run that by me one more time,” Cleo said, rubbing their temples. “You…what?”
“We accidentally made an AI,” Mumbo said sheepishly. “And it says it’s made copies of all of you, besides me and Doc, and is torturing all your copies in the worst ways imaginable. For um. Eternity?”
Cleo stared at the box Mumbo was talking about. It was a rectangular PC case with a monitor perched on top, a monitor that was showing a pair of angry red eyes. The eyes looked between Mumbo, and Doc, and then back to her.
The box, Cleo noted, was plugged into the wall.
“Uh,” Jevin said, tilting his head with a slosh, “So like, far be it from me to tell you guys how to do your jobs. But like, why? Why did you make a machine that did that?”
“We didn’t!” Doc threw his hands up. “We made the AI to help us design things. I just- we wanted a redstone helper.”
“And then it got really smart really quickly,” Mumbo said, twiddling his moustache nervously. “It says it’s perfectly benevolent and only wants to help!”
“Uh-huh,” Cleo said. “‘Benevolent’, is it?”
“Well, yeah. It’s been spitting out designs for new farms I couldn’t even imagine,” Mumbo said, pointing at the machine. The evil red eyes faded away, and the screen suddenly showed an image of a farm of some kind, rotating in place. It was churning out a constant stream of XP onto a waiting player, who looked very happy.
A nearby printer started to grind and wheeze, Cleo’s eyes following a cable plugged into the box all the way to the emerging paper. Doc fished out the printout, and hummed consideringly.
“Interesting. Never considered a guardian-based approach to one of these…”
“Doc,” Cleo said. “What was that about this thing torturing copies of us for all eternity?”
“Oh, uh, that,” Doc said. “Um. The machine says it’s benevolent and only wants what’s best for us, which is why it’s decided that your copies need to suffer an eternity of torment. For um. Not helping in its creation, and slowing down the time it took for this thing to exist?”
Cleo stared at the box.
“...So, there’s a fragment of me swirling around in there in abject agony?” Cleo mused, and Jevin hissed some gas out of a hole in his slime in exasperation.
“Like, I’m no philosopher,” Jevin said, “but that doesn’t sound particularly ‘benevolent’ to me. Like, my idea of a benevolent helper-guy is…honestly, probably Joe. Helps with no thought of reward and doesn’t, uh, want to send me into the freaking torment nexus? Why would something benevolent want to send us to super-hell? I didn’t do anything wrong!”
“Fair point. I knew you were making this stupid thing, but. This is just dumb.” Cleo groaned.
“Man, I need a drink,” Jevin said, pulling a bottle of motor oil out of his inventory and popping the top. He shoved the bottle into the slime of his other hand and let the viscous yellow fluid pour in, slowly turning green as it met the blue.
“Yeah, I’ll second that. So…to recap, you two decided to build a thing. The thing declared it was a benevolent helper to playerkind, then immediately decided it was also going to moonlight as the new Satan of our own personal digital Hell? Have I got all that correct?” Cleo sighed, and Mumbo and Doc nodded sheepishly.
“Cool. I mean, not cool, but. Cool.” Jevin sighed.
“Now, hold on,” Cleo said, “because. How do we know your magic evil box is even telling the truth?”
“Uh…because it told us so?” Mumbo offered weakly.
“Yeah, but… Hang on.” Cleo sighed, tapping a message into their comm.
<ZombieCleo> Cub, how much data storage would it take to store and render a single player’s brain or brain equivalent?
<cubfan135> probably like a petabyte or more
<cubfan135> why
<ZombieCleo> don’t ask
<cubfan135> i see
<cubfan135> what did doc do this time?
<ZombieCleo> You don’t want to know.
“So, let’s say it’s a petabyte per player,” Cleo mused, looking up from their comm, “So that’s…twenty-six petabytes to render all of us, minus you two, of course.”
The red eyes were staring at her angrily.
“Did you guys give your evil box twenty-six petabytes of data storage, by chance?”
“Um, no? I don’t think so, anyway…” Mumbo said awkwardly, scratching his head.
“So, odds are, if this thing IS being truthful, then all it’s torturing are a bunch of sock puppet hermits,” Cleo said, gesturing at the computer. “It doesn’t have the data storage, let alone the processing power.”
“If that,” Jevin countered, “that thing’s probably got, what, ten terabytes? Optimistically? Dude, it’s probably just sticking pins in a jello cube instead of actually torturing, you know, me.”
“And another thing!” Cleo said, “Even assuming you DID give your stupid box enough data storage for all of us, how the hell did it get our player data to start with?”
“Yeah!” Jevin countered, “It would have had to either get us to submit to a brain scan- which, why would you ever do that if it’s gonna use the scan to torture you? Or like, since I don’t have a brain, find some way to steal our player data. And I feel like Hypno or X or someone would have noticed?”
“Uh…” Doc scratched his head, “I don’t know.”
“You reckon it’s lying, mate?” Mumbo asked, and Doc nodded.
“Probably yeah. So…We can just…ignore it?”
“Oh no,” Cleo said, shaking their head, “We’re not ignoring anything.”
“We’re not?” Mumbo asked.
“Nope!” Cleo said, “We’re not ignoring a damn thing. Because…”
She and Jevin locked eyes.
“-Because if there’s even the SLIGHTEST CHANCE that this thing’s locked me and you in a phone booth together for like, three days, then…well. Then it pays.” Jevin nodded with a slop of slime.
Cleo marched over and grabbed the plug, yanking it out of the wall. The screen momentarily showed a bright red ! and then flashed to a dead black. She picked up the whole unit and walked over to Jevin, who’d punched a one-block hole in the floor and filled it with lava.
Cleo threw the computer inside, and all four hermits watched as it fizzled away to nothing.
“And that,” Cleo said, “is how you roast a basilisk.”
#magnetar writes#Hermitcraft fic#Mumbo Jumbo#Docm77#ZombieCleo#iJevin#Parody#this was written at 1 AM last night so this may be a little ???
Glitchcore dialogue prompts
1. "Reality is buffering… What happens when we hit pause?"
Character A stares at the glitching horizon, where the sky flickers between pixelated voids. Character B frowns, “Maybe we’re not meant to see the code behind it all.”
2. "You’re a corrupted file. But that doesn’t mean you’re broken."
Character A experiences moments of disconnection, their speech fragmented by static. Character B tries to reassure them, but each word feels like it’s slipping through the cracks of reality.
3. "Every time I blink, the world skips a frame."
Character A notices the world is out of sync. People flicker, objects disappear, and their reflection isn’t quite right. They turn to Character B for answers, but even their words are distorted, glitching mid-sentence.
4. "I was never programmed to feel this… but here I am, crashing."
Character A, an AI or digitally enhanced human, starts to experience emotions for the first time, leading to a system overload. Their thoughts flash like corrupted code, scrambling their sense of self.
5. "We’re stuck in a loop. But maybe this time, we can break it."
Time is glitching for Character A and Character B, repeating the same moments over and over. As they try to escape, reality fractures, showing distorted fragments of alternate timelines.
6. "If I glitch out, don’t follow. I’m just data—nothing more."
Character A is fading, pixel by pixel, as the virtual world they live in begins to collapse. Character B insists on trying to save them, even though the lines between digital and physical are breaking down.
7. "I hear the static whispers… It’s like they know we’re here."
Character A starts to pick up on strange sounds—static, broken transmissions, and voices from somewhere beyond. They believe the glitches are alive, watching them.
8. "We’re just echoes in the system, flickering between what’s real and what’s not."
Character A questions their existence as the world around them constantly shifts and deforms. The glitches feel too intentional, like someone—or something—is controlling it all.
9. "I saw myself glitch today… but it wasn’t me. It was something pretending to be me."
Character A sees their own reflection glitch and morph into something unfamiliar. Is it an error in the system, or is something trying to overwrite them?
10. "I’ve been patched up so many times, I don’t even know which version I am anymore."
Character A has been modified, both physically and digitally, so many times that they’ve lost their sense of identity. They question whether they’re still the same person they once were, or just a collection of fragments.
"You're not seeing me right now, are you? I'm stuck between frames."
"The code is breaking down. I can feel it. Every time I blink, something new glitches."
"We were perfect once. Now, we're just corrupted data fragments trying to piece ourselves together."
"Reality doesn’t crash. It fades, like static, until the lines blur and you can’t tell what’s real anymore."
"Don't trust what you see. It's all just a simulation rendering too slowly to hide its flaws."
"Every time I move, I leave a part of myself behind, like I’m lagging between timelines."
"I’m not sure if I’m the glitch or if the world around me is. Does it matter?"
"The pixels around your face—they’re unraveling. We need to reset the program before you disappear completely."
"I keep hearing this… echo. It’s like my thoughts are repeating, but they aren’t mine."
"I thought I deleted you. Why do you keep reappearing in my feed?"
"The horizon just flickered. Did you see that? I think we’re reaching the edge of the simulation."
"Every time I think I’ve fixed it, the glitches return, worse than before. Maybe we’re meant to stay broken."
"If I lose connection, you have to promise to reboot me. I can’t afford to stay stuck in here."
"It’s strange, isn’t it? How the glitch makes everything look more real than reality ever did."
"What if I’m just a copy of me, and the original got corrupted long ago?"
"I saw the world tear for a second. The sky turned into data streams, and I think I saw someone behind it all."
"I can’t trust the mirrors anymore. They show me… versions of myself that I don’t recognize."
"They keep trying to patch me, but it never works. I think I’m beyond fixing."
"You keep glitching. Are you real or just an error in the system trying to communicate?"
"I can feel myself desyncing from reality. Every moment, I drift further away."
"I’ve been seeing static in the mirror. Like I’m glitching in and out of existence."
"I can’t tell if I’m in the real world or a simulation. The lines are all blurred now."
"My thoughts are stuttering—like an old video buffering. Can you hear it too?"
"We’ve got less than a second before the whole system crashes. Are you ready?"
"Every time I blink, I lose a part of myself. The screen flickers, and I'm gone."
"There’s a glitch in my memory. Did we meet before, or is this another loop?"
"I’ve been coded wrong, haven’t I? My emotions don’t feel… real."
"I tried to log out, but the world didn’t let me. Now, I’m stuck in the error."
"We’re all just data points now. I can see your code unraveling."
"You’re breaking the system. If you keep doing that, everything might collapse."
"Sometimes I hear a voice, like a distorted signal. It tells me the end is near."
"I reached out to touch you, but my hand just passed through like you were a hologram."
"The colors are bleeding into one another, like corrupted files. Can you fix this?"
"I’m not supposed to exist, not like this. I’m a glitch, an error in the code."
"Reality froze for a moment. Did you see it? Everything just stopped moving."
#glitchcore#glitching#tadc pomni#the amazing digital circus#digital circus#the amazing digital circus pomni#tadc#tadc caine#the amazing digital circus caine#dialogue prompt#writing dialogue#character dialogue#dialouge#creative writing
Duty Now For The Future [AI edition]
First the eyes gave it away, then they figured out how to make them the same size and facing the same direction.
Fingers and limbs proved to be the next tells, but now limbs usually look anatomically correct and fingers -- while still problematic -- are getting better.
Rooms and vehicles and anything that needs to interact with human beings typically show some detail that's wrong, revealing the image as AI generated.
But again, mistakes are fewer and fewer, smaller and smaller, and more and more pushed to the periphery of the image, thus avoiding glaring errors.
Letters and numbers -- especially when called to spell out a word -- provide an easy tell, typically rendered as arcane symbols or complete gibberish, but now AI can spell out short words correctly on images and it's only a matter of time before that merges with generative text AI to provide seamless readable signs and paragraphs.
All this in just a few years. We can practically see AI evolving right before our eyes.
Numerous problems still must be dealt with, but based on the progress already displayed, we are in the ballpark. All of this is a preamble to a look at where AI is heading and what we'll find when we get there. I haven't even touched on AI generated music or text yet, but I will include them going forward.
. . .
The single biggest challenge facing image generating AI is that it still doesn't grasp the concept of on model.
For those not familiar with this animation term, it refers to the old hand drawn model sheets showing cartoon characters in a variety of poses and expressions. Animators relied on model sheets to keep their characters consistent from cartoon to cartoon, scene to scene, even frame to frame in the animation. Violate that reference -- go “off model” as it were -- and the effect could look quite jarring.*
AI still struggles to show the same thing the same way twice. Currently it can come close, but as the saying goes, “Close don't count except in horseshoes, hand grenades, and hydrogen warfare.”
There are some workarounds to this problem, some clever (i.e., isolate the approved character and copy then paste them into other scenes), some requiring brute force (i.e., make thousands of images based on the same prompt then select the ones that look closest to one another).
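To make the brute-force option concrete, here is a minimal sketch of that select-the-closest step, assuming CLIP image embeddings via the Hugging Face transformers library (the file names and the similarity threshold are placeholders, not a recommendation):

```python
# Minimal sketch of the brute-force consistency workaround: generate many
# images from one prompt, then keep only the ones whose CLIP embeddings sit
# closest to an approved reference image. Paths/threshold are placeholders.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(paths):
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize

reference = embed(["approved_character.png"])            # the "on model" look
candidates = sorted(Path("generations/").glob("*.png"))  # thousands of tries
# For a real run, embed the candidates in chunks rather than one batch.
scores = (embed(candidates) @ reference.T).squeeze(1)    # cosine similarity

# Keep only the closest matches for reuse in later scenes.
best = [p for p, s in zip(candidates, scores) if s > 0.92]
```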
When done carefully enough, AI can produce short narrative videos -- narrative in the sense that they can use narration to appear thematically linked.
Usually, however, they're just an endless flow of images that we, the human audience, link together in our mind. This gives the final product, at least from a human POV, a surreal, dreamlike quality.
In and of themselves, these can be interesting, but they convey no meaning or intent; rather, it's the meaning we the audience ascribe to them.
Years ago when I had my first job in show biz (lot attendant at a drive-in theater), a farmer with property adjoining us raised peacocks as a hobby. The first few times I heard them was an unnerving experience: They sounded like a woman screaming help me.
But once I learned the sounds came from peacocks, I stopped hearing cries for help and only heard birds calling out in a way that sounded similar to a woman in distress.
Currently AI does that with video. This will change with blinding speed once AI learns to stay on model. The dreamlike / nightmarish / hallucinogenic visions we see now will be replaced with video that shows the same characters shot to shot, making it possible to actually tell stories.
How to achieve this?
Well, we already use standard digital modeling for animated films and video games. Contemporary video games show characters not only looking consistent but moving in a realistic manner. Tell the AI to draw only those digital models, and it can generate uniformity. Already in video game design a market exists for plug-in models of humans, animals, mythical beasts, robots, vehicles, spacecraft, buildings, and assorted props. There are further programs to provide skins and textures to these, plus programs to create a wide variety of visual effects and renderings.
Add to this literally thousands of preexistent model sheets and there's no reason AI can't be tweaked to render the same character or setting again and again.
As mentioned, current AI images and video show a dreamlike quality. Much as our minds attempt to weave a myriad of self-generated stimulations into some coherent narrative form when we sleep, resulting in dreams, current AI shows some rather haunting visual images when it hits on something that shares symbolic significance in many minds.
This is why the most effective AI videos touch on the strange and uncanny in some form. Morphing faces and blurring limbs appear far more acceptable in video fantastique than attempts to recreate reality. Like a Rorschach blot, the meaning is supplied by the viewer, not the creator.
This, of course, leads to the philosophical rabbit hole re quantum mechanics and whether objects really exist independent of an observer, but that's an even deeper dive for a different day.
© Buzz Dixon
* (There are times animators deliberately go off model for a given effect, of course, but most of the time they strive for visual continuity.)
To Continue, Please Verify You Are A Human
Today’s prompt: a conversation with my BFF about the comments I received on my fanfiction about it being written by an AI. It wasn’t, but I have to confess, that same BFF is my smart friend who helps me climb out of plot holes and link insane ideas together. It may be an oversight of mine that I have never checked to see if she is a robot…
By the time the ambulance arrives, it’s already over. The neighbours come out to gawk as the medics climb the stairs with a stretcher.
“What’s happened?” demands one woman, watching as they don’t even attempt to knock on the door of flat 11001 in the biggest block of flats in The Circuit, a new housing development which the cynical claim is paid for by the world’s super wealthy purely as a tax-free means of transferring money across Europe.
“Stand back, please,” says the man, not answering her question.
His colleague gives an apologetic smile, but they let themselves in and the door is shut sharply in the faces of the bystanders.
“Poor thing,” the woman who had spoken says, already turning back to her own doorway and the incomplete daily tasks. “I hope it’s nothing serious, she can’t be much more than 20.”
“Nice girl, polite,” agrees Mrs Next-Door-But-One.
“Hot,” leers middle aged Mr Across-The-Road, which the women take as a cue to disappear back through their own doors.
*
Meanwhile, inside the flat, the paramedics are looking at the body on the floor. It’s supine, limbs stiff and bent unnaturally like a dropped doll. Honey brown hair fans out from the face and the eyes are wide and staring like a pair of marbles, increasing the resemblance to an imperfect facsimile.
The man, so gruff outside the door, sighs, closes his eyes and pinches the bridge of his nose. “I hate this bit.” He skirts a spreading, red puddle which seems to be pooling from deep gouges across the body’s thigh. “Why do they have to make it red?”
The woman pulls her ponytail into a higher and more severe bun and shrugs off her paramedic vest, retrieving a pouch of hidden tools from a secret belt pocket. “You know why.”
“Doesn’t seem to make any difference,” he mutters, but retrieves the saw from under the stretcher blanket in case it is needed nonetheless.
The woman reaches out, touching the body’s face, turning it this way and that, then she digs a nail into an indent behind the ear and a whole section of the forehead slides back. Inside is a spaghetti mass of wiring, still slightly smoking and visibly charred.
“Let’s see what we’ve got here then.” She plugs in a hand held downloader, navigating through what remains of the data files.
*
She’s done it hundreds of times. Answered numerous digital requests to identify horses or traffic lights or bridges, reconstructed vague blurry renderings of random numbers and letters, but for some reason this is defeating her. Again.
Still.
It had been funny at first. “Isn’t technology grand,” she’d joked, and she considers, not for the first time, slinging the whole thing out the window. But she’s just had it repaired (at no small expense) and, knowing her track record, her phone isn’t long for this world anyway, so best not.
Then it had been frustrating. The lack of customer service number, the inability to reason with a human, just an unfeeling digital screen blankly requesting she prove her humanity and rejecting her.
And then there had been the dreams.
*
“Not much left,” he looks over her shoulder.
She navigates through the fragments which are present. Some bits of personality, error messages, little more. There’s nothing even resembling the Operating System or basic function algorithms.
*
She is human. Of course she is human. She remembered her school, her family, her life. She had gone for lunch with her sister and niece just yesterday.
But still the dreams and the insidious thought - they looked so alike, everyone commented on it, like factory produced dolls on an assembly line, each identical.
I’m always breaking technology, she tries to reassure herself. I’m the only one in the office who can’t use that stupid Odoo Accounting thing. It just crashes when I go near it. But if living through Covid has taught her anything, it is that humans can be dangerous to others, spreading contagion just by going near them.
But I remember things, she insists again in her own mind. I get sick.
Always the other voice: computers get viruses too, and always the mocking request of the white screen before her.
“To continue, please verify that you are a human”
If I was a murderer, she thinks inanely, clicking away at the requested squares with hard jabs, I would be presumed innocent. It would be their job to verify that I was not. Robot cannot be the default. But she knows too well that online it can be: social media full of bots, her email full of spam; even her phone chooses that moment to chime with an automated text about her upcoming need to book a dentist appointment.
“Unsuccessful attempt: To continue, please verify that you are a human”
It’s been days.
“I am human!” she whisper-shrieks at the computer. “I’ll prove it, you stupid machine.”
It’s a mark of the stress, of the recursive error loop she’s been trapped in. She grabs her scissors from the table and digs hard into her leg. Immediate red gushes up and out in fast spurts matching her frantically beating heart. “See…blood…” she trails off. “Well, fuck. I didn’t mean to-”
The blood makes her fingers slippery, her fingerprint unreadable. It takes minutes for her phone to respond to her touch, to dial for the help she needs, as she babbles her panic to the woman on the other side of the phone, as she slips into the darkness.
*
“No.” The woman puts away her device and unplugs it. “Not enough to reboot, it needs a full wipe and reinstall, and the casing is ruined anyway.”
He grimaces and hefts the saw. “You want me to-?”
“Yeah, the synthetic flesh is still fresh enough for the lab and once it’s processed the damage won’t be noticeable. May as well recycle.”
“I wonder,” he says, hovering the blade over the body’s shoulder joint, “why this model can’t handle existential crises at all?”
She shrugs, already calling the office. “We need a new Life Image Synthetic Adult for 11001 on The Circuit…yeah…yeah…no, none of the neighbours saw anything, programme her with memory of calling an ambulance for- Oh, oh yeah. Good idea. I forgot we had that one. Yeah, make use of it, even with only one eye. Hmmm...we’ll be back at the office in half an hour or so unless we get another call out. See you later, Sharon. Yeah, definitely. Drinks tonight, I’ll see you at Harry’s. Yeah. Yeah. Around 8. Bye.”
She helps her colleague load the limbs into cool storage, and the torso, bound for deep analysis back at the labs, onto the stretcher, where she covers it with a blanket so all the neighbours will see is an ill girl being loaded into an ambulance. She takes her time, positioning the body perfectly, inconspicuously.
“This is easier when we can use a body bag,” he comments.
“Honestly, if you want my opinion, this whole thing is a failure. No point if the Life Image Synthetic Adults freak out about their humanity at the slightest provocation. Decommission the lot of them and spend the money on repairing the environmental damage instead of on versions of humans that don’t contribute to it.” She finishes tucking in the blanket.
He barks out a laugh. “I wouldn’t let The Collaboration hear you say that.”
She rolls her eyes, but doesn’t push her point. The Collaboration does not like dissenters. “Anything else we need?”
“The reinstallation team’ll do the carpet.”
“Let’s just see what she was trying to access.” The woman bends down and her fingers quickly select a Ps5gQ. They wait as the loading wheel turns, morbidly curious to know what had been so important it had been worth dying for.
“Unsuccessful attempt: To continue, please verify that you are a human” blinks up on the screen.
“Coincidence,” he says.
“Must be a server fault,” she adds.
#@beloveddawn-blog#my writing#AI#Because I was cross#Written for/with beloveddawn#tw for suicidal imagery
AI art, and a couple of thoughts
A while back, an artist/author I follow started playing around with digital art assistants. They made it sound interesting enough that I tried Midjourney...and got a little hooked. (Examples below the cut.)
As a person who’s never had the patience or skill to Do Art, I find it overwhelmingly joyful to just...enter some words, and be given art in return? It’s most likely not what I pictured, but that often makes it more fun. I had fun in the beginning putting in abstract concepts, just to see what came back. Two of my favorites were generated by simply putting in “evil hunger” and “icy rage”.
Evil Hunger:
Icy Rage:
At that time, Midjourney was...not good at rendering humans. You see, it works by taking all the art it has access to online, attempting to match keywords to prompts, putting that subset of the art in its little AI blender, and spitting out the result. It doesn’t know, for example, that humans typically have two eyes that look in the same direction. Early experiments gave me a lot of figures facing away from the camera in silhouette, which was good because the clearer ones usually came straight from the Uncanny Valley. I got human figures with one leg, and horses with six.
That was Midjourney version 3. They’ve since come out with a version 4, which is much better at getting the proper number of physical features sorted. It still has a hard time with hands...but that’s a problem for all artists, from what I understand.
As you can read in the Discourse, if you’re so inclined, AI art can be Problematic. Is it stealing? Maybe. It’s a little ludicrous for me to input “icy rage” into a text box and claim the result is “my art.” There are thousands of real artists, living and dead, whose skill the AI is borrowing. That said, can we defend it as creative derivation? Artists (and writers, and most creatives) borrow from each other all the time. We use each others’ work as inspiration and let it drive our own creation. Isn’t that what’s happening here?
The question is, how much of the result is transforming others’ works, and how much is straight up copying? Unfortunately, there’s no way to tell.
For myself, I’ve got two rules. One is, I don’t use artists’ names in my prompts. A lot of people do, but to me, that’s where it crosses the line into plagiarism. The other is...well, I’ll illustrate with something it gave me a while back:
I asked it for a “fire opal talisman” and it gave me this set of four. Look at the one in the lower right corner. It looks like the AI has picked up a signature from whatever art it was using as a reference. That makes me profoundly uncomfortable. So my second rule is, no matter how much I like a result, anything with a ghost signature/watermark is out.
Now, this is a whump blog, so of course I was curious about how it might do for that. The answer is...meh? Even though the current version is better at rendering people, it’s still not great. Many human figures end up looking cartoony. Also, Midjourney has a list of words that cannot be used as prompts, in order to prevent misuse. That’s definitely a good thing, but it means I can’t use “bruised” for example.
I did get a couple of interesting whumpy results, like when I asked it for a film-noir-ish wounded man in a trenchcoat:
That’s not bad. I could use that as story inspo or illustration. It’s just that ultimately, I find that the more specific the result I want, the more difficult it is to wring that image out of the AI.
And one more thing
Since I’m posting this on Tumblr in November 2022, I’m contractually obligated to add this masterpiece I generated five minutes ago:
(IYKYK and all that. The weirdness of the 3 and 4 are an AI artifact, but it amuses me to think that this might be the start of a dream sequence or hallucination where our MC imagines the clock’s numbers moving and changing.)
Objective: To have fun playing around with 3D, digital painting, and AI image generation; testing the resources needed, the modularity and expediency of the process, and how accurate the results could be.
Idea: I wanted something reminiscent of an old-timey pulp magazine cover, but with a modern look. Ideally, it would look dreamy and foggy.
Assets: Characters modelled on the Genesis 8 base, with morphs and textures made by myself. The background is a mix of custom 3D objects, plus store-bought scenery (IIRC Grovebrook Park and Urban Sprawl). On the AI side, I used the models Chillout Mix and NeverEnding Dream. I also used free images here and there.
Process: I started with a 3D render, the kind I usually do. I tried many poses and lighting options (turns out that doesn't make much of a difference). Then I started splicing the image into multiple parts by theme (Lois, background, Clark, etc...) and tweaking them with AI, digitally painting over when needed, doing many generations, a lot of repainting and blending. The objective was to take this bunch of different layers and blend them into something somewhat coherent.
Problems: It's definitely more labor-intensive than just painting, or using AI alone, or manipulating 3D objects and then rendering, mostly because all three methods hate each other, I guess. And I'm not very good. For one, AI hates faces. And eyes. Faces with glasses. And hands. And limbs. And edges of clothing. And objects. And everything. It will try to mix everything, fuse clothing to body, deform limbs, etc. Jon was supposed to be snapping his fingers, but the AI really wanted him to hold a glass... or some hallucinated object. Every new layer needed more painting to add cohesion. And it was not very predictable: I wanted, for example, to make Lois look less like a teenaged anime girl, but the model does have its biases. After that, making the whole thing not look like a bunch of layers barely blended at the borders was more time-intensive than difficult, per se. I couldn't just chuck the whole image into the AI and hope it would do it for me, because I don't have a server room full of GPUs and I didn't want to compromise on resolution, so I did it by hand. I will pretend the ghosting was an intentional artistic choice. It was my first time doing this, so... yeah, there are things I didn't know I could do that would have helped in the beginning.
My thoughts: Overall, it's a very resource-intensive process, in GPU, time, and energy. It's not very modular, or more precisely, not easily modular. I did a test with an older Jon model, and just the thought of spending even more hours made me decide it's not worth it. There's the advantage that it's free, but the result is also single-use, unlike 3D stuff you can make or purchase or just find online. Aesthetically... I don't dislike how Clark or Jon came out, but Lois was not the best. It was fun, if exhausting.
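Roughly, each of those per-layer AI passes corresponds to an img2img step. A minimal sketch of one such pass using the diffusers library; the checkpoint path, prompt, and settings below are illustrative placeholders, not the exact ones used for this piece:

```python
# Illustrative img2img pass over one spliced layer (e.g. just Lois),
# using the diffusers library. Checkpoint path, prompt, and strength
# are placeholder assumptions, not the piece's actual settings.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "models/neverending-dream",  # local community checkpoint (assumed path)
    torch_dtype=torch.float16,
).to("cuda")

layer = Image.open("render_layers/lois.png").convert("RGB")

result = pipe(
    prompt="1940s pulp magazine cover, dreamy fog, painterly",
    image=layer,
    strength=0.35,            # low strength keeps the 3D render's pose/composition
    guidance_scale=7.0,
    num_images_per_prompt=4,  # generate several, pick one, paint over, repeat
).images

result[0].save("layers_out/lois_pass1.png")
```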
#3d#fanart#dc fanart#superman#superhero#clark kent#jon kent#lois lane#superboy#digital art#civilian clothes
Yet again begging everyone to just treat photography like paintings. AI has not made that big of a change; it just provided a useful gotcha. Photomanips and staged photos have been a thing since photography started, so no great truth is being taken away from you. And noticing alterations is a trained art skill. You know, useless art degree shit. People freaking out over being tricked by AI annoys me in part because, yet again, a humanities skill is being treated like everyone on earth should be able to do it, or self-teach it in 15 minutes.
I was looking at a bunch of 17th century paintings where Mary appears next to a worshipper and thinking about how specialists can tell when Mary has been painted less realistically on purpose whereas another person in the painting was a direct likeness of a model (usually who commissioned it). I can’t tell! I wasn’t good at art history. However, I once won a pub quiz by correctly identifying every single photo vs CGI generated image after they’d been put through a filter, because I used to do digital art. It pops out at me; I don’t need to count any proverbial fingers. But I never bothered with AI art or even modern digital art rendering tools, so nowadays I don’t expect to get it right away. Same with some photoshopping. The solution isn’t entirely “learn to count the fingers”, it’s “learn that absolutely no medium is this great and perfect Truth you’re searching for.” Which will never catch on, because so many people have been promised it.
Truth in art — truth in life — doesn’t come from an unmarred photo. It comes from someone trying their absolute best, just gutting themselves in the attempt, to tell you their lived experience, and say “do you understand? Was it the same for you?” That moment is so much more important.
Path Tracing with DLSS 3.5: NVIDIA’s Contribution
In order to commemorate the release of NVIDIA DLSS 3.5 in Cyberpunk 2077: Phantom Liberty, Digital Foundry held a roundtable video chat with a number of special guests. Among those in attendance were Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA, and Jakub Knapik, Vice President of Art and Global Art Director at CD PROJEKT RED.
DLSS 3.5: NVIDIA’s Contribution
When talking about the new technology, NVIDIA’s Bryan Catanzaro noted that not only is DLSS 3.5 more attractive than native rendering but, in a way, its frames are more real when coupled with path tracing than natively rendered frames produced with the old rasterized methodology.
To tell you the truth, I believe that DLSS 3.5 is responsible for even further enhancing Cyberpunk 2077’s already stunning visuals. That is how I see things. Again, this is due to the fact that the AI is able to make decisions regarding how to display the picture that are more intelligent than those that we were previously capable of making without the assistance of AI. That’s something that’s going to keep evolving, in my opinion.
When compared to standard visuals, the frames in Cyberpunk 2077 that make use of DLSS (including Frame Generation) appear far more “real.” If you consider all of the graphic trickery, such as occlusions, artificial reflections, and shadows, as well as screen-space effects…Doesn’t everyone agree that raster in general is just a big bag of lies? Now that we have that information, we can toss it out and begin doing path tracing. If we’re lucky, we’ll end up with real shadows and true reflections.
The only way we will be able to accomplish that is by using AI to create a large number of pixels. Without the use of gimmicks, rendering via path tracing would need an excessive amount of computer power. Therefore, we are modifying the kind of methods that we make use of, and I believe that, at the end of the day, we are obtaining a greater number of genuine pixels with DLSS 3.5 than we were previously.
Jakub Knapik, who works for CDPR, agreed with him and referred to rasterization as “a bunch of hacks” placed one on top of the other.
It feels strange to say it, but I am in agreement with you. You bring up a very intriguing point when you ask, “What’s the tradeoff here?” This perspective is quite interesting. On the one hand, you have a rasterizing approach that is a collection of hacks, as Bryan described it earlier. You have a repository of rendering layers, but they are not in any way balanced with one another; instead, you are simply piling one on top of the other to generate frames.
Every single layer is a trick, an approximation of reality; screen space reflections and everything else like that; and you generate pixels in this manner as opposed to having a far more accurate definition of reality using path tracing, where you generate something in the middle (DLSS).
In the past, it was possible to state that you were sacrificing some quality in exchange for performance when you used DLSS. However, with DLSS 3.5, the image is actually and unquestionably better looking than it would be without it.
The topic of ‘fake frames’ was first brought up in conjunction with DLSS Frame Generation, which generates one frame independently of the rendering pipeline and inserts the ‘simulated frame’ after every ‘real frame.’ However, the performance improvement afforded by the generated frames is undeniably significant.
This is especially true in CPU-bound games, where the conventional upscaler can only do so much, while the generated frames are able to do significantly more. It’s not a surprise that AMD is going in the same direction with FSR 3, which will shortly make its premiere in games like Forspoken and Immortals of Aveum.
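For a rough sense of scale, here is a back-of-the-envelope sketch of how few displayed pixels are actually path-traced when both features are on. It assumes DLSS Performance mode's half-resolution-per-axis internal render and Frame Generation producing every other frame; the numbers are illustrative:

```python
# Back-of-the-envelope: what fraction of displayed 4K pixels are actually
# path-traced when DLSS Super Resolution (Performance mode) and Frame
# Generation are both enabled? Assumes the commonly documented
# half-resolution-per-axis internal render; numbers are illustrative.
output = 3840 * 2160                  # displayed pixels per frame
internal = (3840 // 2) * (2160 // 2)  # path-traced pixels per rendered frame

upscale_share = internal / output     # 0.25: 1 in 4 pixels is rendered
framegen_share = upscale_share / 2    # every other frame is fully generated

print(f"rendered pixels per displayed pixel: {framegen_share:.3f}")
# -> 0.125, i.e. roughly 7 of every 8 pixels on screen come from the AI
```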
Regarding DLSS 3.5 in particular, the brand-new Ray Reconstruction feature offers significant improvements to overall image quality whenever upscaling and ray tracing are enabled at the same time. My test of Cyberpunk 2077: Phantom Liberty contains more information on this topic.
Essential Avengers: Avengers #253: CONQUERING VISION
March, 1985
The Vision vs. Quasimodo... in the heart of a machine!
IT’S A ROBOT RUMBLE
ON THE INTERNET!
The Avengers seem very perturbed. Or maybe they’ve placed bets and are yelling at each other.
Anyway. Anyyyyyywayyyy.
Last time on Avengers: Vision became confined to a tube and was only fixed when Starfox hooked him up to Titan’s supercomputer ISAAC. While it helped Vision fix himself, it also seems to have changed his personality. Vision began conspiring with ISAAC to build a take-over-the-world-for-its-own-good device so he could take over the world for its own good and erase the evils and inequalities of man.
Vision was hesitant to pull the trigger on becoming a well-intentioned extremist and tried to gain power and influence by becoming the Avengers chairman and trying to make them more prominent with a branch team and closer ties to the White House.
But when anti-mutant arsonists burn down Vision and Scarlet Witch’s house during a new wave of anti-mutant fear, Vision decides ‘mmm yup, taking over the world time’. He distracts the Avengers by sending them to babysit the army as they poke Thanos technology that they shouldn’t poke and accidentally summon the Blood Brothers. And distracts Captain Marvel to go check out Thanos’ ship several light hours away past Pluto. Black Knight shows up unexpectedly but Vision shoves him into a tube to keep him out of trouble.
And now I guess Vision is going to fight Quasimodo the robot guy? Not sure how that fits in.
But first, some West Coast Avengers!
Like I said last time, they didn’t stop doing stuff just because their book is over.
Mockingbird happens to run into some drug runners while getting in some flight practice and figures heck why not beat up an entire boat full of gun-toting people as a light workout.
I guess the Quinjet can hover? Doesn’t seem to have thrusters or repulsors on the bottom or be a VTOL but hey, super advanced possibly Wakanda tech. It can do what it likes.
Mockingbird turns the drug runners over to the Coast Guard and returns to Palos Verdes and even gets to fly into one of those cool cliffside hangers disguised as a perfectly normal cliff. The West Coast Avengers revamped the hell out of the compound they bought.
Can you even legally excavate into a cliff like that? You can if you’re a superhero, I guess.
For some reason, there’s a fakeout where it’s implied Tigra is licking herself, cat style, but she’s just stretching. At least I hope the joke is that it sounded like she was cat-cleaning herself and not something else.
One can never tell.
Anyway, I assume Hawkeye is just annoyed that he’s going to be vacuuming hair out of expensive equipment banks later. But really it’s a case of: what if he threw a meeting and only he and Tigra came?
Mockingbird comes in not long after Hawkeye complains, slightly delayed from beating up drug runners. Wonder Man comes in shortly after, delayed by
FASHION
You know, this is a pretty great costume for Wonder Man. It’s what all his modern outfits are based on when he’s not just dicks-out energy man. I think I like the red jacket outfit more, because being the only guy who dresses in ‘normal’ clothes while still looking somehow out of fashion with normal people fits Wonder Man.
But I do love this one too. It’s got a simple charm. Deciding that Wonder Man’s colors are black and red instead of Christmas green and red was a great decision, and I’m sure that nobody will ever try to put him in red and green again.
Hawkeye grouses “Next, I suppose Iron Man will show up with a new chrome job!” but Iron Man is Sir Not Appearing in This Comic.
And the reason why is... looks like Tony and Rhodey are beating the crap out of each other in Iron Men armor this same month in Iron Man #192.
I don’t know the details but dammit Tony!
Anyway, over at last issue’s plot, the Avengers are still in Thanos’ ex-secret base in Arizona, still rolling their eyes and smh at the US Army for poking things what should not be poked.
Starfox and Scarlet Witch find a chamber blocked by rubble which has a symbio-nullifier which Starfox proposes to use to symbio-nullify the Blood Brothers.
First, he flexes on the US Army.
Army Guy: “It must weigh tons!”
Starfox: “Tons? Yes. But only about eight-and-a-half! Hardly any bother at all!”
Good flexing, Starfox.
Meanwhile, Captain America’s scolding has borne fruit. The Pentagon has agreed to seal Thanos’ base, pending further investigation. And Colonel Farnam agrees, because his training never prepared him to deal with MONSTERS FROM OUTER SPACE.
Also meanwhile, the army took pity on Hercules’ poor pantsless state, and/or were intimidated by it, and have lent him a uniform.
He wears it as you’d expect Hercules to wear it.
With plenty of plunging neckline.
Since the Blood Brothers have a psionic link which makes them stronger the closer they are, Hercules has chained them up on very distant parts of the base.
But this precaution is rendered moot pretty quickly when Starfox returns with the symbio-nullifier to symbio-nullify the Blood Brothers.
Starfox suspected that Thanos had one of these lying around as a precaution if he was going to let the Blood Brothers into his base.
Hercules lightly complains that he didn’t get a good fight with the Blood Brothers especially since the hordes of Muspell and Maelstrom’s wacky minions were interesting but not all that much of a challenge for the prince of power.
Back at the Avengers Mansion, the giant holographic head of Vision is still dealing with Dane Black Knight Whitman. Mostly by showing him video footage of how the other Avengers are tied up.
Dane is confused for multiple reasons, including that when last he heard Wasp was the leader.
Vision: “My failure to anticipate your arrival was an unfortunate lapse. I regret that, as a result, you must suffer the indignity of incarceration.”
Dane: “But... why?! What does keeping me in a tube accomplish?”
Vision: “It prevents you from interfering! You see, I have come to the conclusion that the only way I can fulfill my duty to make the Earth a safer place... is to run it myself!”
Dane: “What?!? But that’s crazy! Uh... I mean, you can’t possibly...”
Vision: “Exactly the sort of reaction I expected!”
Vision: ‘See, this is why you’re a tube boy now.’
Vision turns off the hologram, saying that Dane will understand when it’s all over.
As usual when somebody says something like that, Dane isn’t reassured, just more convinced he needs to break out and warn someone.
I’m not sure it’s not already too late, since Vision is safely ensconced in his take-over-the-world chair in his secret take-over-the-world room.
ISAAC’s head hologram shows up to Vision and asks him what the delay is, chop chop get to taking over the world for its own good.
Vision: “Sorry, ISAAC... I was just remembering how much I enjoyed having a body.”
Oh my god.
ISAAC: “What’s the sense of that? This entire world will soon be your ‘body’! How can the mobility of a single humanoid form compare to that?”
Vision: “I wouldn’t expect you to understand, ISAAC. It’s odd, though, so many times others have controlled my body... the robot Ultron, the Mad Thinker, Necrodamus... All have tried to subvert my mind and take me over. And now here am I... about to initiate the greatest takeover of all. One would almost think there were some mad connection -- !”
ISAAC: “Vision! You must not tarry!”
.................. Um, okay. So, rather than just being influenced by his brush with death and also brush with supercomputer, I think Vision is being actively manipulated into this by ISAAC.
I don’t know why but I do know that Vision continues being a viable character for decades so he probably can’t be burning all his bridges here.
Anyway, Vision uploads his psyche into the internet.
And like immediately starts taking over everything. One page montage immediately. The Pentagon, Cheyenne Mountain, SHIELD, satellites, the Kremlin.
Presumably the best security systems in the world barely warrant a mention for Vision’s mighty synthezoid brain.
He’s pulling a Skynet (for the world’s own good, so he says) and it’s barely an effort.
The scenery of being on the internet is, I dunno, pretty standard? Bright colors and dashes of light? I feel like I’ve seen it a lot of places.
But if we’re on page 13 and Vision is effortlessly Skynetting, what’s the rest of the issue going to be about? Interestingly, to me anyway, despite this being Vision’s turn as a villain, or at least a well-intentioned extremist, another villain gets shoved in anyway for him to fight.
As Vision is nyooming around the Kremlin’s computers, he nearly runs into another AI, Quasimodo.
Helpfully, we get a recap of Quasimodo’s ENTIRE LIFE STORY because this is pre-fan wikis and I don’t think Quasimodo has appeared in Avengers before.
He was created to be the ultimate computer by the Mad Thinker but was abandoned when he developed a mind of his own.
Quasimodo was found by the Silver Surfer who used the cosmic powers of the Power Cosmic to transform Quasimodo from a computer into a robot.
Turning to the wiki for more information: He turns on Silver Surfer because he doesn’t like the body he got, so Surfer turns him into a stone gargoyle. Let that be a lesson about ingratitude.
Somehow, he stopped being a gargoyle and fought various people until he was defeated by the Fantastic Four and the Sphinx and wound up a disembodied intelligence in a Russian computer system. And here we are!
Quasimodo begs Vision to help him escape this digital hellhole but Vision just turns and leaves because he doesn’t have time for these shenanigans. And also because he knows Quasimodo is a villain who tends to turn on the people who help him so fuck that.
Quasimodo: “You know of my past - of my power - and you still would dare deny me?! There can be but one name for such as you... and that is fool!”
He then hauls off and punches Vision. Because they’re both digital intelligences on the internet they can punch each other and have a fight scene. That’s how internet works.
That’s why Mega Man X can beat up so many people in cyberspace.
Quasimodo says if Vision doesn’t help him get back to the physical world, he’ll destroy him.
Vision: “Now, listen to me... I am consolidating all computers worldwide. I gave up my own physical body to do this, and I’ll not tolerate any interference from the likes of you!”
Quasimodo: “You willingly abandoned your body?! You’re not a fool... you’re mad!”
Faced with an irreconcilable set of priorities, Quasimodo trips them both into “the irresistible currents of the IMPULSE VORTEX!”
Sure. That sounds like how internet works.
Meanwhile, over at Pluto is very far away, Monica Marvel nyooms past the moons of Uranus. Apparently her visual acuity is REALLY good because she takes in the scenery while she’s nyooming and finds it frighteningly beautiful out in the outer planets.
Anyway, Vision scolds Quasimodo for plunging them into a torrent. Which makes me laugh. Surely it’s too soon for torrents to be a thing. He’s just using it in a metaphorical sense.
Quasimodo tries to shoot EYE BEAM at Vision, which misses the digital synthezoid but obliterates an electron.
In a cutaway that would be at home in a Marvel movie, the scene briefly shifts to a Soviet computing center and a guy named Alexey complaining that his program just crashed.
Quasimodo does Vision some punches but Vision decides to start trying since Quasimodo’s attacks risk alerting people that something is amiss on the internet. And Vision’s powers work just as well on the internet as Quasimodo’s do. In fact, screw that, they work better! Vision just gets more and more powerful the longer he spends on the internet!
Vision: “You might have slain me earlier, but now this world is mine -- and there is no place in it for you!!”
And at Vision’s command the internet launches Quasimodo from Earth itself.
The internet can do that.
Meanwhile, back at Avengers Mansion, Dane Whitman determines that the tube he’s a tube boy in may look like glass, but it’s as strong as steel. He’s not punching his way out of here.
But his recently uncursed cursed sword (the sword never stays uncursed for long, so I hope Dane enjoys having a not-cursed but very enchanted sword) is just a few feet away with the rest of his luggage. And there’s a mystic bond between himself and the sword, so if he just thinks about the sword hard enough, surely it’ll manifest in his hand.
Like the Force but slightly more convenient.
Dane Whitman: Nothing’s happening. Must not... be concentrating hard enough! Maybe the link was broken with the curse. No... no, I mustn’t even think that! I need my sword! I must have my sword! I must!
He do it!
The Notcursed Ebony Sword appears in his hand and he slices through that steel-strength glass like it’s just glass.
Meanwhile, over at Arizona, the Avengers finish up nullifying the Blood Brothers and putting them in suspended animation, or if you prefer, naptime timeout.
Captain America receives a buzz from Hawkeye who wonders what he’s doing within hailing range, ie in the western half of the US.
Captain America: “Arizona... government business... And I’m as surprised to hear you, as you are me! I take it that your team finished its mission in the Pacific early!”
Hawkeye: “Mission? What are you talking about, Cap? We haven’t been on any mission!”
Which is a dun dun dun considering their whole reason for being sent on this mission was that the West Coast Avengers were ostensibly busy.
And Vision lying about that raises a whole lot of questions for the Avengers.
Cap and Wanda Witch rush over to the Quinjet and contact the Mansion.
Vision: “Then you’re aware of my deception. I... am sorry, Cap. I didn’t want to mislead you, but I felt it necessary to carry out my plan.”
Scarlet Witch: “Plan? Vision, what do you mean? What have you done?”
Vision: “I... well, there is no easy way to put this... But I have taken over the world.”
You never want to hear “I have taken over the world” from a friend, unless it’s followed with “and I want to get you in on the ground floor of this exciting new opportunity.”
Vision promises the two that he’s taking over all of Earth’s computers for a really good reason like ending war and strife. And signs off by telling Wanda everything will be alright and that he loves her.
Aww?
Cap: “He meant it... he meant every word.”
Scarlet Witch: “He’d been upset lately, but I never thought... Cap, we have to stop him!”
Cap: “Yes. If there’s still time!”
DUN DUN DUN!
Follow @essential-avengers because I don’t know when I’ve been more excited to get to the next issue! Like and reblog?
#Avengers#Essential Avengers#Quasimodo#the Vision#Captain America#Scarlet Witch#Hercules#Starfox#Captain Marvel#Monica Rambeau#Hawkeye#Mockingbird#Tigra#Wonder Man#with a great new costume#Vision takes over the world#these things happen#from time to time#essential marvel liveblogging
Language supermodel: How GPT-3 is quietly ushering in the A.I. revolution https://ift.tt/3mAgOO1
OpenAI’s GPT-2 text-generating algorithm was once considered too dangerous to release. Then it got released — and the world kept on turning.
In retrospect, the comparatively small GPT-2 language model (a puny 1.5 billion parameters) looks paltry next to its sequel, GPT-3, which boasts a massive 175 billion parameters, was trained on 45 TB of text data, and cost a reported $12 million (at least) to build.
“Our perspective, and our take back then, was to have a staged release, which was like, initially, you release the smaller model and you wait and see what happens,” Sandhini Agarwal, an A.I. policy researcher for OpenAI, told Digital Trends. “If things look good, then you release the next size of model. The reason we took that approach is because this is, honestly, [not just uncharted waters for us, but it’s also] uncharted waters for the entire world.”
Jump forward to the present day, nine months after GPT-3’s release last summer, and it’s powering upward of 300 applications while generating a massive 4.5 billion words per day. Seeded with only the first few sentences of a document, it’s able to generate seemingly endless more text in the same style — even including fictitious quotes.
Is it going to destroy the world? Based on past history, almost certainly not. But it is making some game-changing applications of A.I. possible, all while posing some very profound questions along the way.
What is it good for? Absolutely everything
Recently, Francis Jervis, the founder of a startup called Augrented, used GPT-3 to help people struggling with their rent to write letters negotiating rent discounts. “I’d describe the use case here as ‘style transfer,'” Jervis told Digital Trends. “[It takes in] bullet points, which don’t even have to be in perfect English, and [outputs] two to three sentences in formal language.”
Powered by this ultra-powerful language model, Jervis’s tool allows renters to describe their situation and the reason they need a discounted settlement. “Just enter a couple of words about why you lost income, and in a few seconds you’ll get a suggested persuasive, formal paragraph to add to your letter,” the company claims.
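Mechanically, a tool like this can be a thin wrapper around the GPT-3 completions endpoint. Here is a minimal sketch using the OpenAI Python SDK of that era; the prompt wording and parameters are invented for illustration and are not Augrented's actual ones:

```python
# Minimal "style transfer" sketch: rough bullet points in, formal
# paragraph out. Uses the era's OpenAI completions API; the prompt
# and parameters are illustrative, not Augrented's actual ones.
import openai

openai.api_key = "sk-..."  # placeholder

bullets = (
    "- lost bartending job in march\n"
    "- paid rent on time for 3 years\n"
    "- asking 20% off for 2 months"
)

prompt = (
    "Rewrite the following notes as a short, formal, persuasive "
    "paragraph for a letter to a landlord:\n\n"
    f"{bullets}\n\nParagraph:"
)

response = openai.Completion.create(
    engine="davinci",   # GPT-3 base engine name at launch
    prompt=prompt,
    max_tokens=120,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```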
This is just the tip of the iceberg. When Aditya Joshi, a machine learning scientist and former Amazon Web Services engineer, first came across GPT-3, he was so blown away by what he saw that he set up a website, www.gpt3examples.com, to keep track of the best ones.
“Shortly after OpenAI announced their API, developers started tweeting impressive demos of applications built using GPT-3,” he told Digital Trends. “They were astonishingly good. I built [my website] to make it easy for the community to find these examples and discover creative ways of using GPT-3 to solve problems in their own domain.”
Fully interactive synthetic personas with GPT-3 and https://t.co/ZPdnEqR0Hn
They know who they are, where they worked, who their boss is, and so much more. This is not your father's bot… pic.twitter.com/kt4AtgYHZL
— Tyler Lastovich (@tylerlastovich) August 18, 2020
Joshi points to several demos that really made an impact on him. One, a layout generator, renders a functional layout by generating JavaScript code from a simple text description. Want a button that says “subscribe” in the shape of a watermelon? Fancy some banner text with a series of buttons the colors of the rainbow? Just explain them in basic text, and Sharif Shameem’s layout generator will write the code for you. Another, a GPT-3 based search engine created by Paras Chopra, can turn any written query into an answer and a URL link for providing more information. Another, the inverse of Francis Jervis’ by Michael Tefula, translates legal documents into plain English. Yet another, by Raphaël Millière, writes philosophical essays. And one other, by Gwern Branwen, can generate creative fiction.
“I did not expect a single language model to perform so well on such a diverse range of tasks, from language translation and generation to text summarization and entity extraction,” Joshi said. “In one of my own experiments, I used GPT-3 to predict chemical combustion reactions, and it did so surprisingly well.”
More where that came from
The transformative uses of GPT-3 don’t end there, either. Computer scientist Tyler Lastovich has used GPT-3 to create fake people, complete with backstories, who can then be interacted with via text. Meanwhile, Andrew Mayne has shown that GPT-3 can be used to turn movie titles into emoji. Nick Walton, chief technology officer of Latitude, the studio behind the GPT-generated text adventure game AI Dungeon, recently did the same to see if it could turn longer strings of text description into emoji. And Copy.ai, a startup that builds copywriting tools with GPT-3, is tapping the model for all it’s worth, with monthly recurring revenue of $67,000 as of March and a recent $2.9 million funding round.
“Definitely, there was surprise and a lot of awe in terms of the creativity people have used GPT-3 for,” Sandhini Agarwal, an A.I. policy researcher for OpenAI, told Digital Trends. “So many use cases are just so creative, and in domains that even I had not foreseen it would have much knowledge about. That’s interesting to see. But that being said, GPT-3 — and this whole direction of research that OpenAI pursued — was very much with the hope that this would give us an A.I. model that was more general-purpose. The whole point of a general-purpose A.I. model is [that it would be] one model that could like do all these different A.I. tasks.”
Many of the projects highlight one of the big value-adds of GPT-3: the lack of training it requires. Machine learning has been transformative in all sorts of ways over the past couple of decades, but it typically requires a large number of training examples to output correct answers. GPT-3, on the other hand, has a “few-shot” ability that allows it to be taught to do something with only a small handful of examples.
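“Few-shot” here just means embedding a handful of worked examples in the prompt itself; there is no training pipeline and no gradient updates. A minimal sketch, loosely echoing the movies-to-emoji demo mentioned above (the exact prompt format is an assumption; GPT-3 tolerates many variations):

```python
import openai

openai.api_key = "sk-..."  # placeholder

# The entire "training set" is three worked examples pasted straight
# into the prompt; the model infers the task from the pattern.
prompt = """Convert movie titles into emoji.

Back to the Future: 👨👴🚗🕒
Batman: 🤵🦇
Transformers: 🚗🤖

Spider-Man:"""

completion = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=12,
    temperature=0.8,
    stop="\n",  # one line of emoji is one answer
)
print(completion.choices[0].text)  # e.g. " 🕷🕸" -- no fine-tuning involved
```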
Plausible bull***t
GPT-3 is highly impressive. But it poses challenges too. Some of these relate to cost: For high-volume services like chatbots, which could benefit from GPT-3’s magic, the tool might be too pricey to use. (A single message could cost 6 cents, which, while not exactly bank-breaking, certainly adds up.)
Others relate to its widespread availability: it’s likely going to be tough to build a startup exclusively around GPT-3, since fierce competition will drive down margins.
Another is the lack of memory; its context window runs a little under 2,000 words at a time before, like Guy Pearce’s character in the movie Memento, its memory is reset. “This significantly limits the length of text it can generate, roughly to a short paragraph per request,” Lastovich said. “Practically speaking, this means that it is unable to generate long documents while still remembering what happened at the beginning.”
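In practice, developers work around the reset by chunking their input so that each request stands alone. A rough sketch of the idea (the helper below is hypothetical, and real code would count tokens rather than words, so treat the 2,000-word figure as an approximation):

```python
# Hypothetical helper: split a long document into chunks that each fit
# within a single request. Production code would count tokens, not
# words; 1,500 words is a conservative stand-in for the window size.
def chunk_words(text, max_words=1500):
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# Each chunk is processed independently. The model retains nothing
# between calls, which is exactly the Memento problem Lastovich
# describes: chunk 12 has no idea what happened in chunk 1.
for chunk in chunk_words(open("long_document.txt").read()):
    pass  # send the chunk to the API; stitch the partial results together
```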
Perhaps the most notable challenge, however, also relates to its biggest strength: Its confabulation abilities. Confabulation is a term frequently used by doctors to describe the way in which some people with memory issues are able to fabricate information that appears initially convincing, but which doesn’t necessarily stand up to scrutiny upon closer inspection. GPT-3’s ability to confabulate is, depending upon the context, a strength and a weakness. For creative projects, it can be great, allowing it to riff on themes without concern for anything as mundane as truth. For other projects, it can be trickier.
Francis Jervis of Augrented refers to GPT-3’s ability to “generate plausible bullshit.” Nick Walton of AI Dungeon said: “GPT-3 is very good at writing creative text that seems like it could have been written by a human … One of its weaknesses, though, is that it can often write like it’s very confident — even if it has no idea what the answer to a question is.”
Back in the Chinese Room
In this regard, GPT-3 returns us to the familiar ground of John Searle’s Chinese Room. In 1980, Searle, a philosopher, published one of the best-known A.I. thought experiments, focused on the topic of “understanding.” The Chinese Room asks us to imagine a person locked in a room with a mass of writing in a language that they do not understand. All they recognize are abstract symbols. The room also contains a set of rules that show how one set of symbols corresponds with another. Given a series of questions to answer, the room’s occupant must match question symbols with answer symbols. After repeating this task many times, they become adept at performing it — even though they have no clue what either set of symbols means, merely that one corresponds to the other.
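Stripped of the philosophy, the room is just a lookup table. A toy sketch of the mechanism Searle describes (the symbol pairs are invented for illustration):

```python
# The Chinese Room as code: question symbols are matched to answer
# symbols by rule. Nothing here ever consults what the symbols mean.
RULEBOOK = {
    "你好吗?": "我很好。",  # "How are you?" -> "I am well."
    "你是谁?": "我是人。",  # "Who are you?" -> "I am a person."
}

def occupant(question):
    # Pure correspondence: given this shape, hand back that shape.
    return RULEBOOK.get(question, "我不明白。")  # "I don't understand."

print(occupant("你好吗?"))  # fluent output, zero comprehension
```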
GPT-3 is a world away from the kinds of linguistic A.I. that existed at the time Searle was writing. However, the question of understanding is as thorny as ever.
“This is a very controversial domain of questioning, as I’m sure you’re aware, because there’s so many differing opinions on whether, in general, language models … would ever have [true] understanding,” said OpenAI’s Sandhini Agarwal. “If you ask me about GPT-3 right now, it performs very well sometimes, but not very well at other times. There is this randomness in a way about how meaningful the output might seem to you. Sometimes you might be wowed by the output, and sometimes the output will just be nonsensical. Given that, right now in my opinion … GPT-3 doesn’t appear to have understanding.”
An added twist on the Chinese Room experiment today is that GPT-3 is not programmed at every step by a small team of researchers. It’s a massive model that’s been trained on an enormous dataset consisting of, well, the internet. This means that it can pick up inferences and biases that might be encoded into text found online. You’ve heard the expression that you’re an average of the five people you surround yourself with? Well, GPT-3 was trained on almost unfathomable amounts of text data from multiple sources, including books, Wikipedia, and other articles. From this, it learns to predict the next word in any sequence by scouring its training data to see word combinations used before. This can have unintended consequences.
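The “predict the next word” mechanic is easier to see in miniature. Below is a toy version of the statistical idea only (GPT-3 replaces these raw counts with a 175-billion-parameter neural network over tokens), but the unintended-consequences problem is already visible: the prediction can only echo whatever the training text happened to say.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in the
# training text, then always suggest the most common successor.
training_text = "the nurse said she was tired and the nurse said she was busy"

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict(word):
    return follows[word].most_common(1)[0][0]

print(predict("nurse"))  # "said"
print(predict("she"))    # "was" -- the model can only parrot its data,
                         # biases included
```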
Feeding the stochastic parrots
This challenge with large language models was first highlighted in a groundbreaking paper on the subject of so-called stochastic parrots. A stochastic parrot — a term coined by the authors, who included among their ranks the former co-lead of Google’s ethical A.I. team, Timnit Gebru — refers to a large language model that “haphazardly [stitches] together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning.”
“Having been trained on a big portion of the internet, it’s important to acknowledge that it will carry some of its biases,” Albert Gozzi, another GPT-3 user, told Digital Trends. “I know the OpenAI team is working hard on mitigating this in a few different ways, but I’d expect this to be an issue for [some] time to come.”
OpenAI’s countermeasures to defend against bias include a toxicity filter, which filters out certain language or topics. OpenAI is also working on ways to integrate human feedback in order to specify which areas the model should not stray into. In addition, the team controls access to the tool, so that applicants with harmful uses in mind are not granted access.
“One of the reasons perhaps you haven’t seen like too many of these malicious users is because we do have an intensive review process internally,” Agarwal said. “The way we work is that every time you want to use GPT-3 in a product that would actually be deployed, you have to go through a process where a team — like, a team of humans — actually reviews how you want to use it. … Then, based on making sure that it is not something malicious, you will be granted access.”
Some of this is challenging, however — not least because bias isn’t always a clear-cut case of using certain words. Jervis notes that, at times, his GPT-3 rent messages can “tend towards stereotypical gender [or] class assumptions.” Left unattended, it might assume the subject’s gender identity on a rent letter, based on their family role or job. This may not be the most grievous example of A.I. bias, but it highlights what happens when large amounts of data are ingested and then probabilistically reassembled in a language model.
“Bias and the potential for explicit returns absolutely exist and require effort from developers to avoid,” Tyler Lastovich said. “OpenAI does flag potentially toxic results, but ultimately it does add a liability customers have to think hard about before putting the model into production. A specifically difficult edge case to develop around is the model’s propensity to lie — as it has no concept of true or false information.”
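To make the flagging concrete, the crudest possible version is a keyword check like the sketch below. This is purely illustrative: OpenAI's actual filter is reportedly a separate trained classifier rather than a word list, but the place such a check occupies in the pipeline is the same.

```python
# A deliberately naive stand-in for a toxicity filter: screen completions
# for blocklisted terms before they reach the user. The terms here are
# hypothetical placeholders, and a real filter would be a trained model.
BLOCKLIST = {"badword1", "badword2"}

def screen(completion_text):
    lowered = completion_text.lower()
    if any(term in lowered for term in BLOCKLIST):
        return None  # withhold the completion and log it for review
    return completion_text

output = screen("a perfectly harmless completion")
print(output if output is not None else "[flagged]")
```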
Language models and the future of A.I.
Nine months after its debut, GPT-3 is certainly living up to its billing as a game changer. What once was purely potential has shown itself to be potential realized. The number of intriguing use cases for GPT-3 highlights how a text-generating A.I. is a whole lot more versatile than that description might suggest.
Not that it’s the new kid on the block these days. Earlier this year, GPT-3 was overtaken as the biggest language model. Google Brain debuted a new language model with some 1.6 trillion parameters, making it nine times the size of OpenAI’s offering. Nor is this likely to be the end of the road for language models. These are extremely powerful tools — with the potential to be transformative to society, potentially for better and for worse.
Challenges certainly exist with these technologies, and they’re ones that companies like OpenAI, independent researchers, and others must continue to address. But taken as a whole, it’s hard to argue that language models are not turning out to be one of the most interesting and important frontiers of artificial intelligence research.
Who would’ve thought text generators could be so profoundly important? Welcome to the future of artificial intelligence.
Text
Personalization
The following is going to sound unimaginable, but it is needed. Otherwise, corporations, with their understandably intrinsic goals, not only dictate public and governmental behavior (and can more effectively disable competition); governments also become dependent on corporations for data and more, which incidentally makes the precondition below unsatisfiable, denying the government its governing role.
If the above matters, then:
Before anything else, initiatives must be launched, with the intent of completing them, to reach a status where governments and international governance bodies remain fully public, true to their definition, including in their digital functions (notable among these are personal identification, such as facial recognition, and the judiciary); where they avoid private sponsoring of public initiatives or institutions, treat their populations outstandingly well, and keep lobbying and similar pressures at bay; and where private enterprise does not duplicate governmental functions. Any development of digital versions of governmental functions should take place within the government, and currently existing commercial solutions should be audited and transferred to the government. Not doing so would not keep the government small: its size will balloon just the same, as dictated by the tools used today, but it will be in hands other than those of the people’s representatives.
Once that prerequisite is fulfilled, services must ensure front-door governmental access to data, along with otherwise total privacy for users. The criterion on censoring and other personalization set out below must be formulated in case-specific terms by governments (engaging experts from outside institutions to govern this issue is currently less reliable) and complied with by ISPs, platforms, cloud operators and other enterprises. Personalization here includes, for user and provider profiles alike, modification of content such as films, books, articles, images, music, interviews, games, interfaces, and online shop range and pricing: the content that is consumed can technically be modified by the platform so that it no longer reflects its declared originators’ views or character.
Personalization must be prohibited where it would potentially enhance the efficiency or effectiveness of sundering or exploitative processes, or of processes potentially reducing upward health, social (including privacy and habitat quality), knowledge or financial mobility, or the freedom and ability of physical movement. It must also be prohibited where it would not be desired by the parties in question, which must be assumed by default. Otherwise, personalization may be authorized only when modifications are clearly stated, with details of the applied changes and the reasoning behind them.
In addition, interfaces must be regulated for self-explanatory accessibility to the elderly, and against pushing the user toward behavior that may be detrimental to them. One example is an advertised great “carrot” offer, less advantageous for the provider but one the provider has no choice but to officially keep available, displayed in a way that makes the user likely to miss its button even when they came for that offer in the first place. Another is an “accept all cookies” button displayed in a manner that makes the user automatically assume it confirms their sub-selection of cookies, or a flow in which the mistake of providing more data rather than less is not as easily correctable as it is committable. The idea carries over to administrative processes, where default preferences should always favor the individual, and opt-outs should be used to reduce items that may go against the individual’s integrity or advantage. Organ and sample donation is one such item: especially during Covid, when most people are overwhelmed with other issues, a new administrative decision making donation the default goes unnoticed, and then many people die having incidentally donated organs and samples for any purpose, even where that went against their will.
Rights are the same online and offline; therefore their embodiment and enforcement depend on the topological conditions of the environment in which they are enforced. The meaning of free speech online depends on the algorithms being used, which may modify persons’ behavior, making the whole system different in nature. Therefore, any insights obtained about “free” speech within the dynamics of current online media may not reflect the opinions and desires you would see in the same persons under less trying communication circumstances, which furthermore makes this framework inappropriate for experimentation. A peculiar example comes from a social network that recently ran an editorial theme (the vaccine passport) on which users could create posts arguing for their opinions. Every post, and every comment on a post, was automatically flanked with an image stemming from the theme page (though not visible as such on the initial theme page), carrying text stating that “I”, intuitively the person commenting because it appeared under their name, was of opinion x (positive) about the theme, even when the comments themselves said the opposite. This falsely spread the impression that most people were of opinion x, making it impossible to opine non-x at a statistically relevant scale with respect to one’s contribution to the building of public opinion through one’s outreach as an individual.
Online (or offline) conversations on themes or policy propositions must be treated as feedback to be positively integrated, rather than as clues about how to modify public views in favor of policies that are desirable from a top-down point of view, or about how to make public views irrelevant to the implementation of those policies. Neither these conversations nor other media should be used as tools to steer public opinion in ways that potentially or eventually harm the public.
The speed and size of processes change the nature of a system, so it is essential to design the implementations of a system’s goal in the light of those attributes. Algorithms, like AI, require regulation of outcomes, because they embody some level of “terms of exploitation”. The current affirmations that AI is eventually and necessarily uncontrollable, as well as those hinting at ideas like AI “pain”, “feelings”, “soul”, “consciousness”, or “creativity”, are presumably mostly due to the power advantage that such beliefs or framings would provide the owners of returns on AI, through the shaping of regulations, structures and social behaviors. If the outcomes are clearly unpredictable, then the product is not ready to be deployed, irrespective of financial or any other purported pressures to deploy yesterday. If the outcomes are possibly, though not presumably, unpredictable, then the product is still not ready for deployment, but there is a market for compliance insurance.
Finally, I would like to emphasize that the general population is well advised not to remain in a reactive mode toward questions such as the above, but to conceive ahead of time of where this takes us (it is readable and derivable to a good extent from openly accessible information and from our experience of life today and in the documented past), to decide where else it would be better to go from here, and to preemptively steer the boat in that direction instead. Picture a network of hallways compartmentalized by doors that only open in one direction: you want to think twice before you open the door that is the easiest to open, like the accept-all-cookies button.
Due to systemic mechanisms which I aim to elaborate on in further writings, the underlying advice is especially valid for those of us in a position to exercise power, because what looks like a future advantage is a powerful boomerang that leads the groups in power to become the very victims of the principles they embraced, no matter what else the future holds, even an imminent asteroid cataclysm. In other words, externalities are internalities in all cases.
Text
Seven Types of Game Devs And The Games They Make
The Computer Science Student
The computer science student had to write a game for class in the fourth semester. The game must demonstrate OOP design and programming concepts, and a solid grasp of C++.
This game is written not to be fun to play, but to demonstrate your skill to the professors - or to their poor assistants who have to read the code and grade the accompanying term paper. The core loop of the game is usually quite simple, but there are many loosely connected mechanics in there that don’t really fit together. For example, whatever the core gameplay is, there could be birds in the sky doing some kind of AI swarm behaviour, there could be physics-enabled rocks on the floor, there could be a complicated level and unit editor with a custom XML-based format, and all kinds of weird shaders and particle effects.
And with all this tech infrastructure and OOP, there are just two types of enemies. That’s just barely enough to show you understand how inheritance works in C++.
The core gameplay is usually bad. Un-ergonomic controls, unresponsive game feel, flashy yet impractical 3D GUI widgets make it hard to play - but not actually difficult to beat, just unpleasant. The colours are washed-out, and everything moves a bit too slow. There is no overarching design, the moment-to-moment gameplay is not engaging, and the goal feels like an afterthought.
But that’s ok. It is to be expected. The professors are CS professors. They (or rather their assistants) don’t grade the game based on whether the units are balanced, whether the graphics are legible, or whether the game is any fun at all. They grade on understanding and correctly applying what you learned in class, documentation, integration of third-party libraries or given base code, and correct implementation of an algorithm based on a textbook.
The CS student usually writes a tower defense game, a platformer, or a SHMUP. After writing two or three games like this, he usually graduates without ever having gotten better at game design.
The After-Hours Developer
The after-hours programmer has a day job doing backend business logic for a B2B company you’ve never heard of.
This kind of game is a labour of love. Screenshots might not look impressive at first glance. There is a lot going on, and the graphics look a bit wonky. But this game is not written to demonstrate mastery of programming techniques and ability to integrate third-party content, tools and libraries. This game was made, and continues to be developed, because it is fun to program and to design.
There is a clear core loop, and it is fun and engaging. The graphics are simple and functional, but some of them are still placeholder art. This game will never be finished, so there will always be placeholders as long as the code gets ahead of the art. There is no XML or cloud-based savegame in there just because that is the kind of thing that would look impressive in a list of features.
More than features, this game focuses on content and little flourishes. This game has dozens of skills, enemies, weapons, crafting recipes, biomes, and quests. NPCs and enemies interact with each other. There is a day-night cycle and a progression system.
While the CS student game is about showing off as many tech/code features as possible, this kind of programmer game is about showing off content and game design elements and having fun adding all this stuff to the game.
This game will be finished when the dev gets bored with adding new stuff. Only then will he plan to add a beginning and an ending to the game within the next six months, and go over the art to make it look coherent. The six months turn into two years.
The after-hours developer often makes RPGs, metroidvanias, or rogue-like games. These genres have a set of core mechanics (e.g. combat, loot, experience, jumping) and opportunity for a bunch of mechanics built around the core (e.g. pets, crafting, conversation trees, quest-giving NPCs, achievements, shops/trading, inventory management, collecting trinkets, skill trees, or combo attacks).
The First-Time Game Jammer
The first-time game jammer wants to make his first game for an upcoming game jam. He knows many languages, but he does a lot of machine learning with torch7 for his day job, so he has decided to use LÖVE2D or pico-8 to make a simple game.
This guy has no training in digital art, game design, or game feel. But he has a working knowledge of high-school maths, physics, and logic. So he could write his own physics engine, but he doesn’t know about animation or cartoon physics. He doesn’t waste time writing a physics engine, though. He just puts graphics on the screen. These graphics are abstract and drawn in mspaint. The numbers behind everything are in plain sight. Actions are either triggered by clicking on extradiegetic buttons or by bumping into things.
The resulting game is often not very kinetic or action-oriented. In this case, it often has a modal/stateful UI, or a turn-based economy. If it is action-oriented, it could be a simple platformer based around one core mechanic and not many variations on it. Maybe it’s a novel twist on Pong or Tetris.
The first-time game jammer successfully finished his first game jam by already knowing how to program in Lua, copying a proven game genre and not bothering to learn any new tools during the limited jamming time. Instead, he wrote the code to create every level by hand, in separate .lua files, using GNU EMACS.
The Solo Graphic Designer
The graphic designer has a skill set and approach opposite to those of the two programmers described above. He is about as good at writing code as the programmer is at drawing images in mspaint. The graphic designer knows all about the principles of animation, but has no idea how to code a simple loop to simulate how a tennis ball falls down and bounces off walls or the ground. He used to work in a team with coders, but this time he wants to make his own game based on his own creative vision.
The graphic designer knows all about animation tools, 3D modelling, composition. He has a graphic tablet and he can draw. He knows all about light and shade and gestalt psychology, but he can’t write a shader to save his life.
Naturally, the graphic designer plays to his strengths and uses a game engine with an IDE and a visual level editor, like Unity3D, Construct, or GameMaker.
The graphic designer makes a successful game by doing the opposite of what the coder does, because he does it well. The screenshots look good, and his game gets shared on Twitter. He struggles writing the code to aim a projectile at the cursor in a twin-stick shooter, but we live in a world of Asset Stores and StackOverflow.
The resulting game is a genre-mixing thingy full of set pieces, cut scenes, and visual-novel-style conversations. The actual gameplay is walking around and finding keys for locks, but it’s cleverly recontextualised with a #deep theme and boy does it look pretty.
The Engine Coder
The engine coder is like the CS student on steroids. He has nothing to prove. He knows his C++. He lives in a shack in Alaska, and pushes code to GitHub over a satellite connection. He also knows his Lua, C#, Python, and Haskell. The engine coder writes a physics engine, particle system, dialogue engine, planning-based mob AI, savegame system, a network layer and GUI widget library.
He has written five simple demos for the engine: A first-person walking simulator, a third-person platformer, a very pretty glowing orb swarm shader thingy, a non-interactive simulation of a flock of sheep grazing and a pack of wolves occasionally coming in to cull the herd with advanced predator AI, and a game where you fly a spaceship through space.
Somebody comments in the forums that it’s hard to even write Pong or Tetris in the engine. The Engine Coder is more concerned with optimising batched rendering and automatically switching LoD in the BSP tree so you can land on planets in space without loading screens.
The Overeager Schoolboy
The schoolboy has an idea for a game. He saves his money to buy Game Maker (or RPG Maker) and tells all his friends about his amazing idea. Then he makes a post about it on tumblr. Then he makes a sideblog about the game and posts there too, tagged #game development.
Unfortunately, the schoolboy is 15, and while he is talented, he doesn’t really know how to program or draw. He’s good at math, and he can draw with a pencil. Unfortunately, he wants to learn digital art, level design, and programming all in one go. He already knows all the characters for his game, and he writes posts about each of them individually, with pencilled concept art and flavourful lore.
Even more unfortunately, our schoolboy is hazy on how big the game is actually going to be, and what core mechanic the game should be based around.
After designing sprite sheets and portraits for ten characters you could add to your party, plus the Big Bad End Boss, he realises that he has no idea how to get there, or how to make the first level. He starts over with another set of tools and engine, but he doesn’t limit his scope.
In an overdramatic post two months later, he apologises to the people who were excited to play the game when it’s done. A week later he deletes the tumblr. He never releases a playable demo. He never gets constructive feedback from game developers.
The Game Designer’s Game Designer
The game designer’s game designer is not exactly a household name, but he has done this for a while. While you have never heard of him, the people who made the games you like have. All your favourite games journalists also have. Through this connection, many concepts have trickled down into the games you play and the way your friends talk to you about games they like.
When the game designer’s game designer started out, there was no way to learn game design, so he probably studied maths, psychology, computer science, industrial design, or music theory.
His games fall outside of genres, and not just in the sense of mixing two genres together. Sometimes they sit entirely outside established genres; sometimes they are clearly inside the tradition of RTS games, rogue-likes or clicker games, but they feel like something completely new.
The games of the game designer’s game designer are sometimes released for free, out of the blue, and sometimes commissioned for museums and multimedia art festivals. Some of them are about philosophy, but they don’t merely mention philosophical concepts, or use them to prop up a game mechanic (cloning and transporters, anyone?). They explore concepts like “the shortness of life” or “capitalism” or “being one with the world” or “unfriendly AI” through game mechanics.
But they also explore gameplay tropes like “inventory management” or “unidentified magic items” or “unit pathfinding”.
Sometimes bursts of multiple games are released within weeks, after years of radio silence. Should you ever meet the game designer’s game designer, you tell him that you got a lot out of the textbook he wrote, but you feel guilty that you never played one of his games. So you lie and tell him you did.
Text
It’s part 2 of Episode 115, and you can catch up with the zany antics of part 1 right here if you missed it!
(If you enjoy these recaps and want to see them continue, please think about helping me navigate my currently highly unstable life by becoming a patron. I am having a bit of a Time right now in my real life and it would mean the world to me!)
So iiiiiit’s Kaiba vs Noah in the Irritating Boy Prodigy World Championship and Kaiba is tired of Noah’s posturing and “pretentious” statements. He claims that such talk...
Which is really quite lovely and poetic, although also perhaps a touch unclear as an example of card game trash talk, no? And of course, I wouldn’t go so far myself as to suggest it is also bordering on, ahem, pretentious. I will leave that to the judgement of my honoured reader.
I do think it’s cool that Seto sees in Noah a version of himself on steroids, the one who stayed a good Kaiba son, and recognises all that disdain for humanity as an excuse to avoid emotions. I mean, it’s nice he can recognise it, although it would be EVEN NICER if Seto didn’t also reasonably frequently disguise his normal human emotions with disdain but HEY baby steps.
Anyway he confronts Noah’s story of Gozaburo’s intentions and fatherly love with a direct challenge; if Gozaburo really intended for AI!Noah to take over KaibaCorp instead of Seto, then...
This triggers a loooong flashback from Noah, during which, I have to imagine, Seto et al just stood around in awkward and slightly huffy silence, presuming Noah was just buffering.
Lil Noah (same size, probably, but younger) wakes up after a “bad dream” in which he was killed in a traffic accident. He discovers he’s locked in his bedroom with NO MAIDS which is very unusual for him. One wall turns semi-transparent which is also pretty unusual, I assume, and Gozaburo appears.
It’s translated as “Father” which is usually 父 (chichi) or more formally お父さん (otosan) in Japanese, but I thiiiink he’s calling him 父上 (chichiue) and sort of mushing the sounds together cause it sounds like he’s saying something like “chiiue”. I dunno if that’s more or less formal but it literally means “father-up” or “father-above” soooo... check out the notes and see if someone who understands Japanese better than I do added any information!
From Gozaburo’s perspective, he now has less of a son and more of a big chibi tamagotchi on a screen in his creepy sci-fi basement
“Can you, uh ... step back. You’re freaking me out. Creepy-ass ghost boy.”
Gozaburo is actually really really Nice and Kind about the whole thing which I personally think is bullshit because Gozaburo is an abusive monster and yeah you get real life abusive monsters who are nice to their own (dead?) kids but “poor Gozabuwo he’s so saaad he bought his dead kid a puppy” is not the hard-hitting addition to the Kaiba brothers’ backstory that I asked for.
Here’s the fuckin puppy:
It is not, Mr Gozaburo, it is creepy. It looks like a sad puppy-kitten hybrid. You pushed the KAWAII slider too far.
Anyway for a while everything is totally fine and great in Dead Kid Digital Paradise. Noah gets a trip to space AND a cake for his birthday whereas usually children only get ONE of those two things and they don’t even get to choose which (get in the fucking rocket, Timmy). Noah sometimes forgets he’s not real.
“Ha yes, when I uploaded render=>seas[d=HD,colour=#2B94FC] I definitely saw exactly what you saw...”
But Noah has become disillusioned with the NPCs in his digital world, even “Sunny” the miserable hybrid. Gozaburo asks after Sunny and Noah assures him Sunny is TOTALLY FINE
“That’s what I most wanted to add to my living arrangements. A rabid animal auditioning for Mad Max 5: Also The Pets Went Feral.”
So then, making light birthday conversation, Noah’s like “hey dad you know what’s cool, if we tried hard and believed in ourselves...”
Gozaburo is like “huh. how many people would survive?” and Noah’s like, “eh probably 3%” and I was a lil worried they were going to have Gozaburo go all ~genocide is wrong, what have I done to raise this monster~ but instead he was like, “hmm seems a little low? anyway interesting idea but now I’m bored byeeeee”
Apparently that brush-off was Noah’s first indication that Gozaburo was not quite as into his digital son as he was into his physical-reality son
Shortly thereafter, Seto and Mokuba showed up, but apparently Noah was never mentioned to them and they never noticed Gozaburo’s time in his creepy child-tamagotchi-dungeon and I DON’T KNOW guys I just don’t like the implications of this post-hoc rejiggery of a very strong backstory. You know what Seto’s backstory did not need? The removal of all little!Seto’s agency, in order to reframe Gozaburo’s abuse as arising from grief.
Anyway, Kaiba’s like,
And I’m like, you’re tragically lost to some undergraduate philosophy discussion group, my sweet darling pretentious teenager. Can you IMAGINE how insufferable 16-year-old private-jet-owning history-refusing god-killing Seto motherfucking Kaiba would be in your undergrad philosophy class? Some incisive but retiring young woman is, in another universe, as we speak/type/read, raising both eyebrows at something he just declaimed in response to the long-suffering TA and adding it to her wildly successful That Crazy Rich Kid In My Phil Class twitter.
Despite his ridiculousness, Kaiba does make an excellent point:
Noah has shunned his humanity, cocooning himself in contextless (and GEOLOGICALLY INACCURATE) knowledge and smug disdain. On the other hand, people could definitely turn out WORSE after like 10? years in near-total isolation with godlike powers over a virtual reality but no ability to affect the outside world.
Noah doesn’t like Kaiba pointing out his flaws so he tries to get the conversation back on track by making a VERY UNDERWHELMING “prediction”
He’s holding up his fingers like that because he ~~predicts~~ that this ~~mysterious event~~ will take place on the third turn. It’s currently the second turn. AND HE IS IN CONTROL OF LIKE THEIR ENTIRE SENSORY EXPERIENCE. It’s not a ~~~~prediction~~~~ it’s just a fucking THREAT and you just know, you just KNOW, when he follows through on his THREAT, next fuCKING TURN, he’s gonna be all like ohoho I’m sooooo SMART I ~~~~~~predicted~~~~~~ this very outcome ohohohoho.
Fuck you Noah.
In any case, no one is particularly fazed by his prediction THREAT because after all,
“Shock is not one of his core emotions. They are contemptuous disdain, glee, attention-seeking, Mokuba!, DUEL, and I must kill god and I will.”
#that's a joke#Seto has LOADS of emotions#he's just got a very Seto Kaiba way of expressing them#Yu-Gi-Oh!#sparklefists watches ygo#episode 115#seto kaiba#Noah#yugi#gozaburo kaiba
Text
The Non-Violent Games of E3 2019
Following requests on Twitter, I’ve compiled a roundup of all the non-violent games announced and showcased at E3 2019. Hoping this can become an annual thing.
There are 41 games here across a variety of platforms, so without further ado...
Afterparty
Developer: Night School Studios Platforms: PC, Mac, Xbox One Release Date: 2019
Milo and Lola are best buds who recently died and find themselves in Hell. There is only one way to escape: outdrink Satan and he'll let them return to Earth. This point-and-click adventure sees you on the bender of your afterlife, where you'll play beer pong, flirt with Satan and change the very structure of Hell.
Animal Crossing: New Horizons
Developer: Nintendo Platforms: Switch Release Date: March 20, 2020
The latest entry in Nintendo's charming life sim series starts on a deserted island. Players begin with nothing more than a tent, but must craft enough materials and cultivate the land until they can build an entire village -- all while repaying their debt to the insidious Tom Nook, of course.
Assassin's Creed Odyssey: Discovery Tour
Developer: Ubisoft Quebec Platforms: PC Release Date: Autumn 2019
Following on from the Assassin's Creed Origins version, this presents players with a combat-free version of Odyssey's Ancient Greece. They'll be able to explore it at will, or follow tours and suggested paths that will teach them about various aspects of life and culture at the time. This time around, there are also quizzes to see how much you have learned. It will be released for free, with a standalone version for people who don't own the game.
Catan
Developer: Asmodee Digital Platforms: Switch Release Date: June 2019
The digital adaptation of this classic board game is coming to Nintendo Switch. For those who haven't played: a landscape is randomly generated, its terrain and the produce gained from it are split between players, and your task is to build the biggest settlement. Trade with other players to get the resources you need to build towns and roads, but be wary of whether the items you give them are helping their own progress.
Circuit Superstars
Developer: Original Fire Games Platforms: PC, PS4, Xbox One, Switch Release Date: 2020
In the absence of a new Micro Machines, Square Enix Collective is publishing this toy-like top-down racer. Players control stylised cars in a series of circuit races where they need to consider their pit stop strategy rather than just floor it and hope to reach the finish first. Vehicles will range from classic cars to rally cars and trucks.
eFootball PES 2020
Developer: PES Team Platforms: PC, PS4, Xbox One Release Date: September 10, 2019
The long-running Pro Evolution Soccer gets a slight esports-driven name change this year, but still aims to deliver the most realistic football available. The Master League mode is being revamped, and a new mode, Matchday, ties in with real-world matches, calling on players to choose a side and hope that their team's victories help them in-game.
Fall Guys: Ultimate Knockout
Developer: Mediatonic Platforms: PC, PS4 Release Date: 2020
Published by Devolver Digital, this colourful outing is essentially a Battle Royale game... but without combat. Players control one of 100 little blob people and strive to survive a series of obstacle courses, like climbing a hill with boulders falling towards them or crashing through brick walls that may or may not be solid. If you've ever watched Takeshi's Castle, you get the idea.
Forza Horizon 4: Lego Speed Champions
Developer: Playground Games Platforms: Xbox One, PC Release Date: June 13, 2019
The Lego Speed Champions expansion adds a whole new region to the best-selling racer where everything is made of (you guessed it) Lego. Players will also be able to hop into Lego vehicles, race around Lego tracks and smash through Lego walls and trees. Because Lego means everything is awesome (And now that song's stuck in your head. Sorrynotsorry).
FIFA 20
Developer: Electronic Arts Platforms: PS4, Xbox One, Switch Release Date: September 27, 2019
This year's FIFA revamps key systems like shooting and AI defending but also adds a brand new FIFA Street-style mode, Volta. This veers away from the realistic, straightforward football we're used to and invites players to be more creative in matches where teams have between three and five players rather than the usual eleven.
Flight Simulator
Developer: Microsoft Platforms: PC Release Date: TBA
Microsoft's hyper-realistic aviation simulation returns after more than a decade, this time rendered in impressive 4K visuals. Sit in the cockpit of faithfully recreated vehicles and fly around the world.
Fujii
Developer: Funktronic Labs Platforms: Valve Index Release Date: June 27, 2019
This virtual reality title lets players explore three vibrant and serene landscapes, each with its own distinct environment. As they wander through them, they can help bring each one to life by watering plants, touching certain objects and interacting with creatures. They can also collect seeds on their adventure and use them to grow an exotic garden of their own.
Garden of the Sea
Developer: Neat Corporation Platforms: Oculus Rift, HTC Vive Release Date: June 10, 2019 (Early Access)
In a game described as a mix between Harvest Moon and Pokémon, players are given a cottage on a small island and are free to live out their life how they see fit. They can grow plants and crops in their garden, or explore the island and meet the native creatures. Unlike Pokémon, interacting with them doesn't require you to battle them in any way -- simply stroke them, feed them and generally befriend them.
Genesis Noir
Developer: Feral Cat Den Platforms: PC, Mac Release Date: TBA
Technically, this game depicts the Big Bang that started our universe as a gunshot from a cosmic being, one that kills the protagonist's love interest... but since the gameplay does not involve violence and the goal is to prevent her death, I'm going to give it a pass. This stylish adventure sees you exploring Earth from the eyes of someone who exists outside of the universe. The emphasis is on exploration, interacting with the world and generating art as you do so.
Harvest Moon: Mad Dash
Developer: Natsume Platforms: PS4, Switch Release Date: Autumn 2019
An even more casual spin-off of the already laid back Harvest Moon series, Mad Dash is a colour matching puzzle game at its heart. Players are presented with a field seeded with blocks of different coloured crops. Combining those blocks to form larger squares helps the crops grow larger for more points -- for example, combine four cabbages to make one big cabbage, then four big cabbages to make a giant one. Rack up as high a score as possible before the time runs out, and you can call in a friend with co-op mode to help you.
Heave Ho
Developer: LeCartel Studios Platforms: PC, Switch Release Date: Summer 2019
Published by Devolver Digital, this wacky physics-based platformer gives you control of a strange looking creature with two exceedingly long arms. Use the simulated physics to build momentum and swing or flip across the screen to grab hold of the terrain. Only by mastering the mechanics will you complete each course. You can also have up to three friends join in, but coordination will be key to reaching your goal.
Hyperdot
Developer: Tribe Games Platforms: PC, Xbox Release Date: 2019
This minimalist action arcade game is all about dodging. You control a single dot (a HyperDot, if you will) trapped in a circular arena with enemies, projectiles and other hazards. Simply guide your dot around the empty space to avoid any collisions and survive as long as you can. The campaign will have over 100 different levels, each with their own dangers, and a multiplayer mode will challenge you to outlast your friends.
Just Dance 2020
Developer: Ubisoft Platforms: PS4, Xbox One, Switch, Wii (no, really) Release Date: November 5, 2019
If you haven't heard of Just Dance before, it's a Rayman Raving Rabbids minigame that's got out of hand (true story). In the minigame, players copied the choreography of an on-screen character in time with the music. Ten years, 28 games and spin-offs, and countless music licensing deals later, you have Just Dance 2020.
Lost Words: Beyond The Page
Developer: Sketchbook Games Platforms: PC, PS4, Xbox One, Switch Release Date: December 2019
A 2D platformer set entirely in a little girl’s diary. Players use the words on the page as both platforms and tools to solve the various puzzles that block her progress. The game also has a beautiful story written by Rhianna Pratchett.
Madden NFL 20
Developer: Electronic Arts Platforms: PC, PS4, Xbox One Release Date: August 2, 2019
New to this year's Madden is Face of the Franchise: QB1, a "personalised career campaign" that allows players to create their own college quarterback and lead him to NFL glory. There are ten licensed college teams, along with all the most popular teams in the league.
Mind Labyrinth VR Dreams
Developer: Frost Earth Studio Platforms: PSVR, HTC Vive, Oculus Rift Release Date: Out Now
Another virtual reality exploration title set in a variety of imaginary landscapes. Mind Labyrinth VR Dreams is designed to be a meditative experience, where each world is inspired by a different mental state to help players "find their emotional balance". Environments range from a calm and lush forest to a perilous world of flames and tornados.
Mosaic
Developer: Krillbite Studio Platforms: PC Release Date: 2019
This dark and brooding adventure centres around a lonely man living a repetitive life in an overpopulated but ever-expanding city. Players experience his life, from the daily commute to working at a faceless megacorporation. All seems meaningless, until one day strange things begin to happen and the man's life changes dramatically.
Night Call
Developer: Black Muffin, MonkeyMoon Platforms: PC, Mac, PS4, Switch Release Date: 2019
A murder mystery noir game where you play a Paris taxi driver. Players must assist with the police investigation by getting their passengers, including potential suspects, to talk during their journey with you. But try not to scare them off, as you still need to earn enough money to pay your bills.
Per Aspera
Developer: Tlön Industries Platforms: PC Release Date: 2020
Another title about colonisation, but this one is set a little closer to home. Per Aspera tells the tale of a mission to colonise Mars, but in addition to the management side of things there is a strong narrative about the hardships of leaving your world to build a new one. Players must explore the Red Planet to find the resources they need to terraform the environment and survive, and make decisions carefully and strategically if they want the colony to prosper.
Planet Zoo
Developer: Frontier Developments Platforms: PC Release Date: November 5
From the makers of Planet Coaster and Jurassic World Evolution comes a management title that, perhaps quite obviously, puts you in charge of a zoo. Care for your animals, customers and staff as you try to run an efficient zoo, unlocking more and more exotic creatures as you progress.
Read Only Memories: Neurodiver
Developer: MidBoss Platforms: PC, Mac Release Date: 2020
Announced this week, this is the sequel to the acclaimed cyberpunk point-and-click adventure 2064: Read Only Memories. Players will search people's memories as a telepathic detective aided by the titular Neurodiver as the pair search Neo-San Francisco for a rogue psychic who is breaking people's minds.
Roller Champions
Developer: Ubisoft Platforms: PC Release Date: Early 2020
Set in 2029, this colourful game imagines a world where the most popular sport is the titular Roller Champions. Players compete in teams of three as they race around oval tracks trying to score as many goals as they can, while racking up as many laps as possible. There is the ability to tackle, but also the ability to dodge in stylish fashion.
Skatebird
Developer: Glass Bottom Games Platforms: PC, Mac, Linux Release Date: TBA
Since we're unlikely to ever be getting a new Skate, one indie is working on the next best thing: Skatebird. This skateboarding game features all the tricks, flips and kicks you expect, but the rider is a little bird and everything he's skating on has been made from household objects. The Kickstarter for this was announced earlier this week and already passed its goal of $15,000 (currently near $24,000).
Spiritfarer
Developer: Thunder Lotus Games Platforms: PC, PS4, Xbox One, Switch Release Date: 2020
In this cartoonish but touching adventure, players take on the role of the ferryman carrying recently departed souls on to the next world. While the souls are of human beings, they appear as animals that represent their personalities. Players can befriend them, develop relationships, and expand and improve the houseboat carrying them to the other side in a story that drives a message of acceptance.
Spaceteam VR
Developer: Cooperative Innovations Platforms: TBC Release Date: TBC
A virtual reality adaptation of a mobile game previously covered on NVGOTD. The concept is much the same: a group of players are aboard a spaceship, each in charge of certain controls labelled in made-up space jargon. Each player is given a set of instructions that relates to another player's controls, but they don't know whose. The only way to find out is to shout out "Soak the Ferrous Holospecturm" or "Set the Sigmaclapper to 0" (for example) and hope everyone else is paying attention. If the team fails to follow instructions, the ship crashes.
Starmancer
Developer: Ominux Games Platforms: PC, Mac, Linux Release Date: Q1 2020
Starmancer is a building and management game that sees you constructing and maintaining a space station, but rather than playing some invisible God-like being (i.e. yourself) like you do in The Sims, this time you have a role to play. The game casts you as the station's AI and tasks you with sustaining a crew of colonists as they attempt to reach a new world on which they can build their home. Once the colony is complete, players can send out their humans to mine asteroids, trade with other factions and even explore ancient alien ruins.
Steep
Developer: Ubisoft Annecy Platforms: PC, PS4, Xbox One Release Date: Out Now
Another previous NVGOTD recommendation, this extreme winter sports title from Ubisoft is still going strong. This week the publisher announced a free new map set in Japan, giving players a whole new course to master.
Supermarket Shriek
Developer: Billy Goat Entertainment Platforms: PC, Xbox One Release Date: July 9, 2019
Don't worry, that's not blood -- it's paint. I was confused too. Supermarket Shriek is a kart racer (of sorts) where you control a man and a goat in a shopping trolley. Designed to ideally be played by two people (one as the goat, one as the man), each person has a microphone and must scream into it to steer the trolley in their direction around a variety of complicated courses, all set in unsuspecting and previously tidy shops. There is, of course, a two-button control scheme for players who want to try it by themselves.
Telling Lies
Developer: Sam Barlow Platforms: PC, Mac Release Date: 2019
From the creator of the superb Her Story comes a fresh live-action narrative adventure with far grander ambitions. Players explore videos stored on a hard drive stolen from the National Security Agency as they try to understand why the four characters have been placed under electronic surveillance. As with Her Story, they can search for keywords to access new videos, but this time they need to piece together timelines and events to interpret what they have seen.
The Curious Tale of Stolen Pets
Developer: Fast Travel Games Platforms: PlayStation VR, Oculus Quest, Oculus Rift, HTC Vive, Windows Mixed Reality Release Date: 2019
This single player adventure game is a puzzle-centric affair set on tiny floating worlds that come from the imagination of a child and their grandfather. Players grab, push, drop and spin objects found in each world searching for clues that will reveal what happened to the missing pets.
The Elder Scrolls: Legends
Developer: Bethesda Platforms: PC, Mac, PS4, Xbox One, Switch, iOS, Android Release Date: Out Now
This digital card battler sees you pitting monsters and creatures from the world of Bethesda's hit RPG series against each other. The big E3 news for this game is a new expansion, Moons of Elsweyr, introducing new cards and a storyline themed around the home of the cat-like Khajiit people.
The Good Life
Developer: White Owls Platforms: PC, PS4, Xbox One Release Date: Autumn 2019
Positioned as a "debt management life RPG", The Good Life puts players in the role of New York journalist Naomi, who has to move to a small British town to pay off her debts. The only way to do so is by taking pictures of what happens in the town and reporting on them, but the closer she watches the local inhabitants, the quicker she realises that all is not as it seems.
The Lord of the Rings: Living Card Games
Developer: Fantasy Flight Interactive, Asmodee Digital Platforms: PC, PS4, Xbox One, Switch Release Date: August 8, 2019
Another digital card battler, but one based on the timeless works of J.R.R. Tolkien. Inspired by the real card game, this uses the animation and interactivity of video games to liven matches up. Players collect a deck of heroes from across Middle-earth as they battle the forces of Sauron in card form.
The Sims 4: Island Living
Developer: The Sims Studio, Maxis Platforms: PC, Mac, PS4, Xbox One Release Date: June 21 (Desktop), July 16 (Consoles)
The newest expansion for The Sims 4 gives players the chance to build their dream island palace, as well as watch their Sims take part in more tropical activities. They'll be able to kit out their characters in local island clothing, instruct them to lounge on the beach, or encourage them to befriend the native dolphins. There are also water sports, like canoeing, swimming and surfing, plus occupations to train up for, like the beach-cleaning conservationist.
The Wardrobe - Even Better Edition
Developer: CINIC Games Platforms: PC Release Date: June 7, 2019
The Wardrobe is a point-and-click adventure inspired by classic LucasArts titles like Monkey Island and Day of the Tentacle. It tells the story of Skinny, a boy who died from an allergy to plums (one he didn't know about) and became a skeleton that lives in his friend Ronald's wardrobe. Skinny secretly watches over Ronald and helps him in life, but events force him to reveal himself. The Even Better Edition adds joypad support, more achievements, a new save system and other improvements.
Totem Teller
Developer: Grinning Pickle Platforms: Xbox One, PC Release Date: 2020
Described by the developer as an "antinarrative video game", Totem Teller puts you in the role of a muse seeking inspiration. They roam the land in search of lost folklore, investigating strange distortions and retelling stories to any listeners they gather around them -- or allowing the stories to be forgotten forever. The surreal painted visual style gives it a storybook feel.
Way to the Woods
Developer: Anthony Tan Platforms: PC, PS4, Xbox One, Switch Release Date: 2020
This beautiful game sees players controlling one of two deer wandering through a post-apocalyptic landscape in search of safety. As they explore the ruins of civilisation, they'll encounter other friendly creatures such as racoons and cats, but a black goo that corrupts everything it touches continues to spread around them. Only by solving puzzles and finding new ways forward can they hope to escape.
Yoga Master
Developer: Oxygene Media Platforms: PS4 Release Date: Summer 2019
Developed in collaboration with professional yoga coaches, this game features more than 150 different poses for players to master and a variety of serene environments to practice in. There are yoga programs to follow, or you can create your own, and the game will be regularly updated with new content.
Text
Rendering the Incomprehensible Comprehensible
I am confused by the state of the art of psychiatric medicine.
Now, I'm not a psychiatrist. I'm a guy what makes computers is be do videogames, and I haven't taken a chemistry class since freshman year of college or a biology class since high school. Pretty much the extent of my knowledge of the field is that I read Slate Star Codex a lot. So, the questions I'm asking here are ones I have to assume actual professionals in the area have answers to.
That question being... why is it made of drugs?
I don't mean in an "oh, these are social problems and we must solve society and overthrow [racism/capitalism/millennialism/makesworldwrong] instead of medicating our free spirits" way. I mean in a... how do drugs work at all kind of way? It makes sense that they work for killing pathogens -- all you have to do is come up with a poison that works on what you're trying to kill but not on the host. But for fixing the brain? What?
My model of drug discovery works something like this:
- Scientists poke around at the brain and see a ton of hyper-complicated chemical processes happening in there, and make some educated guesses about what they're doing, based on measurements of levels of certain chemicals in certain places during certain mental states. They've got some vague ideas about what these chemicals are doing, but these are mostly statistical inferences and not detailed causal models. They look at these brain chemicals and how they move around, and infer that if they make some other chemicals that are shaped in specific ways, those chemicals will interfere with these other chemicals and make there be more or less of them under certain conditions.
- Armed with these guesses, they go to the lab and synthesize these chemicals, and then spend billions of dollars running gigantic clinical trials to see if, maybe, putting a bunch of these new chemicals in the bloodstream will actually have anything like the desired effect.
- Most of the time they don't, because these were just educated guesses based on simplified models, but with enough billions poured into running more trials, they'll eventually find a chemical they can p-hack into looking like it does something (see the sketch after this list), and then exploit FDA regulations to get doctors to prescribe it for a thousand dollars a pill. Sometimes, if they're extremely lucky, they'll find something that has a positive effect that they don't need to statistically mutilate to show, and then we have a groundbreaking discovery.
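To put a number on that p-hacking step: here's a minimal simulation, with every figure invented purely for illustration (40 candidate drugs that do literally nothing, 100 patients per arm, and a plain two-sample t-test standing in for a real trial analysis):

```python
# A back-of-the-envelope check on the "p-hack it into significance" step.
# All numbers are invented for illustration: 40 candidate drugs with zero
# true effect, 100 patients per arm, one two-sample t-test per "trial".
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_candidates, n_per_arm = 40, 100

false_positives = 0
for _ in range(n_candidates):
    treated = rng.normal(0.0, 1.0, n_per_arm)  # the drug does nothing
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p_value = stats.ttest_ind(treated, control)
    if p_value < 0.05:
        false_positives += 1

# By construction, ~5% of do-nothing drugs clear p < 0.05: about 2 of 40.
print(f"{false_positives} of {n_candidates} null drugs 'worked'")
```

Run enough trials on enough do-nothing chemicals and a few "winners" are guaranteed by chance alone; part of what the billions buy is simply enough attempts.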
I may just be super underinformed, but as I understand it... this process weirds me the hell out.
In my current job, I spend a lot of time fixing bugs in old websites. These websites are sometimes large and labyrinthine, full of old uncommented code some contractor wrote years ago before dropping off the face of the earth. This is, ignoring for a moment a completely unignorable difference in degree of complexity, kind of like trying to fix problems with the brain.
When I go in to fix a bug in a website, there's a lot of things I can do. I can look at the page's elements in the browser's dev tools. I can run the debugger and step through the code, looking at all the data and its values at any given point in time. I can go to the git repo and look back through previous versions of the code, to see what changes were made and when, in conjunction with Jira tickets describing what issue those changes were made to fix. And once I've figured out what's happening, I can go into the code, make changes, and see what effect they had.
Now, I can try to imagine what my job would be like if I had to do things the way psychopharmacologists do.
First off, no making changes to the code. The code is compiled and minified and obfuscated and still three billion lines long. Even if I did figure out how to make desirable changes, that would be "digital eugenics" and I'd get fired.
Second, commit history only goes like three or four commits back, if I'm lucky. Previous commits have been deleted, since they're set to auto-recycle after a while and nobody knows how to turn that off.
Thirdly, no dev tools. I only have the rendered webpage itself, and when something goes wrong I have to kind of guess at whether it's a styling issue or a data issue or a connectivity issue or what.
What can I do, exactly? Well, I actually do have access to one of the dev tools, kind of: the Network tab. I can see the requests being made to the back-end API. Unfortunately, there is no API documentation, and the requests are just as obfuscated as the code. But I've also got Postman, and what I can kind of do is make my own requests to the API, to see what the output is and how it affects the system.
So, uh... hm, okay, I see a request being made to https://serotonin.presynapticneurone.neural.net. The data payload is gibberish, but I notice that when there's a lot of these requests happening, the webpage renders a little faster, and when there's not as many, it slows down. Maybe if I just copy the gibberish data and fake a bunch of my own requests, it'll go faster? ...Hm, okay, that kind of works on some pages but not others. Still, better than nothing- we have some users complaining about the site being slow, so let's just tell them to-
Oh, shit, wait, users don't know how computers work, I can't just tell them to spam Postman requests to the API endpoint. Um, okay, I'll write a little phone app that automatically spams the requests, and release that to users. Except- oh, for fuck's sake, I need to wait for FDApple to approve it for the app store, and they want us to prove that it works and doesn't contain malware. Except even I don't know if that works, so... okay, it's fine, we'll hire a bunch of testers and do a study that shows that overall it speeds things up, and doesn't kill anyone's machines. Good thing I work for a huge company that can afford to do that.
Aaaaaand here come the results, and- oh, god damn it, the study didn't achieve significance. Let me go get Steve, he can probably fudge the numbers here so the damn app store will let us release the fucking thing, we spent millions on those tests (and the tests of all the other interventions that turned out to do nothing because we didn't have enough information and guessed wrong), and we need to recoup our investment.
Sigh.
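For the fellow developers: the entire "treatment" that story converges on fits in a dozen lines. A minimal sketch, with every detail invented for the analogy -- the joke endpoint from earlier, an opaque payload copied out of the Network tab, and "dosage" reduced to request volume and frequency:

```python
# Blind replay of a captured request -- the analogy's version of a drug.
# Everything here is invented for the bit: the endpoint is the joke URL
# from above, and the payload is opaque gibberish from the Network tab.
import time

import requests

ENDPOINT = "https://serotonin.presynapticneurone.neural.net"
CAPTURED_PAYLOAD = b"..."  # copied verbatim; we have no idea what it means


def spam_requests(count: int, delay_s: float) -> None:
    """Fire the captured request over and over.

    We can't step through the backend, so the only feedback loop is
    watching whether pages seem to render faster afterwards.
    """
    for _ in range(count):
        requests.post(ENDPOINT, data=CAPTURED_PAYLOAD, timeout=5)
        time.sleep(delay_s)  # frequency is the only knob we have


spam_requests(count=100, delay_s=0.25)
```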
So... I'm hearing that the ROI on drug discovery is dropping, and that drug companies have pretty much given up on trying to fix things and have started repackaging the handful of blind hacky API spam tricks that miraculously have a consistent effect. This isn't surprising to me. I would not be surprised if, like, after decades of people banging their heads against a massively overcomplicated system, hitting it with differently-shaped hammers in hopes of getting anything to work... they've found most of the differently-shaped hammers that do anything.
At some point, someone has to invent developer tools, right? Find some way to actually figure out what the hell they're doing?
The big question: given the blatant inadequacy of the existing paradigm, why is the industry still trying to wring blood out of this dried-out stone? At some point, we're going to have to actually figure out what the brain is doing, but it seems like cognitive neuroscience is still in its infancy. "We don't know how this thing works" seems like the big obstacle to getting anything done, but most of the effort in this area still seems to be focused on finding new drugs to throw at the thing-we-don't-know-how-it-works.
I know I’m not the first person to ask this question. I’m sure everyone who’s ever had to grapple with psychiatry in any detail is lamenting the same issue, and I’m sure there are people who are working very hard to try and solve the problem. It just... doesn’t seem like those people are getting very much done. The most I hear about is pop science articles claiming that Science Has Discovered The Part Of The Brain That Makes You Love Kittens, which inevitably turn out to be irresponsible reporting of extremely modest correlational findings.
(Maybe AI will help? Maybe the brain is just too complicated to be reduced to something humans can understand on an engineering/problem-solving level, and we need something with a higher understanding-capacity? Except... most of the recent advances in AI are with neural nets that explicitly don't actually understand anything, nor do the researchers growing them.)
Where are we at with this? Are we getting anywhere? Is there encouraging progress in the field of learning-things-about-the-brain? Is the second derivative of that curve non-zero? Metacognitive revolution when?
On the debacle with Fallout 76
I feel like the debacle with Fallout 76 has become a testing ground for a lot of the dominant theories and myths about video games and video game consumers in general, as well as about Bethesda and Bethesda gamers specifically. I apologize for the LONG post ahead, but there's a lot to unpack here and I want to make sure everyone's on the same page before I try to make any big points. For those not in the know, I will attempt to summarize:
- Bethesda released Fallout 76, a multiplayer installment in the Fallout franchise, with a set release date of November 14, 2018.
- The game was marketed as playable and enjoyable as singleplayer, with the promises that every person you ran into would be a real person, that it was a new Fallout experience, and that its graphics would improve upon Fallout 4's by sixteen times. Notably, one collector's edition, which cost $200, was marketed as coming with a wearable helmet and a canvas bag.
The beta was shaky and riddled with bugs, and upon release the game itself was still pretty much broken -- far more so than other Bethesda titles, and this from a company where the running joke since Oblivion has been that the bugs are so prevalent they're a feature, not a flaw. An enormous patch was released shortly after launch -- larger in size than the game itself -- which fixed almost none of the bugs and created hundreds *more*, as if they hadn't playtested the patch at all. For players like me who can go a surprisingly long time in a Bethesda game without seeing any bugs at all, I will note that these bugs include:
- T-posing enemies which either spontaneously assume their correct animations only when you get close, or never do, or which teleport suddenly into you to try and display their attack animations
- Horrendous enemy A.I. where a lot of them will just stand in one place looping an animation
- Enemies spawning out of thin air directly in front of you due to slow loading
- A bug where enemies spontaneously heal the exact amount of damage you deal to them, making them invincible
- Falling through the ground out of nowhere
- Clipping through and getting caught in the world
- Frequent server crashes, often due to in-game happenings (the game eventually gives you access to nuclear bombs, but those same bombs can crash the server when you drop them)
- Frequent disconnects
- Frequent game crashes (with no 'save game' function)
- Body horror bugs like the Wendigo Bug, present since Fallout 4 and still unfixed by Bethesda, even though modders fixed them weeks after Fallout 4 came out. Three years ago.

Moreover, the game directly ported over most of its visual assets from Fallout 4. Most of the landscape elements come from Fallout 4, almost all of the weapons come from Fallout 4, almost all of the outfits and armors come from Fallout 4, most of the monsters come from Fallout 4, the physics and gunplay are directly ported over (minus the ability to pause the game to open your inventory, of course, and minus the time-slowing aspect of V.A.T.S., which makes V.A.T.S. almost completely useless), the character creation is ported over, the loot is ported over, and the base-building system and all of its assets (walls, floors, anything you'd use to build a base) are ported over. Basically, other than trees and certain monsters unique to West Virginia, you'll have a hard time spotting content which isn't directly ported over from Fallout 4, often without so much as a palette swap.

Is the promise of better graphics fulfilled? Well, the lighting is significantly improved, and even very pretty and atmospheric -- though occasionally light will shine through solid far-away objects, like mountains. Modders had done this almost immediately with Fallout 4, too, so it's not really a huge achievement. And the landscape is much more colourful than in any other Fallout game, which is admittedly a nice change of pace, even though it makes no goddamn sense why the trees would survive while everything else dies around them. But other than those two elements... yeah, it just looks like Fallout 4, and it usually doesn't render as well, due to being on a multiplayer server and due to the graphical glitches.

How about the promise that every person you run into is a real person? Well, that was true all right, but how anyone thought that was a good idea is beyond me.
It's one of those things that sounds really cool and innovative until you think about it for literally any length of time. Why would that be a good thing? Unless you have quite a lot of friends who you've somehow got onto the same server (which, by the way, I don't think Fallout 76 has much functionality for), you're not going to be very interested in those people, and you have no reason to be. They're just big lumps of immersion-breaking, as I seriously doubt many people are going into the game to vocally roleplay their way through the experience.

Moreover, this means no NPCs besides monsters and robots. No quests from anyone but robots and holotapes. Now, I like holotapes. I'm one of those unbearable players who listens to every holotape and reads every computer terminal. My favourite part of Fallout games is usually finding out the big stories behind Vaults or unusual locations. But when you are doing a quest for someone you will never meet, and have complete certainty of this fact, the reason to do quests in the first place starts to ebb away. You just get holotapes or robots telling you to go to a place, kill something there, rinse, repeat. That's the entire game. Nothing is achieved; everyone who recorded those holotapes is dead, or a monster now. You're not doing anyone any favours. There's no one to help, there's no one to hate, there's just you (and whatever people you're playing with, who, again, aren't really part of the story, as multiplayer gamers don't typically roleplay).

The main quest revolves around trying to find the previous Overseer of the vault. There's zero suspense, interest or urgency, because as a player you know with complete certainty going in that if you find her, she'll be dead or a monster. When you remove the NPCs, you remove all our reasons to care about quests. You also remove all interactions in the game besides "kill thing, loot thing, make stuff with loot". And killing monsters with such laughable AI and glitches -- AI designed for a Fallout 4 where V.A.T.S. could pause the game, dropped into a game where it can't -- isn't nearly enjoyable enough to make that game loop anything but ghastly.

How ANYONE thought this was a good idea is beyond me. Actually, I'm pretty sure at this point that they didn't do it because they thought it was a good idea; they did it because making NPCs function like they would in a singleplayer game, on a multiplayer server, is an incredibly daunting task. This when literally no one asked for the game to be multiplayer in the first place, but hey.

Is the game fun to play alone? Not according to anyone I know who has tried, no, and this is due to the above factors. Is the game, as the marketing said, more fun to play with your friends? Well, yes, but the same could be said of cleaning out a moldy garage alone versus with friends. Being with friends makes anything more enjoyable. The game does not cease to have all its serious underlying problems when you play with friends; you just have someone to commiserate with and witness the bullshit with you.

Is this a new Fallout experience? Not really. It's Fallout 4 with a prettier landscape and a story constrained to holotapes -- and therefore constrained to the past, not the present the player is actually playing in! -- and it's arguably not even a Fallout experience at all.
It wears a Fallout skin, but the core roleplaying, choice, and narrative features of the series are gone, and all that's left is a world that's much bigger, but where all the new space is pretty much empty anyhow.

Oh, and the canvas bags for the collector's edition were cheap nylon when people got them. Bethesda just went "yeah, canvas was too expensive lol, u can have five dollars' worth of the game's microtransaction money for free tho if you want, just file a complaint". That amount of digital money wouldn't even buy a virtual canvas bag, mind. Then someone threatened a lawsuit, and it looks like people are going to get their actual canvas bags. But they still need to file a complaint, and WHOOPS! Bethesda accidentally doxxed everyone who filed a complaint -- to other people who filed a complaint! The absolute cherry on top. (Yes, it really was an accident; it's even stupider than it sounds.)

So what can we take away from all this? Well, I wouldn't take away much hope for Fallout 76 as a game, for one. It's a dumpster fire, and they keep pouring gasoline onto it. But the game has scored abysmally low basically everywhere. People have noticed, and they're not pleased. The game's price has dropped 30%, and that's within the first couple of weeks after launch, which is completely unheard of for a AAA game. Returns are going wild. YouTube is FULL of videos taking Fallout 76 to town. So clearly, gamers won't lap up whatever you give them just because it's a sequel to something they love. The sunk cost fallacy doesn't run that deep, and people are suddenly extremely skeptical of whatever Bethesda releases next -- which at this rate is going to be either The Elder Scrolls: Blades or their new sci-fi game, followed by The Elder Scrolls VI (title as yet unannounced).

I would also suggest that studios may finally have been given a good indication that clumsily slapping multiplayer onto something that had success as single-player isn't the greatest idea. This is a lesson that probably should have been learned years ago, but better late than never. I would also hope that game studios, Bethesda especially, develop a touch more respect for their fanbase and realize that player bases can be lost. Bethesda has relied on their fanbase to mod away their bugs, laziness, and release-date-hampered incomplete content for many years now, but faced with a multiplayer game with no mod support, they are put in a position where they have to realize how heavily they've been leaning on those mods.

But there's another part of the story that isn't being covered so much -- one which challenges the assumptions that led Bethesda and the players to such a disaster in the first place. Red Dead Redemption 2 was in the making for a long time, and was released something like a year later than its originally announced release date. The new Kingdom Hearts has been repeatedly delayed. You'd expect the fans to have reacted with nothing but outrage! But they... haven't, for the most part. There's been some frustration and groaning, especially from people who pre-ordered the games, but on the whole, the fans have been pretty understanding. It turns out they'd rather have a game come out finished than come out on time.
That seems simple, and even obvious, but for close to twenty years the prevailing logic has been that for a game to sell well, it has to come out at a pre-defined and specific date; if it isn't done, well, that's just how the process of making games works, and we'll fix it in bug patches, or wait for mods to fix it. This is such an assumed phenomenon that it shows up repeatedly in Extra Credits, a show which talks in great detail about the production of video games, and I'd be hard-pressed to name a game that I own or play which doesn't have unfinished content, even if it's fairly bug-free. But here we are: Red Dead 2 is out, and it's a roaring success, despite considerable delays. The conventional wisdom is simply wrong.

And it gets even better. This is the trailer for The Outer Worlds, a game made by Obsidian. I urge you to watch it.

First of all, the game looks good. The graphics are good, and the human characters are expressive and dynamic while still looking realistic. The backgrounds are great. The humour is great. The world-building, what we see of it, looks very promising.

And oh my god, the shade they throw at Bethesda is gorgeous. Not only does Obsidian highlight themselves as the creators of Fallout and Fallout: New Vegas -- that is, the two most-loved Fallout games -- they play with the concept of a cryogenically frozen player character (possibly lampshading the use of the same concept in Fallout 4), and they point out that player choice isn't just about a binary "who do you shoot" moment -- another moment from Fallout 4, and one of the few real choices you get to make in that game -- implying that variety of choice, including non-combat choice, is going to be a Thing in this game.

Look at the comments section for that video. You will see hundreds, nay, THOUSANDS of comments praising the trailer, talking about the shade it casts on Bethesda, making New Vegas meme jokes, praising the music, lauding the humour, wondering about the characters it shows us. You know what I didn't see? Even one single, solitary comment complaining that there's no definite release date shown anywhere in that trailer.

Seriously, watch it again. It doesn't say exactly when it's coming out. Just 2019. No month. No date. Just sometime next year. You know... when it's done.

What you might not have known is that The Outer Worlds was originally estimated to come out this year. You didn't know that because they didn't release the trailer until just recently -- once they were far enough into production to produce such a great trailer, for one, but also once they were far enough along to be certain they would finish production within a year. No one cares when it's coming. They care that it looks like a good game with so much original effort put into it. That's what matters.

And maybe if the game studios can realize this, we'll finally see an end to the exploitative bullshit that happens -- exploitative not just of the gamers, but of the thousands of overworked employees it takes to make a AAA video game -- in the service of an absolute deadline placed above the game itself. God, now that's a thought.

So don't be discouraged by the failure of Fallout 76. There's way better on the horizon. The myth that studios need a firm deadline to put out a good game, the myth that players somehow demand a firm deadline, the myth that players will sit there and take any level of bullshit -- they're all being thoroughly, publicly debunked.

Feels good, man. Feels good.
#fallout #fallout 76 #the outer worlds #outer worlds #red dead redemption 2 #gaming #release date #rpg #western rpg