#better than Nvidia
Video
youtube
This Stock Is Up Over 6x More Than Nvidia in 2024!
0 notes
Text
( ˘͈ ᵕ ˘͈♡)
I gave up hoping for something more than the in-game animations' dead eyes.
#Hogwarts Legacy#Hogwarts Legacy Screenshots#Hogwarts#Hogwarts Legacy MC#Hogwarts Legacy OC#Ekrizdis Mors#Slytherin MC#Natsai Onai#Natsai Onai x MC#Riz x Natty#the UNHOLY hours I spent to get this screenshot#THEIR EYES ARE SO DEAD 😭😭😭#but this turns out better than I expected tho#waiting for modders community to work their magic on the mod animation menu situation#NVIDIA screenshots
11 notes
Text
yeeeesssssssssss
#just happy that. while sure i still dont have time to play it today. i got it running. sure it was on the nvidia cloud thing. but its#working. its finally working. cant wait to see my rook in 3d tomorrow. and yeah there's a six hour session limit but. thats gotta be long#enough to make a character and get to a save point at the very least so. fuck yeah#original posts#also like. talked to my wife with both of us having clearer heads (thanks to literally clearer air) and shes open to slowly building a new#computer which is cool. just not with the absolute minimum parts this time. so until then nvidia it is for anything too modern for this#2012 ass machine. ah well its better than nothing
3 notes
Text
welcome to dawntrail gunbreaker, finally
gnb is very fun it turns out, like 2 dps in tank's trench coat
#anya plays ffxiv#i always forget i have filters in game#(just some nvidia filters)#and then i take a basic screenshot#without doing anything to lighting\filters in gpose#and get confused later when the colors look washed out#and have to fix it in csp#anyways#if i to get any lore friendly justification for picking up a job#which i won't do#but if i would#it'd be purely to be better than thancred#because i'm never forgiving that cave in#never#every time i run that dungeon#i think about astarion#i was RIGHT THERE
5 notes
Text
<- first world problems alert.
bro I see all these live2d showcases of BEAUTIFUL art with the WORST rigging every DAY and I want to chew on my controller and yell I can rig better than that just give me a CHANCE!!!!
my biggest flaw is that i can't make a showcase of my rigging so I can't pitch my hat into the ring. even when I stream, I'm using the lumpy cousin of face tracking softwares so my model is glitchy and stiff. I'd do anything for vbridger tracking ..... shy of getting an actual iphone to use it. ughh
#were going to try a new software soon bc vtube studio is so bad next to (nvidia??) and vbridger#its so frustrating lol#I CAN RIG SO GOOD I JUST DONT HAVE THE TOOLS TO COMMUNICATE IT!!!#i just wish i could rig other peoples art i think it would look sm better than rigging my own art#but how do i even START#the answer is getting an iphone and using twitter and both of those options are bad
13 notes
Text
Portal 2 is still the perfect game to me. I hyperfixated on it like crazy in middle school. Would sing Want You Gone out loud cuz I had ADHD and no social awareness. Would make fan animations and pixel art. Would explain the ending spoilers and fan theories to anyone who'd listen. Would keep up with DeviantArt posts of the cores as humans. Would find and play community-made maps (Gelocity is insanely fun).
I still can't believe this game came out 12 years ago and it looks like THIS.
Like Mirror's Edge, the timeless art style and economical yet atmospheric lighting mean this game will never age. The decision not to include any visible humans (ideas of Doug Rattmann showing up or a human co-op partner were cut) is doing so much legwork too. And the idea to use geometric tileset-like level designs is so smart! I sincerely believe that, by design, no game with a "realistic art style" has looked better than Portal 2.
Do you guys remember when Nvidia released Portal with RTX and it looked like dogshit? Just the most airbrushed crap I've ever seen; completely erased the cold, dry, clinical feel of Aperture.
So many breathtakingly pit-in-your-stomach moments I still think about too. And it's such a unique feeling; I'd describe it as... architectural existentialism? Experiencing the sublime under the shadow of manmade structures (Look up Giovanni Battista Piranesi's art if you're curious)? That scene where you're running from GLaDOS with Wheatley on a catwalk over a bottomless pit and––out of rage and desperation––GLaDOS silently begins tearing her facility apart and Wheatley cries 'She's bringing the whole place down!' and ENORMOUS apartment building-sized blocks begin groaning towards you on suspended rails and cement pillars crumble and sparks fly and the metal catwalk strains and bends and snaps under your feet. And when you finally make it to the safety of a work lift, you look back and watch the facility close its jaws behind you as it screams.
Or the horror of knowing you're already miles underground, and then Wheatley smashes you down an elevator shaft and you realize it goes deeper. That there's a hell under hell, and it's much, much older.
Or how about the moment when you finally claw your way out of Old Aperture, reaching the peak of this underground mountain, only to look up and discover an endless stone ceiling built above you. There's a service door connected to some stairs ahead, but surrounding you is this array of giant, building-sized springs that hold the entire facility up. They stretch on into the fog. You keep climbing.
I love that the facility itself is treated like an android zooid too, a colony of nano-machines and service cores and sentient panel arms and security cameras and more. And now, after thousands of years of neglect, the facility is festering with decomposition and microbes; deer, raccoons, birds. There are ghosts too. You're never alone, even when it's quiet. I wonder what you'd hear if you put your ear up against a test chamber's walls and listened. (I say that all contemplatively, but that's literally an easter egg in the game. You hear a voice.)
Also, a reminder that GLaDOS and Chell are not related and their relationship is meant to be psychosexual. There was a cut bit where GLaDOS would role-play as Chell's jealous housewife and accuse her of seeing other cores in between chambers. And their shared struggle for freedom and control? GLaDOS realizing, after remembering her past life, that she's become the abuser and deciding that she has the power to stop? That even if she can't be free, she can let Chell go because she hates her. And she loves her. Most people interpret GLaDOS "deleting Caroline in her brain" as an ominous sign, that she's forgetting her human roots and becoming "fully robot." But to me, it's a sign of hope for GLaDOS. She's relieving herself of the baggage that has defined her very existence, she's letting Caroline finally rest, and she's allowing herself to grow beyond what Cave and Aperture and the scientists defined her to be. The fact that GLaDOS still lets you go after deleting Caroline proves this. She doesn't double-back or change her mind like Wheatley did, she sticks to her word because she knows who she is. No one and nothing can influence her because she's in control. GLaDOS proves she's capable of empathy and mercy and change, human or not.
That's my retrospective, I love this game to bits. I wish I could experience it for the first time again.
#ramblings#long post#not art#personal#also i know “did glados actually delete caroline” is debated cuz the credits song disputes this#but i like to think she did#it's not sad. caroline died a long time ago#it's a goodbye
2K notes
Text
“Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed
I'm touring my new, nationally bestselling novel The Bezzle! Catch me SATURDAY (Apr 27) in MARIN COUNTY, then Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!
If AI has a future (a big if), it will have to be economically viable. An industry can't spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:
https://news.ycombinator.com/item?id=39883571
A company that pays 0.36-1 cents/query for electricity and (scarce, fresh) water can't indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of "instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible":
https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
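To make that concrete, here's a rough back-of-the-envelope sketch using the per-query cost range above; the daily query volume is an assumption I've plugged in for illustration, not a figure from anyone's disclosures.

```python
# Rough inference economics using the 0.36-1 cent/query range cited above.
# The daily query volume is an illustrative assumption.
cost_per_query_low = 0.0036   # dollars (0.36 cents)
cost_per_query_high = 0.01    # dollars (1 cent)
queries_per_day = 10_000_000  # assumed volume of queries given away for free

daily_low = queries_per_day * cost_per_query_low
daily_high = queries_per_day * cost_per_query_high
print(f"Daily cost: ${daily_low:,.0f} to ${daily_high:,.0f}")
print(f"Yearly cost: ${daily_low * 365:,.0f} to ${daily_high * 365:,.0f}")
```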
Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn't optional – investor disillusionment is an inevitable part of every bubble).
Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable – errors ("hallucinations"). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don't care about the odd extra finger. If the chatbot powering a tourist's automatic text-to-translation-to-speech phone tool gets a few words wrong, it's still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.
There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company's perspective – is that these aren't just low-stakes, they're also low-value. Their users would pay something for them, but not very much.
For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.
Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada's chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada's internal review mechanisms before fighting his case for weeks more at the regulator:
https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454
There's never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn't have to pay them back. Air Canada is tacitly asserting that, as the country's flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it's too big to care.
Air Canada shows that for some business customers, AI doesn't need to be able to do a worker's job in order to be a smart purchase: a chatbot can replace a worker, fail to do that worker's job, and still save the company money on balance.
I can't predict whether the world's sociopathic monopolists are numerous and powerful enough to keep the lights on for AI companies through leases for automation systems that let them commit consequence-free fraud by replacing workers with chatbots that serve as moral crumple-zones for furious customers:
https://www.sciencedirect.com/science/article/abs/pii/S0747563219304029
But even stipulating that this is sufficient, it's intrinsically unstable. Anything that can't go on forever eventually stops, and the mass replacement of humans with high-speed fraud software seems likely to stoke the already blazing furnace of modern antitrust:
https://www.eff.org/de/deeplinks/2021/08/party-its-1979-og-antitrust-back-baby
Of course, the AI companies have their own answer to this conundrum. A high-stakes/high-value customer can still fire workers and replace them with AI – they just need to hire fewer, cheaper workers to supervise the AI and monitor it for "hallucinations." This is called the "human in the loop" solution.
The human in the loop story has some glaring holes. From a worker's perspective, serving as the human in the loop in a scheme that cuts wage bills through AI is a nightmare – the worst possible kind of automation.
Let's pause for a little detour through automation theory here. Automation can augment a worker. We can call this a "centaur" – the worker offloads a repetitive task, or one that requires a high degree of vigilance, or (worst of all) both. They're a human head on a robot body (hence "centaur"). Think of the sensor/vision system in your car that beeps if you activate your turn-signal while a car is in your blind spot. You're in charge, but you're getting a second opinion from the robot.
Likewise, consider an AI tool that double-checks a radiologist's diagnosis of your chest X-ray and suggests a second look when its assessment doesn't match the radiologist's. Again, the human is in charge, but the robot is serving as a backstop and helpmeet, using its inexhaustible robotic vigilance to augment human skill.
That's centaurs. They're the good automation. Then there's the bad automation: the reverse-centaur, when the human is used to augment the robot.
Amazon warehouse pickers stand in one place while robotic shelving units trundle up to them at speed; then, the haptic bracelets shackled around their wrists buzz at them, directing them to pick up specific items and move them to a basket, while a third automation system penalizes them for taking toilet breaks or even just walking around and shaking out their limbs to avoid a repetitive strain injury. This is a robotic head using a human body – and destroying it in the process.
An AI-assisted radiologist processes fewer chest X-rays every day, costing their employer more, on top of the cost of the AI. That's not what AI companies are selling. They're offering hospitals the power to create reverse centaurs: radiologist-assisted AIs. That's what "human in the loop" means.
This is a problem for workers, but it's also a problem for their bosses (assuming those bosses actually care about correcting AI hallucinations, rather than providing a figleaf that lets them commit fraud or kill people and shift the blame to an unpunishable AI).
Humans are good at a lot of things, but they're not good at eternal, perfect vigilance. Writing code is hard, but performing code-review (where you check someone else's code for errors) is much harder – and it gets even harder if the code you're reviewing is usually fine, because this requires that you maintain your vigilance for something that only occurs at rare and unpredictable intervals:
https://twitter.com/qntm/status/1773779967521780169
But for a coding shop to make the cost of an AI pencil out, the human in the loop needs to be able to process a lot of AI-generated code. Replacing a human with an AI doesn't produce any savings if you need to hire two more humans to take turns doing close reads of the AI's code.
This is the fatal flaw in robo-taxi schemes. The "human in the loop" who is supposed to keep the murderbot from smashing into other cars, steering into oncoming traffic, or running down pedestrians isn't a driver, they're a driving instructor. This is a much harder job than being a driver, even when the student driver you're monitoring is a human, making human mistakes at human speed. It's even harder when the student driver is a robot, making errors at computer speed:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
This is why the doomed robo-taxi company Cruise had to deploy 1.5 skilled, high-paid human monitors to oversee each of its murderbots, while traditional taxis operate at a fraction of the cost with a single, precaratized, low-paid human driver:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
The vigilance problem is pretty fatal for the human-in-the-loop gambit, but there's another problem that is, if anything, even more fatal: the kinds of errors that AIs make.
Foundationally, AI is applied statistics. An AI company trains its AI by feeding it a lot of data about the real world. The program processes this data, looking for statistical correlations in that data, and makes a model of the world based on those correlations. A chatbot is a next-word-guessing program, and an AI "art" generator is a next-pixel-guessing program. They're drawing on billions of documents to find the most statistically likely way of finishing a sentence or a line of pixels in a bitmap:
https://dl.acm.org/doi/10.1145/3442188.3445922
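To see "next-word-guessing" in miniature, here's a toy sketch of the idea: count which word follows which in some training text, then always emit the statistically most likely continuation. Real models use neural networks over billions of documents rather than a lookup table, and the tiny corpus here is made up, but the underlying move – pick the most probable continuation – is the same.

```python
from collections import Counter, defaultdict

# Toy next-word guesser: count which word follows which in a corpus,
# then always emit the statistically most likely continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def guess_next(word):
    # Most probable continuation given the correlations in the training data.
    return follows[word].most_common(1)[0][0]

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = guess_next(word)
```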
This means that AI doesn't just make errors – it makes subtle errors, the kinds of errors that are the hardest for a human in the loop to spot, because they are the most statistically probable ways of being wrong. Sure, we notice the gross errors in AI output, like confidently claiming that a living human is dead:
https://www.tomsguide.com/opinion/according-to-chatgpt-im-dead
But the most common errors that AIs make are the ones we don't notice, because they're perfectly camouflaged as the truth. Think of the recurring AI programming error that inserts a call to a nonexistent library called "huggingface-cli," which is what the library would be called if developers reliably followed naming conventions. But due to a human inconsistency, the real library has a slightly different name. The fact that AIs repeatedly inserted references to the nonexistent library opened up a vulnerability – a security researcher created an (inert) malicious library with that name and tricked numerous companies into compiling it into their code because their human reviewers missed the chatbot's (statistically indistinguishable from the truth) lie:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
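One partial, mechanical defense against this particular failure mode is to check every dependency that bot-written code pulls in against the real package index before a human ever reviews it. A minimal sketch, assuming PyPI's JSON API (which returns a 404 for packages that don't exist); note that an existence check alone won't catch a hallucinated name that a squatter has since registered, which is exactly what happened here.

```python
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """True if `name` is a registered package on PyPI (a 404 means it isn't)."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError:
        return False

# Illustrative dependency list pulled from bot-written code (placeholder names).
for pkg in ["requests", "some-plausible-but-unregistered-name"]:
    flag = "ok" if exists_on_pypi(pkg) else "NOT ON PYPI - possible hallucination"
    print(f"{pkg}: {flag}")
```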
For a driving instructor or a code reviewer overseeing a human subject, the majority of errors are comparatively easy to spot, because they're the kinds of errors that lead to inconsistent library naming – places where a human behaved erratically or irregularly. But when reality is irregular or erratic, the AI will make errors by presuming that things are statistically normal.
These are the hardest kinds of errors to spot. They couldn't be harder for a human to detect if they were specifically designed to go undetected. The human in the loop isn't just being asked to spot mistakes – they're being actively deceived. The AI isn't merely wrong, it's constructing a subtle "what's wrong with this picture"-style puzzle. Not just one such puzzle, either: millions of them, at speed, which must be solved by the human in the loop, who must remain perfectly vigilant for things that are, by definition, almost totally unnoticeable.
This is a special new torment for reverse centaurs – and a significant problem for AI companies hoping to accumulate and keep enough high-value, high-stakes customers on their books to weather the coming trough of disillusionment.
This is pretty grim, but it gets grimmer. AI companies have argued that they have a third line of business, a way to make money for their customers beyond automation's gifts to their payrolls: they claim that they can perform difficult scientific tasks at superhuman speed, producing billion-dollar insights (new materials, new drugs, new proteins) at unimaginable speed.
However, these claims – credulously amplified by the non-technical press – keep on shattering when they are tested by experts who understand the esoteric domains in which AI is said to have an unbeatable advantage. For example, Google claimed that its Deepmind AI had discovered "millions of new materials," "equivalent to nearly 800 years’ worth of knowledge," constituting "an order-of-magnitude expansion in stable materials known to humanity":
https://deepmind.google/discover/blog/millions-of-new-materials-discovered-with-deep-learning/
It was a hoax. When independent material scientists reviewed representative samples of these "new materials," they concluded that "no new materials have been discovered" and that not one of these materials was "credible, useful and novel":
https://www.404media.co/google-says-it-discovered-millions-of-new-materials-with-ai-human-researchers/
As Brian Merchant writes, AI claims are eerily similar to "smoke and mirrors" – the dazzling reality-distortion field thrown up by 17th century magic lantern technology, which millions of people ascribed wild capabilities to, thanks to the outlandish claims of the technology's promoters:
https://www.bloodinthemachine.com/p/ai-really-is-smoke-and-mirrors
The fact that we have a four-hundred-year-old name for this phenomenon, and yet we're still falling prey to it is frankly a little depressing. And, unlucky for us, it turns out that AI therapybots can't help us with this – rather, they're apt to literally convince us to kill ourselves:
https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
#pluralistic#ai#automation#humans in the loop#centaurs#reverse centaurs#labor#ai safety#sanity checks#spot the mistake#code review#driving instructor
853 notes
Note
Sora looks awesome from OpenAI and then also Chat with RTX (Nvidia) will have a personal local LLM on your own machine but new windows updates will have co-pilot too. The future of AI is going to be awesome. As someone in the data field, you have to keep moving with it or be left without. IT is definitely an exciting time.
As someone else in the data field, my full background is in data and data flow; AI is the latest buzzword that a small group of people in Silicon Valley have pushed to work people up into a frenzy.
The people cheering on AI are the same people who said NFTs were going to radically change the world of work.
I think there’s positive uses for AI, particularly in pattern recognition, like detecting cancer.
However, Sora looks like shit. It’s producing videos of three-legged cats, and it’s using stolen work to do it. And sure, it’ll get better, but without regulation all it will do is poison the well of human knowledge as certain groups begin to create things that aren’t real. We move into a world where evidence can be fabricated.
Why are generative AI fans targeting artists who voice their concerns? Every day I see some AI techbro tweeting an artist and saying they’ve just scrolled through their art and fed it to an algorithm. It is scummy behaviour.
As a fellow ‘data field’ person, you’ll know that AI is also only as useful as what we feed it. Most organisations don’t know where their data actually is, they’re desperately trying to backpedal their huge push to the cloud and host things on premise. The majority of digital transformation projects fail, more fines are being handed out for failing compliance than ever, and companies can’t possibly claim to be cyber secure when they don’t know where they’re holding their data.
AI won’t fix any of this. It needs human engineering and standardisation to fix, non-technical and technical teams need to understand the connectivity of every process and piece of technology and maybe then some form of AI can be used to optimise processes.
But you can’t just introduce AI and think it fixes large-scale issues. It will amplify them if you continue to feed it garbage.
244 notes
Text
Benchmark Tech Notes
Running the Benchmark
If your Benchmark isn't opening, it's an issue with the executable file: something didn't complete properly during either the download or the extraction of the Zip file. The Benchmark is designed to run and give you scores for your potato computer, I promise.
I actually saved my Benchmark to my external drive, and it still pulls and saves data and runs as it should. Make sure you allowed the download to complete before extracting the zip.
Resolution
Check your Settings; in Display, it may be defaulting your monitor Resolution to something other than what you might otherwise use if you aren't on standard 1920x1080.
To check your monitor Resolution, minimize everything on your screen and right click anywhere on your Desktop. Go to Display Settings and scroll down to find Resolution and what it's set at.
You can set the Graphic Settings 1 tab to Maximum, or to Import your game settings. Display Settings tab is where you set it to be Windowed, Bordered, or Full Screen, as well as select Resolution to match your monitor in the dropdown (or customize it if needed). I speak on Resolution as some folks in my FC noted it changed how their characters looked.
The Other tab in Settings is where you can change the text output, or even check a box to disable the logo and score; I do this on subsequent plays, once I have my scores at various settings, to get the clean screenshots.
@calico-heart has a post about fixing graphics settings, with screenshots of the settings tab. Basically, change graphics upscaling from AMD to NVIDIA, and/or uncheck Enable Dynamic Resolution. Also check the Framerate Threshold dropdown.
Screenshots
The benchmark auto-saves 5 screenshots each playthrough. In the Benchmark folder there is a Screenshots folder where you can find the auto-images taken of your characters.
Character Appearance
If you want to get your current in game appearance, including non-standard hairstyles, make sure to load up the live game, right click and "Save Character Settings."
Then go to Documents/My Games/Final Fantasy XIV: A Realm Reborn (this is the default in Windows 10 so mileage varies). The file will have the date you last updated that character's settings and be named FFXIV_CHARA_01.dat (or however many saves you have/made).
Grab those newly updated DAT files for your character(s) and copy them, then in the same base folder, go to Final Fantasy XIV: A Realm Reborn (Benchmark).
Paste the copied DAT files in there, and rename to FFXIV_CHARA_BENCH01.dat (the number doesn't matter, and you may have more).
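If you'd rather not copy and rename the DAT files by hand, a small script can do the same steps. This is just a sketch of the manual process above; the folder names follow this post's defaults, so double-check the exact spelling on your machine (the on-disk folder may use a dash instead of a colon) and adjust the paths if your Documents live elsewhere.

```python
import shutil
from pathlib import Path

# Default Windows locations described above - adjust if yours differ.
docs = Path.home() / "Documents" / "My Games"
live = docs / "FINAL FANTASY XIV - A Realm Reborn"
bench = docs / "FINAL FANTASY XIV - A Realm Reborn (Benchmark)"

# Copy every live-game appearance file into the benchmark folder,
# renaming FFXIV_CHARA_01.dat -> FFXIV_CHARA_BENCH01.dat, and so on.
for src in sorted(live.glob("FFXIV_CHARA_*.dat")):
    number = src.stem.removeprefix("FFXIV_CHARA_")
    dest = bench / f"FFXIV_CHARA_BENCH{number}.dat"
    shutil.copy2(src, dest)
    print(f"Copied {src.name} -> {dest.name}")
```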
When running Benchmark Character Creation, use the dropdown menu.
If you do Create a Custom Character and Load Appearance Data, it will give you default hairstyles again. Meteor's Dawntrail hairstyle is a new default.
In Char Gen I am finding that a very pale hrothgal reflects the green scenery around her, giving her white skin/fur a green tinge. The other zones do not have this problem, or at least not to the same degree.
They added a Midday vs Evening setting in outdoor areas as well to test lighting. The lighting in the Gridanian innroom is better; not as bright as outdoors, to be expected, but not completely useless.
New voice type icons to clarify the sounds you make.
Remember we're getting a free fantasia with the expansion, so some tweaking may be needed; for Iyna, I felt like I needed to adjust her jaw. Other colors--skin, hair, eyes, tattoos, etc--are showing differently in the various kinds of lighting.
Uncertain if the limit on hairstyles for the Hrothgals so far is just a Benchmark thing; they do have set styles for different head options. Everyone gets Meteor's hair though, so it may be a temporary/Benchmark limit. But which clan and face you choose drastically alters what hair and facial feature options you have access to.
Check your settings, tweak them a bit, play around with chargen, and remember this is still a Benchmark; they always strike me as a little less polished than the finished game, but so far I'm actually pretty pleased with having defined fingers and toes, the irises in the eyes, scars looking cut into the skin, and other improvements.
172 notes
Text
Eggtober 6th 2023
"Splat" or "Fun with Colors": Raw Egg.
(Clip Studio Paint, Gouache Brush, Pencil brush for details and highlights. 12 colors, I think? 1 Hour.) I actually really liked the rough version I made, so you're gonna get that one at the end as well, for anyone who also likes the rough one better than the smooth one.
But first... I finally discovered a feature of CSP, so now I am unstoppable and I will NEVER AGAIN have to ask myself "How the fuck did I do that?"
Because now I have EVIDENCE. Now curious friends, followers, and my forgetful ass, can watch the full process of how I made a thing. Including what references I used so it's clear how much is iterative and how much I am drawing directly from the visual reference. Today I had to do a lot from imagination because I couldn't find an exaggerated splashy egg, but sometimes I really am just making a study and trying to do a one-to-one recreation of a reference. So now y'all get to know all my filthy little secrets. I was intending to grab footage starting with Eggtober 1, 2023 but OBS needs a version of an NVIDIA driver that will absolutely wreck my computer with BSODs because I own a junker apparently. But it turns out CSP (or at least V2, IDK if it was in V1) has a way to capture a speedpaint natively when you create the file.
Now I am unstoppable, powerful. No more taking a break from art when life gets busy and coming back to pieces I drew 10 years ago and wondering "How the hell did I manage that?" I can just check. It's over for all of you. Once I practice anatomy again and start being able to draw shapes and volumes perfectly from imagination, I will become all-powerful. I will ascend. Hell, maybe someone might even pay me if I learn to draw anything that isn't an egg or a meme. XD Radical self-confidence, baby. I can art now, and I have evidence. My horizons are infinite!
And now, hopefully, any baby artists that are just starting out can get an idea of how I do it from this and future pieces so I can pull you all up with me in a bid of apotheosis. For the EGGsthetic! (Aesthetic.)
I wonder which version of this egg @lady-quen's breadbugs will snap up?
And I wonder which one @quezify will like best? My money's on the sketchy one.
I can't tell which I like better honestly. The smooth one is much more "My aesthetic" because it matches how I render eggs but... The rough pencil-y gouache lines you get with light pressure really remind me of how the classic modern quezify eggs look, and I of course only started doing eggs because of the first Eggtober so, like. On the one hand, smooth and painterly look that goes with all but one of my previous eggs (Eggtober 1, 2023 was a study from memory of quezify's style, after all). But on the other hand... dramatic color changes! Texture, shine! Colors that aren't in the actual references! EXPRESSIVENESS. Two different moods on the same egg art and I really dig both of them honestly.
164 notes
Note
As I understand it you work in enterprise computer acquisitions?
TL;DR What's the general vibe for AI accelerating CPUs in the enterprise world for client compute?
Have you had any requests from your clients to help them upgrade their stuff to Core Ultra/Whateverthefuck Point with the NPUs? Or has the corporate world generally shown resistance rather than acquiescence to the wave of the future? I'm so sorry for phrasing it like that I had no idea how else to say that without using actual marketing buzzwords and also keeping it interesting to read.
I know in the enterprise, on-die neural acceleration has been ruining panties the world over (Korea's largest hyperscaler even opted for Intel Sapphire Rapids CPUs over Nvidia's Hopper GPUs, due to poor supply and an inference-performance uplift that wasn't super worth it for them specifically – inference being all that they really cared about), and I'm personally heavily enticed by the new NPU-packing processors from both Team Red and Team We Finally Fucking Started Using Chiplets Are You Happy Now (though in large part for the integrated graphics). But I'm really curious to know, are actual corporate acquisitions folks scooping up the new AI-powered hotness to automagically blur giant pink dildos from the backgrounds of Zoom calls, or is it perceived more as a marketing fad at the moment (a situation I'm sure will change in the next year or so once OpenVINO finds footing outside of Audacity and fucking GIMP)?
So sorry for the extremely long, annoying, and tangent-laden ask, hope the TL;DR helps.
Ninety-eight percent of our end users use their computers for email and browser stuff exclusively; the other two percent use CAD in relatively low-impact ways, so none of them appear to give a shit about increasing their processing power in a really serious way.
Like, corporately speaking the heavy shit you're dealing with is going to be databases and math and computers are pretty good at dealing with those even on hardware from the nineties.
When Intel pitched the Sapphire Rapids processors to us in May of 2023, the only discussion of AI was about improving performance for AI systems and deep learning applications, NOT using on-chip AI to speed things up.
They were discussing their "accelerators," not AI, and in the webinar I attended it was mostly a conversation about the performance benefits of dynamic load balancing and how different "accelerators" would redistribute processing power. This writeup from Intel in 2022 shows how little AI was part of the discussion for Sapphire Rapids.
In August of 2023, this was the marketing email for these processors:
So. Like. The processors are better. But AI is a marketing buzzword.
And yeah every business that I deal with has no use for the hot shit; we're still getting bronze and silver processors and having zero problems, though I work exclusively with businesses with under 500 employees.
Most of the demand that I see from my customers is "please can you help us limp this fifteen year old SAN along for another budget cycle?"
104 notes
Text
Baldur's Gate 3 Screenshot Tutorial
Hi, I decided to make a more in-depth guide for my twitter followers, as I'm super limited in characters and formatting options over there.
For this tutorial, I'll explain how you can enhance your screenshots. I'll divide it into five parts: ReShade, making your screenshots high resolution, camera mods, photography basics, and post-processing. By the end of following all of these steps, you should have something way better than what you started with!
I recommend going through this tutorial and downloading things step-by-step for the first three parts, as it'll help you to quickly identify where you've gone wrong if you have any issues.
1. ReShade
ReShade is a post-processing tool that allows you to change the look of a game with an array of different effects and adjustments. It can be a lot to wrap your head around at first, so if you're not already familiar with using it, I recommend starting off by finding a ReShade preset that speaks to you from this page. The mod authors should explain how to download it. I find 22:20 of this YouTube video helpful as an introduction to ReShade if you're completely new to it. The video is for the Sims 4, but ReShade typically works the same across different games. Now that ReShade is downloaded, we can get depth of field working within it. This step is optional. Depth of field refers to what will be in focus in your screenshot and what will be blurred; it's essentially simulating shooting with a camera, like so:
To get this effect working, you need to follow this tutorial within the ReShade menu.
2. Making Your Screenshots High Resolution
Typically, Baldur's Gate 3 is run in 1920x1080 resolution, or standard HD (unless you have a higher resolution monitor and are running the game in 4K, in which case you can ignore this step if you'd like). This is definitely an acceptable quality, but if you'd like to capture any detail, you're not going to get much out of it. There are two methods for getting a better quality image. The first is hotsampling. Hotsampling is briefly running a game in a much higher resolution than your monitor supports, allowing you to capture screenshots with incredible detail, then bringing it back down to a native playable resolution. To hotsample, you'll need to use either the BG3 camera tool or SRWE. For either of these hotsampling tools, it's important that you've downloaded ReShade, or they will not work.
Once you have either of these downloaded, make sure your game is running in windowed mode. If you have more than one monitor, you need to change your display to show only on one screen. Or again, this will not work.
Next, you're going to want to make sure you have a key set for taking screenshots in ReShade, as well as making sure you like the folder where your screenshots are set to be saved. You can find this in the settings tab. Once you have those set, you're ready to take really HD screenshots!
To do that, you want to set your game's resolution to 2x, or even 3x what it's currently displayed as. Once it's set, your game screen is going to look giant and probably run way off your monitor. This is a sign it's working! Once it looks like this, press the screenshot key you set earlier within ReShade, and there you go, a nice big screenshot should be in the folder you set!
If you don't want to do hotsampling, and you have an Nvidia graphics card, you can download their app, which can take resampled screenshots. It won't be as high quality as hotsampling, but still better than standard HD.
3. Camera Mods
There are two camera mods that I know of for BG3. One is paid, the other is not.
The first one is the Native Camera Tweaks mod. This mod allows you to move the camera around more freely as you're traversing the world, but in cutscenes you'll still be stuck.
The second one is the paid one, but it allows for total freedom within the game, even during cutscenes. This tool is also very helpful for hotsampling. Within this tool, it's very useful to configure your own controls for moving the camera around in game, as well as setting a key you'll remember for pausing the game so you can set up a screenshot. I changed the movement keys to be wasd and the keys to change the angle of my camera side to side/up and down to the arrow keys.
4. Photography Basics
Taking screenshots in a game is a lot like doing photography irl tbh lol, same rules mostly apply. You of course want to do the basics like making sure your subject is in focus, it's not too dark or too light. But some other tips for people not very familiar with taking photos to take note of are:
Make sure if you're taking a photo of a person, the top of their head is within frame
Try to make sure your subject is either front and center or positioned according to the rule of thirds
Pay attention to the lighting, sometimes it's too bright or too dull. Sometimes it's unflattering in certain angles. Lighting will always make a huge difference
5. Post-Processing
You can now leave your screenshot as is, or edit it further with a photo editing software! I recommend using Photopea, as it offers basically everything Photoshop does without the insane price tag. From here you can do whatever you feel is best to enhance your image.
And that's all! If you have any questions, feel free to ask, and if you get stuck anywhere in this tutorial, don't feel bad. A lot of this stuff is just trial and error, but if you're very persistent with it, I promise you'll get these working. Also I would just like to mention that a lot of this stuff applies to taking screenshots in a lot of games! So you can take this knowledge with you elsewhere <3
If you happened to follow all this, please send in an image of your Tav you took!
118 notes
Text
accountability update
There were 23 survey respondents who a) use Voeille water + skyboxes + horizons ("the Big 3") and b) have a ratio of downloads to RAM of less than 50% (e.g., 16gb RAM but no more than 8gb total CC). And yet 18 of them will sometimes experience flashing (9 of them often or severely).
Hood densities across the spectrum. Forced texture memories across the spectrum. 3 of the 5 non-flashers have uint LotSkirtIncrease in their userstartup. Every single one of the non-flashers is on Win10 or Win 11, and four of them have computers two years old or less. 4 Nvidia cards and 1 AMD. One of them uses highest quality lot imposters on a verified reasonably dense neighborhood.
I have now tested 30 different hypothesis correlations and there is simply no definitive pattern. Yes, it's usually better to have less CC and less lot view and more forced texmem. But then you have the user with 32gb RAM and less than 5gb of CC, and lotskirtincrease can't possibly be the only thing tanking them when everything up to and including that seems equal and so many non-flashers are using it without issue.
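For anyone who wants to poke at their own copy of the survey data, this is roughly what I mean by testing a hypothesis correlation; the file and column names below are placeholders for the example, so rename them to match your export.

```python
import pandas as pd

# Hypothetical survey export - columns: ram_gb, cc_gb, flashes, lot_skirt_increase
df = pd.read_csv("survey.csv")

# The "downloads-to-RAM ratio under 50%" group described above.
df["cc_to_ram"] = df["cc_gb"] / df["ram_gb"]
subset = df[df["cc_to_ram"] < 0.5]

# Does flashing track the ratio, or a userstartup tweak like lotskirtincrease?
print(subset.groupby("flashes")["cc_to_ram"].describe())
print(pd.crosstab(subset["flashes"], subset["lot_skirt_increase"]))
```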
I feel like the only thing left to do is start trading DL folders and neighborhoods and seeing if it's your game or their computer.
The report is going to be interesting nerd reading and I will definitely put the best practice tips front and center, such as they are. But it's also going to have a lot of appendices showing that everything is just absolutely random.
63 notes
Note
What is considered both a reasonable and maximum polycount for custom content hair and other types of custom content in The Sims 2 and does it depend on gaming specs? Also your work is great!
Thank you for taking the time to read it.
I, personally, use hair that is under 25K polys unless it's unique and cute. Anything over that is overboard and should get decimated. Any furniture or clothing over 10K is extreme for me.
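If you keep a list of your downloads with their polycounts (pulled from SimPE or wherever), those cutoffs are easy to turn into a quick check; the items and numbers below are just placeholders for illustration.

```python
# My personal cutoffs from above; anything past these gets decimated or skipped.
LIMITS = {"hair": 25_000, "furniture": 10_000, "clothing": 10_000}

# Placeholder data - in practice you'd read the polycounts out of SimPE.
downloads = [
    ("long wavy hair", "hair", 32_000),
    ("sofa recolor", "furniture", 4_200),
    ("sundress", "clothing", 11_500),
]

for name, kind, polys in downloads:
    verdict = "fine" if polys <= LIMITS[kind] else "overboard - decimate it"
    print(f"{name} ({kind}, {polys:,} polys): {verdict}")
```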
As for specs, I'm inclined to believe it's a combination of game limitations, how powerful your computer specs are, and a secret third and fourth thing: your OS, and whether you're a laptop user.
This OS talk is a side tangent, so bear with me:
Big disclaimer that this is all my opinion, not a factual piece. Don't take this as gospel and I'm far from an expert on operating systems, computers, and CC for that matter. I went a little bit insane with the OS talk because you mentioned specs and this has been on my mind for a while 🥴
Every single time I've heard that someone installed TS2 on Linux, they are able to play on maximum settings with a BUNCH of CC for a long time and experience no pink soup or pink soup related crashing. I want to do my own research and play the same heavily detailed lot for the same amount of time on Windows and Linux and compare the differences as well as compare how they use resources differently. If I already did not have an attachment to Photoshop CC 2017, I would have made the switch by now.
Okay so Windows... I've played TS2 on my Asus laptop from 2020 and on my new desktop. Here's the spec difference
Laptop: Intel Core i7-9750H 6 Core Processor, 8 GB RAM, NVIDIA GeForce GTX 1650 (Windows 10)
Desktop: AMD Ryzen 5 2600X Six-Core Processor, 16 GB RAM, NVIDIA GeForce GTX 1080 Ti (Windows 11)
My laptop was really good for its time (I bought it in March 2020), but it was pink soup galore for any cluttered CC lot, even with all of the fixes and GRM edits. My current setup is a mish mosh of my bf's and ex's computer parts and it runs perfectly fine, but I do not play long enough to encounter pink soup. (I have a job and I mainly play to get CC previews these days.) If you noticed, both my CPU and GPU were made before my laptop was sold, and yet it still performs way better. Laptops with top of the line hardware will never be more powerful than PCs with even mid to high level hardware from 5 years ago. Don't forget that laptops will throttle performance to protect themselves from overheating and causing damage.
There is also no difference between installing and playing the game on Windows 10 and Windows 11, except that you should absolutely uninstall OneDrive if you haven't already. There might be some issue if you install with discs, but I don't own the discs.
And as for Mac, I truly believe that Mac is the worst way to experience Sims 2. Between the Super Collection crap, not being able to use third party tools (SimPE, Hair Binner, any other .exe files made to run for Windows), and the file limit that really hits you hard if you download a bunch of CC that you can't merge anyway because CCMerger can't run on Mac. I should say I have never played Sims 2 on a Mac, but this is my opinion after reading about the struggles of other MacOS users online.
The point of this OS tangent? None, really. I'm not trying to persuade you to use Linux or stop using Mac, this is simply what I've noticed and my opinions on the matter. There's millions of variables I did not cover such as DXVK, texture sizes, difference in specs between each OS and user and many other things I am forgetting.
Feel free to correct, add on, extrapolate or whatever. If you have any thoughts, please comment, add it in reblogs, or tag me in your post. I'm very interested in the current topics about high polys, pink soup and big textures for this game.
#spell.txt#cc discussions#my opinion on macs wont change though#sorry mac users#only thing im qualified for in this discussion is my photoshop certificate lmao
17 notes
Note
Question from a guy thats stupid and doesn't know anything about computers. How do i start learning more? Like i would say i know the basics but thats it,do you have a source or something that you use ? (Again sorry if the question is stupid i just don't know)
Not a stupid question at all!
The basic resources are JayzTwoCents, Gamer's Nexus, and LinusTechTips. I've ordered them by reliability there.
When it comes to picking parts, which I suspect you might be struggling with, there's a lot to consider. Websites like PCPartPicker, PC Bottleneck Calculator, and videocardbenchmark are all fantastic resources.
Here are a few things from me to you, though:
1. NVIDIA are evil; if you want a GeForce card, buy refurbished. Also, 40 series cards are WAY overpriced for what you're getting. I'll always recommend AMD's Radeons anyways though, since they have way more VRAM.
2. Intel are selling snake oil right now and I wouldn't buy anything from them. AMD's Ryzen chips would be my choice even if Intel weren't being scammers right now.
3. ASUS have been consistently fucking over their customers for a few months, so don't buy from them (especially their motherboards and graphics cards). JayzTwoCents literally banned them as a sponsor because of their horseshit and faulty products.
4. Research manufacturers just as much as you research the parts themselves! MSI is the king of the game right now, but everyone has their strengths and weaknesses. For example, Gigabyte uses a concerning amount of ABS plastic, ASUS are pulling an Apple, and Zotac can be pretty inconsistent.
5. If you're buying a pre-built (which I wouldn't recommend, but you do you), research the builder and read as many reviews as possible. I know Build Redux is big right now because they sponsored LTT a bunch, but their shipping materials are cheap and reviews say that computers are being delivered broken. Digital Storm overprice the FUCK out of everything, iBuyPower are filthy liars, and Alienware pre-builts are built so odd that it'll be hard to do maintenance or upgrades later on.
Oh, and take everything with a grain of salt. Things change frequently, so there's a decent chance that certain brands or products are better or worse than a few months ago.
And if you have specific questions, ask them specifically! "What graphics card should I buy?" Is a very different question than "what's a good graphics card to pair with my cpu?"
76 notes
Note
Are we fucking doomed with AI? At this point more and more people are ignoring the negative implications (stealing art from actually talented people), sooner than later all content is going to be generated by a bunch of no talent hacks who couldn't think an original thought if their life depended on it. Burn it all down, this shit fucking sucks.
I think most of that shit is going to blow up and send companies like Nvidia and Microsoft crashing to earth. There will be uses for that technology but the way it’s being pitched right now to investors feels really unrealistic. When you add in all the NFT grifters who migrated right over to becoming AI grifters you’ve just got a lot of hype that probably won’t translate into actual user interest.
It’s sad watching companies and people who have been around long enough to know better jumping into this shit because it’s just the current hot thing.
43 notes