Text
Quick follow-up on yesterday's post. I am vindicated. I did test out Jellyfin with transcoding yesterday and it completely saturated my media server hardware, something that doesn't come close to happening with my methods. Doing it my way I'm able to serve multiple HD streams at once; Jellyfin choked on one SD stream because it insisted on transcoding rather than just passing the file through.
#it's an older processor#but it's a 4 core i5 sitting around 3.1 GHz without turbo#It does not have a GPU which is probably where the issue was#and it's old enough that it likely doesn't have the intel instruction sets to speed up transcoding#but still it was an easy 8x increase in normal load
Note
Oooh, what about Journey? I think the sand probably took a lot to pull off
it did!! i watched a video about it, god, like 6 years ago or something and it was a very very important thing for them to get just right. this is going to be a longer one because i know this one pretty extensively
here's the steps they took to reach it!!
and heres it all broken down:
so first off comes the base lighting!! when it comes to lighting things in videogames, a pretty common model is the lambert model. essentially you get how bright things are just by comparing the normal (the direction your pixel is facing in 3d space) with the light direction (so if your pixel is facing the light, it returns 1, full brightness. if the light is 90 degrees perpendicular to the pixel, it returns 0, completely dark. and pointing even further away you start to go negative. facing a full 180 gives you -1. thats dot product baybe!!!)
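if you want to see it in code, a lambert term is literally just a dot product (this is a toy sketch, not journey's actual shader code):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """dot product of the surface normal and the direction toward the light.
    1.0 = facing the light, 0.0 = perpendicular, -1.0 = facing directly away."""
    n = normalize(normal)
    l = normalize(light_dir)
    return sum(a * b for a, b in zip(n, l))

print(lambert((0, 1, 0), (0, 1, 0)))   # facing the light -> 1.0
print(lambert((1, 0, 0), (0, 1, 0)))   # perpendicular -> 0.0
print(lambert((0, -1, 0), (0, 1, 0)))  # facing away -> -1.0
```

(in a real shader the negative values just get clamped to zero, since a surface can't be "negative lit")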
but they didnt like it. so. they just tried adding and multiplying random things!!! literally. until they got the thing on the right which they were like yeah this is better :)
you will also notice the little waves in the sand. all the sand dunes were built out of a heightmap (where things lower to the ground are closer to black and things higher off the ground are closer to white). so they used a really upscaled version of it to map a tiling normal map on top. they picked the map automatically based on how steep the sand was, and which direction it was facing (east/west got one texture, north/south got the other texture)
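my rough guess at how that steepness-based picking could work, in toy form (the numbers and names are invented, not their actual implementation):

```python
# the heightmap gradient gives the slope direction at each point; whichever
# axis dominates decides which of the two tiling normal maps to use
def pick_texture(heightmap, x, y):
    dx = heightmap[y][x + 1] - heightmap[y][x - 1]  # east-west slope
    dy = heightmap[y + 1][x] - heightmap[y - 1][x]  # north-south slope
    return "east_west" if abs(dx) >= abs(dy) else "north_south"

slope_east = [[0.0, 0.5, 1.0]] * 3  # a dune face rising toward the east
print(pick_texture(slope_east, 1, 1))  # -> east_west
```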
then its time for sparkles!!!! they do something very similar to what i do for sparkles, which is essentially, they take a very noisy normal map like this and if you are looking directly at a pixels direction, it sparkles!!
this did create an issue, where the tops of sand dunes look uh, not what they were going for! (also before i transition to the next topic i should also mention the "ocean specular" where they basically just took the lighting equation you usually use for reflecting the sun/moon off of water, and uh, set it up on the sand instead with the above normal map. and it worked!!! ok back to the tops of the sand dunes issue)
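the sparkle test itself can be sketched like this (the threshold number is made up, but the idea is just "does the grainy normal point at the camera"):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def is_sparkle(to_camera, noisy_normal, threshold=0.997):
    # sparkle only when the grainy per-pixel normal points almost
    # exactly back at the camera
    return dot(to_camera, noisy_normal) > threshold

print(is_sparkle((0, 0, 1), (0.0, 0.0, 1.0)))    # aligned -> True
print(is_sparkle((0, 0, 1), (0.3, 0.0, 0.954)))  # off-axis -> False
```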
so certain parts just didnt look as they intended and this was a result of the anisotropic filtering failing. what is anisotropic filtering you ask ?? well i will do my best to explain it because i didnt actually understand it until 5 minutes ago!!!! this is going to be the longest part of this whole explanation!!!
so any time you are looking at a videogame with textures, those textures are generally coming from squares (or other Normal Shapes like a healthy rectangle). but ! lets say you are viewing something from a steep angle
it gets all messed up!!! so howww do we fix this. well first we have to look at something called mip mapping. this is Another thing that is needed because video game textures are generally squares. because if you look at them from far away, the way each pixel gets sampled, you end up with some artifacting!!
so mip maps essentially just are the original texture, but a bunch of times scaled down Properly. and now when you sample that texture from far away (so see something off in the distance that has that texture), instead of sampling from the original which might not look good from that distance, you sample from the scaled down one, which does look good from that distance
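a toy mip chain in code (simple box-filter downsampling — real GPUs generate these for you):

```python
# each mip level averages 2x2 blocks of the level above it
def next_mip(tex):
    size = len(tex) // 2
    return [
        [
            (tex[2*y][2*x] + tex[2*y][2*x+1] + tex[2*y+1][2*x] + tex[2*y+1][2*x+1]) / 4
            for x in range(size)
        ]
        for y in range(size)
    ]

def mip_chain(tex):
    levels = [tex]
    while len(levels[-1]) > 1:
        levels.append(next_mip(levels[-1]))
    return levels

base = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
chain = mip_chain(base)
print(len(chain))        # 4x4, 2x2, 1x1 -> 3 levels
print(chain[-1][0][0])   # the checkerboard averages out to 0.5
```

notice how the sharp checker pattern smooths out to flat grey as it shrinks — that smoothing is exactly what causes the sparkle problem later on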
ok. do you understand mip mapping now. ok. great. now imagine you are a GPU and you know exactly. which parts of each different mip map to sample from. to make the texture look the Absolute Best from the angle you are looking at it from. how do you decide which mip map to sample, and how to sample it? i dont know. i dont know. i dont know how it works. but thats anisotropic filtering. without it looking at things from a steep angle will look blurry, but with it, your GPU knows how to make it look Crisp by using all the different mip maps and sampling them multiple times. yay! the more you let it sample, the crisper it can get. without is on the left, with is on the right!!
ok. now. generally this is just a nice little thing to have because its kind of expensive. BUT. when you are using a normal map that is very very grainy like the journey people are, for all the sparkles. having texture fidelity hold up at all angles is very very important. because without it, your textures can get a bit muddied when viewing it from any angle that isnt Straight On, and this will happen
cool? sure. but not what they were going for!! (16 means that the aniso is allowed to sample the mip maps sixteen times!! thats a lot)
but luckily aniso 16 allows for that pixel perfect normal map look they are going for. EXCEPT. when viewed from the steepest of angles. bringing us back here
so how did they fix this ? its really really clever. you guys remember mip maps right. so if you have a texture. and have its mip maps look like this
that means that anything closer to you will look darker, because its sampling from the biggest mip map, and the further away you get, the lighter the texture is going to end up. EXCEPT !!!! because of anisotropic filtering. it will do the whole sample other mip maps too. and the places where the anisotropic filtering fails just so happen to be the places where it starts sampling the furthest texture. making the parts that fail that are close to the camera end up as white!!!
you can see that little ridge that was causing problems is a solid white at the tip, when it should still be grey. so they used this and essentially just told it not to render sparkles on the white parts. problem solved
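the trick, sketched out (the cutoff value is my invention):

```python
# author the debug mip chain as darker-to-lighter greys: level 0 (closest)
# is black, the smallest mip is pure white
def debug_mip_chain(levels):
    return [i / (levels - 1) for i in range(levels)]

# any sample that comes back near-white means "filtering gave up here",
# so the sparkle term gets masked off at that pixel
def allow_sparkles(sampled_grey, cutoff=0.9):
    return sampled_grey < cutoff

chain = debug_mip_chain(5)
print(chain)                     # [0.0, 0.25, 0.5, 0.75, 1.0]
print(allow_sparkles(chain[1]))  # near the camera, filtering held up -> True
print(allow_sparkles(chain[4]))  # white = filtering failed -> False
```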
we arent done yet though because you guys remember the mip maps? well. they are causing their own problems. because when you shrink down the sparkly normal map, it got Less Sparkly, and a bit smooth. soooo . they just made the normal map mip maps sharper (they just multiplied them by 2. this just Worked)
the Sharp mip maps are on the left here!!
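my guess at the multiply-by-2 sharpening, applied to an encoded normal map where 0.5 means "flat" (so its the deviation from flat that gets doubled, then clamped back into range):

```python
def sharpen_mip(mip, factor=2.0):
    return [
        [min(1.0, max(0.0, 0.5 + (v - 0.5) * factor)) for v in row]
        for row in mip
    ]

blurry = [[0.4, 0.6], [0.55, 0.45]]
sharp = [[round(v, 2) for v in row] for row in sharpen_mip(blurry)]
print(sharp)  # [[0.3, 0.7], [0.6, 0.4]] -- every bump exaggerated
```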
and uh... thats it!!!! phew. hope at least some of this made sense
Text
Rant about generative AI in education and in general under the cut because I'm worried and frustrated and I needed to write it out in a small essay:
So, context: I am a teacher in Belgium, Flanders. I am now teaching English (as a second language), but have also taught history and Dutch (as a native language). All in secondary education, ages 12-16.
More and more I see educational experts endorse the use of ai in education, and of course the most-used tools are the free, generative ones. Today, one of the colleagues responsible for IT at my school went to an educational lecture where the use of ai was once again vouched for.
Now their refrain is that it should always be used in a responsible manner, but the issue is... can it be?
1. Environmentally speaking, ai has been a nightmare. Not only does it have an alarming impact on emission levels, but also on the toxic waste that's left behind. Not to mention the scarcity of GPUs caused by the surge of ai in the past few years. Even sources that would vouch for ai have raised concerns about the impact it has on our collective health. sources: here, here and here
2. Then there's the issue with what the tools are trained on and this in multiple ways:
Many of the free tools that the public uses are trained on content available across the internet. However, it is at this point common knowledge (I'd hope) that most creators of the original content (writers, artists, other creative content creators, researchers, etc.) were never asked for permission, and so it has all been stolen. Many social media platforms will allow ai training on their content without explicitly telling the user-base, or will push it as the default setting and make it difficult for their user-base to opt out. Deviantart, for example, lost much of its reputation when it implemented such a policy. It had to backtrack in 2022 because of the overwhelming backlash. The problem then is that since the content has been ripped from its context and is no longer made by a human, many governments can no longer see it as copyrighted. Which, yes, luckily also means that ai users are legally often not allowed to pass off ai output as 'their own creation'. Sources: here, here
Then there's the working of generative ai in general. As said before, it simply rips words or image fragments from their original, nuanced context and meshes them together, without the user being able to accurately trace where the info is coming from. A tool like ChatGPT is not a search engine, yet many people use it that way without realising it is not the same thing at all. More on the working of generative ai in detail. Because of how it works, there is always a chance for the output to be biased and/or inaccurate. If a tool has been trained on social media sources (which ChatGPT, for example, is), then its responses can easily be skewed to the demographic it's been observing. Bias is an issue in most sources when doing research, but if you have the original source you also have the context of the source. Ai strips that original context away from the user, so bias can be overlooked and go unnoticed much more easily. Source: here
3. Something my colleague mentioned from the lecture is that ai tools can be used to support students' learning.
Let me start off by saying that I can understand why there is an appeal to ai when you do not know much about the issues I have already mentioned. I am very aware it is probably too late to fully stop the wave of ai tools being published.
There are certain types of ai that can indeed help with accessibility, such as text-to-voice or the reverse for people with disabilities (let's hope the voice was ethically begotten).
But many of the other uses mentioned in the lecture I have concerns with. They have to do with recognising learning, studying and wellbeing patterns of students. Not only do I doubt it is really possible to data-fy the complexity of each and every single student, who is still actively developing as a young person; it also poses privacy risks should the data ever be compromised. Not to mention that ai is often still faulty and, as it is not a person, will still make mistakes when faced with how unpredictable a human brain can be. We do not all follow predictable patterns.
The lecture stated that ai tools could help with neurodivergency 'issues'. Obviously I do not speak for others and this next part is purely personal opinion, but I do think it important to nuance this: as someone with auDHD, no ai-tool has been able to help me with my executive dysfunction in the long-term. At first, there is the novelty of the app or tool and I am very motivated. They are often in the form of over-elaborate to-do lists with scheduled alarms. And then the issue arises: the ai tries to train itself on my presented routine... except I don't have one. There is no routine to train itself on, because that is my very problem I am struggling with. Very quickly it always becomes clear that the ai doesn't understand this the way a human mind would. A professionally trained in psychology/therapy human mind. And all I was ever left with was the feeling of even more frustration.
In my opinion, what would help far more than any ai tool would be funding mental health care and making visits to a therapist, psychiatrist or coach covered by health care the same way my doctor's visits are, where I only pay 5 euros and my health care provider pays the rest (in Belgium). This would make mental health care much more accessible and would have a greater impact than faulty ai tools.
4. It was also said that ai could help students with creative assignments and preparing for spoken interactions both in their native language as well as in the learning of a new one.
I wholeheartedly disagree. Creativity in its essence is about the person creating something from their own mind and putting the effort in to translate those ideas into their medium of choice. Stick figures on lined course paper are more creative than letting a tool like Midjourney generate an image based on stolen content. How are we teaching students to be creative when we allow them to not put a thought in what they want to say and let an ai do it for them?
And since many of these tools are also faulty and biased in their content, how could they accurately replace conversations with real people? Ai cannot fully understand the complexities of language and all the nuances of the contexts around it. Body language, word choice, tone, volume, regional differences, etc.
And as a language teacher, I can truly say there is nothing more frustrating than wanting to assess the writing level of my students, giving them a writing assignment where they need to express their opinion in two tiny paragraphs... and getting an ai response back. Before anyone comes at me saying that my students may simply be very good at English: indeed, some students might be, but my current students are not. They are precious, but their English skills are very flawed. It is very easy to see whether they wrote it or ChatGPT did. It is not only frustrating to be unable to trust part of your students' honesty, knowing they learned nothing from the assignment because you can't give any feedback; it is almost offensive that they think I wouldn't notice.
5. Apparently, it was mentioned in the lecture that in schools where ai is banned currently, students are fearful that their jobs would be taken away by ai and that in schools where ai was allowed that students had much more positive interactions with technology.
First off, I was not able to see the source and data that this statement was based on. However, I personally cannot shake the feeling there's a data bias in there. Of course students will feel more positively towards ai if they're not told about all the concerns around it.
Secondly, the fact that the lecture (reportedly) framed the fear of losing your job to ai as untrue is... infuriating. Because it already is becoming a reality. Let's not forget what partially caused the SAG-AFTRA strike in 2023. Corporations see an easy (read: cheap) way to get marketable content by using ai at the cost of creative professionals. Unregulated ai use by businesses causing the loss of jobs for real-life humans is very much a threat. Dismissing this is basically lying to young students.
6. My conclusion:
I am frustrated. There are constant calls for us, as teachers, to educate more about ai and its responsible use. At the same time, however, the many concerns and issues around most of the accessible ai tools are swept under the rug and not actively talked about.
I find the constant surging rise of generative ai everywhere very concerning and I can only hope that more people will start seeing it too.
Thank you for reading.
Note
I think consoles should probably die but I think the issue is that the 350 dollar pc you screenshotted like. Is bad lol. It won't run new games flat out. Its like 800-1000 dollars for a midrange pc that does what you need it to ie new games and the lower price is with like months of scalping deals. Consoles still have a niche of like 500 dollars and it will do what it does for 10 years and make games look decent, which like technical aptitude aside and such like is a valuable niche. (Note: I have built pcs and will never do so myself again because it sounds easy enough until like your ram isn't compatible with your motherboard's bios despite it listing it as such and amd sends you 3 dead cpus in a row and your gpu has removed all antialiasing from games due to the silicon lottery etc etc. Never again lol, I'm paying others to deal with this from now on.)
the price on the one i posted (courtesy of goons who were talking about this exact thing by coincidence the other day) is also bc its refurbed, which helps. the dead CPU thing i hate so fucking much. i went through the same shit as you with motherboards and also had to return them 3x. since when do we live in a universe where you buy a product costing over 100 dollars and are supposed to anticipate it wont work?? insane!!!! but on the other hand i can switch shit out whenever and if it breaks its not a huge hassle.
i realized when i woke up from nap #3 today (woof lol) that part of the problem i have with things in general is that graphical fidelity is inexplicably the marker by which we determine quality. a console that can render a photo realistic human flawlessly is worthless if the games revolving around that feature and that feature only are ass. like, the only positive thing ive heard unprompted about the ps5 is that the controller is extremely cool but i will be fucking damned if i could name a single game that wasnt a tech demo that used it effectively. because they've locked themselves into this corner of boasting about having the best graphics, they have to develop games that primarily feature 4k super ultra mega graphics to justify the expense of the console. does this make sense. like now they're in this stupid loop where game production revolves around a feature they didn't appear to actually meaningfully plan for.
also im sick in the head, bc my first thought when people were like "but the computer won't run on ultra high settings!" was "so? turn them down?". but then i remembered all of the games ive played in the last 10 years that looked like ass because the devs didnt think that they should try to make the game attractive or maintain any of the atmosphere for people with normal computers.
Text
Share Your Anecdotes: Multicore Pessimisation
I took a look at the specs of new 7000 series Threadripper CPUs, and I really don't have any excuse to buy one, even if I had the money to spare. I thought long and hard about different workloads, but nothing came to mind.
Back in university, we had courses about map/reduce clusters, and I experimented with parallel interpreters for Prolog, and distributed computing systems. What I learned is that the potential performance gains from better data structures and algorithms trump the performance gains from fancy hardware, and that there is more to be gained from using the GPU or from re-writing the performance-critical sections in C and making sure your data structures take up less memory than from multi-threaded code. Of course, all this is especially important when you are working in pure Python, because of the GIL.
The performance penalty of parallelisation hits even harder when you try to distribute your computation between different computers over the network, and the overhead of serialisation, communication, and scheduling work can easily exceed the gains of parallel computation, especially for small to medium workloads. If you benchmark your Hadoop cluster on a toy problem, you may well find that it's faster to solve your toy problem on one desktop PC than a whole cluster, because it's a toy problem, and the gains only kick in when your data set is too big to fit on a single computer.
The new Threadripper got me thinking: Has this happened to somebody with just a multicore CPU? Is there software that performs better with 2 cores than with just one, and better with 4 cores than with 2, but substantially worse with 64? It could happen! Deadlocks, livelocks, weird inter-process communication issues where you have one process per core and every one of the 64 processes communicates with the other 63 via pipes? There could be software that has a badly optimised main thread, or a badly optimised work unit scheduler, and the limiting factor is single-thread performance of that scheduler that needs to distribute and integrate work units for 64 threads, to the point where the worker threads are mostly idling and only one core is at 100%.
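As a toy model of that hypothetical badly-optimised scheduler (all constants invented for illustration, not measurements from any real program):

```python
# each work unit costs `work` seconds on a worker, but a single-threaded
# scheduler pays a per-unit cost that grows with core count (locking,
# bookkeeping, merging results from more workers)
def runtime(units, cores, work=1.0, dispatch=0.01, contention=0.002):
    scheduler_time = units * dispatch * (1 + contention * cores * cores)
    worker_time = units * work / cores
    return max(scheduler_time, worker_time)  # whichever side is the bottleneck

for cores in (1, 2, 4, 16, 64):
    print(cores, round(runtime(1000, cores), 1))
# runtime keeps dropping up to 16 cores, then the serial scheduler
# becomes the bottleneck and 64 cores is substantially slower than 16
```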
I am not trying to blame any programmer if this happens. Most likely such software was developed back when quad-core CPUs were a new thing, or even back when there were multi-CPU-socket mainboards, and the developer never imagined that one day there would be Threadrippers on the consumer market. Programs from back then, built for Windows XP, could still run on Windows 10 or 11.
In spite of all this, I suspect that this kind of problem is quite rare in practice. It requires software that spawns one thread or one process per core, but which is deoptimised for more cores, maybe written under the assumption that users have two to six CPU cores; a user who can afford a Threadripper, and needs a Threadripper; and a workload where the problem is noticeable. You wouldn't get a Threadripper in the first place if it made your workflows slower, so that hypothetical user probably has one main workload that really benefits from the many cores, and another that doesn't.
So, has this happened to you? Do you have a Threadripper at work? Do you work in bioinformatics or visual effects? Do you encode a lot of video? Do you know a guy who does? Do you own a Threadripper or an Ampere just for the hell of it? Or have you tried to build a Hadoop/Beowulf/OpenMP cluster, only to have your code run slower?
I would love to hear from you.
Note
have you thought about a docked laptop setup? i have a lot of my own energy/disability issues and i'm looking into it for when i can upgrade. getting a good laptop that can do most things, but having the ability to dock it to an external gpu and a monitor setup when you want to do work or play big games at your desk is like. one of the better ways to get a split setup that lets you move to bed when you're low energy without much hassle
money's the main thang for sure. i have a solid desktop setup, so i don't really have to worry about performance or anything like that.
portability for digital art is a little tricky for me just 'cause i am very particular about how stuff feels. it's a sensory thing, and when it's off, drawing feels like 7000 pins in my wrist and arm (mentally) lol. so if my brain is in cintiq mode, it's wanting everything sensory about my cintiq - how it feels to push the pen on the screen, how much glide/friction there is, etc. also cursor/hover stuff, which doesn't exist on the ipad i have.
it also does not help that my cintiq is a 13hd, so it uses wacom's lovely (awful) 3-in-1 cable, which is already finicky enough without daisychaining usb/hdmi extenders lol
so usually i'll just swap to my ipad, if the sensory overlords deem it acceptable. anddd that brings up the other nested issue:
my ipad is a 6th gen ipad, the first non-pro one that can use the pencil. it's at the point where i can't have another app open while drawing whether it's clip studio or procreate. it's just old and i've used the hell out of it. it also doesn't support hover, so another sensory problem.
honestly if i could afford it, i'd probably just get the biggest, newest ipad pro. it has hover capability and would be comparable to my cintiq size wise. i could use this w/astropad and be able to hop to and from my desk without issue for the most part. also the novelty of a new toy could help push me through the adjustment period of getting used to going back and forth lol.
i'd be able to use it more dependably for drawing wherever, as well as serving most needs i'd want out of a laptop.
buuuut i am still in a perpetual state of not being able to afford my basics, let alone think about any technological upgrades/changes. sooo i just have to tell my autistic ass to calm down and wait it out, hopefully i can address it before i die!! :D
(ty for the suggestion btw!! i hope you find something that works for you, too!)
Text
A lot of it can! I built a stupidly overpowered computer given that the game I played the most in the first month or two was Sid Meier's Alpha Centauri from 1999. Windows 11 actually does better with SMAC than Windows 10 did; it's only crashed once! I can also run Creative Suite CS2, which I OWN, on this system. Photoshop from 2004 doesn't quite know what to do with multithreading, but it's actually still really fast, and the 64 gigs of RAM doesn't hurt. At the moment I have NMS running in the background while I have a gadzillion firefox tabs open, and I could easily watch a video like this and be fine. One of the upsides of holding on to old software is that where it can take advantage of the new hardware, it really does run very well and you get actual uplift. The issue is that a lot of people lost the thread of Intel's naming scheme and haven't upgraded their computer, or they bought something with stupidly little RAM (16 gigs is probably okay, but 32 gigs is what most gamers will be happier with). I bought a little word-processing toaster of a netbook in 2020-ish or 2021 which had FOUR gigs of RAM, but there was a huge tech shortage at the time and it can do what I needed it to do (run Scrivener) just fine. It cannot handle a lot of tabs open and still do, you know, the operating system. 8 gigs is sluggish because of Windows bloat.
And yes, I have a fervent desire for two things in programming:
That programmers optimize their programs to run on a wide variety of systems with reasonable speeds
That programmers enable their programs to use things like multithreading and large amounts of available RAM *if there is excess capacity readily available* to speed up necessary functions that take significant time. Like, loading screens and transitions shouldn't take much time at all on my system, but the cpu is sitting there at like 4% utilization while the framerate drops to 30fps on a LOADING screen because they just had to animate a gadzillion star systems through the gpu alone (looking at you NMS.)
What I mean by that is that, for example, in the gaming sector, there are a lot of companies who are like "How much eye candy can we stuff in here to meet our arbitrary (stupidly low) framerate goal" and most of the "eye candy" is stuff most observers wouldn't know or recognize from a hole in the ground. Yes, I'm talking about Starfield. I didn't understand why they'd accept a framerate of 30fps on console until I discovered they'd artificially locked both Skyrim and Fallout 4 ON THE PC to like 30 or 40 fps. And then tied the physics to framerate, completely unnecessarily (as in you can change this behavior with like two or three ini edits.) Then there's games like Baldur's Gate which was done thoughtfully and can run on a huge range of hardware. I had to return one Civilization game because they hadn't accounted for an aging fanbase and high resolution monitors and I flat out couldn't read the tiny text to play the game.
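The physics-tied-to-framerate thing is the textbook version of this mistake. A generic sketch (not Bethesda's actual code, just the failure mode):

```python
# moving at `speed` units/sec for one second: a delta-time step is
# fps-independent, while a step tuned for an assumed 30 fps runs
# twice as fast when the game renders at 60 fps
def travel(fps, speed=10.0, frame_locked=False):
    step = speed / 30.0 if frame_locked else speed / fps
    pos = 0.0
    for _ in range(fps):  # simulate one second of frames
        pos += step
    return pos

print(round(travel(30), 3))                     # 10.0
print(round(travel(60), 3))                     # 10.0 -> same distance, any fps
print(round(travel(60, frame_locked=True), 3))  # 20.0 -> double speed at 60 fps
```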
But yeah, we have this neverending leapfrogging bloat that goes on, where users try to upgrade to get things to work better and companies decide to fill in the new overhead with data mining background tasks and "user experience optimizations" that are a ridiculous rat race that in no way enhances the user experience.
Anyway, I'm not opposed to upgrades but like, do them sensibly and if you can, learn to build your own stuff so that you aren't beholden to the anticonsumer tactics most major computer system integrators (dell, etc.) use to get as much of your money as possible while giving you the least in exchange.
we should globally ban the introduction of more powerful computer hardware for 10-20 years, not as an AI safety thing (though we could frame it as that), but to force programmers to optimize their shit better
Text
What Is a Neural Processing Unit (NPU)? How Does It Work?
What is a Neural Processing Unit?
A neural processing unit (NPU) mimics how the brain processes information. NPUs excel at deep learning, machine learning, and AI neural network workloads.
In contrast to general-purpose central processing units (CPUs) or graphics processing units (GPUs), NPUs are designed to accelerate AI operations and workloads, such as computing neural network layers made up of scalar, vector, and tensor arithmetic.
NPUs are often referred to as AI chips or AI accelerators and are typically used in heterogeneous computing designs that combine multiple processor types (such as CPUs and GPUs). In most consumer applications, including laptops, smartphones, and mobile devices, the NPU is integrated with other coprocessors on a single semiconductor microchip known as a system-on-chip (SoC). Large data centers, however, can use standalone NPUs connected directly to a system's motherboard.
By adding a dedicated NPU, manufacturers are able to offer on-device generative AI programs that can execute AI workloads, AI applications, and machine learning algorithms in real time with comparatively low power consumption and high throughput.
Key features of NPUs
NPUs excel at tasks that call for low-latency parallel computing, such as deep learning algorithms, speech recognition, natural language processing, photo and video processing, and object detection.
The following are some of the main characteristics of NPUs:
Parallel processing: NPUs can break complex problems into smaller ones and work on them simultaneously, letting the processor execute several neural network operations at once.
Low-precision arithmetic: NPUs frequently support 8-bit (or lower) operations to reduce computational complexity and improve energy efficiency.
High-bandwidth memory: Many NPUs include high-bandwidth on-chip memory to efficiently handle AI processing tasks that involve large datasets.
Hardware acceleration: Advances in NPU design have incorporated hardware acceleration techniques such as systolic array architectures and enhanced tensor processing.
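To illustrate the low-precision arithmetic mentioned above, here is a toy symmetric int8 quantization scheme in Python. Real NPU quantization pipelines use per-channel scales, zero points, and calibration; this sketch only conveys the gist:

```python
# quantize float weights to int8 with a single scale factor, then dequantize;
# the small values survive with minor rounding error at a quarter of the storage
def quantize(values):
    scale = max(abs(v) for v in values) / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.31, -1.27, 0.004, 0.9]
q, scale = quantize(weights)
approx = dequantize(q, scale)
print(q)                             # [31, -127, 0, 90]
print([round(a, 3) for a in approx]) # close to the original weights
```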
How NPUs work
Neural processing units, which are modeled on the neural networks of the brain, work by mimicking the behavior of human neurons and synapses at the circuit level. This makes it possible to execute deep learning instruction sets in which a single instruction completes the processing of a group of virtual neurons.
Unlike conventional processors, NPUs are not built for exact calculations. Rather, they are designed to solve problems and can improve over time by learning from different inputs and data types. By utilizing machine learning, AI systems with NPUs can produce personalized solutions more quickly and with less manual programming.
One notable aspect of NPUs is their parallel processing capability, which lets them accelerate AI operations by relieving high-capacity cores of the burden of handling many jobs. An NPU includes dedicated modules for decompression, activation functions, 2D data operations, and multiplication and addition. The multiplication-and-addition module carries out matrix multiplication and addition, convolution, dot products, and other operations relevant to neural network processing.
Where a conventional processor might need thousands of instructions to accomplish this kind of neuron processing, an NPU may be able to perform a comparable function with a single one. An NPU also merges computation and storage for greater operational efficiency through synaptic weights: fluid computational variables assigned to network nodes that signal the probability of a "correct" or "desired" output and that can adjust, or "learn", over time.
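As a toy illustration of synaptic weights that "learn" over time, here is a single artificial neuron adjusting its weights to learn the OR function. This is the classic perceptron sketch, not any specific NPU's mechanism:

```python
def step(x):
    return 1 if x > 0 else 0

def train(samples, lr=0.1, epochs=20):
    # weights and bias start at zero and shift a little on every mistake
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```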
Testing has revealed that some NPUs can outperform a comparable GPU by more than 100 times while using the same amount of power, even though NPU research is still ongoing.
Key advantages of NPUs
NPUs are not intended to replace traditional CPUs and GPUs. Rather, an NPU's architecture complements both designs, offering unmatched and more efficient parallelism and machine learning. When paired with CPUs and GPUs, NPUs provide a number of significant benefits over conventional systems; although they are best suited to specific workloads, they can enhance general operations as well.
Among the main benefits are the following:
Parallel processing
As noted above, NPUs are able to decompose complex problems into smaller ones and solve them in parallel. The key is that, even though GPUs are also very good at parallel processing, an NPU's specialized design can outperform a comparable GPU while using less energy and taking up less space.
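As a toy illustration of that decomposition (simulated here with threads rather than real MAC units, using our own function names), note how each row of a matrix product is an independent sub-problem that can be dispatched in parallel:

```python
# Each output row depends only on one row of A, so rows are
# independent sub-problems that parallel hardware can chew through.
from concurrent.futures import ThreadPoolExecutor

def row_times_matrix(row, B):
    """One independent sub-problem: compute a single output row."""
    return [sum(r * B[k][j] for k, r in enumerate(row))
            for j in range(len(B[0]))]

def parallel_matmul(A, B, workers=4):
    """Dispatch each row to a worker, as an NPU would to its MAC units."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda row: row_times_matrix(row, B), A))

print(parallel_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```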
Enhanced efficiency
NPUs can carry out comparable parallel processing with significantly higher power efficiency than GPUs, which are frequently used for high-performance computing and AI workloads. As AI and other high-performance computing become more prevalent and energy-demanding, NPUs offer a practical way to reduce power consumption.
Multimedia data processing in real time
NPUs are made to process and react more effectively to a wider variety of data inputs, such as speech, video, and graphics. When response time is crucial, applications such as wearables, robotics, and Internet of Things (IoT) devices with NPUs can offer real-time feedback, reducing operational friction and delivering crucial feedback and solutions.
Neural Processing Unit Price
Smartphone NPUs: Built into smartphones, which usually cost between $800 and $1,200 for high-end models.
Edge AI NPUs: Google Edge TPU and other standalone NPUs cost $50–$500.
Data Center NPUs: The NVIDIA H100 costs $5,000–$30,000.
Read more on Govindhtech.com
#NeuralProcessingUnit#NPU#AI#NeuralNetworks#CPUs#GPUs#artificialintelligence#News#Technews#Technology#Technologynews#Technologytrends#govindhtech
0 notes
Text
GAMERSMENU
This is our main gaming PC build guide: the set of parts we'd recommend to anyone wanting to build a new system that balances price and performance. Though I will admit that it makes for somewhat discouraging reading at present, all thanks to the continuing GPU drought.
Things do appear to be improving, and you can buy practically everything on this list at a normal price. Graphics cards, however, are still terribly overpriced whenever they're actually available. Such is life, I suppose.
Despite this, we're looking at a system with a target price of around $1,000, and that's where the rest of our build sits alongside the intended $400 Nvidia GeForce RTX 3060 Ti we'd recommend for this level of system. And while it's possible to get everything besides the GPU today, the graphics card really is the beating heart of any gaming PC, and that makes it hard to recommend a full build without basing your new rig around a GPU.
Looking to build your best gaming PC this 2021? This guide has the best PC builds at various budgets, put together based on the best-performing hardware per dollar spent. Building your own gaming PC has many advantages, including the ability to modify and customize your PC, a greater appreciation for your machine, and the fact that it's genuinely fun. Even though building your own PC can be more budget-friendly, we all have a budget to stick to, which is where the following PC builds come into play.
The best gaming PC builds you see below are updated every single month and are split into the most popular budget and gaming-performance categories, letting you easily plan your next PC without the hassle of doing all of the research yourself.
With modern PC games advancing at such a fast rate, it's no surprise that there are many titles most cookie-cutter PCs (cheap pre-built systems) can barely handle. And as PC gamers, we like to have and experience the best: we like to play our games at the highest settings possible, with the highest framerate possible (and with as many RGB lights as possible).
Fortunately, these days even a budget gaming PC will let you run most games on higher settings on a reasonable 1080p screen. (Although in this guide we'll talk about high-end PCs rather than budget-friendly systems.)
For those of you who just want to get straight into ordering the parts for your system, I've put together five different pre-made parts lists ($1,000, $1,250, $1,500, $1,750, and $2,000) so that you can skip the part-selection process and get straight into building your new gaming PC for 2021.
These systems are updated with the top parts at the best prices regularly. So if you're looking at these builds, you can bet they'll give you maximum performance for the budget you've set. And if you're looking for a similarly priced pre-built gaming PC, just click on the "PRE-BUILT »" link to check out an alternative option.
Your Ticket to High-End Gaming!
Would you like to build the best high-end gaming PC possible for $1,500? Then you've come to the right place! The build we profile here offers the best balance of CPU and GPU power you'll find in any build on the 'Net, along with high-end yet cost-effective parts in every other category. The goal: maximum frames per dollar without skimping on the stuff everyone else does, like a quality power supply and a big solid-state drive. The last thing you want is a dragster engine in an old blender chassis, so we make sure your gaming PC build ticks all the boxes!
But before we get to our recommendations, we need to talk about the state of the PC parts market right now. Thanks to a perfect storm of production issues and high demand, it's often hard to find some of the key PC parts in stock. All of the best graphics cards, and even the worst ones, are unavailable. You can find practically any card for sale by scalpers on eBay, and according to our up-to-date GPU Price Index, that means spending at least $800 for an RTX 3060 card that should cost $329.
While GPUs are the worst offenders, the Ryzen 5000 series CPUs are out of stock or selling at elevated prices everywhere as well. Still, on any given day you might find one for sale.
Even though building your own is often cheaper than buying prebuilt, pricing is very much a personal thing depending on what games you want to play. Some people might not raise a sculpted eyebrow at spending four thousand on a gaming PC to crank Assassin's Creed Valhalla's PC settings up to 4K, while most of us would still struggle to justify $1,000 for a build that can run the latest Call of Duty at 1080p.
1 note
·
View note
Text
2020 Recap - My Year in Gaming
2020. What a year for video games. I had big plans for last year, but in the end I did very little besides play video games, and I don’t think I’m alone there since we were all stuck at home looking for a way out of reality. I wanted to do a year-end recap as I’ve done sporadically in past years, but this one will be different than the typical “Games of the Year” format because despite all the games I played in 2020, almost none of them came out in 2020, and some of the things that defined my year in gaming weren't even games.
Resident Evil 3 Remake (PS4)
RE3 was one of the only games I played in 2020 that didn’t coincide with the deadly pandemic's spread across the US. RE3 is, of course, a game about the spread of a deadly virus in Anytown, USA. It was an appetizer, I guess.
When the Resident Evil 2 remake dropped in 2019, there were some things I loved about it, and a few things that felt like steps back from the original. I feel much the same about RE3. I had also theorized that a Resident Evil 3 remake would be better off as RE2 DLC than as a separate full-length game, and considering how short RE3 turned out, with some of the best sections of the original cut entirely (namely, the clock tower), I stand by my theory.
Oh well, at least Jill gets this rad gun, which for the time being is the closest thing to a new Lost Planet we can hope for anytime soon.
Sekiro (PS4)
Sekiro is the first video game I ever Platinumed. This is partly because conquering the base game was such a spartan exercise that going the extra mile to get the Platinum didn’t seem so bad, but it’s also surely a result of the pandemic. I needed a project and a big win. Who didn't?
I wrote at length about why I like Sekiro more than every other modern FromSoft game, and also about the game’s cherry-on-top moment that reminded me of blowing up Hitler’s face in Bionic Commando. Please read them!
Death Stranding (PS4)
Release date notwithstanding, this was obviously the Game of 2020. I wrote about it here, here, and here. This game bears the distinction of being the second one I ever Platinumed. It took 150 hours. Only then did I learn I had a hoverboard.
Streets of Rage 4 (PS4)
This is the only 2020 game I played for more than a few hours. In fact, I cleared the entire game at least five times. I still don’t think it captures the gritty aesthetic of the prior Streets of Rages (nor even tries to), but this is probably the best-feeling beat-'em-up I've played. Huge bonus points for finally bringing back Adam, but in the end I found it hard not to pick Blaze every time.
Blaster Master Zero 2 (Switch)
What impressed me about this sequel from Inti Creates was that it wasn’t just more of the same, even though that would've been fine. BMZ2 builds on its already excellent predecessor with a catchy new format where players can freely cruise the cosmos and stages take the varied form of planets—some big and sprawling, others short and sweet. Hopping at will from planet to planet without ever knowing what experiences and treasure each one held felt like system jumping in No Man’s Sky and island hopping in The Legend of Zelda: Phantom Hourglass, both of which felt like opening presents.
Dragon Force (Saturn)
Charming, satisfying, and addictive as a bag of chips. Unlike a bag of chips, when it’s over, you can do it all again. And again. And it’ll be different each time! This might be the first strategy game I've truly loved. Better late than never.
The PC Engine Mini
The PC Engine/TurboGrafx-16 Mini seems a particularly justifiable mini-console for people outside Japan because so many missed these consoles entirely, the games are hard to obtain, and the lineup includes titles spanning the entire convoluted Turbo/PC Engine ecosystem—the TurboGrafx-CD/CD-ROM², Super CD-ROM², Arcade CD-ROM² and SuperGrafx, in addition to plain, old standard HuCard games. I myself didn’t know the first thing about these systems before. It’s like reliving the nineties again for the first time.
Most of the titles included are simple action games that don't require a command of Japanese, but make no mistake: being able to understand Snatcher and TokiMemo does make me feel like an elite special person worth more than many of you.
(Side note: From a gender representation perspective, the difference between Snatcher and Death Stranding is stark. Virtually every interaction with every woman or girl in Snatcher is decorated with ways to sexually harass her. Guess someone finally had a conversation with our favorite auteur.)
A Gaming PC
I’d threatened to transition to PC gaming for years after beholding the framerate difference between the console and PC versions of DmC in 2012, and last July I finally took the leap, buying an ASUS “Republic of Gamers” (ugh) laptop with an NVIDIA GeForce RTX 2070 Max-Q GPU. It seems like consoles are getting more PC-like all the time, especially with all these half-step iterations that splinter performance and sometimes even the feature set (à la the New 3DS and Switch Lite), so the impending new generation seemed like a fine time to change course.
In the half-year since, I’ve barely played a single PC game more recent than 2013, but just replaying PS3-era games at high settings has been like rediscovering them for the first time.
I also finally experienced keyboard-and-mouse shooting and understand now why PC gamers think they're better than everyone else. Max Payne is a completely different game with a mouse. Are all shooters like this??
The USPS
Early in the year, I rediscovered my childhood game shop, Starland, which is now an online hub known as eStarland.com with a brick-and-mortar showroom. To my delight, it has become one of the best and most modestly priced sources for import Saturn games in the country, and I scored Shining Force III’s second and third episodes, long missing from my collection, for a mere ten bucks each!
In June, I treated myself to a trio of Saturn imports from eStarland: the tactics-meets-dating-sim mashup Sakura Taisen 2, the nicely presented RTS space opera Quo Vadis 2, and beloved gothic dungeon crawler Baroque. Miraculously, this haul amounted to just around thirty dollars total. Less miraculously, they never arrived. This was the second time I’d had something lost in the mail in my entire life, and also the second time that month. Something was wrong with the USPS, and it wasn’t just COVID pains. We would soon learn Trump had been actively working to sabotage one of the nation’s oldest and most reliable institutions in a plot to compromise the upcoming presidential election.
Frankly it’s a miracle there’s still such a thing as “delivery” at all, and a few missing video games is the last of my worries considering what caused it, but nevertheless this was an experience in my gaming life that could not have happened any other year. I won’t forget it.
*By the way, USPS reimbursed me for the insured value of the missing order, which was fifty bucks. So I actually profited a little off the experience.
Mega Everdrive Pro
I love collecting for the Genesis and Mega Drive, but I will not pay hundreds of dollars for a video game that retailed for about sixty. The publishers never asked for that, and the developers won’t see a (ragna)cent of the money. I'm also far less inclined to start collecting for Sega CD, since the hardware is notoriously breakable, the cases are huge and also breakable, and the library just isn't that good.
Still, I'd been increasingly curious about the add-on as an interesting piece of Sega history, so when I learned Ukrainian mad scientist KRIKzz had released a new Mega Everdrive that doubled as a Sega CD FPGA, I finally took the plunge into the world of flash carts. This has proven a great way to play some of the Mega Drive’s big-ticket rarities I will never buy—namely shmups like Advanced Busterhawk Gley Lancer and Eliminate Down—as well as try out prospective additions to the collection. I never would have discovered the phenomenal marvel of engineering and synth composition that is Star Cruiser without this thing, but now that I have, it’s high on the shopping list.
The Mega Everdrive Pro is functionally nearly identical to TerraOnion’s “Mega SD” cartridge, but slightly less expensive, comes in a “normal” cartridge shell instead of the larger Virtua Racing-style one, and supports a single hardworking dude in Ukraine rather than a company with reportedly iffy customer service.
Twitch
Getting a PC also resolved issues that had long prevented me from achieving a real streaming setup, and much of my gaming life in 2020 was about ramping up my streaming efforts. I even made Affiliate in about a month. Streaming has been a great creative outlet and distraction, as well as a way to connect with other people during the COVID depression and structure my gaming time. Find me every Monday through Thursday 8-11pm Eastern at twitch.tv/lacquerware.
Metroid: Other M (Dolphin)
PC ownership also gave me access to the versatile Dolphin emulator, liberating a handful of great Wii exclusives from their disposable battery-powered prison.
One of the Wii games I fired up on Dolphin was Metroid: Other M, a game I’d always wanted to try but had been dissuaded by years of bad publicity and the fact that I never had any goddamn batteries. I know I should temper what I’m about to say by acknowledging that I was playing at 1080p/60fps on a PS4 controller so my experience was automatically a vast improvement over that of all Wii players, but I’m increasingly confident Metroid: Other M was the most fun I’ve ever had playing a Metroid game. I haven’t decided yet if I’m willing to die on this hill, but I will just say that if you like the Metroidvania genre in general and aren’t particularly attached to the Metroid series’ story or its habit of making you wander aimlessly for hours, there’s a very high chance you will enjoy Other M—especially if you play it on Dolphin.
Don't Starve Together (PC)
Don't Starve is the only game my friend Jason plays, so last year I tried to get into it with him. I respect this game's singular devotion to the concept of survival, but make no mistake: every session of Don't Starve ends with you starving to death. Or freezing. Or getting stomped by a giant deity of the forest. The entire game is staving off death until it inevitably comes. Even when death comes, you can revive infinitely (in whatever mode we were playing), which means even death is not an end goal. There is no end goal. You don't even have the leeway to "play" and create your own meaning as you do in similarly zen games like Dead Rising.
Don't Starve is a game for people for whom hard work is the ultimate reward in and of itself. Don't Starve told me something about Jason.
G-Darius (PS1)
In the early fall, Sony announced they were dropping PS3, PSP, and Vita support from the browser and mobile versions of their PSN Store, and since the PS3 version of the store app runs like a solar-powered parking meter in Seattle, I decided this was my last chance to stock up on Japanese PSN gems.
Among my final haul, the PS1 port of G-Darius proved an instant favorite. Take down the usual cast of mechanized fish in a vibrant, chunky, low-poly style that perfectly inhabits the constraints of the original PlayStation hardware. I believe this is the first Darius game that lets you get into giant beam duels with the bosses, which is quite definitely one of the coolest things a video game has ever let you do. The PS1 port is also surprisingly feature-rich, including some easier difficulty levels that present an actually surmountable challenge for non-savants.
This one’s coming to the upcoming Darius Cozmic Revelation collection on Switch alongside DARIUSBURST, a good-ass romp in its own right.
Red Entertainment
In my effort to shine a tiny spotlight on some of the unsung Interesting Games of gaming, I found myself drawn again and again to the work of Red Entertainment. First there were cavechild headbutt simulator Bonk’s Adventure and twin shmups Gates of Thunder and Lords of Thunder on the PC Engine Mini. Then I streamed full playthroughs of the PS2’s best samurai-era, off-brand 3D Castlevania, Blood Will Tell and the Trigun-adjacent stand-‘n-gun, Gungrave: Overdose. Then I was dazzled by Bonk’s Adventure’s futuristic spin-off cute-‘em-up, Air Zonk, which was also sneakily tucked away on my PC Engine Mini in the “TurboGrafx-16” section. It turned out all these games were made by the same miracle developer responsible for Bujingai, the stylish PS2 wushu game starring Gackt and a household name here at the Lacquerware estate. How prolific can one team be???
Month of Cyberpunk
In November, I started toying with the idea of themed months on my Twitch channel with “Cyberpunk month.” It was supposed to be a build-up to Cyberpunk 2077’s highly anticipated November release, but holy shit that didn’t happen, did it? Still, I always find myself gravitating toward this genre in November, I guess because I associate November with gloom (even though this year it was sunny almost every day). A month is a long time to adhere to a single theme, but cyberpunk is such a well-served niche in gaming that I could easily start an all-cyberpunk Twitch channel. The fact that we’re so spoiled with choice makes Cyberpunk 2077’s terrible launch all the more embarrassing. Here are just some of the games I played (and streamed!) in November:
Ghostrunner Shadowrun (Genesis) RUINER Remember Me Transistor Rise of the Dragon (Sega CD) Shadowrun (Mega CD) Cyber Doll (Saturn) Binary Domain Shadowrun Returns Blade Runner (PC) Deus Ex: Human Revolution Deus Ex: Mankind Divided Observer
Shadowrun on the Genesis gets my top pick, but the two most recent Deus Ex games are great alternatives for those looking for something in the vein of 2077 that isn’t infested with termites.
Lost Planet 2
Every year. I played through it twice in 2020.
Dead Rising 4
I slept on this one too long. While it's a far cry from the original game, it's easily the most fun I've had with a Christmas game since Christmas NiGHTS. This is the game a lot of people thought they were getting when they bought the original Dead Rising with their new Xbox 360--goofy, indulgent, and pressure-free.
Devil May Cry 5: Vergil (PS4)
Vergil dropped for last-gen consoles in December and breathed a whole lot of life into a game that was already at the head of its class.
Nioh 2
I’ve only played a few hours of Nioh 2 because I promised my friend I’d co-op it with him and wouldn’t play ahead. But he’s a grad student with two small children. Nevertheless, Nioh 2 is my Game of 2020.
And that's it! Guess I'll spend 2021 playing games that came out last year, and maybe eventually getting vaccinated? Please?
#2020#year in review#best of#games of the year#game of the year#goty#recap#review#lacquerware#death stranding#sekiro#darius#g-darius#video games#games#gaming#nioh#nioh 2#devil may cry#devil may cry 5#dmc5#vergil#dead rising 4#dr4#frank west#christmas games#lost planet#lp2
11 notes
·
View notes
Note
please please pleaseee tell me about video game graphics
thank you zero, i owe you my damn life
a side note: i only have experience in Blender when it comes to shading, rendering and similar things, and Dreamy Theatre 2nd is the only Project DIVA game i’ve ever played. i am nowhere near an expert, and this is just a rant so this may or may not be inaccurate oops
OKAY SO. project DIVA. love those games, have been a fan of them since like?? i first found them when i was 12?? so seeing the fact that it now has a Switch release made me really happy, and i’ve been thinking of buying it, since i don’t want to clog up my brother's PS3's memory just for an old PD game, right? the other day though, i stumbled across this video, which compares 4 of the games' World is Mine PVs, right? and i kid you not that when i realized that the Switch version was the top right one, i got. so surprised. because it looks like a downgrade from even the PS4 one, which i personally believe in this case was a downgrade from the PS3 one.
turns out, Mega Mix uses toon shaders, which can be really good when used right! but from what i’ve seen in MM, it just looks so weird.
i just wanna preface by saying that Future Tone is the game that looks the best overall to me; the shaders and effects highlights the models and sets really well, and uses textures (look at Miku’s clothes in Rolling Girl. they’re plasticy, the light reflects on them and the different textures on her models help highlight what is what. and i think Rolling Girl in FT looks REALLY BAD compared to some previous iterations. look at how it compares to F 2nd and how much the atmosphere is ruined in FT. by the looks of it, F 2nd also uses toon shaders, though im not sure)
toon shaders can be extremely effective. they make the characters more cartoony, overriding any textures and making everything look plasticy and smooth. MMD uses toon shaders - and it causes that unique, classic MMD look!! and i love that look!! but the MMD models were made with that in mind. Mega Mix’s models look like direct ports of the PS4 game, aka they weren’t made for toon shaders.
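for anyone curious what toon shading actually does to the lighting math, here's a tiny sketch (my own simplification, not SEGA's or MMD's real shader): take the usual smooth lambert term and snap it into a few flat bands.

```python
# toy cel/toon shading: quantize the smooth lambert diffuse term
# (dot of surface normal and light direction) into flat bands.
# this banding is exactly what flattens out fine surface detail.

def lambert(normal, light_dir):
    """smooth diffuse term, clamped to [0, 1]."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, min(1.0, d))

def toon(normal, light_dir, bands=3):
    """snap the smooth term to one of `bands` discrete levels."""
    levels = bands - 1
    return round(lambert(normal, light_dir) * levels) / levels

n = (0.0, 0.0, 1.0)  # surface facing the viewer
for light in [(0.0, 0.0, 1.0), (0.6, 0.0, 0.8), (0.8, 0.0, 0.6)]:
    print(round(lambert(n, light), 2), "->", toon(n, light))
```

note how the 0.8 and 1.0 lambert values land in the same band: that merging of nearby shading values is the "nose disappearing" effect in action.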
of course, the context of the song and the pv changes the situation drastically! i’ve seen songs like Arifureta Sekai Seifuku where i actually believe the toon shader enhances the colorful, bouncy nature of the song and PV! thats where i think the toon shaders fits, in bright happy songs! the issue is, that very rarely are there PVs like that. most of them use darker tones, grittier sets or just duller colors. for songs with effects, the PS3 triumphs, but overall the PS4 just looks so much better. just look at ODDS & ENDS. F 2nd still holds up, FT looks amazing and then. MM is just. she’s flat. it’s almost as if any sort of color filter or light bounces off her.
my main issue with the toon shader in this case, is that Miku looks washed out. any texture, bump or feature disappears, in favor of a flat, bright look. note how you can’t see her nose at all? it's small, and that makes it blend in. it drowns. any texture in her hair is replaced with a bright shine and an even, uniform color. look at how well the toon shader is utilized in Ghost Rule, and this was made in MMD. the toon shader in MM leaves no room for things like bloom, reflection or shine. it all looks like its absorbed into Miku, and that she exists on a different layer compared to her surroundings.
this was intentional, from what ive heard! SEGA wanted it to look kid friendly, which means cartoony, 2D inspired and simple. but thats pretty stupid, right? when i was 12, i was drawn to the PD games both because of the gritty songs, but also the bright, happy colors. hell, look at PoPiPo! i remember this PV so clearly because the bright colors and the look made me happy, and this was achieved without toon shaders! and its achieved in Future Tone too, like. Look at World’s End Dancehall. It’s all neon colors without a single dark shadow in sight. and that video came out this year, and because of it, i assumed that this was the Switch port! i was wrong, obviously. still the PS4 release.
i’ve seen some people argue that toon shaders were used because the Switch isn’t a powerful console. first of all, it runs games like Witcher 3, BOTW, and so much more. I agree, the Switch isnt the most powerful console! but it could easily run a game like F 2nd, which still has complex effects and shading. and toon shading is still heavy on the CPU and GPU! there’s even a video where they’ve removed the toon shader in Mega Mix, and it still runs well and looks super good. you can see the textures, the small details and it does her model wonders. you can see the wrinkles in her clothes!!!! and this is still on the switch! it looks straight out of Future Tone. and that game is almost 7 years old. the switch could 100% run it with no to minor problems.
but as far as i am aware, the Project DIVA department has been merged with other departments in SEGA, so its very possible this was made because of budget cuts as well. after all, the toon shader probably takes less effort to slap onto a song, especially since it virtually looks the same on every song, and the game itself is basically a smaller port of Future Tone.
this is the only thing ive been able to think about this week honestly, it makes me kinda sad ngl
#now THIS is a rant babey!!!#asks#pr0tagonists#vocaloid#i tried to highlight and stuff so its easier to read#because god knows i hate big chunks of text that looks the same#long post#save tag
10 notes
·
View notes
Text
Porting Falcon Age to the Oculus Quest
There have already been several blog posts and articles on how to port an existing VR game to the Quest. So we figured what better way to celebrate Falcon Age coming to the Oculus Quest than to write another one!
So what we did was reduced the draw calls, reduced the poly counts, and removed some visual effects to lower the CPU and GPU usage allowing us to keep a constant 72 hz. Just like everyone else!
Thank you for coming to our Tech talk. See you next year!
...
Okay, you probably want more than that.
Falcon Age
So let's talk a bit about the original PlayStation VR and PC versions of the game and a couple of the things we thought were important about that experience we wanted to keep beyond the basics of the game play.
Loading Screens Once you’re past the main menu and into the game, Falcon Age has no loading screens. We felt this was important to make the world feel like a real place the player could explore. But this comes at some cost in needing to be mindful of the number of objects active at one time. And in some ways even more importantly the number of objects that are enabled or disabled at one time. In Unity there can be a not insignificant cost to enabling an object. So much so that this was a consideration we had to be mindful of on the PlayStation 4 as loading a new area could cause a massive spike in frame time causing the frame rate to drop. Going to the Quest this would be only more of an issue.
Lighting & Environmental Changes While the game doesn’t have a dynamic time of day, different areas have different environmental setups. We dynamically fade between different types of lighting, skies, fog, and post processing to give areas a unique feel. There are also events and actions the player does in the game that can cause these to happen. This meant all of our lighting and shadows were real time, along with having custom systems for handling transitioning between skies and our custom gradient fog.
Our skies are all hand painted clouds and horizons cube maps on top of Procedural Sky from the asset store that handles the sky color and sun circle with some minor tweaks to allow fading between different cube maps. Having the sun in the sky box be dynamic allowed the direction to change without requiring totally new sky boxes to be painted.
Our gradient fog works by having a color gradient ramp stored in a 1 by 64 pixel texture that is sampled using the exp2 fog opacity, computed from spherical distance, as the UV. We can fade between different fog types just by blending between different textures and sampling the blended result. This is functionally similar to the fog technique popularized by Campo Santo’s Firewatch, though it is not applied as a post process as it was for that game. Instead all shaders used in the game were hand modified to use this custom fog instead of Unity’s built in fog.
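A rough sketch of that lookup (our own function names and constants, not the shipping shader code), with a nearest-texel list lookup standing in for the GPU's texture sampler:

```python
# Sketch of gradient fog: a standard exp2 fog opacity, computed from
# spherical (camera-to-surface) distance, indexes into a 1x64 color
# ramp. Blending two ramps fades between area fog setups.
import math

def exp2_fog_opacity(distance, density):
    """Classic exp2 fog: 0 at the camera, approaching 1 far away."""
    f = math.exp(-((density * distance) ** 2))
    return 1.0 - max(0.0, min(1.0, f))

def sample_ramp(ramp, u):
    """Nearest-texel sample of a 64-entry gradient (RGB tuples)."""
    i = min(len(ramp) - 1, int(u * len(ramp)))
    return ramp[i]

def fog_color(ramp_a, ramp_b, blend, distance, density):
    """Blend two fog ramps by `blend`, as when transitioning areas."""
    u = exp2_fog_opacity(distance, density)
    a, b = sample_ramp(ramp_a, u), sample_ramp(ramp_b, u)
    return tuple(x * (1 - blend) + y * blend for x, y in zip(a, b))
```

A ramp might grade from a haze color near the camera to the horizon color far away; animating `blend` from 0 to 1 over a few seconds reproduces the fade between fog types described above.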
Post processing was mostly handled by Unity’s own Post Processing Stack V2, which includes the ability to fade between volumes which the custom systems extended. While we knew not all of this would be able to translate to the Quest, we needed to retain as much of this as possible.
The Bird At its core, Falcon Age is about your interactions with your bird. Petting, feeding, playing, hunting, exploring, and cooperating with her. One of the subtle but important aspects of how she “felt” to the player was her feathers, and the ability for the player to pet her and have her and her feathers react. She also has special animations for perching on the player’s hand or even individual fingers, and head stabilization. If at all possible we wanted to retain as much of this aspect of the game, even if it came at the cost of other parts.
You can read more about the work we did on the bird interactions and AI in a previous dev blog posts here: https://outerloop.tumblr.com/post/177984549261/anatomy-of-a-falcon
Taking on the Quest
Now, there had to be some compromises, but how bad was it really? The first thing we did was take the PC version of the game (which natively supports the Oculus Rift) and get it running on the Quest. We left things mostly unchanged, just with the graphics settings set to very low, similar to the base PlayStation 4 PSVR version of the game.
It ran at less than 5 fps. Then it crashed.
Ooph.
But there’s some obvious things we could do to fix a lot of that. Post processing had to go; just about any post processing is too expensive on the Quest, so it was disabled entirely. We forced all the textures in the game to be at 1/8th resolution, which mostly stopped the game from crashing, as we had been running out of memory. Next up were real time shadows; they got disabled entirely. Then we turned off grass, and pulled in some of the LOD distances. These weren’t necessarily changes we would keep, just ones to see what it would take to get the performance better. And after that we were doing much better.
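For a sense of why the 1/8th-resolution clamp helped memory so much: dropping mip levels shrinks both axes, so memory falls quadratically. A quick back-of-the-envelope sketch (the sizes are illustrative, not Falcon Age’s actual textures):

```python
def mip_dropped_size(width, height, bytes_per_pixel, mips_dropped):
    # dropping one mip level halves each axis, so memory
    # falls by roughly a factor of 4 per mip dropped
    w = max(width >> mips_dropped, 1)
    h = max(height >> mips_dropped, 1)
    return w * h * bytes_per_pixel

full = mip_dropped_size(2048, 2048, 4, 0)    # 16 MiB uncompressed
eighth = mip_dropped_size(2048, 2048, 4, 3)  # 1/8th res per axis
print(full // eighth)  # 64
```

So “1/8th resolution” per axis is really a 64x cut in texture memory, which is why it was enough to stop the crashes.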
A real, solid … 50 fps.
Yeah, nope.
That is still a big divide between where we were and the 72 fps we needed to hit. It became clear that the game would not run on the Quest without more significant changes and removal of assets. Not to mention the game did not look especially nice at this point. So instead of taking the game as it was on the PlayStation VR and PC and trying to make it look like a version of that with the quality sliders set to potato, we chose to go for a slightly different look. Something that would feel a little more deliberate while retaining the overall feel.
Something like this.
Optimize, Optimize, Optimize (and when that fails delete)
Vertex & Batch Count
One of the first and really obvious things we needed to do was to bring down the mesh complexity. On the PlayStation 4 we were pushing somewhere between 250,000 ~ 500,000 vertices each frame. The long time rule of thumb for mobile VR has been to be somewhere closer to 100,000 vertices, maybe 200,000 max for the Quest.
This was in some ways actually easier than it sounds for us. We turned off shadows. That cut the vertex count down significantly in many areas, as much of a scene’s total vertex count comes from rendering the shadow maps. But the worst-case areas were still a problem.
We also needed to reduce the total number of objects and number of materials being used at one time to help with batching. If you’ve read any other “porting to Quest” posts by other developers this is all going to be familiar.
This means combining textures from multiple objects into atlases and modifying the UVs of the meshes to match the new positions in the atlas. In our case it meant completely re-texturing all of the rocks with a generic atlas rather than having every rock use a custom texture set.
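The UV bookkeeping for that kind of atlasing is simple; here’s a hypothetical sketch (the offset/scale pair would come from wherever each texture landed in the atlas):

```python
def remap_uv_to_atlas(uv, tile_offset, tile_scale):
    # map a mesh UV in [0,1]^2 into its sub-rectangle of the atlas
    u, v = uv
    return (tile_offset[0] + u * tile_scale[0],
            tile_offset[1] + v * tile_scale[1])

# e.g. a rock whose texture now occupies the top-right quadrant
print(remap_uv_to_atlas((0.5, 0.5), (0.5, 0.5), (0.5, 0.5)))  # (0.75, 0.75)
```

The hard part in practice is meshes with tiled UVs outside [0,1], which can’t be remapped this way; that’s part of why re-texturing everything against a generic atlas is often the saner route.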
Now you might think we would also want to reduce the mesh complexity by a ton. And that’s true to an extent. Counter-intuitively, some of the environment meshes on the Quest are more complex than the original versions. Why? Because as I said we were looking to change the look. To that end some meshes ended up being optimized down to far lower vertex counts, and others ended up needing a little more mesh detail to make up for the loss in shading detail and unique texturing. But we went from almost every mesh in the game having a unique texture to the majority of environment objects sharing a small handful of atlases. This improved batching significantly, which was a much bigger win than reducing the vertex count for most areas of the game.
That’s not to say vertex count wasn’t an issue still. A few select areas were completely pulled out and rebuilt as new custom merged meshes in cases where other optimizations weren’t enough. Most of the game’s areas are built using kit bashing, reusing sets of common parts to build out areas. Parts like those rocks above, or many bits of technical & mechanical detritus used to build out the refineries in the game. Making bespoke meshes let us remove more hidden geometry, further reduce object counts, and lower vertex counts in those problem areas.
We also saw a significant portion of the vertex count coming from the terrain. We are using Unity’s built in terrain system. And thankfully we didn’t have to start from total scratch here as simply increasing the terrain component's Pixel Error automatically reduces the complexity of the rendered terrain. That dropped the vertex count even more getting us closer to the target budget without significantly changing the appearance of the geometry.
After that, many smaller details were removed entirely. I mentioned before that we turned off grass. We also removed several smaller meshes from the environment in various places where we didn’t think their absence would be noticed, and removed or more aggressively disabled out-of-view NPCs in some problem areas.
Shader Complexity
Another big cost was most of the game was using either a lightly modified version of Unity’s Standard shader, or the excellent Toony Colors Pro 2 PBR shader. The terrain also used the excellent and highly optimized MicroSplat. But these were just too expensive to use as they were. So I wrote custom simplified shaders for nearly everything.
The environment objects use a simplified diffuse-shading-only shader. It had support for an albedo, normal, and (rarely used) occlusion texture. Compared to how we were using the built in Standard shader this cut down the number of textures a single material could use by more than half in some cases. This still had support for the customized gradient fog we used throughout the game, as well as a few other unique options. Support for height fog was built into the shader to cover a few spots in the game where we’d previously achieved the effect with post-processing-style methods. I also added support for layering with the terrain’s texture to hide a few places where there were transitions from terrain to mesh.
Toony Colors Pro 2 is a great tool, and is deservedly popular. But the PBR shader we were using for characters is more expensive than even the Standard shader! This is because it’s implemented as mostly the original Standard shader with some code on top to modify the output. Toony Colors Pro 2 has a large number of options for modifying and optimizing what settings to use. But in the end I wrote a new shader from scratch that mimicked the aspects we liked about it. Like the environment shader it was limited to diffuse shading, but added a Fresnel shine.
The PSVR and PC terrain used MicroSplat with 12 different terrain layers. MicroSplat makes these very fast and much cheaper to render than the built in terrain rendering. But after some testing we found we couldn’t support more than 4 terrain layers at a time without really significant drops in performance. So we had to go through and completely repaint the entire terrain, limiting ourselves to only 4 texture layers.
Also, like the other shaders mentioned above, the terrain was limited to diffuse only shading. MicroSplat’s built in shader options made this easy, and apart from the same custom fog support added for the original version, it didn’t require any modifications.
Post Processing, Lighting, and Fog
The PSVR and PC versions of Falcon Age make use of color grading, ambient occlusion, bloom, and depth of field. The Quest is extremely fill rate limited, meaning full screen passes of anything are extremely expensive, regardless of how simple the shader is. So instead of trying to get this working we opted to disable all post processing. However this resulted in the game being significantly less saturated, and in extreme cases looking completely different. To compensate, the color of the lighting and the gradient fog was tweaked. This is probably the single biggest factor in the original versions of the game and the Quest version not looking quite the same.
Also as mentioned before, we disabled real time shadows. We discussed doing what many other games have done, which is move to baked lighting, or at least pre-baked shadows. We decided against this for a number of reasons. Not the least of which was that our game is mostly outdoors, so shadows weren’t as important as they might have been for many other games. We’ve also found that simple real time lighting can often be faster than baked lighting, and that certainly proved to be true for this game.
However the lack of shadows and screen space ambient occlusion meant that there was a bit of a disconnect between characters in the world and the ground. So we added simple old school blob shadows. These are simple sprites that float just above the terrain or collision geometry, using a raycast from a character’s center of mass, and sometimes from individual feet. There’s a small selection of basic blob shapes and a few unique shapes for certain feet shapes to add a little extra bit of ground connection. These are faded out quickly in the distance to reduce the number of raycasts needed.
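The blob shadow logic boils down to a downward raycast plus a distance fade. A rough sketch of the idea (the parameter names, fade window, and returned shape are my own illustration, not the game’s code):

```python
def blob_shadow(center, camera_dist, ground_height_at,
                fade_start=15.0, fade_end=25.0, lift=0.02):
    # skip the raycast entirely once the character is far from the camera
    if camera_dist >= fade_end:
        return None
    x, y, z = center
    ground = ground_height_at(x, z)  # stand-in for a downward raycast
    if ground is None or ground > y:
        return None
    # opacity: 1 inside fade_start, ramping to 0 at fade_end
    t = (camera_dist - fade_start) / (fade_end - fade_start)
    opacity = 1.0 - min(max(t, 0.0), 1.0)
    # float the sprite just above the surface to avoid z-fighting
    return {"pos": (x, ground + lift, z), "opacity": opacity}
```

Fading to nothing before doing the raycast is what keeps the CPU cost of all those per-character (and sometimes per-foot) casts under control.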
Falcon
Apart from the aforementioned changes to the shading, which was also applied to the falcon’s custom shaders, we did almost nothing to the bird. All the original animations, reaction systems, and feather interactions remained. The only thing we did to the bird was simplify a few of the bird equipment and toy models. The bird models themselves remained intact.
I did say we thought this was important at the start. Early on we basically drew a line in the sand and said we were going to keep everything enabled on the bird unless absolutely forced to disable it.
There was one single sacrifice to the optimization gods we couldn’t avoid though. That’s the trails on the bird’s wings. We were making use of Ara Trails, which produce very high quality and configurable trails with a lot more control than Unity’s built in systems. These weren’t really a problem for rendering on the GPU, but CPU usage was enough that it made sense to pull them.
Selection Highlights
This is perhaps an odd thing to call out, but the original game used a multi pass post process based effect to draw the highlight outlines on objects for both interaction feedback and damage indication. These proved to be far too expensive to use on the Quest. So I had to come up with a different approach. Something like your basic inverted shell outline, like so many toon stylized games use, would seem like the perfect approach. However we never built the meshes to work with that kind of technique, and even though we were rebuilding large numbers of the meshes in the game anyway, some objects we wanted to highlight proved difficult for this style of outline.
With some more work it would have been possible to make this an option. But instead I found an easier-to-implement approach that, on the face of it, should have been super slow. But it turns out the Quest is very efficient at handling stencil masking. This is a technique that lets you mark certain pixels of the screen so that subsequent meshes being rendered can ask not to be drawn there. So I render the highlighted object 6 times! 4 of those draws are slightly offset in screen space in the 4 diagonal directions. The result is a fairly decent looking outline that works on arbitrary objects, and was cheap enough to be left enabled on everything it had been on before, including objects that might cover the entire screen when highlighted.
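The per-pass screen-space offsets are the only real math involved; something like this (the resolution and thickness values are illustrative, not the game’s actual numbers):

```python
def diagonal_outline_offsets(thickness_px, screen_w, screen_h):
    # offsets (in UV units) for the 4 extra diagonal outline passes;
    # plus the stencil-mask pass and the normal draw = 6 draws total
    dx, dy = thickness_px / screen_w, thickness_px / screen_h
    return [(-dx, -dy), (-dx, dy), (dx, -dy), (dx, dy)]

offsets = diagonal_outline_offsets(2, 1440, 1600)  # roughly Quest per-eye res
```

Each offset pass is stencil-tested against the mask laid down by the first draw, so only the fringe outside the object’s silhouette survives, which is what forms the outline.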
Particles and General VFX
For the PSVR version of the game, we already had two levels of VFX to support the base PlayStation 4 and PlayStation 4 Pro with different kinds of particle systems. The Quest version started out with these lower end particle systems, but it wasn’t enough. Across the board the number and size of particles had to be reduced, with some effects removed or replaced entirely. This was both for CPU performance, as the sheer number of particles was a problem, and for GPU performance, as the screen area the particles covered ran up against the Quest’s reduced fill rate limitations.
For example the baton had an effect that included a few very simple circular glows on top of electrical arcs and trailing embers. The glows covered enough of the screen to cause a noticeable drop in framerate even just holding it by your side. Holding it up in front of your face proved too expensive to keep framerate in even the simplest of scenes.
Similarly, the number of embers had to be reduced to improve the CPU impact. The above comparison image only shows the removal of the glow and already has the reduced particle count applied.
Another more substantive change was the large smoke plumes. You may have already noticed the difference in some of the previous comparisons above. In the original game these used regular sprites. But even after cutting the particle count in half, the rendering cost was too much. So these were replaced with mesh cylinders using a shader that makes them ripple and fade out. Before the change, the areas with smoke plumes were unable to keep the frame rate above 72 fps any time the plumes were in view, sometimes dipping as low as 48 fps. Afterwards they ceased to be a performance concern.
Those smoke plumes originally made use of a stylized smoke / explosion effect. That same style of effect is reused frequently in the game for any kind of smoke puff or explosion. So while they were removed for the smoke stacks, they still appeared frequently. Every time you take out a sentry or drone your entire screen was filled with these smoke effects, and the frame rate would dip below the target. With some experimentation we found that, counter to a lot of information out there, alpha tested (or more specifically alpha to coverage) particles proved far more efficient to render than the original alpha blended particles, with a very similar overall appearance. That plus some other optimizations to those shaders and the particle counts of those effects meant multiple full screen explosions did not cause a loss in frame rate.
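Alpha to coverage behaves a lot like ordered-dither alpha testing: each pixel is kept or discarded outright instead of blended, so there’s no read-modify-write on the framebuffer. A toy sketch of the idea (not the actual shader, just the thresholding concept):

```python
BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def covered(alpha, px, py):
    # keep or discard each pixel based on a screen-position threshold
    threshold = (BAYER_4X4[py % 4][px % 4] + 0.5) / 16.0
    return alpha > threshold

# 50% alpha keeps exactly half the pixels in a 4x4 tile
kept = sum(covered(0.5, x, y) for y in range(4) for x in range(4))
print(kept)  # 8
```

The dither pattern is what produces the visible grain in the gif mentioned below; on hardware, MSAA coverage samples soften it further.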
The two effects are virtually identical in appearance, ignoring the difference in lighting and post processing. The main difference here is the Quest explosion smoke is using dithered alpha to coverage transparency. You can see if you look close enough, even with the gif color dithering.
Success!
So after all that we finally got to the goal of a 72 Hz frame rate! Coming soon to an Oculus Quest near you!
https://www.oculus.com/experiences/quest/2327302830679091/
Text
What to buy to build a gaming pc
Why You Need to Start Your Gaming Weblog On GameSkinny (Alternatively Of WordPress)
It is time to recognize the best gaming blogs of the year. Luckily, as more games are published, more gaming blogs are being created. Since it is no longer possible for gamers to try out every game that releases, blogs have become an essential tool in the gamer's toolbox. They give up-to-date information about new and future releases to help gamers answer their undying question, "Should I spend $60 on this?".
With elegant style including cool blue lighting, the Lenovo IdeaPad L340 Gaming is an ideal mainstream laptop for the modern multi-tasker and undercover gamer. Available in 15-inch or 17-inch models, the Lenovo IdeaPad L340 Gaming has next-gen features like up to the latest 9th Gen Intel Core i7 mobile processor, NVIDIA GeForce GTX 1650 GPUs and jaw-dropping Dolby Audio hardwired into the laptop to take portable gaming into another dimension.
As technology continues to improve, with chipsets becoming smaller but more powerful, we should start seeing more and more video games make their way onto the mobile platform. Does this mean that video gaming as we know it is dead, though? Far from it. With 4K capability for smartphones still a while off, as well as a massive eSports following for PC games in particular, it is hard (but not impossible) to see a future without classic PC and console gaming.
Do you love to fight while building your own town or village in a game? If yes, Clash of Clans is the ideal option for your gaming needs. It is one of the most popular games by Supercell. It is a multiplayer game that comes free of cost with in-app purchases. The main aim of the player is to build a village and fill it with everything that the villagers will need. A town hall, gold mine, army camp and much more get unlocked over the course of time.
When it comes to graphical intensity there really is no comparison between mobile gaming and PC and console gaming. While handheld devices such as the Nintendo Switch provide a mobile option, it still doesn't pack enough punch to genuinely compete on an even field with the PC and console industry, and the average smartphone possesses even less graphical power. While there have been enormous advancements in mobile technology, with new devices coming equipped with more potent processors and RAM than ever before, only PCs and consoles are able to deliver the power needed for 4K imaging, the hottest new thing in gaming.
Tank is a well-optimized free gaming WordPress theme that comes with a responsive layout that displays your content beautifully on phones, widescreen monitors, and everything in between. With the help of WooCommerce support, you can easily start an online business. The theme comes with an impressive layout infused with charcoal black color that gives the website a very royal look. This theme comes with a design that is unique and helps it appeal to people with an affinity for the army and tanks in particular.
And finally, in order to keep your audience closer to you, do not forget to add a contact form and subscription form to your gaming website. Running a gaming blog, let your readers comment on your publications. Never miss a chance to reply to the comments of your readers. This will show that you care about the time that people spent looking through your content.
Gaming was once viewed as a solitary pastime, but this assumption is now a thing of the past. These days, gamers share their hobby digitally with others from all around the world. This takes place through live Internet sessions on Twitch and even the most popular social media platforms of today. On social networking sites like YouTube, blogs and Instagram, influencer video game content dominates. There are now literally thousands of gaming influencers producing content for social media platforms like YouTube http://denzz.site and blogs. And the biggest and brightest of these influencers have huge followings. In fact, many of the top YouTubers in the world were originally gaming influencers.
Like gaming? With the help of a screen capture device (or a video camera) you can pass on your knowledge of a particular game (or level) and entertain people at the same time. Video games as a spectator sport are a fairly new phenomenon, but immensely popular. Take Twitch, where millions of gamers gather each month. Some gamers have even become celebrities as a result.
If you want to create an e-commerce gaming site, Crystal Skull is a great option for you. This Visual Composer powered WordPress gaming theme lets you build awesome sites with high-definition elements using its parallax and video blocks. Put that extra charm on your website with all the animated icons and pictures the theme provides you with.
Learn To (Do) GAMING Like A Professional
The tactile satisfaction of mechanical keyboards is often missing in membrane keyboards. It mimics the feel of writing on an old-school typewriter, like in the past. Now, this does not mean that modern mechanical keyboards are missing features. They are packed with the things that the buyer needs for everyday use, and in most cases for gaming as well.
Zombie games in AR are the real hit of today's gaming market. Zombie GO is one prime example of outstanding Augmented Reality shooter games. It brings the zombie apocalypse into the real world, at least that is what the producer tells us. As a player, you walk through your house, school or the nearest parking lot and expect zombies to pop up at any second. Then the action starts: fight and kill with the weapons of your choice.
Tuned and made for gaming websites, Arcane offers an exquisite gaming experience unmatched by many themes out there. This dedicated gaming WordPress theme provides you with the right tools to create gaming communities of enormous proportions. From forming teams to managing tournaments, content sharing, custom user and team profiles, you can do it all with this theme.
Obviously, "gaming" as a topic has a very broad meaning and there are various options today, both for the casual and hardcore PRO gamer. While consoles can provide access to exclusive titles, which are rarely available for PCs, this goes hand-in-hand with a higher price of games. Consoles also feature some gaming nuances with joystick controls, as well as particular conveniences such as progress saving, which is optimized for consoles.
Video games have been around since the 1950s, when the earliest computer scientists started building simple games as part of their research. Video games remained a hobby of scientists until the 1970s, when the first video game arcades opened. But video games did not go mainstream until the 1980s, when technology was developed to move arcade games into the home. This ushered in a new era of home console gaming led by companies like Nintendo, Sega and Atari.
Text
My quick review of the ASUS XG27UQ monitor (4K, HDR, 120Hz)
I originally wanted to tweet this series of bullet points out but it was getting way too long, so here goes! I got this to replace a PG278Q, which was starting to develop odd white stains, and never had good color reproduction in the first place (TN film drawbacks, very low gamma resulting in excessively bright shadows, under-saturated shadows, etc.)
The hardware aesthetic is alright! The bezels may feel a bit large to some people, but I don’t mind them at all. If you’re a fan of the no-bezel look, you’ll probably hate it. There is a glowing logo on the back that you can customize (Static Cyan is my recommendation), but it isn’t bright enough to be used as bias lighting, which would’ve been nice.
The built-in stand is decent; it comes with a tacky and distracting light projection feature at the bottom. It felt quite stable, though I don’t care about it because it got instantly replaced by an Ergotron LX arm. (I have two now, I really recommend them in spite of their price.)
The coating is a little grainy and this is noticeable on pure colors! You can kinda see the texture come through, a bit more than I’d like. Not a huge deal though.
The rest of the review will be under the cut.
The default color preset (“Racing Mode”), which the monitor is calibrated against, is very vivid and saturated. It looks great! But it’s inherently inaccurate, which bothers me, so I don’t like it. It looks as if sRGB got stretched into the expanded gamut of the monitor.
sRGB “emulation” looks very similar to my Dell U2717D, whose sRGB mode is factory-calibrated. However, the XG27UQ’s sRGB mode has lower gamma (brighter shadows), so while the colors are accurate, the gamma is not. It feels 1.8-ish. Unless you were in a bright room, it would be inappropriate for work that needs to have accurate shadows. This mode also locks other controls, so it’s not the most useful, but the brightness is set well on it, so it is usable!
The “User Mode” settings use the calibrated racing mode as a starting point, which is a big relief. So it’s possible to tweak the color temperature and the saturation from there! I checked pure white against my Dell monitor and my smartphone (S9+) and tried to reach a reasonable 3-way compromise between them, knowing that the Dell is most likely the most accurate, and that Samsung also allegedly calibrates their high-end smartphones well. My configuration ended up being R:90/G:95/B:100 + SAT:42. This matches the saturation of the U2717D sRGB mode fairly closely. You also get to choose between 1.8, 2.2, and 2.5 gamma, which is not very granular, but great to have. It kinda feels like my ideal match is between 2.2 and 2.5, but 2.2 is fine.
The color gamma according to lagom.nl looked fine, but I had to open the picture in Paint, otherwise it was DPI-scaled in the browser, and that messed with the way it works!! (That website is an amazing resource for quick monitor checks.)
Colors are however somewhat inaccurate in this mode. It’s easy to see by comparing the tweaked User Mode vs. sRGB emulation. There are some rather sizeable hue shifts in certain cases. I believe part of this is caused by the saturation tweak not operating properly.
Here’s a photo of what the Photoshop color picker looks like when Saturation is set to 0 on the monitor, vs. what a proper grayscale conversion should be. It’s definitely not using the right coefficients.
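For reference, a proper grayscale conversion weights the channels by their contribution to perceived brightness. A naive equal-weight average, one common wrong way to do it (I can’t confirm exactly what the monitor does internally), darkens greens and brightens blues:

```python
def luma_rec709(r, g, b):
    # Rec. 709 luma coefficients: green dominates perceived brightness
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def naive_average(r, g, b):
    # equal-weight average: a common wrong way to desaturate
    return (r + g + b) / 3.0

print(luma_rec709(0, 1, 0))    # 0.7152  (pure green stays bright)
print(naive_average(0, 1, 0))  # ~0.333  (pure green goes too dark)
```

Whatever the exact coefficients in the monitor's firmware, the photo makes it clear they aren’t the standard ones.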
So in practice, when using the Racing & User modes, compared to the U2717D sRGB, here’s a few examples of what I see:
Reds are colder (towards the purple side) & oversaturated
Bright yellow (255,215,90) is undersaturated
Bright green (120,200,130) is undersaturated
Dark green (0,105,60) is fine
Magenta (220,13,128) is oversaturated
Dark reds & brown (150,20,20 to 90,15,10) is oversaturated
Cyan (0,180,240) is fine
Pink (230,115,170) is fine
Some shades of bright saturated blue (58,48,220) have the biggest shifts.
The TF2 skin tone becomes slightly desaturated and a bit colder
It’s not inaccurate to the point of being distracting, and you always have the sRGB mode (with flawed gamma?) to check things with, but it’s definitely not ideal, and some of these shifts go far enough that I wouldn’t recommend this monitor for color work that needs to be very accurate.
I’ve gone back and forth, User vs sRGB, several times, on my most recent work (True Sight 2019 sequences). I’ve found the differences were acceptable for the most part; they bothered me the most during the Chronosphere sequence, in which the hazy sunset atmosphere turned a bit into a rose gold tint, which wasn’t unpleasant at all — and looked quite pretty! — but it wasn’t what I did.
I’m coming from the point of view of a “prosumer” who cares about color accuracy, but who ultimately recognizes that this quest is impossible in the face of so many devices out there being inaccurate or misconfigured one way or the other. In the end, my position is more pragmatic, and I feel that you gotta be able to see how your stuff’s gonna look on the devices where it’ll actually be watched. So while I’ve done color grading on a decent-enough sRGB-calibrated monitor, I’ve always checked it against the inaccurate PG278Q, and I’ve done a little bit of compromising to keep my color work looking alright even once gamma shifted. And so, now, I’ll also be getting to see what my colors look like on a monitor that doesn’t quite restrain itself to sRGB gamut properly.
Well, at least, all of that stuff is out of the box, but...
TFTCentral (one of the most trustworthy monitor review websites, in my opinion) has found suspiciously similar shifts. But after calbration, their unit passed with flying colors (pun intended), so if you really care about this sort of stuff and happen to have a colorimeter... you should give it a try!
I hope one day we’ll be able to load and apply an ICC/ICM profile computer-wide, instead of only being able to load a simple gamma curve on the GPU with third-party tools like DisplayCAL. Even if it had to squeeze the gamut a bit...
Also, there are dynamic dimming / auto contrast ratio features which could potentially be useful in limited scenarios if you don’t care about color accuracy and want to maximize brightness. I believe they are forced on for HDR. But you will probably not care at all.
IPS glow is not very present on my unit; less than on my U2717D. However, when it starts to show up (more than a 30°-ish angle away), it shows up more. UPDATED: after some more time with the monitor, I wanna say that, in fact, IPS glow is slightly stronger, and shows up sooner (as in, from broader angles). It requires me to sit a greater distance from the monitor in order to not have it show up and impede dark scenes. It is worse than on my U2717D.
Backlight bleed, on the other hand, is there, and a little bit noticeable. On my unit, there’s a little bit of blue-ish bleed on the lower left corner, and some dark-grey-orange bleed for a good third of the upper-left. However, in practice, and to my eyes, it doesn’t bother me, even when I look for it. It ain’t perfect, but I’ve definitely seen worse, especially from ASUS. The photo above was taken at 100% brightness, and I’ve tried to make it just a tad brighter than what my eyes see, so hopefully it’s a decent sample.
Dead pixels: on my unit, I have 5 stuck dead green subpixels overall. There are 4 in a diamond pattern somewhat down and right to the center of the screen, and another one, a bit to the right of that spot. All of them kinda “shimmer” a little bit, in the sense that they become stronger or weaker based on my angle of view. They’re a bummer but I haven’t found them to be a hindrance. Took me a few days to even notice them for the first time, after all.
HDR uses some global dimming techniques, as well as stuff that feels like... you know that Intel HD driver feature that brightens the content on the screen, while lowering the panel backlight power in tandem, to save power, but it kinda flattens (and sometimes clips) highlights? It kinda looks like that sometimes. Without local dimming, HDR is just about meaningless.
Unfortunately, the really nice HDR support in computer monitors is still looking like it’s going to be at the very least a year out, and even longer for sub-1000 price ranges. (I was holding out for the PG27UQX at first, but it still has no word on availability, a whole year after being announced, and will probably cost over two grand, so no thanks.)
G-Sync (variable refresh rate) support is... not there yet?! The latest driver does not recognize the monitor as being compatible with the feature. And it turns out that the product page says that G-Sync support is currently being applied for. Huh. I thought they had special chips in those monitors solely for the feature, but it’s possible this one does it another way? (The same way that Freesync monitors do it?)
DSC (Display Stream Compression) enables 4K 120Hz to work through a single DisplayPort cable, without chroma subsampling. And it’s working for me, which came as a surprise, as I was under the impression this feature required a 2000-series Turing GPU. (I have a 1080 Ti.) I was wrong about this, it’s 144 Hz that requires DSC. And I don’t have it on this Pascal card. But I don’t really care since I prefer to run this monitor at 120 Hz, as it’s a multiple of the 60 Hz monitor next to it.
Windows DPI scaling support is okay now. Apps that are DPI-aware, and the vast majority of them are now, scale back and forth between 150% and 100% really well as they get dragged between the monitors! The only program I’ve had issues with is good old Winamp, which acted as if it was 100% on the XG27UQ... and shrank down on another monitor. So I asked it to override DPI scaling behaviour (“scaling performed by: application”), which keeps the player skin at 100% on every monitor, but any call to system fonts and UI (Bento skin’s playlist + Settings panel) is still at 150%. So I had to set the playlist font size to 7 for it to look OK on the non-scaled monitor!
A few apps misbehave in interesting ways; TeamSpeak, for example, seen above, scales everything back from 150% to 100%, and there is no blurriness, but the “larger layout” (spacing, etc.) sticks.
Games look great with 4K in 27 inches. Well, I’ve only really tried Dota 2 so far, but man does it get sharp, especially with the game’s FXAA disabled. It was already a toss-up at 1440p, but at 4K I would argue you might as well keep it disabled. However, going from 2560x1440 to 3840x2160 requires some serious horsepower. It may look like a +50% upgrade in pixels, but it’s actually a +125% increase! (3.68 to 8.29 million pixels.) For a 1080 Ti, maxed-out Dota 2 at 1440p 120hz is really trivial, but once you go to 4K, not anymore... you could always lower resolution scale though! (Not an elegant solution if you like to use sharpening filters though, looking at you RDR2.)
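The pixel-count claim checks out; a quick sanity calculation:

```python
# 1440p vs 4K: the jump in pixels is +125%, not the +50% the
# resolution numbers might suggest
qhd = 2560 * 1440   # 3,686,400 pixels
uhd = 3840 * 2160   # 8,294,400 pixels
increase = uhd / qhd - 1  # 1.25, i.e. +125%
```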
Overall, the XG27UQ is a good monitor, and I’m satisfied with my purchase, although slightly disappointed by the strong IPS glow and the few dead subpixels. 7/10
Text
Starts Fast Bitcoin Mining Free with Realmining
https://www.cryptoerapro.com/bitcoin-miner/
You'll likely make less than one penny PER YEAR
Android phones simply aren't powerful enough to match the mining hardware employed by serious operations.
So, it might be cool to set up a miner on your Android phone to see how it works. But don't expect to make any money.
Do expect to waste a ton of your phone’s battery!
What Is Bitcoin Mining Hardware?
Bitcoin mining hardware (ASICs) are highly specialised computers used to mine bitcoins.
The ASIC industry has become complex and competitive.
Mining hardware is now only located where there is cheap electricity.
When Satoshi released Bitcoin, he intended it to be mined on computer CPUs.
Enterprising coders soon discovered they could get more hashing power from graphics cards and wrote mining software to allow this.
GPUs were surpassed in turn by ASICs (Application Specific Integrated Circuits).
Nowadays all serious Bitcoin mining is performed on ASICs, usually in thermally-regulated data centres with access to low-cost electricity.
Economies of scale have thus led to the concentration of mining power into fewer hands than originally intended.
What Are Bitcoin Mining Pools?
Mining pools enable small miners to receive more frequent mining payouts.
By joining with other miners in a group, a pool allows miners to find blocks more frequently.
But there are some problems with mining pools, as we'll discuss.
As with GPU and ASIC mining, Satoshi apparently didn't anticipate the emergence of mining pools.
Pools are groups of cooperating miners who agree to share block rewards in proportion to their contributed mining power.
This pie chart displays the current distribution of total mining power by pools:
While pools are attractive to the average miner, as they smooth out rewards and make them more predictable, they unfortunately concentrate power in the hands of the mining pool's owner.
Nowadays there are very professional industrial mining operations. Let's have a look at how they work. What does a mining farm look like?
Let's have a look inside a real Bitcoin mining farm in Washington state.
Bitcoin mining farms exclusively use ASIC miners to mine various coins. Many of these farms are minting several Bitcoins per day.
How much do crypto mining farms make? How much a mining farm makes depends on several factors:
The price it pays for electricity
How old its mining hardware is
The scale of its operation
The price of Bitcoin when the miner sells it
The level of difficulty when the Bitcoin is mined
By far, the largest factor affecting how much money a mining farm makes is how much it pays for electricity. Nearly all mining farms are using the same hardware.
Since the reward for finding a block is fixed, and the difficulty is adjusted based on the total processing power working on finding blocks at any given time, electricity is the sole cost that is variable. If you can find cheaper power than other miners, you can afford to either increase the scale of your mining operation, or spend less on your mining for the same output.
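To make that concrete, here is a minimal profitability sketch in Python. All the numbers fed into it below (BTC mined per day, BTC price, power draw, electricity rates) are hypothetical placeholders, not real farm data; the point is that with revenue fixed, the electricity rate is the only lever:

```python
def daily_profit(btc_mined, btc_price, power_kw, usd_per_kwh):
    """Daily profit of a miner: fixed revenue minus variable electricity cost."""
    revenue = btc_mined * btc_price          # same for everyone on equal hardware
    electricity = power_kw * 24 * usd_per_kwh  # the variable part
    return revenue - electricity
```

With identical hardware and identical revenue, the miner paying $0.04/kWh simply pockets the difference that the miner paying $0.12/kWh burns on power.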
How much electricity do mining farms use? As previously mentioned, mining farms use a lot of electricity. How much they consume depends on how big their operation is. The most recent Bitmain ASIC miner consumes about 1350 watts.
In total, it is estimated that all mining farms will use about 75 terawatt-hours of electricity in the year 2020. That's roughly equivalent to 15 times the yearly energy consumption of Denmark.
Mining across the globe: mining farms are located all over the world. We do not know where every mining farm in the world is, but we have some educated guesses.
Most of the mining has been, and still is, located in China. As of 2020, it's believed that as much as 65% of Bitcoin mining takes place there.
Why is so much mining happening in China? Samson Mow of Blockstream, and former CTO of the BTCC mining pool, explains:
"The main benefits of mining in China are faster setup times and lower initial CapEx which, along with closer proximity to where ASICs are assembled, have driven industry growth there."
Samson Mow, CSO, Blockstream
BONUS CHAPTER 1: Necessary Bitcoin Mining Terms
In this bonus chapter, we will learn about some of the most common terms associated with Bitcoin mining.
If you are considering mining at any level, understanding what these terms mean will be crucial for getting started.
Miner: Anyone who mines Bitcoins (or any other cryptocurrency).
Block Reward: The block reward is a fixed quantity of Bitcoins that gets rewarded to the miner or mining pool that finds a given block.
Mining Pool: A collection of individual miners who 'pool' their efforts or hashing power together and share the block reward. Miners create pools because it increases their chances of earning a block reward.
Roughly every four years, the block reward gets cut in half. The first block reward ever mined was in 2009, and it was for 50 Bitcoins. That block reward lasted for four years; in 2012 the first reward halving occurred, and it dropped to 25 Bitcoins.
In 2016, a second halving occurred, reducing the reward to 12.5 Bitcoins. As of the time of this writing, we are on the cusp of the third halving (ETA May 11th), where the reward will be cut to 6.25 Bitcoins. You'll find the most up-to-date estimate of exactly when the next halving will occur on our Bitcoin block reward halving clock.
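The halving schedule itself is simple to express in code. A sketch in Python (working in satoshis, as Bitcoin's own integer arithmetic does, with the subsidy halved once per 210,000-block epoch):

```python
INITIAL_SUBSIDY = 50 * 100_000_000  # 50 BTC, in satoshis
HALVING_INTERVAL = 210_000          # blocks between halvings

def block_subsidy(height: int) -> int:
    """Block reward (in satoshis) at a given block height."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:
        return 0  # shifting a 64-bit value this far always yields zero
    return INITIAL_SUBSIDY >> halvings
```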
ASIC: Stands for "Application Specific Integrated Circuit". In plain English, that simply means it's a chip designed to do one very specific kind of calculation. In the case of an ASIC miner, the chip in the miner is designed to solve problems using the SHA-256 hashing algorithm. This is in contrast to GPU mining, explained below.
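To see what "solving problems using SHA-256" means in practice, here is a toy proof-of-work loop in Python. The target here is just a count of leading zero hex digits, a simplification of Bitcoin's real compact target encoding, and the header bytes are a made-up placeholder:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes block headers with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, zero_hex_digits: int, max_nonce: int = 2**32):
    """Grind nonces until the hash starts with enough zero hex digits."""
    prefix = "0" * zero_hex_digits
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little")).hex()
        if digest.startswith(prefix):
            return nonce, digest
    return None  # exhausted the nonce space without a solution
```

An ASIC does nothing conceptually different; it just runs this loop in silicon, trillions of times per second.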
GPU Mining: When you mine for Bitcoins (or any cryptocurrency) using a graphics card. This was one of the earliest forms of mining, but is no longer profitable due to the introduction of ASIC miners.
Hashing Power (or Hash Rate): How many calculations (hashes) a miner can perform per second.
It can also refer to the total amount of hashing done on a chain by all miners put together, also known as "Net Hash".
You can learn more about hash rate by reading our article about it.
Difficulty: Measured in trillions, mining difficulty refers to how hard it is to find a block. The current level of difficulty on the Bitcoin blockchain is the main reason why it is not profitable to mine for most people.
Difficulty Adjustment: Bitcoin was designed to produce a block reliably every 10 minutes. Because total hashing power (or Net Hash) is constantly changing, the difficulty of finding a block needs to adjust in proportion to the amount of total hashing power on the network.
In very simple terms, if you have four miners on the network, all with equal hashing power, and two stop mining, blocks would happen every 20 minutes instead of every 10. Thus, the difficulty of finding blocks also needs to be cut in half, so that blocks continue to be found every 10 minutes.
Difficulty adjustments happen every 2,016 blocks. This should mean that if a new block is added every 10 minutes, a difficulty adjustment would occur every two weeks. The 10-minute block rule is just a goal, though. Some blocks are added after more than 10 minutes, and some after less. It's a law of averages, with a lot left up to chance. Still, for the most part, blocks are added reliably every 10 minutes.
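The retargeting rule can be written in a few lines. This is an illustration, not consensus code; the real client works on a 256-bit target rather than a floating-point difficulty, but it does clamp each adjustment to a factor of 4, as mirrored here:

```python
EXPECTED = 2016 * 600  # seconds the last 2,016 blocks "should" have taken

def retarget(old_difficulty: float, actual_seconds: float) -> float:
    """New difficulty after one 2,016-block period."""
    ratio = EXPECTED / actual_seconds
    # Bitcoin limits any single adjustment to between 1/4x and 4x
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio
```

If blocks came in twice as fast as expected, difficulty doubles; twice as slow, it halves.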
A measurement of energy consumption per hour. Most ASIC miners can tell you how much energy they consume using this metric.
Compared to the carbon emissions from just the cars of PayPal's employees as they commute to work, Bitcoin's environmental impact is negligible.
As Bitcoin could easily replace PayPal, credit card companies, banks and the bureaucrats who regulate them all, it begs the question:
Isn't traditional finance a waste?
Not just of electricity, but of money, time and human resources!
Mining Difficulty: If only 21 million Bitcoins will ever be created, why has the issuance of Bitcoin not accelerated with the rising power of mining hardware?
Issuance is regulated by difficulty, an algorithm which adjusts the difficulty of the Proof of Work problem in accordance with how quickly blocks are solved within a certain timeframe (roughly every two weeks, or 2016 blocks).
Difficulty rises and falls with deployed hashing power to keep the average time between blocks at around 10 minutes.
For most of Bitcoin's history, the average block time has been about 9.7 minutes. Because the price is often rising, mining power comes onto the network at a rapid pace, which creates faster blocks. However, for most of 2019 the block time has been around 10 minutes, as Bitcoin's price remained steady.
Block Reward Halving: Satoshi designed Bitcoin such that the block reward, which miners automatically receive for solving a block, is halved every 210,000 blocks (or roughly every four years).
As Bitcoin's price has risen substantially (and is expected to keep rising over time), mining remains a profitable endeavour despite the falling block reward… at least for those miners on the bleeding edge of mining hardware with access to low-cost electricity.
Honest Miner Majority Secures the Network: To successfully attack the Bitcoin network by making blocks with a falsified transaction record, a dishonest miner would need the majority of mining power so as to maintain the longest chain.
This is known as a 51% attack, and it allows an attacker to spend the same coins multiple times and to blockade the transactions of other users at will.
To achieve it, an attacker needs to own more mining hardware than all other honest miners combined.
This imposes a high financial cost on any such attack.
At this stage of Bitcoin's development, it's likely that only major corporations or states would be able to meet this expense… although it's unclear what net benefit, if any, such actors would gain from degrading or destroying Bitcoin.
Mining Centralization: Pools and specialised hardware have unfortunately led to a centralization trend in Bitcoin mining.
Bitcoin developer Greg Maxwell has stated that, to Bitcoin's likely detriment, a handful of entities control the vast majority of hashing power.
It is also widely known that at least 50% of mining hardware is located within China.
However, it could be argued that such an attack would be contrary to the long-term economic interests of any miner.
The resulting fall in Bitcoin's credibility would dramatically reduce its exchange rate, undermining the value of the miner's hardware investment and their held coins.
Because the community could then decide to reject the dishonest chain and revert to the last honest block, a 51% attack probably offers a poor risk-reward ratio to miners.
Bitcoin mining is certainly not perfect, but possible improvements are always being suggested and considered.
https://www.cryptoerapro.com/bitcoin-miner/
Text
Look at these explosions
So if you've been following my twitter recently you would have noticed all the gifs I've been posting of things exploding.
A couple weeks ago I decided that I wanted to be able to simulate sand for use in particle effects. Now, after a ton of fiddling with OpenGL, I've got it working. I thought I'd write a summary of how it works, not just to help others who might be thinking of doing a similar thing, but also to clarify in my own head exactly what monstrosity I've created.
The majority of the logic runs entirely on the GPU, which is fantastic, because that thing is fast as hell, but there are some parts of any physics system that can't be run in parallel. To get around this, we perform five steps in sequence, each one a parallel computation.
Force step
MoveMap update
ForceMap update
Move step
Delete step
Force step
All the forces on the pixels are processed here. This is things like adding speed for explosions and applying gravity. Each force is represented by a shader, and each force shader is applied to the speed map to update the speed of each pixel. How is the speed of each pixel represented, you might ask? Well, the speed map is another image's worth of pixel data that overlays the image, and the speed of each pixel in the image is determined by the colour of that same pixel in the speed map.
The more red a pixel is, the more speed it has in the x axis, while green represents speed in the y axis.
There is an issue here already though, as a pixel only has 256 possible values for each of its r,g,b and a values. Because of this, we can't have negative colours, and we also lose some granularity in the speeds we can have. To get around this, each pixel on the speed map has a base red and green value of 0.5, or 128, and any value above that represents a positive speed, while any below represents a negative speed.
The speed is also represented as a fraction of the overall "max speed" defined in the pixel particle canvas. A low max speed restrains how fast the particles can move, obviously, but a really high max speed will create a cut off on how slow particles can move, as really low speeds will get rounded to 0. Once the speeds have been updated, it's onto translating them into movements.
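A CPU-side sketch of that encoding (in Python rather than the actual shader code, for clarity) makes the granularity trade-off easy to see; `max_speed` here stands in for the canvas-wide "max speed" described above:

```python
def encode_speed(v: float, max_speed: float) -> int:
    """Store a speed in [-max_speed, max_speed] as one 8-bit colour
    channel, with 128 meaning 'no speed' (the 0.5 base value)."""
    v = max(-max_speed, min(max_speed, v))
    return round(128 + v / max_speed * 127)

def decode_speed(c: int, max_speed: float) -> float:
    """Recover the speed from the channel value."""
    return (c - 128) / 127 * max_speed
```

With a huge `max_speed`, a small speed like 0.5 rounds to channel value 128 and decodes back to exactly zero, which is the "really low speeds get rounded to 0" cut-off mentioned above.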
MoveMap update
We can't just apply the speeds directly to the pixels of the image for a number of reasons. One of these reasons is that pixels in an image must be at an integer position, but we want our particles to move smoothly. To get around this, we have the move map, which essentially just records how much each pixel has been acted on by its speed in previous steps.
Just like the speed map, this is represented through the red and green values of its pixels. After we've got the move map updated, we can finally start getting some movement done.
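The accumulate-and-carry idea can be sketched like so (a Python stand-in for the shader logic; with a speed of 0.25 pixels per step you get one whole-pixel move every four steps):

```python
def step_movement(accum: float, speed: float):
    """Add this step's speed to the accumulator; emit a whole-pixel
    move once a full pixel of movement has built up."""
    accum += speed
    move = int(accum)   # whole pixels to move this step (truncates toward zero)
    accum -= move       # keep the fractional remainder for later steps
    return move, accum
```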
ForceMap update
We're not actually going to make any alterations to the image map right now though, because first we have to figure out exactly what movements are going to happen. Each pixel looks at its move map, and if it's got enough accumulated movement, it draws onto the force map exactly which direction it is moving in.
It's a little hard to see, but each of those purple/cyan pixels is a pixel recording that it will be making a movement in that step. We could probably actually skip this step and have it be a part of the Move update, but its a little neater to have it like this, even if it does slow down the algorithm a little bit.
Move step
This is when it gets really interesting.
Getting the pixels to move might sound simple, but there are a couple things that make it hard.
Firstly, "moving" a pixel is impossible; we can only duplicate a pixel and delete the original.
Secondly, a pixel being processed cannot edit the colour values of any other pixels in the image.
If that second point didn't make a huge amount of sense, remember that we're doing each step completely in parallel, so if we were to edit the colour of any pixel other than the one we are currently processing, that would break the parallelism.
To get around this, we do something painful, but necessary. Every empty pixel does a scan of the immediately adjacent pixels, and if it finds a pixel that is signalling that it wants to move into its space, it copies the pixel into its space, making the same copies in the move map and speed map.
It is very important that only empty pixels are allowed to do this, as this is where our collision logic comes from. If a space is occupied, it is impossible for another pixel to enter that space in that step. Even if that pixel is itself moving, we still don't want to move another pixel into its space, as we don't know whether the movement is going to get blocked by another collision.
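The "empty cells pull" rule is easier to see in a serial toy version. Below is a hypothetical 1-D Python rendition: every cell computes its next state from the old grid only, which is what makes the update order-independent like the parallel version, and the deletion of the original is folded in here rather than deferred to a separate step:

```python
def move_step(grid):
    """One synchronous update of a 1-D grid. None = empty space;
    a dict with a 'dir' key (+1 or -1) = a particle wanting to move."""
    new = list(grid)
    for i, cell in enumerate(grid):
        if cell is not None:
            continue  # only empty cells may pull a particle in
        for d in (-1, 1):
            j = i + d
            # a neighbour signalling it wants to move into this space?
            if 0 <= j < len(grid) and grid[j] is not None and grid[j]["dir"] == -d:
                new[i] = dict(grid[j])  # copy the particle in
                new[j] = None           # delete the original
                break                   # the space is taken; collisions block
    return new
```

Because an occupied cell never pulls anything in, a particle aimed at a filled space simply stays put, which is exactly the collision-blocking behaviour described above.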
There is more to the collision system than just blocking movement however. Colliding particles transfer some of the speed to the particle they are colliding with, as can be seen in the gif below.
Without this logic, you can get some really weird behaviour with opposing particles forming solid walls in midair as they try to move past each other. We implement this simply by letting each filled pixel also do a scan of its surrounding area, but only move some of the speed across and leave the colour behind.
BUT WAIT, what if we move the speed across but then in the same step the pixel we're moving it across to itself moves? Remember that we're doing it all in parallel, so the speed we copy for each moving pixel is the speed that we started with at the beginning of the step.
We can fix that in the next, and final, step
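The speed hand-off on collision can be sketched as a simple transfer; the fraction moved across is a made-up parameter here, since the post doesn't pin down how much speed a blocked particle gives up:

```python
def transfer_speed(blocked: float, other: float, fraction: float = 0.5):
    """A blocked particle hands `fraction` of its speed to the particle
    it collided with; total speed is conserved."""
    moved = blocked * fraction
    return blocked - moved, other + moved
```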
Delete step
All that's left to do is delete the original copies of each pixel that we duplicated, and handle that last speed moving case I just mentioned.
Deletion is also kind of painful, as it involves another scan. During the move step, we also leave a signal on the space the pixel moves into that records exactly which space that pixel came from. You can see these signals as little purple/cyan dots in the following gif.
Each filled space does a scan of the surrounding spaces, and if it finds a delete signal pointing to itself, it deletes. These signals also serve another purpose however in solving the problem we have with moving speeds over correctly for collided pixels. Each pixel that moved this step uses its delete signal to find out where it moved from, and updates its own speed to match the old tile. This catches all the changes that were made in the move step.
And all that goes on each step of the simulation, sometimes with multiple steps per frame. I skimped on some of the details, but hopefully the overall picture made sense. If you want to see exactly how I implemented it in the code, I've uploaded it all to github in the repository below.
https://github.com/Cowinatub/pixelparticles
If you are a Löve user yourself, feel free to use the code for whatever you want, including as an example of bad practices in lua, but be warned that my comments are sparse at best. I don't plan to stop working on this yet though, so I'll write more rigorous comments and documentation, if people are interested.