#OpenGL API
Text
I want to make this piece of software. I want this piece of software to be a good piece of software. As part of making it a good piece of software, i want it to be fast. As part of making it fast, i want to be able to parallelize what i can. As part of that parallelization, i want to use compute shaders. To use compute shaders, i need some interface to graphics processors. After determining that Vulkan is not an API that is meant to be used by anybody, i decided to use OpenGL instead. In order for using OpenGL to be useful, i need some way to show the results to the user and get input from the user. I can do this by means of the Wayland API. In order to bridge the gap between Wayland and OpenGL, i need to be able to create an OpenGL context where the default framebuffer is the same as the Wayland surface that i've set to be a window. I can do this by means of EGL. In order to use EGL to create an OpenGL context, i need to select a config for the context.
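For reference, EGL config selection looks roughly like this; a minimal sketch against the standard EGL headers, where the attribute list is an assumption about what this software would need and error handling is omitted:

```cpp
// Ask EGL for a window-capable, desktop-GL-renderable config.
#include <EGL/egl.h>

EGLConfig pick_config(EGLDisplay dpy) {
    const EGLint attribs[] = {
        EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,  // desktop OpenGL, not GLES
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg = nullptr;
    EGLint count = 0;
    // If the implementation offers no matching config, count stays 0,
    // which is exactly the failure described below.
    eglChooseConfig(dpy, attribs, &cfg, 1, &count);
    return count > 0 ? cfg : nullptr;
}
```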
Unfortunately, it just so happens that on my Linux partition, the implementation of EGL does not support the config that i would need for this piece of software.
Therefore, i am going to write this piece of software for 9front instead, using my 9front partition.
#Update#Programming#Technology#Wayland#OpenGL#Computers#Operating systems#EGL (API)#Windowing systems#3D graphics#Wayland (protocol)#Computer standards#Code#Computer graphics#Standards#Graphics#Computing standards#3D computer graphics#OpenGL API#EGL#Computer programming#Computation#Coding#OpenGL graphics API#Wayland protocol#Implementation of standards#Computational technology#Computing#OpenGL (API)#Process of implementation of standards
Text
It is such a pitfall to get frustrated by not already knowing something, when the whole plan was to learn it. Which is obviously something that applies to loads of things. (This post is not going to be a smart analysis of the learning process; this is a post of me ranting about one of a couple of first attempts at using a programming API, me being someone who has little experience with using APIs at all.)

The situation where I caught myself giving up too early again today was me trying to use an API: OpenGL, or rather the stuff that comes along with it. So, there are guides and tutorials and it's a mature project and everything; ideal for learning, it feels like. You just need some wrappers and utility and extra libraries. You know, some glfw or glew, maybe some glad; glm is also a thing, but maybe that's built into OpenGL. Idk, OpenGL isn't a library anyways, it's just a specification. It's not like installing that stuff is super difficult; in fact it's surprisingly easy. Do the git clone, make a build directory and cmake, boom, done.

Now what, though? There are different guides doing different things, and just initializing has so much stuff going on. There are simple examples, okay, let's run with that. But then I have a dozen functions flying around that apparently set up stuff and open windows and other cool stuff that would take loads of time to figure out myself. And that's where I began giving up.

Next step would have been to set up OpenGL thingies: vertices and rendering a humble triangle. It's not like I couldn't look up how to do that, or that I didn't have examples at hand to just copy, but I had this feeling that I wasn't figuring things out on my own anymore. Which is a dumb thing to say, not only because I have used premade frameworks and rendered triangles before, but also because learning by screwing around with it is actually a great way to learn. The more you fuck around, the more you find out, but in a positive sense.

I'll absolutely keep looking into it, but I've got enough for today, and even if that wasn't much: making half a step of progress each weekend will eventually lead to a decent amount of steps.
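For reference, the "dozen functions flying around" that most guides start with boil down to roughly this; a minimal sketch using GLFW and glad, where the window size and GL version are arbitrary choices and error checks are omitted:

```cpp
#include <glad/glad.h>
#include <GLFW/glfw3.h>

int main() {
    glfwInit();                                         // start the library
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);      // request GL 3.3 core
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow* win = glfwCreateWindow(800, 600, "triangle, eventually", nullptr, nullptr);
    glfwMakeContextCurrent(win);                        // bind the GL context
    gladLoadGLLoader((GLADloadproc)glfwGetProcAddress); // load GL function pointers
    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);                   // the humble triangle goes here
        glfwSwapBuffers(win);
        glfwPollEvents();
    }
    glfwTerminate();
}
```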
Text
every time i look at game development
my brain hurts a lot. because it isn't the game development i'm interested in. it's the graphics stuff like opengl or vulkan and shaders and all that. i need to do something to figure out how to balance work and life better. i've got a course on vulkan i got a year ago i still haven't touched and every year that passes that i don't understand how to do graphics api stuffs, the more i scream internally. like i'd love to just sit down with a cup of tea and vibe out to learning how to draw lines. i'm just in the wrong kind of job tbh. a lot of life path stuff i coulda shoulda woulda. oh well.
Text
thinking of writing a self-aggrandizing "i'm a programmer transmigrated to a fantasy world but MAGIC is PROGRAMMING?!" story only the protagonist stumbles through 50 chapters of agonizing opengl tutorialization to create a 700-phoneme spell that makes a red light that slowly turns blue over ten seconds. this common spell doesn't have the common interface safeties for the purposes of simplicity and so if you mispronounce the magic words for 'ten seconds' maybe it'll continue for five million years or until you die of mana depletion, whichever comes first. what do you mean cast fireballs those involve a totally different api; none of that transfers between light-element spells and fire-element spells
chapter 51 is the protag giving up at being a wizard and settling down to be a middling accountant
Note
Not sure if you've been asked/ have answered this before but do you have any particular recommendations for learning to work with Vulkan? Any particular documentation or projects that you found particularly engaging/helpful?
vkguide is def a really great resource! Though I would def approach Vulkan after first learning a simpler graphics API like OpenGL, since there are a lot of additional intuitions to build around why Vulkan/DX12/Metal are designed the way that they are.
Also, I personally use Vulkan-Hpp to help make my Vulkan code a lot cleaner and more C++-like, rather than directly using the C API and exposing myself to the possibility of more mistakes (like forgetting to set the right structure type enum and such). It comes with some utils and macros for making some of the detailed bits of Vulkan a bit easier to manage, like pNext chains and such!
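For a taste of the difference, here is a minimal sketch, assuming the vulkan.hpp header that ships with the Vulkan SDK (the application name and version values are placeholders):

```cpp
#include <vulkan/vulkan.hpp>

vk::Instance makeInstance() {
    // With the raw C API you would have to set sType on every struct yourself;
    // the vulkan.hpp wrappers fill it in and throw vk::SystemError on failure.
    vk::ApplicationInfo app("demo", 1, "no-engine", 1, VK_API_VERSION_1_2);
    vk::InstanceCreateInfo info({}, &app);  // sType/pNext handled by the wrapper
    return vk::createInstance(info);
}
```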
Note
Hi! 😄
Trying to use ReShade on Disney Dreamlight Valley, and I'm using a YouTube video meant for all games, as I haven't found a tutorial strictly for DDV. The first problem I've run into is that I'm apparently supposed to press Home when I get to the menu of the game, but nothing pops up, so I can't even get past that. It shows everything downloaded in the DDV files tho. 😅 Never done this before so I'm very confused. Thanks!
I haven't played DDV so I'm not sure if it has any quirks or special requirements for ReShade.
Still, for general troubleshooting, make sure that you've installed ReShade into the same folder where the main DDV exe is located.
Does the ReShade banner come up at the top when you start the game? If so, then ReShade is at least installed in the correct place.
If it doesn't show up despite being in the correct location, make sure you've selected the correct rendering API as part of the installation process. You know the part where it asks you to choose between DirectX 9/10/11/12 or OpenGL/Vulkan? If you choose the wrong one of those, ReShade won't load.
You can usually find out the rendering API of most games on PC Gaming Wiki. Here's the page for Disney Dreamlight Valley. It seems to suggest it supports DirectX 10 and 11.
If you're certain everything is installed correctly and the banner is showing up as expected but you still can't open the ReShade menu, you can change the Home key to something else, just in case DDV is blocking the use of that key for some reason.
Open reshade.ini in a text editor and look for the INPUT section.
You'll see a line that says
KeyOverlay=36,0,0,0
36 is the JavaScript keycode for Home. You can change that to something else.
Check the game's hotkeys to find something that isn't already assigned to a command. For example, a lot of games use F5 to quick save and F9 to quick load, so you might need to avoid those. In TS4, at the moment, I use F6 to open the overlay because it's not assigned to anything in the game. You can find a list of JavaScript keycodes here. F6, for example, is 117, so you'd change the line to read
KeyOverlay=117,0,0,0
But you can choose whatever you want. Just remember to check it isn't already used by the game.
Note: you can usually do this in the ReShade menu, but since it isn't opening for you at the moment, this is a way to change that key manually.
Beyond that, I'm not sure what would be stopping the menu from opening. If you've exhausted the options above, you can try asking over in the official ReShade Discord server. Please give them as much information as possible, in clear, uncluttered, to-the-point language, to increase your chances of someone being able (and willing) to help.
Text
The S3 ViRGE Minecraft Thing
(Article originally posted on TheRetroWeb)
Have you ever wanted to play Minecraft? Have you ever wondered “how terrible can I make this experience”? No? Well too bad. You’ve clicked on this article and I’ve gone this far already, so let’s just keep going and see what happens.
A little bit of history…
The S3 ViRGE, short for Video and Rendering Graphics Engine (alternately, Virtual Reality Graphics Engine), was first introduced in November of 1995, with an actual release date of early 1996. It was S3 Graphics’ very first 3D-capable graphics card, and it had the unfortunate luck of launching alongside… the 3Dfx Voodoo 1.
It rather quickly became apparent that the ViRGE was terribly insufficient in comparison to the Voodoo, and in fact it even picked up the infamous moniker of “graphics decelerator”, which poked fun at its lackluster 3D performance.
The original ViRGE would be followed by the card that this article focuses on, the ViRGE/DX, just a little under a year later in the waning months of 1996.
The ViRGE/DX was a welcome improvement over the original release, lifting performance to more acceptable levels and improving software compatibility with better drivers. Mostly. And while native Direct3D performance was iffy at best and OpenGL support was nonexistent, S3 did have one last trick up their sleeves to keep the ViRGE line relevant: the S3D Toolkit.
Similar to 3Dfx’s Glide API, the S3D Toolkit was S3 Graphics’ proprietary low-level graphics API for the ViRGE. Unlike 3Dfx’s offering, however, S3D, much like the cards it was intended for, fell flat on its face. Only a small handful of games ever natively supported S3D acceleration, and by my own admission, I haven’t ever played any of them.
But wait, this article is about playing Minecraft on the ViRGE, isn’t it? The block game of all time is famously written in Java, and uses an OpenGL rendering pipeline. So, how can the S3 ViRGE, a card with no OpenGL support, possibly play Minecraft?
Wrappers!
This is where a little thing called “OpenGL wrappers” comes in. Shipping in the form of plain OpenGL32.dll files (at least on Windows) that you drop into a folder alongside whatever needs OpenGL acceleration, these wrappers provide a way to modify, or “wrap”, OpenGL API calls.
In our case, we are interested in the category of OpenGL wrappers that translate OpenGL API calls into those of other APIs. For a more modern equivalent of these wrappers, the Intel Arc line of graphics cards uses DXVK to translate older DirectX 9 calls to Vulkan, which is a natively-supported API.
For this experiment, we will be using a wrapper called “S3Mesa”, made by Brian Paul of the Mesa project. Though open-source, this wrapper never made it to a completed state, and is missing certain features such as texture transparency despite the ViRGE itself being supposedly capable of it. However, this does not affect gameplay much beyond aesthetics.
The S3Mesa wrapper, on a more technical note, translates OpenGL 1.1 calls to a mix of both S3D and DirectX API calls.
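To make the mechanism concrete, here is a toy sketch, and emphatically not S3Mesa's actual source: the wrapper DLL exports the standard OpenGL 1.1 entry points and re-expresses each call in the target API. s3dClearBuffer is a hypothetical stand-in for whatever the S3D Toolkit really exposes.

```cpp
typedef unsigned int GLbitfield;
#define GL_COLOR_BUFFER_BIT 0x00004000

// Hypothetical S3D-side work; imagine a real S3D Toolkit call here.
static void s3dClearBuffer() {}

// The application loads this as OpenGL32.dll and thinks it called OpenGL;
// the wrapper quietly does the work through S3D/DirectX instead.
extern "C" __declspec(dllexport) void __stdcall glClear(GLbitfield mask) {
    if (mask & GL_COLOR_BUFFER_BIT)
        s3dClearBuffer();
}
```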
The System
At last, we arrive at the system hardware. As of writing, I am currently benchmarking a plethora of low-end (or otherwise infamous) cards for my “Ultra Nugget Graphics Card Roundup”, and so the system itself is likely a liiiiiittle bit overpowered for the lowly ViRGE/DX:
AMD Athlon XP (Palomino) @ 1.14GHz
Shuttle MK32 Socket A motherboard
256MB DDR-400
S3 ViRGE/DX (upgraded to 4MB of video memory)
Windows 98SE
Why Windows 98SE? Because S3 never released 3D-accelerated graphics drivers for non-Windows 9x operating systems in the consumer space.
For Minecraft itself, KernelEX 4.5.2 and Java 6 are installed as well, and an old version of the launcher dating back to early 2013 that I personally refer to as the “Minecraft 1.5 Launcher” is used for compatibility purposes. Also because no launcher that can work on Windows 98 is capable of logging into the authentication servers anymore.
Setting up the game
With Windows 98SE, KernelEX, and Java 6 installed (in that order, of course), we can turn our attention to the game itself. As mentioned before, no launcher to my knowledge that runs on Windows 98 is capable of logging into the auth servers. This results in two additional problems: starting the game itself and downloading game assets.
Using the 1.5 launcher solves the first issue by means of relying on a little thing called the lastlogin file. This is how the old launcher allowed players to keep playing offline when disconnected from the internet, but more importantly, unlike the modern launcher, it doesn't expire. 🙂
And because of that, our login problem is solved by middle school me’s old .minecraft folder backup, from which I’ve extracted the lastlogin file for use in this experiment.
As for game assets, there is no longer any way to easily download the game files for use on Windows 98SE directly, so I've instead pieced together a folder using that same backup. The most important difference is that instead of a “versions” folder, there is now a “bin” folder, where both the natives and the game's jar file reside.
Now that our .minecraft folder is acquired, take that thing and plop it right down into the Windows folder in Windows 98. Why? Because on Windows 98, the 1.5 launcher ignores the “application data” folder entirely. The launcher itself can go anywhere you'd like, so long as you're using the .exe version and not the .jar version.
Finally, to wrap things up, place the OpenGL to S3D wrapper in the same location as the launcher exe. Make sure it’s called OpenGL32.dll!
The Game
You just lost it. 🙂
The S3 ViRGE, by my own testing, is capable of running any version of Minecraft from Classic up to and including Indev version in-20100110. However, it is EXTREMELY unstable, with a tendency to crash mere seconds after loading into a world. This is on top of some minor rendering errors introduced by the aforementioned incomplete state of the S3Mesa wrapper. This video was recorded on Windows ME rather than Windows 98, but that does not impact anything regarding performance or compatibility (in fact, at least in my experience, the game is more stable under ME than 98).
Below are the desktop/game settings used in testing:
“Tiny” render distance
Desktop resolution: 640 x 480 (don’t fullscreen the game)
Bit depth: 16/24-bit color (32-bit can cause the ViRGE to run out of memory, and 16-bit can cause strange issues on Windows 98)
And last but not least, some gameplay. This came from some scrapped footage originally intended for my “UNGCR” video, and was only intended for personal reference in collecting performance numbers. As such, the audio is muted due to some copyrighted music blasting in the background.
[embedded YouTube video]
Further reading/resources
Vogons Wrapper Project
Original video that this article is based on
VGA Legacy MKIII’s ViRGE/DX page
thanks for reading my walking natural disaster of an article kthxbaiiiiiiii
Text
Wish List For A Game Profiler
I want a profiler for game development. No existing profiler currently collects the data I need. No existing profiler displays it in the format I want. No existing profiler filters and aggregates profiling data for games specifically.
I want to know what makes my game lag. Sure, I also care about certain operations taking longer than usual, or about inefficient resource usage in the worker thread. The most important question that no current profiler answers is: In the frames that currently do lag, what is the critical path that makes them take too long? Which function should I optimise first to reduce lag the most?
I know that, with the right profiler, these questions could be answered automatically.
Hybrid Sampling Profiler
My dream profiler would be a hybrid sampling/instrumenting design. It would be a sampling profiler like Austin (https://github.com/P403n1x87/austin), but a handful of key functions would be instrumented in addition to the sampling: displaying a new frame/waiting for vsync, reading inputs, draw calls to the GPU, spawning threads, opening files and sockets, and similar operations should always be tracked. Even if displaying a frame is not a heavy operation, it is still important to measure exactly when it happens, if not how long it takes. If a draw call returns right away, and the real work on the GPU begins immediately, it's still useful to know when the GPU started working. Without knowing exactly when inputs are read, and when a frame is displayed, it is difficult to know if a frame is lagging. Especially when those operations are fast, they are likely to be missed by a sampling profiler.
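A sketch of what the instrumented half could look like; every name here is invented for illustration, and a real implementation would log to a ring buffer shared with the sampler rather than stdout:

```cpp
#include <chrono>
#include <cstdio>

// Emit an exact timestamped event for a key operation, independent of
// whether the statistical sampler happens to catch it.
static void prof_event(const char* name, const char* phase) {
    auto ns = std::chrono::steady_clock::now().time_since_epoch().count();
    std::printf("%lld %s %s\n", (long long)ns, phase, name);
}

// RAII guard so instrumented functions record both entry and exit.
struct ProfScope {
    const char* name;
    explicit ProfScope(const char* n) : name(n) { prof_event(name, "begin"); }
    ~ProfScope() { prof_event(name, "end"); }
};

void present_frame() {
    ProfScope _("present");  // always recorded, even if presenting is instant
    // ... swap buffers / wait for vsync ...
}
```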
Tracking Other Resources
It would be a good idea to collect CPU core utilisation, GPU utilisation, and memory allocation/usage as well. What does it mean when one thread spends all of its time in that function? Is it idling? Is it busy-waiting? Is it waiting for another thread? Which one?
It would also be nice to know if a thread is waiting for IO. This is probably a “heavy” operation and would slow the game down.
There are many different vendor-specific tools for GPU debugging, some old ones that worked well for OpenGL but are no longer developed, open-source tools that require source code changes in your game, and the newest ones directly from GPU manufacturers that only support DirectX 12 or Vulkan, but no OpenGL or graphics card that was built before 2018. It would probably be better to err on the side of collecting less data and supporting more hardware and graphics APIs.
The profiler should collect enough data to answer questions like: Why is my game lagging even though the CPU is utilised at 60% and the GPU is utilised at 30%? During that function call in the main thread, was the GPU doing something, and were the other cores idling?
Engine/Framework/Scripting Aware
The profiler knows which samples/stack frames are inside gameplay or engine code, native or interpreted code, project-specific or third-party code.
In my experience, it’s not particularly useful to know that the code spent 50% of the time in ceval.c, or 40% of the time in SDL_LowerBlit, but that’s the level of granularity provided by many profilers.
Instead, the profiler should record interpreted code, and allow the game to set a hint if the game is in turn interpreting code. For example, if there is a dialogue engine, that engine could set a global “interpreting dialogue” flag and a “current conversation file and line” variable based on source maps, and the profiler would record those, instead of stopping at the dialogue interpreter-loop function.
Of course, this feature requires some cooperation from the game engine or scripting language.
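One possible shape for that cooperation, entirely hypothetical since the post describes an idea rather than an existing API:

```cpp
#include <vector>

struct ProfilerHint { const char* kind; const char* file; int line; };

// The sampler walks this stack when it interrupts the thread, so samples can
// be attributed to "dialogue somefile:42" instead of the interpreter loop.
thread_local std::vector<ProfilerHint> g_hintStack;

void profiler_push_hint(ProfilerHint h) { g_hintStack.push_back(h); }
void profiler_pop_hint()                { g_hintStack.pop_back(); }

void run_dialogue_line(const char* file, int line) {
    profiler_push_hint({"dialogue", file, line});
    // ... interpret one line of the conversation script ...
    profiler_pop_hint();
}
```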
Catching Common Performance Mistakes
With a hybrid sampling/instrumenting profiler that knows about frames or game state update steps, it is possible to instrument many or most “heavy” functions. Maybe this functionality should be turned off by default. If most “heavy” functions, for example “parsing a TTF file to create a font object”, are instrumented, the profiler can automatically highlight a mistake when the programmer loads a font from disk during every frame, a hundred frames in a row.
This would not be part of the sampling stage, but part of the visualisation/analysis stage.
Filtering for User Experience
If the profiler knows how long a frame takes, and how much time is spent waiting during each frame, we can safely disregard those frames that complete quickly, with some time to spare. The frames that concern us are those that lag, or those that are dropped. For example, imagine a game spends 30% of its CPU time on culling, and 10% on collision detection. You would think to optimise the culling. But what if the collision detection takes 1 ms during most frames, culling always takes 8 ms, and whenever the player fires a bullet, the collision detection causes a lag spike? The time spent on culling is not the problem here.
This would probably not be part of the sampling stage, but part of the visualisation/analysis stage. Still, you could use this information to discard “fast enough” frames and re-use the memory, and only focus on keeping profiling information from the worst cases.
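A sketch of that analysis-side filter, where FrameRecord and the 16.6 ms budget are illustrative assumptions:

```cpp
#include <algorithm>
#include <vector>

struct FrameRecord { double ms; /* samples, events, ... */ };

// Keep only the `keep` slowest frames that exceeded a 60 Hz budget;
// everything that finished comfortably is discarded and its memory reused.
std::vector<FrameRecord> worst_frames(std::vector<FrameRecord> frames, size_t keep) {
    std::erase_if(frames, [](const FrameRecord& f) { return f.ms <= 16.6; });
    std::sort(frames.begin(), frames.end(),
              [](const FrameRecord& a, const FrameRecord& b) { return a.ms > b.ms; });
    if (frames.size() > keep) frames.resize(keep);
    return frames;
}
```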
Aggregating By Code Paths
This is easier when you don't use an engine, but it can probably also be done if the profiler is “engine-aware”. It would require some per-engine custom code though. Instead of saying “The game spent 30% of the time doing vector addition”, or smarter “The game spent 10% of the frames that lagged most in the MobAIRebuildMesh function”, I want the game to distinguish between game states like “inventory menu”, “spell targeting (first person)” or “switching to adjacent area”. If the game does not use a data-driven engine, but multiple hand-written game loops, these states can easily be distinguished (but perhaps not labelled) by comparing call stacks: different states with different game loops call the code to update the screen from different places, and different code paths could have completely different performance characteristics, so it makes sense to evaluate them separately.
Because the hypothetical hybrid profiler instruments key functions, enough call stack information to distinguish different code paths is usually available, and the profiler might be able to automatically distinguish between the loading screen, the main menu, and the game world, without any need for the code to give hints to the profiler.
This could also help to keep the memory usage of the profiler down without discarding too much interesting information, by only keeping the 100 worst frames per code path. This way, the profiler can collect performance data on the gameplay without running out of RAM during the loading screen.
In a data-driven engine like Unity, I’d expect everything to happen all the time, on the same, well-optimised code path. But this is not a wish list for a Unity profiler. This is a wish list for a profiler for your own custom game engine, glue code, and dialogue trees.
All I need is a profiler that is a little smarter, that is aware of SDL, OpenGL, Vulkan, and YarnSpinner or Ink. Ideally, I would need somebody else to write it for me.
Text
Mesh topologies: done!
Followers may recall that on Thursday I implemented wireframes and Phong shading in my open-source Vulkan project. Both these features make sense only for 3-D meshes composed of polygons (triangles in my case).
The next big milestone was to support meshes composed of lines. Because of how Vulkan handles line meshes, the additional effort to support other "topologies" (point meshes, triangle-fan meshes, line-strip meshes, and so on) is slight, so I decided to support those as well.
I had a false start where I was using integer codes to represent different topologies. Eventually I realized that defining an "enum" would be a better design, so I did some rework to make it so.
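In C++ terms, the shape of that rework is roughly the following (a sketch only: the project itself is in Java, and these names are illustrative):

```cpp
#include <vulkan/vulkan.h>

// A named enum instead of bare integer codes...
enum class MeshTopology { PointList, LineList, LineStrip, TriangleList, TriangleFan };

// ...mapped to Vulkan's own topology enum when building the pipeline.
VkPrimitiveTopology toVk(MeshTopology t) {
    switch (t) {
        case MeshTopology::PointList:    return VK_PRIMITIVE_TOPOLOGY_POINT_LIST;
        case MeshTopology::LineList:     return VK_PRIMITIVE_TOPOLOGY_LINE_LIST;
        case MeshTopology::LineStrip:    return VK_PRIMITIVE_TOPOLOGY_LINE_STRIP;
        case MeshTopology::TriangleList: return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        case MeshTopology::TriangleFan:  return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN;
    }
    return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
}
```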
I achieved the line-mesh milestone earlier today (Monday) at commit ce7a409. No screenshots yet, but I'll post one soon.
In parallel with this effort, I've been doing what I call "reconciliation" between my Vulkan graphics engine and the OpenGL engine that Yanis and I wrote last April. Reconciliation occurs when I have 2 classes that do very similar things, but the code can't be reused in the form of a library. The idea is to make the source code of the 2 versions as similar as possible, so I can easily see the differences. This facilitates porting features and fixes back and forth between the 2 versions.
I'm highly motivated to make my 2 engine repos as similar as possible: not just similar APIs, but also the same class/method/variable names, the same coding style, similar directory structures, and so on. Once I can easily port code back and forth, my progress on the Vulkan engine should accelerate considerably. (That's how the Vulkan project got user-input handling so quickly, for instance.)
The OpenGL engine will also benefit from reconciliation. Already it's gained a model-import package. That's something we never bothered to implement last year. Last month I wrote one for the Vulkan engine (as part of the tutorial) and already the projects are similar enough that I was able to port it over without much difficulty.
#open source#vulkan#java#software development#accomplishments#github#3d graphics#coding#jvm#3d mesh#opengl#polygon#3d model#making progress#work in progress#topology#milestones
Text
(the 'one company' in the Flash case is primarily Apple, although the decision to decisively deprecate and kill it was made across all the browser manufacturers. Apple are also the ones who decided not to let Vulkan (and, soon, OpenGL) run on their devices, and to have their own graphics API, which leads to the current messy situation where graphics APIs seem to be multiplying endlessly and you have to rely on some kind of abstraction package to transpile between Vulkan, DX12, Metal, WebGPU, OpenGL, ...)
Link
This is both a really interesting history of graphics APIs and a nice discussion of WebGPU that makes it sound a lot more interesting than I would have expected.
Text
Shader Programming: 3D Scene using Direct 3D 11 - 3rd Year Project
My favorite project I made while I was an undergrad! This was the first time I used Direct 3D, HLSL and coded my first shaders.
I had no idea about shaders prior to this project - I had only dabbled in OpenGL, but wasn't aware of the graphics pipeline, nor that GPUs were programmable.
That said, the whole module fascinated me. It was so engaging, and I loved the way algorithms and vector maths could be implemented to draw and render shapes and effects on the screen.
[embedded YouTube video]
The scene makes use of the vertex, hull, domain and pixel shaders (the last known as the fragment shader in OpenGL), for geometry manipulation and post-processing effects.
All of the shaders were written from scratch using VS and the D3D11 graphics API, in C++ and HLSL.
You can read more about the project here!
Text
MENDUNER RX590 Graphics Card - 8GB GDDR5 256bit Game Graphics Card, Quiet Fast Heat Dissipation GPU for Desktop Computer
Product Description
MENDUNER RX590 Graphics Card – 8GB 256-bit GDDR5 Gaming Graphics Card, Fast and Quiet Heat Dissipation GPU for Desktop Computer
[3D API] RX590 graphics card, for DirectX 12, for OpenGL 4.5, HDCP support, support for up to 4 screen outputs, for VR Ready level graphics card.
[Bus interface type] This RX590 gaming graphics card includes PCI Express 3.0…
Text
Core/Memory
Boost Clock / Memory Speed: 1710 MHz / 14 Gbps
6GB GDDR6
DisplayPort x 3 (v1.4a), HDMI x 1 (supports 4K@60Hz as specified in HDMI 2.0b)

TORX Fan 2.0
Dispersion fan blade: steep curved blade accelerating the airflow
Traditional fan blade: provides steady airflow to the massive heat sink below
Double ball bearing: strong and lasting core for years of smooth gaming

Afterburner Overclocking Utility
OC Scanner: an automated function that finds the highest stable overclock settings.
On Screen Display: provides real-time information on your system's performance.
Predator: in-game video recording.

NVIDIA G-SYNC™ and HDR: get smooth, tear-free gameplay at refresh rates up to 240 Hz, plus HDR, and more. This is the ultimate gaming display and the go-to equipment for enthusiast gamers.
GeForce RTX™ VR: by combining advanced VR rendering, real-time ray tracing, and AI, GeForce RTX will take VR to a new level of realism.

Specifications
Brand: MSI
Model: GeForce RTX 2060 Ventus GP OC
Interface: PCI Express x16 3.0
Chipset Manufacturer: NVIDIA
GPU: GeForce RTX 2060
Boost Clock: 1710 MHz
Stream Processors: 1920
Memory Clock: 14.0 Gbps
Memory Size: 6GB
Memory Interface: 192-bit
Memory Type: GDDR6
3D API: DirectX 12, OpenGL 4.6
Ports: 1 x HDMI, 3 x DisplayPort 1.4
Max Resolution: 7680 x 4320
Cooler: Dual fan
Power Connector: 1x 8-pin
HDCP Ready
Note
Not sure if there's a good way to phrase this since I get annoyed at a lot of these kinds of questions, but
Is there a particular reason that you go for OpenGL + Haskell? My heart tells me that those are the worst possible fit (procedural API over the top of a big hidden state machine w/ soft real-time requirements vs a runtime that wants to do pure functions and lazy evaluation). That said, you seem to get to interesting places with some regularity whereas my projects (C/C++ and vulkan usually) tend to die before I get to the cool part, so I'm wondering if there's something to it
i just got frustrated with c-alikes and i really enjoyed aspects of haskell coding. it is objectively a very silly combination, although not as silly as it has been historically given the various improvements in haskell gc over the years.
historically i've used gpipe for haskell rendering, which does some astounding type family wizardry to basically fully-hide the opengl state machine and also let you write shaders in actual haskell (values in the shader monad are actually part of a compositional scripting type that evaluates to glsl code. it's wild.) so it's about as close as you can get to totally ignoring all the opengl-ness of opengl. that being said, uh, that has some problems (zero memoization in generated scripts; very unclear surfacing of real opengl constraints)
also to be fair my projects also tend to die before i get to the cool part, it's just sometimes i manage to get some neat renders out there before.
(right now i've been wanting to jettison the gpipe library in favor of just doing raw opengl right in IO, mostly so i can actually use opengl 4 features that aren't surfaced either in gpipe nor in the OpenGL package, but ofc the first step there would be a whole bunch of low-level data fiddling. but since i've been doing webgl2 + javascript also, uh, the opengl part would mostly end up being exactly the same, so it feels a little less intimidating now. i just really wanna write some wild shader code and to write really wild shader code you do kind of have to directly interface with raw opengl.)
Text
The Evolution of Graphics Technology in PC Games
The evolution of graphics technology in PC games has been one of the key factors driving innovation and improving the gameplay experience. Since their first appearance, game graphics have undergone an impressive transformation, delivering visuals that are increasingly realistic and immersive. What follows is the journey of how graphics technology in PC games has evolved.
1. 2D and 8-bit Graphics
In the late 1970s and early 1980s, PC games first arrived with very simple 2D graphics. Games like Pong and Space Invaders featured pixel-based visuals with a limited color palette. In this period, the focus was on gameplay and basic mechanics rather than graphical quality.
2. The Shift to 3D Graphics
Entering the 1990s, the game industry began moving to 3D graphics, starting with games like Doom and Wolfenstein 3D. This technology opened a new era in game design, allowing the creation of more complex and immersive virtual worlds. Using 3D polygonal graphics, developers could build more realistic environments, even though they still looked fairly rough by today's standards.
3. The Rise of Graphics APIs
An important step in the evolution of graphics was the emergence of APIs (Application Programming Interfaces) such as DirectX and OpenGL. These APIs gave developers the tools to optimize game graphics, raise visual quality, and create richer experiences. They enabled support for lighting effects, shadows, and smoother textures.
4. Real-Time Rendering and Shaders
As the technology matured, real-time rendering techniques and shaders became the norm. Games like Half-Life 2 and Far Cry introduced technologies such as vertex shading and pixel shading, enabling more complex and realistic visual effects. This delivered a deeper experience, in which players could watch lighting and shadows change dynamically during play.
5. Ray Tracing and Heightened Realism
Over the last decade, ray tracing has drawn attention as a way to achieve more realistic visuals. With the ability to simulate how light interacts with objects in a virtual world, games like Cyberpunk 2077 and Control use ray tracing to deliver highly detailed lighting, reflections, and shadows. This technology marks a major step in how we think about graphics in games.
6. VR and AR: The Future of Game Graphics
Graphics technology has not stopped at conventional rendering; virtual reality (VR) and augmented reality (AR) are opening new doors for the gaming experience. Games like Half-Life: Alyx and Beat Saber show how immersive graphics can change the way we interact with game worlds. These experiences force developers to rethink graphics design around user interaction in 3D space.
Conclusion
The evolution of graphics technology in PC games has taken us from simple 2D visuals to deep, realistic virtual worlds. With continuing innovation such as ray tracing and VR, the future of game graphics looks ever brighter and more exciting. These rich visual experiences not only enhance gameplay but also change how we perceive and interact with game worlds.