#OpenGL API
Text
I want to make this piece of software. I want this piece of software to be a good piece of software. As part of making it a good piece of software, i want it to be fast. As part of making it fast, i want to be able to parallelize what i can. As part of that parallelization, i want to use compute shaders. To use compute shaders, i need some interface to graphics processors. After determining that Vulkan is not an API that is meant to be used by anybody, i decided to use OpenGL instead. In order for using OpenGL to be useful, i need some way to show the results to the user and get input from the user. I can do this by means of the Wayland API. In order to bridge the gap between Wayland and OpenGL, i need to be able to create an OpenGL context where the default framebuffer is the same as the Wayland surface that i've set to be a window. I can do this by means of EGL. In order to use EGL to create an OpenGL context, i need to select a config for the context.
Unfortunately, it just so happens that on my Linux partition, the implementation of EGL does not support the config that i would need for this piece of software.
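For reference, requesting such a config looks roughly like this (a minimal sketch with error handling omitted; the attribute values are illustrative of a window-capable desktop-GL config, and when the implementation supports no matching config, the returned count simply comes back zero):

#include <EGL/egl.h>
#include <wayland-client.h>

/* Ask EGL for a config usable for a desktop-OpenGL context on a Wayland window. */
EGLConfig choose_config(struct wl_display *wl_dpy) {
    EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)wl_dpy);
    eglInitialize(dpy, NULL, NULL);
    eglBindAPI(EGL_OPENGL_API);               /* desktop GL, not GLES */

    const EGLint attribs[] = {
        EGL_SURFACE_TYPE,    EGL_WINDOW_BIT,  /* usable with a window surface */
        EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT,  /* must support desktop OpenGL */
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint num_configs = 0;
    /* num_configs stays 0 when nothing matches: the failure described above. */
    eglChooseConfig(dpy, attribs, &cfg, 1, &num_configs);
    return num_configs > 0 ? cfg : NULL;
}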
Therefore, i am going to write this piece of software for 9front instead, using my 9front partition.
#Update#Programming#Technology#Wayland#OpenGL#Computers#Operating systems#EGL (API)#Windowing systems#3D graphics#Wayland (protocol)#Computer standards#Code#Computer graphics#Standards#Graphics#Computing standards#3D computer graphics#OpenGL API#EGL#Computer programming#Computation#Coding#OpenGL graphics API#Wayland protocol#Implementation of standards#Computational technology#Computing#OpenGL (API)#Process of implementation of standards
Text
Apple Unveils Mac OS X
Next Generation OS Features New "Aqua" User Interface
MACWORLD EXPO, SAN FRANCISCO
January 5, 2000
Reasserting its leadership in personal computer operating systems, Apple® today unveiled Mac® OS X, the next generation Macintosh® operating system. Steve Jobs demonstrated Mac OS X to an audience of over 4,000 people during his Macworld Expo keynote today, and over 100 developers have pledged their support for the new operating system, including Adobe and Microsoft. Pre-release versions of Mac OS X will be delivered to Macintosh software developers by the end of this month, and will be commercially released this summer.
"Mac OS X will delight consumers with its simplicity and amaze professionals with its power," said Steve Jobs, Apple's iCEO. "Apple's innovation is leading the way in personal computer operating systems once again."
The new technology Aqua, created by Apple, is a major advancement in personal computer user interfaces. Aqua features the "Dock", a revolutionary new way to organize everything from applications and documents to web sites and streaming video. Aqua also features a completely new Finder which dramatically simplifies the storing, organizing and retrieving of files, and unifies these functions on the host computer and across local area networks and the Internet. Aqua offers a stunning new visual appearance, with luminous and semi-transparent elements such as buttons, scroll bars and windows, and features fluid animation to enhance the user's experience. Aqua is a major advancement in personal computer user interfaces, from the same company that started it all in 1984 with the original Macintosh.
Aqua is made possible by Mac OS X's new graphics system, which features all-new 2D, 3D and multimedia graphics. 2D graphics are performed by Apple's new "Quartz" graphics system, which is based on the PDF Internet standard and features on-the-fly PDF rendering, anti-aliasing and compositing, a first for any operating system. 3D graphics are based on OpenGL, the industry's most widely supported 3D graphics technology, and multimedia is based on the QuickTime™ industry standard for digital multimedia.
At the core of Mac OS X is Darwin, Apple's advanced operating system kernel. Darwin is Linux-like, featuring the same FreeBSD Unix support and open-source model. Darwin brings an entirely new foundation to the Mac OS, offering Mac users true memory protection for higher reliability, preemptive multitasking for smoother operation among multiple applications and fully Internet-standard TCP/IP networking. As a result, Mac OS X is the most reliable and robust Apple operating system ever.
Gentle Migration
Apple has designed Mac OS X to enable a gentle migration for its customers and developers from their current installed base of Macintosh operating systems. Mac OS X can run most of the over 13,000 existing Macintosh applications without modification. However, to take full advantage of Mac OS X's new features, developers must "tune up" their applications to use "Carbon", the updated version of the APIs (Application Program Interfaces) used to program Macintosh computers. Apple expects most of the popular Macintosh applications to be available in "Carbonized" versions this summer.
Developer Support
Apple today also announced that more than 100 leading developers have pledged their support for the new operating system, including Adobe, Agfa, Connectix, id, Macromedia, Metrowerks, Microsoft, Palm Computing, Quark, SPSS and Wolfram (see related supporting quote sheet).
Availability
Mac OS X will be rolled out over a 12 month period. Macintosh developers have already received two pre-releases of the software, and they will receive another pre-release later this month, the first to incorporate Aqua. Developers will receive the final "beta" pre-release this spring. Mac OS X will go on sale as a shrink-wrapped software product this summer, and will be pre-loaded as the standard operating system on all Macintosh computers beginning in early 2001. Mac OS X is designed to run on all Apple Macintosh computers using PowerPC G3 and G4 processor chips, and requires a minimum of 64 MB of memory.
Text
every time i look at game development
my brain hurts a lot. because it isn't the game development i'm interested in. it's the graphics stuff like opengl or vulkan and shaders and all that. i need to do something to figure out how to balance work and life better. i've got a course on vulkan that i got a year ago and still haven't touched, and with every year that passes that i don't understand how to do graphics api stuffs, the more i scream internally. like i'd love to just sit down with a cup of tea and vibe out to learning how to draw lines. i'm just in the wrong kind of job tbh. a lot of life path stuff i coulda shoulda woulda. oh well.
Text
thinking of writing a self-aggrandizing "i'm a programmer transmigrated to a fantasy world but MAGIC is PROGRAMMING?!" story only the protagonist stumbles through 50 chapters of agonizing opengl tutorialization to create a 700-phoneme spell that makes a red light that slowly turns blue over ten seconds. this common spell doesn't have the common interface safeties for the purposes of simplicity and so if you mispronounce the magic words for 'ten seconds' maybe it'll continue for five million years or until you die of mana depletion, whichever comes first. what do you mean cast fireballs those involve a totally different api; none of that transfers between light-element spells and fire-element spells
chapter 51 is the protag giving up at being a wizard and settling down to be a middling accountant
Note
Not sure if you've been asked/have answered this before, but do you have any particular recommendations for learning to work with Vulkan? Any particular documentation or projects that you found particularly engaging/helpful?
vkguide is def a really great resource! Though I would def approach Vulkan after first learning a simpler graphics API like OpenGL, since there are a lot of additional intuitions to build around why Vulkan/DX12/Metal are designed the way that they are.
Also, I personally use Vulkan-Hpp to make my Vulkan code a lot cleaner and more C++-like, rather than directly using the C API and exposing myself to the possibility of more mistakes (like forgetting to set the right structure type enum and such). It comes with some utils and macros for making some of the detailed bits of Vulkan a bit easier to manage, like pNext chains and such!
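For illustration, the difference looks roughly like this (a minimal sketch; the application name and version are placeholders): with the raw C API you must fill in sType yourself on every struct, while the vk:: wrappers set it for you.

#include <vulkan/vulkan.hpp>

// vk::ApplicationInfo and vk::InstanceCreateInfo set their sType fields
// automatically, so the "forgot the structure type enum" mistake goes away.
vk::Instance make_instance() {
    vk::ApplicationInfo app_info{};
    app_info.pApplicationName = "demo";        // placeholder
    app_info.apiVersion = VK_API_VERSION_1_2;  // placeholder

    vk::InstanceCreateInfo create_info{};
    create_info.pApplicationInfo = &app_info;

    // In the default configuration this throws vk::SystemError on failure
    // instead of handing back a VkResult to check.
    return vk::createInstance(create_info);
}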
Note
Hi!
Trying to use ReShade on Disney Dreamlight Valley, and I'm using a YouTube video for all games, as I haven't found a tutorial strictly for DDV. The first problem I've run into is that I'm apparently supposed to click Home when I get to the menu of the game, but nothing pops up, so I can't even get past that. It shows everything downloaded in the DDV files tho.
Never done this before so I'm very confused. Thanks!
I haven't played DDV so I'm not sure if it has any quirks or special requirements for ReShade.
Still, for general troubleshooting, make sure that you've installed ReShade into the same folder where the main DDV exe is located.
Does the ReShade banner come up at the top when you start the game? If so, then ReShade is at least installed in the correct place.
If it doesn't show up despite being in the correct location, make sure you've selected the correct rendering API as part of the installation process. You know the part where it asks you to choose between DirectX 9/10/11/12 or OpenGL/Vulkan? If you choose the wrong one of those, ReShade won't load.
You can usually find out the rendering API of most games on PC Gaming Wiki. Here's the page for Disney Dreamlight Valley. It seems to suggest it supports DirectX 10 and 11.
If you're certain everything is installed correctly and the banner is showing up as expected but you still can't open the ReShade menu, you can change the Home key to something else, just in case DDV is blocking the use of that key for some reason.
Open reshade.ini in a text editor and look for the INPUT section.
You'll see a line that says
KeyOverlay=36,0,0,0
36 is the javascript keycode for Home. You can change that to something else.
Check the game's hotkeys to find something that isn't already assigned to a command. For example, a lot of games use F5 to quick save and F9 to quick load, so you might need to avoid those. In TS4 I currently use F6 to open the overlay, because it's not assigned to anything in the game. You can find a list of javascript keycodes here. F6, for example, is 117, so you'd change the line to read
KeyOverlay=117,0,0,0
But you can choose whatever you want. Just remember to check it isn't already used by the game.
Note: you can usually do this in the ReShade menu, but since it isn't opening for you at the moment, this is a way to change that key manually.
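For reference, after the change the relevant part of reshade.ini would look something like this (other keys in the section omitted):

[INPUT]
KeyOverlay=117,0,0,0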
Beyond that, I'm not sure what would be stopping the menu from opening. If you've exhausted the options above, you can try asking over in the official ReShade discord server. Please give them as much information as possible in as clear and uncluttered and to-the-point language as possible to increase your chances of someone being able (and willing) to help.
Text
The S3 ViRGE Minecraft Thing
(Article originally posted on TheRetroWeb)
Have you ever wanted to play Minecraft? Have you ever wondered "how terrible can I make this experience"? No? Well too bad. You've clicked on this article and I've gone this far already, so let's just keep going and see what happens.
A little bit of history…
The S3 ViRGE, short for Video and Rendering Graphics Engine (alternately, Virtual Reality Graphics Engine), was first introduced in November of 1995, with an actual release date of early 1996. It was S3 Graphics' very first 3D-capable graphics card, and it had the unfortunate luck of launching alongside… the 3Dfx Voodoo 1.
It became rather quickly apparent that the ViRGE was terribly insufficient in comparison to the Voodoo, and in fact it even picked up the infamous moniker of "graphics decelerator", which poked fun at its lackluster 3D performance.
The original ViRGE would be followed by the card that this article focuses on, the ViRGE/DX, just a little under a year later in the waning months of 1996.
The ViRGE/DX was a welcome improvement over the original release, lifting performance to more acceptable levels and improving software compatibility with better drivers. Mostly. And while native Direct3D performance was iffy at best and OpenGL support was nonexistent, S3 did have one last trick up their sleeves to keep the ViRGE line relevant: the S3D Toolkit.
Similar to 3Dfx's Glide API, the S3D Toolkit was S3 Graphics' proprietary low-level graphics API for the ViRGE. Unlike 3Dfx's offering, however, S3D, much like the cards it was intended for, fell flat on its face. Only a small handful of games ever natively supported S3D acceleration, and by my own admission, I haven't ever played any of them.
But wait, this article is about playing Minecraft on the ViRGE, isn't it? The block game of all time is famously written in Java, and uses an OpenGL rendering pipeline. So, how can the S3 ViRGE, a card with no OpenGL support, possibly play Minecraft?
Wrappers!
This is where a little thing called "OpenGL wrappers" comes in. Shipping in the form of plain OpenGL32.dll files (at least, on Windows) that you drop into a folder alongside whatever needs OpenGL acceleration, these wrappers provide a way to modify, or "wrap", OpenGL API calls.
In our case, we are interested in the category of OpenGL wrappers that translate OpenGL API calls into those of other APIs. For a more modern equivalent of these wrappers, the Intel Arc line of graphics cards uses DXVK to translate older DirectX 9 calls to Vulkan, which is a natively supported API.
For this experiment, we will be using a wrapper called "S3Mesa", made by Brian Paul of the Mesa project. Though open-source, this wrapper never made it to a completed state, and is missing certain features such as texture transparency, despite the ViRGE itself supposedly being capable of it. However, this does not affect gameplay much beyond aesthetics.
The S3Mesa wrapper, on a more technical note, translates OpenGL 1.1 calls to a mix of both S3D and DirectX API calls.
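To make the mechanism concrete, here is a toy sketch of the idea (not S3Mesa's actual code): the wrapper DLL exports functions with the same names and calling convention as the real OpenGL32.dll, and each one forwards the work to another API. The backend call here is a hypothetical stand-in for the S3D/DirectX work a real wrapper performs.

#define GL_COLOR_BUFFER_BIT 0x00004000

void d3d_clear_backbuffer(void);   // hypothetical translation-layer backend

// Same name and calling convention as the real OpenGL32.dll entry point,
// so the game loads this DLL and calls into it without knowing.
extern "C" __declspec(dllexport)
void __stdcall glClear(unsigned int mask) {
    if (mask & GL_COLOR_BUFFER_BIT)
        d3d_clear_backbuffer();    // forward the work to the other API
}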
The System
At last, we arrive at the system hardware. As of writing, I am currently benchmarking a plethora of low-end (or otherwise infamous) cards for my "Ultra Nugget Graphics Card Roundup", and so the system itself is likely a liiiiiittle bit overpowered for the lowly ViRGE/DX:
AMD Athlon XP (Palomino) @ 1.14GHz
Shuttle MK32 Socket A motherboard
256MB DDR-400
S3 ViRGE/DX (upgraded to 4MB of video memory)
Windows 98SE
Why Windows 98SE? Because S3 never released 3D-accelerated graphics drivers for non-Windows 9x operating systems in the consumer space.
For Minecraft itself, KernelEX 4.5.2 and Java 6 are installed as well, and an old version of the launcher dating back to early 2013 that I personally refer to as the "Minecraft 1.5 Launcher" is used for compatibility purposes. Also because no launcher that can work on Windows 98 is capable of logging into the authentication servers anymore.
Setting up the game
With Windows 98SE, KernelEX, and Java 6 installed (in that order, of course), we can turn our attention to the game itself. As mentioned before, no launcher to my knowledge that runs on Windows 98 is capable of logging into the auth servers. This results in two additional problems: starting the game itself and downloading game assets.
Using the 1.5 launcher solves the first issue by means of relying on a little thing called the lastlogin file. This is an old mechanism that let the launcher keep players playing offline when disconnected from the internet, but more importantly, unlike the modern launcher's equivalent, it doesn't expire.
And because of that, our login problem is solved by middle-school me's old .minecraft folder backup, from which I've extracted the lastlogin file for use in this experiment.
As for game assets, there is no longer any way to easily download the game files for use on Windows 98SE directly, so I've instead pieced together a folder using that same backup. The most important difference is that instead of a "versions" folder, there is a "bin" folder, where both the natives and the game's jarfile reside.
Now that our .minecraft folder is acquired, take that thing and plop it right down into the Windows folder in Windows 98. Why? Because on Windows 98, the 1.5 launcher ignores the "application data" folder entirely. The launcher itself can go anywhere you'd like, so long as you're using the .exe version and not the .jar version.
Finally, to wrap things up, place the OpenGL to S3D wrapper in the same location as the launcher exe. Make sure it's called OpenGL32.dll!
The Game
You just lost it.
The S3 ViRGE, by my own testing, is capable of running any version of Minecraft from Classic up to and including Indev version in-20100110. However, it is EXTREMELY unstable, and has a tendency to crash mere seconds after loading into a world. This is on top of some minor rendering errors introduced by the aforementioned incomplete state of the S3Mesa wrapper. This video was recorded with Windows ME rather than Windows 98, but this does not impact anything regarding performance or compatibility (and in fact, at least in my own experience, the game is more stable under ME than 98).
Below are the desktop/game settings used in testing:
"Tiny" render distance
Desktop resolution: 640 x 480 (don't fullscreen the game)
Bit depth: 16/24-bit color (32-bit can cause the ViRGE to run out of memory, and 16-bit can cause strange issues on Windows 98)
And last but not least, some gameplay. This came from some scrapped footage originally intended for my "UNGCR" video, and was only intended for personal reference in collecting performance numbers. As such, the audio is muted due to some copyrighted music blasting in the background.
[embedded YouTube video]
Further reading/resources
Vogons Wrapper Project
Original video that this article is based on
VGA Legacy MKIII's ViRGE/DX page
thanks for reading my walking natural disaster of an article kthxbaiiiiiiii
Text
Wish List For A Game Profiler
I want a profiler for game development. No existing profiler currently collects the data I need. No existing profiler displays it in the format I want. No existing profiler filters and aggregates profiling data for games specifically.
I want to know what makes my game lag. Sure, I also care about certain operations taking longer than usual, or about inefficient resource usage in the worker thread. The most important question that no current profiler answers is: In the frames that currently do lag, what is the critical path that makes them take too long? Which function should I optimise first to reduce lag the most?
I know that, with the right profiler, these questions could be answered automatically.
Hybrid Sampling Profiler
My dream profiler would be a hybrid sampling/instrumenting design. It would be a sampling profiler like Austin (https://github.com/P403n1x87/austin), but a handful of key functions would be instrumented in addition to the sampling: displaying a new frame/waiting for vsync, reading inputs, draw calls to the GPU, spawning threads, opening files and sockets, and similar operations should always be tracked. Even if displaying a frame is not a heavy operation, it is still important to measure exactly when it happens, if not how long it takes. If a draw call returns right away, and the real work on the GPU begins immediately, it's still useful to know when the GPU started working. Without knowing exactly when inputs are read, and when a frame is displayed, it is difficult to know whether a frame is lagging. Especially when those operations are fast, they are likely to be missed by a sampling profiler.
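A minimal sketch of the instrumentation half (the event set and names are hypothetical; the point is that each key operation records a timestamp even when it returns immediately):

#include <chrono>
#include <vector>

enum class Event { FramePresented, InputRead, DrawCallIssued, FileOpened };

struct Record {
    Event event;
    std::chrono::steady_clock::time_point when;
};

static std::vector<Record> g_events;  // consumed later by the analysis stage

// Cheap enough to call around every present/input/draw/file operation.
inline void profile_event(Event e) {
    g_events.push_back({e, std::chrono::steady_clock::now()});
}

// Hypothetical call sites in a game loop:
//   profile_event(Event::InputRead);       right after polling input
//   profile_event(Event::FramePresented);  right after swap-buffers/present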
Tracking Other Resources
It would be a good idea to collect CPU core utilisation, GPU utilisation, and memory allocation/usage as well. What does it mean when one thread spends all of its time in that function? Is it idling? Is it busy-waiting? Is it waiting for another thread? Which one?
It would also be nice to know if a thread is waiting for IO. This is probably a "heavy" operation and would slow the game down.
There are many different vendor-specific tools for GPU debugging, some old ones that worked well for OpenGL but are no longer developed, open-source tools that require source code changes in your game, and the newest ones directly from GPU manufacturers that only support DirectX 12 or Vulkan, but no OpenGL or graphics card that was built before 2018. It would probably be better to err on the side of collecting less data and supporting more hardware and graphics APIs.
The profiler should collect enough data to answer questions like: Why is my game lagging even though the CPU is utilised at 60% and the GPU is utilised at 30%? During that function call in the main thread, was the GPU doing something, and were the other cores idling?
Engine/Framework/Scripting Aware
The profiler knows which samples/stack frames are inside gameplay or engine code, native or interpreted code, project-specific or third-party code.
In my experience, it's not particularly useful to know that the code spent 50% of the time in ceval.c, or 40% of the time in SDL_LowerBlit, but that's the level of granularity provided by many profilers.
Instead, the profiler should record interpreted code, and allow the game to set a hint if the game is in turn interpreting code. For example, if there is a dialogue engine, that engine could set a global "interpreting dialogue" flag and a "current conversation file and line" variable based on source maps, and the profiler would record those, instead of stopping at the dialogue interpreter-loop function.
Of course, this feature requires some cooperation from the game engine or scripting language.
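A sketch of what that cooperation might look like from the game's side (all names hypothetical): the interpreter publishes its current script position, and the profiler's sampling thread reads it alongside the native call stack.

#include <atomic>

// Written by the dialogue interpreter, read by the profiler's sampler, so
// samples get attributed to a script position instead of stopping at the
// interpreter-loop function.
static std::atomic<const char*> g_script_file{nullptr};
static std::atomic<int>         g_script_line{0};

void run_dialogue_line(const char* file, int line) {
    g_script_file.store(file);   // position from the engine's source map
    g_script_line.store(line);
    // ... interpret the dialogue line ...
    g_script_file.store(nullptr);
    g_script_line.store(0);
}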
Catching Common Performance Mistakes
With a hybrid sampling/instrumenting profiler that knows about frames or game state update steps, it is possible to instrument many or most "heavy" functions. Maybe this functionality should be turned off by default. If most "heavy functions", for example "parsing a TTF file to create a font object", are instrumented, the profiler can automatically highlight a mistake when the programmer loads a font from disk during every frame, a hundred frames in a row.
This would not be part of the sampling stage, but part of the visualisation/analysis stage.
Filtering for User Experience
If the profiler knows how long a frame takes, and how much time is spent waiting during each frame, we can safely disregard those frames that complete quickly, with some time to spare. The frames that concern us are those that lag, or those that are dropped. For example, imagine a game spends 30% of its CPU time on culling, and 10% on collision detection. You would think to optimise the culling. But what if collision detection takes 1 ms during most frames, culling always takes 8 ms, and whenever the player fires a bullet, the collision detection causes a lag spike? The time spent on culling is not the problem here.
This would probably not be part of the sampling stage, but part of the visualisation/analysis stage. Still, you could use this information to discard âfast enoughâ frames and re-use the memory, and only focus on keeping profiling information from the worst cases.
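A sketch of that discard-or-keep step (the frame budget and the structure are illustrative):

#include <utility>
#include <vector>

struct FrameProfile {
    double duration_ms = 0.0;
    // ... samples and instrumented events collected during the frame ...
};

constexpr double kBudgetMs = 16.7;         // one frame at 60 Hz
static std::vector<FrameProfile> g_worst;  // only lagging frames survive

void on_frame_end(FrameProfile&& frame) {
    if (frame.duration_ms <= kBudgetMs)
        return;                            // fast enough: discard, reuse memory
    g_worst.push_back(std::move(frame));
    // a real profiler might cap this at the N worst frames per code path
}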
Aggregating By Code Paths
This is easier when you don't use an engine, but it can probably also be done if the profiler is "engine-aware". It would require some per-engine custom code though. Instead of saying "The game spent 30% of the time doing vector addition", or smarter, "The game spent 10% of the frames that lagged most in the MobAIRebuildMesh function", I want the profiler to distinguish between game states like "inventory menu", "spell targeting (first person)" or "switching to adjacent area". If the game does not use a data-driven engine, but multiple hand-written game loops, these states can easily be distinguished (but perhaps not labelled) by comparing call stacks: different states with different game loops call the code to update the screen from different places, and different code paths could have completely different performance characteristics, so it makes sense to evaluate them separately.
Because the hypothetical hybrid profiler instruments key functions, enough call stack information to distinguish different code paths is usually available, and the profiler might be able to automatically distinguish between the loading screen, the main menu, and the game world, without any need for the code to give hints to the profiler.
This could also help to keep the memory usage of the profiler down without discarding too much interesting information, by only keeping the 100 worst frames per code path. This way, the profiler can collect performance data on the gameplay without running out of RAM during the loading screen.
In a data-driven engine like Unity, I'd expect everything to happen all the time, on the same, well-optimised code path. But this is not a wish list for a Unity profiler. This is a wish list for a profiler for your own custom game engine, glue code, and dialogue trees.
All I need is a profiler that is a little smarter, that is aware of SDL, OpenGL, Vulkan, and YarnSpinner or Ink. Ideally, I would need somebody else to write it for me.
Text
Mesh topologies: done!
Followers may recall that on Thursday I implemented wireframes and Phong shading in my open-source Vulkan project. Both these features make sense only for 3-D meshes composed of polygons (triangles in my case).
The next big milestone was to support meshes composed of lines. Because of how Vulkan handles line meshes, the additional effort to support other "topologies" (point meshes, triangle-fan meshes, line-strip meshes, and so on) is slight, so I decided to support those as well.
I had a false start where I was using integer codes to represent different topologies. Eventually I realized that defining an "enum" would be a better design, so I did some rework to make it so.
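(The design point, sketched in C++ for consistency with the other examples here, since the engine itself is Java and these names are hypothetical: an enum documents and type-checks what a bare integer code does not, and keeps the mapping to Vulkan's own enum in one place.)

#include <vulkan/vulkan.h>

enum class Topology {
    PointList, LineList, LineStrip, TriangleList, TriangleStrip, TriangleFan
};

// Single point of translation to the Vulkan API's enum.
VkPrimitiveTopology toVk(Topology t) {
    switch (t) {
        case Topology::PointList:     return VK_PRIMITIVE_TOPOLOGY_POINT_LIST;
        case Topology::LineList:      return VK_PRIMITIVE_TOPOLOGY_LINE_LIST;
        case Topology::LineStrip:     return VK_PRIMITIVE_TOPOLOGY_LINE_STRIP;
        case Topology::TriangleList:  return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;
        case Topology::TriangleStrip: return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP;
        case Topology::TriangleFan:   return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN;
    }
    return VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST;  // unreachable for valid input
}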
I achieved the line-mesh milestone earlier today (Monday) at commit ce7a409. No screenshots yet, but I'll post one soon.
In parallel with this effort, I've been doing what I call "reconciliation" between my Vulkan graphics engine and the OpenGL engine that Yanis and I wrote last April. Reconciliation occurs when I have 2 classes that do very similar things, but the code can't be reused in the form of a library. The idea is to make the source code of the 2 versions as similar as possible, so I can easily see the differences. This facilitates porting features and fixes back and forth between the 2 versions.
I'm highly motivated to make my 2 engine repos as similar as possible: not just similar APIs, but also the same class/method/variable names, the same coding style, similar directory structures, and so on. Once I can easily port code back and forth, my progress on the Vulkan engine should accelerate considerably. (That's how the Vulkan project got user-input handling so quickly, for instance.)
The OpenGL engine will also benefit from reconciliation. Already it's gained a model-import package. That's something we never bothered to implement last year. Last month I wrote one for the Vulkan engine (as part of the tutorial) and already the projects are similar enough that I was able to port it over without much difficulty.
#open source#vulkan#java#software development#accomplishments#github#3d graphics#coding#jvm#3d mesh#opengl#polygon#3d model#making progress#work in progress#topology#milestones
Text
(the 'one company' in the Flash case is primarily Apple, although the decision to decisively deprecate and kill it was made across all the browser manufacturers. Apple are also the ones who decided not to let Vulkan and soon OpenGL run on their devices and to have their own graphics API, which leads to the current messy situation where graphics APIs seem to be multiplying endlessly and you have to rely on some kind of abstraction package to transpile between Vulkan, DX12, Metal, WebGPU, OpenGL, ...)
Link
This is both a really interesting history of graphics APIs and a nice discussion of WebGPU that makes it sound a lot more interesting than I would have expected.
Text
C++ and 3D Graphics: A Tutorial on Using OpenGL and OpenCL
Introduction: This tutorial is a comprehensive guide to creating high-performance 3D graphics applications using C++ and the OpenGL and OpenCL APIs. It is designed for developers who want to learn how to build such applications from the ground up. In this tutorial, we will cover the core concepts and…
Text
Honors Project & Dissertation - Generating physically inspired lightning effects in the GPU using parallelization techniques
Finally, time to upload my Honors Project! I really loved working on this, as one of my main interests is physics simulation in graphics programming.
Here's the explanation for this project - which is basically comparing two algorithms for rendering lightning geometry, parallelizing them, and seeing which one is faster.
Keep reading for the explanation! However, if you want the full documentation and breakdown of this project, click here.
Physically based or inspired lightning is not feasible to implement in interactive applications such as games because of the iterative nature of its algorithms, which makes them computationally expensive.
Accurate lightning simulation is described with the Dielectric Breakdown Model (DBM), which involves solving Laplace's equation. Solving it with the conjugate gradient method (CGM) has been proven to be computationally expensive; however, this method has been substituted with a rational function to produce faster results. Additionally, these methods have been tested in single-threaded CPU applications, but there's evidence that the CGM can be optimized if it is computed in parallel on the GPU.
This research attempts to prove whether it's possible to optimize the performance of physically inspired lightning generation using a rational function method on the GPU with the aid of parallel multithreading.
The application is developed in C++ using the Direct3D 11 API. It ports open-source code of the CGM and rational method from OpenGL to D3D11. Testing is carried out using a performance benchmark measuring the computation times of the parallelized rational method using a compute shader against its non-parallelized version and the CGM.
Despite the embarrassingly parallel nature of the algorithm, the parallelized version was 3 times slower than its non-parallelized version, albeit 3 times faster than the CGM. Resources consumed by the rational method increased ninefold compared to its non-parallelized version as well.
In conclusion, the parallelized version proved to be slower due to the CPU-GPU data transfer overhead. A hypothesis for further research is that the use of group shared memory could compensate for this overhead.
Personally I would've loved to have more time to work on this as I believe I could've proved my hypothesis with group shared memory, but I'm not as knowledgeable yet. I need to study more graphics programming!
Text
Core/Memory
- Boost Clock / Memory Speed: 1710 MHz / 14 Gbps
- 6GB GDDR6
- DisplayPort x 3 (v1.4a)
- HDMI x 1 (supports 4K@60Hz as specified in HDMI 2.0b)
TORX Fan 2.0
- Dispersion fan blade: steep curved blade accelerating the airflow
- Traditional fan blade: provides steady airflow to the massive heat sink below
- Double ball bearing: strong and lasting core for years of smooth gaming
Afterburner Overclocking Utility
- OC Scanner: an automated function that finds the highest stable overclock settings
- On Screen Display: provides real-time information on your system's performance
- Predator: in-game video recording
NVIDIA G-SYNC™ and HDR: get smooth, tear-free gameplay at refresh rates up to 240 Hz, plus HDR, and more. This is the ultimate gaming display and the go-to equipment for enthusiast gamers.
GeForce RTX™ VR: by combining advanced VR rendering, real-time ray tracing, and AI, GeForce RTX will take VR to a new level of realism.
Specifications
- Brand: MSI
- Model: GeForce RTX 2060 Ventus GP OC
- Interface: PCI Express x16 3.0
- Chipset Manufacturer: NVIDIA
- GPU: GeForce RTX 2060
- Boost Clock: 1710 MHz
- Stream Processors: 1920
- Memory Clock: 14.0 Gbps
- Memory Size: 6GB
- Memory Interface: 192-bit
- Memory Type: GDDR6
- 3D API: DirectX 12, OpenGL 4.6
- Ports: 1 x HDMI, 3 x DisplayPort 1.4
- Max Resolution: 7680 x 4320
- Cooler: dual fan
- Power Connector: 1 x 8-pin
- HDCP Ready: yes
Note
Not sure if there's a good way to phrase this since I get annoyed at a lot of these kinds of questions, but
Is there a particular reason that you go for OpenGL + Haskell? My heart tells me that those are the worst possible fit (procedural API over the top of a big hidden state machine w/ soft real-time requirements vs a runtime that wants to do pure functions and lazy evaluation). That said, you seem to get to interesting places with some regularity whereas my projects (C/C++ and vulkan usually) tend to die before I get to the cool part, so I'm wondering if there's something to it
i just got frustrated with c-alikes and i really enjoyed aspects of haskell coding. it is objectively a very silly combination, although not as silly as it has been historically given the various improvements in haskell gc over the years.
historically i've used gpipe for haskell rendering, which does some astounding type family wizardry to basically fully-hide the opengl state machine and also let you write shaders in actual haskell (values in the shader monad are actually part of a compositional scripting type that evaluates to glsl code. it's wild.) so it's about as close as you can get to totally ignoring all the opengl-ness of opengl. that being said, uh, that has some problems (zero memoization in generated scripts; very unclear surfacing of real opengl constraints)
also to be fair my projects also tend to die before i get to the cool part, it's just sometimes i manage to get some neat renders out there before.
(right now i've been wanting to jettison the gpipe library in favor of just doing raw opengl right in IO, mostly so i can actually use opengl 4 features that aren't surfaced either in gpipe nor in the OpenGL package, but ofc the first step there would be a whole bunch of low-level data fiddling. but since i've been doing webgl2 + javascript also, uh, the opengl part would mostly end up being exactly the same, so it feels a little less intimidating now. i just really wanna write some wild shader code and to write really wild shader code you do kind of have to directly interface with raw opengl.)
Text
The Evolution of Graphics Technology in PC Games
The evolution of graphics technology in PC games has been one of the key factors driving innovation and richer play experiences. Since their earliest days, game graphics have undergone an impressive transformation, delivering visuals that are ever more realistic and immersive. Here is the journey of graphics technology in PC games.
1. 2D and 8-bit Graphics
In the late 1970s and early 1980s, PC games first appeared with very simple 2D graphics. Games such as Pong and Space Invaders featured pixel-based visuals with a limited color palette. In this period, the focus was on gameplay and basic mechanics rather than graphical quality.
2. The Shift to 3D Graphics
Entering the 1990s, the games industry began shifting to 3D graphics, starting with games such as Doom and Wolfenstein 3D. This technology opened a new era in game design, allowing the creation of more complex and deeper virtual worlds. With polygonal 3D graphics, developers could build more realistic environments, even though they still looked quite rough by today's standards.
3. The Development of Graphics APIs
One important step in the evolution of graphics was the emergence of APIs (Application Programming Interfaces) such as DirectX and OpenGL. These APIs gave developers the tools to optimize game graphics, raise visual quality, and create richer experiences, enabling support for lighting effects, shadows, and smoother textures.
4. Real-Time Rendering and Shaders
As the technology advanced, real-time rendering techniques and shaders became the norm. Games such as Half-Life 2 and Far Cry introduced technologies like vertex shading and pixel shading, enabling more complex and realistic visual effects. This delivered a deeper experience in which players could see lighting and shadows change dynamically during play.
5. Ray Tracing and Heightened Realism
In the last decade, ray tracing has gained attention as a way to achieve more realistic visuals. With the ability to simulate how light interacts with objects in a virtual world, games such as Cyberpunk 2077 and Control use ray tracing to deliver highly detailed lighting, reflections, and shadows. This technology marks a major step in how we think about graphics in games.
6. VR and AR: The Future of Game Graphics
Graphics technology has not stopped at conventional rendering; virtual reality (VR) and augmented reality (AR) are opening new doors in gaming experiences. Games such as Half-Life: Alyx and Beat Saber show how immersive graphics can change the way we interact with game worlds. These experiences require developers to rethink graphics design around user interaction in 3D space.
Conclusion
The evolution of graphics technology in PC games has taken us from simple 2D graphics to deep, realistic virtual worlds. With continuing innovations such as ray tracing and VR, the future of game graphics looks ever brighter and more exciting. These rich visual experiences not only improve gameplay but also change how we perceive and interact with game worlds.