componentplanet · 5 years ago
Text
Deep Space Nine Upscale Project (DS9UP): Technical Goals and FAQ
I’ve been needing to write a sort of “mission statement” and FAQ for this project, and this seems as good a time as any.
The goal of the Deep Space Nine Upscale Project (DS9UP) is to create a version of Star Trek: Deep Space Nine worth watching in the modern era of 4K and HD televisions and monitors. Topaz Video Enhance AI has been a critical part of those efforts to date, but I’ve spent the last month testing AviSynth, StaxRip, Handbrake, and at least a dozen other applications in the process of creating this encode.
Because image quality is intrinsically subjective, I’m not going to claim I can somehow create the “best” version of Deep Space Nine, but I intend to map out multiple paths and settings tweaks that lead to different outcomes. I’ve gone back and specifically focused on the DVD encodes for two reasons:
1). We need every single scrap of data for upscaling (our results bear out the importance of this in several spots)
2). I’m creating a legal route for individuals to upscale a TV show they already own. I will not be creating or distributing any torrents based on my own work. I will be publishing a full tutorial on how to create what I’ve created, once we’ve reached that point.
To date, I’ve used two workstations simultaneously for DS9UP testing: An AMD Threadripper 3990X and an Intel Core i9-10980XE, both equipped with 64GB of RAM and an RTX 2080. I’ve occasionally tapped additional processing power in the form of eight-core Intel and AMD systems. Both systems have been excellent, but the 3990X is particularly good at running many encodes in parallel.
The current encoder preset I’m working with and wrote about earlier this week is codenamed “Rubicon,” after the runabout.
An interesting example of why source quality matters. While the ‘Artemis’ AI upscale algorithm is malfunctioning in both clips, it’s malfunctioning much less when given more source material to work with. Upscaled MKV on left, Rubicon on right.
I have far more compute power available than is typical for a project like this, and I’m using it accordingly.
Before we go further, here’s some surprise Season 7 footage I created.
youtube
Goals and Principles
The final upscaling and filter application process must be as simple as possible, to increase the likelihood people can follow it. No individual scene edits unless absolutely unavoidable.
Upscaling should require as little re-encoding as possible, to reduce source degradation.
When encoding cannot be avoided, re-encode in maximum detail. Storage is cheap and Topaz Video Enhance AI offers no control whatsoever over final output settings. Err on the side of caution.
Create a minimum of two workflows balanced around maximum quality versus sane processing time requirements.
Another example of how the Artemis-LQ preset struggles. It struggles with Rubicon and upscaled MKVs both, but it struggles more with the MKV.
The upscale should rely on the largest amount of free-to-use software possible. Topaz isn’t free to use, but it does include a 30-day free trial.
The non-upscaled video should still improve on the underlying source image quality before upscaling is applied.
When in doubt, encode it, and compare it.
Render all results at near-maximum quality. When there are questions about what maximum quality settings are, encode all of the likely options simultaneously. When combinatorics makes this impossible, choose likely targets based on a close reading of the various filter settings.
Be willing to laugh at some of the ridiculously bad quality encodes you will occasionally create, especially if it takes 1-2 days to create them.
Address slow rendering times by leveraging greater parallelism. If you’re finishing 15-20 encodes per day, it won’t matter if it takes 12-36 hours to finish them.
Encode the entire episode at once, for easier spot comparison of any area.
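The parallelism principle above is simple arithmetic. A trivial sketch, with made-up numbers rather than benchmarks:

```python
# Throughput vs. latency: per-encode render time stops mattering once
# enough encodes run side by side. Numbers here are made up for illustration.
def encodes_per_day(parallel_jobs, hours_per_encode):
    return parallel_jobs * 24 / hours_per_encode

print(encodes_per_day(10, 12))  # 20.0
```

Ten simultaneous 12-hour encodes still finish 20 per day, which is why a Threadripper 3990X earns its keep here.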
Rubicon is not a perfect example of its own goals and principles. Like a lot of first season efforts, it needs further refinement. Currently, it relies on multiple pieces of paid software and the source is encoded more than I like. I’ve also been forced to use Handbrake as an initial ripper rather than MakeMKV due to persistent problems with audio/video muxing. Handbrake has no such issue and I’ve noticed no meaningful quality loss from a Handbrake rip on “Very Slow” with an RF of 2.
There are some nasty dependencies to contend with across applications. AviSynth doesn’t always like manipulating a video after it has been through DaVinci Resolve Studio and Topaz. DaVinci won’t ingest MKV files and doesn’t support MPEG-2 at all.
Creating Rubicon
Currently, Rubicon uses Handbrake for the initial rip, followed by StaxRip 2.0.8.0 as an AviSynth front-end GUI. After processing via AviSynth, I upscale the result in Topaz VEAI. The AviSynth pass creates an intermediate encode I personally call 5Sharp, mostly because “That one encode I like” was wordy.
Odd as it might seem, I’m very pleased with the result of this cloud. Upscaled MKV on left, Rubicon on right.
5Sharp is rather nice, IMO, but it struggles to resolve the judder issues caused by DS9’s party trick of flipping back and forth between 23.976 fps and 29.97 fps. I’ve come up with two methods of resolving this issue — the one currently deployed in Rubicon uses DaVinci Resolve Studio, while another option I’m considering relies solely on AviSynth.
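The core of any VFR-to-CFR conversion is a retiming pass. Purely as an illustration (this is my own toy Python sketch, not what VFRtoCFR, AviSynth, or Resolve actually does), here is why forcing mixed-cadence material onto one constant frame rate has to drop or duplicate frames:

```python
# Illustrative only: retime a variable frame rate stream onto a constant
# frame rate grid by showing, at each output tick, the most recent source
# frame. Mixed 23.976/29.97 input forces dropped or repeated frames.
import bisect

def vfr_to_cfr(src_timestamps_ms, target_fps, duration_ms):
    """Map each CFR output tick to the index of the source frame shown at that time."""
    frame_interval = 1000.0 / target_fps
    mapping = []
    t = 0.0
    while t < duration_ms:
        i = bisect.bisect_right(src_timestamps_ms, t) - 1  # last frame at or before t
        mapping.append(max(i, 0))
        t += frame_interval
    return mapping

# One second of 29.97fps content followed by one second of 23.976fps content:
ts = [i * 1000 / 29.97 for i in range(30)] + \
     [1000 + i * 1000 / 23.976 for i in range(24)]
out = vfr_to_cfr(ts, 23.976, 2000)
print(len(ts), "source frames ->", len(out), "CFR frames")  # 54 source frames -> 48 CFR frames
```

Pulling the 29.97 segments down to 23.976 discards frames; pushing the 23.976 segments up would duplicate them instead, and either way the motion stutters.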
The reason I’ve been a little vague about my workflows isn’t that I’m trying to be coy. It’s because it’s virtually impossible to describe them all without sounding like a lunatic. In the past week, I’ve experimented with the following:
Encoding from VOB files created by DVDDecrypter
Encoding from MKV files created by Handbrake
Encoding from MP4 files created by Handbrake
Encoding from MKV files created by MakeMKV
I’ve encoded the VOB files at 23.976 and 29.97 fps to see the differences, experimented with various ways of extracting MKV timecodes in the hopes of fixing my A/V sync issue when ingesting via MakeMKV (no luck), attempted to use VapourSynth and StaxRip to invoke the VFRtoCFR script (no luck and I don’t speak Python), and experimented with multiple methods of adjusting frame rates in multiple applications. In video editing, doing A before B often produces different results than B before A, so I’ve also experimented with reversing the order of my own tests.
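For anyone chasing the same A/V sync problem, the extracted timecodes themselves are easy to sanity-check in a few lines of Python. The sketch below assumes the v2 timecode format mkvextract produces (a comment header, then one timestamp in milliseconds per line); the cadence classifier is my own quick heuristic, not part of any of these tools:

```python
# Sanity-checking extracted timecodes. Assumes the v2 timecode format:
# a "# timecode format v2" comment header, then one timestamp in
# milliseconds per line. The classifier below is a quick heuristic.
def classify_cadence(lines):
    times = [float(l) for l in lines if not l.startswith("#")]
    durations = [b - a for a, b in zip(times, times[1:])]
    segments = []  # runs of [fps, frame_count]
    for d in durations:
        # nearer to ~41.7ms means film cadence, nearer to ~33.4ms means video
        fps = 23.976 if abs(d - 41.708) < abs(d - 33.367) else 29.97
        if segments and segments[-1][0] == fps:
            segments[-1][1] += 1
        else:
            segments.append([fps, 1])
    return segments

# Five frames of 29.97fps material followed by five frames at 23.976fps:
sample = ["# timecode format v2"] \
       + [str(i * 1000 / 29.97) for i in range(5)] \
       + [str(4000 / 29.97 + (i + 1) * 1000 / 23.976) for i in range(5)]
print(classify_cadence(sample))  # [[29.97, 4], [23.976, 5]]
```

A dump like this makes it obvious where an episode flips cadence, which is exactly the information a naive CFR conversion throws away.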
I’ve run various methods of adjusting frame rates on all of the sources above, to gauge how each one affects the different source rips. I’m not “settled” on using Handbrake for initial ripping in any meaningful way, except that starting with Handbrake gets me aligned audio and video without locking me into either 23.976fps or 29.97fps the way ripping the VOB files currently does.
Are there solutions to these problems? I’m certain there are solutions to these problems. What I don’t want to do is leave stories littered with half-explained workflow questions that represent discarded branches of research.
Now that I’ve finished my most recent massive report, I plan to hook up with some of the other grassroots work being done on this project.
FAQ
Why are you using Handbrake instead of MakeMKV?
I’d love to be using MakeMKV. In fact, I’ve rendered MakeMKV-based source hundreds of times. There are two problems with MakeMKV that I have yet to solve.
1). Misaligned audio/video at the beginning of a stream.
2). How Deep Space Nine’s variable frame rate is handled by many applications.
StaxRip, for example, will attempt to rip a MakeMKV stream into a hybrid constant frame rate (CFR) of 24.66fps, having apparently averaged the 29.97fps content frame rate with the 23.976fps content frame rate. Ripping the VOBs directly is possible — and this solves the audio sync problem — but this also forces the show into all one frame rate or the other.
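That 24.66fps figure is at least self-consistent. Some back-of-the-envelope Python (my arithmetic, not anything StaxRip documents) shows a time-weighted average of the two cadences lands there when roughly 11 percent of the runtime plays at 29.97fps:

```python
# A time-weighted mean of DS9's two cadences (my arithmetic, not StaxRip's):
def mean_fps(fraction_at_2997):
    return 29.97 * fraction_at_2997 + 23.976 * (1 - fraction_at_2997)

# Solve mean_fps(f) == 24.66 for f, the fraction of runtime at 29.97fps:
f = (24.66 - 23.976) / (29.97 - 23.976)
print(round(f, 3))  # 0.114
```

Which fits a show that is mostly film-sourced with occasional video-rate segments.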
Also, Blackmagic’s DaVinci Resolve doesn’t support MPEG-2 or MKV files, which is rather frustrating.
Tool recommendations are welcome.
How Long Does it Take to Encode an Entire Episode?
Honest answer: I’m not sure. It takes between 8.5 and 11 hours just to upscale a DVD source file by 4x. This is sometimes referred to as “4K,” but the final resolution on Rubicon is 2560×1920. It’s roughly a 5MP image. The reason I’m not sure how long it actually takes is that I typically run between 4 and 10 source encodes simultaneously with an upscale in the background.
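The resolution arithmetic, spelled out (this assumes the 4x figure is applied to the effective square-pixel 4:3 DVD frame of 640×480, which is my reading of it):

```python
# Working the numbers (assuming "4x" applies to the effective square-pixel
# 4:3 DVD frame of 640x480, which is my reading of the figure):
base_w, base_h = 640, 480
up_w, up_h = base_w * 4, base_h * 4
megapixels = up_w * up_h / 1e6
print(up_w, up_h, round(megapixels, 2))  # 2560 1920 4.92
```

For comparison, UHD “4K” is 3840×2160, about 8.3MP, so Rubicon sits a bit below that despite the marketing shorthand.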
Right now, I’d say it takes anywhere from 13 – 20 hours to upscale an episode, start to finish. 8-11 hours of that is out of my control. Topaz VEAI takes as long as it takes.
Do You Actually Know What You’re Doing?
I’m still a beginner at this sort of thing. In retrospect, there are easier shows to cut one’s teeth on than Deep Space Nine. My strategy for conquering this problem has been spunk, gumption, and overwhelming amounts of processing power.
Are You Aware You’re Doing It Wrong?
I am exquisitely aware that I am doing it wrong. I’ve been attempting to learn how to upscale and effectively remaster video from scratch, with some help from online and real-life friends. I take to video like a duck to vacuum. If you think you know something that might help, there’s a pretty good chance you do.
I’m not tackling this project because I think I’m somehow immune to the Dunning-Kruger effect. I’m taking it on because DS9 is 27 years old and nobody has done it yet. Paramount has made it clear they aren’t going to. We’ve started to lose the actors who starred on the show.
For the first time in my entire career, the tools to fix problems like this have become available to ordinary people. I and some other groups of people are availing ourselves of them.
You’ll Never Make It as Good as Paramount Could:
This one really isn’t a question, but I hear it regularly enough to make it worth addressing. It is not news to me that Paramount is capable of creating a remastered version of Deep Space Nine that would blow mine out of the water. Here’s a shot they created for the documentary “What We Left Behind” last year:
youtube
And, for comparison, here’s my own version of that clip, rendered in Rubicon — the best footage I’ve assembled to date:
youtube
I know which one you probably prefer. I know which one I prefer. But since Paramount isn’t doing the work, I’ve got to work with what I’ve got.
The goal of the DS9UP isn’t to create a better version of DS9 than Paramount could produce — it’s to create the best version of Deep Space Nine that it’s possible to build (with allowance for individual taste).
Now Read:
Deep Space Nine Upscale Project Season Finale: What We’ve Brought Ahead
Deep Space Nine Upscale Project Update: ‘Sacrifice of Angels’
Upscaling Star Trek: Deep Space Nine Using Topaz Video Enhance AI
from ExtremeTech https://www.extremetech.com/extreme/310224-deep-space-nine-upscale-project-ds9up-technical-goals-and-faq from Blogger http://componentplanet.blogspot.com/2020/05/deep-space-nine-upscale-project-ds9up.html
0 notes
componentplanet · 5 years ago
Text
The Star Trek Tricorder Gets Raspberry Pi’d: Behold the Picorder!
Efforts to build a Star Trek tricorder-like device have been ongoing for at least a decade or two now, with various individuals and organizations contributing their own ideas to the concept. For those of you who aren’t Star Trek fans: A tricorder is a small, handheld device capable of determining various types of information about the local environment. In science fiction terms, this often takes the form of scanning for chronitons, dissipating energy patterns, or Spot, Data’s cat. The real-life efforts to build a tricorder have been a bit more prosaic, but there’s huge potential value in being able to see invisible forces around us.
Naturally, that means somebody had to go and build one out of a Raspberry Pi. Somehow, the name “Picorder” hasn’t been slapped on this thing yet, and while I’m certain ViacomCBS would be all over that branding like a Klingon on blood wine after a grueling Kal’Hyah, it’s a pretty good nickname for a tricorder built from a Raspberry Pi.
Gabe Emerson, of SaveItForParts, has built himself a Picorder with a nifty set of mostly-functioning capabilities.
youtube
The guy gets some props for kicking off a discussion of a Star Trek device with a reference to wormhole travel via SG-1. Official name: The “Save it For Parts Handheld Scanner.” Integrated components include a display, RTL-SDR software-defined radio for picking up radio signals, and a thermal imager built into the top of the package. Local atmosphere, temperature, and pressure sensors were supposed to be installed as well, but due to unspecified problems, those components had to be left out of the (current) final build.
Components that did make it include an Adafruit display, a DHT11/22 sensor for temperature and relative humidity, and a fair bit of home-made modification and rewiring for his own custom project. Obviously, this is a homemade DIY device that’s scarcely going to compete for design dollars in professional contexts, but bit by bit, the idea of a tricorder is becoming a reality.
We already have handheld devices with dedicated cameras and it isn’t hard to imagine fielding a FLIR module. With optical and thermal imaging addressed, let’s integrate a night vision module in there as well. A microphone with the ability to capture a wider range of sound than the human ear can hear would do for the episodes where tricorders are depicted as having sensed something beyond the bounds of human hearing.
This might seem beyond the scope of this story, but during the recent TSMC event, one of the designers they tapped, Butterfly iQ, was discussing their ongoing efforts to build a universal, hand-held ultrasound machine, and the required “Ultrasound on a chip” they had to build in order to bring the machine to life.
Of course, on Star Trek, tricorders were powered by the plot, with a disparate set of capabilities depending on what was called for. The real-life version, however, is increasingly a question of “When will it get easier to put all these components in the same box?” as opposed to “Nobody could ever build a sensor that advanced.” In the long-term, we may find this is one area where our own inventions quickly outstrip the sorts of capabilities late 90s sci-fi envisioned. As impressive as tricorders were, they weren’t as good for content ingestion or review as the devices we already have today.
In any event, as backyard “Picorders” go, this seemed a fun project. There are a number of other backyard projects on SaveItForParts, so check it out if you feel the urge.
Now Read:
DS9 Upscale Project Update: What I’ve Been Working On
Deep Space Nine Upscale Project Season Finale: What We’ve Brought Ahead
Deep Space Nine Upscale Project (DS9UP): Technical Goals and FAQ
from ExtremeTech https://www.extremetech.com/electronics/314218-the-star-trek-tricorder-gets-raspberry-pid-behold-the-picorder from Blogger http://componentplanet.blogspot.com/2020/08/the-star-trek-tricorder-gets-raspberry.html
0 notes
componentplanet · 5 years ago
Text
Modders Are Using AI to Overhaul Old Games Textures, With Gorgeous Results
Update (6/2/2020): This is a story I’m truly glad to return to. Since this ran in December 2018, there’s been a huge surge in game mods with exactly these kinds of upgrades. Deus Ex now has New Vision 2.0a. There’s the Remako mod for Final Fantasy VII, Moguri for Final Fantasy IX, Hexen Neuro x4 to improve that title, and an AI upscale mod for The Witcher. There’s a mod for Vampire the Masquerade: Bloodlines, and one for Sonic Adventure. Fan of the original Red Faction? Grab the upscale here.
The methods behind these AI upscales vary. Many use Topaz VEAI, but some AI mods use ESRGAN or don’t identify the specific AI model. As always, mods reflect the interests (and skill level) of the creator, and quality can vary, but AI upscaling is emerging as a new way to remaster old games without the original creator’s involvement. In many cases, that’s the only way these kinds of projects can even realistically happen. Only a relative handful of classic and beloved titles like Final Fantasy VII get remastered or remade. Skywind has been in development for a long time at this point. While the mod is much more than a simple remaster of Morrowind, you still can’t download and play it. What you can do is download AI-remastered textures and load them into the original game engine, as discussed below.
That’s not a knock at Skywind, which is an incredible and far-ranging mod project that I’m looking forward to. The nature of modding is that it can take an incredibly long time, seeing as nobody is getting paid for it. AI promises to accelerate the work substantially, allowing users to perform upscales that used to require studio-level attention.
The end of this story refers to the hope that we might see a Deep Space Nine upscale at some point. I’ve actually taken on that project myself. Details and rendered scene comparisons on what I’m calling the Deep Space Nine Upscale Project (DS9UP) are available here, while the project FAQ is here.
Original story below:
One of the promises of AI is its ability to enhance pre-existing visual detail rather than requiring the painstaking creation of all-new work. While there have been plenty of technical demos on this topic, including prototypes that actually deliver the ‘Enhance’ function TV shows have been claiming already exists for decades, consumer applications remain fewer and farther between. That could change in the near future, thanks to the potential for using AI to improve graphics in older games.
If you’ve ever been a fan of PC game mods, you’re aware that most mod projects don’t come to fruition, particularly big ones. Huge texture overhauls and update projects for classic titles can work wonders — I’ve been replaying Resident Evil 4 with an HD texture overhaul pack that makes the game look like something far closer to the modern era, even if the lighting is simplistic by current standards. But the amount of work required to update a game in this fashion is enormous. It has to be done carefully and with an eye towards replacing most, if not all, of the content in the game or else the result is a spliced-together trainwreck that satisfies no one. HardOCP recently posted a pair of updates showing how modders are using AI to clean up two popular games — Doom 2 and Morrowind — and the improvements are huge.
In Doom 2’s case, the author, hidfan, used super-resolution texture tools from Nvidia’s GameWorks and Topaz Labs’ AI Gigapixel to create the upscales, then downscaled them again, manually removed AI artifacts, and manually adjusted the transparency masks (according to hidfan, “AI don’t [sic] know what to do with binary Black&White yet.”)
The Morrowind team uses a method called ESRGAN, which stands for Enhanced Super-Resolution Generative Adversarial Networks, a technique for adding detail to initially low-resolution images. The team writes:
By doing it over several passes with the goal of fooling its adverserial [sic] part, it will usually produce an image with more fidelity and realism than past methods. I have upscaled the textures in Morrowind to four times the vanilla resolution using ESRGAN. Below you can compare various models’ results to the original (HR).
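To put “four times the vanilla resolution” in perspective: 4x per axis is 16x the pixel count, and the baseline ESRGAN has to beat is plain pixel repetition, which invents no detail at all. A minimal NumPy sketch of that baseline (mine, not from any of these mods):

```python
# Nearest-neighbor 4x upscale: the "dumb" baseline a super-resolution
# network competes against. Every pixel is simply repeated 4x4 times.
import numpy as np

def upscale_nn(texture, factor=4):
    """Repeat each pixel factor x factor times; no new detail is invented."""
    return np.kron(texture, np.ones((factor, factor), dtype=texture.dtype))

tex = np.arange(16, dtype=np.uint8).reshape(4, 4)  # stand-in 4x4 "texture"
big = upscale_nn(tex)
print(tex.shape, "->", big.shape)  # (4, 4) -> (16, 16)
```

ESRGAN’s whole value is hallucinating plausible high-frequency detail into those repeated blocks instead of leaving them flat.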
Right now, such tools are obviously in their infancy, but there’s real long-term potential here. If AI can increase resolution and improve texture detail, it could allow for remastering games where the original assets are no longer available without painstakingly creating new, updated resources from scratch. If this approach can be extended to video, the results could be enormous. Fans of TV shows like Star Trek: Deep Space 9 and Babylon 5 (including the author) have lamented that there’s no appetite for remastering these series due to ugly issues surrounding rights and the need to recreate the CGI scenes from scratch in the latter and the cost and expense of doing so in the former. Anything that moves the dial in terms of making AI more readily available for users to experiment with in such fashion is an advance in our book.
Feature image from the Morrowind Enhanced Textures mod. 
Now Read:
OpenAI Bots Crush the Best Human Dota 2 Players in the World
Nvidia AI Can Render Complete Urban Environments in Unreal Engine 4
Why AI still sucks in video games — and how DirectX 12 could change it for the better
from ExtremeTech https://www.extremetech.com/gaming/282695-modders-are-using-ai-to-overhaul-old-games-textures-with-gorgeous-results from Blogger http://componentplanet.blogspot.com/2020/06/modders-are-using-ai-to-overhaul-old.html
0 notes