
Nvidia DLSS 5 months on: a win or fail?

Not sure if anyone else has seen this Q&A session with Andrew Edelsten, technical director of deep learning at NVIDIA: https://www.nvidia.com/en-us/geforce/news/nvidia-dlss-your-questions-answered/

It clarifies some of the questions around how DLSS works, but there's an interesting point here:

Q: How does DLSS work?

A: The DLSS team first extracts many aliased frames from the target game, and then for each one we generate a matching “perfect frame” using either super-sampling or accumulation rendering. These paired frames are fed to NVIDIA’s supercomputer. The supercomputer trains the DLSS model to recognize aliased inputs and generate high quality anti-aliased images that match the “perfect frame” as closely as possible. We then repeat the process, but this time we train the model to generate additional pixels rather than applying AA. This has the effect of increasing the resolution of the input. Combining both techniques enables the GPU to render the full monitor resolution at higher frame rates.

Based on that, it sounds like NVIDIA has to run the game through their 'supercomputer' to generate whatever dataset is required for the tensor cores to do their job.
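As a rough illustration of the paired-frame idea described in the Q&A, here's a toy sketch. This is emphatically not NVIDIA's pipeline; a linear least-squares fit on tiny flattened "frames" stands in for the deep model, and the "aliasing" is modelled as an invented linear blur, purely to show the supervised aliased-input / perfect-target setup:

```python
# Toy sketch of paired-frame training: given many (aliased, "perfect")
# frame pairs, fit a model that maps one to the other.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical degradation: "aliasing" modelled as a fixed linear blur
# (illustrative assumption only, not NVIDIA's actual process).
n_pixels = 16
blur = np.eye(n_pixels) * 0.7 + rng.normal(0, 0.05, (n_pixels, n_pixels))

perfect_frames = rng.uniform(0, 1, (200, n_pixels))   # "super-sampled" targets
aliased_frames = perfect_frames @ blur.T              # what the game renders

# "Training": find W minimising ||aliased @ W - perfect||^2.
W, *_ = np.linalg.lstsq(aliased_frames, perfect_frames, rcond=None)

# "Inference" on a new aliased frame: reconstruct the clean image.
test_perfect = rng.uniform(0, 1, n_pixels)
test_aliased = blur @ test_perfect
reconstructed = test_aliased @ W

print(np.abs(reconstructed - test_perfect).max())  # tiny reconstruction error
```

The point of the sketch is the data flow: the "model" never sees the game's code, only pairs of rendered frames, which is consistent with the games having to be run through NVIDIA's training farm first.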

If that's the case, it sounds as though there will be some kind of submission process for developers to have their game 'trained'. If so, will this have an impact on the uptake of DLSS in future titles?

There's a lot of work involved in this. I didn't realise that it's all fed into their supercomputer, which then produces whatever the tensor cores need to do their job.

Does that mean it will have to be tweaked for every game to get it right? And what about all the different configurations of people's setups?
 
What was the point in DLSS?

It's a feature that marketing can sell cards on. Look even at this forum, which is made up of mostly knowledgeable folk (compared to the mass market), and you'll find that in one of every two posts recommending an RTX card, DLSS is the reason. Reddit is full of people doing the same. That's no accident. Now think of people who can't even remember the name of the cards beyond maybe 'Nvidia RTX GPU'.

Make no mistake, DLSS is a sales win for Nvidia.
 



:D And to think for only another £1000 I could have had DLSS!
 
Does that mean it will have to be tweaked for every game to get it right? And what about all the different configurations of people's setups?

Reading the QA, it's apparent that yes, DLSS works on a per-game basis, each of which needs to be 'processed' by NVIDIA before it can be enabled.

I wondered how developers were going to do the training, but it reads as if they don't do it at all; NVIDIA does that part on their behalf.

If that really is the case, it opens up a number of other questions:

* Is this a chargeable service?
* How long does the process take?
* What technical requirements must your game support to make it compatible with NVIDIA's 'deep learning' training?
* How long does it take to implement whatever output NVIDIA generates?

I can't look it up right now, but there's an interesting interview on YouTube with Jensen Huang giving a talk to some students majoring in business/entrepreneurship. He discusses the introduction of programmable shaders on GeForce 3(?) and mentions it was a huge gamble that could have sunk the company if it didn't work; perhaps we're seeing something similar with Turing.

One theory is that NVIDIA has such a performance lead at the top end that it can afford to pour resources into developing RTX without losing the performance crown to AMD or anyone else, at least not in one product cycle.

Rightly or wrongly, NVIDIA is trying to influence the future of the consumer graphics market with this new technology and vision of a ray-traced future, and it's only in the fullness of time that we'll get to see how that play pans out.
 
Lol.
Reduce the render scale to 78% and you get better image quality at the same performance.
All GPUs can do this.
What was the point of DLSS?

Did anyone at Nvidia question the QC?

That's the real problem here. You can simply lower render resolution and get better IQ at the same performance.

Right now you're just making your GPU do more work for a lower quality image.
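For anyone wondering what a 78% render scale actually means in pixels: it's a linear scale, so the pixel count drops to 0.78² ≈ 61% of native. For comparison, DLSS at 4K reportedly renders from 1440p internally, which is only ~44% of the native pixel count. A quick bit of arithmetic (resolutions are the usual 4K/1440p figures; the 78% comes from the comparison above):

```python
# Pixel-count arithmetic behind the "just lower the render scale" argument.
def pixels(width: int, height: int, scale: float = 1.0) -> int:
    """Pixels rendered at a given linear resolution scale."""
    return round(width * scale) * round(height * scale)

native = pixels(3840, 2160)            # 4K native: 8,294,400 px
scaled = pixels(3840, 2160, 0.78)      # 78% render scale: ~61% of native
dlss_internal = pixels(2560, 1440)     # 1440p, what DLSS 4K upscales from

print(scaled / native)                 # ~0.61
print(dlss_internal / native)          # ~0.44
```

So the comparison isn't even between equal workloads: DLSS starts from fewer pixels than a 78% render scale does, which is why the quality comparison is so unflattering.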

A good, to the point, video by HUB with lots of evidence to back it up.
 
Reading the QA, it's apparent that yes, DLSS works on a per-game basis, each of which needs to be 'processed' by NVIDIA before it can be enabled.

I wondered how developers were going to do the training, but it reads as if they don't do it at all; NVIDIA does that part on their behalf.

If that really is the case, it opens up a number of other questions:

* Is this a chargeable service?
* How long does the process take?
* What technical requirements must your game support to make it compatible with NVIDIA's 'deep learning' training?
* How long does it take to implement whatever output NVIDIA generates?


In theory the developers could train the models, but you need a fairly large compute farm to do the training. The training itself is done using hundreds of Volta V100 GPUs that Nvidia hosts. Processing time is likely between a few hours and a few days, but data prep and QA likely take another week.

Nvidia are still developing the technology. It isn't linked to specific hardware features, but leverages the Tensor cores in order to get high performance. Nvidia could completely change the entire software architecture and algorithms used as they continue their research. This is why it isn't really made available to developers at this time. Not least, game developers are not machine learning experts and don't possess the resources to research their own super-scaling generative networks.
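To put "hundreds of hosted GPUs for hours to days" in perspective, here's a back-of-envelope GPU-hours figure. All numbers are illustrative guesses based on the post above, not Nvidia's:

```python
# Rough scale of the per-game training job described above.
def gpu_hours(num_gpus: int, wall_clock_hours: float) -> float:
    """Total GPU-hours consumed by a training run."""
    return num_gpus * wall_clock_hours

low = gpu_hours(200, 6)      # "hundreds of GPUs" for "a few hours"
high = gpu_hours(400, 72)    # same, for "a few days"
print(low, high)             # 1200.0 28800.0
```

Even the low end is well beyond what a typical game studio keeps idle in-house, which supports the point about Nvidia hosting the training themselves.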
 
I'm absolutely an 'Nvidia card, have my money' type. In a capitalist way, that's my choice. Generally, the people who say what you said are the ones who wish they had the money to throw at it, but unfortunately stupidity and decently paid jobs don't go hand in hand.

I feel that you are genuinely ***** at Nvidia, but I think that's a little out of order, no?

It's akin to me saying, "I would have thought the people that have the money also have the brains, and wouldn't buy said card because of x, y, z until xyz is proven to work."
 
But by the time their "AI" has learned enough to improve the image quality, most people will have finished the game and moved on.
True, maybe by the time the next-gen stuff comes out they will have sped up the process. Hopefully really improved it, so the AI is actually doing something...
 
But by the time their "AI" has learned enough to improve the image quality, most people will have finished the game and moved on.


Only if the developer doesn't send an executable before the game's release. In reality they likely will send an alpha release a few months beforehand.
 
I just change the graphics options for quality/frame rate balance, if I don't like the look of DLSS I'd disable it. I'm DLSS agnostic. Tbh I don't have an RTX card yet so don't think I qualify to comment.. ;)
 
They have to send their source code. Most devs will tell them to get lost.
That's just silly IMO :D. Someone said that before, and at the time I said I had access to the source code for very complex computational software that other companies would pay a fortune for. Right now I have access to the entire source code for a healthcare system rolled out to an entire country.
Point is, software houses and NV would ensure the correct contracts are in place, with careful controls over who can access it. Besides, NV are not in the business of developing games.
All code added by NV developers would be reviewed by the game's software house too.
NV are apparently working closely with software houses to get RT and DLSS into all new games, obviously the big expensive titles, not the £4.99 jobbies on Steam :D.
 
..that will have RTX and DLSS.

The tech isn't a fail, it's just a trade-off between fps and quality. It's something that can be turned off; it's not like it's forced on and can't be disabled, which would be a failure. It's down to how the tech is implemented in a game. I recently got a 1070 and played Deus Ex: Mankind Divided, and I had to tweak some settings down in the options to get smooth gameplay. That's not a new concept, it happens a lot: tweak down to improve fps, or push the slider to max and take an fps hit. DLSS != a miracle :) I'm sorry if you were expecting one.

It’s forced into the price though.
 
I feel that you are genuinely ***** at Nvidia, but I think that's a little out of order, no?

It's akin to me saying, "I would have thought the people that have the money also have the brains, and wouldn't buy said card because of x, y, z until xyz is proven to work."

Yeah, you're right, I shouldn't have been so strong.
 