RDNA 3 rumours Q3/4 2022

Don't AMD have their own DLSS thing?

So far there's nothing to answer DLSS 3.x, which is to be expected. Hopefully FSR will evolve quickly and well enough.

Indeed. All frames will be the real deal rather than fake ones that boost the numbers but do nothing for input latency.

We don't know the actual effect on input latency, and since not everyone plays "competitively" at hundreds of FPS, it probably doesn't matter that much. After all, some people are/were fine playing on a gamepad, even at 30fps, so treating input lag as the deciding factor doesn't hold much weight.
 
Maybe, I don't know what to do... Nvidia just launched a £900 4080 12GB that's really a 4070, so does that make the 4070 a £700 4060?

I would like to hope AMD will seize the opportunity to make Nvidia look daft with much more reasonable pricing, but we have taught AMD that doing so doesn't actually do anything for their market share - all it does is leave margins on the table - so I don't have a lot of hope for that.

My recent GPU history:

GTX 970, £270
GTX 1070, £370
RTX 2070 Super, £480

Now the ##70 class card is £900.

I can't do this anymore....
So you are saying ...

[green-mile-im-tired-boss.gif]

Jensen is disappointed in you.
 
If DLSS 3 takes data from the next frame to fill in the detail on the low-res frame it puts on your screen, how does it do that? Wouldn't it need to render the next frame to get that detail?
I guess it can render the next frame but not display it. Then you add the frame(s) in between and start displaying the frames.

There are settings in certain games/engines like Render Ahead x (I think that was the name - it's probably still in the NVIDIA drivers), where a higher x meant a smoother experience at the cost of some input lag, while lower values meant lower input lag but increased stutter. Perhaps this works in a similar way, but now the hardware has enough power to generate frames between those reference ones to increase smoothness (there was talk that for each "real" pixel, 7 more are generated by AI?).

In a way, that's how v-sync works, if I'm not mistaken: holding back/generating just enough frames to match the display refresh rate, resulting in a slight lag - sometimes it can be bigger, it depends on the game.
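
To put rough numbers on that render-ahead trade-off, here's a minimal sketch in Python (purely illustrative figures, not how the driver setting is actually implemented): each extra queued frame adds roughly one frame-time of input lag while giving the GPU slack to smooth out uneven frame times.

```python
# Illustrative only: assume a steady ~60 fps frame time and that every frame
# sitting in the render-ahead queue delays input by one frame time.
frame_time_ms = 16.7  # hypothetical average frame time

for queued_frames in (1, 2, 3):
    extra_lag_ms = (queued_frames - 1) * frame_time_ms
    print(f"render ahead = {queued_frames}: ~{extra_lag_ms:.0f} ms added input lag")
```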
 
Sad state of affairs really, even more so when you consider that most games now are rubbish - maybe 1-2 a year that are really good, if you're lucky.

Maybe this forum is just full of snobs with much better kit than the average, but it's hard not to see the death of PC gaming if this continues. It's crazy that the £700-ish a 1080 Ti cost now seems sensible, with high end now more than double that.
 
I guess it can render the next frame but not display it. Then you add the frame(s) in between and start displaying the frames.

There are settings in certain games/engines like Render Ahead x (I think that was the name - it's probably still in the NVIDIA drivers), where a higher x meant a smoother experience at the cost of some input lag, while lower values meant lower input lag but increased stutter. Perhaps this works in a similar way, but now the hardware has enough power to generate frames between those reference ones to increase smoothness (there was talk that for each "real" pixel, 7 more are generated by AI?).

In a way, that's how v-sync works, if I'm not mistaken: holding back/generating just enough frames to match the display refresh rate, resulting in a slight lag - sometimes it can be bigger, it depends on the game.

The point is that it would be using GPU cycles to render a frame it's not displaying. Displaying the frame costs nothing; rendering it costs GPU cycles, the same as if it were putting that image on your screen. It's rendering a higher-res image to gather data from, not to display on the screen.

So does Jensen's explanation of what's going on here make any sense to you? It doesn't to me.
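
To put toy numbers on that cost argument (all figures invented - this isn't how Nvidia describes it): the expense is in rendering a frame, not in presenting it, so any frame the GPU fully renders purely to pull data from still eats a whole frame's worth of GPU time.

```python
# Hypothetical costs: rendering dominates, presenting a finished frame is ~free.
render_ms = 10.0    # assumed GPU time to render one frame
present_ms = 0.1    # assumed cost to hand a finished frame to the display

frames_rendered = 100
frames_displayed = 50   # suppose half are only used as data and never shown

gpu_cost_ms = frames_rendered * render_ms + frames_displayed * present_ms
print(f"total GPU cost ≈ {gpu_cost_ms:.0f} ms - dominated by rendering, "
      "no matter how many of those frames actually reach the screen")
```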
 
Sad state of affairs really, even more so when you consider that most games now are rubbish - maybe 1-2 a year that are really good, if you're lucky.

Maybe this forum is just full of snobs with much better kit than the average, but it's hard not to see the death of PC gaming if this continues. It's crazy that the £700-ish a 1080 Ti cost now seems sensible, with high end now more than double that.
I'll do one better: people are paying this money and then waiting for games released on the PS5 to arrive five years later, while many can get those games now - they look great, perform great and are on discount lol
 
Love the size of my MBA 6950 XT - hoping the 7900 is a similar form factor, for the reference card at least. Judging from the picture they showed before, it looks like an evolution of the current design, so hopefully the same size!

My 6950 fits perfectly in a circa 20L SFF case (Tecware Fusion), whereas something like a 4090 just won't fit - unless water cooled.
 
Maybe, I don't know what to do... Nvidia just launched a £900 4080 12GB that's really a 4070, so does that make the 4070 a £700 4060?

I would like to hope AMD will seize the opportunity to make Nvidia look daft with much more reasonable pricing, but we have taught AMD that doing so doesn't actually do anything for their market share - all it does is leave margins on the table - so I don't have a lot of hope for that.

My recent GPU history:

GTX 970, £270
GTX 1070, £370
5700XT
RTX 2070 Super, £480


Now the ##70 class card is £900.

I can't do this anymore....

FTFY


:D
 
Maybe, I don't know what to do... Nvidia just launched a £900 4080 12GB that's really a 4070, so does that make the 4070 a £700 4060?

I would like to hope AMD will seize the opportunity to make Nvidia look daft with much more reasonable pricing, but we have taught AMD that doing so doesn't actually do anything for their market share - all it does is leave margins on the table - so I don't have a lot of hope for that.

My recent GPU history:

GTX 970, £270
GTX 1070, £370
RTX 2070 Super, £480

Now the ##70 class card is £900.

I said **** it and got an RTX 4080 12GB on pre-order

The optimist in me FTFY both.
 
The point is that it would be using GPU cycles to render a frame it's not displaying. Displaying the frame costs nothing; rendering it costs GPU cycles, the same as if it were putting that image on your screen. It's rendering a higher-res image to gather data from, not to display on the screen.

So does Jensen's explanation of what's going on here make any sense to you? It doesn't to me.
From memory: the AI is trained on supercomputers using high-res images - that was/is (?) the model for the current versions. Using that, it can accurately "guess" how to fill in the image properly when it upscales it.
So basically everything is the same up to that point. You have frames 1 and 2, but you hold them in VRAM; you don't display them yet. Using their new tech, they can quickly calculate and add the extra frame needed to fit between the original frames 1 and 2. For every "real" pixel of the frame, 7 other pixels are "guessed" by the GPU, and this "guessing" is far less of a performance hog than rendering the entire frame would be.

So, since the magic from DLSS is already working and we can see it in action in many games, the other part of the magic is creating the extra frame that is slotted in between the other two. Obviously I don't know how well that works, but I'm guessing pretty well, since DLSS has only got better with each iteration.
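
For what it's worth, here's how I read that sequencing, as a small Python sketch (my interpretation, not NVIDIA's documented pipeline; the 4x-upscale and one-generated-frame-per-real-frame figures are assumptions used only to reproduce the "7 of 8 pixels" claim):

```python
# Interleave a generated frame between each pair of fully rendered frames that
# are being held back in VRAM; only then is the sequence presented.
def present_with_frame_generation(rendered):
    shown = []
    for a, b in zip(rendered, rendered[1:]):
        shown.append(a)                  # real frame
        shown.append(f"gen({a},{b})")    # frame synthesised from its neighbours
    shown.append(rendered[-1])
    return shown

print(present_with_frame_generation(["F1", "F2", "F3"]))
# ['F1', 'gen(F1,F2)', 'F2', 'gen(F2,F3)', 'F3']

# Pixel bookkeeping behind "1 real pixel : 7 generated", assuming quarter-
# resolution rendering (4x upscale) plus one generated frame per real frame.
rendered_fraction = (1 / 4) * (1 / 2)
print(f"rendered: {rendered_fraction:.3f} of displayed pixels, "
      f"generated: {1 - rendered_fraction:.3f} (7 of every 8)")
```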

In terms of latency, I'd say it's more about the overall latency: mouse, computer (with all its hardware and software bits), plus display. Each naturally adds its own lag.
Let's say you have 120-130ms for a given scenario from the moment you press the mouse button until the frame is fully rendered on screen. Of that, 50ms is down to the GPU and drivers. Since that's not a hard wall and is subject to improvement (Reflex already does this, and I believe AMD has something similar), you cut that 50ms to 40ms. If the time needed to generate the extra frame is 10ms, then the latency is the same. If it's less, then you have a gain. If it's more - say 20ms - then you go from the original 50ms to 60ms; if it's 30ms, then it's 70ms, and so on. But the new overall latency would be 130-140ms, etc., so it's potentially irrelevant.
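
Writing that arithmetic out explicitly with the same made-up numbers (every value here is hypothetical, just restating the paragraph above):

```python
total_latency_ms = 125    # assumed click-to-photon latency for the whole chain
gpu_share_ms = 50         # assumed portion attributed to GPU + driver
reflex_saving_ms = 10     # assumed Reflex/anti-lag style reduction (50 -> 40)

for frame_gen_cost_ms in (10, 20, 30):
    new_gpu_share = gpu_share_ms - reflex_saving_ms + frame_gen_cost_ms
    new_total = total_latency_ms - gpu_share_ms + new_gpu_share
    print(f"frame generation costs {frame_gen_cost_ms} ms -> GPU share "
          f"{new_gpu_share} ms, end-to-end ≈ {new_total} ms "
          f"({new_total - total_latency_ms:+d} ms overall)")
```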

Granted, until this is fully tested (for overall latency: mouse to PC to display), we can only guess. But if someone only half-tests it (just the GPU and maybe the driver latency), then it can give a false negative.

I'm hoping AMD can do something similar with FSR sooner rather than later. Having something open and available to all is indeed a much better option.
 