
The thread which sometimes talks about RDNA2

Status
Not open for further replies.
The render is the output from the GPU which you see on your screen. That is the meaning of the word render. You can also render to an image file or a video file. If the output image from the GPU is 4k, then the render is 4k, by the meaning of the word render. To state that the render is not 4k is to violate the law of identity. The render is 4k, thus it is 4k.

https://en.wikipedia.org/wiki/Rendering_(computer_graphics)

When a 3d program renders a frame, we normally refer to what we see on screen as the rendered image. The process of rendering is a black box to us. So if we see a 4k output we call that the rendered image.

Is my 4k display that is upscaling to 4k not showing me a 4k image?
What about CPU-based rendering? Is that not really a render?
Make your mind up, because you seem to be disagreeing with yourself as much as with anyone else.

Post-processing either does or doesn't matter. Where that post-processing occurs makes absolutely no difference in this context.
 
It's post-processing, so yes, it's applied after rasterisation.

That's why it works, and it's how you get a 4K image with 1440P performance: the GPU renders a 1440P image, magnifies that image, and fills in some missing pixels selectively based on instructions from the drivers. For example, a distant power line might not have enough pixels to fill it out completely in an image with 1440 x 2560 lines; that's why you see it as a thin, broken line with bits missing. With 3840 x 2160 lines there might be enough pixels to fill those gaps. Nvidia's server farms hold that information and tell the driver where to put those missing pixels to fill out the power line.

The original image the GPU rendered is still 1440P; that's how you get near-1440P performance at 4K with DLSS (a 1440P image upscaled to a 4K display image).
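As a rough sketch of the arithmetic behind the "near 1440P performance" claim above (the resolutions are just the standard pixel counts; treating the shading-cost ratio as proportional to pixel count is a simplification, since DLSS itself adds some fixed overhead):

```python
# Hypothetical sketch of the pixel-count argument; the numbers are standard
# resolutions, the cost model is an assumption, not a benchmark.

def pixel_count(width, height):
    """Total pixels in a frame at the given resolution."""
    return width * height

internal = pixel_count(2560, 1440)   # resolution the GPU rasterises at
output = pixel_count(3840, 2160)     # resolution sent to the display

# The GPU shades roughly internal/output of the pixels it would at native 4K,
# which is where the "near 1440P performance" claim comes from.
ratio = internal / output
print(f"internal pixels: {internal:,}")   # 3,686,400
print(f"output pixels:   {output:,}")     # 8,294,400
print(f"shading ratio:   {ratio:.2f}")    # 0.44
```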

Again, the GPU renders a 4k image. It does not render a 1440p image to then produce a 4k image later. DLSS is a part of the graphics pipeline, just like TAA is. The output image is 4k, thus the render is 4k.
 
Oh wait, you can make coherent sentences. 8k DLSS is 8k, by the law of identity. The image output is at 8k resolution.

4K or 8K with DLSS is not 4K or 8K; it's a lower resolution upscaled to the screen's native resolution, and when you upscale something the IQ degrades, so DLSS is the process of bringing that image up to scratch. With native 4k the cards are physically rendering the scene in 4k; with DLSS the cards are rendering the scene at a lower resolution.

The output may be 4k/8k, but the rendering of the scene is not 4k/8k.

If you got a 3090 doing 8k without DLSS it would totally tank to unusable framerates.
 
Is my 4k display that is upscaling to 4k not showing me a 4k image?
What about CPU-based rendering? Is that not really a render?
Make your mind up, because you seem to be disagreeing with yourself as much as with anyone else.

Post-processing either does or doesn't matter. Where that post-processing occurs makes absolutely no difference in this context.

There is no point in that argument. You just made up a load of crap and posted it. The meaning of render has been posted. Post-processing is a part of the rendering pipeline when you produce a rendered image.
 
It does fit the description but it's playing semantics.

Exactly, it does fit the description, which is why humbug won't answer the question. No, you've got that backwards. I have explained what render means. I have given all three meanings of the word and shown they can all be applied to the final image. It's not me playing semantics, it's humbug when he argues that the render is the initial image and nothing else.

GPUs use a rendering pipeline, you could theoretically skip a number of those stages and still render an image, display it at 4k and call it a day, but no one would argue that those early stage outputs were what is being discussed here.
That's exactly what humbug is arguing.
 
4K or 8K with DLSS is not 4K or 8K; it's a lower resolution upscaled to the screen's native resolution, and when you upscale something the IQ degrades, so DLSS is the process of bringing that image up to scratch. With native 4k the cards are physically rendering the scene in 4k; with DLSS the cards are rendering the scene at a lower resolution.

The output may be 4k/8k, but the rendering of the scene is not 4k/8k.

If you got a 3090 doing 8k without DLSS it would totally tank to unusable framerates.

It is. If the gpu renders a 4k image, it is a 4k image. Law of identity. To state any different is to violate the law of identity.
 
It is. If the gpu renders a 4k image, it is a 4k image. Law of identity. To state any different is to violate the law of identity.

But with DLSS, the image is rendered at a lower resolution.

You can't say a 2k image blown up to 4k and touched up to look like 4k is 4k.

I think you know very well what's happening with DLSS: the GPU needs to use less of its power when DLSS is enabled, because it's actually rendering fewer pixels.
 
Again, the GPU renders a 4k image. It does not render a 1440p image to then produce a 4k image later. DLSS is a part of the graphics pipeline, just like TAA is. The output image is 4k, thus the render is 4k.

No, again you're using blanket statements for something you do not understand, simply blurting 3+2 = 4 does not make it so.

The GPU draws the lines on the paper and your screen displays those lines; if it's a 4K screen it will display 3840 x 2160 lines, but the lines the GPU drew on the paper are still 1440 x 2560.
 
No, again you're using blanket statements for something you do not understand, simply blurting 3+2 = 4 does not make it so.

The GPU draws the lines on the paper and your screen displays those lines; if it's a 4K screen it will display 3840 x 2160 lines, but the lines the GPU drew on the paper are still 1440 x 2560.

The meaning of the word render refers to the image the GPU produces. No regard is given to how the image was processed by the GPU. If the output image from the GPU is 4k then the rendered image is also 4k. That is what render means; it's you who is trying to redefine the word render to mean some part of the process that happens before the image is rendered.

The rendered image is the end result, not some point in the process.
 
Exactly, it does fit the description, which is why humbug won't answer the question. No, you've got that backwards. I have explained what render means. I have given all three meanings of the word and shown they can all be applied to the final image. It's not me playing semantics, it's humbug when he argues that the render is the initial image and nothing else.

That's exactly what humbug is arguing.
No he's not. He's arguing that post processing, regardless of whether that is on chip before the frame buffer (in the case of dlss) or not, does not mean the GPU is natively rendering a scene at the output resolution, which it isn't.
If the scene is rasterised at 2k then it's a 2k render, upscaled by DLSS to 4k for output purposes. This can still be called a render, as can any image presented to the user, but it's not what is being discussed here.

Not that it really matters; if DLSS produces near-indistinguishable 4k then that's great and should be adopted as an industry standard (which it is being).
If DLSS was offloaded to the CPU and it resulted in a better framerate, would anyone care? Only zx because his identity is founded on nVidia graphics performance.


The meaning of the word render refers to the image the GPU produces. No regard is given to how the image was processed by the GPU. If the output image from the GPU is 4k then the rendered image is also 4k. That is what render means; it's you who is trying to redefine the word render to mean some part of the process that happens before the image is rendered.

The rendered image is the end result, not some point in the process.
No it doesn't, it simply means image synthesis by a computer programme and it doesn't matter what hardware is used to accomplish it. This is why you can have CPU rendering for example.
 
The meaning of the word render refers to the image the GPU produces. No regard is given to how the image was processed by the GPU. If the output image from the GPU is 4k then the rendered image is also 4k. That is what render means; it's you who is trying to redefine the word render to mean some part of the process that happens before the image is rendered.

The rendered image is the end result, not some point in the process.

DLSS 2.0 offers users 3 image quality modes - Quality, Balanced, Performance - that control the game’s internal rendering resolution, with Performance mode enabling up to 4X super resolution (i.e. 1080p → 4K). This means more user choice, and even bigger performance boosts.

The internal rendering resolution with DLSS at 4k display resolution is lower.
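The commonly reported per-axis scale factors for those three modes can be sketched as follows. Treat the exact Quality and Balanced factors as an assumption (only the Performance-mode example, 1080p → 4K, appears in the Nvidia quote above):

```python
# Sketch of how per-axis scale factors map a display resolution to the
# internal render resolution. The Quality/Balanced factors below are the
# widely reported values, not taken from the quote, so treat them as
# assumptions rather than a specification.

SCALE = {
    "Quality": 1.5,      # each axis rendered at 1/1.5 of the output
    "Balanced": 1.72,
    "Performance": 2.0,  # 1/2 per axis -> 4x fewer pixels (1080p -> 4K)
}

def internal_resolution(out_w, out_h, mode):
    s = SCALE[mode]
    return round(out_w / s), round(out_h / s)

for mode in SCALE:
    print(mode, internal_resolution(3840, 2160, mode))
# Performance at a 4K output renders internally at 1920x1080,
# matching the "1080p -> 4K" example in the quote above.
```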
 
TLDR: DLSS 2.0 is the world’s best TAA implementation. It really is an incredible technology and can offer huge performance uplifts (+20-120%) by rendering the game at a lower internal resolution and then upscaling it. It does this while avoiding many of the problems that TAA usually exhibits like ghosting, smearing, and shimmering. While it doesn’t require per-game training, it does require some work from the game developer to implement. If they are already using TAA, the effort is relatively small. Due to its AI architecture and fixed per frame overhead, its benefits are limited at higher fps and it’s more useful at higher resolutions. However, at low fps the performance uplift can be enormous, from 34 to 68 fps in Wolfenstein at 4K+RTX on a 2060.
https://www.reddit.com/r/nvidia/comments/fvgl4w/how_dlss_20_works_for_gamers/

To implement DLSS2, a game designer will need to use Nvidia’s library in place of their native TAA. This library requires as input: the lower resolution rendered frame, the motion vectors, the depth buffer, and the jitter for each frame. It feeds these into the deep learning algorithm and returns a higher resolution image. The game engine will also need to change the jitter of the lower resolution render each frame and use high resolution textures. Finally, the game’s post processing effects, like depth of field and motion blur, will need to be scaled up to run on the higher resolution output from DLSS. These changes are relatively small, especially for a game already using TAA or dynamic resolution. However, they will require work from the developer and cannot be implemented by Nvidia.

DLSS replaces TAA in the graphics pipeline. Post-processing comes afterwards.
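The integration described in the quote above can be sketched as a frame loop. Every name here is invented for illustration; the real integration goes through Nvidia's DLSS library, and the jitter sequence is a toy stand-in:

```python
# Illustrative-only sketch of where a DLSS-style upscaler sits in a frame
# loop, based on the inputs listed in the quote: the low-resolution rendered
# frame, motion vectors, depth buffer, and per-frame jitter.
from dataclasses import dataclass

@dataclass
class Frame:
    color: object          # low-resolution rendered frame
    motion_vectors: object
    depth: object
    jitter: tuple          # sub-pixel camera offset, changed every frame

def render_frame(scene, jitter, internal_res):
    # Rasterise and shade at the lower internal resolution (placeholder).
    return Frame(color=f"{internal_res} image",
                 motion_vectors=..., depth=..., jitter=jitter)

def dlss_upscale(frame, output_res):
    # Stand-in for the library call that replaces the engine's TAA pass
    # and returns a frame at the output resolution.
    return f"{output_res} image"

def game_loop_step(scene, frame_index):
    # The engine must vary the jitter of the low-res render each frame.
    jitter = ((frame_index % 8) / 8.0, (frame_index % 8) / 8.0)
    low_res = render_frame(scene, jitter, (2560, 1440))
    high_res = dlss_upscale(low_res, (3840, 2160))
    # Post-processing (depth of field, motion blur) runs AFTER the upscale,
    # at the higher resolution, as the quote describes.
    return high_res
```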
 
The meaning of the word render refers to the image the GPU produces. No regard is given to how the image was processed by the GPU. If the output image from the GPU is 4k then the rendered image is also 4k. That is what render means; it's you who is trying to redefine the word render to mean some part of the process that happens before the image is rendered.

The rendered image is the end result, not some point in the process.

If the GPU draws 1440 x 2560 lines on a piece of paper and tells the screen to output that image at 3840 x 2160, how many lines are on the piece of paper?

The paper is the render; that's what the GPU does. The screen simply displays that render. If the screen only has 1080 x 1920 lines the paper is still 1440 x 2560, just as if the screen is 4K or 8K the rendered image remains the same.

You're redefining semantics to save face.
 
No he's not. He's arguing that post processing, regardless of whether that is on chip before the frame buffer (in the case of dlss) or not, does not mean the GPU is natively rendering a scene at the output resolution, which it isn't.

Yeah, he is; he has explicitly stated more than once that the output isn't the render, the original scene is. And that is wrong. Any stage of the pipeline can be broken down and its output considered a render. You are agreeing with me but arguing that humbug is correct. He isn't!

No it doesn't, it simply means image synthesis by a computer programme and it doesn't matter what hardware is used to accomplish it. This is why you can have CPU rendering for example.
Now who's playing semantics? We are discussing an image being created by a GPU. In the context of the discussion, zx is correct.
 
DLSS 2.0 offers users 3 image quality modes - Quality, Balanced, Performance - that control the game’s internal rendering resolution, with Performance mode enabling up to 4X super resolution (i.e. 1080p → 4K). This means more user choice, and even bigger performance boosts.

The internal rendering resolution with DLSS at 4k display resolution is lower.

The correct terms, well done. Basically DLSS is a better form of TAA which can also upscale the image as part of the process. You could call DLSS 2.0 the world's best TAA implementation. It does not work like a simple upscaler. DLSS requires information from the game's engine to work: the lower internal resolution rendered frame, the motion vectors, the depth buffer and the jitter for each frame. After that, post-processing takes place, and then we move towards the final render we see.
 
If the GPU draws 1440 x 2560 lines on a piece of paper and tells the screen to output that image at 3840 x 2160, how many lines are on the piece of paper?

The paper is the render; that's what the GPU does. The screen simply displays that render. If the screen only has 1080 x 1920 lines the paper is still 1440 x 2560, just as if the screen is 4K or 8K the rendered image remains the same.

You're redefining semantics to save face.

Now you are conflating GPU and display scaling. That's disingenuous; stop it. If the GPU 'draws' 1440x2560 and that is what the display outputs, then that's the final render. If the display upscales that to 4k, then 4k is the final render, but the display did the upscaling.

The silliest thing about this argument is that you imply the GPU does nothing other than upscaling the image, when you know that's not the case with DLSS. DLSS creates detail where there wasn't any. It fills in the blanks utilising machine learning; it doesn't just use more pixels to render lines.
 
Now you are conflating GPU and display scaling. That's disingenuous; stop it. If the GPU 'draws' 1440x2560 and that is what the display outputs, then that's the final render. If the display upscales that to 4k, then 4k is the final render, but the display did the upscaling.

The silliest thing about this argument is that you imply the GPU does nothing other than upscaling the image, when you know that's not the case with DLSS. DLSS creates detail where there wasn't any. It fills in the blanks utilising machine learning; it doesn't just use more pixels to render lines.

The screen cannot redraw the lines on the paper, no matter what size the GPU tells the screen to display it at. If I turn on Virtual Super Resolution and run my game at 4K, the GPU is rendering that game at 4K; the GPU tells my screen to display it at 1440P because that's its native resolution, and it cannot physically display more lines than that. The rendered image is still 4K because that's what I told the GPU to do.
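The contrast being drawn here, VSR rendering more pixels than the panel can show versus DLSS rendering fewer than the output, can be put in numbers (resolutions are illustrative; the work ratios assume cost scales with pixel count):

```python
# Toy sketch of the distinction in this post: with Virtual Super Resolution
# the GPU renders MORE pixels than the display shows, with DLSS it renders
# FEWER than the output resolution. Illustrative numbers only.

def pixels(res):
    w, h = res
    return w * h

display = (2560, 1440)

# VSR: render at 4K, downscale for a 1440p panel -> full 4K rendering cost.
vsr_render = (3840, 2160)
# DLSS "4K": render at 1440p, upscale to a 4K output.
dlss_render = (2560, 1440)

print(pixels(vsr_render) / pixels(display))        # 2.25x the panel's pixels
print(pixels(dlss_render) / pixels((3840, 2160)))  # ~0.44 of the output's pixels
```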
 