
The thread which sometimes talks about RDNA2

Because the original image was drawn at 1440 x 2560 lines, the upscaled image is still 1440 x 2560 lines.

Doesn't matter how you get there; I've already said that. You're arguing that the original image is the render. That is not the case. The final image is, and that includes any pre/post-processing steps taken to get there. Your argument only works if you change the meaning of the word render, and even if that were acceptable, which it isn't, it's still a crappy argument.
 

The point is that by blowing up the "rendered image" you're not increasing the number of pixels in the render; you're increasing the size of the pixels.
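A quick way to make this disagreement concrete is a nearest-neighbour upscale (a minimal numpy sketch; the 2x factor is for arithmetic simplicity rather than the 1.5x of 1440p-to-4K):

```python
import numpy as np

# A 4x4 "render" upscaled 2x by nearest-neighbour: the output buffer
# holds four times as many pixel values, but each source pixel just
# becomes a 2x2 block of identical values -- no new detail appears.
render = np.arange(16).reshape(4, 4)                   # original low-res render
upscaled = render.repeat(2, axis=0).repeat(2, axis=1)  # blown-up image

print(render.size)               # 16 pixels in the original
print(upscaled.size)             # 64 pixels after upscaling
print(np.unique(upscaled).size)  # still only 16 distinct values
```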
 


In cameras this is the difference between optical zoom and digital zoom.
 
When a 3D program renders a frame, we normally refer to what we see on screen as the rendered image. The process of rendering is a black box to us, so if we see a 4K output we call that the rendered image.

https://en.wikipedia.org/wiki/Rendering_(computer_graphics)

It does not matter what took place within the GPU itself; the output is the render. They are trying to argue that the internal process sets the render's resolution, not the output image. This is false: the render is the output image. If it's 4K then the render is 4K. They have to fix their argument and make it logically sound.

By your logic, anything displayed on a 4K screen using upscaling is by definition a 4K render. What nonsense. We are talking about GPUs here.
The pixel count used in the initial image creation is the render resolution. The image doesn't have to be displayed for it to be rendered. If you unplug your monitor, the image will still be rendered, i.e. generated.
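As a minimal sketch of that last point (assuming nothing about any real graphics API; the toy gradient stands in for an actual scene), "rendering" here just means generating the pixel buffer, and no display is involved:

```python
import numpy as np

# Generate a 2560 x 1440 pixel buffer entirely in memory.
# No monitor, window, or GPU scan-out is involved at any point.
height, width = 1440, 2560
ys, xs = np.mgrid[0:height, 0:width]
frame = ((xs + ys) % 256).astype(np.uint8)  # the "rendered" image

print(frame.shape)  # (1440, 2560) -- the render resolution, display or not
```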
 
And you are wrong. You ARE increasing the number of pixels. It's the number of lines depicted in the render that doesn't increase.

Oh for crying out loud... if I draw 1440 horizontal lines and 2560 vertical lines on a piece of paper, how many intersections are on my piece of paper? If I take a photo of it and blow it up to 4x the size, how many lines are on it?

No matter how much I blow up the image, I still have 3,686,400 intersections on my paper (pixels). By blowing up the paper I'm not increasing the number of lines I have drawn on it; think about this for a few seconds.
 
Yes, that's what I said.

Tell me why the final image that arrives at the monitor doesn't fit the definition of a render.
 

It does fit the description, but that's playing semantics.
GPUs use a rendering pipeline; you could theoretically skip a number of those stages and still render an image, display it at 4K and call it a day, but no one would argue that those early-stage outputs are what is being discussed here.

I've never looked into DLSS; is it applied after rasterisation?

Edit: Wiki says it's an upscaling technique and thus it's not native render resolution. Looks good though, and bring on the MS standard so all games do it.
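For a rough mental model of where such an upscaler sits, here's a sketch of a simplified frame (the stage names and the 1440p/4K split are illustrative assumptions, not any vendor's actual pipeline):

```python
# Each stage paired with the resolution its output is produced at.
STAGES = [
    ("vertex processing",  None),          # geometry; no pixel resolution yet
    ("rasterisation",      (2560, 1440)),  # fragments at the internal resolution
    ("shading",            (2560, 1440)),  # colour each fragment at 1440p
    ("DLSS-style upscale", (3840, 2160)),  # post-rasterisation: 1440p -> 4K
    ("post-processing/UI", (3840, 2160)),  # often composited at output resolution
    ("scan-out",           (3840, 2160)),  # what the monitor actually receives
]

for name, resolution in STAGES:
    print(f"{name:20s} {resolution or 'n/a'}")
```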
 

DLSS == Deep Learning Super Sampling
 

The render is the output from the GPU which you see on your screen. That is the meaning of the word render. You can also render to an image file or a video file. If the output image from the GPU is 4K, then the render is 4K, by the meaning of the word render. To state that the render is not 4K is to violate the law of identity. The render is 4K, therefore it is 4K.

https://en.wikipedia.org/wiki/Rendering_(computer_graphics)

Rendering or image synthesis is the process of generating a photorealistic or non-photorealistic image from a 2D or 3D model by means of a computer program. The resulting image is referred to as the render.
 

So for their second stab at AI upscaling, NVIDIA is taking a different tack. Instead of relying on individual, per-game neural networks, NVIDIA has built a single generic neural network that they are optimizing the hell out of. And to make up for the lack of information that comes from per-game networks, the company is making up for it by integrating real-time motion vector information from the game itself, a fundamental aspect of temporal anti-aliasing (TAA) and similar techniques. The net result is that DLSS 2.0 behaves a lot more like a temporal upscaling solution, which makes it dumber in some ways, but also smarter in others.
https://www.anandtech.com/show/15648/nvidia-intros-dlss-20-adds-motion-vectors

So any game that supports TAA can also support DLSS.
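To make the motion-vector idea concrete, here's a toy TAA-style temporal upscaler in numpy (an assumption-laden sketch; DLSS 2.0's actual network and heuristics aren't public):

```python
import numpy as np

def temporal_upscale(low_res, history, motion, blend=0.9):
    """Reproject last frame's high-res history with the game's motion
    vectors, then blend in the newly rendered low-res samples."""
    h, w = history.shape
    scale = h // low_res.shape[0]

    # Naive spatial upscale of the current frame (a real upscaler would
    # use a trained network, or at least a smarter filter than this).
    current = low_res.repeat(scale, axis=0).repeat(scale, axis=1)

    # Reproject: fetch each output pixel from where it was last frame.
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - motion[..., 0], 0, h - 1)
    src_x = np.clip(xs - motion[..., 1], 0, w - 1)
    reprojected = history[src_y, src_x]

    # Accumulate detail over time: mostly history, some new samples.
    return blend * reprojected + (1 - blend) * current

# Example: accumulate ten 4x4 renders into an 8x8 output, static scene.
history = np.zeros((8, 8))
motion = np.zeros((8, 8, 2), dtype=int)  # zero motion vectors
for _ in range(10):
    history = temporal_upscale(np.random.rand(4, 4), history, motion)
```

The motion-vector input is exactly what a TAA implementation already produces, which is why TAA support implies DLSS 2.0 can be wired in.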
 
Yes, I saw some SKUs for AIBs. However, I'm not sure when they will launch, as nothing has been announced yet.

BTW, MLID really burned NAAF for calling AIBs a paper launch. NAAF had to backtrack and admit that AIBs were launched; he just wasn't able to get one in his region.
17:20-19:50
20:00-22:00 MLID retorts by reminding NAAF that Nvidia actually had a paper launch and had zero stock on launch day. NAAF agreed!
22:15 Nvidia's earnings call suggests that Nvidia sent well over half their GA102 dies to miners.
23:50 NAAF finally admits to stock being in the USA, just not where he is. What a drama queen!!!
24:00 MLID asked NAAF if he had access to the AMD.com numbers. NAAF said yes but didn't want to talk about it. MLID said he would talk about it, and that it is replenished stock. NAAF refused to admit it even though he agreed with MLID. What a joke. This NAAF is a flip-flopper!!!

After this back and forth I unsubbed from NAAF for his FUD.

You don't half amuse me sometimes.

It seems you don't know the context behind that video existing... MLID's previous video got disliked a lot and the comments section was insane, with a lot of people highlighting his arrogance and inability to admit that he and his "sources" were wrong. He also annoyed people in his own Discord for the same reason, resulting in one of his own Discord mods leaving that day (and another with a foot halfway out the door). So he panic-rushed out that video hours later with NaaF to try and address it (that's why it's Broken Silicon 76.5), which didn't turn out well, as it got even more dislikes and people were just as frustrated because he still didn't just admit his error.

There was no EU AIB stock. Microcenter confirmed on their own forums that their US and Canada stores got zero stock, and stock-bot Discords confirmed Newegg and Amazon in the US also got no AIB stock... Then you've got other retailers in the US saying they got triple the 3080 FE+AIB stock compared to 6800 Ref+AIB combined, as they simply got no AIB stock. If that's not an AIB paper launch, I don't know what is.

I also really didn't take NaaF in that video as agreeing with him either; you could see the pain and cringe on his face whenever MLID kept trying to defend his position. Which is pretty much the same sentiment others had in the comments too.

I really love some of MLID's videos (especially his in-depth interviews), and being in the leak game he's going to get things right and things wrong. He's doing himself no favours currently by always trying to be right, then blaming the audience for "not understanding", especially when his own videos and the facts on the ground from retailers themselves don't support his claims. He should just admit it and move on; it's not a big deal. His reluctance to admit his mistake is.
 
DLSS is post-processing, so yes, it's applied after rasterisation.

That's why it works; it's how you get to a 4K image with 1440p performance. The GPU renders a 1440p image, magnifies that image, and fills in some missing pixels selectively based on instructions from the drivers. For example, a distant power line might not have enough pixels to be drawn completely in an image with 1440 x 2560 lines, which is why you see it as a thin line that is broken up, with bits of it missing. With 2160 x 3840 lines it might have enough pixels to fill in those gaps. Nvidia's server farms hold that information and tell the driver where to put those missing pixels to fill out the power line.

The original image the GPU rendered is still 1440p; that's how you get near-1440p performance at 4K with DLSS (a 1440p image upscaled to a 4K display image).
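As a toy illustration of that "broken power line" point (this numpy sketch shows only the sampling problem, not how DLSS actually reconstructs the detail):

```python
import numpy as np

hi_res = np.zeros((8, 8), dtype=int)
hi_res[3, :] = 1             # a thin 1-pixel "power line" at high resolution

low_res = hi_res[::2, ::2]   # render with half the lines in each direction

print(hi_res.sum())   # 8 -- the line exists at high resolution
print(low_res.sum())  # 0 -- it fell between the low-res sample lines
```

A naive upscale of `low_res` cannot bring the line back; filling in exactly that kind of gap is what the trained network is there for.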
 