
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
Is it still that Eastcoast person banging on about bandwidth? Just put them on ignore; they won't stop going on and on and won't listen to reason.

It can't matter that much. My Radeon VII has over 1TB/s of bandwidth, and at 720p in one benchmark I ran it was getting owned by almost everything; even an RX 570 scored better in that 720p workload. :D So the card's bandwidth alone means little, imo.

Sorry I meant to say bandwidth > everything because I have lots.
 
I think that is where it will be, personally. Maybe Nvidia could try a larger GPU, but I wonder if it would be a limited release then.

505mm2 isn't the "small die" the "big Navi" rumours keep claiming, and it would make it the largest AMD GPU ever made (Vega 10, according to TPU, is 495mm2). This places it between GP102 and GK110/GM200 in die area.

I think you need to ignore Turing, as it was a large die made on an oldish process with tweaked technical specifications, which allowed for large chips. If you look at the last 10 years of Nvidia's "large GPUs" on a new node, they have been between 470mm2 and 600mm2. The problem with larger dies is that you need to start disabling large sections of the GPU to keep yields and power consumption in check.

For example, the GA100 is just over 800mm2 in area. It has 8192 shaders, but the A100 is launching with under 7000 shaders, and a TDP of 400W instead of its predecessor's 300W:
https://videocardz.com/press-release/nvidia-announces-ampere-ga100-gpu

The question is not whether the top gaming Ampere GPU is larger, but how much larger it will be. If it's 650mm2 and has similar yield and defect rates to the 500mm2 AMD GPU, then it might be measurably faster. But what happens if that 650mm2 GPU has far higher defect rates and Nvidia needs to cut it down more? I'll give you an example of this: Fermi. The GF100 had to be cut down and wasn't clocked as high as it should have been, as it was difficult to manufacture. ATI Cypress was 63% of the die area of the GF100, but the GTX 480 was only 5% to 10% faster overall despite more memory bandwidth and VRAM:
https://www.techpowerup.com/review/nvidia-geforce-gtx-480-fermi/32.html

Now look at the fixed GF110, once yields etc. got better. In the end, most of these things will be determined by uarch efficiency. We found that AMD started to lose scaling for various reasons, so we need to see how well Ampere and RDNA2 scale as well. Then we need to see how well features are supported by games. Nvidia is traditionally stronger at this, but RDNA2 is in consoles, which should help AMD, unlike Vega, which was neither here nor there.
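The yield point above can be made concrete with the standard Poisson die-yield model, where the fraction of defect-free dice falls exponentially with die area at a given defect density. A minimal sketch, with the 0.2 defects/cm2 figure being an assumption purely for illustration:

```python
import math

def die_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: fraction of defect-free dice, Y = exp(-D * A)."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

# Assumed defect density, for illustration only
D = 0.2
for area in (500, 650, 800):
    print(f"{area}mm2 die: ~{die_yield(area, D):.0%} defect-free")
# 500mm2 die: ~37% defect-free
# 650mm2 die: ~27% defect-free
# 800mm2 die: ~20% defect-free
```

This is also why very large dies usually ship with sections disabled (as with GA100/A100 above): a die with a few defects in its shader arrays can still be sold as a cut-down part.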

Your post is basically a very long way of saying what I said: AMD needs a large die to compete. I reckoned it would be 500mm2 plus, and the first line of your post agrees with that. And 500mm2 on 7nm is a very big die; like I said earlier, for reference, the 2080 Ti would be around 471mm2 on a 7nm process.
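The 471mm2 figure above amounts to dividing TU102's 754mm2 (12nm) die area by an assumed process density uplift. A rough sketch, with the ~1.6x uplift being an assumption for illustration (real chips scale worse, since analog and I/O shrink little):

```python
def scaled_area_mm2(area_mm2: float, density_uplift: float) -> float:
    """Naive full-chip scaling: same transistor count on a denser process."""
    return area_mm2 / density_uplift

# TU102 (2080 Ti) is 754mm2 on 12nm; assume a ~1.6x density uplift to 7nm
print(round(scaled_area_mm2(754, 1.6)))  # 471
```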

It's going to be expensive. That's the point: it's not going to be a small, cheap die like Eastcoasthandle suggests. And if it is a small, cheap die, then it won't compete with the 3080 Ti in any way, shape or form.

So, if you are going to reply, just answer the question: do you think AMD will be able to compete with the 3080 Ti with a small, cheap die, yes or no?

Read my answer again. You seem to be ignoring the last part of what I said on purpose.

I kept asking about yields. You just brushed it under the carpet. I am asking you again: IF AMD is making a 500mm2 GPU, then how big a GPU will Nvidia need to make to "beat" AMD? What is the definition of small?

If the GA102 is 700mm2, then big Navi is a "small" die. You seem not to understand this. If Nvidia has bigger problems making a larger GPU in volume, then how does that work out? You should know by now that die size does not mean an instant win for Nvidia or AMD at all.

How big a GPU will Nvidia need to beat a 500mm2 RDNA2 GPU? Will it be smaller or bigger? If bigger, how much bigger? And if it's bigger, will that mean poorer yields and more of the GPU being cut down? ATM we have no clue how RDNA2 and Ampere stack up in actual uarch performance and scaling, let alone yields.

So, if you are going to reply, just answer the question.
 
DX12 Ultimate is a console API ported to PC; it's designed first and foremost for the RDNA 2 architecture. The only input Nvidia has is how it will work on their own GPUs; AMD will work with Microsoft on how it will work on theirs.

Ray tracing has nothing to do with Nvidia. It's agnostic.

Sorry, I wasn't on the forums all weekend and am only catching up on stuff now.

Why do you keep telling me that Ray Tracing is agnostic? I have never claimed otherwise.

DirectX 12 Ultimate is a unifying API. The Xbox Series X is basically a PC. Do you think for one second that Nvidia would not have had as much input into its development as AMD?

Getting back to ray tracing being agnostic: do you think that AMD and Microsoft (and Nvidia too) learned nothing from the last couple of years that ray tracing has been out on PC? I am sure lessons were learned. And since the Xbox Series X is just a PC and is going to be using an updated version of DXR, AMD can use that knowledge to make the Xbox Series X's ray tracing better. They even filed a ray tracing patent in June/July last year with changes from the one they submitted in December 2017.

And that leads to another point: AMD's ray tracing. It seems from those patents, as well as the video that @Calin Banc posted, that AMD's solution does contain hardware elements.

http://www.freepatentsonline.com/20190197761.pdf
 

Nope, because it's waffle. Answer the question I asked.
 

Nope, because you are talking waffle. Answer the question I asked.
 
LOL what are you like 12?
So instead of answering the questions asked, you start trying to attack people. I asked you some technical questions about your stance, and you refused to answer, because you couldn't, and then started attacking people.

Edit!!

Also, thank you for admitting you are acting like a 12-year-old. It seems you find your language befitting of a child.

You can't define what a "small die" is. Small relative to what? And how can we tell from that measurement until we see both uarchs in action? I told you repeatedly: uarch efficiency and ease of manufacture are more important. History tells us this. R300 and Turing were smaller dies than their competitors' products but beat them; Fermi was significantly bigger than Evergreen, but yields were so terrible it could barely pip past it.

You just spout terms, but when questioned on them you have no answers. I noticed quite a few others doing the same with you and giving up.

All you do is deflect by making snide remarks. Continue onwards.

Second Edit!!

Looking at my original posts, I was talking to TheRealDeal anyway, not you:
https://forums.overclockers.co.uk/posts/33638471/

He actually agreed with what I said, and you had to come and "prove me wrong". You were so busy arguing with Eastcoast that you got me involved in an argument I didn't even want to be part of.

Since you have nearly double the posts in this thread, I will let you continue to argue, and make snide remarks, with others.
 

Attacking you? You acted like a child.

EDIT: And you continue to act like a child.
 
AdoredTV admits to being trolled. That's not a jibe at him; at least he's man enough to admit it.

My opinion... the 2.7GHz clock speed is fake, could be 2.3GHz (the PS5 is 2.23GHz); the 20Gbps GDDR6 is obviously fake, GDDR6 is 14 to 16Gbps; the 512-bit bus... don't know.
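For what it's worth, peak memory bandwidth follows directly from bus width and per-pin data rate, so the rumoured numbers can be sanity-checked. A quick sketch (the configurations are illustrative):

```python
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bits / 8 bits per byte) * per-pin Gbps."""
    return bus_width_bits / 8 * gbps_per_pin

print(peak_bandwidth_gbs(256, 14.0))  # 448.0 GB/s (a typical 256-bit GDDR6 card)
print(peak_bandwidth_gbs(512, 16.0))  # 1024.0 GB/s (the rumoured 512-bit bus at 16Gbps)
```

This is also how the Radeon VII mentioned earlier gets its ~1TB/s: a 4096-bit HBM2 bus at 2Gbps per pin gives the same 1024 GB/s.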

The rest is probably legit.

 


Well, I wasn't expecting this video to be accurate anyway. He gets things wrong more often than right. But at least he does come back and update his theories as he finds out new information.
 

He got Turing right, he got Zen and Zen 2 right, looks like he got Zen 2 refresh right.... and a couple others.

One can't always be right but he's a better source than WCCF.
 

Well, he got Turing right eventually; he had several backtracks along the way.

My nan is a better source than WCCF :p
 
So instead of answering the questions asked, you start trying to attack people. I asked you some technical questions about your stance, and you refused to answer, because you couldn't answer.
This is the exact method of operation I've seen. It's always them replying to you in poor taste (when one has had no prior conversation with them). And when you bring any technical merit to your reply, they get angry, name-call, then put you on ignore.

LOL, that's how you know you won the debate.:D
Personally, I wouldn't brag about ignoring someone you started a confrontation with.:p

As it stands now, big Navi is said to be a monster of a GPU. One thing I do know is that a decent upgrade would need to include at least 16GB of VRAM. That's true for me, at least, if Sony decides to release more games on PC.
 

He did it several times in the past few years. I would try to have a technical argument with him, and he would just start attacking people and then play the victim. That shows you how insecure his own arguments are, when he needs to retreat into a larger reality-distortion echo chamber on a technical forum, which is itself an echo chamber. He butted into a conversation I was having with someone else, and he was so busy being right that I had actually already answered his question! :D

Also, for a person who is so certain AMD is going to lose this battle, he does seem to be one of the top posters in this thread. Conversely, he has hardly any posts in the Nvidia Ampere thread. Oh well, it must be brilliant then.

:D
 
This is a rumours thread; any rumour should be taken with a ton of salt. At least Wccftech and AdoredTV have a better track record than all the moaners. In the end, if you want 100% accurate performance figures, wait until launch and ignore these threads!
Lol.
 