
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

It won't beat Ampere unless nV's entire R&D team has spent all their time since Turing snorting crack off a lady of the night's bum.
Surely smoking crack or snorting cocaine? I thought that's what made crack so addictive. I guess they could probably afford it too, although it would certainly shake things up for all of us if they were 'busy' doing that rather than the day job.
 
Yet they didn't take this approach with Intel. Because reasons?
Maybe because their CPU mindshare was almost non-existent; they were last competitive in the CPU space nearly 20 years ago, but in GPUs they've been competitive far more recently. I'm just speculating, but there's so much money involved that the situation begs for shady dealing.
 
It's in both their interests to keep prices high; that's how cartels work. A quiet chat in a bar between 'friends' who work in the industry on different 'sides' to get a ballpark idea of each other's performance. It's not espionage, it's collusion with zero paper trail, and it's inherently unprovable. How else would different architectures arrive at very similar performance/price? Huawei vs Apple in mobile phones demonstrates what happens when there isn't a gentleman's agreement: similar features for radically different prices. Having just two players makes it much easier.
I don't think it's in AMD's interest at all to match prices, especially at the high end where people are used to buying Nvidia products which are tried and tested and which people trust.

I'm open to buying AMD, but only if it's quite a bit cheaper than Nvidia for a similar level of performance. If the difference in price is only £50 then I would probably pay the extra to stick with Nvidia, as 50 quid when I'm already shelling out £700-800 isn't a big discount and not enough for me to take a chance on AMD.
 
I agree, a £50 difference isn't enough; they need to disrupt the market rather than repeat the 5700 XT launch. I'm desperately hoping for a 9700 Pro moment but I'm not that confident it will happen. If they can replicate what they've done with CPUs in the GPU space I'll be very happy.
 
Just pointing out something which is rare when it comes to AMD here: thanks to the node shrink they have done well with power consumption, something Nvidia normally has bragging rights over. Couple that with the higher power draw this iteration for Nvidia's new lineup (30x0), and we might see a very tight battle when it comes to fps/watt/£. Where it gets ultra interesting is if they can clear the 2080 Ti at a decent power draw AND price it at realistic levels; then it's gonna be popcorn sold out.

Someone explained that to me a long time ago on another forum. He claimed to be an engineer of sorts, I don't recall the exact title. But it went a little like this:
The gist was that for GPUs it's better to make the die bigger, as it helps dissipate heat. Keeping that heat dissipation well controlled helped with the overall power consumption the die needed. This was why Nvidia used larger dies; it was not just about transistor count alone but about managing thermals, which helped control power consumption.

Packing everything into a tight, confined space (this is what ATI/AMD were allegedly doing) might have decreased the die size but would affect thermals and thus power consumption. There was a bit more to this technically; I'm just going off memory on that.

So with everything shrinking to unheard-of sizes and compute increasing, there might be an equilibrium reached where one might need to do more with their GPU IP than just shrink dies.
We already know what AMD will do. That super-duper all-in-one SoC that does EVERYTHING... from computers to phones and from cars to manufacturing.
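
To put that thermal argument in rough numbers, here's a minimal Python sketch of the power-density idea. The wattage and die areas below are made-up illustrative figures, not real card specs.

Code:
# Rough power-density comparison for two hypothetical GPU dies.
# All figures here are illustrative assumptions, not real card specs.

def power_density(board_power_w: float, die_area_mm2: float) -> float:
    """Watts the cooler has to pull out of each mm^2 of silicon."""
    return board_power_w / die_area_mm2

# Same power budget, spread over a big die vs a small one.
big_die = power_density(250, 470)     # ~0.53 W/mm^2
small_die = power_density(250, 250)   # ~1.00 W/mm^2

print(f"big die:   {big_die:.2f} W/mm^2")
print(f"small die: {small_die:.2f} W/mm^2")
# The smaller die concentrates roughly twice the heat per mm^2, which is
# the 'packing everything into a tight, confined space' problem described above.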
 

For the same number of transistors I believe a bigger package dissipates heat better, but I'm not sure how it works out for a smaller core at higher clock speeds versus a larger core at lower clock speeds for the same performance. Oddly, with Fermi it seemed like smaller cores at higher clock speeds generated less heat at the same performance level than bigger cores at lower clock speeds, but that doesn't really make sense to me.
 
It's fairly true what you say, i.e. look at Ryzen's surface area, which would be a positive when it comes to cooling. The leap I'm referring to, however, is this snippet repeated by various news sources:
"AMD has been promising big things for RDNA 2 for a while now, specifically a 50% improvement in performance per watt over first gen RDNA."

Normally with the red vs green releases, in an attempt to compete AMD blasts the clocks up at the expense of power efficiency, though it has got marginally better in recent times.

So if the new AMD cards are 50% better over first gen, and Nvidia are going for the clocks at the expense of power efficiency, it's surely going to be a close one?
 
Hard to say. We are only going on rumors right now, after all.
AMD is allegedly going for the crown with RDNA 2. Personally I would have believed that of RDNA 3, but it's rumored for RDNA 2.
So they would have to have a powerhouse of a GPU to begin with, something far and away more innovative than how cards rasterize games today. So, is that at the expense of power?
Who is to say for certain. But I'm sure the card would need to be able to clock higher.

However, if Nvidia has been re-spinning the die to get the clocks up, that would mean Nvidia is playing catch-up to what they know about RDNA 2. That's the only real way clock boosting becomes a real concern at this point of development.

So let's take the 1080 Ti (11GB) vs the 5700 XT vs Big Navi:
1080 Ti: 471 mm² die on the 16nm process node with 11,800 million transistors
5700 XT: 251 mm² on the 7nm process with 10,300 million transistors
Big Navi: 505 mm² on the 7nm+ process node with 21,000 million transistors (allegedly)

Albeit the transistor count is lower on the 5700 XT vs the 1080 Ti, I believe this is the efficiency you were alluding to.

However, some things are still not adding up. A doubling of everything doesn't necessarily equal double the performance. We are still missing a lot of information on this card, which is why it's so hard to gauge what this Big Navi is going to do vs what RDNA 2 will do on console.
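
As a rough illustration of that 'doubling' point, here's a quick Python sketch using the figures from the list above (the Big Navi numbers are, of course, rumoured and unconfirmed).

Code:
# Transistor density and scaling ratios from the (partly rumoured) specs above.
cards = {
    # name: (die area in mm^2, transistors in millions)
    "1080 Ti":  (471, 11_800),
    "5700 XT":  (251, 10_300),
    "Big Navi": (505, 21_000),   # rumoured figures, not confirmed
}

for name, (area, mtrans) in cards.items():
    print(f"{name:9s} {mtrans / area:5.1f} million transistors per mm^2")

# How much 'bigger' Big Navi would be than the 5700 XT on paper:
area_ratio = cards["Big Navi"][0] / cards["5700 XT"][0]    # ~2.0x
trans_ratio = cards["Big Navi"][1] / cards["5700 XT"][1]   # ~2.0x
print(f"die area x{area_ratio:.2f}, transistors x{trans_ratio:.2f}")
# Roughly double the silicon on paper, but double the hardware rarely means
# double the frames once bandwidth and workload scaling limits kick in.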


For example, with just the 20.7.2 driver release from AMD we noticed a pretty large uptick in performance for the 5000 series cards in Death Stranding. It would be nice for the driver team to put this effort into every game release, as it shows what they can do.

In this game the 5700 XT is only 1 frame behind the 1080 Ti at 1440p, yet the 1080 Ti is beating the 2070 Super by 1 fps. In other words, a tie between the 1080 Ti, 2070 Super and 5700 XT.
 

If you want an interesting case study of what (practically) doubling shader count, ROPs, memory bandwidth etc. does for RDNA, compare the 5500 XT (4 or 8GB, though I feel the 4GB gets held back by VRAM limitations too much, so the performance delta is wider than it should be) to the 5700 XT. Average clock speeds are similar for both. Looking at Computerbase.de and TechPowerUp numbers, I found roughly a 1:1 ratio between power increases and performance uplifts. At TPU the 5700 XT used 74% more power than the 8GB 5500 XT but was 76% faster at 1080p, with 1440p and 4K having larger performance deltas; compared to the 4GB card the 5700 XT used 94% more power and had a 90% performance uplift. Same story over at Computerbase: their samples had the 5700 XT using 62% more power for 62% more performance. Their 4GB performance numbers are quite a bit down vs the 8GB card while the power usage is basically the same, so there, for a 64% increase in power, you got a >70% increase in performance.

Given this and the advertised 50% perf/watt increase for RDNA 2, that would give us a 5700 XT-performing card at around 140W and twice the card at 280W. Provided AMD can get similar performance-to-power scaling for 'Big Navi', a 100% performance uplift (or close to it) is possible, although scaling workloads to that many CUs may be an issue.
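
As a sanity check on those figures, here's a back-of-the-envelope Python sketch. It assumes roughly 210W of board power for the 5700 XT (which is what the ~140W number implies, and is an assumption rather than a quoted spec), and takes the advertised 50% perf/watt uplift plus the near-linear power-to-performance scaling above at face value.

Code:
# Back-of-the-envelope RDNA 2 projection from the figures above.
# Assumptions: ~210 W board power for a 5700 XT, the advertised 50% perf/watt
# uplift holds, and performance keeps scaling roughly 1:1 with power.

navi10_power = 210.0        # W, assumed 5700 XT board power
perf_per_watt_gain = 1.5    # advertised +50% for RDNA 2

# Power needed for 5700 XT-level performance on RDNA 2:
same_perf_power = navi10_power / perf_per_watt_gain
print(f"5700 XT-level performance at ~{same_perf_power:.0f} W")    # ~140 W

# 'Twice the card' (double the CUs etc.), assuming linear power scaling:
double_perf_power = 2 * same_perf_power
print(f"~2x 5700 XT performance at ~{double_perf_power:.0f} W")    # ~280 W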
 
Interesting comparison. It seems Big Navi could well deliver if the rumours are true regarding specs and power budget; let's hope there's some architectural secret sauce that really brings the fight to Nvidia.
 
Pretty obvious NVIDIA would want to be first to market so that they can charge £££££ up until AMD can launch their card. They want to further gouge their user base while they are able ;)
 
Yet they didn't take this approach with Intel. Because reasons?
They've caught and surpassed Intel with quality of product. They're setting the pricing now and Intel are scrambling to compete. AMD can't afford to have a price war with Nvidia. Nvidia are too strong.
 


Nvidia has now surpassed Intel to become the most valuable tech company out of these big 3.

So you could say that Intel is well on its way to being the underdog again, with Nvidia being the big bully.
 
AIB vendor claims Nvidia is close to launching Ampere gaming GPUs, while RDNA 2 cards are MIA - AIB vendors don't yet have testing GPUs from AMD - appears Nvidia is launching well ahead of AMD

https://www.purepc.pl/gralem-w-cyberpunk-2077-wymagania-sprzetowe-i-jakosc-grafiki#comment-725216

I don't see any mention of an AIB partner in that link, only the guy who wrote the article saying Ampere is around the corner, like we didn't know that already :P
 