
** The AMD Navi Thread **


I'd say this isn't true, unless you have anything to back up what you're saying. I know my GTX 970's performance is in no way gimped compared to years ago when I bought it :)

Hi, I'm not talking about how a GPU performs in the same games today as it did two years ago. That's one of the big mistakes site reviewers make when they do a "how does this three-year-old GPU perform today" piece and test it with the same old games it was optimised for three years ago. All they do is prove there's no gimping going on (I bet Nvidia love that, as it diverts from the real issue) instead of showing what matters today, which is new releases.

This is what I think happens over time: Nvidia doesn't gimp products, but they focus their driver optimisation on the new range, so we see a pattern where the performance of last-gen cards gradually drops off in new and future game releases when you compare them against both their replacements and the competition later in life. People used to always claim the 980 Ti matched the 1070, even though it never did; they'd even claim it had the overclocking headroom to butt heads with a 1080, but it never did. We seem to be giving people bad advice again and again on tech forums, because when you look at the latest games it's been dropping off even more than it did back in the day.

A quick Google turned up this thread from March 2018 over at LTT's forum showing that it still goes on (I've seen the same advice being given here, though not as recently). https://linustechtips.com/main/topic/907043-gtx-980ti-vs-1070-for-gaming/
I'm going off topic a bit now, so back to my point:
No, I don't think Nvidia actively gimp their older-gen GPUs.
I do think they move their focus to optimising and maximising performance for new and future game releases, letting older-gen cards fall away compared to the new stuff.
 
On Reddit's /r/nvidia they test each new driver release on Pascal using a range of games, including new ones. I've been checking it for the last six months and there has been zero performance difference for Pascal; there is no nerfing.

Any difference you think there is comes from Nvidia making architecture improvements on new cards; when developers use these features they gain extra performance, but this applies to new games, not old ones.

Just because the 1080 Ti is 30% slower than the 2080 Ti in the last Wolfenstein game doesn't mean the 1080 Ti is nerfed when it's 50% slower in the new Wolfenstein. It's because the new game uses Adaptive Shading to gain performance on Turing, but some people think this is nerfing... just as an example of what I've come across.
 
I don't know; once both have been tweaked it seems pretty close (it helps that the Radeon gets a performance boost from both a memory and a core overclock, I reckon).


The frame times for the RVII also remain excellent. It might be the 16GB of HBM doing the trick, but it's not just about raw fps readings. Hah, I never thought I would write that about an AMD card.

Big Navi with 16GB of on-package HBM? Probably technically impossible, but you never know. It is a really tempting thought.

Why "technically impossible"? It's actually very likely. The board power of the existing Navi 10 cards is quite high already so it's not outside the realms of possibility that Navi 20 would need to go HBM just to keep the power under control (much like Vega).

I don't think we'll see Navi with HBM. Lisa Su said both architectures (Vega & Navi) will be around and that Navi is the gaming-focused one, so I don't think they need to use HBM. 12 or 16GB of GDDR6 will be plenty and cheaper for them, so why use HBM?

On Reddit's /r/nvidia they test each new driver release on Pascal using a range of games, including new ones. I've been checking it for the last six months and there has been zero performance difference for Pascal; there is no nerfing.

Any difference you think there is comes from Nvidia making architecture improvements on new cards; when developers use these features they gain extra performance.

Hi, you're right that Nvidia don't nerf their cards; I did say that. And yes, Pascal had big improvements made to its architecture that Maxwell didn't have, but that's another topic, as I think they purposely held features back so they'd have them for Pascal. Remember the 980 Ti's async compute feature they touted was coming? It never ended up getting it; instead it's part of what Pascal has over Maxwell. Nvidia use every trick in the book, and they even add new ones, but that's the nature of the beast.
 
I don't think we'll see Navi with HBM. Lisa Su said both architectures (Vega & Navi) will be around and that Navi is the gaming-focused one, so I don't think they need to use HBM. 12 or 16GB of GDDR6 will be plenty and cheaper for them, so why use HBM?

Because HBM is much faster and more power efficient. It also allows Nano-style cards to be designed.
 
I don't think Navi will get HBM. It's too expensive for their mainstream cards. Vega seems to be aimed partly at productivity, so HBM (sort of) made sense there (although I think cheaper non-HBM memory might have been more successful).

I can also see Vega still being produced for a while. There is one market in which neither Nvidia nor Navi can yet be used: GPUs for Mac Pros and Apple/Blackmagic's eGPU, which uses a Vega 56.
 
I don't think we'll see Navi with HBM. Lisa Su said both architectures (Vega & Navi) will be around and that Navi is the gaming-focused one, so I don't think they need to use HBM. 12 or 16GB of GDDR6 will be plenty and cheaper for them, so why use HBM?
Samsung have 2GB GDDR6 modules, so those could get slapped onto an existing PCB for 16GB of RAM on the 256-bit bus. How much more power they draw over the 1GB modules remains to be seen. I was only thinking HBM because of the existing board power.
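
To sanity-check that module maths, here's a minimal sketch (the 32-bit interface per GDDR6 chip is the standard arrangement; everything else just follows from the bus width):

```python
# Rough check of the GDDR6 capacity maths: a 256-bit bus is normally
# wired as eight 32-bit channels, one GDDR6 chip per channel.
BUS_WIDTH_BITS = 256
CHIP_WIDTH_BITS = 32                        # standard GDDR6 chip interface
chips = BUS_WIDTH_BITS // CHIP_WIDTH_BITS   # -> 8 chips

for density_gb in (1, 2):                   # 1GB (current 5700 XT) vs 2GB modules
    print(f"{density_gb}GB modules x {chips} = {chips * density_gb}GB total")
# 1GB modules x 8 = 8GB total
# 2GB modules x 8 = 16GB total
```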
 
Because HBM is much faster and more power efficient. It also allows Nano-style cards to be designed.

It also costs twice as much. IIRC the 16GB of HBM2 on the Radeon VII costs $300 USD direct from the supplier.

I don't think AMD wants to have to price its cards higher than Nvidia's when it's $300 USD for the VRAM alone.

Extrapolating out the costs assuming AMD pays market price (maybe they get a special deal): say we swap out the 5700 XT's 8GB of GDDR6 for 16GB of HBM2. Instead of a $450 USD card, the 5700 XT's retail price would be around $675 USD. And then you have to ask, was it worth it?
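
To make the extrapolation explicit, a minimal sketch of the sums: the $300 USD HBM2 figure is the one quoted above, while the ~$75 USD for 8GB of GDDR6 is an assumption of mine chosen to land on the $675 estimate:

```python
# Hypothetical 5700 XT pricing if the 8GB of GDDR6 were swapped for
# 16GB of HBM2. All figures USD, assuming no special deal for AMD.
RETAIL_5700XT = 450   # launch price from the post above
HBM2_16GB = 300       # quoted supplier cost for the Radeon VII's 16GB HBM2
GDDR6_8GB = 75        # assumed cost of the existing 8GB of GDDR6

hbm_retail = RETAIL_5700XT - GDDR6_8GB + HBM2_16GB
print(f"HBM2-equipped 5700 XT would retail around ${hbm_retail}")  # ~$675
```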
 
Because HBM is much faster and more power efficient. It also allows Nano-style cards to be designed.
It also costs twice as much. IIRC the 16GB of HBM2 on the Radeon VII costs $300 USD direct from the supplier.

I don't think AMD wants to have to price its cards higher than Nvidia's when it's $300 USD for the VRAM alone.

Extrapolating out the costs assuming AMD pays market price (maybe they get a special deal): say we swap out the 5700 XT's 8GB of GDDR6 for 16GB of HBM2. Instead of a $450 USD card, the 5700 XT's retail price would be around $675 USD. And then you have to ask, was it worth it?

That's what I'm thinking; the cost is important, and I don't think they're that bothered about doing another Nano. They might, but I think the Fury Nano was more about doing something unique with HBM, which was getting that level of performance in the smallest footprint, showing its potential more than anything. People thought we'd see a Nano with Vega, but it didn't happen, which admittedly was likely related to Vega's power levels. So if HBM does come to Navi maybe we'll see another, but at the time the Fury Nano seemed pointless to me, as 99% of the time you could squeeze a full Fury X into the same footprint, so why lose performance by going with its asthmatic brother? I thought it was even crazier when people chose Nanos for water loops.


Samsung have 2GB GDDR6 modules, so those could get slapped onto an existing PCB for 16GB of RAM on the 256-bit bus. How much more power they draw over the 1GB modules remains to be seen. I was only thinking HBM because of the existing board power.

Hi, power draw should be low enough with Navi to make it a non-issue. But yeah, if big Navi's no better than Vega they may go with HBM again; if the need isn't there, though, then as we've seen with Nvidia, GDDR6 on a 256-bit bus is more than capable, plus they can widen the bus a bit more if needed.
 
The reference cooler's performance will be very important too.
I don't think Navi will get HBM. It's too expensive for their mainstream cards. Vega seems to be aimed partly at productivity, so HBM (sort of) made sense there (although I think cheaper non-HBM memory might have been more successful).

I can also see Vega still being produced for a while. There is one market in which neither Nvidia nor Navi can yet be used: GPUs for Mac Pros and Apple/Blackmagic's eGPU, which uses a Vega 56.

Vega is EOL.
 
I don't think Navi will get HBM. It's too expensive for their mainstream cards. Vega seems to be aimed partly at productivity, so HBM (sort of) made sense there (although I think cheaper non-HBM memory might have been more successful).

I can also see Vega still being produced for a while. There is one market in which neither Nvidia nor Navi can yet be used: GPUs for Mac Pros and Apple/Blackmagic's eGPU, which uses a Vega 56.


Vega production stopped months ago; they ramped up, flooded the market and then ceased. Stocks are now slimming, hence the ranges dwindling and some prices going up. The Vega 7 is nearly totally gone!

I know there are lots of Vega 56 reference cards left, so I can see a last run on those, but the custom-cooled stuff is all but gone now.

5700 series on Sunday, and I can see a 5600 series making an appearance in the future, no doubt.
 
At this stage I can't see AMD slashing prices; at best they will offer a game bundle or some sort of voucher scheme like they did with the Vega launch. But at some point in the near future (maybe by the start of autumn?) AMD will have to move on price.
 
If the 2060S hits 2070 levels, though, the 5700 XT would still be 5-10% quicker at launch, which is significant. And then we might get fine wine with driver support. AMD included Nvidia-friendly titles in those benchmarks, IIRC.

Those performance figures are from AMD's slides, and we both know that AMD and Nvidia slides have never told the whole truth when it comes to performance. It's always the best-case scenario. If you take out the outliers, the 5700 XT ranges from 3% slower to 6% faster than the 2070, which averages out at roughly 3% faster; even if you include the outliers it only works out at just over 6% faster.

It would be great if AMD pulled a fast one and the actual performance turned out better than their slides indicate when reviewers get the cards. But when has that ever happened? Performance has always been worse once real reviews hit.
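
For what it's worth, the "roughly 3%" vs "just over 6%" figures are just the mean of the per-game deltas with and without the outliers. A minimal sketch, with placeholder deltas standing in for the actual slide numbers:

```python
# Mean of the 5700 XT's per-game deltas vs the RTX 2070, with and
# without outliers. These deltas are illustrative placeholders only,
# not the actual figures from AMD's slides.
deltas = [-3, 2, 4, 5, 6, 12, 18]              # % vs 2070; last two are outliers

trimmed = [d for d in deltas if -3 <= d <= 6]  # keep the -3%..+6% range
print(sum(trimmed) / len(trimmed))             # 2.8  -> "roughly 3% faster"
print(sum(deltas) / len(deltas))               # ~6.3 -> "just over 6% faster"
```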
 
If AMD don't drop the MSRP of these GPUs they are screwed. They are decent performers, but $100 too expensive and a joke unless prices reflect the current marketplace. At the same price as a 2070 Super for 10% less performance, AMD will deserve to be ridiculed and laughed at.

AMD got greedy and attempted to call Nvidia's bluff on the pricing of the 20x0 Super GPUs. It failed. AMD, fix your pathetic Navi prices.
 
If AMD don't drop the MSRP of these GPUs they are screwed. They are decent performers, but $100 too expensive and a joke unless prices reflect the current marketplace. At the same price as a 2070 Super for 10% less performance, AMD will deserve to be ridiculed and laughed at.

AMD got greedy and attempted to call Nvidia's bluff on the pricing of the 20x0 Super GPUs. It failed. AMD, fix your pathetic Navi prices.

The real question is this: even if they undercut the competition, will people stop throwing money at Nvidia?
 
Well, it wouldn't change overnight in my situation, due to the lock-in with G-Sync. I would switch and sell my monitor if AMD were at the right price/performance at the right time. I, like the majority of the market (though not this forum), do not upgrade every year, so for me timing would be key.
 
Those performance figures are from AMD's slides, and we both know that AMD and Nvidia slides have never told the whole truth when it comes to performance. It's always the best-case scenario. If you take out the outliers, the 5700 XT ranges from 3% slower to 6% faster than the 2070, which averages out at roughly 3% faster; even if you include the outliers it only works out at just over 6% faster.

It would be great if AMD pulled a fast one and the actual performance turned out better than their slides indicate when reviewers get the cards. But when has that ever happened? Performance has always been worse once real reviews hit.

The Radeon guy was reckoning this had happened in the past and that they've tried to be more honest with this set of benchmarks. We'll see; Hardware Unboxed do have their sample, so I imagine we'll get unbiased benchmarks on Sunday.
 