
AMD Navi 20 Faster Than An Nvidia RTX 2080 Ti?

The difference, though, is Navi isn't a shrink; it's a fresh 7nm design. Yes, it's still the 6th iteration of GCN, so there's only so much that can be done with it, but it's a different approach to Polaris and Vega, which were designed on one process then ported to another. This could make zero difference, or it could make a significant difference.

Just saying "well look how much we didn't get when Vega shrank to 7nm" is a bit short-sighted, I think.

Possibly, but wasn't Polaris a fresh 14nm design?
 
This makes the most sense to me. AMD only just brought out Radeon VII, and to kill it off so early would be suicide for them. I reckon Q1 2020 for big Navi.

What's Navi using, GDDR?


More of the same speculation.
https://www.pcgamesn.com/amd/amd-navi-gpu-release-date-performance

AMD already confirmed that Navi would be ready for both GDDR6 and HBM2 memory. The speedier GDDR memory tech is already in mass production from the big three memory manufacturers – Samsung, SK Hynix, and Micron.
 
Doing more than is needed is what has got Nvidia into trouble with Turing, hence the high prices, as the cores are packed with transistors that have nothing to do with normal gaming. Ray tracing barely works even on an RTX Titan but adds a lot to the cost of the card.

AMD need to forget ray tracing and any other new gimmick and concentrate on building a fast sub-£700 card.

Neither Nvidia nor AMD should go near ray tracing until the node after 7nm, as the price/performance at the moment is just not worth it.

When I watch a pair of RTX Titans running SOTTR with ray tracing maxed, it leaves me scratching my head as to the difference with it on or off.

The proof of my argument is that the biggest competition for a 2080 is actually the GTX 1080 Ti, a card without ray tracing that's more than two years old.
Agreed. RTX as it is just does not work IMO (no matter how many times Mr Leather Man keeps saying it does). The difference it makes is so little that I would much rather not pay for the tensor and RT cores, or have them use those extra transistors to boost traditional rasterising performance instead. It is good that there are now cards out there for developers to experiment with and make better tools and software, though, I suppose. Personally, I will be interested in RT once there is much better hardware out there to run it; right now it is just too early and not worth it.
 
They already have with the VII. With Navi still being GCN, I can't see it offering the level of performance increase some seem to be expecting. Polaris was a die shrink going from 28nm to 14nm, and Navi's in a similar situation going from 14 to 7, so that should give us an idea of what's to come. The Vega VII on 7nm sort of shows there's nothing remarkable coming, due to GCN's limits already being reached: some fine tuning and a bit of polish, but unless they've done some magic with the Infinity Fabric we'll still be waiting for a real competitor as we move into 2020.

Well yes, sure, the VII has, but I meant Navi. You'd think with the PS5 investment the technology would be moving beyond the 28nm-to-14nm leap. If Navi gets variable rate shading it should give it a boost. Could Polaris have been higher end anyway? Vega used GCN too.
 
This makes the most sense to me, AMD only just brought out Radeon VII and to kill it off so early would be suicide for them, I reckon Q1 2020 for big Navi.
Very few bought it due to limited supply, I don't think this card should factor into their decisions if they have something that can worry Nvidia; it would be suicide not to release such a product. After all we got 100%+ performance increases every 6 months in the old days and no one complained, actually it was very exciting and the rate of technological progress was astounding. We need a bit of that again as things are just boring and incredibly overpriced.
 
IMHO Navi 10 will be GDDR6, and Navi 20, arriving either very late Q4 or early Q1 next year, will be HBM2. Just going by previous products.

I was a bit slow with my edit, sorry. :)

Well yes, sure, the VII has, but I meant Navi. You'd think with the PS5 investment the technology would be moving beyond the 28nm-to-14nm leap. If Navi gets variable rate shading it should give it a boost. Could Polaris have been higher end anyway? Vega used GCN too.

Polaris wasn't designed to go beyond mid-range. Theoretically it could have, but it didn't. I remember similar 'what if' chatter on the forum prior to the Polaris release, but it wasn't to be: AMD developed it for the mid and low level while focusing on Vega and HBM2 for the high end at a later date.

Yes, but Polaris didn't have as many compute units as the 290/390, and it didn't have faster VRAM, which this will have.
Navi will have loads more funding from Microsoft and Sony for the R&D

I doubt Navi's going to have any more CUs than you'd usually expect from a range developed for the low and mid tier cards. I get what you're saying about the Grenada cards; they were 40 & 44 CU high-end cards originally, but I'm not sure that helps your point, because we're talking about a similar situation again: how they'll compare to the 56 & 64 CU Vega cards. Navi could have more CUs, and if it does it may make all the difference, but it's meant to be a replacement for the mid-range Polaris series this time. As for Navi 20, I don't think we know enough to determine what that is yet. Is Navi being planned as a replacement for everything, and Arcturus a replacement for that at a much later date? Possibly. The fact that we are getting Navi-based APUs in the next-gen consoles may mean AMD will keep Navi & GCN around a bit longer, especially if they find a way to get multi-chip cards recognised and used like regular cards. As for any extra funding from Microsoft & Sony, it'll be interesting to see whether that makes much of a difference, and if it does, where. It may provide AMD with more of a leg-up on the APU side of things rather than with the discrete Navi GPUs. I suppose it will also depend on what sort of help Microsoft and Sony are giving: funds or expertise. It's going to be an interesting year on the tech front.
 
AMD only just brought out Radeon VII and to kill it off so early would be suicide for them

Not if the glorified PR stunt is unsustainable, and it's not sustainable. The suicide would be diverting Vega 20 packages away from MI50 cards and giving up the massive margins therein, so unless Vega 20 yields really are so poor that they're getting a lot of stock unfit for Instinct cards, it doesn't make long-term sense to (potentially) lose money by churning out an unprofitable product in Radeon VII. And bear in mind that Radeon VII's gaming performance isn't light years ahead of RX Vega 64; it is quite conceivable that the midrange Navi which will replace RX Vega and Polaris could offer Radeon VII levels of performance and actually be profitable.

Of course this has nothing to do with a Navi 20 and Radeon VII replacement or not, if indeed there is a big version in the works it won't land just yet.
 
Yes, but Polaris didn't have as many compute units as the 290/390, and it didn't have faster VRAM, which this will have.

Navi will have loads more funding from Microsoft and Sony for the R&D

Actually, big Polaris used exactly the same amount: 2560 and 2816 shaders, a 384-bit memory bus plus a 3072-bit system access bus, and 12/24 GB of GDDR5 memory. It was used in the Scorpio project.
Remember the RX 490 was cancelled and Vega was brought in at the last minute, much like Radeon VII.
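Just to put numbers on those shader figures: GCN parts are organised as 64 stream processors per compute unit (that ratio is standard GCN, not something stated in the thread), so the 290/390-class and Scorpio counts line up like this:

```python
# Standard GCN organisation: 64 stream processors ("shaders") per CU.
SHADERS_PER_CU = 64

def shader_count(compute_units: int) -> int:
    """Total stream processors for a GCN part with the given CU count."""
    return compute_units * SHADERS_PER_CU

# The 40 CU retail chip and the 44 CU variant mentioned above:
print(shader_count(40))  # 2560
print(shader_count(44))  # 2816
```

Which is why 2560/2816 shaders and 40/44 CUs describe the same silicon.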
 
Actually, big Polaris used exactly the same amount: 2560 and 2816 shaders, a 384-bit memory bus plus a 3072-bit system access bus, and 12/24 GB of GDDR5 memory. It was used in the Scorpio project.
Remember the RX 490 was cancelled and Vega was brought in at the last minute, much like Radeon VII.

There was never an RX 490 with that memory configuration being released, just speculation along those lines after someone at AMD mistakenly listed a 490.


https://www.kitguru.net/components/...2016-dont-put-your-upgrade-plans-on-hold-yet/

If you read the article, who was it that jumped on the 490 listing to fuel their clickbait site over the coming months? Yep, WCCFTech.
 
There was never an RX 490 with that memory configuration being released, just speculation along those lines after someone at AMD mistakenly listed a 490.


https://www.kitguru.net/components/...2016-dont-put-your-upgrade-plans-on-hold-yet/

If you read the article, who was it that jumped on the 490 listing to fuel their clickbait site over the coming months? Yep, WCCFTech.

Regardless of what it would have been named, and of it being axed, the biggest Polaris was the 44 CU Scorpio. It was cancelled as a dGPU for many reasons.
 
Regardless of what it would have been named, and of it being axed, the biggest Polaris was the 44 CU Scorpio. It was cancelled as a dGPU for many reasons.

Fair enough, I don't remember ever seeing anything about this. I've tried Google but can't find anything, and 9 times out of 10 news like this comes from untrustworthy sources creating fake news on slow days to get the clicks. I can't find anything on Polaris ever being planned with more than its 256-bit bus or more than 36 CUs, but I would have been interested in having a read on it. The Xbox One X uses a custom APU with 40 Polaris CUs, but that doesn't pertain to there ever having been plans for a bigger Polaris dGPU like that.


They're one of the worst culprits, as they make a living from a site they built by telling lies, and the problem is the lies end up being bandied around as facts by the time they're third or fourth hand. The fact that sites like WCCFTech do it so blatantly and get away with it is pretty sad. It's why I try to avoid visiting that site. The established tech sites should avoid helping them by not reporting and quoting stuff they post, but sadly a lot of those aren't much better, as we've just seen with the strange geezer and his teddy bear. Integrity is a rare thing nowadays.
 
Fair enough, I don't remember ever seeing anything about this. I've tried Google but can't find anything, and 9 times out of 10 news like this comes from untrustworthy sources creating fake news on slow days to get the clicks. I can't find anything on Polaris ever being planned with more than its 256-bit bus or more than 36 CUs, but I would have been interested in having a read on it. The Xbox One X uses a custom APU with 40 Polaris CUs, but that doesn't pertain to there ever having been plans for a bigger Polaris dGPU like that.

Had a quick search and I think this is what he is referring to.

The Scorpio Engine features 40 or 44 Compute Units (CUs) for the consumer console and dev kit respectively, based on Arctic Islands (similar to Polaris, Radeon RX 4xx).

from: https://en.wikichip.org/wiki/microsoft/scorpio_engine
 
Had a quick search and I think this is what he is referring to.
Fair enough, I don't remember ever seeing anything about this. I've tried Google but can't find anything, and 9 times out of 10 news like this comes from untrustworthy sources creating fake news on slow days to get the clicks. I can't find anything on Polaris ever being planned with more than its 256-bit bus or more than 36 CUs, but I would have been interested in having a read on it. The Xbox One X uses a custom APU with 40 Polaris CUs, but that doesn't pertain to there ever having been plans for a bigger Polaris dGPU like that.

I apologise if this post is just too much to read, or too much time to invest in the links.
I won't be offended, lmao. :)
There is no official documentation on a bigger Polaris dGPU. It's just my understanding of how AMD has historically used a mid-range GPU to develop alongside the console APUs, and how this partnership could then translate into better-performing dGPUs in the next subsequent design.
If you look at the 2013 time frame, we were halfway between 28nm and the ill-fated 20nm; the GCN 1 high end was Tahiti, the mid range Pitcairn.
After a year of rebranding by AMD from 2012-13, the consoles at the time used the Jaguar APUs, with Sony using a tweaked GCN 1.1/1.0 Liverpool '7870' design with 18 CUs (20 CUs, 2 disabled for redundancy), and the Xbox One using Durango with 12 CUs (14 CUs, 2 disabled for redundancy).
So comparing these APUs to dGPU cards, they would be similar to the Pitcairn 7870 and Bonaire 7790, and the improved GCN 1.1 front end was then used for the bigger Hawaii chip, including scaling to their first 4-wide geometry engine.

Here's some geeky info and history on where AMD made most of their core development during the first console partnerships:
http://www.redgamingtech.com/xbox-o...ysis-cpu-gpu-ram-more-part-one-tech-tribunal/
http://www.redgamingtech.com/xbox-o...cture-overview-analysis-part-2-tech-tribunal/


The PS4 Pro uses a custom Polaris chip with 36 active CUs (40 CUs, 4 disabled for redundancy).
The Xbox One X uses a custom Polaris chip with 40 active CUs (44 CUs, 4 disabled, except in the dev kit).
https://www.eurogamer.net/articles/digitalfoundry-2018-ps4-pro-and-xbox-one-x-processors-compared

There are some interesting differences between the two, and in how we ended up with the Polaris 10 dGPU.
There's also an explanation of how the custom hardware design works.

https://www.eurogamer.net/articles/...tation-4-pro-how-sony-made-a-4k-games-machine

The things I said are just my opinion; I'm not always 100% straight to the point, but I definitely got Fiji and Polaris spot on, so make of it what you like. Here are some of my thoughts from over the years.

Apr 2017
https://forums.overclockers.co.uk/t...s-review-thread.18776812/page-9#post-30715462

Yes, but the R&D development and lithography of the GPU section would just be transposed onto a separate interposer/substrate, just like the Sony PS4 Pro and the Polaris 10 desktop part.
The RX 490 would have been 44/40 CU on a 384-bit bus, just like Scorpio. The thing is, Polaris 10 has an inherent weakness in memory bandwidth with its current 256-bit bus, and that, together with the inferior power consumption, meant they axed the bigger Polaris.
If they had made the desktop RX 490 it would have drawn around 250W and performed around a 1070 (160W). This would have been too expensive to make, just like when AMD suffered selling Hawaii while Nvidia were making the cheaper-to-manufacture GM204s. This would have been AMD's GP104, but as you know, we'll see Vega brought in to do this job instead.

Jan 2018
https://forums.overclockers.co.uk/threads/nvidia-volta-with-gddr6-in-early-2018.18777501/page-46

Scaling up the 3072-shader Polaris was axed because of AMD's antiquated GCN core and limited 4-wide geometry engines. It could only scale so far before hitting memory bandwidth and power consumption problems versus performance.


The 384-bit memory controller and bandwidth would have brought marginal gains over the Polaris 10 (2304 shaders), whilst 6GB was too little and 12GB would further upset power consumption and price, where a GP104 with 8GB would have totally smashed it.


Where did they salvage the dies? Well, they are in the underwhelming Xbox Scorpio at 40/44 CU.


And they had to use HBM2, but unfortunately the reality was it was just a redrawn Fury II that arrived late.




GP104 caught AMD with their pants down.

Dec 2018
https://forums.overclockers.co.uk/t...these-seriously.18840953/page-4#post-32389901

All you have to do is look at the cancelled consumer-market Polaris 40/44 (Scorpio).
Revisit the thread on here where we discussed the reasons it was cancelled.


Then I predict it will be redrawn onto 7nm with tweaks to the cache and memory controllers for GDDR6, and should be around 200-230mm² die size, without needing a 384-bit bus or 12 memory controllers to keep the bandwidth up.
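That last point, a 256-bit GDDR6 bus replacing a 384-bit GDDR5 one, can be sanity-checked with the usual back-of-envelope formula: bandwidth (GB/s) = (bus width in bits / 8) × effective data rate (Gbps). A minimal sketch, assuming roughly 8 Gbps GDDR5 and 14 Gbps GDDR6 (the data rates are my assumptions, not figures from the thread):

```python
def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical memory bandwidth in GB/s: bus width in bytes x data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Hypothetical big Polaris on a 384-bit bus with ~8 Gbps GDDR5:
print(bandwidth_gbs(384, 8.0))   # 384.0 GB/s

# A 256-bit bus with ~14 Gbps GDDR6 already beats that:
print(bandwidth_gbs(256, 14.0))  # 448.0 GB/s
```

So at GDDR6 data rates, a narrower bus (and fewer memory controllers) can indeed keep the bandwidth up.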
 