AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

I just realised that if AMD can flick all those CUs over to ray tracing, this could be a beastly card for rendering.

I wonder who would come out on top between Ampere and RDNA 2

Since all the CUs can switch between normal compute and RT modes, the approach should be a lot more flexible and efficient than the dedicated hardware approach. I think it will be able to switch between modes on the fly as and when needed: a random CU can switch to RT when it's just finished a compute task, and vice versa. Best of both worlds. A toy sketch of that idea is below.
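Here's a toy sketch of the mode-switching idea as described above. This is purely illustrative of the speculation in this post, not a claim about how RDNA 2 (or any real GPU) actually schedules work; all the numbers are made up:

```python
# Toy model of the mode-switching idea above: a pool of units that can each
# run either compute or RT work, versus a fixed split of dedicated units.
# Purely illustrative of the speculation - not real GPU scheduling.
from collections import deque

def steps_to_drain(compute_jobs: int, rt_jobs: int, units: int, flexible: bool) -> int:
    """Time steps needed to finish both queues, one job per unit per step."""
    compute = deque(range(compute_jobs))
    rt = deque(range(rt_jobs))
    steps = 0
    while compute or rt:
        if flexible:
            # Any idle unit grabs whichever queue still has work.
            for _ in range(units):
                if compute:
                    compute.popleft()
                elif rt:
                    rt.popleft()
        else:
            # Dedicated hardware: half the units only do compute, half only RT.
            for _ in range(units // 2):
                if compute:
                    compute.popleft()
            for _ in range(units - units // 2):
                if rt:
                    rt.popleft()
        steps += 1
    return steps

# Skewed workload: lots of compute, little RT. The flexible pool finishes
# sooner because no unit sits idle waiting for RT work.
print(steps_to_drain(80, 8, units=8, flexible=True))   # 11
print(steps_to_drain(80, 8, units=8, flexible=False))  # 20
```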
 
His reasoning falls apart because AMD stated they were going for a full stack. If we don't get a full stack, it will be because of issues with the design or manufacturing, rather than AMD not being interested.

Good Answer. :D
Where have AMD said they are going for a full stack? And where does it say what that means? So far I only recall Laura saying in their own Bring Up interview that they are going after "that upper tier" above 1440p, which might as well just be a 3070 + 10ish%.
 
I won't be "angry" with AMD if they do that. I'll just already have bought something else at that point.

Heck, I'm not going to be mad at Nvidia if/when they bring out a card that's faster than the 3080... possibly even cheaper.

If I *knew* something better was imminent, I would wait and buy whatever that something better is. But I'm willing to gamble a loss of value on my 3080 that "something better" won't happen this year. I want something to push my Reverb more comfortably than my 1080Ti can. I have been waiting since May, and the 3080 looks like it will get the job done at a reasonable price. I know something better is coming from either AMD or Nvidia (or both), but I'm rolling the dice on *when*. It's a calculated risk.

AMD will be launching in Oct/Nov which is literally 4-6 weeks away. Why on earth would you buy knowing full well that a rival is coming within a few weeks of your purchase? I guess if you are going to buy Nvidia regardless then that's fine but for a normal savvy consumer it's best to wait to see the options. I know many cannot hold their excitement but sometimes thinking logically is better.
 
One of the Anandtech guys did, over on Twitter.

Thanks, I found it in an article:

The immediate oddity here is that power efficiency is normally measured at a fixed level of power consumption, not a fixed level of performance. With power consumption of a transistor increasing at roughly the cube of the voltage, a “wider” part like Ampere with more functional blocks can clock itself at a much lower frequency to hit the same overall performance as Turing. In essence, this graph is comparing Turing at its worst to Ampere at its best, asking “what would it be like if we downclocked Ampere to be as slow as Turing” rather than “how much faster is Ampere than Turing under the same constraints”. In other words, NVIDIA’s graph is not presenting us with an apples-to-apples performance comparison at a specific power draw.

If you actually make a fixed wattage comparison, then Ampere doesn’t look quite as good in NVIDIA’s graph. Whereas Turing hits 60fps at 240W in this example, Ampere’s performance curve has it at roughly 90fps. Which to be sure, this is still a sizable improvement, but it’s only a 50% improvement in performance-per-watt. Ultimately the exact improvement in power efficiency is going to depend on where in the graph you sample, but it’s clear that NVIDIA’s power efficiency improvements with Ampere, as defined by more normal metrics, are not going to be 90% as NVIDIA’s slide claims.

All of which is reflected in the TDP ratings of the new RTX 30 series cards. The RTX 3090 draws a whopping 350 watts of power, and even the RTX 3080 pulls 320W. If we take NVIDIA’s performance claims at their word – that RTX 3080 offers up to 100% more performance than RTX 2080 – then that comes with a 49% hike in power consumption, for an effective increase in performance-per-watt of just 34%. And the comparison for the RTX 3090 is even harsher, with NVIDIA claiming a 50% performance increase for a 25% increase in power consumption, for a net power efficiency gain of just 20%.

https://www.anandtech.com/show/1605...re-for-gaming-starting-with-rtx-3080-rtx-3090

This means that if RDNA 2.0 is done right, delivers the 50% performance-per-watt improvement over RDNA 1.0 promised and targeted in AMD's own slides, and the graphics cards themselves are well tuned out of the box, then we may very well see RDNA 2.0 better Ampere.
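To sanity-check the arithmetic in the quoted article, here's a quick back-of-envelope sketch; the percentage deltas come straight from the quote above, and the 215 W figure is the RTX 2080's reference TDP. (The "cube of the voltage" remark is the usual first-order model: dynamic power ≈ C·V²·f, and since achievable frequency scales roughly with voltage, power goes roughly as V³.)

```python
# Back-of-envelope perf-per-watt check, using the deltas quoted above.
def perf_per_watt_gain(perf_gain: float, power_gain: float) -> float:
    """Net perf/W improvement given fractional perf and power increases."""
    return (1 + perf_gain) / (1 + power_gain) - 1

# RTX 3080 vs RTX 2080: up to +100% performance for +49% power (215 W -> 320 W).
print(f"{perf_per_watt_gain(1.00, 0.49):+.0%}")  # +34%
# RTX 3090: +50% performance claimed for +25% power.
print(f"{perf_per_watt_gain(0.50, 0.25):+.0%}")  # +20%
```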
 
Where have AMD said they are going for a full stack? And where does it say what that means? So far I only recall Laura saying in their own Bring Up interview that they are going after "that upper tier" above 1440p, which might as well just be a 3070 + 10ish%.
From memory, I think it was at their Financial Analyst Day earlier in the year that they mentioned this. I don't remember the exact wording used, though.
 
AMD will be launching in Oct/Nov which is literally 4-6 weeks away. Why on earth would you buy knowing full well that a rival is coming within a few weeks of your purchase? I guess if you are going to buy Nvidia regardless then that's fine but for a normal savvy consumer it's best to wait to see the options. I know many cannot hold their excitement but sometimes thinking logically is better.

If AMD then rush out a competitor, but have a crap cooler, QA/QC problems, driver bugs, no stock etc., then the same lot will say "why can't AMD spend more time and make a better launch?" and buy Nvidia. If they launched a month before Ampere, people would still wait for Ampere. AMD can't win either way - they need to think of the next 18 months to 2 years of sales, so better they have a decent launch and make sure the cards are implemented properly.

Thanks, I found it in an article:

https://www.anandtech.com/show/1605...re-for-gaming-starting-with-rtx-3080-rtx-3090

This means that if RDNA 2.0 is done right, delivers the 50% performance-per-watt improvement over RDNA 1.0 promised and targeted in AMD's own slides, and the graphics cards themselves are well tuned out of the box, then we may very well see RDNA 2.0 better Ampere.

They can target higher clockspeeds and more VRAM for similar SKUs.
 
From memory, I think it was at their Financial Analyst Day earlier in the year that they mentioned this. I don't remember the exact wording used, though.
Also, shortly after Turing launched, AMD were asked when they were going to do ray tracing. The response was (to paraphrase): when we can reliably release it across the full product stack.
 
Since all the CUs can switch between normal compute and RT modes, the approach should be a lot more flexible and efficient than the dedicated hardware approach. I think it will be able to switch between modes on the fly as and when needed: a random CU can switch to RT when it's just finished a compute task, and vice versa. Best of both worlds.

I think this is a misunderstanding of how AMD approaches it. There are limits as to what you can process concurrently with ray tracing as well.
 
Another thing I've seen, and had an argument with a mate about, was the power requirements for the Nvidia 3000 series; he reckoned they're going backwards again with power requirements. I didn't have a problem with the power requirements at all!

Just wondering on here: who cares? When I say this, let's just say for argument's sake the 3080 needed 500W... I wouldn't bat an eyelid at this, because for me, as long as it performed, I really don't care what power it uses to get there. If it's the best in the world but requires tons of power, yeah, I know that's not efficient, blah blah blah, but for me I'd still go and buy it even if it required 1000W to run, because it's the best... I've noticed that these new cards need that extra horsepower... It was like when I ran quadfire with two 295X2s... that was pulling incredible wattage from the wall... did I care? Not really; they were both water cooled and ran fine lol.

So, my question here is: if AMD release something that is the exact performance of, say, the 3080 for argument's sake, and it's £100 less BUT used 500W... what would people go for then? For me it's a no-brainer: AMD, as performance per £ is better; I couldn't give two hoots if it uses more power to get there. However, I get that people may go "well, that's not efficient and poor design", blah blah, but the fact is, it's as fast and cheaper...

Just wondering what people's priority 1 is here... out-and-out performance? Performance per £? Or performance per watt? Is it a combo of all three? Mine personally is performance per £. See the sketch below.
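To make the trade-off concrete, here's a toy comparison using the hypothetical figures floated above (3080-level performance at £649 and 320 W versus a made-up rival at £549 and 500 W; none of these are real product specs):

```python
# Toy metric comparison for the hypothetical cards discussed above.
# Performance is normalised to the 3080 (= 100); the rival's price and
# power are this thread's hypotheticals, not real product specs.
cards = {
    "RTX 3080":                 {"perf": 100, "price_gbp": 649, "power_w": 320},
    "AMD rival (hypothetical)": {"perf": 100, "price_gbp": 549, "power_w": 500},
}

for name, c in cards.items():
    perf_per_pound = c["perf"] / c["price_gbp"]
    perf_per_watt = c["perf"] / c["power_w"]
    print(f"{name}: {perf_per_pound:.3f} perf/£, {perf_per_watt:.3f} perf/W")

# The hypothetical AMD card wins on perf/£ (0.182 vs 0.154) but loses on
# perf/W (0.200 vs 0.312) - which metric matters most is the question here.
```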
 
If AMD then rush out a competitor, but have a crap cooler, QA/QC problems, driver bugs, no stock etc., then the same lot will say "why can't AMD spend more time and make a better launch?" and buy Nvidia. If they launched a month before Ampere, people would still wait for Ampere. AMD can't win either way - they need to think of the next 18 months to 2 years of sales, so better they have a decent launch and make sure the cards are implemented properly.

They can target higher clockspeeds and more VRAM for similar SKUs.

I agree they should aim to compete with Nvidia by not trying to compete with Nvidia. The whole GPU landscape is stacked in Nvidia's favour, the YT tech community is full of NV shills, and they have massive market share and mindshare. They didn't smash Intel to pieces by rushing out chips and hoping to compete; they worked diligently at producing a far better product on their own timescale, irrespective of what Intel were doing.
 
If AMD then rush out a competitor, but have a crap cooler, QA/QC problems, driver bugs, no stock etc., then the same lot will say "why can't AMD spend more time and make a better launch?" and buy Nvidia. If they launched a month before Ampere, people would still wait for Ampere. AMD can't win either way - they need to think of the next 18 months to 2 years of sales, so better they have a decent launch and make sure the cards are implemented properly.

I think this time the stock cooler may well be decent. Weird to think a while back we expected the Radeon VII cooler to be good, and that turned out to be a POS as well. But they've heard the noise complaints and spoken about them in public, so here's hoping the next cooler will be decent.
 
I agree they should aim to compete with Nvidia by not trying to compete with Nvidia. The whole GPU landscape is stacked in Nvidia's favour, the YT tech community is full of NV shills, and they have massive market share and mindshare. They didn't smash Intel to pieces by rushing out chips and hoping to compete; they worked diligently at producing a far better product on their own timescale, irrespective of what Intel were doing.

I think this time the stock cooler may well be decent. Weird to think a while back we expected the Radeon VII cooler to be good, and that turned out to be a POS as well. But they've heard the noise complaints and spoken about them in public, so here's hoping the next cooler will be decent.

It is better they take the time to have a decent product and a decent launch. Look at what happened to the R9 290X - even months later, when the aftermarket models launched, the memes still stuck in people's minds, and it didn't sell that well.
 
It is better they take the time to have a decent product and a decent launch. Look at what happened to the R9 290X - even months later, when the aftermarket models launched, the memes still stuck in people's minds, and it didn't sell that well.

It is somewhat baffling that a company like AMD seems unable to produce a decent stock cooler. The reason they gave in that interview a while back was that blowers offered consistent performance and didn't depend as much on the chassis, whereas axial coolers depended more on the airflow in the case.

All well and good, but do they have to be so loud? Can they not make the heatsink wider to give more surface area for lower fan speeds? That was the big issue with the Radeon VII: it LOOKED good, but when you took the card apart the actual surface area wasn't great at all, plus it had massive chunks taken out of it to accommodate the fans, instead of making it wider so the heatsink could be taller and the fans not taking away from it.
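For what it's worth, the surface-area point can be put into rough numbers with the usual first-order model: die temperature ≈ ambient + power × thermal resistance, with thermal resistance falling roughly in proportion to effective fin area at a fixed airflow. The figures below are illustrative assumptions, not Radeon VII measurements:

```python
# First-order model: T_die = T_ambient + P * R_theta, where R_theta (C/W)
# scales inversely with effective fin surface area at a fixed airflow.
# Illustrative assumptions only - not measured Radeon VII figures.
def die_temp(power_w: float, r_theta: float, ambient_c: float = 25.0) -> float:
    """Steady-state die temperature in C for a given thermal resistance."""
    return ambient_c + power_w * r_theta

POWER_W = 300.0   # assumed board power
R_BASE = 0.20     # assumed C/W for the baseline fin area

for area_scale in (1.0, 1.25, 1.5):
    print(f"{area_scale:.2f}x fin area -> ~{die_temp(POWER_W, R_BASE / area_scale):.0f} C")
# 1.00x -> 85 C, 1.25x -> 73 C, 1.50x -> 65 C: a wider/taller heatsink can
# shed the same power at a lower delta-T, or at lower fan speeds.
```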
 
AMD will be launching in Oct/Nov which is literally 4-6 weeks away. Why on earth would you buy knowing full well that a rival is coming within a few weeks of your purchase? I guess if you are going to buy Nvidia regardless then that's fine but for a normal savvy consumer it's best to wait to see the options. I know many cannot hold their excitement but sometimes thinking logically is better.

I can wait if AMD gives me a reason to wait. Just launching "something" in 4-6 weeks doesn't cut it. Something could be anything.
 
AMD will be launching in Oct/Nov which is literally 4-6 weeks away. Why on earth would you buy knowing full well that a rival is coming within a few weeks of your purchase? I guess if you are going to buy Nvidia regardless then that's fine but for a normal savvy consumer it's best to wait to see the options. I know many cannot hold their excitement but sometimes thinking logically is better.

What happens if I wait, and then AMD release a lemon that can't compete with a 3080?

I'll be stuck waiting for the 3080 to come back in stock at its RRP for potentially months, because they all sold out and what little stock trickles through is price gouged to the hilt. I will not pay over the £649 that they should be.
 