AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Graphics isn't everything; frame rate, for me, is far more important than looks, and PC will still be the platform of choice for this.

I will save this here and remind you of this quote the next time you mention that AMD's image quality is better than Nvidia's and that it is one of the reasons you prefer AMD cards :p:D;)

Ah, you are wrong there TNA :p Shankly always said that the difference in image quality was down to the default colour settings.
 
Good point. AMD's HUGE market share of 20% means piles of cash for R&D. They should be miles ahead of Nvidia, shouldn't they?

Just want to say that AMD have been working on this for at least three years; their first patent was filed at the end of 2017.

But I take it from your post that you think AMD's RT solution and RDNA 2 will be rubbish?

So, if that's your attitude, how did they come up with Ryzen?
 
Except that "dedicated hardware" isn't doing such great performance with performance penalty from notable to half.

The penalty is because Ray Tracing is extremely demanding. Look at the Minecraft and Quake Ray Tracing demos that are out.

Quake II is a good test because it has all the Ray Tracing effects: shadows, reflections, refractions, global illumination etc.

In Quake II, at 1080p and max settings, the 1080Ti can only manage 12 fps. The 2060 can run it at 48 fps.

Not sure how you can say that Nvidia's solution is bad? What are you comparing it to? But if a 2060 can beat a 1080Ti by that margin in a game using full path tracing, then it must be doing OK. Of course we want improved solutions. If both companies can't do better this time around then something is wrong, or else full Ray Tracing is even further away than we thought.
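
For a rough sense of that margin, here's the arithmetic on the fps figures quoted above (a quick sketch, nothing more):

```python
# Back-of-envelope on the Quake II RTX figures quoted above:
# 12 fps on a 1080Ti (path tracing on shaders) vs 48 fps on a 2060
# (dedicated RT cores).
fps_1080ti = 12
fps_2060 = 48

speedup = fps_2060 / fps_1080ti        # 4.0x
frametime_1080ti = 1000 / fps_1080ti   # ~83.3 ms per frame
frametime_2060 = 1000 / fps_2060       # ~20.8 ms per frame

print(f"Speedup: {speedup:.1f}x")
print(f"Frame time: {frametime_1080ti:.1f} ms -> {frametime_2060:.1f} ms")
```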

I just think expecting Ray Tracing performance in the consoles to be better than the 2080Ti is asking a lot, especially since it's AMD's first attempt at a Ray Tracing solution. More than likely it will be a reduced level of Ray Tracing with some clever upscaling techniques to make Ray Traced games playable at 4K, sort of like how DLSS helps current cards get over 60 fps in Ray Traced games now.
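
To illustrate why upscaling helps so much: ray cost scales with the number of pixels shaded. A quick sketch, assuming an internal 1440p render upscaled to a 4K output (illustrative resolutions, not confirmed console figures):

```python
# Ray tracing cost scales roughly with pixels shaded, so rendering
# internally at 1440p and upscaling to 4K (illustrative resolutions,
# not confirmed console figures) cuts the per-frame ray budget a lot.
pixels_4k = 3840 * 2160       # 8,294,400 pixels
pixels_1440p = 2560 * 1440    # 3,686,400 pixels

ratio = pixels_4k / pixels_1440p
print(f"4K output from a 1440p render: ~{ratio:.2f}x fewer pixels to trace")
# ~2.25x fewer primary rays per frame, which is the headroom
# DLSS-style upscaling buys back.
```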
 
It is less about the shrink and more about the number of transistors. Usually you get 400+mm^2 GPUs, and generally the only way to increase performance from there is a node shrink so you can add more transistors. This time around AMD has no 400+mm^2 GPU, so all they really need to do to get a performance uplift similar to a node shrink is release a 400+mm^2 GPU. I think the main thing stopping AMD from releasing such a card with RDNA 1 is that they would need to reduce clock speeds to keep it within a 300W power envelope. With RDNA 2, provided the +50% perf/watt is accurate, they can fit a doubling of Navi 10 into a 300W envelope at similar clock speeds, which should lead to a significant performance uplift, provided workloads can scale across that many CUs.
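
As a quick sanity check on that, here's the arithmetic, assuming the 5700 XT's ~225W reference board power and taking the +50% perf/watt claim at face value (both assumptions, not confirmed figures):

```python
# Sanity check on "doubling Navi 10 into a 300W envelope".
# Assumes the 5700 XT's ~225W reference board power and takes AMD's
# "+50% perf/watt" claim at face value.
navi10_power_w = 225
naive_double_w = 2 * navi10_power_w   # ~450W if you just doubled the chip

perf_per_watt_gain = 1.5              # AMD's claimed +50%
rdna2_double_w = naive_double_w / perf_per_watt_gain

print(f"Doubled Navi 10 at RDNA 1 efficiency: ~{naive_double_w}W")
print(f"Same performance at +50% perf/watt:  ~{rdna2_double_w:.0f}W")  # ~300W
```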

Aye, adding transistors can really make a difference. If you look back at the 8800GTX vs the 7900GTX, there was no die shrink but they more than doubled the size of the chip and the number of transistors.

But performance didn't double; it's not a linear improvement. If AMD make a 500mm^2 chip, it won't be double the performance of the 5700XT.
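
As a toy model of that sub-linear scaling (the 0.8 exponent is purely illustrative, not a measured figure):

```python
# Toy model of sub-linear GPU scaling. Real scaling depends on the
# workload, memory bandwidth and fixed-function overhead; the 0.8
# exponent is purely illustrative, not a measured figure.
def scaled_perf(transistor_ratio: float, exponent: float = 0.8) -> float:
    """Relative performance of a chip with `transistor_ratio` times
    the transistors/area of the baseline."""
    return transistor_ratio ** exponent

ratio = 500 / 250  # hypothetical 500 mm^2 chip vs Navi 10's ~250 mm^2
print(f"~{ratio:.2f}x the area -> ~{scaled_perf(ratio):.2f}x the performance")
# ~2.00x the area -> ~1.74x the performance under this toy model.
```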

AMD claim up to 50% more performance per watt. It's exactly what you said: how accurate this statement is will decide a lot. Does RDNA 2 only reach this figure in some lab test that has no real-world application?

I hope it's accurate, and maybe even erring on the cautious side.
 
For all their faults, Turing's ray tracing capabilities are a generational leap; calling it just rubbish is itself rubbish, even if things are lacking gaming-wise at the moment. Doing it on the shaders is at least 6x slower like for like. I'll be surprised if the consoles outperform the 2080Ti's RT performance unless AMD have some additional tricks up their sleeve; their approach basically alleviates some of the reasons why shaders are so poor at it, rather than going for the best possible solution.

It is going to be funny how quickly some people change their tune once AMD has a decent RT solution and games start making proper use of such features rather than just token use for specific effects.

Well, for all Nvidia's faults and failings, and people have good reason to despise them, you can't say they are bad engineers.

I also wonder how they can say it's a bad solution? What have they to compare it to? Apart from cards with no Ray Tracing hardware, and even then the 2060 mops the floor with the 1080Ti in Ray Tracing.
 
AMD claim up to 50% more performance per watt. It's exactly what you said: how accurate this statement is will decide a lot. Does RDNA 2 only reach this figure in some lab test that has no real-world application?

I hope it's accurate, and maybe even erring on the cautious side.

Yeah, I think in the best-case scenario it's that (cue biased AMD slide), but in reality the "up to" will likely be half of that. As AMD generally release hotter, higher-TDP cards than Nvidia, power might for once not be a point to chalk up against them this time round. If they end up with similar power consumption, it leaves the stack to be judged on speed, 1% lows etc.
 
How and why? You do realise that you are comparing an APU to a full desktop GPU.

I just don't see it happening, and released trailers have already shown that devs need to use RT on consoles wisely.

My point of view on all this: yes, the next generation of consoles will be a nice upgrade over the current ones, but they will fall short of what PC GPUs already have to offer.

RDNA 2 on the console will not be the same as RDNA 2 on the desktop; the desktop will be a much higher-clocking GPU with far greater performance in both RT and normal gaming.

Fully agree with you. It's only common sense.

And yes, I fully expect the RDNA 2 RT solution to beat the 2080Ti. I will be disappointed if it does not.
 
Not an Apple Fan has a stream scheduled for 1700 on YouTube with the caption 'Graphics cards are about to get very fast'; I wonder if it's news / 'leaks' or just further speculation / discussion.

Happens. I am always happy to put my hands up and say I got it wrong ;)
You must have sore arms :p
 
You can't compare a crappy stock-clocked blower reference card to a non-reference custom OC'd card, unless we're doing that now?

If so, i apologise :p


The PS5 should beat the 2080Ti in RT, as from what we've seen its RT is a blurry mess, since it's only doing it at 1080p. :p

And there's everyone slamming DLSS for its pi$$-poor, detail-losing, blurry quality :D

LOL, you WCCFTECH-posting pot stirrer. :p:D
 
I also wonder how they can say it's a bad solution? What have they to compare it to? Apart from cards with no Ray Tracing hardware, and even then the 2060 mops the floor with the 1080Ti in Ray Tracing.

Before I forget, as you have mentioned it a few times in this thread: when it comes to RT comparisons and the consoles, I am not that fussed about it for the time being. I want to see where the new hardware kicks in against the performance standard of the 2080Ti in fps, and leave the RT element out for now.

I would be happy enough with "better than the 2080Ti by x%", so that you can play at 4K or 1440p comfortably, whether it be on console or PC.
 
Before I forget, as you have mentioned it a few times in this thread: when it comes to RT comparisons and the consoles, I am not that fussed about it for the time being. I want to see where the new hardware kicks in against the performance standard of the 2080Ti in fps, and leave the RT element out for now.

I would be happy enough with "better than the 2080Ti by x%", so that you can play at 4K or 1440p comfortably, whether it be on console or PC.

Ray Tracing is the future, and I knew that when it came out, but the videos and screenshots didn't do anything for me. Performance was meh and DLSS wasn't great, so double meh!

But I am interested in the technology, so last year I decided to get an RTX card. The crazy prices people were paying for 1080Tis allowed me to move from a 1080Ti to a 2070 Super for very little money.

When I started playing RT games my whole stance on Ray Tracing changed. To me it's kind of like VR: the full effect of playing a game with RT on vs RT off doesn't come across in a video. I hope there is a big push to get it into more games and to get better Ray Tracing hardware out there. I don't care whose solution is best; whichever one brings the faster RT with the most effects enabled will get my money.

The catch-22 at the moment is that Ray Tracing is heavily dependent on rasterised performance, and will be for a couple of generations.
 
Not sure how you can say that Nvidia's solution is bad?
Sure, it's a lot better than having no hardware acceleration at all.
But if possibly up to half of the rasterisation hardware's resources aren't doing anything useful when ray tracing is enabled, maybe the design is far from optimal?
We don't have infinite transistor budgets, and transistors also have a power cost.
So the transistors put into a chip should be utilised as much and as efficiently as possible.
IIRC that's what has been behind AMD's gaming performance (and power efficiency) lagging behind Nvidia's, like Vega having lots of teraflops but no corresponding fps.
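
As a rough illustration of that Vega point (the peak FP32 figures are the published specs; the equal-fps assumption is purely illustrative, real results vary by game):

```python
# "Lots of teraflops but no corresponding fps", roughly quantified.
# Peak FP32: Vega 64 ~12.7 TFLOPS, GTX 1080 ~8.9 TFLOPS. The two cards
# traded blows in games at launch, so assume equal fps here purely for
# illustration; real results vary by title.
vega64_tflops = 12.7
gtx1080_tflops = 8.9
relative_fps = 1.0   # assumption: roughly equal gaming performance

ratio = (relative_fps / gtx1080_tflops) / (relative_fps / vega64_tflops)
print(f"Nvidia extracts ~{ratio:.2f}x more fps per teraflop")
# ~1.43x: a chunk of the transistors behind Vega's flops sat idle in games.
```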


Aye, adding transistors can really make a difference. If you look back at the 8800GTX vs the 7900GTX, there was no die shrink but they more than doubled the size of the chip and the number of transistors.

But performance didn't double; it's not a linear improvement. If AMD make a 500mm^2 chip, it won't be double the performance of the 5700XT.
Out of Navi 10's 250 mm^2, not everything is processing units.
There's some amount of fixed functionality which doesn't need to be doubled when doubling the number of processing units.
Also, with Navi 10's relatively small size, clocks were probably pushed up to compensate, which easily hurts power efficiency.
Fewer transistors give better power efficiency only when run at the optimal clocks for the particular manufacturing tech.

So if Nvidia can supposedly bring a major amount of performance out of the current Turing architecture by improving it, why couldn't AMD double the high end's performance from the 5700 XT?
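
On the clocks-versus-efficiency point, a small sketch of the usual first-order model (dynamic power ~ f * V^2, with the assumption that voltage rises linearly with frequency near the top of the curve, which is a simplification):

```python
# Why pushed clocks hurt efficiency: dynamic power scales roughly as
# P ~ f * V^2, and near the top of the frequency/voltage curve the
# voltage has to rise with frequency. Assuming V rises linearly with f
# (a simplification), power grows with roughly the cube of the clock.
def relative_power(clock_ratio: float, voltage_exponent: float = 1.0) -> float:
    """Relative dynamic power for a given clock ratio, assuming
    V ~ f ** voltage_exponent (illustrative, not measured)."""
    voltage_ratio = clock_ratio ** voltage_exponent
    return clock_ratio * voltage_ratio ** 2  # P ~ f * V^2

print(f"+10% clocks -> ~{relative_power(1.10):.2f}x power")  # ~1.33x
# Which is why a wider chip at modest clocks can match a smaller,
# highly clocked chip's performance at much lower power.
```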
 
Well, for all Nvidia's faults and failings, and people have good reason to despise them, you can't say they are bad engineers.
Quite. So unless nV did an Intel and drastically downsized their R&D, you wouldn't really expect AMD to leapfrog them.

As good as AMD are/can be, I don't feel nV will drop the ball with their engineering - just their pricing :p
 