
AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Status
Not open for further replies.
From my understanding, they are basically adding hardware to remove the limitations (i.e. communication with other sub-systems) and throughput restrictions that prevent the general shader architecture from being good at ray tracing. Ultimately, though, there are limits to how good the shader architecture can be at ray tracing, even when "unleashed", and even in situations where you can achieve low- or zero-penalty concurrency with other game rendering work.

You're right and I was wrong: what it isn't is dedicated RT cores like Turing's. The RT functions are part of the existing shaders; they just had an instruction added to make that happen, and there is probably some sort of controller for it.

Just quoting you guys as you both seem to have an interest in the technical side of things.

AMD submitted a new Patent a few days ago.

http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=/netahtml/PTO/search-adv.html&r=1&p=1&f=G&l=50&d=PG01&S1=20200193681.PGNR.&OS=DN/20200193681&RS=DN/20200193681

Their Ray Tracing solution has taken a new twist. Seems very late in the day to be changing things up. But, it does look interesting in theory.

EDIT: Made a mistake :) Never looked at the filed date. Thanks to @Satchfanuk for pointing it out. Maybe not so late in the day after all :p
 
Just quoting you guys as you both seem to have an interest in the technical side of things.

AMD submitted a new Patent a few days ago.

http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=/netahtml/PTO/search-adv.html&r=1&p=1&f=G&l=50&d=PG01&S1=20200193681.PGNR.&OS=DN/20200193681&RS=DN/20200193681

Their Ray Tracing solution has taken a new twist. Seems very late in the day to be changing things up. But, it does look interesting in theory.

Late in the day...??? This was filed Dec 2018

By the looks of things, it was approved a few days ago or resubmitted with changes perhaps.
 
That would be funny and sad at the same time if it's true. All we need is a 1200 price tag to complete how ridiculous that is.

The 2080 Ti is 40% faster than the 1080 Ti.

It will probably get a clock bump before release; 1935 MHz is not slow, so again the hype in the title doesn't fit with the slide, but on a refined 7nm node I can see it at 2100 to 2200 MHz, which would take it to +40%. At least we hope so.

+40% is a solid upgrade and right in line with the last jump.

[Chart: relative 4K performance, GTX 1080 Ti at 72% vs RTX 2080 Ti at 100%]
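As a rough sketch of the arithmetic behind that clock-bump estimate (a hypothetical helper, names are mine, assuming performance scales linearly with core clock, which real GPUs only approximate):

```python
# Hypothetical clock-scaling projection. Assumes performance scales
# linearly with core clock, an optimistic simplification.
def scaled_gain(base_gain_pct: float, base_clock: float, new_clock: float) -> float:
    """Project a performance gain (%) if the clock rises from base_clock to new_clock."""
    return ((1 + base_gain_pct / 100) * new_clock / base_clock - 1) * 100

# Rumoured +30% at 1935 MHz, projected at 2100 and 2200 MHz:
for clk in (2100, 2200):
    print(f"{clk} MHz -> +{scaled_gain(30, 1935, clk):.0f}%")
```

Under that (generous) assumption, 2100 to 2200 MHz lands in the low-to-high 40s, which is roughly where the +40% figure comes from.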
 
Just quoting you guys as you both seem to have an interest in the technical side of things.

AMD submitted a new Patent a few days ago.

http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=/netahtml/PTO/search-adv.html&r=1&p=1&f=G&l=50&d=PG01&S1=20200193681.PGNR.&OS=DN/20200193681&RS=DN/20200193681

Their Ray Tracing solution has taken a new twist. Seems very late in the day to be changing things up. But, it does look interesting in theory.

I had a quick skim over it... it looks like the same thing Coreteks was talking about: AMD's recent architecture addition, "Sienna Cichlid". Very interesting. He found it so unbelievable that he didn't think it was true, but he left a little trinket at the end. Sienna is a shade of red ;)

It has 4 SDMA engines, each capable of 1 GB transfer speeds. To put that in perspective, RDNA 1 has 2 SDMA engines, each capable of 4 MB transfer speeds.

Now we know why AMD was so keen on getting PCIe4 on their Motherboards.

https://youtu.be/h8H_7VguCzg?t=334

Full video...
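Taking the figures quoted above at face value (the engine counts and per-engine rates come from the rumour, not a verified spec sheet), the aggregate difference works out to:

```python
# Aggregate SDMA throughput, using the rumoured figures above (unverified).
sienna = 4 * 1024  # Sienna Cichlid: 4 engines x 1 GB, in MB
rdna1 = 2 * 4      # RDNA 1: 2 engines x 4 MB, in MB
print(f"{sienna / rdna1:.0f}x the aggregate transfer capacity")
```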

 
Late in the day...??? This was filed Dec 2018

By the looks of things, it was approved a few days ago or resubmitted with changes perhaps.

Would you believe I never looked at the filed date? DOH!! And I wouldn't mind, but I should have, because their other patent was filed in 2017 but only published in 2019.
 
Whether Coreteks wants to make this stuff up or somebody else is just trolling him I don't know.

I don't know either, some of the info he has is way out there.

But I can imagine there are some practical jokers in the marketing and tech departments of these big tech companies who release crazy info just to laugh at the reaction it gets. A slow day in the office? How about we leak that the next GPU is going to have an RT add-in board? I bet it happens all the time :p
 
I had a quick skim over it... it looks like the same thing Coreteks was talking about: AMD's recent architecture addition, "Sienna Cichlid". Very interesting. He found it so unbelievable that he didn't think it was true, but he left a little trinket at the end. Sienna is a shade of red ;)

It has 4 SDMA engines, each capable of 1 GB transfer speeds. To put that in perspective, RDNA 1 has 2 SDMA engines, each capable of 4 MB transfer speeds.

Now we know why AMD was so keen on getting PCIe4 on their Motherboards.

https://youtu.be/h8H_7VguCzg?t=334

Full video...

That was exciting
 
Same here mate. It is ridiculous if you ask me. But Melmec will defend it :p

Even though I see where he is coming from, it still does not make it right imo. You can't apply Vaseline to a 4K image and then call it "native"!

LOL Mr TNA, funny guy :D No need to defend any Nvidia technology with you, sure aren't you going to buy a Nvidia card no matter what :p

But, since I am here ;) Let me try two different ways of explaining why the comparison is perfectly valid.

First:

Let me ask you a question. Suppose you, Mr TNA, CEO of NAviMdiaD (pronounced NA-VIM-Dee-ad), invented a brand new lightweight AA to replace FXAA/TAA: an AA solution that offered more FPS than FXAA while reducing jaggies, but without the blur. You had to send in comparison screenshots to show how much better than FXAA it was. Would you send in screenshots comparing with

A: Native image with No AA

or

B: Native image with FXAA applied.

I am pretty sure your answer will be B, as that's the AA version you are showing it to be better than.

Second:

Now DLSS. When Nvidia says it's better than native and compares it with images using FXAA, it's implied that it's also better than native without any AA.

Let me explain why. You have a native image without any AA; it will have jaggies. You apply FXAA to reduce the jaggies, which it does, but this introduces another problem, blur, and it's much worse in some games than others. So you have an image that's better than native in one regard (reduced jaggies) but worse in another (blur).

Now you use DLSS. What happens? It's better than native without AA because it reduces the jaggies, and it's better than native with FXAA because there is no blur.

Future versions of DLSS might eventually look better than native with SSAA applied, and if that happens we will be comparing them to those types of screenshots.



Look what you made me do?? Last time I will respond to your baiting!! Honest :p:D
 
The 2080 Ti is 40% faster than the 1080 Ti.

It will probably get a clock bump before release; 1935 MHz is not slow, so again the hype in the title doesn't fit with the slide, but on a refined 7nm node I can see it at 2100 to 2200 MHz, which would take it to +40%. At least we hope so.

+40% is a solid upgrade and right in line with the last jump.
I can only go on the rumour benchmark. Therefore, I cannot in good conscience add 10% that wasn't demonstrated. That could actually be the OC. Right now the rumour is 30%.
 
Turing, and its price/performance stagnation, is the outlier rather than the norm.

Pascal was pretty good, and the 1080ti, when compared to previous gen, looks like Nvidia was swinging for the fences.

Price performance stagnation, or even regression, could become the new normal, but one gen is kind of soon to make the call with multiple generations of good progress before Turing.

I guess I expect the worst from Nvidia all the time. They have never failed me in this regard.
 
I had a quick skim over it... it looks like the same thing Coreteks was talking about: AMD's recent architecture addition, "Sienna Cichlid". Very interesting. He found it so unbelievable that he didn't think it was true, but he left a little trinket at the end. Sienna is a shade of red ;)

It has 4 SDMA engines, each capable of 1 GB transfer speeds. To put that in perspective, RDNA 1 has 2 SDMA engines, each capable of 4 MB transfer speeds.

Now we know why AMD was so keen on getting PCIe4 on their Motherboards.

https://youtu.be/h8H_7VguCzg?t=334

Full video...


It does give you the feeling that AMD has something new to show off. I would not be surprised if they have taken inspiration from the consoles they helped develop. I'm expecting similar performance to Nvidia, but wildly different approaches.
 
No need to defend any Nvidia technology with you, sure aren't you going to buy a Nvidia card no matter what :p
I would not go that far; price will have a huge impact on whether I go Nvidia, but I am pretty confident they will hit a price-to-performance point I am just about happy with :D

Let me ask you a question. Suppose you, Mr TNA, CEO of NAviMdiaD (pronounced NA-VIM-Dee-ad), invented a brand new lightweight AA to replace FXAA/TAA: an AA solution that offered more FPS than FXAA while reducing jaggies, but without the blur. You had to send in comparison screenshots to show how much better than FXAA it was. Would you send in screenshots comparing with

A: Native image with No AA

or

B: Native image with FXAA applied.

I am pretty sure your answer will be B, as that's the AA version you are showing it to be better than.
The answer to this is obvious. But I am not the CEO, I am the end user so I will not lap up marketing stuff ;)

Now DLSS. When Nvidia say it's better than Native and compare it with images using FXAA, it's implied that it's also better than Native without any AA.

This is where we have different points of view. It should not be implied. Many won't understand that it is implied. Sure, you and I will get it, but the majority, I reckon, will not. Just more marketing.

If I could be bothered, I would install a few games and show you that they look fine and there aren't many jaggies with no AA applied. To me, that is the native image.

We will have to agree to disagree on this one Mr Melmac ;):D:p
 
It does give you the feeling that AMD has something new to show off. I would not be surprised if they have taken inspiration from the consoles they helped develop. I'm expecting similar performance to Nvidia, but wildly different approaches.

It ties right in with a YouTube video on PC World where one of the AMD guys was asked about PCIe 4 bandwidth on motherboards and whether that would help things like dual-GPU installs. He indicated not really, but I always took away that he was hinting at something else coming along that would use the bandwidth. Perhaps the rumours we are seeing at the moment are just that thing.

Found it... it starts at 28 minutes. There is a definite pause and consideration of what to say, as he knows what's down the line...

https://www.youtube.com/watch?v=OY8qvK5XRgA
 
If AMD wants to really get people buying their cards they should offer the "Navi Credit Card" (by any credit card company) and offer 12 months interest free on purchases over $500.
6 months interest free on purchases over $300
:D
 
The 2080 Ti is 40% faster than the 1080 Ti.

It will probably get a clock bump before release; 1935 MHz is not slow, so again the hype in the title doesn't fit with the slide, but on a refined 7nm node I can see it at 2100 to 2200 MHz, which would take it to +40%. At least we hope so.

+40% is a solid upgrade and right in line with the last jump.

[Chart: relative 4K performance, GTX 1080 Ti at 72% vs RTX 2080 Ti at 100%]

100 - 72 = 40?
 
100 - 72 = 40?

That graph is showing you that the 1080 Ti has 72% of the performance of the 2080 Ti's 100%.

If you want to know how much faster the 2080 Ti is, just divide 100 by 72, which gives 1.3888888889. Round that to 1.39, then times 100 = 139%, with the 1080 Ti being the one in the graph at 100%. In other words, the 2080 Ti is about 39% faster.

Or you could do the division and just take what's after the decimal point as the percentage by which the 2080 Ti is faster. I ain't no maths teacher, as you can see, lol, but it works for me and I am too long out of school to remember the ins and outs.
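The arithmetic above can be sketched as a tiny helper (the function name is mine, not from the thread):

```python
# Relative-performance arithmetic from the chart above (assumed readings:
# 1080 Ti = 72%, 2080 Ti = 100%).
def speedup(faster: float, slower: float) -> float:
    """Return how much faster `faster` is than `slower`, as a percentage."""
    return (faster / slower - 1.0) * 100.0

# 100 / 72 = 1.3889, so the 2080 Ti is ~39% faster,
# not 100 - 72 = 28% and not 40%.
print(f"{speedup(100, 72):.0f}%")
```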
 
That's at 1080p, where the 2080 Ti is held back. Humbug's chart was at 4K, where the 2080 Ti's power shines through. Easy to see from these charts.

https://www.techpowerup.com/review/nvidia-geforce-rtx-2070-founders-edition/33.html

https://www.google.com/amp/s/www.pc...ti-vs-rtx-2080-ti-should-you-upgrade.amp.html

GeForce GTX 1080 Ti vs. GeForce RTX 2080 Ti
In our performance tests across a suite of various games, the GeForce RTX 2080 Ti Founders Edition outpunched an overclocked PNY GTX 1080 Ti XLR8 by an average of 33 percent, and a whopping 45 percent in Middle-earth: Shadow of War and Rainbow Six Siege. Nvidia’s new flagship cleared 60 frames per second at 4K resolution with graphics options cranked to Ultra settings in every game but Ghost Recon Wildlands, whose top-end settings were designed to utterly melt GPUs. If you ditch anti-aliasing and drop the graphics settings in games down to High—a common visual configuration for pixel-packed 4K monitors—the GeForce RTX 2080 Ti clears 80 fps in all the games in our test suite.


 