AMD Navi 23 ‘NVIDIA Killer’ GPU Rumored to Support Hardware Ray Tracing, Coming Next Year

Devs meaning game developers, not engine development like Unreal Engine *
Lazy development doesn't come from the engine; it comes from the game developer.
To be clear, the reason game engines get updated is due to feedback from game developers.
Another example is the Frostbite engine from DICE, a game developer. So it's not as absolute as you imply.

But sure, "create" wasn't the correct term... "help tailor" is more apt, thanks.
 
Last edited:
Right now we need to buffer the consoles' bandwidth superiority with higher VRAM, until we get to the point where the PC also uses GDDR-class memory as main RAM. Let's do a quick comparison (the math behind these figures is sketched below):

DDR4-3600 (single channel): 28,800 MB/s
DDR4-4000 (single channel): 32,000 MB/s
DDR5-4800 (single channel): 38,400 MB/s
Xbox Series X: 560,000 MB/s
PS5: 448,000 MB/s
(I kept everything in MB/s instead of GB/s so the numbers compare directly)
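For anyone who wants to check those figures, here's a minimal Python sketch of the theoretical peak math. It assumes the commonly reported console memory configs (14 Gbps GDDR6 on a 320-bit bus for the Series X's fast 10GB pool, 256-bit for the PS5); the DDR numbers are single-channel.

Code:
# Theoretical peak bandwidth in MB/s: transfers per second x bytes per transfer.
def ddr_bandwidth_mbs(transfer_rate_mts, channels=1):
    # DDR4/DDR5 presents a 64-bit (8-byte) bus per channel.
    return transfer_rate_mts * 8 * channels

def gddr6_bandwidth_mbs(data_rate_mts, bus_width_bits):
    # GDDR6: total bus width in bits / 8 = bytes moved per transfer.
    return data_rate_mts * bus_width_bits // 8

print(ddr_bandwidth_mbs(3600))              # 28800  (DDR4-3600, 1 channel)
print(ddr_bandwidth_mbs(4800))              # 38400  (DDR5-4800, 1 channel)
print(ddr_bandwidth_mbs(3600, channels=2))  # 57600  (same DDR4 kit, dual channel)
print(gddr6_bandwidth_mbs(14000, 320))      # 560000 (Series X, fast 10GB pool)
print(gddr6_bandwidth_mbs(14000, 256))      # 448000 (PS5)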


As long as main memory sits at the bottom of the totem pole, it will become more and more of a hindrance to gaming than a help, IMO.
Pretty certain most higher-end next-gen graphics cards will beat the consoles in memory bandwidth.
The PS5 especially, already matched by the mid-level RX 5700 XT, is going to get smashed in VRAM bandwidth.
And the absolute top cards will possibly have as much memory as the whole console, not all of which is used as VRAM:
10GB in Xbox Scarlett, and I would expect the PS5 to follow suit (so rather little memory left for everything else).

And insisting on GDDR for the CPU would be just foolish:
GDDR is tuned for bandwidth at the expense of some latency, because bandwidth is what GPUs need most,
while the CPU puts a higher priority on low latency.
Also, GDDR doesn't tolerate longer traces, and especially not sockets,
so it would mean both the CPU and the memory soldered onto the mobo...


Also, the bottom of the storage totem pole is always the non-volatile mass storage!
Which is all those consoles have beyond that memory pool, which is quite small for a generational update.
Again, a high-end PC should have 32GB of main memory, giving plenty of extra buffer/cache,
with dual channel giving double the bandwidth of your list and far above any SSD (and latency orders of magnitude lower).
So it's those next-gen consoles which will be playing catch-up against a high-end PC in hardware capabilities/resources.
 
Pretty certain most higher-end next-gen graphics cards will beat the consoles in memory bandwidth.
The PS5 especially, already matched by the mid-level RX 5700 XT, is going to get smashed in VRAM bandwidth.
And the absolute top cards will possibly have as much memory as the whole console, not all of which is used as VRAM:
10GB in Xbox Scarlett, and I would expect the PS5 to follow suit (so rather little memory left for everything else).
Being pretty certain is not the same as a developer tailoring a game for it. That's the disparity you are not factoring in. As it stands right now, it takes roughly 2x the power on PC to match a console (due to overhead, etc.), and that margin will only grow with next-gen gaming. It's not about how much memory a 3090 has, but what's done with it in games, versus how developers tailor their games for the console. So having a 24GB video card is only marginalized by a console-ported game that's not tailored for it.

Which is the point I'm bringing to light.

And insisting on GDDR for the CPU would be just foolish:
GDDR is tuned for bandwidth at the expense of some latency, because bandwidth is what GPUs need most,
while the CPU puts a higher priority on low latency.
Also, GDDR doesn't tolerate longer traces, and especially not sockets,
so it would mean both the CPU and the memory soldered onto the mobo...

Also, the bottom of the storage totem pole is always the non-volatile mass storage!
Which is all those consoles have beyond that memory pool, which is quite small for a generational update.
Again, a high-end PC should have 32GB of main memory, giving plenty of extra buffer/cache,
with dual channel giving double the bandwidth of your list and far above any SSD (and latency orders of magnitude lower).
So it's those next-gen consoles which will be playing catch-up against a high-end PC in hardware capabilities/resources.
You've provided no technical reason why PCs cannot use or take advantage of more bandwidth, other than some speculation as to why you feel they cannot.
Can you provide any insight into what these future CPUs will be? No.
Can you hypothesize why AMD/Intel are pursuing more heterogeneous computing, the future tech not yet seen? No.
Can you give any forecast as to the future of APUs as we know them, and how APUs will be leveraged to scale in both future mobile and mainframe applications? No.
So how can I just take your word that GDDR is foolish when AMD is working on using HBM for the CPU?

This week's hardware news recap primarily focuses on some GN-exclusive items pertaining to AMD's plans with system memory in the future, mostly looking toward DDR5 for CPUs and HBM integration with CPUs, creating "near memory" for future products.
https://www.gamersnexus.net/news-pc/3286-hw-news-amd-going-ddr5-hbm-for-cpus-7nm-challenges

AMD and Intel are both working on their versions of "Near Memory". That will revolutionize the way memory is used today. I'm not sure why you believe that the way the PC works today will be the same 5-10 years from now. That's what makes what you say absurd: it only relates to where we are now. It doesn't dictate what we will see by then.


Edit:
Way back when AMD bought ATI, the vision discussed by those who worked for ATI on the forums I was associated with at the time was a simple one: CPU and GPU as a single entity. I was told that's why AMD wanted ATI. They wanted a heterogeneous platform that was all-encompassing.

So far, APUs are what have come out of it. But that's just the beginning.
 
Last edited:
Part 2
Heterogeneous GPU – Maximizing chip utilization through the use of variable width SIMD units

Perhaps even more impressive is a new patent filed by AMD that aims to improve the chip utilization in its Exascale projects. As you may know many GPU workloads are non-uniform and have numerous wavefronts with predicated-off threads. Unfortunately, the predicated instructions take up space, waste power, produce heat, and produce no useful output. Even the most modern GPU micro-architectures are unable to cope with certain dynamic runtime behaviors which are very difficult to know at compile time.

Therefore to solve this problem AMD have proposed a disruptive approach to push the chip utilization level to the limit: A new GPU architecture in which its SIMD units have different numbers of ALUs, so that each SIMD unit can run a different number of threads (fig. 7). Thus, by providing a set of execution resources within each GPU compute unit tailored to a range of execution profiles, the GPU can handle irregular workloads more efficiently.

This approach also works very well with branch divergence in a wavefront. Because of branch divergence, some threads follow a control flow path and other threads will not follow the control flow path, which means that many threads are predicated off. So effectively there will only be a few subsets of threads running. When it is determined that the active threads can be run in a smaller width SIMD unit, then the threads will be moved to the smaller width SIMD unit, and any unused SIMD units will not be powered up. Likewise, if divergence of control or other problems reduces the number of active threads on a wavefront, the more restricted execution feature can also be more efficient.

Read More: https://coreteks.tech/articles/index.php/2020/08/26/amd-master-plan-pt-2-heterogeneous-revolution/
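A rough way to picture the variable-width SIMD idea described in that excerpt; this is purely illustrative, and the unit widths, wavefront size and selection policy below are my own assumptions, not anything from the patent:

Code:
# Toy model: pick the narrowest SIMD unit that still covers the active
# (non-predicated) threads of a wavefront, so wider units can stay powered down.
SIMD_WIDTHS = [4, 8, 16, 32]  # hypothetical mix of narrow and wide units

def pick_simd_width(exec_mask):
    """exec_mask is a list of booleans, one per thread in the wavefront."""
    active = sum(exec_mask)
    for width in SIMD_WIDTHS:
        if active <= width:
            return width
    return SIMD_WIDTHS[-1]

# A 32-thread wavefront where branch divergence has predicated off most threads:
exec_mask = [i % 8 == 0 for i in range(32)]  # only 4 threads remain active
print(pick_simd_width(exec_mask))            # 4 -> the 8/16/32-wide units stay idle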

The point is that it's no surprise Intel is looking to bring the GPU to the forefront. They have to in order to compete with AMD, and it's not just for gaming. I am sure that Intel has a good idea of AMD's plan with Raja on board now. It's also why Nvidia desperately seeks to buy Arm. All roads lead to heterogeneous 'computing' in one form or another, be it AMD, Intel or Nvidia.

When will we see it? Who knows, but I suspect this won't be "just for gaming". However, rumor has it that RDNA 3 will start the move toward this heterogeneous GPU. I've found no info about this for RDNA 2, though.


Edit:
In other news (about high bandwidth):
Black Ops Cold War next-gen performance:
PS5/XSX
4K resolution
Up to 120FPS
Ray-traced global illumination
HDR enabled
3D audio support

Read more: https://www.tweaktown.com/

It's not clear if their battle royale variant will be 4K/120FPS with RT GI, but that's where 16GB of VRAM becomes paramount. More games to follow, guaranteed.

However, what's so important here is that this is a cross-platform title. It's the first game that is going to pit next-gen console performance against PC. This is where we will know how things pan out.
 
Last edited:
In other news (about high bandwidth):
Black Ops Cold War next-gen performance:
PS5/XSX
4K resolution
Up to 120FPS
Ray-traced global illumination
HDR enabled
3D audio support

Read more: https://www.tweaktown.com/

It's not clear if their battle royale variant will be 4K/120FPS with RT GI, but that's where 16GB of VRAM becomes paramount. More games to follow, guaranteed.

However, what's so important here is that this is a cross-platform title. It's the first game that is going to pit next-gen console performance against PC. This is where we will know how things pan out.
I wonder if that will make the game 400GB this time around... The current COD is getting ridiculous now at over 200GB.
 
I'm getting the feeling games will start being released on their own dedicated NVMe drives :D
Honestly, the PS5 has an 825GB SSD, if I am not mistaken, for installing games. If the current COD takes 200GB+, there is not much space left for many more games. And then you will need a dedicated fast external NVMe drive for the PS5, or even worse, a proprietary SSD for the Xbox. Hey, we are going back to consoles supporting cartridges, haha - they are just NVMe cartridges..

What is in the game that takes up 200GB, omg. Sorry, off topic.
 
You've provided no technical reason why PCs cannot use or take advantage of more bandwidth, other than some speculation as to why you feel they cannot.


GDDR is SLOOOOOOOW when you look at latency, and that's no use for a CPU. You really don't want GDDR for your main memory if you can help it. GDDR vs DDR is a set of tradeoffs: GDDR gives higher bandwidth; DDR returns less data per access but with a much lower delay. GDDR is good when you have large vectors of information you want to work on all at the same time, like in a graphics card, and you don't mind waiting for it. DDR4 wins out when you want to fetch lots of different, small pieces of information, like in a CPU. DDR4 runs at higher clock speeds with relatively low CAS latency compared to GDDR6*.

PCs could take advantage of more bandwidth, but if it came at the cost of much larger latency in main memory, that wouldn't be a good thing.

[* it's quite hard to find real figures here though, please shout if you can find any!]
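For what it's worth, here's the back-of-the-envelope conversion you'd need to compare like for like once figures do turn up: CAS latency in nanoseconds is just cycles divided by the command clock. The DDR4 timings below are common retail specs; the GDDR6 line is a placeholder assumption, since (per the footnote) vendors rarely publish real GDDR6 timings. And CAS is only one piece of the end-to-end latency a CPU actually sees.

Code:
# CAS latency in nanoseconds = CL cycles / command clock (MHz) * 1000.
def cas_ns(cl_cycles, command_clock_mhz):
    return cl_cycles / command_clock_mhz * 1000

print(round(cas_ns(16, 1600), 2))  # DDR4-3200 CL16 -> 10.0 ns
print(round(cas_ns(16, 1800), 2))  # DDR4-3600 CL16 -> 8.89 ns
print(round(cas_ns(20, 1750), 2))  # GDDR6 @ 14 Gbps, ASSUMED CL20 on a 1750 MHz clock -> ~11.43 ns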
 
Last edited:
GDDR is SLOOOOOOOW when you look at latency, and that's no use for a CPU. You really don't want GDDR for your main memory if you can help it. GDDR vs DDR is a set of tradeoffs: GDDR gives higher bandwidth; DDR returns less data per access but with a much lower delay. GDDR is good when you have large vectors of information you want to work on all at the same time, like in a graphics card, and you don't mind waiting for it. DDR4 wins out when you want to fetch lots of different, small pieces of information, like in a CPU. DDR4 runs at higher clock speeds with relatively low CAS latency compared to GDDR6*.

PCs could take advantage of more bandwidth, but if it came at the cost of much larger latency in main memory, that wouldn't be a good thing.

[* it's quite hard to find real figures here though, please shout if you can find any!]
Sloooow based on what? And what other application on the CPU isn't more sensitive to latency than games? Because this explanation contradicts the very proof of concept: the next-gen consoles' hardware uses the very thing you say makes GDDR6 slow.

I've heard that explanation before and found no merit in it in practice for home PCs. Furthermore, AMD/Intel are working on going beyond GDDR6 and using HBM as near memory. So that's just an old wives' tale from back in the GDDR3 days. That's how old that tale is. And I'm surprised it's still making the rounds.

Although this doesn't discuss DDR specifically, there is still technical merit in understanding latency under load. This is important because that's how developers are tailoring their future games.
 
Last edited:
Sloooow based on what?

Slow based on the latency of getting data from RAM into the processor to be worked on.

And what other application on the CPU isn't more sensitive to latency than games?

I'm not sure what your question means, can you restate please?


Because this explanation contradicts the very proof of concept: the next-gen consoles' hardware uses the very thing you say makes GDDR6 slow.

Consoles are a compromise between price and performance. If they can be made to work well enough with higher-latency RAM for the CPU, then they gain by not having to copy things around between CPU and GPU memory, and they save material costs by not needing dedicated system RAM plus graphics RAM. Obvious win for them.
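In case the copy saving isn't obvious, here's a deliberately simplified sketch of the two data paths; the function names are made-up stand-ins, not a real graphics API:

Code:
def gpu_draw(buffer):
    # Placeholder for "the GPU consumes this buffer".
    pass

def discrete_pc_path(asset):
    system_ram = bytearray(asset)   # 1. load/decompress into DDR4 system RAM
    vram = bytearray(system_ram)    # 2. copy across PCIe into the card's GDDR6
    gpu_draw(vram)                  # 3. GPU reads it from VRAM

def unified_console_path(asset):
    shared_pool = bytearray(asset)  # 1. load straight into the one shared GDDR6 pool
    gpu_draw(shared_pool)           # 2. CPU and GPU address the same memory, no second copy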


I've heard that explanation before and found no merit in it in practice for Home PC's.

Then you don't know what you're looking at. One of the first things that "Steve" says in your video is that different workloads have different requirements for latency and throughput. For general purpose computing (the stuff your CPU does), latency is key because memory access is much more random and in small chunks. For graphics you're often looking at large vectors so throughput is more important. For a "home PC" you're best off with separate memory types to do different jobs.
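If you want to see the difference in access patterns rather than take anyone's word for it, here's a crude sketch. It only illustrates the shape of the two workloads; in pure Python the interpreter overhead swamps the raw memory timings, so treat the numbers as indicative at best:

Code:
import time
import numpy as np

N = 1 << 24                        # ~16M elements, well past any CPU cache

# Latency-bound pattern: each load depends on the previous one (random pointer chase).
chain = np.random.permutation(N)
idx, t0 = 0, time.perf_counter()
for _ in range(1_000_000):
    idx = chain[idx]
chase_time = time.perf_counter() - t0

# Bandwidth-bound pattern: one long, prefetch-friendly sequential pass.
data = np.ones(N, dtype=np.float32)
t0 = time.perf_counter()
total = data.sum()
stream_time = time.perf_counter() - t0

print(f"pointer chase: {chase_time:.3f}s  streaming sum: {stream_time:.3f}s")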

So that's just an old wives tale back during gddr3 days. That's how old that tale is. And, I'm surprised it's still making the rounds.

Well, except it's still true that, in general, DDR4 runs faster and has fewer cycles of CAS and other latency. So it's not an old wives' tale. And for all your calls on other people to provide evidence, you've provided none.

Yes, things may change in future, and we may find that there's a memory type that can "rule them all" down the line, or near-memory may change the world.

Have a read of this for instance - https://semiengineering.com/hbm2-vs-gddr6-tradeoffs-in-dram/

It includes the line "I’ve seen people with DDR5 and HBM2 or GDDR6 on the same die." - now why would they do that unless there were characteristics of both that might be useful in different circumstances?
Particularly where AI is concerned, we're back to vector processing, and throughput starts to matter more than latency again, but for other workloads on the same CPU lower latency may be important.

As it stands, GDDR6 is not "better" than DDR4, they are technologies with different uses.
 
Slow based on the latency of getting data from RAM into the processor to be worked on.
You've provided no practical example. How is anyone supposed to know what you are talking about if you don't provide one?


I'm not sure what your question means, can you restate please?
Jimmy, how is this a hard question? Again, you've provided no example of which applications are hindered by the latency. If you cannot provide that insight, there is no merit to your assertion.



Consoles are a compromise between price and performance. If they can be made to work well enough with higher-latency RAM for the CPU, then they gain by not having to copy things around between CPU and GPU memory, and they save material costs by not needing dedicated system RAM plus graphics RAM. Obvious win for them.
The consoles are using AMD's CPU/GPU and 16GB of GDDR6, and somehow you see that as a compromise? That's a self-defeating statement which crumbles your house-of-cards assertion that a CPU could not benefit from GDDR6 in games. The example is the console. :p


Then you don't know what you're looking at. One of the first things that "Steve" says in your video is that different workloads have different requirements for latency and throughput. For general purpose computing (the stuff your CPU does), latency is key because memory access is much more random and in small chunks. For graphics you're often looking at large vectors so throughput is more important. For a "home PC" you're best off with separate memory types to do different jobs.
There you go again with your appeal-to-authority fallacy. Again, nothing concrete provided as to what actually gets slowed down.


Well, except it's still true that, in general, DDR4 runs faster and has fewer cycles of CAS and other latency. So it's not an old wives' tale. And for all your calls on other people to provide evidence, you've provided none.

Yes, things may change in future, and we may find that there's a memory type that can "rule them all" down the line, or near-memory may change the world.

Have a read of this for instance - https://semiengineering.com/hbm2-vs-gddr6-tradeoffs-in-dram/

It includes the line "I’ve seen people with DDR5 and HBM2 or GDDR6 on the same die." - now why would they do that unless there were characteristics of both that might be useful in different circumstances?
Particularly where AI is concerned, we're back to vector processing, and throughput starts to matter more than latency again, but for other workloads on the same CPU lower latency may be important.

As it stands, GDDR6 is not "better" than DDR4, they are technologies with different uses.
Jimmy, there is no technical merit to the claim that it cannot work when it's already working in a computer: the console. Of course there are pros and cons between DDR/GDDR/HBM. I'm not talking about AI but about the PC, in the context of PC gaming. And since you cannot provide any practical use case where such a slowdown would occur in PC gaming, this debate is becoming circular and getting way off topic.
:D
 
Last edited:
What are the chances AMD are holding onto their news until just prior to this Nvidia keynote on Monday? Could be a stunning rain-on-the-parade moment :p
I hope that is their plan: wait, let Nvidia show their hand, then lay the smackdown. But it's RTG we are talking about here, so I will not hold my breath.
 