Ratchet and Clank: Rift Apart RDNA 2 Ray Tracing

The PS4 GPU was more advanced than the PC GCN GPUs of its era

The R9 290/290X launched first, before the PS4, and crushed it:

https://www.techpowerup.com/gpu-specs/radeon-r9-290.c2397
https://www.techpowerup.com/gpu-specs/playstation-4-gpu.c2085


You had Crytek, who really pushed technologies forward, only for PC gamers to complain about Crysis.

There were complaints about story/gameplay as well. It basically got the stigma of a "tech demo"; it wasn't all due to performance reasons.
 
With regard to RDNA2 being bad at RT, he was 100% correct. The problem with Tom's 10-game average is that at least 9 of them use a hybrid engine, and 8 of them use only a subset of what RT can do. It's also worth noting that no settings were listed - e.g. we don't know if CP2077 was run with SSR off (which turns RT reflections on) or with Psycho GI.

DF did a decent job of comparing the 3080 with the 6800XT.

Of course you could also consider - [chart omitted]

Tom's Hardware also gave the results with DLSS - [chart omitted]

And their recommendation for RT - [chart omitted]


It comes down to how developers use Ray Tracing.

Nvidia has a lot more RT horsepower in the tank, but that horsepower is only usable when heavy, high-resolution RT is used. If games gravitate towards a hybrid, low-resolution approach to ray tracing, then AMD holds up extremely well, because the extra horsepower Ampere has sits idle doing nothing. A lot of these hybrid games use 1/4 pixel resolution for ray tracing, so an RT reflection would be at a quarter of your screen resolution, and Ampere likes high resolution to flex its power - at low resolution it's constrained and kept on a leash.
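As a rough illustration of the quarter-resolution point, here's a back-of-the-envelope sketch in Python. The numbers and the helper function are my own illustrative choices, not from any particular engine:

```python
# Back-of-the-envelope: ray counts for a quarter-resolution reflection
# pass vs a full-resolution one. Illustrative only - real hybrid
# engines vary in how they allocate rays per pixel.

def reflection_rays(width, height, scale=1.0, rays_per_pixel=1):
    """Rays for a reflection pass rendered at `scale` x screen dimensions."""
    return int(width * scale) * int(height * scale) * rays_per_pixel

w, h = 3840, 2160                                  # 4K output
full = reflection_rays(w, h)                       # native-res reflections
quarter = reflection_rays(w, h, scale=0.5)         # half width x half height = 1/4 the pixels

print(f"full-res rays:    {full:,}")               # 8,294,400
print(f"quarter-res rays: {quarter:,}")            # 2,073,600
print(f"reduction:        {full / quarter:.0f}x")  # 4x fewer rays to trace
```

With 4x fewer rays in flight, the gap between the two architectures' peak RT throughput matters far less.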
 
For context, it was said:
When the consoles launch, they approach Nvidia and AMD to ask what they can provide, and it's always some slightly customized version of their LAST-gen hardware. The consoles aren't spending loads of money to provide next-gen graphics; they're borrowing from old PC tech.
Consoles don't use last-generation technology - if you look at unified shaders, they first appeared on the Xbox 360 GPU. It's mostly current-generation technology, and maybe even aspects of future GPUs. SMT in the consumer space appeared on consoles first, IIRC. The PS4 GPU was more advanced than the PC GCN GPUs of its era, the Xbox Series X GPU apparently has aspects not found in RDNA2, etc. If you go back further, before the x86 consoles, the N64 and its contemporaries were definitely more advanced than the PCs of the era.


A poster said consoles use old technologies, which is untrue, and the links you provided both show the same month of general release, which demonstrates that consoles are not using old technology. Having said that, I should have said similar technology (not more advanced), as they are both related.

You might also notice I didn't mention speed - I mentioned the level of technology, and the PS4 specs and technology were known 7~9 months before the R9 290 launch:
https://techreport.com/news/24725/ps4-architect-discusses-consoles-custom-amd-processor/
https://www.anandtech.com/show/6770/sony-announces-playstation-4-pc-hardware-inside
https://www.extremetech.com/gaming/...avily-modified-radeon-supercharged-apu-design

An additional bus has been grafted to the GPU, providing a direct link to system memory that bypasses the GPU’s caches. This dedicated bus offers "almost 20GB/s" of bandwidth, according to Cerny.

The GPU’s L2 cache has been enhanced to better support simultaneous use by graphics and compute workloads. Compute-related cache lines are marked as "volatile" and can be written or invalidated selectively.

The number of "sources" for GPU compute commands has been increased dramatically. The GCN architecture supports one graphics source and two compute sources, according to Cerny, but the PS4 boosts the number of compute command sources to 64.

The original AMD GCN architecture allowed for one source of graphics commands, and two sources of compute commands. For PS4, we’ve worked with AMD to increase the limit to 64 sources of compute commands — the idea is if you have some asynchronous compute you want to perform, you put commands in one of these 64 queues, and then there are multiple levels of arbitration in the hardware to determine what runs, how it runs, and when it runs, alongside the graphics that’s in the system.

Sony was talking about async compute in early 2013; AMD only started talking about it in 2015:
https://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

Consoles have limitations in die area, so you are correct that PC GPUs at the top end will be faster. However, no GCN GPU in early 2013 could do what the PS4 GPU could do. Public demos of the hardware followed between April and June 2013. That makes the PS4 GPU the first demonstrated GCN GPU to move past the original GCN 1.0 design in the HD7970, and the most technologically advanced AMD GPU publicly demonstrated at that time. This was before the R9 290 (also an 8-ACE design).

The HD7970/HD7950/HD7870/HD7850 had two ACEs - the PS4 was an 8-ACE design with enhanced compute capabilities. An 8-ACE mainstream GCN GPU part did appear - it was AMD Tonga (R9 285) in 2014.
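The 64-source figure falls straight out of the queue arithmetic, assuming the commonly cited reading that each of the PS4's 8 ACEs exposes 8 queues (treat the per-ACE queue split as an assumption rather than a spec-sheet fact):

```python
# Compute command source arithmetic, per Cerny's description above.
# GCN 1.0 (HD7970 etc.): 1 graphics source + 2 compute sources.
# PS4: 8 ACEs, each commonly reported as exposing 8 queues.
# The 8-queues-per-ACE split is an assumption based on how the 64
# figure is usually broken down, not something stated in the quote.

gcn10_compute_sources = 2              # two ACEs, one queue each
ps4_aces = 8
queues_per_ace = 8                     # assumed split
ps4_compute_sources = ps4_aces * queues_per_ace

print(gcn10_compute_sources)           # 2
print(ps4_compute_sources)             # 64
```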

Every rumour hinted that developer units were shipping with the actual SoCs by early 2013. If the final dev kits were not available 6~12 months before launch, there would have been no launch titles - remember the PS3 was not x86, and games had to be redeveloped from PowerPC.

The R9 290 launch was relatively low volume and seemed rushed (a response to the GTX780/GTX780Ti launches?). AMD slapped on a quick-fix cooler, and it took months for AIB models to appear in volume.

There have also been persistent rumours that, because AMD was so short of money, Sony/MS helped stump up the funds for GPU R&D. As far back as 2012/2013 AMD had started to pare down GPU R&D in favour of CPUs like Zen (which is why AMD had no answer to Maxwell). Even AMD touted area and power efficiency as one of the advantages of the RDNA2 RT implementation, which sounds very good for consoles.

There were complaints about story/gameplay as well. It basically got the stigma of a "tech demo"; it wasn't all due to performance reasons.

The gameplay was mostly fine IMHO. The nanosuit for the most part allowed you to tailor the way you played, and it was made with keyboard and mouse in mind. The semi-open-world maps allowed you to approach targets in different ways and via different gameplay styles. The environments were destructible, so you could use that during gameplay. Vehicles had different destruction points which could be exploited. Unlike many games, Crysis also used different AI models: during the frost stages, there were both walking and flying enemies. Most games have walking bipedal enemies, as they re-use human NPC AI models.

The part inside the alien ship looked amazing, but again people moaned about that, just because they needed to use a different set of mechanics during gameplay. Even in the frost stages people were complaining about having to fight flying enemies. IMHO most of the moaning was because people didn't want to spend £200~£300 on an 8800GT/8800GTS and overclock their £100 CPU to play the game; hence they had a poor gameplay experience because of the low framerates. The only part which was uniformly poor was the flying part - but even Crytek agreed, and removed it in the recent remastered version.

The issue is that it gave you a choice, but too many wanted to be hand-held and pushed down a narrow tunnel, so they moaned because the game could be challenging if you didn't use tactics. You can see that with Crysis: Warhead, which drew far less complaining, as they downgraded some aspects of the graphics to make it run better and made it more linear.

When Crytek then tried to make a more conventional, hand-holding PC game in Crysis 2, which ran much better, had a "better" story and a better soundtrack, PC gamers moaned that the graphics were not good enough. So what did they want: great graphics which needed a decent PC, or worse graphics which ran better? PC gamers at times just contradict themselves.

IMHO, seeing what happened with Crytek - pandering to the PC crowd and still failing miserably - just put big devs off really pushing technically challenging games on PC. They were right - look how much Fortnite has made.

I really didn't understand the complaining about the story either. It was an FPS game, and Crytek sold it as such. Far Cry didn't have a great story either (but people didn't seem to care as much), and games such as Quake, Doom, etc. were hardly known for their storylines or memorable characters. Crytek tried to make a more coherent story with Crysis 2 and Crysis 3, and still people didn't care.
 
Regardless of the nitpicky arguments you could have, I think Ratchet and Clank is a very promising showing for what these consoles can do, and indicates it'll be a while before PCs will be "held back" this time around.

It's clear they can do RT competently enough, but the whole package of the game should be looked at too. Resolution and performance are good, world density is very high, and loading times are nearly nothing.

These new consoles don't have any obvious "weakness" this time, they have great CPUs and SSDs as well as their (currently) well above average GPUs vs PCs.

So I'd guesstimate it'll take till ~2024 for you to be able to buy a cheap-ish PC that's capable of significantly outclassing them (i.e. so your "average" PC starts outclassing them, not just the best you can buy), which bodes well for game tech not being held back.

The now-previous PS4/Xbone generation was seriously hampered by the combination of its pathetic CPUs and HDDs (crawling slowly through tight gaps to get to the next area, anyone?). And bear in mind that any cross-generation games (even the impressive-looking Horizon Forbidden West) are still being held back by this, so we have to wait for next-gen-only games to see what these new consoles can really do.

Also, with all the dynamic resolution and upscaling tech coming in now, and on a roadmap to improve a lot further, I think this bodes very well for the console cycle after this. Not wanting to make this about "ignoring the present", but:

The next-gen will likely just go for 4K60 render target (where PS5/Xbox SX are ~1440p upscaled to 4K for 60 FPS) and then upscale to 8K, as it'd be such a waste of processing power, and memory/cost, to render native 8K. And if they do this, it means there'll be a much smaller jump in raw pixel rendering requirement next time around, leading to much more processing power left over for effects/RT/new techniques.

i.e. imagine what they would be planning for the PS5/Xbox SX generation if their render target was only 1080p instead of 4K, but still had the same GPUs.
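To put rough numbers on the render-target argument (simple pixel arithmetic; the internal resolutions are the ballpark figures from the post above):

```python
# Pixel counts behind the render-target argument.
pixels = {
    "1080p": 1920 * 1080,   # ~2.1 MP
    "1440p": 2560 * 1440,   # ~3.7 MP, typical PS5/XSX internal target
    "4K":    3840 * 2160,   # ~8.3 MP
    "8K":    7680 * 4320,   # ~33.2 MP
}

# Current gen: ~1440p internal, upscaled to 4K.
# Speculative next gen: 4K internal, upscaled to 8K.
print(f"1440p -> 4K internal: {pixels['4K'] / pixels['1440p']:.2f}x")  # 2.25x jump
print(f"4K -> native 8K:      {pixels['8K'] / pixels['4K']:.2f}x")     # 4.00x jump

# And the 1080p thought experiment: the same GPUs pushing 4x fewer pixels.
print(f"1080p vs 4K:          {pixels['4K'] / pixels['1080p']:.2f}x")  # 4.00x
```

So a 4K-internal next gen is only a ~2.25x pixel jump rather than the 4x a native-8K target would demand, which is where the leftover headroom for effects/RT comes from.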
 
These new consoles don't have any obvious "weakness" this time, they have great CPUs and SSDs as well as their (currently) well above average GPUs vs PCs.


Yeah, for me the biggest enjoyment is that the new consoles have no single weakness; they are well-balanced machines, unlike last gen - the PS4 CPU was basically no faster than the PS3 CPU - that's 14 years at the same performance level, creating a huge bottleneck.

Because the new machines are balanced, games will be balanced and well-rounded too.

And we also know that there is a good upgrade path for the consoles going forward - if Sony and MS want to make Pro consoles, the hardware will be there for them.
 
You might also notice I didn't mention speed - I mentioned the level of technology.

At the end of the day what matters is performance, and there was already a PC part at that time that offered much better performance for about the same money (sure, you still need the rest of the PC, but the GPU is usually the weakest part when it comes to games).
And that PC part was good enough to last you for the entire console cycle.
Sure, some games are impressive for the hardware that the consoles have, but they are just a handful and by far not representative of the whole industry.
The problem this time around is the prices of the GPUs. There's no RTX 3080/6800 XT at the $400-500 mark anymore, and PC gamers were fine with that (at MSRP, I mean).
HOWEVER, if the consoles were 1080p@30fps machines, yeah, that would be something, but at 4K@60fps, or even 30fps, they're still not powerful enough to manage native resolution.

If HF3 had come out with Crysis's requirements, I'm sure it would have sold a lot more than Crysis did. :)

The R9 290's price at launch was $399, exactly as much as the whole PS4. The 290X was $549.

Yes, but it was a lot faster too!

Regardless of the nitpicky arguments you could have, I think Ratchet and Clank is a very promising showing for what these consoles can do, and indicates it'll be a while before PCs will be "held back" this time around.

Thinking about 16c/32t CPUs, 32/64GB RAM, and an RTX 3080/3090 or 6800 XT/6900 XT, they already are (leaving the price aside, of course).
 
Yes, but it was a lot faster too!
But what does that mean? Sony or MS could make a console with 2x 6900 XT or 2x 3090 graphics cards, but it wouldn't sell very well. They don't put the fastest cards available in the consoles because of the high price, not because they lack access to the fastest chips released, or better.
The question is whether consoles give higher value for gaming than, say, buying PC hardware for the same money. There is no doubt that consoles have great value, and the current gen has insane value when you consider that you can barely buy an equivalent PC video card for the price of the whole console... if you are lucky enough to find one. Sure, the value decreases during the console's lifetime, but it's the same if you buy a video card for a PC. Yes, you can buy a new card and then another for the PC, but then you can also buy a PS5 and play PS4 games, or buy the Xbox Series X and play all the games released on Xbox. :)
 
Brilliant game, but I can't go back to 30fps after playing at 60fps. It's only really missing RT, and the game still looks stunning, so that's good enough for me.
 
But what does that mean? Sony or MS could make a console with 2x 6900 XT or 2x 3090 graphics cards, but it wouldn't sell very well.

It means what it meant: that a GPU priced the same as a console allowed you to play games in better conditions for the entire life of the console. Of course, that's not the case anymore; even at MSRP I doubt a 3070 would receive enough support from nVIDIA, or a 6800 from AMD - even with that magical 16GB of vRAM - to still be meaningful 3-4 years from now.

The good news about the new consoles is that at least they're more capable than the last ones were. They're kind of dictating, in general, how advanced games will be for the next 6-8 years. Assassin's Creed took a step back from Unity to Valhalla (the first and the last of that generation), due to the weak hardware, and it was obvious even on PC.

Fingers crossed games will actually get better.
 
Thinking about 16c/32t CPUs, 32/64GB RAM, and an RTX 3080/3090 or 6800 XT/6900 XT, they already are (leaving the price aside, of course).

Yeah, I touched on that in the same post.

Although this is true, it's (sadly) irrelevant in terms of devs making games.

Ratchet and Clank would've been possible on PC with Zen 2 or Intel 8000-series CPUs (an 8700K would be enough vs the consoles' 8c/16t Zen 2) plus Turing. So it could've been done in 2018.

But it wasn't, because games are made for the current console and/or the "average" PC, because devs need scale of customer-base to make money.

Therefore, you need to look towards when the "average" PC will significantly surpass the consoles, and this will probably be 2024/2025 due to how decent these new consoles are.

And, due to Sony saying they didn't want to do another ~7 year gen with "Pro" consoles (we'll see if this turns out to be true though), and with the timeline of process nodes, we'll probably get the next gen in 2026.

So, in practice, this console gen may never meaningfully hold back the PC.

Just to touch on next-gen timing: they probably need to use the 2nm node to offer a proper generational improvement (3nm is only ~2.7x the density of 7nm, whereas 2nm should be ~5x the density), and 2nm should be in the iPhone in 2024. So 2026 would be two years post-Apple and ~one year post-desktop, meaning the node should be very mature by then, on a similar timeline to the current consoles' use of 7nm. (These are all TSMC's plans/nodes.)
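Using the density multiples quoted above (rough planning figures, not measured library densities), the transistor-budget math looks like this:

```python
# Transistor budget at fixed die area scales with density.
# The multiples vs 7nm are the rough figures quoted in the post above.
density_vs_7nm = {"3nm": 2.7, "2nm": 5.0}

for node, mult in density_vs_7nm.items():
    print(f"{node}: ~{mult:.1f}x the transistors of a same-size 7nm die")

# Per the argument above: ~2.7x isn't enough headroom for a proper
# generational leap in a console-sized die, whereas ~5x is.
```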
 
Some screenshots I took - definitely the best-looking console game ever made. [screenshots omitted]

Are those RT reflections in the eyes, and of the forest on the ship? Pretty amazing to think an early PS5 game looks so good. I can't think of any PC games that look this good.
 