• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Do AMD provide any benefit to the retail GPU segment?

RTX ON, 20th century style: https://www.anandtech.com/show/298/5

Every time the price is bumped by the major players, they are unconsciously invoking the videocardzombie army.
First Intel, then Innosilicon, who will be next?
S3/VIA?
Matrox?
PowerVR?

Luckily for AMD, they do have an extremely scalable architecture which would allow them to quickly plug any holes in the lower mid-range market; however, burn enough good faith and people will turn to someone else.
Even the cult of Steve is faltering in sales this year, so it's not impossible that even leather jackets will soon go out of fashion...

That's an interesting read, thanks.

Bump maps are now just one of many layers in the texturing data stack. I had no idea how they came about or where they came from; it was before my time and I'd never really thought about it.

It's not all that different to a much newer technique, Parallax Occlusion Mapping.
While not invented by CIG, like many things they are pioneering a more developed use case for it.

Video about it here from Digital Foundry, along with other things, for example decoupling particle spawners from screen space, etc...
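
For anyone who hasn't looked at how it works: parallax occlusion mapping is basically a short ray-march through a height map, offsetting the texture coordinate until the view ray dips below the surface. Here's a rough CPU-side C++ sketch of that core loop; heightAt() and all the numbers are made-up placeholders, and this is not how CIG or any particular engine implements it:

```cpp
#include <cmath>
#include <cstdio>

// Minimal CPU-side sketch of the parallax occlusion mapping idea:
// march a ray (the tangent-space view direction) through a height field
// and return the offset UV where the ray first dips below the surface.
// In a real engine this runs per pixel in a shader.
struct Vec2 { float x, y; };

// Hypothetical height map: a simple procedural bump pattern in [0, 1].
static float heightAt(Vec2 uv)
{
    return 0.5f + 0.5f * std::sin(uv.x * 40.0f) * std::sin(uv.y * 40.0f);
}

static Vec2 parallaxOcclusionUV(Vec2 uv, float viewX, float viewY, float viewZ,
                                float heightScale = 0.05f, int numSteps = 32)
{
    // UV offset per step, projected from the view direction onto the surface.
    const Vec2 step{ -viewX / viewZ * heightScale / numSteps,
                     -viewY / viewZ * heightScale / numSteps };
    const float layerDepth = 1.0f / numSteps;

    float rayDepth = 0.0f;
    float surfaceDepth = 1.0f - heightAt(uv);   // 0 = peak, 1 = deepest point

    // Step along the ray until it passes below the height field.
    for (int i = 0; i < numSteps && rayDepth < surfaceDepth; ++i)
    {
        uv.x += step.x;
        uv.y += step.y;
        rayDepth += layerDepth;
        surfaceDepth = 1.0f - heightAt(uv);
    }
    return uv;   // the material's textures are then sampled at this offset UV
}

int main()
{
    // Looking at the surface at a shallow angle exaggerates the offset.
    const Vec2 shifted = parallaxOcclusionUV({ 0.25f, 0.25f }, 0.7f, 0.0f, 0.3f);
    std::printf("original UV (0.250, 0.250) -> parallaxed UV (%.3f, %.3f)\n",
                shifted.x, shifted.y);
    return 0;
}
```

The performance cost mentioned above comes from that loop: every extra step is another height-map sample per pixel, which is why artists budget it carefully.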

 
It was just an example of how certain features in the past were a big FPS hit, so what we're seeing now is nothing that new.
RT will probably become mainstream in three generations; after that, something else will come along.

Anything and everything has a performance penalty.

My point is you can use some of these techniques to increase fidelity while at the same time reducing the performance cost; the trade-off is high skill and workload. That video goes into some of it.
 
@Bencher You're right but also wrong. It's quite possible that with more work and better techniques some of these titles may be able to reduce their VRam footprint.

But as a studio you don't always have the money to hire the very best devs, or enough of them, to get what is in that case a huge increase in workload done in good time. CIG have over 900 of some of the best devs in the industry and development on their game is very slow; however, yes, this vast game fits inside 8GB of VRam, just... while still looking incredible.
 
Plague Tale was basically made by a small indie studio though. The games that actually hog VRAM like crazy, on the other hand, are not, and they are asking a full AAA price.

You can't compare any game to any other game.

In the same way, you could stick the 65 HP engine from a Mk1 VW Golf into a Mk7 VW Golf and it just wouldn't move. You wouldn't complain the Mk7 is "unoptimised"; it's an entirely different car with about 600 kg of stuff on it that the Mk1 Golf doesn't have.

It's like... my card is 8GB, I should expect 2017 levels of VRam usage in a 2023 AAA game, and it's all your fault if you can't do that, because I paid $500 for this card.
 
I'll put it another way.....

2019: $500 for a high-performance 8GB card.

PC Gamers: That's not enough VRam, that's going to cause problems soon.
Cultists: That's plenty, remember 4 cores were enough to last for 10 years.

A couple of years later... PC Gamers' predictions come true.
Cultists: No... it's because games are unoptimised.
 
To paraphrase a good comment that I saw elsewhere: "some PC ports might be ****, and when they are AMD has you covered but Nvidia doesn't".

The thing is, after a lot of searching I found some pricing for memory ICs.

It's $32 for 16GB of Micron GDDR6 at 21Gbps; that's including sales tax, and it's retail pricing if you buy 1,250 units.

GPU vendors will get those for a chunk less than $32 for a 16GB kit when they are buying not 1,250 units but a million.

I can't post the site for forum rules reasons.

But there you are, $32 for a 16GB kit of some of the fastest GDDR6 memory available, proper high-end stuff at 21Gbps, retail pricing.

It has nothing to do with the cost, and that's why AMD are able to put 16GB on their $500 GPU; it just doesn't cost much to do.
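
To put that in per-card terms, here's the back-of-the-envelope arithmetic as a small C++ snippet. It uses only the $32-per-16GB retail figure above; what a GPU vendor actually pays at volume is an assumption I can't source, so take the output as illustrative:

```cpp
#include <cstdio>

// Back-of-the-envelope memory cost, based only on the retail figure quoted
// above: $32 for a 16GB GDDR6 kit at ~1,250-unit volume (including tax).
// Real vendor volume pricing will be lower; this is illustrative, not a quote.
int main()
{
    const double kitPriceUsd = 32.0;                    // 16GB kit, retail
    const double kitSizeGB   = 16.0;
    const double perGB       = kitPriceUsd / kitSizeGB; // ~$2.00 per GB
    const double perIc2GB    = perGB * 2.0;             // ~$4.00 per 2GB IC

    std::printf("Retail: ~$%.2f per GB, ~$%.2f per 2GB IC\n", perGB, perIc2GB);
    std::printf("Going from 8GB to 16GB adds ~$%.2f of memory at retail\n",
                (16.0 - 8.0) * perGB);
    return 0;
}
```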
 
For the money, the 4070 Ti ought to have 24GB on it.

16GB is fine; the problem is that's not possible, because Nvidia are trying to keep costs down by using these small memory buses. It's 192-bit, 6× 32-bit channels, so the options are 6GB or 12GB; there are no 4GB memory ICs, so to get to 24GB they would have to use 12× 2GB ICs, two per channel.

The 3070, however, could easily have been 16GB, but they used 1GB ICs, not 2GB like AMD did. They did that for an entirely different reason: Nvidia's market dominance is such that they can get away with shenanigans, and as a bonus AMD get the blame for it.
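
For anyone wondering where those 6GB / 12GB / 24GB numbers come from, here's the bus-width arithmetic as a quick C++ sketch. It assumes one GDDR6 IC per 32-bit channel, 1GB or 2GB parts only, and "clamshell" meaning two ICs per channel; nothing here is vendor data, just the maths:

```cpp
#include <cstdio>

// Capacity options that fall out of a GPU's memory bus width:
// one GDDR6 IC per 32-bit channel, ICs come as 1GB (8Gbit) or 2GB (16Gbit)
// parts, and clamshell mode doubles up to two ICs per channel.
int main()
{
    const int busWidths[] = { 128, 192, 256 };  // e.g. 4060 Ti / 4070 Ti / 3070 class
    const int icSizesGB[] = { 1, 2 };           // no 4GB GDDR6 ICs exist

    for (int bus : busWidths)
    {
        const int channels = bus / 32;
        for (int ic : icSizesGB)
        {
            std::printf("%3d-bit bus, %dGB ICs: %2dGB normal, %2dGB clamshell\n",
                        bus, ic, channels * ic, channels * ic * 2);
        }
    }
    return 0;
}
```

Running it shows why a 256-bit card with 1GB ICs lands on 8GB (the 3070) while the same bus with 2GB ICs gives 16GB, and why 192-bit only offers 6, 12 or a clamshell 24.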
 
So you don't think there is any correlation between a game being AMD-sponsored and requiring 28 petabytes of VRAM? None whatsoever? Take a look at this list:


6 out of 7 VRAM-hogging games are on that list, and the one that isn't is just fine on 8GB cards unless you activate RT. So... I guess it's a coincidence.

You can add Unreal Engine to that list; there has been a lot of collaboration between Epic Games and AMD in the last 3 years.

I think AMD are sponsoring a lot of modern games.
 
GTX 970.
GTX 1070.
RX 5700XT for about 4 weeks.
RTX 2070 Super.

I'm not going to spend my hard-earned cash on something that I just don't think is good enough. Polaris, while there was nothing at all wrong with it, was not good enough for my needs, and Vega IMO had everything wrong with it: not a single redeeming thing about it, too much power, too slow for what it was, problematic... a lot like ARC, actually exactly like ARC.

The 970, despite the shenanigans of its 3.5GB buffer, was a good GPU. I liked it.
The GTX 1070 was a great GPU, I really liked that one.
The RTX 2070 Super is also a good GPU, but also flawed: it technically has RTX, but not really; it never had powerful enough RT hardware, not even on day one, let alone 3 years later, and its lack of VRam became a real problem about 2 years into its life.

However, I prefer AMD as a company, very much more. Nvidia are just the absolute worst for all the reasons @tommybhoy laid out, and more. AMD are no angels, far from it, but ##### me...

Since RDNA2 I'm happy to buy AMD again, and I'm glad of it, because the less of my money Nvidia get, the more I smile.
 
It isn't. It was/is just crappy if such an old game needs more than 8GB at 1440p.

The funny thing though is that they launched 4GB Fury cards (plenty of talk back then about 4GB being enough for 4K), then 8GB cards with Vega and the 5700 XT. Why, hadn't they heard about Rust? Or was it simply because they ignore outliers / badly optimized games?

How is this still the excuse? Game A suffers VRam overflow: "it's the developer's fault". Game B suffers VRam overflow: "it's the developer's fault". Game C, Game D, E, F, G, H... the list grows and grows, and it's never the hardware (unless it's AMD); it's always the developer, and there are now dozens of bad developers. Meanwhile, developers: "PCs are weak, but we don't have to worry about them anymore. Consoles have the bigger market share, they are better than PCs; now we can do more of the stuff that we want to do, we are no longer hamstrung by crummy hardware."
 
If a game like Rust needs more than 8GB of VRAM for 1440p, it's not the hardware's fault; doesn't matter if it's AMD or nVIDIA.
If a game like TLOU needs more than 8GB at 1080p and a beefy CPU, it's not the hardware's fault; doesn't matter if it's AMD or nVIDIA.
If FC6 can't properly stream in and out and needs more than 8GB at 1080p for the high texture pack, it's not the hardware's fault; doesn't matter if it's AMD or nVIDIA.

etc.

HZD does fine with 8GB even at high resolutions.
GoW does fine with 8GB even at high resolutions.

If Stalker 2 comes out with something like a 12GB minimum for 1080p/1440p with high/ultra textures and at least 16GB for 4K, yeah, understandable.
In the meantime, if the devs think the PC is weak and consoles strong, what can I say... lol.

Consoles are better than Nvidia GPU's
 
There is a big thread on AT forums about this:
  1. It's the player's fault.
  2. It's the reviewer's fault.
  3. It's the developer's fault.
  4. It's AMD's fault.
  5. It's the game's fault.
  6. It's the driver's fault.
  7. It's a system configuration issue.
  8. Wrong settings were tested.
  9. Wrong area was tested.
  10. 4K is irrelevant.
  11. Texture quality is irrelevant as long as it matches a console's.
  12. Detail levels are irrelevant as long as they match a console's.
  13. There's no reason a game should use more than 640K 8GB, because a forum user said so.
  14. It's completely acceptable for 3070/3070TI/3080 owners to turn down settings while 3060 users have no issue.
  15. It's an anomaly.
  16. It's a console port.
  17. It's a conspiracy against nVidia.
  18. 8GB cards aren't meant for 4K / 1440p / 1080p / 720p gaming.
  19. It's completely acceptable to disable ray tracing on nVidia cards while AMD users have no issue.
They keep adding new ones when they pop up! I thought it was amusing.

:p

Pure gold.
 


I created a simple pause menu so you can exit the game; it only takes about 30 minutes to set up. Are you listening, all other UE5 devs creating demos? :P So press Escape to get an exit menu, as you would in any game.
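
In case anyone wants to copy it: the demo itself may well do this in Blueprints, but in C++ the same "Escape toggles a pause/exit menu" wiring looks roughly like the sketch below. Class and property names (ADemoPlayerController, PauseMenuClass, etc.) are placeholders I've made up, not anything from the actual project:

```cpp
// DemoPlayerController.h -- rough UE5 sketch, placeholder names throughout.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "Blueprint/UserWidget.h"
#include "Kismet/GameplayStatics.h"
#include "Kismet/KismetSystemLibrary.h"
#include "DemoPlayerController.generated.h"

UCLASS()
class ADemoPlayerController : public APlayerController
{
    GENERATED_BODY()

public:
    // Widget Blueprint with the actual menu layout, assigned in the editor.
    UPROPERTY(EditDefaultsOnly, Category = "UI")
    TSubclassOf<UUserWidget> PauseMenuClass;

protected:
    virtual void SetupInputComponent() override
    {
        Super::SetupInputComponent();
        // Bind Escape; keep the binding live while paused so it can also close the menu.
        InputComponent->BindKey(EKeys::Escape, IE_Pressed, this,
                                &ADemoPlayerController::TogglePauseMenu)
            .bExecuteWhenPaused = true;
    }

    void TogglePauseMenu()
    {
        if (PauseMenu && PauseMenu->IsInViewport())
        {
            PauseMenu->RemoveFromParent();
            UGameplayStatics::SetGamePaused(this, false);
            SetShowMouseCursor(false);
            return;
        }
        PauseMenu = CreateWidget<UUserWidget>(this, PauseMenuClass);
        if (PauseMenu)
        {
            PauseMenu->AddToViewport();
            UGameplayStatics::SetGamePaused(this, true);
            SetShowMouseCursor(true);
        }
    }

    // The menu's Quit button can call this (or use the QuitGame node in Blueprint).
    UFUNCTION(BlueprintCallable, Category = "UI")
    void QuitToDesktop()
    {
        UKismetSystemLibrary::QuitGame(this, this, EQuitPreference::Quit, false);
    }

private:
    UPROPERTY()
    UUserWidget* PauseMenu = nullptr;
};
```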

About 500,000+ very high polygon ferns, no LODs, all Nanite. Normally you wouldn't pack this much in, certainly not ones this polygon dense; the grass asset layer is also very polygon dense, which is half the performance hit. It will melt even high-end GPUs.

Give it about 30 seconds before moving. I didn't build an on-load shader compile delay, so it will do that after you load in; wait for the textures on the trees to compile to full res before moving.

Extract the archive (use 7-Zip) and double-click the old light .exe.

Linux Builds on request.
 