
Poll: AMD Screws Gamers: Sponsorships Likely Block DLSS

Are AMD out of order if they are found to be blocking DLSS on Starfield?

  • Yes

  • No


R/Hardware is hilarious. The post about what the DF guy said has devolved into people now even defending the FX series and saying the Crysis 2 over-tessellation never happened. Apparently ATI is to blame for Nvidia going for non-standard FP16 shaders, when the standard called for FP24 shaders, and causing poor HL2 performance. Didn't I tell you guys about that? They are still sore 20 years later.
 
The point is their GPUs can use FSR, but they replace it with the superior DLSS.

That has to boil Scott Herkelman's milk, knowing they paid Bethesda to put FSR in and no one wants it lmao

Why would any Nvidia user use FSR if DLSS is an option?

That has to boil Scott Herkelman's milk, knowing they paid Bethesda to put FSR in and no one wants it lmao

Oh ok you're just trolling.
 
R/Hardware is hilarious. The post about what the DF guy said has devolved into people now even defending the FX series and saying the Crysis 2 over-tessellation never happened. Apparently ATI is to blame for Nvidia going for non-standard FP16 shaders, when the standard called for FP24 shaders, and causing poor HL2 performance. Didn't I tell you guys about that? They are still sore 20 years later.

The Crysis 2 ‘over-tessellation’ was debunked many times, but here it is again: https://twitter.com/dachsjaeger/status/1323218936574414849

Alex does the technical deep dives for Eurogamer, btw.
 
Depending on whether we're talking average, 1% low or 0.1% low, the 13900K is 20-30% ahead of the 7800X3D.

Also, X3D and non-X3D AMD CPUs run the same in this game; the 3D cache doesn't improve performance at all, as the game is single-thread heavy.
They nerfed the RAM on the Ryzen systems. The 7800X3D is way short of HUB's results using the same CPU and GPU.
 
Actually I know where 100% of the blame lies:

Microsoft

If they hadn't been distracted by the billions from totally moving to a subscription model (Azure, Office 365, etc.), and weren't also trying to brute-force Xbox's market share, then ...

... they would have long since made upscalers part of DirectX. One API to implement, and vendors get to choose how to implement it (roughly as sketched below). Nobody gets to cry that their favourite is missing from some games.

Instead Nvidia came up with the idea and it's currently a three-way free-for-all (four-way if we count UE5's).
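
For what it's worth, here is roughly what that would look like from the game's side: a single interface every vendor implements behind the scenes. This is a hypothetical sketch with invented names (IUpscaler, UpscaleInputs, CreateDefaultUpscaler), not any real DirectX API:

```cpp
// Hypothetical sketch, not a real DirectX API: the kind of vendor-agnostic
// upscaler interface a game could target once, with DLSS/FSR/XeSS-style
// backends supplied underneath. All names are invented for illustration.
#include <cstdint>
#include <cstdio>
#include <memory>

struct GpuTexture {};  // stand-in for a real GPU resource handle

struct UpscaleInputs {
    GpuTexture* color = nullptr;          // jittered frame at render resolution
    GpuTexture* depth = nullptr;          // scene depth
    GpuTexture* motionVectors = nullptr;  // per-pixel motion vectors
    float jitterX = 0.0f, jitterY = 0.0f; // sub-pixel camera jitter
    uint32_t renderWidth = 0, renderHeight = 0;
    uint32_t outputWidth = 0, outputHeight = 0;
};

// The single API games would code against; each vendor ships its own backend.
class IUpscaler {
public:
    virtual ~IUpscaler() = default;
    virtual void Evaluate(const UpscaleInputs& in, GpuTexture* output) = 0;
};

// Trivial placeholder backend so the sketch compiles; a real runtime or driver
// would pick the best implementation for the installed GPU.
class NullUpscaler final : public IUpscaler {
public:
    void Evaluate(const UpscaleInputs& in, GpuTexture*) override {
        std::printf("upscaling %ux%u -> %ux%u\n",
                    (unsigned)in.renderWidth, (unsigned)in.renderHeight,
                    (unsigned)in.outputWidth, (unsigned)in.outputHeight);
    }
};

std::unique_ptr<IUpscaler> CreateDefaultUpscaler() {
    return std::make_unique<NullUpscaler>();
}

int main() {
    GpuTexture color, depth, motion, output;
    UpscaleInputs in{ &color, &depth, &motion, 0.25f, -0.25f, 1280, 720, 2560, 1440 };
    CreateDefaultUpscaler()->Evaluate(in, &output);
}
```

The game integrates one thing; the vendor-versus-vendor fight then happens in drivers and runtimes rather than in per-title sponsorship deals.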
 
The Crysis 2 ‘over-tessellation’ was debunked many times, but here it is again: https://twitter.com/dachsjaeger/status/1323218936574414849

Alex does the technical deep dives for Eurogamer, btw.

Debunked my arse. He's talking about a single aspect, that being the ocean; there were plenty of objects that had a ton of polygons thrown at them for zero visual difference. The concrete barriers, when tessellation was turned on, suddenly needed a ton of polygons for basically zero visual difference.

 
Debunked my arse. He's talking about a single aspect, that being the ocean; there were plenty of objects that had a ton of polygons thrown at them for zero visual difference. The concrete barriers, when tessellation was turned on, suddenly needed a ton of polygons for basically zero visual difference.


Without even opening that and just reading @Gerard's context, he's bang on.

In CryEngine the ocean is a single tessellated plane. Since you can't cull the tessellation in a single-plane asset, it's always going to be tessellated whether it's obstructed and hidden or not; that's just the way it is.

If he's using that argument to debunk it, then this guy has no ####### clue about how any of this works.

Other assets are MASSIVELY over-tessellated, deliberately.
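
For context on what a fixed, uncullable tessellation factor costs, here's a minimal sketch (not CryEngine's actual code; all names are illustrative) of how adaptive tessellation is normally driven, scaling the factor with how big a patch is on screen. A fixed high factor applied regardless of distance or visibility is exactly where the wasted triangles come from:

```cpp
// Minimal sketch of distance/screen-coverage-based tessellation, the usual
// way engines keep triangle counts sane. Not CryEngine's actual code;
// function and parameter names are invented for illustration.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Pick a tessellation factor so a patch edge ends up roughly
// 'targetEdgePixels' long on screen, clamped to the hardware range [1, 64].
float TessFactorForPatch(float patchEdgeWorldSize,   // edge length in metres
                         float distanceToCamera,     // metres
                         float verticalFovRadians,
                         float screenHeightPixels,
                         float targetEdgePixels = 8.0f)
{
    // Approximate projected size of the edge in pixels at this distance.
    float pixelsPerMetre = screenHeightPixels /
        (2.0f * distanceToCamera * std::tan(verticalFovRadians * 0.5f));
    float edgePixels = patchEdgeWorldSize * pixelsPerMetre;

    return std::clamp(edgePixels / targetEdgePixels, 1.0f, 64.0f);
}

int main() {
    // A 1 m concrete barrier viewed up close vs. far away (~57 degree FOV, 1080p).
    std::printf("2 m away : factor %.1f\n",
                TessFactorForPatch(1.0f, 2.0f, 1.0f, 1080.0f));
    std::printf("50 m away: factor %.1f\n",
                TessFactorForPatch(1.0f, 50.0f, 1.0f, 1080.0f));
}
```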
 
The Crysis 2 ‘over-tessellation’ was debunked many times, but here it is again: https://twitter.com/dachsjaeger/status/1323218936574414849

Alex does the technical deep dives for Eurogamer, btw.

I don't think it ever was, especially as I was on here when all this broke. People could mess around with tessellation manually in games like Crysis 2 and performance shot up quite a bit. People posted tons and tons of screenshots showing there was no perceptible difference. I even tested it myself with actual hardware. People are just relying on the younger generations not being aware of this, especially as many older sites have now shut down.

An example is Reddit now trying to take advantage of many older reviews from a few decades ago starting to go offline, WRT the Nvidia FX. I had a 9500 PRO and mates with Nvidia FX hardware. Nvidia tried to be clever by using FP16/FP32 mixed-precision shaders instead of the DX9 standard FP24. But the Radeon R300 was developed by another company (ArtX), which ATI bought up. It followed the DX9 specification to the letter.

HL2 was one of the first big DX9 titles. Valve actually tested both ATI and Nvidia hardware, and the Nvidia hardware didn't do so well. They explained it very well: ATI was the only company which could run DX9 properly. No different than Nvidia hardware being best for path tracing in games (we all know AMD is weaker when RT is pushed hard).

Valve also tried its damned best to improve performance on Nvidia hardware:

The first thing that comes to mind when you see results like this is a cry of foul play; that Valve has unfairly optimized their game for ATI's hardware and thus, it does not perform well on NVIDIA's hardware. Although it is the simplest accusation, it is actually one of the less frequent that we've seen thrown around.

During Gabe Newell's presentation, he insisted that they [Valve] have not optimized or doctored the engine to produce these results. It also doesn't make much sense for Valve to develop an ATI-specific game simply because the majority of the market out there does have NVIDIA based graphics cards, and it is in their best interest to make the game run as well as possible on NVIDIA GPUs.

Gabe mentioned that the developers spent 5x as much time optimizing the special NV3x code path (mixed mode) as they did optimizing the generic DX9 path (what ATI's DX9 cards use). Thus, it is clear that a good attempt was made to get the game to run as well as possible on NVIDIA hardware.

To those that fault Valve for spending so much time and effort trying to optimize for the NV3x family, remember that they are in the business to sell games and with the market the way it is, purposefully crippling one graphics manufacturer in favor of another would not make much business sense.

Truthfully, we believe that Valve made an honest attempt to get the game running as well as possible on NV3x hardware but simply ran into other unavoidable issues (which we will get to shortly). You can attempt to attack the competence of Valve's developers; however, we are not qualified to do so. Yet, any of those who have developed something similar in complexity to Half-Life 2's source engine may feel free to do so.

According to Gabe, these performance results were the reason that Valve aligned themselves more closely with ATI. As you probably know, Valve has a fairly large OEM deal with ATI that will bring Half-Life 2 as a bundled item with ATI graphics cards in the future. We'll be able to tell you more about the cards with which it will be bundled soon enough (has it been 6 months already?).

With these sorts of deals, there's always money (e.g. marketing dollars) involved, and we're not debating the existence of that in this deal, but as far as Valve's official line is concerned, the deal came after the performance discovery. Once again, we're not questioning Valve in this sense and honestly don't see much reason to, as it wouldn't make any business sense for them to cripple Half-Life 2 on NVIDIA cards. As always, we encourage you to draw your own conclusions based on the data we've provided.
 
They nerfed the RAM on the Ryzen systems. The 7800X3D is way short of HUB's results using the same CPU and GPU.
I think PCGH religiously run CPUs at the max RAM clocks and no more. That could explain the huge difference they had between the 12900K and 13900K. DDR5-4200 vs DDR5-5600 is a big gap; if the old Fallout 4 results are anything to go by, the memory speed, rather than the CPU clocks, is probably more relevant.
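
To put a rough number on that RAM gap, here's a back-of-the-envelope sketch of theoretical peak bandwidth, assuming a standard dual-channel desktop setup (textbook peak figures, not measured results):

```cpp
// Back-of-the-envelope: theoretical peak bandwidth for DDR5-4200 vs DDR5-5600,
// assuming a dual-channel, 8-bytes-per-transfer-per-channel desktop setup.
#include <cstdio>

double PeakBandwidthGBs(double megatransfersPerSec, int channels = 2,
                        int bytesPerTransferPerChannel = 8) {
    // MT/s * bytes per transfer * channels -> MB/s, then /1000 -> GB/s.
    return megatransfersPerSec * bytesPerTransferPerChannel * channels / 1000.0;
}

int main() {
    double slow = PeakBandwidthGBs(4200.0);  // ~67.2 GB/s
    double fast = PeakBandwidthGBs(5600.0);  // ~89.6 GB/s
    std::printf("DDR5-4200: %.1f GB/s\nDDR5-5600: %.1f GB/s (+%.0f%%)\n",
                slow, fast, (fast / slow - 1.0) * 100.0);
}
```

Roughly a third more theoretical bandwidth, before latency differences even enter into it.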
 
The number of unique downloads of PureDark's DLSS mod is wild:

118,821 (!)

Essentially half of all Steam Starfield players are using it.

 
People are just relying on the younger generations not being aware of this, especially as many older sites have now shut down.


Yup, this review had a look at images in tessellated and non-tessellated modes and it showed that there was bugger-all difference visually; just the performance would drop. It was posted on here many times back in the day, but the screenshots are mostly gone now.


Anyone that thinks there weren't shenanigans going on in that game is off their nut. It was demonstrably obvious that polygons were being thrown around left and right for no reason but to hurt performance on both Nvidia and AMD cards; it just hurt AMD performance more.
 
I don't think it ever was, especially as I was on here when all this broke. People could mess around with tessellation manually in games like Crysis 2 and performance shot up quite a bit. People posted tons and tons of screenshots showing there was no perceptible difference. I even tested it myself with actual hardware. People are just relying on the younger generations not being aware of this, especially as many older sites have now shut down.

An example is Reddit now trying to take advantage of many older reviews from a few decades ago starting to go offline, WRT the Nvidia FX. I had a 9500 PRO and mates with Nvidia FX hardware. Nvidia tried to be clever by using FP16/FP32 mixed-precision shaders instead of the DX9 standard FP24. But the Radeon R300 was developed by another company (ArtX), which ATI bought up. It followed the DX9 specification to the letter.

HL2 was one of the first big DX9 titles. Valve actually tested both ATI and Nvidia hardware, and the Nvidia hardware didn't do so well. They explained it very well: ATI was the only company which could run DX9 properly. No different than Nvidia hardware being best for path tracing in games (we all know AMD is weaker when RT is pushed hard).

Valve also tried its damned best to improve performance on Nvidia hardware:
History lesson's conclusion: Nvidia messed up very badly with DX9, but their marketing went into damage-control overdrive, recruited a lot of their fans to become outraged, gave briefings to friendly media, etc.
However, we can all be sure this behaviour is historical and wouldn't happen nowadays, right?
That's very reassuring then!
 
I don't think it ever was, especially as I was on here when all this broke. People could mess around with tessellation manually in games like Crysis 2 and performance shot up quite a bit. People posted tons and tons of screenshots showing there was no perceptible difference. I even tested it myself with actual hardware. People are just relying on the younger generations not being aware of this, especially as many older sites have now shut down.

An example is Reddit now trying to take advantage of many older reviews from a few decades ago starting to go offline, WRT the Nvidia FX. I had a 9500 PRO and mates with Nvidia FX hardware. Nvidia tried to be clever by using FP16/FP32 mixed-precision shaders instead of the DX9 standard FP24. But the Radeon R300 was developed by another company (ArtX), which ATI bought up. It followed the DX9 specification to the letter.

HL2 was one of the first big DX9 titles. Valve actually tested both ATI and Nvidia hardware, and the Nvidia hardware didn't do so well. They explained it very well: ATI was the only company which could run DX9 properly. No different than Nvidia hardware being best for path tracing in games (we all know AMD is weaker when RT is pushed hard).

Valve also tried its damned best to improve performance on Nvidia hardware:

When you build a 3D asset in 3ds Max, Blender... whatever, you get the raw asset wire mesh, which, with all the detail sculpted into it, is usually millions of polygons, because the asset generates more and more triangles as you sculpt.

Once the sculpting is complete you bake normal map textures from the detail in the asset, then reduce the asset down to a few thousand polygons. That removes all the detail, so you use your baked normal map textures to put the detail back without the compute-expensive polygons. You go from millions of triangles down to a few thousand.

Some assets in Crysis 2, like bollards, have millions of polygons. OK, so you might say it's just a mistake, that they didn't bake the normal maps. Actually, yes they did; they are in there.
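
As a rough illustration of that bake step, here's a minimal sketch that turns sculpted detail into a tangent-space normal map, under the simplifying assumption that the detail can be expressed as a heightfield over the low-poly surface. It's not any studio's actual baking tool; names and the toy data are invented:

```cpp
// Minimal sketch of "bake the detail into a normal map": convert a heightfield
// of sculpted detail into tangent-space normals that a low-poly mesh samples
// at shading time, instead of carrying millions of real triangles.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Normal { float x, y, z; };

std::vector<Normal> BakeNormalMap(const std::vector<float>& height,
                                  int width, int rows, float strength)
{
    auto h = [&](int x, int y) {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, rows - 1);
        return height[y * width + x];
    };

    std::vector<Normal> normals(width * rows);
    for (int y = 0; y < rows; ++y) {
        for (int x = 0; x < width; ++x) {
            // Central differences give the surface slope in X and Y.
            float dx = (h(x + 1, y) - h(x - 1, y)) * strength;
            float dy = (h(x, y + 1) - h(x, y - 1)) * strength;
            float len = std::sqrt(dx * dx + dy * dy + 1.0f);
            // Tangent-space normal: perpendicular to the local slope.
            normals[y * width + x] = { -dx / len, -dy / len, 1.0f / len };
        }
    }
    return normals;
}

int main() {
    // Toy 4x4 "sculpt": a single raised bump in the middle.
    std::vector<float> bump = { 0,0,0,0,  0,1,1,0,  0,1,1,0,  0,0,0,0 };
    auto nm = BakeNormalMap(bump, 4, 4, 1.0f);
    std::printf("normal at (1,1): %.2f %.2f %.2f\n",
                nm[1 * 4 + 1].x, nm[1 * 4 + 1].y, nm[1 * 4 + 1].z);
}
```

A real baker ray-casts from the low-poly surface to the high-poly sculpt rather than using a simple heightfield, but the point stands: the detail ends up in a texture, not in millions of runtime triangles.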
 
HairWorks, I think it was. The Witcher 3? That's another one: the hair follicles were tessellated at 64 triangles per pixel. A ###### pixel doesn't have more than 3 dimensions, so 64x vs the 4x that AMD's software reduced it down to makes zero visual difference. Cramming 64 polygons into a single pixel is bonkers; you would only do that if what you wanted was to grind your GPU to a screaming halt, because that's all it's good for.
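
The arithmetic behind that complaint is easy to sketch: once the triangles generated by a tessellation factor outnumber the pixels the geometry actually covers, the extra subdivision can't add anything visible. A back-of-the-envelope, assuming a triangle patch where factor N yields roughly N² micro-triangles; the coverage number is purely illustrative, not profiled Witcher 3 data:

```cpp
// Back-of-the-envelope: triangles produced by a tessellation factor vs the
// pixels a patch covers on screen. Past ~1 triangle per pixel, additional
// tessellation cannot show up visually. Illustrative numbers only.
#include <cstdio>

int main() {
    const double patchPixels = 400.0;     // assumed on-screen coverage
    const int factors[] = { 4, 8, 16, 64 };

    for (int f : factors) {
        double triangles = static_cast<double>(f) * f;  // ~N^2 for a tri patch
        std::printf("factor %2d -> ~%5.0f triangles, %5.2f triangles per pixel\n",
                    f, triangles, triangles / patchPixels);
    }
}
```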
 
When you build a 3D asset in 3ds Max, Blender... whatever, you get the raw asset wire mesh, which, with all the detail sculpted into it, is usually millions of polygons, because the asset generates more and more triangles as you sculpt.

Once the sculpting is complete you bake normal map textures from the detail in the asset, then reduce the asset down to a few thousand polygons. That removes all the detail, so you use your baked normal map textures to put the detail back without the compute-expensive polygons. You go from millions of triangles down to a few thousand.

Some assets in Crysis 2, like bollards, have millions of polygons. OK, so you might say it's just a mistake, that they didn't bake the normal maps. Actually, yes they did; they are in there.
A few minor corrections and additions on top of what you said.

The polygon count for sculpts is in the tens of millions to hundreds of millions for a character/creature asset. Maybe an intricate vase (or something of that size) would be a few million.

I think modern game engines can actually use displacement maps now (I don't think it was a thing back in the days of Crysis 2, though), so you would bake a normal map and a displacement map. A displacement map will physically displace the polygons at run time to match the high-resolution sculpt (but you need enough polygons in the first place to make it look good; in comes tessellation to the rescue). A normal map only affects how the light will bounce off the surface during the render calculations, I believe (a quick sketch of the difference is at the end of this post).
Also, displacement maps are huge in size; they are usually 32-bit .exr files.

Final polygon counts for a model
Here are some polygon counts for the PS4 era of games

Shorter thread discussing RE Village polygon count

About your bollard example, I would say 200 triangles tops for a game asset, assuming it was damaged. Undamaged, under 50 triangles (maybe as low as 20).
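
To make the normal-map/displacement-map distinction above concrete, here's a minimal sketch assuming a simple height-style displacement map; helper names are invented and this isn't any particular engine's API:

```cpp
// Minimal sketch of the difference: a displacement map moves the geometry,
// a normal map only changes the normal used for lighting. Helper names are
// invented for illustration; not any particular engine's API.
#include <cstdio>

struct Vec3 { float x, y, z; };

// Displacement: actually push the vertex out along its normal, so the
// silhouette changes -- but only if tessellation supplied enough vertices.
Vec3 ApplyDisplacement(Vec3 position, Vec3 normal, float heightSample, float scale) {
    return { position.x + normal.x * heightSample * scale,
             position.y + normal.y * heightSample * scale,
             position.z + normal.z * heightSample * scale };
}

// Normal map: leave the geometry alone and only swap the normal fed to the
// lighting calculation, so the surface merely shades as if the detail existed.
Vec3 ApplyNormalMap(Vec3 /*geometricNormal*/, Vec3 sampledTangentSpaceNormal) {
    // A real renderer would transform from tangent space to world space here;
    // the point is simply that only the shading input changes.
    return sampledTangentSpaceNormal;
}

int main() {
    Vec3 p{ 0.0f, 0.0f, 0.0f }, n{ 0.0f, 0.0f, 1.0f };
    Vec3 displaced = ApplyDisplacement(p, n, 0.5f, 2.0f);
    Vec3 shadingNormal = ApplyNormalMap(n, Vec3{ 0.3f, 0.3f, 0.9f });
    std::printf("displaced vertex z: %.1f (geometry moved)\n", displaced.z);
    std::printf("normal-mapped vertex z: %.1f (only the shading normal changed to"
                " %.1f %.1f %.1f)\n", p.z, shadingNormal.x, shadingNormal.y, shadingNormal.z);
}
```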
 
Yup, this review had a look at images in tessellated and non-tessellated modes and it showed that there was bugger-all difference visually; just the performance would drop. It was posted on here many times back in the day, but the screenshots are mostly gone now.


Anyone that thinks there weren't shenanigans going on in that game is off their nut. It was demonstrably obvious that polygons were being thrown around left and right for no reason but to hurt performance on both Nvidia and AMD cards; it just hurt AMD performance more.
When you build a 3D asset in 3ds Max, Blender... whatever, you get the raw asset wire mesh, which, with all the detail sculpted into it, is usually millions of polygons, because the asset generates more and more triangles as you sculpt.

Once the sculpting is complete you bake normal map textures from the detail in the asset, then reduce the asset down to a few thousand polygons. That removes all the detail, so you use your baked normal map textures to put the detail back without the compute-expensive polygons. You go from millions of triangles down to a few thousand.

Some assets in Crysis 2, like bollards, have millions of polygons. OK, so you might say it's just a mistake, that they didn't bake the normal maps. Actually, yes they did; they are in there.
But most people commenting on Reddit, etc. were not actively posting back then. How many of those commenting even understood this back then? They were probably quite young. We are older, so we remember and had the hardware to test, too. I'll give you an example:

The HD5850 and GTX460 were very close together in the game, but the HD5850 was a faster card. The GTX560 was faster, right? I could manually adjust the tessellation, and I remember running around for an hour taking screenshots to see if I noticed any differences. I didn't, but my HD5850 ended up beating the GTX460 handily and being closer to the GTX560. With The Witcher 3, the AMD cards could do the same. Kepler cards got screwed, and so many Kepler owners moaned that CDPR put in a slider.

History lesson's conclusion: Nvidia messed up very badly with DX9, but their marketing went into damage-control overdrive, recruited a lot of their fans to become outraged, gave briefings to friendly media, etc.
However, we can all be sure this behaviour is historical and wouldn't happen nowadays, right?
That's very reassuring then!
Yet on social media in 2023 it now seems the Nvidia FX was fine, and only evil ATI and Valve messed up performance in HL2 and so many other DX9 games. It was frankly the worst series Nvidia ever made. The X800 series was the only time ATI beat Nvidia in sales, and that is despite the Nvidia 6000 series being amazing. Is it a coincidence that, despite Rollo etc. trying their best, most users agreed that they bought ATI in protest?
 