Balkanized Gaming - Mantle & GameWorks

@Greg,

So martini is seeing improved performance with a 290, but what about the rest of the AMD range?

It's very convenient, in your argument that it's fine to gimp AMD performance, to ignore that 290/X users are a minority of AMD users.

290/X users are probably, what, 2% at most of total AMD usage.

The majority of AMD users are running much lesser hardware that can't possibly use 8x MSAA; they can only use FXAA, and they are being penalised performance-wise by up to 35% against comparable Nvidia hardware.

The article lambastes both AMD and Nvidia, and strictly speaking Digihound is correct: serious fragmentation is in the here and now with PhysX/GW/Mantle, and that is not good at all in the long run if it continues to escalate.

Is Mantle good for PC gaming if it remains locked away from Nvidia? No.

Is PhysX good for PC gaming if it remains locked away from AMD? No.

Is GW good for PC gaming if it remains locked so that AMD can't optimise for it? No.

TressFX is how it all should be done: a couple of weeks' head start at most for AMD, since they developed it, but freedom for Nvidia to fine-tune their drivers for optimum performance.
 
"but it has clearly been proven that tessellation runs better on AMD hardware."

Where? What I saw from performance modeling (including examination of driver calls at various forced tessellation levels and at the default, application-set level) indicated that tessellation consumed much more time on the R9 290X than on the GTX 780 Ti.

However, this wasn't because of a GameWorks-specific library. The tessellation issue was caused by pumping huge amounts of tessellation into a scene, in a manner that arguably slowed the card without meaningfully impacting the image quality the end user perceived. It wasn't as bad as Crysis 2, which poured millions of triangles into flat surfaces and rendered entire oceanscapes underneath solid terrain, but the impact was there.

I honestly feel you do paid-for articles and don't bother doing research. We ran tests here (on this forum) and AMD hammered nVidia with tessellation.

Have a read of this
http://forums.overclockers.co.uk/showthread.php?t=18587179

I understand you do these articles for a living, and I respect that, but you should do proper testing prior to posting them.
 

Surely Unigine don't use silly amounts of tessellation, though, unlike Nvidia/GameWorks? That really proves nothing, Greg, and your questioning of Joel borders on embarrassing at times.
 

The constant berating of GameWorks is just as embarrassing. This is two articles now in which Joel has attacked GameWorks, with no plausible reason. The first time, he admitted that AMD put him onto it; and this time? I can't see why GameWorks is so bad when it works well on AMD cards. Surely, while it works well, there is no reason for concern, or are you just taking the moral high ground?
 

The articles are definitely justified. Without GameWorks you would not see a situation where a mid-range Nvidia GPU beats out a high-end AMD GPU. In no other situation, on any game engine, would you see such an occurrence. That alone is worth investigating. It's not the first time Nvidia have been up to such tricks either, Greg. There is the much-publicised Crysis 2 fiasco of over-tessellating things that cannot be seen on screen to deliberately harm AMD performance. There is no smoke without fire, Greg. I welcome this type of journalism that exposes such shoddy practices. If AMD ever get caught with their hand in the cookie jar, you can be sure Joel will report on that as well. He is not pro-AMD, I can promise you that. Anyway, this article paints negatives for both sides rather than just raising valid questions about GameWorks.
 
Heaven uses masses of tessellation and the 6-series from nVidia was shocking, with AMD owning it in the bench thread that is run here.

As for tessellation, this is a feature of DX11, which was created in 2009... That is four years ago, yet AMD have not bothered to improve their architecture (if you are saying they are still rubbish at tessellation), and with it being a major plus in games, don't you think they should have by now?

Nvidia's PolyMorph engines do the tessellation, and when tessellation isn't being used they work on other parts of the pipeline. nVidia have a good tessellation engine.
 
Gregster,

Benchmarks performed in Unigine have little and less to do with benchmarks performed in other titles. I did not benchmark Sanctuary, I made no claims about Sanctuary, I did not publish results or statements implying otherwise. Test results in Sanctuary (a non-GW title) do not prove or illustrate anything about AMD's tessellation performance in Arkham Origins. At best and in aggregate, a survey of tessellation performance in multiple titles might tell us something about how much performance AMD and Nvidia gain or lose when that feature is enabled.

I am familiar with the amount of time the GTX 770 and R9 290X spend in-driver performing tessellation per frame because I used Nvidia's and AMD's publicly available analysis tools, combined with Visual Studio 2010, to capture and record a frame-by-frame breakdown of precisely where the GPU driver spent its time. I then compared the results to determine where the differences were.

In short, I learned how to use an entirely new suite of software to perform the appropriate tests and analysis rather than taking any vendor's word on the topic.
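
(For anyone who wants to reproduce the comparison step: here's a minimal sketch, in C++, of how per-call driver timings from two exported logs could be summed and compared. The actual vendor tools are GUI applications; the "call,milliseconds" CSV format and file names below are hypothetical stand-ins for whatever you export from them.)

```cpp
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Sum total milliseconds per driver-call name from "call,ms" CSV rows.
std::map<std::string, double> loadTimings(const std::string& path) {
    std::map<std::string, double> totals;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream row(line);
        std::string call, ms;
        if (std::getline(row, call, ',') && std::getline(row, ms)) {
            totals[call] += std::stod(ms);  // rows assumed well-formed
        }
    }
    return totals;
}

int main() {
    // File names and CSV layout are hypothetical stand-ins for the tools' exports.
    auto amd = loadTimings("r9_290x_frame.csv");
    auto nv  = loadTimings("gtx_770_frame.csv");
    for (const auto& [call, amdMs] : amd) {
        auto it = nv.find(call);
        if (it != nv.end() && it->second > 0.0) {
            std::cout << call << ": AMD/NV time ratio "
                      << amdMs / it->second << "\n";
        }
    }
}
```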

Furthermore, multiple tessellation investigations at multiple websites have demonstrated that while Nvidia retains an advantage in this area, its advantage is far greater when the amount of tessellation applied is enormous.

http://techreport.com/review/25509/amd-radeon-r9-290x-graphics-card-reviewed/6

In the first graph, the GTX Titan is 40% faster than the R9 290X. In the second graph, the GTX Titan is 2.5x faster than the AMD R9 290X.

In short, cranking tessellation up to huge levels has a disproportionate impact on AMD in many titles. That does not mean every title must show this impact, because the "Extreme" setting in a given engine doesn't tell you how *much* tessellation is actually being applied. In Crysis 2, turning on tessellation hurt AMD far more than Nvidia because the level of tessellation was tuned to ensure it. Flat surfaces stuffed with hundreds of millions of triangles, and invisible water, are excellent methods of crippling a card with a known weakness in an area where your own card excels.

I thought it was understood that when I asked for proof that AMD thrashed Nvidia at tessellation, I was asking for proof in the relevant title: Arkham Origins. I have driver timing data gathered myself from my own hardware. I tested the impact (in terms of FPS and in terms of how much time the driver had to spend performing tessellation) by altering the tessellation level through the Catalyst Control Center.
 

We have a Batman Arkham Origins Bench thread.

1. 133 - GTX 780Ti - Vsg28
2. 127 - R9 290X - Kaapstad
3. 126 - GTX Titan - Whyscotty
4. 125 - GTX 780 - Geeman1979
5. 124 - GTX 780Ti - MJFrosty
5. 124 - GTX Titan - Gregster
6. 112 - GTX 780 - Kei
7. 102 - R9 290X - Tommybhoy

http://forums.overclockers.co.uk/showthread.php?t=18585432

Now the 290X of Kaapstad is run at 1260/1625. It took a 780Ti Kingpin Edition @ 1463/1968 to beat Kaapstad.

If WB Montreal used massive amounts of tessellation and AMD cards are so bad at it, they wouldn't be sitting in second place.

If we look at another game (Sniper Elite V2)

1. Score 76.6 GPU 780ti @1420/1965, CPU 4770k @4.4, khemist Link
2. Score 73.7 GPU 780ti @1315/1960, CPU 4960X @4.8, MjFrosty Link
3. Score 71.5 GPU 780ti @1310/1832, CPU 2600k @3.8, Parabellum Link
4. Score 69.6 GPU nvTitan @1267/1884, CPU 3930k @4.625, Gregster Link
5. Score 69.1 GPU nvTitan @1280/1877, CPU 4770k @4.7, whyscotty Link
6. Score 65.2 GPU 780 @1398/1928, CPU 4770k @4.7, whyscotty Link
7. Score 63.7 GPU 780ti @1283/1900, CPU 4770k @3.5, jh30uk Link
8. Score 57.8 GPU 290X @1260/1625, CPU 4930k @4.7, Kaapstad Link
9. Score 56.3 GPU 780 @1280/1852, CPU i7 960 @4.2, triss Link
10. Score 56.0 GPU 290X @1240/1500, CPU 4770k @4.6, uksoldierboy Link

That 1260/1625 290X can't even beat a 780, let alone a 780Ti (don't forget, this is an AMD-sponsored game). Ohhhh, and uber sampling enabled, Matt ;)

Sleeping Dogs bench thread is again the same.

1. Score 89.9, GPU nvTitan @1320/1901, CPU 2500k @5.0 khemist Link
2. Score 89.3, GPU 780ti @1250/1962, CPU 4770k @4.5 zia Link
3. Score 88.8, GPU 780ti @1295/1950, CPU 4960X @4.7 MjFrosty Link 322.21 drivers
4. Score 86.5, GPU nvTitan @1280/1802, CPU 3930k @4.8 whyscotty Link
5. Score 86.1, GPU 780ti @1208/1800, CPU 4770k @4.6 Dicehunter Link 331.82 drivers
6. Score 85.3, GPU nvTitan @1280/1879, CPU 3930k @4.625 Gregster Link 332.21 drivers
7. Score 84.8, GPU 780 @1400/1850, CPU 3770k @5.0 Geeman1979 Link
8. Score 84.5, GPU 290X @1200/1625, CPU 3970X @4.9 Kaapstad Link 13.11 beta 8 drivers
9. Score 83.5, GPU 780 @1424/1877, CPU 3930k @5.0 whyscotty Link 332.21 drivers
10. Score 82.9, GPU 780ti @1150/1750, CPU 4770k @4.4 zia Link

This is the second time you have had a go at GameWorks, Joel, and Matt and his mates have jumped on board, but I have shown you how the game you are berating gives AMD the advantage.

Again, if AMD are so bad at tessellation, how come they are doing so well in B:AO?
 

You're not benching in the same configuration I used. And that's fine -- it really is, if you want to look at figures with MSAA enabled, because MSAA is a valid part of the conversation. Nonetheless, turning MSAA to full completely changes the performance ranking.

FXAA High is the "Default" option set on cards if you load the game and then hit Backspace for "Apply Defaults." It's also the last option in the stack. If your understanding of game details is limited to "Push the slider to the right as far as it goes," then FXAA High is what you end up with on a non-Nvidia card.

Regardless of that, if you want to isolate tessellation performance, you have to do what I did, and test every single feature separately. All results below given for R9 290X vs. GTX 770. Baseline configuration: 1920x1080, All DX11 features enabled, FXAA High.

Baseline: 148 / 145
Normal Ambient Occlusion: 184 / 181
Normal Geometry: 154 / 148
Normal Dynamic Shadows: 157 / 155
No FXAA (All Other DX11 Enhanced): 152 / 149

All Normal (DX11 Still Enabled): 217 / 211
All Normal (DX9 Forced via Command Line): 229 / 222

You can't evaluate the impact of tessellation and MSAA simultaneously -- the point is to run the game engine through tests in which each specific area is isolated to the maximum extent one can do so.
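
(To make the isolation arithmetic explicit, here's a minimal sketch that turns the figures above into per-feature costs. The FPS gained by dropping one setting to Normal approximates what that setting alone costs each card; the struct and helper here are mine for illustration, not from any test harness.)

```cpp
#include <cstdio>

// One row per isolation test: the enhanced setting dropped to Normal (or off),
// plus the resulting average FPS for each card (the figures quoted above).
struct Test { const char* setting; double r9_290x; double gtx_770; };

int main() {
    const Test baseline = {"Baseline (all DX11, FXAA High)", 148.0, 145.0};
    const Test tests[] = {
        {"Normal Ambient Occlusion", 184.0, 181.0},
        {"Normal Geometry",          154.0, 148.0},
        {"Normal Dynamic Shadows",   157.0, 155.0},
        {"No FXAA",                  152.0, 149.0},
    };
    // The FPS gained by lowering one setting approximates what that setting
    // alone costs each card relative to the baseline configuration.
    for (const Test& t : tests) {
        printf("%-26s R9 290X +%4.1f%%   GTX 770 +%4.1f%%\n", t.setting,
               (t.r9_290x - baseline.r9_290x) / baseline.r9_290x * 100.0,
               (t.gtx_770 - baseline.gtx_770) / baseline.gtx_770 * 100.0);
    }
}
```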

I know two things about how tessellation performs in Arkham Origins. First, I know the amount of time the GPU spends on tessellation in-frame. During the first scenes of the AO benchmark, the R9 290X spends roughly 2x as much time performing tessellation as the GTX 770 does. It also spends slightly more time performing soft shadows.

But second, we can isolate relative NV vs. AMD performance by comparing the sections of the benchmark that test tessellation. The first three scenes of the AO benchmark are a test of Dynamic Shadows and tessellation. If you look at them, you can see this -- they show Batman's cape whipping wildly in the wind, and they show some thugs walking around on the balcony. The third test is just a rotating camera that pans the balcony.

The first two tests are dynamic shadow / tessellation hybrids. The third test is pure tessellation.

In that third test, the R9 290X scores an average of 133 FPS. The GTX 770 scores 142 FPS. Two things should stand out:

1). The R9 290X "recovers" 11% of its frame rate from the early part of the test to the end (meaning its final frame rates are 11% higher). The GTX 770, in contrast, is just 2% slower in the tessellation and dynamic shadows section of the test than it is in the final measure.

2). When I use AMD's Catalyst Control Center to lower the amount of tessellation applied in-game, the R9 290X's performance in this particular scene shoots from 133 FPS to 150 FPS, a gain of 13%. Total performance in the entire benchmark rises from 148 FPS to 158 FPS.

This tells us that while tessellation does not explain why the R9 290X performs so poorly against the GTX 770 in the entire game, tessellation is deployed at a level that favors NV far more than AMD.

The ground, I might remind you, looks like this:

http://i.imgur.com/W6914aO.png

Batman's cape: http://i.imgur.com/Ae5H8ys.png


Nor is this unique to the benchmark scene. Elsewhere in Gotham, the ground looks like this:

http://i.imgur.com/c8wINJB.png

Huge amounts of polygons poured into the scenery in an area where AMD has a known performance disadvantage against Nvidia.

Interestingly enough, Gregster, even 8x MSAA doesn't completely remove the evidence. While the R9 290X is much faster than the GTX 770, if we compare performance in the first three scenes of the test we see the GTX 770 scores 62 FPS in the first three scenes and 68 FPS overall. The R9 290X scores 100 FPS in the first three scenes and 112 FPS overall.

In other words, even with 8x MSAA, the GTX 770 recovers 9% in the back half of the test, while the Radeon R9 290X recovers 12%. That fact, combined with evidence of huge amounts of tessellation, knowing the AMD card spends 2x as much time on tessellation as its NV counterpart, knowing that this has been a historic weakness of AMD cards, and knowing that this exact behavior has popped up 2x previously in TWIMTBP titles leads me to conclude that *yes*, tessellation combined with default preferences that favored the use of FXAA over MSAA was used to create an environment that favored Nvidia over AMD in this title.
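
(For clarity, the "recovery" percentages above are computed like this; a trivial sketch using the 8x MSAA figures quoted in the previous paragraph.)

```cpp
#include <cstdio>

// "Recovery": how much higher the overall average is than the average across
// the tessellation-heavy first three scenes.
double recovery(double firstThreeScenes, double overall) {
    return (overall - firstThreeScenes) / firstThreeScenes * 100.0;
}

int main() {
    // 8x MSAA figures quoted above.
    printf("GTX 770: %.1f%%\n", recovery(62.0, 68.0));    // ~9.7%
    printf("R9 290X: %.1f%%\n", recovery(100.0, 112.0));  // 12.0%
}
```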
 
Kaap was also using the 13.11 beta 8 drivers. There may be improvements from some of the more recent releases?
 

So your set of contrived data is better than Gregster's because PC gamers are too stupid to know the difference between FXAA and MSAA?

Who would buy a top-tier graphics card and then run on medium-type settings? Seriously, anyone spending that kind of money on a gaming PC and not realising the difference between FXAA and MSAA needs more help than your article can provide.

You start with a theory: GameWorks libraries harm performance on AMD. You test this by turning each of the GameWorks features on and off; on AMD you get no extra drop-off, so the theory is not proven. However, in a very specific set of circumstances (a single FXAA setting, FXAA also being an Nvidia piece of tech) you show an oddity of performance, a specific Nvidia card performing better than it should versus AMD, and you decide to use this to "prove" the theory you've already disproven. It doesn't really stack up. Independent testing shows 290Xs in fact keeping up with Nvidia cards BETTER than in non-GameWorks titles.
 
I get a game and the first thing I do is set everything to maximum and work back from there. I have done this in all my gaming experiences and on every card I own. In the early days I used to rely on my senses telling me what was playable and what wasn't, but now we have some fantastic programs like FRAPS that do it for us. If a game feels sluggish, I start to drop settings. Batman Arkham Origins looks pretty much playable with max settings all the way down to the 7870/660, as you can see from this graph. Of course, playable frame rates are subjective.

[Benchmark graph: Batman Arkham Origins, max settings]


Nothing looks untoward on that bench - a 7970 beating a 680.

[Benchmark graph: Batman Arkham Origins, FXAA]


Now when we look at this bench, which is using FXAA, it looks playable all the way back to a 6870. So to me, if you have a 7850 or under, or a 660 and under, turn down settings, because you won't get decent frames with 8x MSAA. I honestly can't believe that two articles have been made about this one game when, as I have shown you, AMD do better in this game than in any other that has been user-tested on here. Specific AMD titles are getting owned by nVidia cards, and yet Batman Arkham Origins gets owned by AMD, but your issue is with lower settings??? LOL
 
The other question that no one seems to ask is: does Batman use command lists?
In Civ5 a 580 levels with a 7970 (and beats it squarely in SLI/Crossfire); was that a GameWorks title? :D
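
(Worth noting: an application can query whether the driver natively supports D3D11 command lists, rather than having the runtime emulate them. A minimal sketch; it assumes an already-created device, and the helper name is made up.)

```cpp
#include <d3d11.h>

// Returns true when the driver natively records D3D11 command lists on
// deferred contexts; when false, the D3D11 runtime emulates them in software.
// Minimal sketch; assumes 'device' was created elsewhere.
bool driverSupportsCommandLists(ID3D11Device* device) {
    D3D11_FEATURE_DATA_THREADING caps = {};
    if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                           &caps, sizeof(caps)))) {
        return false;
    }
    return caps.DriverCommandLists == TRUE;
}
```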
 