Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

Yes, it WOULD shift the results. How can you not understand this? The memory bus on Hawaii is so much better that any advantage is lost. This would happen in the Thief thread as well if we applied more AA. Sleeping Dogs also favours NV cards, but it has a weird overlaying technique, so it's not directly comparable.

You're just fixated on this GameWorks library stuff because it's convenient. You need to draw a line somewhere; it could be that it's just not very well optimised for AMD cards anyway. There wasn't even a Crossfire profile when it was released, so are you telling me they weren't allowed to put that in their own drivers either? Although I'm sure if someone did some tests now, performance would be improved anyway.
 
In no circumstances should a 770 beat a 290X. Neither should a 660 beat a 7950 Boost, or a 560 Ti beat a 7770. Not without some serious gimping going on.
 
Makes no difference. Why is performance gimped for AMD cards only when using FXAA, the least demanding form of AA? I don't personally consider 38 fps playable, and without the gimping the fps would be much higher than that. A 560 Ti is averaging over 60 fps at 1200p and a 7770 is at 38 fps, yet the HD 7770 is a much faster card.

The HD 7770 is by no stretch of the imagination a much faster card than the 560 Ti. When it comes to gaming, overall the 560 Ti is the superior card.

People on those graphics cards would find 38 fps perfectly playable; hell, people game on fps figures we'd find unbearable.

While ideally the AMD cards shouldn't be gimped at all, I'd much rather have better performance than in the previous game, even if it's less than Nvidia's (with FXAA, which I repeat is crap), than have City, which was just crap fps all around.
 
Sorry, I meant the 550 Ti. The 7770 is only 1 fps faster than it, despite being the superior card. Anyway, it matters not. There is no way a 770 should be beating a 290X at 1080p by 5 fps, no matter how anyone tries to dress it up.
 
I don't know why it's even a debate. The new Batman uses the same engine with almost the same features as the old one, but now they use GameWorks (Nvidia shaders), and only Nvidia can optimise those shaders. Apart from the Nvidia-exclusive features, everything else should be native to the engine as part of DX. The big graphics engines aren't going to use GameWorks except for Nvidia-exclusive features, so I don't worry. Either way, I blame the studio for choosing GameWorks, not Nvidia. They got paid (or call it 'support') to use GameWorks and to deny optimisations to the customers who use AMD. That's why I will never touch a Batman game.
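Rough sketch of what I mean, purely to illustrate (all the names here are made up; this isn't the actual GameWorks or driver API): when the studio owns the shader source, either vendor's driver can see what is being asked of the GPU and swap in a tuned version, but when the effect lives inside a closed library, a competing driver only sees opaque work it has to run as-is.

```cpp
// Illustrative sketch only: made-up types, not the real GameWorks or Direct3D API.
#include <iostream>
#include <string>

// The only thing a driver can work with is what the game actually submits.
struct Effect {
    std::string name;
    bool sourceVisibleToDrivers; // engine-owned shaders vs. a closed middleware blob
};

// Toy "driver": it can only substitute a tuned path for shaders it can inspect.
void driverOptimise(const Effect& e) {
    if (e.sourceVisibleToDrivers)
        std::cout << e.name << ": driver can analyse it and swap in a tuned shader\n";
    else
        std::cout << e.name << ": opaque library call, driver has to run it as-is\n";
}

int main() {
    Effect engineAO{"Engine ambient occlusion", true};          // shipped as the game's own shaders
    Effect middlewareAO{"Middleware ambient occlusion", false}; // precompiled vendor library
    driverOptimise(engineAO);
    driverOptimise(middlewareAO);
    return 0;
}
```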

They have done it before. Does anyone remember that if you had an AMD GPU you couldn't enable MSAA? What a joke.

So before you decide that you belong to the green or the red army, first think as a customer and a person. Nvidia, AMD and the game studios don't care about you. They only care about the money.
 
Surely that depends on the game? Both this and Arkham City are heavy on tessellation...

A 770 won't have better tessellation performance than an R9 290X; AMD's tessellation performance has pretty much been "fixed" since the 79XX.

Matt's right in what he's saying: a 770 shouldn't be faster than an R9 290X in this game at any settings really (but I can overlook it, because of City's diabolical performance and the MSAA performance).
 
Crysis 2 was criticised for seemingly adding unnecessary amounts of tessellation to hurt AMD Radeon 6000 series GPUs.

The very same game now performs better on Radeon 7000 series GPUs than on GTX 600 Kepler GPUs, a complete turnaround.

The Radeon 7000 series is not the 6000 series; it does not have its tessellation problems, far from it. :)
 
Well, Unigine also seems like hard work for them. Wait for the next excuse with that one. Tessellation is still pretty under par on 290s.

Doesn't matter what bench thread gets thrown at the usual suspects, there's always something. Always, lol.
 
Why would there be an excuse with Heaven?

The fastest 770 gets nailed by the stock R9 290X (let's not post the odd example of a super-clocked 770 against Apache's obviously throttling R9 290).

It's not the tessellation that's causing Nvidia's FXAA performance gains, but because City's performance was so poor (even with AMD's ability to optimise), I can completely ignore the performance gain using FXAA, as performance across the board for AMD is still better than it was in City.
 
The 290X is not the fastest card on the Heaven 4 bench, but I think it can see off a GTX 770. :D

Single 290X @1280/1625

4930k @4.7

[Heaven 4.0 result screenshot]
 
AMD's tessellation is on par with Nvidia's. The 6K series had a weak chip, but that's no longer the case with Tahiti or Hawaii. Unigine has always had an architecture bias towards Nvidia cards.

In general you spread a lot of misinformation.

I don't know what you don't get from this. GameWorks uses closed-source shaders that AMD can't optimise through the pipeline. It's like doing hardware MSAA without optimising the pipeline. Don't try to spin bad news into a positive for your own preference. I remember when Nixxes didn't optimise Tomb Raider for Nvidia cards until a week after release. That was a bad thing for all the owners, but they fixed it. If they hadn't offered optimisations for Nvidia cards, and you already owned the game and couldn't play at optimal settings because Nixxes didn't want to optimise for your GPU, would you like that? I bet not.
 
Actually, Heaven 4 is pretty even-handed between AMD and Nvidia.

For a long time the HD 7970 was top dog on the bench.
 
Because GK104 is a gimped chip. Nvidia is strong again on Heaven/Valley with GK110.

4 GPUs

1. Score 5295, GPU 290X x 4, @1240/1625, CPU 4930k @4.8, Kaapstad
2. Score 5237, GPU nvTitan x 4, @994/1788, CPU 3930k @5.1, Kaapstad
3. Score 5195, GPU nvTitan x 4, @1176/1812, CPU 3960X @5.2, Vega
4. Score 4788, GPU nvTitan x 4, @1137/1612, CPU 3930k @4.8, Biffa
5. Score 3707, GPU 7990 x 2, @1185/1594, CPU 3930k @4.5, ToxicTBag
6. Score 3548, GPU 690 x 2, @1066/1800, CPU 3960X @4.9, Kaapstad
7. Score 1975, GPU 590 x 2, @612/855, CPU i7 980X @4.29, Kaapstad
8. Score 1275, GPU 5970 x 2, @850/1200, CPU i7 975 @4.27, Kaapstad

Some people can't take a hint. :D

If a GPU is gimped, that is not the fault of the benchmark; you could also argue that the Hawaii chip is gimped compared to GK110, as the latter has more transistors.

The benchmark is even-handed; if the GPUs are not up to the job, that is a different story. :)
 