
Nvidia’s GameWorks program usurps power from developers, end-users, and AMD

Right, the odd example, lol. Not this one though.


Time to let this thread die again, methinks.

:confused:

Apache has an obviously throttled (and stock) R9 290; it's only just beating the fastest (and manually overclocked) 770. As an R9 290 its performance is an outlier, as it's not actually representative of what stock R9 290 performance is, since, you know, it's not actually running stock, it's downclocked.

While I'm not going to get into "GameWorks is bad, derp", I'm also not going to listen to "Tessellation!!!! derp".
 
4 GPUs

1. Score 5295, GPU 290X x 4, @1240/1625, CPU 4930k @4.8, Kaapstad
2. Score 5237, GPU nvTitan x 4, @994/1788, CPU 3930k @5.1, Kaapstad
3. Score 5195, GPU nvTitan x 4, @1176/1812, CPU 3960X @5.2, Vega
4. Score 4788, GPU nvTitan x 4, @1137/1612, CPU 3930k @4.8, Biffa
5. Score 3707, GPU 7990 x 2, @1185/1594, CPU 3930k @4.5, ToxicTBag
6. Score 3548, GPU 690 x 2, @1066/1800, CPU 3960X @4.9, Kaapstad
7. Score 1975, GPU 590 x 2, @612/855, CPU i7 980X @4.29, Kaapstad
8. Score 1275, GPU 5970 x 2, @850/1200, CPU i7 975 @4.27, Kaapstad

Some people can't take a hint.:D

If a GPU is gimped, that is not the fault of the benchmark; you could also argue that the Hawaii chip is gimped compared to GK110, as the latter has more transistors.

The benchmark is even-handed; if the GPUs are not up to the job, that is a different story.:)

The 7970 has 33% more stream processors and a 50% wider bus than GK104; it is faster in Unigine than the GTX 680/770.

Compare that with GK110, which has the same number of stream processors as Hawaii and a less dramatic bus-width difference, and GK110 overclocks a lot higher than Hawaii.

That's why.

Yet it's still just as fast in games as the 780 Ti because it's well optimised in games; optimisation in Unigine is actually not much good at all. It's only because the Tahiti GPUs are much more GPU than GK104 that Tahiti is able to beat it.
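For reference, the gap works out like this, assuming the commonly cited figures of 2048 stream processors on Tahiti versus 1536 on GK104, and a 384-bit versus 256-bit memory bus:

\[
\frac{2048}{1536} \approx 1.33 \;\; \text{(33\% more shaders)}, \qquad
\frac{384}{256} = 1.5 \;\; \text{(50\% wider bus)}
\]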
 
They can optimise through the pipeline just fine. It's a graphics library, not a programming interface. It's what they can't see that you should be arguing about. Sounds silly even saying it.

Now I'm arguing for you, that's definitely it for me lol.
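To illustrate the "graphics library, not a programming interface" point: an application (or the driver sitting underneath it) only ever sees a closed library through its exported entry points. A minimal sketch, with entirely hypothetical DLL and function names (the real GameWorks APIs differ):

```cpp
#include <windows.h>
#include <cstdio>

// Hypothetical signature of a function exported by a closed-source
// effects DLL; the implementation behind it is invisible to the caller.
using RenderEffectFn = int (*)(void* context, float quality);

int main() {
    // Load the closed-source library at runtime - no source code needed.
    HMODULE lib = LoadLibraryA("VendorEffects.dll");  // hypothetical name
    if (!lib) { std::puts("library not found"); return 1; }

    auto renderEffect = reinterpret_cast<RenderEffectFn>(
        GetProcAddress(lib, "RenderEffect"));          // hypothetical export
    if (renderEffect) {
        // The call and its arguments are observable at this boundary
        // (and again at the driver), even though the code inside is not.
        renderEffect(nullptr, 1.0f);
    }
    FreeLibrary(lib);
    return 0;
}
```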
 
The 7970 has 33% more stream processors and a 50% wider bus than GK104; it is faster in Unigine than the GTX 680/770.

Compare that with GK110, which has the same number of stream processors as Hawaii and a less dramatic bus-width difference, and GK110 overclocks a lot higher than Hawaii.

That's why.

Yet it's still just as fast in games as the 780 Ti because it's well optimised in games; optimisation in Unigine is actually not much good at all. It's only because the Tahiti GPUs are much more GPU than GK104 that Tahiti is able to beat it.

The bench still treats the GPUs the same; if they don't have the tools to do well, that is not the fault of the bench.

Bus widths on Heaven make very little difference; I have done a lot of testing with 256-bit, 384-bit and 512-bit.

Here is the poser for you to answer, if you can, though: why do 4 x 290Xs work so well together, not just on Heaven 4 but on other benches too?:D
 
They can optimise through the pipeline just fine. It's a graphics library, not a programming interface. It's what they can't see that you should be arguing about. Sounds silly even saying it.

Now I'm arguing for you, that's definitely it for me lol.

Are you talking to yourself? :p

Certainly wasn't relevant to anything I'd said, and I can't see anyone else you're replying to.
 
:confused:

Apache has an obviously throttled (and stock) R9 290; it's only just beating the fastest (and manually overclocked) 770. As an R9 290 its performance is an outlier, as it's not actually representative of what stock R9 290 performance is, since, you know, it's not actually running stock, it's downclocked.

While I'm not going to get into "GameWorks is bad, derp", I'm also not going to listen to "Tessellation!!!! derp".

You don't have to listen to anything, but you also have to realise tess is still not on par, hence why Joel touches on the excessive use of it in the original article. Silly muggle.

I couldn't be bothered to quote as it was bad enough having to dance around someone using technologies way out of context. Bye bye :)
 
They can optimise through the pipeline just fine. It's a graphics library, not a programming interface. It's what they can't see that you should be arguing about. Sounds silly even saying it.

Now I'm arguing for you, that's definitely it for me lol.

What are you saying? How can you optimize closed-source shaders? Rofl. I don't think you have any idea about programming or pipeline optimizations, going by your claims.
 
What are you saying? How can you optimize closed-source shaders? Rofl. I don't think you have any idea about programming or pipeline optimizations, going by your claims.

Now you sound clueless :P - AMD develop their own drivers; they can set them up in debug mode with a profiler attached and see exactly what commands are sent to the drivers, and hence rebuild a picture of what the GPU is being asked to do.

It's not like the GameWorks DLLs are unique in being closed source - the vast majority of game libraries are.
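A minimal sketch of what "debug mode plus profiler" means here, using made-up structures (real driver internals are far more involved): every command reaching the driver can be logged before it executes, regardless of whether it came from open or closed code:

```cpp
#include <cstdio>
#include <cstdint>

// Stand-in for one command arriving at the driver from the runtime.
struct GpuCommand {
    enum Kind { BindShader, SetTessFactor, Draw } kind;
    uint64_t arg;
};

// Debug/profiling hook: record the command before it is executed.
void logCommand(const GpuCommand& c) {
    switch (c.kind) {
        case GpuCommand::BindShader:
            std::printf("bind shader %llu\n", (unsigned long long)c.arg); break;
        case GpuCommand::SetTessFactor:
            std::printf("tessellation factor %llu\n", (unsigned long long)c.arg); break;
        case GpuCommand::Draw:
            std::printf("draw %llu indices\n", (unsigned long long)c.arg); break;
    }
}

// Driver entry point: whoever issued the command, it is visible here.
void submit(const GpuCommand& c) {
    logCommand(c);  // a debug build records the whole command stream
    // ... a real driver would translate and hand this to hardware ...
}

int main() {
    // Commands as a closed-source effects DLL might issue them.
    submit({GpuCommand::BindShader, 42});
    submit({GpuCommand::SetTessFactor, 64});
    submit({GpuCommand::Draw, 36000});
    return 0;
}
```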
 
Now you sound clueless :P - AMD develop their own drivers; they can set them up in debug mode with a profiler attached and see exactly what commands are sent to the drivers, and hence rebuild a picture of what the GPU is being asked to do.

Not really? If you have any experience, you know you don't see exactly what commands are issued when you are debugging a closed-source DLL. It's not accurate, and we're talking about a pipeline where everything needs to be accurate.
 
You don't have to listen to anything, but you also have to realise tess is still not on par, hence why Joel touches on the excessive use of it in the original article. Silly muggle.

I couldn't be bothered to quote as it was bad enough having to dance around someone using technologies way out of context. Bye bye :)

I've never claimed AMD's tess to be on par. I'm stating that the R9 290X is faster than the GTX 770 (this is fact, even with heavy tessellation, hence Heaven, and even when comparing a heavily overclocked GTX 770 to a STOCK R9 290X, let alone an overclocked one). It shouldn't be slower than the GTX 770 in Batman when using FXAA, but I don't care that it is (for the aforementioned reasons: City's performance blowing, it not being a GameWorks title, FXAA sucking, and MSAA performance having the hierarchy "right"); you're just being plain ignorant.

If you're going to act immature, be my guest.

I haven't even read the article; I think it's (currently) a non-story.
 
Not really? If you have any experience, you know you don't see exactly what commands are issued when you are debugging a closed-source DLL. It's not accurate, and we're talking about a pipeline where everything needs to be accurate.

Unlike attaching a debugger to an already-compiled binary, the drivers get a much more accurate representation of what is going on, as everything related to the GPU passes through them (ostensibly), unless someone's doing some low-level poking.

EDIT: No, you generally won't see exactly what the DLL is doing internally - but as far as shaders and GPU operations go, you will be able to rebuild exactly what it's using the GPU for and how, which is all you need for GPU optimisation.
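One assumed way this feeds into optimisation (a sketch of the widely described driver-side shader-replacement technique, not any vendor's actual code): the driver hashes each shader blob it is handed at creation time, so shaders identified as expensive during profiling can be swapped for hand-tuned equivalents without ever seeing the library's source:

```cpp
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

// FNV-1a hash over the shader bytecode the runtime hands to the driver.
uint64_t hashBlob(const std::vector<uint8_t>& blob) {
    uint64_t h = 14695981039346656037ull;
    for (uint8_t b : blob) { h ^= b; h *= 1099511628211ull; }
    return h;
}

// Hash -> hand-optimised replacement bytecode, built from profiling runs.
std::unordered_map<uint64_t, std::vector<uint8_t>> gReplacements;

// Driver-side shader creation: substitute if the shader is recognised.
std::vector<uint8_t> createShader(const std::vector<uint8_t>& blob) {
    auto it = gReplacements.find(hashBlob(blob));
    if (it != gReplacements.end()) {
        std::puts("known shader: using tuned replacement");
        return it->second;
    }
    return blob;  // unknown shader: compile as-is
}

int main() {
    // A shader as captured from a profiled title (dummy bytes here).
    std::vector<uint8_t> gameShader = {0xDE, 0xAD, 0xBE, 0xEF};
    gReplacements[hashBlob(gameShader)] = {0xCA, 0xFE};  // tuned version
    createShader(gameShader);
    return 0;
}
```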
 
I've never claimed AMD's tess to be on par. I'm stating that the R9 290X is faster than the GTX 770 (this is fact, even with heavy tessellation, hence Heaven, and even when comparing a heavily overclocked GTX 770 to a STOCK R9 290X, let alone an overclocked one). It shouldn't be slower than the GTX 770 in Batman when using FXAA, but I don't care that it is (for the aforementioned reasons: City's performance blowing, it not being a GameWorks title, FXAA sucking, and MSAA performance having the hierarchy "right"); you're just being plain ignorant.

If you're going to act immature, be my guest.

Well, if you're going to disregard the same engine from the previous game without GameWorks as a clear example, there isn't any point talking.
 
Unlike attaching a debugger to an already-compiled binary, the drivers get a much more accurate representation of what is going on, as everything related to the GPU passes through them (ostensibly), unless someone's doing some low-level poking.

GameWorks works perfectly for Nvidia's architecture. If you can't access and change the source code for your own hardware, debugging a compiled binary is almost useless for any kind of optimization.
 
Well, if you're going to disregard the same engine from the previous game without GameWorks as a clear example, there isn't any point talking.

:confused:

I don't think you're actually able to read, let alone comprehend what I'm saying..... It's honestly like you're replying to someone else, because that response doesn't make sense as a reply to my post (or posts I've made).
 
but I don't care that it is (for the aforementioned reasons: City's performance blowing, it not being a GameWorks title, FXAA sucking, and MSAA performance having the hierarchy "right"); you're just being plain ignorant.

If you're going to act immature, be my guest.

I haven't even read the article; I think it's (currently) a non-story.

Saying that as the person who hasn't even read the article? Now I know you're not serious lol

I'm done with you kidster :). Run along
 
Sorry? Weren't you the one trying to brag about an understanding of shader pipelines while throwing in the word 'rofl'?

You receive what you're given ;')

This thread is a waste of time; if any of the accused were the least bit bothered, they'd run their cards in this forum's very own bench for the game. But they'd rather whine about it.
 
Saying that as the person who hasn't even read the article? Now I know you're not serious lol

I'm done with you kidster :). Run along

Me not reading the article doesn't change anything.

I think GameWorks is a non-story. As I've said, City's performance was crap across the board, and it was a non-GameWorks title; Batman AO has better performance across the board (so I don't find GameWorks doom and gloom). The fact that an AA technique which is crap happens to benefit Nvidia doesn't mean much to me; I don't care.

But the GTX 770's performance advantage using FXAA isn't because of things like tessellation, because a GTX 770 is inferior to an R9 290X when it comes to tessellation. I don't care that the GTX 770 is ahead when using FXAA, but you're being ignorant as to why that is (as said, I don't care that it is).

The only one who's acting like a child is yourself.
 
Everyone is ignorant to it.

Aren't you as well? The same way people are ignorant of City's performance: e.g. the game is optimised badly, it's an NV title so it fully uses NVAPI extensions, it's UE 3.0 (which I touched on earlier), etc.

Nobody is acting like a child, besides you accusing people of not being able to read :)

The only thing we know for certain is that WB refused code that was submitted by AMD; the very fact it was rejected suggests it was specifically part of GameWorks. I'm not entirely sure how AMD weren't able to optimise FXAA (or, nearer the point, how it could in any way be so badly crippled because of it) without access to GameWorks specifically, but there we have it. It would probably make a lot more sense if AMD piped up and explained what was rejected, considering this whole situation was instigated by them in the first place.

It's utter rubbish, which is why this thread should just die.
 