Are there some new Nvidia drivers coming?

They don't all use the same levels of GPU compute. "A compute-heavy game like Tomb Raider" means exactly that: it uses a lot of it. I don't know how you can see that as anything else; it affects one aspect of the GPU more than usual, assuming the usual is not at those levels: heavily compute-dependent lighting, shadows, physics, 'realistic hair rendering'.

...

A GPU is a compute device. Everything is compute... There are different types of data complexity, and different degrees of granularity, but it's all compute. That's what GPUs do, and that's what games require in order to display the graphics. This is what I'm trying to explain... It's not a complex concept.

Unless you want to go back 10-15 years, to a time when the ability to display textures was the limiting factor, it's all compute. The key is passing the data through to the compute units (the SPs) as efficiently as possible, which is a mixture of low-level hardware and software. It has been since the GeForce 2 era.

Saying one game or another is "particularly compute heavy" is nonsense. Tomb Raider may have a higher proportion of physics effects than some other games, which places different requirements on the design of a GPU and the drivers that control it (see my above post for details on the progression from "smooth" to "lumpy" data - and yes, those are technical terms). But it's all compute. That's. What. GPUs. Do.

That's why they're designed to perform as many computations (i.e. floating point operations; "FLOPS") as possible. This is done by having a large number of processing units ("SPs") running in parallel, at as high a clock speed as possible. Double the number of computations the device is capable of and you double the potential performance. The challenge is maintaining efficiency through the pipeline as data throughput increases, and that is what "efficient" GPU design comes down to, generation after generation.
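To put rough illustrative numbers on that, here's a minimal Python sketch of the peak-FLOPS arithmetic. The core counts and base clocks below are the commonly quoted figures for a couple of Kepler cards, used purely as an example rather than anything measured here:

```python
# Minimal sketch: theoretical peak FP32 throughput.
# Peak FLOPS = number of shader cores (SPs) x 2 ops per clock (fused multiply-add) x clock speed.

def peak_gflops(shader_cores, clock_mhz, ops_per_clock=2):
    """Theoretical peak single-precision GFLOPS."""
    return shader_cores * ops_per_clock * clock_mhz / 1000.0

# Commonly quoted core counts and base clocks (MHz) - illustrative, not measured here.
cards = {
    "GTX 680 (GK104)": (1536, 1006),
    "GTX Titan (GK110)": (2688, 837),
}

for name, (cores, mhz) in cards.items():
    print(f"{name}: ~{peak_gflops(cores, mhz):.0f} GFLOPS peak FP32")
```

Whether a game ever sees that peak depends on feeding the SPs efficiently, which is exactly the pipeline point above.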
 
I have just done some practical testing

I have run both a Titan and a GTX 690 at identical clock speeds on the Heaven 4 bench. These cards boost differently from each other, so getting them running at the same clock speed involved using different pre-boost settings. Once the cards were up and running, they both went round the Heaven bench at 1058MHz. Memory settings were left at stock.

Titan settings and clock speed: [image: titanh1.jpg]

GTX 690 settings and clock speed: [image: heaven6901.jpg]

Titan Heaven 4 result: [image: titanvgtx690.jpg]

GTX 690 Heaven 4 result: [image: gtx690vtitan.jpg]

Just doing a simple calculation

Titan = 14 SMX
GTX 690 = 16 SMX

Titan Heaven score 1500
GTX 690 Heaven score 1707

(1500/14) x16 = 1714.29

Keeping it simple, it looks like the SMX modules are scaling pretty well across these cards. It also means that, working on the basis that the drivers are already doing a good job for the GTX 690, there is not a lot left in the Titan for future drivers to extract.
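The same back-of-the-envelope projection written out as a quick Python sketch, assuming (as above) that the Heaven score scales linearly with SMX count at matched clocks:

```python
# Per-SMX projection from the numbers above, assuming the Heaven score scales
# linearly with enabled SMX units at the same clock speed (1058MHz).

titan_smx, gtx690_smx = 14, 16          # Titan vs GTX 690 (2 x 8 SMX)
titan_score, gtx690_score = 1500, 1707  # measured Heaven 4 scores above

projected_690 = titan_score / titan_smx * gtx690_smx
gap_pct = 100 * (projected_690 - gtx690_score) / gtx690_score

print(f"Projected GTX 690 score from the Titan's per-SMX rate: {projected_690:.2f}")  # ~1714.29
print(f"Actual GTX 690 score: {gtx690_score}")
print(f"Difference: {gap_pct:+.1f}%")  # roughly +0.4%
```

With the projection landing within half a percent of the actual GTX 690 score, the per-SMX throughput of the two chips looks essentially identical at matched clocks.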
 
Nope, I didn't say 'may', I said:



I'm not saying that they definitely will, I'm just saying that it is possible.

But of course I will now be pulled up on the semantics of what I said rather than the gist of the comment, as is the way these things invariably work here. :rolleyes:

And again I am asking you to give any sort of factual basis for what you are saying, rather than acting like a sulky teenager and responding like this:

Don't be silly Roff, how dare we even suggest that Nvidia may be capable of doing the same thing that AMD did and bring out a driver that gives a nice boost to performance. :rolleyes:

when people try to explain why there won't be a big performance-boosting driver from Nvidia and why there was one from AMD.
 
Ease up, Melmac. He did say it may be possible, and without being a Duff-Man/Xsistor/Rroff/Drunkenmater, all of whom are very sharp on these details, there is no harm in thinking there may be a boost in performance.

I read it as a little tongue in cheek and took it as such.

Anyways, it is all good :)
 
Keeping it simple, it looks like the SMX modules are scaling pretty well across these cards. It also means that, working on the basis that the drivers are already doing a good job for the GTX 690, there is not a lot left in the Titan for future drivers to extract.

Thanks for taking the time to do that. It looks pretty definitive that there won't be large incoming performance gains for GK110 over GK104. Without knowing the average scaling efficiency and/or any overheads present on the SLI setup we can't rule out smaller gains, but it doesn't look very promising.
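As a rough sketch of that caveat: if the GTX 690's SLI scaling in Heaven is less than perfect, the per-SMX rate derived from its measured score understates GK104's true per-SMX throughput, which leaves a little theoretical headroom for GK110. The efficiency values below are made up purely for illustration, not measured:

```python
# Illustrative only: how an assumed SLI scaling efficiency on the GTX 690 would
# shift the implied per-SMX ceiling for the 14-SMX Titan. Efficiency values are
# hypothetical, not measurements.

titan_smx, gtx690_smx = 14, 16
titan_score, gtx690_score = 1500, 1707  # measured Heaven 4 scores above

for sli_efficiency in (1.00, 0.95, 0.90):
    per_smx_ideal = gtx690_score / (gtx690_smx * sli_efficiency)  # per-SMX rate with ideal scaling
    implied_titan_ceiling = per_smx_ideal * titan_smx
    headroom_pct = 100 * (implied_titan_ceiling - titan_score) / titan_score
    print(f"SLI efficiency {sli_efficiency:.0%}: implied Titan ceiling ~{implied_titan_ceiling:.0f} "
          f"({headroom_pct:+.1f}% vs the measured 1500)")
```

Even at an assumed 90% SLI efficiency the implied headroom is only around ten percent, which fits the "smaller gains at best" reading.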
 
I think where humbug is getting confused is that Nvidia cards are having a tough time with OpenCL, whilst being great with CUDA and OpenGL.

AMD have put a lot of effort into trying to convince game developers to start using OpenCL more... but in the same way as PhysX, there are barely a handful of titles that have bothered to go down this path.

AMD may have just made a rod for their own back though, as clearly, if more game makers go this route, it will be very easy for Nvidia to improve OpenCL support in Maxwell.
 
AMD may have just made a rod for their own back though, as clearly, if more game makers go this route, it will be very easy for Nvidia to improve OpenCL support in Maxwell.

I certainly hope that they do - open platforms are good for everyone, and we might start seeing more widespread support for GPU physics if it takes off. Unfortunately, right now programming in OpenCL is far more difficult than in CUDA. There are so many pre-existing libraries and efficient packages for CUDA now, and OpenCL is nowhere near this (yet).

I'm hoping that OpenCL support in the new consoles will drive development of game-related GPGPU libraries, but for now CUDA is still the default for scientific computing with GPUs. This makes it the more attractive package for PC development, even if it restricts the market.
 
I certainly hope that they do - open platforms are good for everyone, and we might start seeing more widespread support for GPU physics if it takes off. Unfortunately, right now programming in OpenCL is far more difficult than in CUDA. There are so many pre-existing libraries and efficient packages for CUDA now, and OpenCL is nowhere near this (yet).

I'm hoping that OpenCL support in the new consoles will drive development of game-related GPGPU libraries, but for now CUDA is still the default for scientific computing with GPUs. This makes it the more attractive package for PC development, even if it restricts the market.

It's why I'm not (so far) particularly optimistic about HSA, etc. with regard to OpenCL and industry - I see a lot of idealistic posts about it, but people seem oblivious to the momentum the competition already has. It's not going to be enough for AMD simply to put the tools out there and get big names signed on; they are going to have to strive to get people actively involved and put some real effort into getting the ball rolling.
 
I think where humbug is getting confused is that Nvidia cards are having a tough time with OpenCL, whilst being great with CUDA and OpenGL.

AMD have put a lot of effort into trying to convince game developers to start using OpenCL more... but in the same way as PhysX, there are barely a handful of titles that have bothered to go down this path.

AMD may have just made a rod for their own back though, as clearly, if more game makers go this route, it will be very easy for Nvidia to improve OpenCL support in Maxwell.

I'm not getting confused, it's exactly the point I'm making...

It's because they use those resources differently and are geared to work with different instructions, CUDA vs OpenCL for example.

It's just Duff-Man being argumentative with me while at the same time illustrating exactly what I'm saying. It's quite surreal, and amusing.

Perhaps he's confused.
 
I'm not getting confused, it's exactly the point I'm making...



It's just Duff-Man being argumentative with me while at the same time illustrating exactly what I'm saying. It's quite surreal, and amusing.

Perhaps he's confused.

Chillax Humbug. :)
 
It's just Duff-Man being argumentative with me while at the same time illustrating exactly what I'm saying. It's quite surreal, and amusing.

Perhaps he's confused.

Read my post history. I don't get into petty arguments. I have no interest in "AMD vs Nvidia" arguments, whose drivers "suck balls", or whether xyz card is a waste of money (etc). My only interest is in the technology. How GPUs work, and what the future may hold. I approach the problem scientifically, which is probably a consequence of my training through work and education (I'm a University researcher and this is related to my field), and I have a fairly good understanding of how GPUs work.

In the past I've learned a great deal from knowledgeable others on these forums and elsewhere. In the meantime I've advanced my own knowledge in the field partly through my job, and partly through independent research. I still learn the odd thing or two here, from the few other knowledgeable people we have, and I'm always grateful for the chance to do so.

Where I can, I like to contribute to others' understanding of GPU technology. Many see GPUs as 'magical' black boxes, when the basic processes are actually quite well defined and in many cases have very predictable outcomes. You can look back at my predictions of how the last few generations of GPUs would be composed, and how they would perform, if you want evidence of this. You'll find they've never been too far off the mark.


If your ego is so fragile that you must turn everything into an "argument" in which you "turn out to be in the right", then there is nothing I can do about that. That's your problem, not mine. You should think, though, about what this says about you and how it makes you seem to others... Believe it or not, this was never an argument. It started out as a discussion between myself and Rroff (another highly knowledgeable member of this forum), and evolved into an attempt by me to increase your knowledge of the subject. Clearly that is not something you are interested in.


I have no interest in your ego, nor in arguing with you. But when I see people make statements that are factually incorrect or misleading with regard to GPU technology, I will speak up. Not because I care one fig about you personally, or about "showing you up", or however else you may choose to see it, but because misleading information on the forum leads to others being misled.


edit:


If you would like to actually discuss something then please go ahead - list your viewpoints clearly and I'll address them from a neutral perspective (as always). I'll have all the time in the world for a mature discussion on one of my pet topics :) But if your only interest is in petty one-upsmanship and "winning" arguments, then I have no time for you.
 
Other than that, I think the thread has been quite productive; between your workings-out and Kaapstad's testing, it seems unlikely that any future performance gains will come from the difference between GK104's and GK110's shaders.
 
Other than that, I think the thread has been quite productive; between your workings-out and Kaapstad's testing, it seems unlikely that any future performance gains will come from the difference between GK104's and GK110's shaders.

It does seem that way (unfortunately - since I now have two GK110s!). Performance does seem to scale pretty linearly with computational throughput already. It would be very interesting to see some PhysX-heavy benchmarks though... It's something of a surprise there isn't a "PhysX benchmark" of some kind floating about. Lumpy data is on the rise :p
 
Silly question: is it possible that the new revisions of the chips now in the 770 and upcoming 760s, which might have sorted out the rumoured hardware bugs in the old chips, could allow performance improvements yet to be shown in new drivers?
 
Silly question: is it possible that the new revisions of the chips now in the 770 and upcoming 760s, which might have sorted out the rumoured hardware bugs in the old chips, could allow performance improvements yet to be shown in new drivers?

Won't bring any changes to performance - AFAIK it's mostly tweaks to stuff that wasn't playing nice with boost.
 
Silly question: is it possible that the new revisions of the chips now in the 770 and upcoming 760s, which might have sorted out the rumoured hardware bugs in the old chips, could allow performance improvements yet to be shown in new drivers?

It's possible, but doesn't seem particularly likely. As far as I know, the GTX770 is based on the same GK104 design as the GTX680 was. I don't think Nvidia did another revision of GK104 for the GTX770 (chime in guys if there's info to the contrary out there).

It can run at a higher clock speed, and is allowed a little more power, which helps raise clocks, but this is most likely a result of the improved cooler, and perhaps also small improvements in the stability of the 28nm process at TSMC.

It would surprise me to see the GTX770 pull away from the GTX680 due to drivers.
 
I've not seen any hard information on the changes between GK104-400 and -425. The most knowledgeable/in-the-know people I've talked to say there are absolutely no changes except optimisation of the manufacturing process and some minor hardware bugfixes relating to boosting and power management, which fix some issues that were causing stutter.
 