what could have been

Where's your link to the nVidia statement saying that GK110 was originally supposed to be the GTX680?

I meant GK100, sorry, not GK110.

It is a reasonable assumption based on years of watching Nvidia's behaviour and their own internal naming conventions.

Their Gx100 parts have always been the first release on a new die shrink for the top-end cards, while Gx104 has been the mid-range.

Those Gx100 parts have always been large dies, pushing the TDP to the maximum of the current spec.

So when you see GK104 launched with a small die and a low TDP, it is a very reasonable assumption that GK100 couldn't be manufactured cost-effectively. We saw it with Fermi, where the original GF100 part was supposed to have 512 cores but couldn't be manufactured, so they salvaged it by disabling one cluster, leaving 480 cores. It looks like for GK100 they couldn't even do that.
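
For anyone who wants the cluster arithmetic spelled out, here's a minimal sketch; the 16-cluster, 32-cores-per-cluster layout is the commonly cited GF100 configuration, my assumption rather than anything stated in this thread:

```python
# Commonly cited GF100 (Fermi) layout: 16 shader clusters (SMs) of 32 CUDA
# cores each -- assumed figures from public spec sheets, not from this thread.
SM_COUNT = 16
CORES_PER_SM = 32

full_chip = SM_COUNT * CORES_PER_SM         # 512 cores as originally designed
salvaged = (SM_COUNT - 1) * CORES_PER_SM    # 480 cores with one cluster disabled

print(full_chip, salvaged)  # 512 480 -- matching the part that actually shipped
```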

I don't know the above for sure, of course; only people within Nvidia know for certain, but it is a very reasonable assumption based on Nvidia's history.
 

I can see where you're coming from, definitely; however, it appears that nVidia has changed, or at least altered, its naming convention.

The GK1XX naming doesn't necessarily map to their high-end x8x card any more, but rather to the largest, most performant chip they produce.

It's telling that they have changed their naming to some degree, as they have never before produced separate chips to differentiate their consumer cards from their pro cards.

spoffle said:
It doesn't have any compute abilities at all though, does it? Not one unit, as far as I understand it. I've heard (and I don't know if this is true) that the only way around it for Nvidia is to disable the compute effects via the drivers. I heard (again, I don't know if this is true) that that is where the massive gains came from in Tomb Raider (45%, or whatever the figure was).

It's double precision floating point performance that GK104 suffers in. If you compare GK104 to GK110, GK110 has over 10x the double precision performance, though I do believe this is deliberate, by design; as I suggested earlier, it would give people even less reason to buy Tesla and Quadro cards if nVidia's desktop cards had great compute performance.

The 680 has compute abilities, just really limited ones. Remember Nvidia put two GK104s together and called it the K10 for the Tesla market, because they had nothing else available.

As per the above, I fully think this was intentional.

The 560Ti variant of GF110 has 515 GFLOPs of double precision floating point processing power.

The GTX580 variant of GF110 has 666 GFLOPs of double precision floating point processing power.

To put that into perspective, compared to the above:

The GTX680 variant of GK104 has 95 (yes, ninety-five) GFLOPs of double precision floating point processing power.

If you compare that to the DP GFLOPs of the GK110 chip, at 1310 DP GFLOPs, you can see that nVidia have intentionally designed their desktop gaming chips to excel only at games; they do have the ability to do compute tasks, but aren't very good at them.
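
To make those ratios concrete, a quick sketch using only the figures quoted above:

```python
# Double precision throughput figures as quoted in this thread (GFLOPs).
dp_gflops = {
    "GTX560Ti (GF110)": 515,
    "GTX580 (GF110)": 666,
    "GTX680 (GK104)": 95,
    "GK110": 1310,
}

gk104 = dp_gflops["GTX680 (GK104)"]
for name, gflops in dp_gflops.items():
    print(f"{name:18s} {gflops:5d} GFLOPs  ({gflops / gk104:4.1f}x GK104)")
# GK110 comes out at ~13.8x GK104's DP throughput, consistent with the
# "over 10x" claim above, and even the older GTX580 manages ~7x.
```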

This has firstly allowed nVidia to produce a much more cost-effective GPU, as they haven't had to dedicate die space to DP performance, and secondly, it means they don't cannibalise Tesla sales from those who rely on CUDA-accelerated apps.

Because, as I said earlier, if their desktop GPUs had great double precision/compute performance, there would be less reason/justification for people to buy Tesla cards.

So I believe that they are going to continue that way with Maxwell too, and I think they will keep doing it for the foreseeable future, until CUDA is no longer the industry standard in the sectors where it's widely used, because traditionally AMD GPUs tend to have the stronger DP GFLOPs performance. For example:

The 7970 at 1050MHz on the core has 1075 DP GFLOPs.

If you compare that to GK110's DP performance of 1310 DP GFLOPs, that's only 1.21x the DP GFLOPs for 1.59x the die space.
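
A rough check of that comparison; note the die areas here are my assumed, commonly cited approximations, while the GFLOPs are the figures from this post:

```python
# DP GFLOPs as quoted above; die areas are assumed, commonly cited
# approximations (~561mm^2 for GK110, ~352mm^2 for Tahiti/7970).
gk110_dp, tahiti_dp = 1310, 1075      # GFLOPs
gk110_area, tahiti_area = 561, 352    # mm^2 (assumed)

print(f"DP ratio:   {gk110_dp / tahiti_dp:.2f}x")      # ~1.22x
print(f"Area ratio: {gk110_area / tahiti_area:.2f}x")  # ~1.59x
print(f"DP per mm^2: GK110 {gk110_dp / gk110_area:.2f}, "
      f"Tahiti {tahiti_dp / tahiti_area:.2f}")
# Tahiti works out to ~3.05 DP GFLOPs/mm^2 against ~2.34 for GK110, which
# is the point being made about AMD's stronger DP per die area.
```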

Now, currently, AMD's DP GFLOPs performance is largely irrelevant, because very little software exists to take advantage of it, due to the persistence of CUDA.

On another note, I would say that examining the differences in nVidia's GPU DP GFLOPs figures shows that GK104 was never intended to be the mid-range GPU at all, and was clearly their high-end gaming GPU, because previous nVidia GPUs had much higher DP GFLOPs performance for the same die size.
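
Putting rough numbers on that last point (again, the die areas are assumed, commonly cited figures rather than anything from this thread):

```python
# DP GFLOPs as quoted in this thread; die areas are assumed, commonly cited
# approximations (~520mm^2 for GF110, ~294mm^2 for GK104).
chips = {
    "GF110 (GTX580)": (666, 520),
    "GK104 (GTX680)": (95, 294),
}
for name, (dp, area) in chips.items():
    print(f"{name}: {dp / area:.2f} DP GFLOPs per mm^2")
# GF110 works out to ~1.28 DP GFLOPs/mm^2 against ~0.32 for GK104, i.e.
# the older chip carries roughly 4x the DP throughput per unit of die area.
```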

spoffle said:
Although I do agree with most of your post (agreeing with Spoffle :eek:), it is the above portion that I disagree with. Myself, I feel that it was quite early on in the Kepler design that Nvidia decided to go with the smaller-core second-tier chip rather than the usual massive top-tier chip. They did undoubtedly decide somewhere along the way to strip out a lot of the compute hardware, because, as we all know and as you quite rightly point out, GK104 doesn't have particularly good compute abilities.

Anyway, what I would like to see with Maxwell is a halfway-house-sized core: not the massive cores of the 580 and Titan, but not the smaller cores of the 680/670 either. Leave out the compute functionality by all means, just give us stonkingly fast GPUs and stop charging the earth for them.

Well, the issue with this is that it doesn't really work that way. Chip designs are finalised well in advance of the chips actually being produced.

Also, "stripped out" is a relative term in this context, but I think people use it in a literal sense and imagine that GK104 started as a different design, and that midway through, nVidia decided to remove parts of the core that directly influenced double precision performance.

When really, it's more that GK104 has had bits stripped out relative to GF110, and the way GK104 is now is more or less what nVidia always intended to produce. Chips can be changed, but only on a relatively small scale, rather than massive chunks being taken out and swapped.

So generally, the argument I've seen is that GK104 in its current iteration was originally going to be used in a GTX660Ti sort of GPU, but looking at the GPU's specs, I think it's clear that was never the case, and I think we'll see the same for Maxwell.

A ~300mm² GPU with very low double precision performance, but gaming performance where it should be. I can't imagine they'll make the mistake of crippling it with an inadequate memory bus this time around. The 2GB of RAM is less of an issue for Kepler than the memory bus, I'd say.
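
To illustrate why the bus matters more than the 2GB, a back-of-the-envelope sketch; the 256-bit/6Gbps GTX680 configuration and the 384-bit alternative are my assumptions, not figures from this thread:

```python
# Peak theoretical memory bandwidth = (bus width in bytes) x per-pin data rate.
# The 256-bit / 6Gbps GTX680 configuration is an assumed, commonly cited figure.
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * gbps_per_pin

print(bandwidth_gb_s(256, 6.0))  # 192.0 GB/s -- roughly the GTX680
print(bandwidth_gb_s(384, 6.0))  # 288.0 GB/s -- the wider bus being wished for
```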
 
Personally, I cannot see AMD being in a position to give us anything on a new process until very late in the year, probably December time. It will all depend on TSMC.
I suppose at least with these Nvidia refreshes they are covered for a while longer if TSMC do have problems.


Well, we'll see. I don't think October is out of the question.
 
It's about the technical aspects of what they've done with each GPU, and how and where they've crippled performance.

The make-up of the GK104 chips is very telling, really, especially when compared to previous families.

I made quite a few large technical posts about it a little while back; I'll see if I can dig them out.



I'm not being defensive; it just seemed like an odd comment to make when you probably know nVidia don't talk about this kind of thing openly.

OK, I just read your statement as factual, as if there were something around to back it up in an official capacity. If it's your view from reading various pieces of information, cool.
 
Yes, it certainly is my assessment of the situation; however, I wouldn't quite classify it as a view born purely of reading various bits of info.

It's more an analysis of nVidia's chips, current and old: how they were composed then and how they are composed now, which goes to show that GK104 was always intended to be the GTX680. If nVidia produced a new line of chips with very good CUDA performance, a lot of people would choose them over Quadro and Tesla cards.

The design of GK104 is very telling of the direction nVidia is trying to take their chips in, and of their intentions.
 
I partly agree with what you are saying, but I do think Nvidia did intend to have a CUDA part in the original 6** release. I think they were struggling to get anything near ready around the time AMD released the 79** series.

I then think that when they compared the performance of the 7970 at the time to the 680, they breathed a sigh of relief, as it let Nvidia off the hook (not saying the performance was bad, but maybe Nvidia were expecting more).

Just my thoughts; maybe way off, but like yours, I guess we will never know.

Anyway, it's all fun and games :)
 
I can see where you're coming from, definitely; however, it appears that nVidia has changed, or at least altered, its naming convention.

Are you sure about that? We're just about to see GK110 launch as the GTX780, much as last generation we saw GF110 launch as the GTX580.

To me, it all points to GK100 not being viable for some reason.


The Titan is an anomaly, though; I would have expected that to launch as the GTX780.
 
Are you sure about that?

I am certain that, right back at the beginning of the design phase, the big chip was going to be the top-tier part, just as it has been for the last god knows how many series of cards.
Of course, somewhere along the way ideas changed; whether this was down to unmanufacturability, mid-range part performance projections, or the colour of the boardroom wallpaper, we will probably never know.

100% sure of that, and it was confirmed by both HardOCP and Guru3D when Titan released.
 
You guys do realise that these things have been on the drawing board for probably three years or so, and you're guaranteeing us that at no time did Nvidia intend for their largest chips to be their mainstream top-tier cards :rolleyes:


I am not saying that I am 100% right, I am just saying it is possible, seeing as that is the way they have done it for the last god knows how many chip designs.
 