GeForce Titan rumours

Look at AMD, they went for a 512bit bus on the 2900XT, and then dropped to a 256 bit bus because it really wasn't needed at the time, and took a while before moving to 384.

Yeah, but if you take into account the type of memory bus needed (the 2900 was over the top), the rest all fit the expected pattern; the 680 doesn't. While what I posted is all conjecture, it isn't all I'm basing this on - but that's a much longer story. For instance, if you dig into the older drivers, especially the early leaks, they tell the same story: parts designated under what were originally mid-range codes, etc.
 
This doesn't really make any sense, plus it's an editorial.

nVidia always use the same chip for the "80" and "70" part, so it would have made no sense to do what's being suggested there, and the "70" would hardly make it mid-range either.

The theory is that it was supposed to be the 660-based card, not a 670 - a 670 and 680 would always use the same chip anyway, just with a bit lasered off for the 70.

Seeing it called a 670Ti doesn't really mean much either, as that would be the first time a second-to-top chip had been called a "Ti".

We will never get to the real truth anyway :D
 
Yeah but if you take into account the type of memory bus needed (the 2900 was over the top) the rest all fit into the expected pattern, the 680 doesn't. While what I posted is all conjecture this isn't all I'm basing it on - but thats a lot longer story. For instance if you dig into the older drivers especially early leaks they also tell the same story, stuff designated under what was originally mid-range codes, etc.

That doesn't mean much though, because this is the first time nVidia have produced a different chip solely for the HPC sector - they'd have to call the chip something, right?

Their best chip is the 110, their second best is the 104, and so on. That doesn't mean cards based on the 104 are supposed to be "mid-range", just that it's no longer the highest chip they produce.

The power consumption and performance of the 680 over the 580 is in line with what we'd expect for a generational bump. As for the performance of a GK110-based chip, it doesn't make sense that it was ever intended for mainstream desktop GPUs.

Plus, you can see that nVidia have been dropping the memory bus width each time a new generation has come out, down from 512-bit; they have very clearly been trying to reduce the manufacturing cost of their products.

The 256-bit bus on the GTX 680 and 670 was entirely intentional, and doesn't mean they were mid-range. Even though AMD got the "jump" on them, nVidia had plenty of time to respond - in fact, nVidia would likely have had info on the 7900s long before it was publicly available.

If nVidia had wanted to put the GK104 on a 384-bit PCB, they could very well have done so, the same way AMD put the Tahiti GPUs on the 7870 PCBs.
 
I don't follow that logic; it seems fairer to me than selecting possibly biased individual game results, surely?

Well, either one is bad. But the relative performance charts are meaningless - if you actually go through the review, you'll see what I mean.

Some games simply don't work on some cards at all, which massively skews the validity and usefulness of a relative performance chart.

If you check out the individual games, there are instances where a game just isn't running well at all on an nVidia GPU, and others where the same is true on an AMD GPU.

Or some games where pretty much every card, from a 7970 down to something like a 660 (non-Ti), gets the same frame rate.

It always makes more sense to just look at benchmarks of single games from reputable sites.

For example, the chart puts a 7970 as being faster than a 5970 at 1920x1200, and a GTX580 at 90% of the performance of a 7970 - when now the GTX680 is about 90% that of a 7970.

I know the latter figure refers to the 7970 GHz Edition with better drivers, but the performance hasn't gone up as much as that would suggest.
 
The power consumption and performance of the 680 over the 580 is in line with what we'd expect for a generational bump. As for the performance of a GK110-based chip, it doesn't make sense that it was ever intended for mainstream desktop GPUs.

The power/performance increase doesn't entirely fit the pattern, and I don't think the GK110 was ever going to be the GTX680 either.

Plus, you can see that nVidia have been dropping the memory bus width each time a new generation has come out, down from 512-bit; they have very clearly been trying to reduce the manufacturing cost of their products.

The 256-bit bus on the GTX 680 and 670 was entirely intentional, and doesn't mean they were mid-range. Even though AMD got the "jump" on them, nVidia had plenty of time to respond - in fact, nVidia would likely have had info on the 7900s long before it was publicly available.

If nVidia had wanted to put the GK104 on a 384-bit PCB, they could very well have done so, the same way AMD put the Tahiti GPUs on the 7870 PCBs.

You can't use the interface width alone, but by comparing it with the operating frequency and type of GDDR used, the resulting bandwidth, and the bandwidth requirements, you can work out roughly what the step from generation to generation would be. The GTX285 had a 512-bit GDDR3 bus with 160GB/s of bandwidth, whereas the GTX580 had a 384-bit GDDR5 bus with 192GB/s, and the 680 a 256-bit GDDR5 bus with 192GB/s, compared to 288GB/s on the 7970.
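The bandwidth figures in that post follow directly from bus width times effective data rate. A quick sketch of the arithmetic (the data rates below are approximate figures from public spec sheets, not from this thread):

```python
def memory_bandwidth_gbps(bus_width_bits: float, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width in bytes times effective data rate."""
    return bus_width_bits / 8 * data_rate_gtps

# GTX 285: 512-bit GDDR3 at ~2.48 GT/s effective
print(round(memory_bandwidth_gbps(512, 2.484)))  # ~159 GB/s

# GTX 580: 384-bit GDDR5 at ~4.0 GT/s effective
print(round(memory_bandwidth_gbps(384, 4.008)))  # ~192 GB/s

# GTX 680: 256-bit GDDR5 at ~6.0 GT/s effective
print(round(memory_bandwidth_gbps(256, 6.008)))  # ~192 GB/s
```

So despite the halving of bus width from the GTX285 to the 680, the move to faster GDDR5 kept total bandwidth flat or better, which is the "expected pattern" being described.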
 
Well, either one is bad. But the relative performance charts are meaningless - if you actually go through the review, you'll see what I mean.

Some games simply don't work on some cards at all, which massively skews the validity and usefulness of a relative performance chart.

If you check out the individual games, there are instances where a game just isn't running well at all on an nVidia GPU, and others where the same is true on an AMD GPU.

Or some games where pretty much every card, from a 7970 down to something like a 660 (non-Ti), gets the same frame rate.

It always makes more sense to just look at benchmarks of single games from reputable sites.

For example, the chart puts a 7970 as being faster than a 5970 at 1920x1200, and a GTX580 at 90% of the performance of a 7970 - when now the GTX680 is about 90% that of a 7970.

I know the latter figure refers to the 7970 GHz Edition with better drivers, but the performance hasn't gone up as much as that would suggest.

If I ever found a site that purchased retail cards, tested them in a real PC, and listed 3 runs for each benchmark, I would be over the moon :D
 
at 2560x1600 ;)

Nvm, I won't be baited into a road-to-nowhere discussion about relative performance as a whole.

My original point still stands, with the 7970 getting slaughtered for selling at the same price as a 15-20% slower GPU.

I recall it more as there not being enough of a gain over a 580 to be worthwhile, rather than the price ;)

15-20% faster for the same money (£420); for double that performance advantage, Nvidia want double the money (£840).
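To make the value argument concrete, here's a rough performance-per-pound sketch. The performance index is hypothetical (100 = the baseline card; the percentages are this thread's estimates, not measured figures):

```python
# Hypothetical performance index: 100 = the card both are being compared to.
cards = {
    "card at £420 (+17.5%)": {"price": 420, "perf": 117.5},  # midpoint of 15-20%
    "card at £840 (+35%)":   {"price": 840, "perf": 135.0},  # "double that advantage"
}

for name, c in cards.items():
    # Performance points per pound spent
    print(f"{name}: {c['perf'] / c['price']:.3f} perf/pound")
```

Doubling the price for far less than double the performance halves the value proposition, which is the complaint being made.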

I'm out for now, jackus; cba with splitting hairs over something daft like 5% - which justifies double the money, if I'm interpreting your posts correctly.
 
The power/performance increase doesn't entirely fit the pattern, and I don't think the GK110 was ever going to be the GTX680 either.

So you think there was another chip, in between?

If so, well that still shows that they've changed convention.



You can't use the interface width alone, but by comparing it with the operating frequency and type of GDDR used, the resulting bandwidth, and the bandwidth requirements, you can work out roughly what the step from generation to generation would be. The GTX285 had a 512-bit GDDR3 bus with 160GB/s of bandwidth, whereas the GTX580 had a 384-bit GDDR5 bus with 192GB/s, and the 680 a 256-bit GDDR5 bus with 192GB/s, compared to 288GB/s on the 7970.

I am very aware of all of this; I was trying to be succinct. At the current RAM speed on the GTX670 and 680, had they gone for a 384-bit bus, that would have increased the memory bandwidth by 50%. They had the advantage of knowing what AMD had out, and still chose 256-bit over 384.
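A minimal sketch of where that 50% figure comes from, assuming the roughly 6 GT/s effective memory data rate the 680 shipped with (bandwidth scales linearly with bus width at a fixed memory clock):

```python
def bandwidth_gbps(bus_bits: float, data_rate_gtps: float) -> float:
    # Peak bandwidth: bus width in bytes times effective data rate
    return bus_bits / 8 * data_rate_gtps

actual = bandwidth_gbps(256, 6.0)        # 192.0 GB/s as shipped
hypothetical = bandwidth_gbps(384, 6.0)  # 288.0 GB/s on a 384-bit bus
print(hypothetical / actual)             # 1.5, i.e. a 50% increase
```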

The short PCBs are something people use to try to prove the cards were supposed to be mid-range, but realistically, what do you expect with a 256-bit bus? It needs fewer traces, so the card can be shorter. The larger the bus, the more PCB space it needs, and the larger the card will be.

There are other things to consider too: a 256-bit bus means fewer memory chips, which means lower cost, and a less complex PCB. Add to that the GPU being roughly half the size of the first Fermi GPU, and it all adds up to them clearly trying to reduce production costs - or rather, dramatically raise profit.

Because they will be making significantly more profit on their top-tier cards than they have ever done.
 
Nvm, I won't be baited into a road-to-nowhere discussion about relative performance as a whole.

My original point still stands, with the 7970 getting slaughtered for selling at the same price as a 15-20% slower GPU.





I'm out for now, jackus; cba with splitting hairs over something daft like 5% - which justifies double the money, if I'm interpreting your posts correctly.

Yup, me too - I'm fully GPU'd out :) ...starting to spend more time talking about them than using them :(
 
Everyone who's saying that the Titan is going to be available in good numbers needs to think about how many will be snapped up by the professional sector as cheap alternatives to K20s; quite a few are also going to Nvidia's US partners (high-end PC builders). Combine this with the low yields and, tbh, there probably aren't going to be that many available.
 
So you think there was another chip, in between?

If so, well that still shows that they've changed convention.

I definitely don't think the GTX680 was originally intended to be on GK104; I think at some point there was a decision to change focus.
 
If it were £600 I would buy one, but £800+ is far too much.
I will go for another GTX680 in SLI instead.

No way is it going to be £600 on here!

4GB 680s are £500!

The Titan looks to be between 25% and 50% faster and has 6GB plus that premium cooler - all for an extra £100? LOL ;)
 
If it were £600 I would buy one, but £800+ is far too much.
I will go for another GTX680 in SLI instead.

Trouble is, SLI is far too unreliable and reliant on profile updates. I have had major issues with SLI in FC3, ACIII, and now Crysis 3 (I think the number '3' is a common theme here).

To have a single-GPU card with close to the power of a 690 is very much worth it. No SLI worries at all, and you have the fastest single-GPU card on the market.
 