Turns out we've had GT200 all along - the 9800GX2

Why not 256 SPs if it's 2 cores on 1 die?

Roll on 4870XT :o

Assuming these rumours from that appalling site are true, it could be a cut-down G92, but then it wouldn't be G92 anymore if it had SPs removed, or would it be a rev of G92? :confused:
 

Why 192 shaders? Interesting point. While reading through this thread it occurred to me; let's look at the specs Cyber-Mav was kind enough to collect:

55nm TSMC process
Single chip with "dual G92b like" cores
330-350mm2 die size
900M+ transistors
512-bit memory interface
32 ROP
192SP (24X8)
6+8 Pin
550~600W PSU min

512-bit memory interface - 2x256-bit? (G92)
32 ROP - 2x16 ROPs? (G92)
192 shaders - 2x96 (G80 8800GTS?)

Bingo! We have our card. This isn't dual 8800GTS/9800GTX's, this is dual G80 8800GTS'! All the signs seem to point towards it being a dual-core GPU consisting of two G80 8800GTS'. I'd say performance on par with the 9800GX2 (in the games where it does fairly well) would be reasonable to expect, assuming that the two GPUs in one are connected together properly/directly, rather than through a crude and tiny SLi bridge with all the latencies and complications that SLi causes...

It's still beyond me why they can't just add more shaders; there's no need to double them, just add some more. Maybe make some optimisations (if there are any more to make), shrink it and increase the clock speed. Look at the 6800 series vs the 7800 series: the 7800's were quite a bit faster but only really added a few little features like transparency anti-aliasing, plus more, higher-speed pixel and vertex pipes. Save a new architecture for the GeForce 10's, which will no doubt have DX10.2 support...
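To make the arithmetic in that decomposition explicit, here's a quick sanity check (a sketch in Python; the per-die figures follow the post's own reasoning, i.e. a G92-style 256-bit/16-ROP die carrying the old 96-SP shader count, and none of these are confirmed specs):

[code]
# Sanity check of the "two dies, halve everything" decomposition above.
# Per-die figures mirror the post's reasoning (rumoured, not confirmed):
# a G92-style die (256-bit bus, 16 ROPs) with the G80 8800GTS's 96 SPs.
per_die = {"bus_bits": 256, "rops": 16, "shaders": 96}
rumoured_total = {"bus_bits": 512, "rops": 32, "shaders": 192}

for key, total in rumoured_total.items():
    assert 2 * per_die[key] == total, f"{key} doesn't split into two dies"
    print(f"{key}: 2 x {per_die[key]} = {total}")
[/code]

Every rumoured total halves cleanly into two such dies, which is what makes the dual-core reading tempting.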
 

Why would they use G80 GTSes though? Seems a bigger step backwards than the 9800GTX; if they did this, we could all safely say NV has gone nuts :D. Though I do get why you came up with this theory, especially after they tried to palm off G92 8800GTSes as 9800GTXs, and 8800GSes as 9600GSOs, so it's not entirely impossible judging from past maneuvers by NV. :)

PS, you don't need apostrophes for plurals, "GTSes" :) no offence meant of course.
 

Well, I actually meant that it was a G92'erised 8800GTS, because the older GTS had 96 shaders. But I did also say it was two 96-shader cores, which would hardly be a step back from anything other than the 9800GX2. :P But I think it's safe to assume they'll have a GX2 version of this new card, too, just to make up for that.

I was using them for omissions, because "8800GTXes" looks silly... I know it's not grammatically correct... but I think it just looks better. :P
 
Sounds like it could be 2x 8800M GTX cores on 1 die.

Example pic: [image: 6psq6.gif]

Coincidence? :D
 
Take a look at the transistor counts; there's no way they can fit that many shaders and ROPs into less than 20% more transistors.

It's probably safe to say that it'll have 20% more hardware, be slightly more optimised and be clocked 20% faster due to the new process, so I'm estimating it'll be around 50% faster than an 8800GTS; anything more just isn't feasible in that many transistors without an overhaul of their architecture.
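As a rough sanity check on that estimate, you can just multiply the two assumed scalings together (a sketch using the post's own ~20% figures, and assuming performance scales linearly with both units and clocks, which real games rarely do):

[code]
# Rough scaling estimate from the post's assumptions (not measured data):
# ~20% more execution hardware and ~20% higher clocks on the 55nm process.
unit_scaling = 1.20   # 20% more shaders/ROPs
clock_scaling = 1.20  # 20% higher clock speed
speedup = unit_scaling * clock_scaling
print(f"Estimated speedup over an 8800GTS: {speedup:.2f}x "
      f"(~{(speedup - 1) * 100:.0f}%)")
# -> 1.44x, i.e. ~44%, in the same ballpark as the ~50% guessed above.
[/code]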
 
OK, so it's all speculation, but according to this, NV have not in fact been working on any new kit.
The question is, however, what is it that we are expecting from the upcoming ATIs, which their fanclub keep rattling on about being "genuine 2008 tech"? Holographic displays? Crysis at 1900 V-high doing 120fps on the entry-level model? I'm not grasping what they're going to be doing that's different from NV (i.e., does the same stuff, a bit quicker).

Anyway, judging from the wild amounts of nonsense (or at least, bum-extracted "facts") being quoted from the grimy end of the web, I would suggest that until we have the new NV and ATi in hand, the best use for this kind of second-guessing would be for me to dig this thread in round the roots of my roses ;)
 

Actually, most of the aforementioned information/specifications announced so far come from a rather reliable source, who was also accurate before both the G80 and G92 were released, back when everyone had their doubts too :)

Personally I'm hopeful; it's definitely feasible.
 
I've been anticipating this for a couple of years now (I could probably find a really old post about it). It would seem to me to be 2 cores stuck on 1 package (like Intel do with the quads). This thing will run pretty hot, but if they beef up the cooling (such as using heatpipes) along with nVidia's hybridPower it should be OK.

Added benefit of lower latencies. 2 x 256-bit bus = 512-bit.
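On the bus point, two 256-bit buses and one 512-bit bus do come out identical in peak-bandwidth terms. A minimal sketch (the 2000 MT/s effective GDDR3 rate is an illustrative assumption, not a rumoured spec):

[code]
# Peak memory bandwidth: bytes per transfer x transfers per second.
# The 2000 MT/s effective GDDR3 rate is illustrative only.
def bandwidth_gbs(bus_bits: int, effective_mts: float) -> float:
    return (bus_bits / 8) * effective_mts * 1e6 / 1e9

print(2 * bandwidth_gbs(256, 2000))  # two 256-bit buses -> 128.0 GB/s
print(bandwidth_gbs(512, 2000))      # one 512-bit bus   -> 128.0 GB/s
[/code]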

They could make it appear as 1 GPU or 2 to the system. I don't see them designing a new architecture, seeing as we might be getting DX11 next year... :D

At least it's good for the next gen 360 :D
 
Why would they use G80 GTSes though? Seems a bigger step backwards than the 9800GTX,

They ain't stepped back though; the 9800 GTX is a G80 GTS, it's just on a smaller die. All these cards go back to the same point, as they ain't moved, which is November 2006. :)
 

Two cores on one package just doesn't make sense to me :( Given the totally modular nature of a GPU anyway, I can't see why they didn't just pile on the processing units like they've always done. Two complete cores strapped together only means more complexity; I can't think of a single reason to do it :confused:
 

You sure the 9800GTX is just the same G80 on a smaller die? The 8800GTX has 24 ROPs, while the specs I've found for the 9800GTX show it to have 16 ROPs.

The 9800GTX is not a die-shrunk version of the 8800GTX; the 9800GTX is a G92 core, which means more texture mapping units were added to the architecture.

I'm surprised you don't know this, since you're the one always posting how the 9800GTX is the same as a G92 8800GTS 512.

Looks like you have changed your stance on things. :confused:
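For reference, here's a quick summary of the headline figures being argued over (these are the widely published numbers for the two cards; a summary sketch, not an authoritative spec sheet):

[code]
# Widely published headline specs for the two cards under discussion.
specs = {
    "8800GTX (G80)": {"shaders": 128, "rops": 24, "bus_bits": 384},
    "9800GTX (G92)": {"shaders": 128, "rops": 16, "bus_bits": 256},
}
for card, s in specs.items():
    print(f"{card}: {s['shaders']} SPs, {s['rops']} ROPs, "
          f"{s['bus_bits']}-bit bus")
[/code]

Same shader count, but fewer ROPs and a narrower bus on the G92 part, which is exactly what the ROP objection above is pointing at.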
 

Yes it is; the GTX is the same tech as the 9800. Every card Nvidia have released since November 2006 is the same tech, they've just shrunk it, cut bits off, slapped a couple of bits on. Thought you would have known that. :)
 

Not exactly the same tech, since there have been architectural changes. A like-for-like die shrink is what happened with the HD 3800 series of cards, where they went straight to a lower process without any alterations to the core.

Nvidia at least did make some changes to the core.
 

There were changes to the core from R600 to RV670:

Incorporates the UVD video decoder found in Radeon HD 2400 and 2600 cards, but not in the 2900 Pro and XT. It handles full entropy decode of both H.264 and VC-1.
Supports DirectX 10.1, including Shader Model 4.1, mandatory FP32 filtering, mandatory 4x multisample AA with samples exposed to shaders, index-able cube maps, etc.
Reduces power consumption when idle and especially during "light usage" scenarios, such as when rendering Vista's 3D desktop, thanks to an improvement to "PowerPlay."
Improves cache efficiency, particularly when making lots of small requests.
Offers better arbitration in the memory controller when different parts of the chip make requests.
Optimizes geometry shader performance.
Improves efficiency by tweaking the render back-ends. This may be noticed in a small improvement to performance when enabling anti-aliasing.
Tunes the UVD engine to offer better performance while reducing the number of transistors.
 
No, it's not exactly the same, like-for-like as you say. What they did do, though, was take that 8800, shrink it down, cut its bus, cut its memory, then add a few more texture units, which IMO does not make it new tech; otherwise the 3870's would be new tech, as they have had DX10.1 added, but they ain't, they are now 2900's that can do DX10.1. :)
 