
AMD To Launch RV770 On June 18th

Well, as far as I was aware, they were just G92s doubled up with a few tweaks, like the bus being upped to 512-bit and so on. Also, everyone was saying that the 9800 GX2 was the GT200, just on one chip and one PCB, instead of 2x chips on 2x PCBs stuck together.

Now if they are, then the G92s must not be a refresh either; they must also be new tech if the GT200s are, as those are 8800s that have also been tweaked, with more texture units and so on. But they aren't; I'm sure they are a refresh of the 8800, so still the same tech. If I'm wrong, though, and the GT200s are now known to be new tech, then can someone please give me a link to the thread which says so, as I must have missed it.
 
Whilst they are rumours, there have been some leaked documents which essentially confirm the release dates, delays notwithstanding... That, and these rumours are often surprisingly accurate (the 8800 GT, 8800 GTS, 9800 GTX, 9600 GT, 3850 and 3870 all come to mind as recent examples that seemed pretty on the ball, bar some clock-speed discrepancies; at least the numbers are usually accurate, though performance predictions tend to be a right load).
 
> Well, as far as I was aware, they were just G92s doubled up with a few tweaks, like the bus being upped to 512-bit and so on. Also, everyone was saying that the 9800 GX2 was the GT200, just on one chip and one PCB, instead of 2x chips on 2x PCBs stuck together.

wtf? :confused:

For one thing, to assume that integrating a 512-bit memory interface into a 1bn+ transistor piece of silicon is somehow a minor revision is nothing short of mind-blowing. How does this stuff work, in your mind? Do you think there is just some kind of dial, and someone says, "Hey, let's turn this up to 512!"?
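To put some rough numbers on why you'd even want a 512-bit bus: peak memory bandwidth is just bus width times effective transfer rate. A back-of-envelope sketch below, assuming GDDR3 at around 1100 MHz (the sort of figure the rumours suggest; actual clocks are unconfirmed until launch):

```cpp
#include <cstdio>

// Back-of-envelope peak memory bandwidth: bus width x effective data rate.
// The memory clock here is an assumption based on pre-launch rumours.
int main() {
    const double bus_bits      = 512.0;   // rumoured GT200 memory bus width
    const double mem_clock_mhz = 1100.0;  // assumed GDDR3 clock
    const double ddr_factor    = 2.0;     // GDDR3 transfers twice per clock

    const double bytes_per_transfer = bus_bits / 8.0;  // 64 bytes per transfer
    const double transfers_per_sec  = mem_clock_mhz * 1e6 * ddr_factor;
    const double gbytes_per_sec     = bytes_per_transfer * transfers_per_sec / 1e9;

    std::printf("512-bit @ %4.0f MHz GDDR3: ~%3.0f GB/s\n", mem_clock_mhz, gbytes_per_sec);
    std::printf("256-bit @ %4.0f MHz GDDR3: ~%3.0f GB/s\n", mem_clock_mhz, gbytes_per_sec / 2.0);
    return 0;
}
```

Doubling the bus width doubles bandwidth at the same memory clock, but it also means twice the memory controllers, pads and board traces, which is exactly why it isn't a dial someone turns up.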


> If I'm wrong, though, and the GT200s are now known to be new tech, then can someone please give me a link to the thread which says so, as I must have missed it.

http://forums.overclockers.co.uk/showpost.php?p=11796937&postcount=337

You've posted in that thread plenty of times already. There is a photograph of the GPU core. It is a completely different architectural design to G80.


I ask again: what, exactly, denotes "new" technology in your eyes? Holographic projection?
 
…and Core 2 is just the revised P3 architecture. When are we going to get anything new? :rolleyes:

Yeah, seriously. We've been using transistors for over 40 years now - when will we see something new?!

Those lazy-ass chip designers... they need to stop rehashing the same old crap. I mean, in all this time we've only seen calculation capacity increase by a factor of 500 million.... What the hell are they getting paid for?!

:p
 
Well no, because Core 2 went a completely different way from how Intel were doing things previously. NV aren't.

I'm with Loadsa on this one. If it was that different they'd have added DX10.1 support (if only for marketing purposes), yet they haven't. In my eyes it's essentially just a stretched G92. Don't get me wrong, the ATI part doesn't seem any more of a departure either.
 
> Well no, because Core 2 went a completely different way from how Intel were doing things previously. NV aren't.
>
> I'm with Loadsa on this one. If it was that different they'd have added DX10.1 support (if only for marketing purposes), yet they haven't. In my eyes it's essentially just a stretched G92. Don't get me wrong, the ATI part doesn't seem any more of a departure either.

Featureset support is not exactly the most reliable measure of progress. I mean, I can do virtually all the same things with my CPU now that I could 10 years ago (64-bit aside). That doesn't mean we haven't advanced. The fact is that this new hardware is a completely new design. Whether or not it adds the features you were hoping for is another matter entirely.

As for the Core 2 comment - it was just a joke, meant to show that all technology relies on what went before, in one way or another. "Standing on the shoulders of giants", to paraphrase Newton.
 
So your argument is basically "a GPU is a GPU"? My old Celeron 300A could do everything that my Q6600 can, but it's moved on a bit since then.

I just don't see enough difference to call this a redesign. It's more of a tweak. The closest comparison I can draw is the Pentium 4 Prescott... clearly it'll be better than the Prescott was, but in comparison to G92 it's much bigger, hotter, fatter, supposedly eating power for fun... it seems to draw a very familiar line.
 
> So your argument is basically "a GPU is a GPU"? My old Celeron 300A could do everything that my Q6600 can, but it's moved on a bit since then.
>
> I just don't see enough difference to call this a redesign. It's more of a tweak. The closest comparison I can draw is the Pentium 4 Prescott... clearly it'll be better than the Prescott was, but in comparison to G92 it's much bigger, hotter, fatter, supposedly eating power for fun... it seems to draw a very familiar line.

I think you misunderstand the point I'm making.

The redesign is at a hardware level.

The architecture has *physically* changed significantly. It is not simply a die-shrink, or a minor readjustment of logic pathways. It is a fundamentally new architecture. The point I was making is that you can have a fundamentally new architecture (which could in theory be virtually unrecognisable from the previous generation) which still does all the same things as the previous generation.

You have to separate out hardware advancement and featureset advancement. This is undeniably a new architecture (unlike the G80 -> G92 step), but that isn't to say it has any new features, or will act as you would like it to. That's another debate entirely.
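To attach some rough numbers to "physically changed": the published figures for G80 and G92 are about 681M and 754M transistors respectively, while the GT200 leaks point to roughly 1.4bn on a much larger die. A small sketch of that comparison below; the GT200 row is rumour, not confirmed spec:

```cpp
#include <cstdio>

// Rough generational comparison. G80/G92 figures are published;
// GT200 figures are from pre-launch rumours and may be wrong.
struct Gpu {
    const char* name;
    double transistors_m;  // transistor count, millions
    double die_mm2;        // die area, mm^2
};

int main() {
    const Gpu gpus[] = {
        {"G80 (90nm)",            681.0, 484.0},
        {"G92 (65nm)",            754.0, 324.0},
        {"GT200 (65nm, rumour)", 1400.0, 576.0},
    };
    for (const Gpu& g : gpus) {
        std::printf("%-22s %7.0fM transistors, %5.0f mm^2\n",
                    g.name, g.transistors_m, g.die_mm2);
    }
    return 0;
}
```

Nearly doubling the transistor budget on the same 65nm node is why the die, heat and power all balloon; whether you read that as "new architecture" or "more of the same" is precisely the argument here.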
 
If it is so new (as opposed to just the expansion I'm seeing), then can you see any conceivable reason why they didn't move to DX10.1? If they were redesigning the core anyway it couldn't have entailed that much extra work, and surely the benefits (as I said, just through marketing in the OEM sector) would have outweighed the negatives. If, on the other hand, they've taken a Prescott-like approach, then clearly an 'upgrade' to DX10.1 would be trickier, and as such they may not have bothered (which is where I feel we're at now).

Yes, there's more to a chip than adding features, but if there's a feature to be added, however much it benefits performance or quality (very little in DX10.1's case), then in graphics cards it always gets added - another certified logo on the box or an online shop product page counts for quite a bit.
 
> If it is so new (as opposed to just the expansion I'm seeing), then can you see any conceivable reason why they didn't move to DX10.1? If they were redesigning the core anyway it couldn't have entailed that much extra work, and surely the benefits (as I said, just through marketing in the OEM sector) would have outweighed the negatives. If, on the other hand, they've taken a Prescott-like approach, then clearly an 'upgrade' to DX10.1 would be trickier, and as such they may not have bothered (which is where I feel we're at now).
>
> Yes, there's more to a chip than adding features, but if there's a feature to be added, however much it benefits performance or quality (very little in DX10.1's case), then in graphics cards it always gets added - another certified logo on the box or an online shop product page counts for quite a bit.

I just believe that NV can't implement the memory virtualisation that is required for DX10.1 certification.
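For what "DX10.1 certification" looks like from the software side: an application asks the runtime for a feature-level 10.1 device and falls back to 10.0 if the card or driver can't supply one. A minimal sketch using the D3D 10.1 API (needs Windows Vista SP1 and its SDK; error handling is pared down for brevity):

```cpp
#include <d3d10_1.h>
#include <cstdio>
#pragma comment(lib, "d3d10_1.lib")

// Try to create a Direct3D 10.1 device; if the GPU/driver only does 10.0,
// fall back to feature level 10.0. This is how an app would detect whether
// the card actually supports the DX10.1 featureset.
int main() {
    ID3D10Device1* device = nullptr;
    const D3D10_FEATURE_LEVEL1 levels[] = {
        D3D10_FEATURE_LEVEL_10_1,
        D3D10_FEATURE_LEVEL_10_0,
    };
    for (D3D10_FEATURE_LEVEL1 level : levels) {
        HRESULT hr = D3D10CreateDevice1(
            nullptr,                     // default adapter
            D3D10_DRIVER_TYPE_HARDWARE,  // real GPU, not reference rasteriser
            nullptr, 0,                  // no software module, no flags
            level, D3D10_1_SDK_VERSION,
            &device);
        if (SUCCEEDED(hr)) {
            std::printf("Got a device at feature level %s\n",
                        level == D3D10_FEATURE_LEVEL_10_1 ? "10.1" : "10.0");
            device->Release();
            return 0;
        }
    }
    std::printf("No D3D10-class device available\n");
    return 1;
}
```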
 
> I think you misunderstand the point I'm making.
>
> The redesign is at a hardware level.
>
> The architecture has *physically* changed significantly. It is not simply a die-shrink, or a minor readjustment of logic pathways. It is a fundamentally new architecture. The point I was making is that you can have a fundamentally new architecture (which could in theory be virtually unrecognisable from the previous generation) which still does all the same things as the previous generation.
>
> You have to separate out hardware advancement and featureset advancement. This is undeniably a new architecture (unlike the G80 -> G92 step), but that isn't to say it has any new features, or will act as you would like it to. That's another debate entirely.

I think you're taking Loadsa too literally. By "refresh", I think he means that they haven't really expanded on what they had with G80; they've just added more of the same, which is why, if the rumours are to be believed, the cards draw huge amounts of power and give off loads of heat. If they had actually done a redesign, they would presumably have made a more efficient core rather than just brute-forcing it by slapping more in. While I doubt these are truly "dual-core" GPUs, I think they're the equivalent of one: they've basically shoehorned the contents of two G92 cores onto one chip.
 
> I think you're taking Loadsa too literally. By "refresh", I think he means that they haven't really expanded on what they had with G80; they've just added more of the same, which is why, if the rumours are to be believed, the cards draw huge amounts of power and give off loads of heat. If they had actually done a redesign, they would presumably have made a more efficient core rather than just brute-forcing it by slapping more in. While I doubt these are truly "dual-core" GPUs, I think they're the equivalent of one: they've basically shoehorned the contents of two G92 cores onto one chip.

Hasn't adding more of the same been the norm in GPU development? Before, it was adding more pixel shader pipelines and vertex shader pipelines; now it's adding more SPs. IIRC the NV 6800 and ATI X800 series had the same number of each and similar performance. Of course, none of this is as simple as it sounds, and the cost of developing these chips runs into the hundreds of millions of pounds.
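To make "more SPs" concrete: theoretical shader throughput scales roughly as SP count x shader clock x operations per SP per clock. A minimal sketch below; the G80 figures are the published 8800 GTX specs, while the GT200 SP count and clock come from the same rumours discussed above:

```cpp
#include <cstdio>

// Theoretical shader throughput = SPs x shader clock x FLOPs per SP per clock.
// G80 numbers are published 8800 GTX specs; GT200 numbers are rumoured.
double gflops(int sps, double clock_mhz, double flops_per_sp_per_clock) {
    return sps * clock_mhz * 1e6 * flops_per_sp_per_clock / 1e9;
}

int main() {
    // 3 FLOPs/clock assumes the dual-issue MADD+MUL counting NV quotes.
    std::printf("G80   (128 SPs @ 1350 MHz): ~%.0f GFLOPS\n",
                gflops(128, 1350.0, 3.0));
    std::printf("GT200 (240 SPs @ ~1300 MHz, rumoured): ~%.0f GFLOPS\n",
                gflops(240, 1300.0, 3.0));
    return 0;
}
```

It's the same scaling law as the pixel/vertex pipeline days, just applied to unified shaders.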
 