
RV770 expected to release in May?

Looks like some credible specs have now leaked. click

So it seems like ATI have gone the way of nVidia and broken the shaders out into a separate clock domain.

I've got to say that I'm really excited about this chip now.

Simon
 
Like I said in a different thread, the speculation was that if the core had been 1 GHz with 56 TMUs it would have been 50% faster, but now I think it's only going to be 20%, maybe even less.

I thought the core would be 1 GHz, not 850 MHz.

And only 32 TMUs? Not 56?

Still buying one lol :D
 
With a 256-bit memory controller, we're talking about 115 to 141 GB/s of bandwidth. That matches the memory bandwidth record set by the 2900 XT 1GB GDDR4 (512-bit interface with GDDR4 at 1.1 GHz DDR).
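The arithmetic behind those figures can be sketched as follows. This is a back-of-the-envelope estimate; the 3.6-4.4 Gbps GDDR5 per-pin rates are the rumoured figures from this thread, not confirmed specs.

```python
# Peak memory bandwidth: bus width (bits) / 8 bytes, times per-pin data rate (Gbps).
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_bits / 8 * gbps_per_pin

# Rumoured RV770: 256-bit bus with GDDR5 at 3.6-4.4 Gbps per pin.
low = bandwidth_gbs(256, 3.6)   # 115.2 GB/s
high = bandwidth_gbs(256, 4.4)  # 140.8 GB/s

# 2900 XT 1GB: 512-bit bus, GDDR4 at 1.1 GHz DDR (2.2 Gbps per pin).
r600 = bandwidth_gbs(512, 2.2)  # 140.8 GB/s

print(low, high, r600)
```

So the top end of the rumoured GDDR5 range lands exactly on the 2900 XT's record, despite the bus being half as wide.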

I take it memory controllers are an appreciable part of the cost; isn't the 256-bit controller what stops the G92 from beating the 8800 GTX at high res/AA/AF?

Would it have killed them to stick with 512-bit and not just equal but beat the bandwidth of the old cards?
 
Like I said in a different thread, the speculation was that if the core had been 1 GHz with 56 TMUs it would have been 50% faster, but now I think it's only going to be 20%, maybe even less.

I thought the core would be 1 GHz, not 850 MHz.

And only 32 TMUs? Not 56?

Still buying one lol :D

The shaders alone mean a 50% boost; with the added TMUs and clock speed it has to be more. 20% is WAY off, I'm guessing you just plucked it out of thin air? :D If you clocked a 3870 to 1 GHz, that alone would be over a 20% performance increase, so there's no way the extra shaders and TMUs would add no performance on top. If they only added 20%, they would be totally pointless.

Edit - a core of 1050 MHz is a 35% overclock on the core, plus the faster RAM if it really is GDDR5. Based purely on the supposed specs, it wouldn't add up to just 20%. I'm not saying this as if it's fact, of course.
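The percentages being argued over work out like this. The 480 and 320 stream-processor counts and the 1050 MHz figure are the rumoured numbers from this thread; 775 MHz is the 3870's stock core clock.

```python
# Rough scaling estimates from the rumoured specs (speculative numbers, not confirmed).
def pct_increase(new: float, old: float) -> float:
    """Percentage increase going from `old` to `new`."""
    return (new / old - 1) * 100

shader_boost = pct_increase(480, 320)  # 480 vs 320 stream processors -> 50%
clock_boost = pct_increase(1050, 775)  # 1050 MHz vs the 3870's 775 MHz -> ~35%

print(shader_boost, clock_boost)
```

Of course, real performance rarely scales linearly with either figure; these are just upper bounds on what each change could contribute.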
 
Yeah, I just guessed. I hope you're right, as I would like to see a good performance increase. Still going to buy one, or wait for the 4870 X2.
 
tgdaily has a lot more information on the HD 4800, apparently from a leak.

They are saying that it will indeed have 480 stream processors, that the launch is weeks away, and they give details of pricing.

The 4850 256MB GDDR3 version will arrive as the successor to the 3850 256MB, with a price in the sub-$200 range. The 4850 512MB GDDR3 should retail for $229, and the 4850 512MB GDDR5 will set you back about $249-269. The big daddy, the 1GB GDDR5-powered 4870, will retail for $329-349.

When it becomes available, the 4870 X2 will hit the market for $499.
 
I take it memory controllers are an appreciable part of the cost; isn't the 256-bit controller what stops the G92 from beating the 8800 GTX at high res/AA/AF?

Would it have killed them to stick with 512-bit and not just equal but beat the bandwidth of the old cards?

But it will beat that bandwidth with just a 256-bit bus. Plus, there's no point in over-engineering the chip in some parts and not others, as the R600 and RV670 showed. The R600 just didn't need such a large amount of bandwidth; the core wasn't powerful enough to use it all.
 
With a 256-bit interface, is this card going to cut it when you apply AA/AF, or are we in for another 2900 "when the drivers mature" card? Is the interface no longer that important, given that bus sizes are going down? E.g. the 8800 GTX has a wider bus than the 9800 GTX.
 
As has been stated, the 2900 XT had way more bandwidth than it needed; notice that the 3870 copes fine on its 256-bit bus and keeps up with the 2900 XT (which, incidentally, had a 512-bit bus). The inclusion of GDDR5 will bring large bandwidth gains of its own. On top of that, a 512-bit bus coupled with the cost of GDDR5 would make the card expensive to produce. They're aiming these cards at more than just the 'xtreem enthusiast' who buys a new Skulltrail setup every month, so they can't afford to throw money away on something that looks lovely on paper but brings no real-world benefit.

Edit: Just to really hit the point home, going by current clock speeds the 4870 will actually have more bandwidth than either the 2900 XT or the 8800 Ultra.
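That comparison can be checked with the same bus-width-times-data-rate arithmetic. The 4870's 3.6 Gbps GDDR5 rate is the rumoured figure from this thread; the 2900 XT (512MB GDDR3) and 8800 Ultra numbers are the shipping specs.

```python
# Peak bandwidth = bus width (bits) / 8, times per-pin data rate (Gbps).
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits / 8 * gbps_per_pin

cards = {
    "4870 (256-bit, rumoured GDDR5 @ 3.6 Gbps)": bandwidth_gbs(256, 3.6),
    "2900 XT (512-bit, GDDR3 @ 1.65 Gbps)": bandwidth_gbs(512, 1.65),
    "8800 Ultra (384-bit, GDDR3 @ 2.16 Gbps)": bandwidth_gbs(384, 2.16),
}

# Print in descending order of bandwidth; the 4870 comes out on top.
for name, bw in sorted(cards.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {bw:.1f} GB/s")
```

The narrower bus wins on raw numbers because GDDR5 moves roughly twice the data per pin of the GDDR3 in those older cards.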
 
I don't like the look of this:

"ATI’s RV770 will be rated at a fill rate of 20.8-27.2 GTexel/s (excluding X2 version), which is on the lower end of the GeForce 9 series (9600 GT: 20.8; 9800 GTX: 43.2 9800 GX2: 76.8)."
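For what it's worth, the GTexel/s range in that quote follows directly from texture units times core clock. The 32-TMU count and 650-850 MHz clock range are the rumoured RV770 figures from this thread; the 9800 GTX comparison uses its shipping specs (64 TMUs at 675 MHz).

```python
# Texture fillrate = TMU count x core clock (MHz), expressed in GTexel/s.
def gtexels(tmus: int, clock_mhz: float) -> float:
    return tmus * clock_mhz / 1000

low = gtexels(32, 650)   # 20.8 GTexel/s, the bottom of the quoted range
high = gtexels(32, 850)  # 27.2 GTexel/s, the top of the quoted range
g92 = gtexels(64, 675)   # 43.2 GTexel/s, matching the quoted 9800 GTX figure

print(low, high, g92)
```

So the quoted 20.8-27.2 figure is consistent with 32 TMUs over a 650-850 MHz clock range.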
 
Right, that's raster operations, and it's not changing much from the 3870. Hear me out, though: most of the pixel fillrate power on the GeForce 8 series goes towards anti-aliasing, so increasing the number of ROPs and the pixel fillrate on a GeForce 8 series card gives a reasonable boost when anti-aliasing is enabled (this is also why AA performance dropped going from the 8800 GTX to the 9800 GTX).

However, since the Radeon 2900 XT, all ATi cards have performed anti-aliasing on the shaders (an approach which, since the Radeon 3800 series, has also been used to comply with DX10.1), so adding more raster operators to the 4800 series would have been, for all intents and purposes, a waste of silicon. Yes, I'm aware that the raster operators do more than AA, but AA is typically what hogs them most.
 
I don't like the look of this:

"ATI’s RV770 will be rated at a fill rate of 20.8-27.2 GTexel/s (excluding X2 version), which is on the lower end of the GeForce 9 series (9600 GT: 20.8; 9800 GTX: 43.2 9800 GX2: 76.8)."

Hm, I think your calculation is wrong :confused:
 
Nope, he's right: RV770 supposedly only has 16 ROPs. To be honest, though, that shouldn't be too much of an issue; see my post above.
 