RV770 news

Indeed, it's the way forward to be honest. Parallelism has always been possible with graphics cards, but having multiple cores on one die eliminates the potential slowdown caused by SLI bridges and such. Imagine this, for example:

a quad-core GPU with 128 stream processors per core clocked at something like 2 GHz, and 24 ROPs clocked at 1 GHz. Imagine the shading power :p
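
Just as a back-of-the-envelope sum (the core counts and clocks are the hypothetical ones above; the assumption that each stream processor retires one MADD, i.e. 2 FLOPs, per clock is mine):

[CODE]
# Rough theoretical shader throughput for the hypothetical quad-core part above.
# Assumption: each stream processor retires one MADD (2 FLOPs) per clock.
cores = 4
sps_per_core = 128
shader_clock_ghz = 2.0
flops_per_sp_per_clock = 2  # multiply-add counts as two floating-point ops

gflops = cores * sps_per_core * shader_clock_ghz * flops_per_sp_per_clock
print(f"Theoretical shader throughput: {gflops:.0f} GFLOPS")  # ~2048 GFLOPS
[/CODE]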

GPUs are already MASSIVELY parallel. If they could fit that many stream processors or whatever on one die, they wouldn't need to bother with overhead-inducing SLI or CrossFire at all. Why bother designing an SLI interface into a die when you can leave that out and have one execution core feeding them all? Less overhead. It just isn't logical. Just take a look at the G80 design: http://www.bit-tech.net/hardware/2006/11/08/nvidia_geforce_8800_gtx_g80/6

To be honest, this seems like a logical step for AMD/ATI. There has always been, and always will be, more money in the budget market; only a very small proportion of PC users are willing to pay for the be-all-end-all components, especially considering that the top-end cards tend to have a rather poor price-to-performance ratio.

Take a look at the Ultra: it is nearly three times more expensive than the 3870, but offers performance that isn't even double what the latter offers.
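
To put rough numbers on it (the prices and frame rates below are made-up placeholders, only the ratios are the point):

[CODE]
# Illustrative price/performance comparison - prices and frame rates are
# hypothetical placeholders, only the ratios matter.
ultra_price, ultra_fps = 450.0, 55.0     # assumed price and average fps
hd3870_price, hd3870_fps = 150.0, 35.0   # assumed price and average fps

print(f"Ultra costs {ultra_price / hd3870_price:.1f}x as much")   # ~3.0x
print(f"but is only {ultra_fps / hd3870_fps:.1f}x as fast")       # ~1.6x
print(f"fps per unit cost: Ultra {ultra_fps / ultra_price:.2f}, "
      f"3870 {hd3870_fps / hd3870_price:.2f}")
[/CODE]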

That's an unfair example though. The Ultra is priced as such because it's the fastest single-GPU card - Nvidia can afford to price it high. If you want a fair comparison, look at the GTX.
 
I know they're already parallel, but ATI must see some benefit in multi-core GPUs or else they wouldn't be working on making it a reality. Maybe it's performance improvement or energy efficiency, but there's definitely something they're interested in ;)
 
AFAIK they aren't, they are just sticking two low-power cores on one PCB? There's a difference between doing that and doing it on-die. That's what I was saying: if they could do it on-die, why go to the lengths of designing on-die CrossFire?
 
Not right now, but multi-core GPUs have advantages, for example having cores go to sleep when they're not needed. Another benefit might be one core doing rendering while the other does physics. Fusion is a great idea too, but no need to go into that.
 
All of that could be done without SLI. Actually, without it they could (and I see no reason why they couldn't) shut each shader unit off one by one if they wanted to, which would be the most efficient by far.
 
They already do this to some extent with the RV670 cores; it's a feature called PowerPlay, which shuts off parts of the chip that aren't being used, and is why they have low idle power consumption.
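
Purely as a toy model of why fine-grained gating is so attractive (the cluster counts, wattages and load figure are made up, and this isn't a description of how PowerPlay actually works internally):

[CODE]
import math

# Toy model: idle-time power with coarse vs per-cluster gating.
# Unit count, per-cluster wattage and load are made-up assumptions.
shader_clusters = 16
watts_per_cluster = 5.0
load = 0.25  # fraction of the shader array actually needed right now

# Coarse gating: the whole shader array is either powered or off.
coarse_power = shader_clusters * watts_per_cluster if load > 0 else 0.0

# Fine-grained gating: only the clusters doing work stay powered.
active_clusters = math.ceil(shader_clusters * load)
fine_power = active_clusters * watts_per_cluster

print(f"coarse gating: {coarse_power:.0f} W, per-cluster gating: {fine_power:.0f} W")
# -> 80 W vs 20 W at 25% load
[/CODE]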

The reason multi-GPU designs are tempting for AMD is R&D costs: you only have to design one chip for your mid-range, high-end and very high-end products, and just add more cores.

Yields at the manufacturing level should also be much better.

Imagine what a headache Nvidia is having trying to get its rumoured 1-billion-transistor monster to yield well. Sure, maybe they're not, but there is always that risk.
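
To put a rough number on the yield argument (this is just the textbook Poisson defect model with made-up defect density and die sizes, nothing from AMD or Nvidia):

[CODE]
import math

# Textbook Poisson yield model: yield = exp(-defect_density * die_area).
# Defect density and die areas are made-up illustrative figures.
defects_per_cm2 = 0.5   # assumed process defect density
small_die_cm2 = 2.0     # e.g. a mid-range chip you pair up for the high end
big_die_cm2 = 5.0       # e.g. one huge high-end chip

def poisson_yield(defect_density, area):
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-defect_density * area)

print(f"small die yield: {poisson_yield(defects_per_cm2, small_die_cm2):.0%}")  # ~37%
print(f"big die yield:   {poisson_yield(defects_per_cm2, big_die_cm2):.0%}")    # ~8%
[/CODE]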
 
http://www.fudzilla.com/index.php?option=com_content&task=view&id=5196&Itemid=1

Engineering samples are out :D

The chip works

More than a quarter before it sees the light of day on commercial shelves, AMD has working engineering samples of the RV770. This new chip will be the base for its R700 dual-chip design.

The chip should end up significantly faster than the current RV670 and it will be DirectX 10.1 capable. We don't know how many shader units or transistors this one has, but we will work on it.

RV770 will replace the RV670-based Radeon 38x0 generation and bring this market to the next level, but give it at least three to four months more.
 
At present, I keep considering whether to sell my 3870 and just get a cheaper 3850 until these come out. Not sure, but thinking about it. Then use the saved cash to get something better later :) Either a 3870 X2 or an RV770 card when it comes out :)

To be honest, I actually think it's good they are doing this. They have proved they can get their act together and actually make a decent card with the 38xx series (OK, not the best performing, but it still does very well in every other department), so hopefully they have got it right this time with the RV770 stuff. One more quarter and it will be some interesting times for the graphics world :)

Matthew
 
Yeah, it's all changing in the PC world now. It's not raw MHz that wins nowadays, it's all about parallelism and efficiency, and just as importantly pricing. I think the current generation, RV670 and G92, will be the last single-core graphics cards, and they're probably all going to have embedded DRAM like the 'Xenos' processor, for AA without a performance hit.

Agree - basically GPU manufacturers have got the same problem Intel/AMD etc. had a few years back. The energy consumption and thermal issues of building faster chips mean that we're changing to multiple cores running in parallel to reap bigger benefits.
 
That's not strictly true: the 3870 isn't simply faster than the HD 2900, it's faster in some cases and the 2900 is faster in others. Crysis, for example, is faster on the 2900, probably due to higher memory bandwidth and such. I'm shocked at what happened to the HD 2900 series. On paper it's a monster: a 512-bit ring-bus memory interface (256-bit bi-directional, I believe), 320 stream processors and a decent clock rate. Shame it didn't work out better really, or we would be getting a 9800 GTX now/very soon. Doesn't lack of competition suck?

The 2900 has a 1024-bit ring-bus memory controller with a 512-bit standard memory interface (I believe this is 512-bit read and 512-bit write).

http://www.guru3d.com/article/Videocards/431/4

That's a good guide to the ring-bus controller.
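
For anyone wondering where the bandwidth difference comes from, it just falls out of bus width times effective memory clock (the clocks below are the reference figures as I remember them, so treat them as approximate):

[CODE]
# Peak memory bandwidth = (bus width in bytes) x (effective memory clock).
# Clocks are the reference figures from memory - treat as approximate.
def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    return (bus_width_bits / 8) * effective_clock_mhz / 1000  # GB/s

print(f"HD 2900 XT: {bandwidth_gb_s(512, 1656):.0f} GB/s")  # 512-bit, GDDR3 at ~828 MHz (DDR)
print(f"HD 3870:    {bandwidth_gb_s(256, 2250):.0f} GB/s")  # 256-bit, GDDR4 at ~1125 MHz (DDR)
[/CODE]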
 