R600 - When

Ulfhedjinn said:
Exactly, especially since I use 1440x900 resolution and see no reason to get a bigger monitor yet either.

The 8800s are just too big, too power hungry, too expensive and not a big enough performance increase for me to care right now. My X1900XT, great as it is, already had similarly absurd requirements, so until things shape up (hopefully with the refreshes) I'm not going to bother. When things shape up I'll get a nice fat monitor and an elite graphics card, but IMO ATI and Nvidia are rapidly getting lazy.

Great, if it's no use to you, stop banging on about how crap it is.
 
Antx777 said:
Yes ernysmuntz, and 14-18 year old Americanos with rich moms and pas won't be too interested in or enthusiastic about these cards. The "I want it now" society is producing these gigantic creatures... + spots :)

-Ant

True, but when there's no other choice on offer, my 7800GTX ain't cutting it for a lot of games these days.

I hinted in the post that I've given in, there's no use trying to hold back... can't beat 'em, join 'em... or something... it's my attempt at not being a miserable git by taking the easy way out and wasting £4xx (can't mention prices ;)) on a nice new card...

Not really the point; I think I veered off from making much sense about a third of the way through and just decided to air my pet peeves or whatever and see if anyone else feels the same way about the whole situation... :)
 
lay-z-boy said:
Great, if it's no use to you, stop banging on about how crap it is.
I never said it was crap, it's good. What I did say is that the new generation has started off as a joke.

Hopefully the refreshes of G80 and R600 will be more compact and use less power, but I doubt it because, like I said, Nvidia and ATI are rapidly getting lazy (ATI faster).
 
Last edited:
I'm going to go out on a limb here and disagree with ernyzmuntz - not for the sake of it, but with a bit of hindsight.

What we have seen in the world of the CPU is MHz, MHz, MHz, culminating in Prescott ('PresHot'), which was a joke as far as efficiency was concerned. AMD were doing fine with their CPUs, as their pipeline was not as long as the one that runs from Tunguska through to Europe. Mercifully, the massive power consumption and rubbish output were put to bed, and a P3-style chip resurfaced as the Core 2 Duo. This just shows that chip manufacturers won't be going after massively high MHz any more, and that parallelism is now king: dual/quad/etc. core to go with multithreaded applications. Future OS releases will have numerous subroutines, each of which will need its own slice of CPU time concurrently with the others.

The same will eventually happen in the graphics world, too. ATi's R300 was a very interesting chip, not just because of its longevity (9700Pro ---> X850XT, albeit in very, very tweaked form) and the fact that it completely battered everything else, but because it introduced something unique at that point: the ability to communicate with another GPU. This resulted in the dual-GPU cards whose pictures occasionally found their way onto the internet to raucous cries of "Ph0t0shopz0r!!!" and whatnot. Curiously, however, such solutions never made their way into the general market.

However, this parallelism is rearing its head again in the form of SLi/Crossfire, but more importantly in the 7950GX2 and the 7900GT Masterworks (or whatever it's called): one of which uses two cards, the other two chips on one card.

I personally don't think it will be too long until Nvidia/ATi start packing two bits onto one chip/card and stir things up that way. Nvidia have the reins at this point, because they've got the fastest DX9 solution and currently the fastest DX10-capable card. But, as the technology refines itself, it will shrink and require less power whilst churning out more textures. 'Tis the way of such things.
 
So anyway, apart from the now traditional "G80 is pants"/"no it's not" banter, has anyone heard anything concrete about R600? It seems very quiet for a major GPU release; I'd have expected to have heard something about capabilities, specs etc. by now.
 
Jabbs said:
Funny, my X1900XT doesn't cut through everything
Something's wrong with it then. ;) At the moment I am playing Battlefield 2, Call Of Duty 2, Oblivion, Dark Messiah and City Of Villains, all maxed out at 1440x900, and loving it. I think the only game that will dent it is Crysis, but I won't upgrade for one game.
 
Ulfhedjinn said:
Something's wrong with it then. ;) At the moment I am playing Battlefield 2, Call Of Duty 2, Oblivion, Dark Messiah and City Of Villains, all maxed out at 1440x900, and loving it. I think the only game that will dent it is Crysis, but I won't upgrade for one game.


Nowt wrong with this card; I can get 100fps out of BF2 with everything maxed out, and high fps in 2142 too, but there are games that make these cards struggle, Oblivion being one of them - there's no way you're getting really high fps out of that maxed out.

Some may think that hitting 40ish fps in the outdoor sections is acceptable, but to me nothing below 60fps is acceptable.
 
Last edited:
mrthingyx said:
I'm going to go out on a limb here and disagree with ernyzmuntz - not for the sake of it, but with a bit of hindsight.

What we have seen in the world of the CPU is MHz, MHz, MHz, culminating in Prescott ('PresHot'), which was a joke as far as efficiency was concerned. AMD were doing fine with their CPUs, as their pipeline was not as long as the one that runs from Tunguska through to Europe. Mercifully, the massive power consumption and rubbish output were put to bed, and a P3-style chip resurfaced as the Core 2 Duo. This just shows that chip manufacturers won't be going after massively high MHz any more, and that parallelism is now king: dual/quad/etc. core to go with multithreaded applications. Future OS releases will have numerous subroutines, each of which will need its own slice of CPU time concurrently with the others.

The same will eventually happen in the graphics world, too. ATi's R300 was a very interesting chip, not just because of its longevity (9700Pro ---> X850XT, albeit in very, very tweaked form) and the fact that it completely battered everything else, but because it introduced something unique at that point: the ability to communicate with another GPU. This resulted in the dual-GPU cards whose pictures occasionally found their way onto the internet to raucous cries of "Ph0t0shopz0r!!!" and whatnot. Curiously, however, such solutions never made their way into the general market.

However, this parallelism is rearing its head again in the form of SLi/Crossfire, but more importantly in the 7950GX2 and the 7900GT Masterworks (or whatever it's called): one of which uses two cards, the other two chips on one card.

I personally don't think it will be too long until Nvidia/ATi start packing two bits onto one chip/card and stir things up that way. Nvidia have the reins at this point, because they've got the fastest DX9 solution and currently the fastest DX10-capable card. But, as the technology refines itself, it will shrink and require less power whilst churning out more textures. 'Tis the way of such things.

Nice post, made good reading :)
 
Jabbs said:
there are games that make these cards struggle, Oblivion being one of them - there's no way you're getting really high fps out of that maxed out.

Some may think that hitting 40ish fps in the outdoor sections is acceptable, but to me nothing below 60fps is acceptable.
I stand by what I said then, my X1900XT cuts through everything. :)

Oblivion is the only game I can't play maxed out with vsync on and keep a steady 60fps, but even 30-40fps in that game is acceptable to me as it's not a fast-paced first person shooter title. ;)
 
fornowagain said:

Hahaha. Where is the G80? Getting used as a drawbridge somewhere? Has someone's cat gotten stuck up a tree?

:p

Cleopatra comin' straight back at ya. ;)
 
Ulfhedjinn said:
I stand by what I said then, my X1900XT cuts through everything. :)

Oblivion is the only game I can't play maxed out with vsync on and keep a steady 60fps, but even 30-40fps in that game is acceptable to me as it's not a fast-paced first person shooter title. ;)

Gotta agree with what you have said mate, the X1900XTX cuts through everything and it's silent with the IceQ cooler, and I'm at 1600x1200. FS-X is the only thing it can't deal with well, but that's just MS **** coding :/

I can afford a G80 and an X6800, but I'm not someone with more money than sense. The G80 is very, very, very powerful and the IQ looks fab, but it does not warrant a 500 quid price tag, nor does the X6800.

:)
 
Ulfhedjinn said:
I stand by what I said then, my X1900XT cuts through everything. :)

Oblivion is the only game I can't play maxed out with vsync on and keep a steady 60fps, but even 30-40fps in that game is acceptable to me as it's not a fast-paced first person shooter title. ;)

I'm with you on this one. I don't play Oblivion, though I will get around to it eventually. My card doesn't give me problems in any game so far; I'll upgrade when it starts underperforming in games. No need for me to jump on the G80 bandwagon when a week ago I had one of the top cards out. What's changed, like? Have games suddenly gone crazy and all my stuff become outdated? ...please :rolleyes:
 
I do wish ppl would stop saying the 8800 GTX is £500.

Thought I better edit that just in case, can't be too careful these days. :p
 
Last edited:
LoadsaMoney said:
I do wish ppl would stop saying the 8800 GTX is £500 - just because one place has them up at £500 means they are all £500, does it? Does it nads.
Yeah, they're not £500, but they're still far too much to justify buying for a lot of us whose graphics cards still cut through everything and probably will for the year to come.

Like I said before: the physical size and power draw of my current card are bad enough, and G80/R600 aren't smaller or less power-hungry, so unless the refreshes show some major improvement instead of carrying on the recent laziness, I am going to stick with my X1900XT for now and buy a Wii for my future gaming needs.
 
Yup, I only game at 1280x1024, so my X1800 XT does me fine - it cuts through everything at that res maxed out. Obviously if I gamed at uber widescreen resolutions I'd be looking at a slideshow, but I ain't, so I don't need a G80, or an R600, or whatever. Only when my card starts to struggle (which it eventually will, as it ain't gonna run every game at 1280x1024 forever as games progress and get more demanding) will I upgrade. :)
 
Ulfhedjinn said:
I never said it was crap, it's good. What I did say is that the new generation has started off as a joke.

Hopefully the refreshes of G80 and R600 will be more compact and use less power, but I doubt it because, like I said, Nvidia and ATI are rapidly getting lazy (ATI faster).


I don't think it's laziness - I think it's more a refocusing of where they see their revenue coming from. It used to be PC video card sales; now it's the race to effectively provide supercomputing to both the supercomputer market and the games console.

The problem is that the PC architecture itself constrains progress. You can't get the same performance against standard system RAM, whereas a GPU plus its graphics RAM can be designed entirely to requirement, with only the PCI-E bus being standard.

Both ATI and nVidia are pushing hard in the GPGPU market where their chips are becoming parallel processors.

I would expect nVidia's chipset roadmap (nForceX?) to heavily support multiple GPUs (massive DMA access to main RAM) and so effectively sideline the CPU (Intel/AMD) into a generic support chip.

Intel are attempting to block this by running memory on-die for speed. They are already at an 80+ lightweight-core threading model with this; the next step is to add graphics, and you have a combined massively parallel processor with 4GB of onboard RAM and a graphics output system (perhaps a serial bus like HT from the frame buffer to a support chip outside).

Hmm, sounds similar to AMD's CPU+GPU on-die? Well, they're all racing for that cellular supercomputer chip where you can run two or four in a games console and then run 20,000 of the identical chip in your supercomputer (which may just about run Vista!).

All I can say is, as long as they come with a coffee-heater heatsink for the office, I'll be happy.

edit: additional...
Where I do see a difference is in how useful the support environment is.

nVidia - CUDA: basically being able to write C code to deliver the processing power. This instantly gives developers the tools, allowing fast take-up. Pros - people are used to it and don't have to think about the underlying structure, so they can spend their time/effort on the functionality itself. Cons - the compiler has to second-guess the programmer to deliver fast code, and the lack of low-level access doesn't allow competitive compilers (initially).
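
For anyone who hasn't seen it, here's roughly what that looks like in practice - just a minimal sketch with made-up names and sizes, not anything official, but it shows the idea of writing an ordinary per-element C function and letting the runtime fan it out over thousands of GPU threads:

Code:
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread handles one element: y[i] = a * x[i] + y[i]
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;                 // 1M elements, purely illustrative
    const size_t bytes = n * sizeof(float);

    // Host buffers in ordinary system RAM
    float *h_x = (float *)malloc(bytes);
    float *h_y = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_x[i] = 1.0f; h_y[i] = 2.0f; }

    // Device buffers in the card's own graphics RAM
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, h_x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, bytes, cudaMemcpyHostToDevice);

    // Launch thousands of lightweight threads across the GPU in one go
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, d_x, d_y);

    cudaMemcpy(h_y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", h_y[0]);         // expect 4.0

    cudaFree(d_x); cudaFree(d_y);
    free(h_x); free(h_y);
    return 0;
}

You build it with nvcc and the compiler sorts out the parallel plumbing for you - which is exactly the pro and the con described above.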

ATI - develop a low-level approach and let partners create the development tools. This is a risky strategy given the speed of nVidia's functionality-based strategy. Pros/cons - the opposite of nVidia's. It feels like a knee-jerk reaction and needs reconsideration, as ATI need a proper development environment that integrates with .NET or gcc without big changes.

Intel - they have the hardware and the massively parallel hardware development, and they have their exceedingly good Intel compiler suite to back it all up.
 
Last edited: