• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Instead of slating Nvidia...

I'm talking about value for money.

The 4870X2 was £375 at release. You still have to spend nearly that much to get equal performance from a 5870, so in my view it hasn't progressed.

If you look at US prices then it has progressed.

Our prices are dictated by the USD to GBP conversion rate, plus VAT on top and of course a retailer markup too.
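
A rough back-of-envelope of how that stacks up, in Python (all figures are assumptions for illustration, not actual retail data):

# Estimate a UK shelf price from a US MSRP (assumed numbers, purely illustrative).
usd_msrp = 400.00      # assumed US launch MSRP in dollars (ex US sales tax)
usd_per_gbp = 1.50     # assumed USD-to-GBP exchange rate
vat_rate = 0.175       # UK VAT rate at the time
retail_margin = 0.05   # assumed retailer markup

ex_vat_gbp = usd_msrp / usd_per_gbp                     # ~GBP 267
uk_price = ex_vat_gbp * (1 + retail_margin) * (1 + vat_rate)
print(f"Estimated UK shelf price: GBP {uk_price:.2f}")  # ~GBP 329

So a card that looks like a $400 part over there easily ends up well over £300 here before any early-adopter premium.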
 
I must be strange because I don't really care who makes my next graphics card. I've had ATI cards and Nvidia cards. I just buy whatever seems best for the money I want to spend at the time.
 
Even so, they couldn't slash prices to the point where the speed increases in games, PhysX and the sound options all on one card make sense when it's similar in price, lol. The power issues are offset by good PSUs, so those are a non-issue to be fair, and the temperatures CAN be solved by more efficient cooling, which has been proven in the past with a plethora of cards.

Forgive me for this being my first post, but I see it like this.

I seem to remember that when the 4870 512MB was released, its comparative Nvidia offering was the GTX 260; the 4870 512MB was slightly faster and cheaper.
This caused Nvidia to respond with the GTX 260 216. The 4870 then lost slightly on performance, but it will always be remembered for its value for money, although not as much as the 4850. Either way, compared to a GTX 280 the 4870 was a fine offering for its price/performance, and that didn't stop it from selling.

The 470/480 has FUTURE tessellation capabilities, if game developers use it.
The 470/480, whilst impressive, is just too much for the 40nm process;
similar problems to the first AMD Phenom on 65nm.
With a second revision, maybe on 32nm, it could open up to the full 16 SMs, and hopefully that will cut the TDP down too.
I'm not knocking the GF100 architecture; it's good, but only if it gets used to its potential, which means future game developers using tessellation etc.

At the moment, no matter what heatsink, fan design or other improvements you carry out, you simply cannot hide the fact that there is both an engineering design flaw and a low success rate with a 500mm²+ die on the TSMC 40nm process. ATI have been struggling with low yield rates too, as you well know, but the die size is in ATI's favour.
And secondly, you are forgetting that ATI were just about able to make a dual-GPU card on a single board on the 40nm process, hence the 5970, even though they were reluctant to make it and did so at scaled-down frequencies etc. I'd like to see a dual-GPU card from Nvidia on the current fabrication process.
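
To put the die-size point into rough numbers, here's a minimal Poisson yield sketch in Python (the defect density is an assumption, not TSMC's actual figure; die areas are the commonly quoted ~530mm² for GF100 and ~334mm² for Cypress):

import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area)
defect_density = 0.004  # defects per mm^2 (assumed, i.e. 0.4 per cm^2)

for name, area_mm2 in [("GF100 ~530 mm^2", 530), ("Cypress ~334 mm^2", 334)]:
    estimated_yield = math.exp(-defect_density * area_mm2)
    print(f"{name}: estimated yield ~{estimated_yield:.0%}")

# With these assumed numbers: GF100 ~12%, Cypress ~26%. The smaller die wins on the
# same process even before you count partly defective GF100s salvaged as cut-down parts.

Whatever the real defect rate is, the exponential means the bigger die always loses by more than the area difference alone suggests.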

Of course there will be alternatives: cheaper CrossFire and SLI configs in the future.

If you can agree that in its current form it's too power hungry at full load (even if it may still remain reliable at 90°C+ temperatures)
in comparison to the ATI 5850/5870;
OK, yes, it does provide good performance in gaming,
but there is more potential in the architecture than can be seen right now, and it can provide gains in future games that implement tessellation, although it requires a revision to a more suitable fabrication process.
And then there's the main decider: the price! That's for the buyer to decide; for me it's too expensive, but then I feel a 5850 is too expensive too.


I'm not a fanboy; I buy what I feel is the best for the money at the time.
My 4870, although getting on a bit, will last me until the next development of either RV970 or Fermi II. I'll skip both ATI and Nvidia for now, and anyway, summer's coming up soon.
 
I must be strange because I don't really care who makes my next graphics card. I've had ATI cards and Nvidia cards. I just buy whatever seems best for the money I want to spend at the time.

Ditto tbh. If it wasn't that loud, or that hot, or that expensive then I'd def consider it as my next card. I just couldn't live with those 3 issues tbh (maybe just 1 of them).

As it stands my 5870 offers a good balance of performance-per-pound with no major issues, so I'm happy with it atm :)
 
I don't think Nvidia engineers are morons who have no idea how to design a chip. I think the problem with Fermi and GF100 is that they decided a long while ago to take a fundamentally different design approach, and they found out that TSMC's 40nm process was just totally incapable of making it effectively in large numbers. As far as I'm aware, it doesn't matter how many respins you do; if the core design of the chip is too complex for a given process then you will always have trouble until you move to a different node.

If what Charlie @ S/A said is correct, and Fermi was never meant to appear in the mainstream graphics market, it would make an awful lot of sense. If Fermi was purely for the professional and HPC market then the yield issues would have been far less of a problem. The margins in that market are so much higher that they wouldn't have had to salvage cores so aggressively and raise the voltage to make enough cards to turn a profit.

Obviously the die size is the way it is for a reason; it's not that it's a bloated monster, it's just primarily a compute part that's been pulled into service as a gaming GPU. If the early benchmarks are anything to go by, it is a big success as a compute part, but I wouldn't be surprised if they cannot make it efficiently until they move away from TSMC 40nm altogether.

I think ATI will be the better choice for gaming cards for a good while to come.
 
mame said:
I must be strange because I don't really care who makes my next graphics card. I've had ATI cards and Nvidia cards. I just buy whatever seems best for the money I want to spend at the time.

You're not the only one - I steered clear of the Graphics Card section during last week's flame war.

I think the whole graphics industry has entered a bit of a stagnant period.

Graphics power hasn't really increased for the last 18+ months. I bought my 4870X2 on launch day and then regretted it, thinking I'd made an awful mistake and it was going to be a waste of money.

Prices of these cards went up, and it wasn't until much later that they came down again. But the 5870 isn't really any faster; it's a bit more reliable, though I haven't had any CrossFire issues anyway. And the card still mixes it with the new Fermi cards, giving them a scare every now and again.

Ditto. You'd have thought that after two and a half years there'd finally be a single-GPU graphics card available which could run Crysis at 1920x1200, Very High and 4x AA close to or beyond 60 fps. Even now, the best still barely pushes into the 40 fps region.

Or is it just the game being extraordinarily intensive at Very High settings?
 
You could argue there is no difference between only enabling it on their cards and locking out all competitors, but that again implies motive - where do you draw the line between enabling vendor-specific features and locking out the competitor?

AA isn't AA - there are all kinds of different implementations, problems and optimizations for anti-aliasing. A variant known as MSAA has become so common that people expect it to be there, but when you throw things like HDR into the picture it becomes trickier. AA in the DX spec is pretty naff at best - almost nothing actually uses it in its intended form.

http://www.brightsideofnews.com/news/2009/11/4/batmangate-amd-vs-nvidia-vs-eidos-fight-analyzed.aspx
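
To illustrate the HDR point, here's a minimal sketch in Python (made-up sample values) of why a naive box-filter MSAA resolve misbehaves on HDR data: averaging very bright linear samples and then tone mapping gives a different, more aliased-looking edge than tone mapping each sample first.

# One edge pixel with 4 MSAA sub-samples (assumed values): two cover a very bright
# HDR highlight, two cover a dark background.
samples = [16.0, 16.0, 0.05, 0.05]  # linear HDR luminance per sub-sample

def tonemap(x):
    # Simple Reinhard-style operator mapping HDR values into the 0..1 display range.
    return x / (1.0 + x)

# Naive resolve: average in linear HDR space, then tone map
# (roughly what a fixed-function hardware resolve does).
resolve_then_tonemap = tonemap(sum(samples) / len(samples))

# Per-sample tone map, then average (what you'd rather do in a shader for HDR).
tonemap_then_resolve = sum(tonemap(s) for s in samples) / len(samples)

print(f"resolve then tonemap: {resolve_then_tonemap:.2f}")  # ~0.89, edge stays nearly white
print(f"tonemap then resolve: {tonemap_then_resolve:.2f}")  # ~0.49, a properly blended edge

That's one reason "just turn AA on" isn't a given once an engine goes HDR or deferred; it often has to be done by hand per title.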
 
You could argue there is no difference between only enabling it on their cards and locking out all competitors, but that again implies motive - where do you draw the line between enabling vendor-specific features and locking out the competitor?

AA isn't AA - there are all kinds of different implementations, problems and optimizations for anti-aliasing. A variant known as MSAA has become so common that people expect it to be there, but when you throw things like HDR into the picture it becomes trickier. AA in the DX spec is pretty naff at best - almost nothing actually uses it in its intended form.

Give it a rest.
You ain't going to convince anyone but yourself that nothing fishy was going on and there was no motive, given the wealth of info that says otherwise.
People have already made up their minds which is which, and not everyone can be taken for a fool because of semantics.
 
Forgive me for this being my first post, but I see it like this.

I seem to remember that when the 4870 512MB was released, its comparative Nvidia offering was the GTX 260; the 4870 512MB was slightly faster and cheaper.
This caused Nvidia to respond with the GTX 260 216. The 4870 then lost slightly on performance, but it will always be remembered for its value for money, although not as much as the 4850. Either way, compared to a GTX 280 the 4870 was a fine offering for its price/performance, and that didn't stop it from selling.

The 470/480 has FUTURE tessellation capabilities, if game developers use it.
The 470/480, whilst impressive, is just too much for the 40nm process;
similar problems to the first AMD Phenom on 65nm.
With a second revision, maybe on 32nm, it could open up to the full 16 SMs, and hopefully that will cut the TDP down too.
I'm not knocking the GF100 architecture; it's good, but only if it gets used to its potential, which means future game developers using tessellation etc.

At the moment, no matter what heatsink, fan design or other improvements you carry out, you simply cannot hide the fact that there is both an engineering design flaw and a low success rate with a 500mm²+ die on the TSMC 40nm process. ATI have been struggling with low yield rates too, as you well know, but the die size is in ATI's favour.
And secondly, you are forgetting that ATI were just about able to make a dual-GPU card on a single board on the 40nm process, hence the 5970, even though they were reluctant to make it and did so at scaled-down frequencies etc. I'd like to see a dual-GPU card from Nvidia on the current fabrication process.

Of course there will be alternatives: cheaper CrossFire and SLI configs in the future.

If you can agree that in its current form it's too power hungry at full load (even if it may still remain reliable at 90°C+ temperatures)
in comparison to the ATI 5850/5870;
OK, yes, it does provide good performance in gaming,
but there is more potential in the architecture than can be seen right now, and it can provide gains in future games that implement tessellation, although it requires a revision to a more suitable fabrication process.
And then there's the main decider: the price! That's for the buyer to decide; for me it's too expensive, but then I feel a 5850 is too expensive too.


I'm not a fanboy; I buy what I feel is the best for the money at the time.
My 4870, although getting on a bit, will last me until the next development of either RV970 or Fermi II. I'll skip both ATI and Nvidia for now, and anyway, summer's coming up soon.

I could turn the voltage up on my 5870 to 1.35V to get 1000MHz on the GPU if I wanted.

It would give me a card that, whilst quick, would be a jet engine due to massively increased power consumption.
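
Rough numbers on why, assuming dynamic power scales with frequency times voltage squared (the stock clock and voltage below are assumptions for a reference 5870, not measurements):

# Dynamic power scaling estimate: P ~ f * V^2 (ignores leakage, so it understates the increase).
stock_mhz, stock_volts = 850, 1.16   # assumed reference 5870 clock and core voltage
oc_mhz, oc_volts = 1000, 1.35        # the overclock quoted above

scale = (oc_mhz / stock_mhz) * (oc_volts / stock_volts) ** 2
print(f"Estimated GPU power increase: ~{(scale - 1) * 100:.0f}%")  # roughly +60% heat to get rid of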

I would not do that as I am not stupid.

Now, in order to create a minor lead (even including games that use tessellation, the lead is only minor), Nvidia have had to do more than that: they've had to stick turbo-nutter voltage through the chip to get clocks that will enable them to beat the competition (just). They've then had to fit the best cooling system they can and allow that cooling system to dump the heat anywhere as long as it's not around the GPU, resulting in a nice warm case for everybody stupid enough to buy one.

So no, the architecture at the minute is not very good. There's no point polishing a turd; it will still be a turd at the end of the day.
 
This really puzzles me... how on earth could nVidia have gone through all the birthing pains of the 200-series 40nm cards, not learnt anything from it, and then started all over again with a design even less suited to the problematic process they had to deal with?

On top of that they messed up their memory controller.

From Anandtech:

Given the 384-bit bus, we initially assumed NVIDIA was running in to even greater memory bus issues than AMD ran in to for the 5000 series, but as it turns out that’s not the case. When we asked NVIDIA about working with GDDR5, they told us that their biggest limitation wasn’t the bus like AMD but rather deficiencies in their own I/O controller, which in turn caused them to miss their targeted memory speeds. Unlike AMD who has been using GDDR5 for nearly 2 years, NVIDIA is still relatively new at using GDDR5 (their first product was the GT 240 late last year), so we can’t say we’re completely surprised here. If nothing else, this gives NVIDIA ample room to grow in the future if they can get a 384-bit memory bus up to the same speeds as AMD has gotten their 256-bit bus.
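
For a rough feel of the numbers behind that (clock figures are the commonly quoted launch specs, treat them as assumptions rather than gospel):

# Peak memory bandwidth = (bus width in bits / 8) * effective data rate in MT/s.
def bandwidth_gb_s(bus_bits, effective_mtps):
    return (bus_bits / 8) * effective_mtps / 1000

gtx480 = bandwidth_gb_s(384, 3696)  # 384-bit at 924MHz GDDR5 (3696 MT/s effective)
hd5870 = bandwidth_gb_s(256, 4800)  # 256-bit at 1200MHz GDDR5 (4800 MT/s effective)
print(f"GTX 480: ~{gtx480:.0f} GB/s, HD 5870: ~{hd5870:.0f} GB/s")  # ~177 vs ~154 GB/s

So the wider bus does buy them headroom, but with GDDR5 running well below AMD's speeds it's nowhere near the lead the extra 128 bits would suggest.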
 
I don't think Nvidia engineers are morons who have no idea how to design a chip. I think the problem with Fermi and GF100 is that they decided a long while ago to take a fundamentally different design approach, and they found out that TSMC's 40nm process was just totally incapable of making it effectively in large numbers. As far as I'm aware, it doesn't matter how many respins you do; if the core design of the chip is too complex for a given process then you will always have trouble until you move to a different node.

If what Charlie @ S/A said is correct, and Fermi was never meant to appear in the mainstream graphics market, it would make an awful lot of sense. If Fermi was purely for the professional and HPC market then the yield issues would have been far less of a problem. The margins in that market are so much higher that they wouldn't have had to salvage cores so aggressively and raise the voltage to make enough cards to turn a profit.

Obviously the die size is the way it is for a reason; it's not that it's a bloated monster, it's just primarily a compute part that's been pulled into service as a gaming GPU. If the early benchmarks are anything to go by, it is a big success as a compute part, but I wouldn't be surprised if they cannot make it efficiently until they move away from TSMC 40nm altogether.

I think ATI will be the better choice for gaming cards for a good while to come.

I think there's some sort of sense in there. NV have definitely aimed the Fermi chip at the GPGPU market, and it is amazing for that specific type of work. The problem is that technically I think the Fermi arch is far far better than anything AMD have released but that isn't transitioning to "in game" performance. Which kinda brings me back to what I posted a while back: I don't think this is a problem with NV's card, rather that there's nothing out there designed to take advantage of it (well, anything bar CUDA applications). A bit like when we started to see dual/tri/quad-core processors - in the majority of benchmarks there was little increase in performance because most software wasn't multithreaded to take advantage of the multiple cores. I think, given time and as we get new drivers/software capable of taking advantage of the Fermi arch, it will begin to shine.
 
The problem is that technically I think the Fermi arch is far far better than anything AMD have released but that isn't transitioning to "in game" performance.

I don't think anyone is slating the arch, just their current implementation of it: the 470/480 cards are hot, noisy and have design issues. Combined with TSMC's issues, we end up with cards that have had to have parts disabled, memory not running at target clocks and, most likely, a GPU that isn't running at target clocks either.

I'm guessing when we get to 28nm Nvidia can fix most of these issues and things will look a whole lot better.

Kinda how the R600 arch evolved into the 4800/5800 series.
 
The GTX 470 and GTX 480 are decent cards; they take the single-GPU performance crown back, the GTX 470 is actually priced OK for what it offers, and I'm sure prices will drop. So however much you dislike the noise, heat and power requirements, it's not all bad.

However that being said I'm not buying either. :D
 
Again, why does it matter that they are the fastest single card?

No-one has yet managed to explain WHY it matters

Bragging rights. Both companies make a big noise about it if they have the fastest single GPU or dual GPU card. It's always been that way.
 
Bragging rights. Both companies make a big noise about it if they have the fastest single GPU or dual GPU card. It's always been that way.
Great for PR, pointless in the real world - which is why I don't get why everyone is going on about it here.

I thought that's what fanboys say when their company of choice doesn't have the fastest card, lol :D
I don't care what is the fastest single card, I care what is the fastest setup for the price.
 
I think there's some sort of sense in there. NV have definitely aimed the Fermi chip at the GPGPU market, and it is amazing for that specific type of work. The problem is that technically I think the Fermi arch is far far better than anything AMD have released but that isn't transitioning to "in game" performance.
I can't agree with that. nVidia has certainly pushed for GPGPU this time around, and there's nothing wrong with that, but the incredibly high power consumption and operating temperatures highlight the inherent inefficiencies in their design. And at the same time it's obvious that most customers will be looking for better gaming performance; what percentage of people are actually going to be programming themselves or running such heavy calculations? Maybe 1-2%, if that? All this is set against a backdrop of sleazy business practices and simply makes them look desperate.

At the end of the day people don't want an overpriced, underperforming card with high power consumption and a noisy cooling system. nVidia hit on a winner with the 8xxx series but simply hasn't produced anything since that has stood up to ATi. We'll have to wait and see if they can push out a further revision of the Fermi architecture to improve things, but ATi clearly has the advantage this generation.
 