Nvidia to showcase Fermi/GTX 300

Hmm, well, the buzzword is GPGPU, hence they need to stick with "GPU". The Fermi-based chips will still ship as graphics cards, since the hardware required to accelerate graphics is there, but the design balance is significantly biased towards GPGPU.

As Fermi is targeting computing rather than graphics, I would expect its performance priority to be computing first. Is this bad? Not if games support using the GPU's spare computing capacity to offload computations from the CPU. A lot of the time the GPU is being held back by the CPU: the majority of games show this, where it's only the anti-aliasing that differentiates cards and the FPS is CPU-bottlenecked.

nV are right that if you can reduce bottlenecking by offloading the maths tasks to the GPU, then you can have a better game. Where they have gone wrong is in attempting to use this as a tool to increase their own sales without bringing the industry forward. Gamers/developers are going to be wary of a situation where they're shrinking their sales segments and increasing their costs by having to define multiple code paths.
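To illustrate the "multiple code paths" worry, here's a minimal sketch (all function names made up for illustration, not anyone's actual engine code) of a game physics step that offloads to the GPU when it has spare capacity, with a CPU fallback:

```python
# Hypothetical physics step with two code paths: plain CPU, or offloaded to
# the GPU when it has spare capacity. The developer must write and maintain
# both, which is the extra cost being discussed above.

def physics_step_cpu(positions, velocities, dt):
    # Plain CPU path: Euler-integrate positions from velocities.
    return [p + v * dt for p, v in zip(positions, velocities)]

def physics_step_gpu(positions, velocities, dt):
    # Stand-in for a GPGPU kernel launch (CUDA/OpenCL/compute shader);
    # it reuses the CPU maths here purely so the sketch stays runnable.
    return physics_step_cpu(positions, velocities, dt)

def physics_step(positions, velocities, dt, gpu_has_spare_capacity):
    # Runtime dispatch between the two code paths.
    if gpu_has_spare_capacity:
        return physics_step_gpu(positions, velocities, dt)
    return physics_step_cpu(positions, velocities, dt)
```

The maths is trivial either way; the point is that the second path, and the dispatch logic, have to be written, tested and shipped regardless of how many customers' hardware ever takes it.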

Could anyone make a post that's more wrong? Most games are NOT held back by CPUs; very, very few games are CPU-limited. And it's very normal, and not a new situation, for a new card to come out that's fast enough to be "less" CPU-limited for a short time, until harder games come out. Get over it.

They did NOT make a CPU-crunching card with the intent of offloading normal CPU tasks to reduce this CPU limit in most games; that's just crap. There's no "Nvidia were right" there, because they didn't do that. As for "gamers/developers are going to be wary": really? Firstly, gamers are going to be wary that their sales segments will decrease and their coding costs will increase... really? Likewise, developers will see a reduced segment by having different graphics cards to code for, and they have to define multiple code paths... really? Sounds to me like someone's trying to sound like he knows what he's talking about, but doesn't.

Fermi WILL be all but identical to the GPU card they put out. There is a slim chance they release a cheaper, smaller-bandwidth/scaled-down part, but it would be more akin to a 5770 version of the 5870 architecture than a radical difference. In all likelihood, like every other generation for years, what's being touted as the GPGPU Fermi is identical in every way to the GPU-aimed version, except in name.

The reason they're likely not showcasing its gaming performance, or leaking numbers, is that with non-final silicon it might be buggy in GPU acceleration, maybe their DX11 drivers are terrible, or their own DX11 benchmarks aren't finished, or a mix of all of that. Basic number crunching is fairly simple to demonstrate, and those who are building number-crunching boxes go for max performance even if it's only 20% higher. A gamer is unlikely to upgrade to a £400 card for 20% more performance, though. So non-final silicon at below-final clock speeds makes more of a difference in benchmarks to gamers than to the GPGPU market, and you're more likely to want to hide the gaming numbers till you have final silicon at higher clock speeds.

My personal belief is that TSMC's utterly crap process is making Nvidia's significantly higher clock speeds a massive stumbling block, and the cause of a lot of the delays. Remember, ATi's whole core runs at the core clock speed, 850MHz or so, up to 1GHz. The largest part of Nvidia's core, the shaders, ran on the last gen at up to 1.5GHz, massively higher. Power/signal leakage is the biggest stumbling block on TSMC's 40nm process, and leakage is also worse at the higher voltages that higher clock speeds demand.
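For what it's worth, the textbook first-order CMOS relation backs up the clocks-vs-power worry (generic formula with illustrative numbers, not real Fermi/RV870 figures): dynamic power scales with C·V²·f, and reaching a higher f usually needs a higher V, which in turn drives up leakage.

```python
# First-order CMOS dynamic power: P = C * V^2 * f, with switched capacitance
# C in farads, supply voltage V in volts, clock frequency f in Hz.
# Numbers below are illustrative only, not vendor data.

def dynamic_power(cap_farads, volts, freq_hz):
    return cap_farads * volts ** 2 * freq_hz

# Same notional chip at an 850 MHz core clock vs a 1.5 GHz shader clock:
# the frequency term alone is roughly 76% more power, before any voltage
# bump (and the extra leakage that comes with it) is even counted.
p_850  = dynamic_power(1e-9, 1.1, 850e6)
p_1500 = dynamic_power(1e-9, 1.1, 1.5e9)
```

So even this crude model suggests the 1.5GHz-class shader domain pays a steep power price on a leaky process compared to a core that stays under 1GHz.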

TSMC have also announced another delay to their 32nm process (yippee), which is apparently in testing at awful yields. Another screw-up by TSMC :(

We're in the laughable situation that Nvidia are seriously considering Global Foundries (basically AMD's manufacturing arm) for future GPUs over TSMC. That's just how bad TSMC have been, for so long.
 
They don't need to hype it up; people already know it's going to be on par with the 58**.

Is it? :confused: Nobody knows how good or bad it's going to be yet, unless you know something pretty much the rest of the world doesn't? Please enlighten us all with some benchmarks :rolleyes:;)
 
I hope Nvidia comes out with a very competitive GPU. The thought of ATI being the only player in the dedicated GPU market, and of being forced to buy an ATI card again which I'd have to stick with, is very scary.
 
If the GTX300 doesn't beat the 5870 by at least 20% then it will be a failed launch in my eyes, and probably in the eyes of most on here. Being merely on par with the 5870 six months after it launches will be awful for Nvidia, and probably for all of us as well.
 
Well, my GTX260 is dying, but I cannot buy a 5850 or 5870 because there is no stock, and I will not spend that kind of money!

What is a man to do? Source a 4870X2 in the MM? Just wait till my GTX260 dies?

Curse Nvidia for not having a competitor to ATI!!!


Wait for a good time to RMA under warranty? ;)
 
We can all just hope that Nvidia release something spectacular. We don't want to see a repeat of the 8800 generation, where there wasn't a meaningful update for years.
 
If the GTX300 doesn't beat the 5870 by at least 20% then it will be a failed launch in my eyes, and probably in the eyes of most on here. Being merely on par with the 5870 six months after it launches will be awful for Nvidia, and probably for all of us as well.

Well, it has to be faster than a GTX295, doesn't it? And that is already faster than a 5870 so I wouldn't worry yourself unduly about that matchup. It's whether it beats out a 5970 by 20% that you should be looking for.

As for me, I have a GTX295 SLi system and a 30" Dell so I'm not buying anything new for a while anyway.
 
This is just another GPU-compute occasion, so no graphics info, as Jen-Hsun Huang has already said in the past.

Jen-Hsun Huang

We didn’t announce anything on graphics because it wasn’t graphics day. When we announce GeForce and Quadro, we are going to talk about the revolutionary graphics ideas that are designed into Fermi, and so we are looking forward to doing that in the near future. And so please be patient with us.

Jen-Hsun Huang

The demand is really, really strong for it and we will tell you about all the great graphics features when we launch.

Source
 
Well, it has to be faster than a GTX295, doesn't it? And that is already faster than a 5870 so I wouldn't worry yourself unduly about that matchup. It's whether it beats out a 5970 by 20% that you should be looking for.

As for me, I have a GTX295 SLi system and a 30" Dell so I'm not buying anything new for a while anyway.

Doesn't have to be faster than the GTX295 at all; it could well be slower if Nvidia do a repeat of the old 5800 leaf blower.
 
The thought of ATI being the only player in the dedicated GPU market, and of being forced to buy an ATI card again which I'd have to stick with, is very scary.

Being on par with the 5870 six months after it launches will be awful for Nvidia and probably for all of us as well.

Is it not 'scary' for the market as a whole if one company dominates the other ALL the time?

I personally buy the best at the time regardless of who makes it!
 
Doesn't have to be faster than the GTX295 at all; it could well be slower if Nvidia do a repeat of the old 5800 leaf blower.

The high-end GT300 part _should_ be around 30-40% faster than the 295GTX - assuming 100% SLI scaling, which almost never happens... Granted, ATI didn't manage to double the performance with more than double the hardware spec, but that's always been an ATI issue and not generally the case with nVidia - plus ATI's new drivers should address some of that balance.

So the high-end part _should_ soundly beat both the 295GTX and 5870 and be comparable to 5870 CF in both price and performance. nVidia would really have to balls things up to produce something slower than the 295GTX.
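The back-of-envelope arithmetic behind those percentages looks like this (a sketch using the thread's speculative numbers, not benchmarks):

```python
# A dual-GPU card with SLI scaling factor s performs (1 + s) times one of
# its GPUs; s = 1.0 is the rarely-seen perfect case. All figures below are
# the thread's speculation, not measurements.

def dual_gpu_factor(sli_scaling):
    return 1.0 + sli_scaling

perfect = dual_gpu_factor(1.0)   # 295GTX == 2.0x one of its GPUs, on paper
typical = dual_gpu_factor(0.8)   # with a more realistic ~80% scaling

# "30-40% faster than the 295GTX" would then put the GT300 somewhere
# between 1.3 * typical and 1.4 * perfect times a single last-gen GPU.
low  = 1.3 * typical
high = 1.4 * perfect
```

Which is why the scaling assumption matters so much: the same "30-40% faster" headline implies anywhere from roughly 2.3x to 2.8x a single previous-generation GPU, depending on how well the 295GTX's SLI actually scales in a given game.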
 
The high-end GT300 part _should_ be around 30-40% faster than the 295GTX - assuming 100% SLI scaling, which almost never happens... Granted, ATI didn't manage to double the performance with more than double the hardware spec, but that's always been an ATI issue and not generally the case with nVidia - plus ATI's new drivers should address some of that balance.

So nVidia should be able to blast everyone out of the water, while ATi are just struggling by? And this is based upon... oh yeah, nothing. In fact, if you look at the past several generations of nVidia cards you'd see that this doesn't follow the trend. I certainly don't rule it out as a possibility, but your belief in it is unfounded, hence why most people just laugh at you and then move along.
 
Say goodbye to a GPU maker (Nvidia) that pushes the limits, works alongside and funds game developers, promotes innovation and new ideas etc., and welcome the new overlords (ATI) that don't care about software development, promote mediocrity, supply cheap unreliable parts etc. The future of PC gaming looks even darker.
 
So nVidia should be able to blast everyone out of the water, while ATi are just struggling by? And this is based upon... oh yeah, nothing. In fact, if you look at the past several generations of nVidia cards you'd see that this doesn't follow the trend. I certainly don't rule it out as a possibility, but your belief in it is unfounded, hence why most people just laugh at you and then move along.

Speak for yourself. Rroff gets a lot of unwarranted grief on here. I don't always agree with him but most of the stuff he posts is at least well thought through and interesting to read.

Personally I expect/hope the high-end GT300 part to deliver ~50% higher performance than the 295/5870 cards. Anything less is a bit meh.
 
So nVidia should be able to blast everyone out of the water, while ATi are just struggling by? And this is based upon... oh yeah, nothing. In fact, if you look at the past several generations of nVidia cards you'd see that this doesn't follow the trend. I certainly don't rule it out as a possibility, but your belief in it is unfounded, hence why most people just laugh at you and then move along.

If my info was based on nothing I'd be wrong most of the time...
 
The high-end GT300 part _should_ be around 30-40% faster than the 295GTX - assuming 100% SLI scaling, which almost never happens... Granted, ATI didn't manage to double the performance with more than double the hardware spec, but that's always been an ATI issue and not generally the case with nVidia - plus ATI's new drivers should address some of that balance.

So the high-end part _should_ soundly beat both the 295GTX and 5870 and be comparable to 5870 CF in both price and performance. nVidia would really have to balls things up to produce something slower than the 295GTX.

I think you'll find they've both had theoretical-versus-actual performance difficulties with their more recent architectures. I mean, look at the GTX280 and the 8800 Ultra: the GTX 280 'should' be around 75% faster than it, but in reality is about 40% faster.
 
The high-end GT300 part _should_ be around 30-40% faster than the 295GTX - assuming 100% SLI scaling, which almost never happens... Granted, ATI didn't manage to double the performance with more than double the hardware spec, but that's always been an ATI issue and not generally the case with nVidia - plus ATI's new drivers should address some of that balance.

So the high-end part _should_ soundly beat both the 295GTX and 5870 and be comparable to 5870 CF in both price and performance. nVidia would really have to balls things up to produce something slower than the 295GTX.

It's a totally new architecture; we've no way of knowing until the benchmarks show up, unless you're an Nvidia employee.
 