GT300 rumours.

It seems reasonable that it might have 512 shaders, given the die sizes a 40nm process would allow (and given the trend in new high-end GPU cores from Nvidia; 90nm: 128, 65nm: 240, 40nm: 512?). I don't think the shader clock will be nearly as high as they speculate, though, given the assumed complexity of the core. Obviously as it's a new DirectX release, it's very likely to be a new architecture.
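As a rough sanity check on the 512 figure: transistor density scales roughly with the inverse square of the feature size (an idealisation; real shrinks are less tidy). A quick sketch, using GT200's 240 shaders at 65nm as the baseline:

    # Back-of-envelope shader budget from a die shrink.
    # All numbers are rough assumptions, not confirmed specs.
    def density_gain(old_nm, new_nm):
        # Idealised scaling: transistor density ~ 1 / (feature size)^2.
        return (old_nm / new_nm) ** 2

    gt200_shaders = 240                  # GT200 at 65nm
    gain = density_gain(65, 40)          # ~2.64x density
    print(f"Idealised density gain: {gain:.2f}x")
    print(f"Shaders at equal die area: ~{gt200_shaders * gain:.0f}")
    # ~634 at the same (huge) die size, so 512 is plausible even if
    # part of the shrink is spent on a smaller, cheaper die.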
 
As big a market as Nvidia hopes GPGPU will be, it won't stake its entire market share on making something built for CPU-style work with graphics as a backup. Simply put, they don't have the platforms to push it: why buy the latest and greatest mobo and stick in a cheap, slow CPU just so the GPU can do the work? The majority of people will always buy a fast CPU for CPU-based work.

My bet is "MIMD" will simply be a marketing way of saying they're doing what ATi already do: shaders that can issue multiple operations per clock instead of one operation per shader. Sounding as CPU-like as possible also builds on their whole GPGPU/CUDA push, which, despite what they proclaim, has limited to no use: very few, expensive apps, when plenty of free apps around can do the same things, and better.

At a guess it simply won't be hitting 3 TFLOPS. The 4870 gets nowhere near its peak theoretical figure, and that peak assumes the whole core working at optimum, i.e. every shader in each cluster issuing an operation every single clock, which in reality it never comes close to sustaining.
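For reference, those peak-theoretical numbers come from a simple product: shader count x FLOPs issued per shader per clock x shader clock. A sketch with published figures (the GT300 line uses the rumoured specs, nothing confirmed):

    # Peak theoretical GFLOPS = shaders x FLOPs/shader/clock x GHz.
    def peak_gflops(shaders, flops_per_clock, clock_ghz):
        return shaders * flops_per_clock * clock_ghz

    print(peak_gflops(800, 2, 0.750))  # HD 4870: MADD = 2 FLOPs -> 1200
    print(peak_gflops(240, 3, 1.476))  # GTX 285: MADD+MUL = 3 -> ~1063
    print(peak_gflops(512, 3, 2.0))    # rumoured GT300 -> ~3072
    # Sustained throughput is far lower: the formula assumes every
    # shader issues its maximum ops on every single clock.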

So anyway, it's highly likely Nvidia will be switching to a GPU that's more efficient in size and power, a la ATi and its clusters-of-shaders approach, which raises the question of what "512 shaders" could mean. 512 actual shaders, each able to issue a second operation, would be very nice, but still a damn huge chip, even at 40nm.

It doesn't really matter; at this stage we all clearly know nothing, so we can speculate all we want. What GT300 was supposed to be when it was conceived three years ago and what Nvidia need it to be now are wildly different, though. They've lost partners, lost exclusive partners by the shedload, lost market share and OEM sales, and lost a lot of money; sales are down due to the economy, and thanks to ATi's prices their profit per sale is utter crap, with partners making next to nothing. They have to make a core that's more competitive on yields and cost per card, or all their exclusive partners will be flat-out forced to sell ATi cards to stay afloat.

The GTX 280, 285 and 295 might be uber fast, and the partners might love selling them for a decent profit, but the bulk of sales are cards that might even be sold at a loss. AFAIK ATi has won back a significant chunk of market share from Nvidia in the past six months and looks set to continue. ATi/AMD also have no financial worries any more, and a basically secured future of even cheaper cards with higher margins once they take over manufacturing themselves. Hell, there's even talk of GlobalFoundries offering to build Nvidia's cards, meaning every card both AMD and Nvidia sell would mean profit for AMD, which would be hilarious.
 
Whatever way you throw it, for raw SP performance nVidia's next generation looks like blowing AMD out of the water.

I think it more likely we'll see 384 and 480 SP versions initially, but with each SP capable of at least 3x the performance of GT200's. Performance in the region of 3 TFLOPS seems quite likely. (I've heard unsubstantiated rumours that the high-end part is as fast for GPGPU work as two GTX 295 cards.)
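For what it's worth, the two-GTX-295 comparison implies a bit more than 3 TFLOPS on paper. A quick check using the published GTX 295 clocks (and Nvidia's usual MADD+MUL counting of 3 FLOPs per SP per clock):

    # Two GTX 295s vs the rumoured ~3 TFLOPS (peak theoretical only).
    gtx295 = 2 * 240 * 3 * 1.242   # 2 GPUs x 240 SPs x 3 FLOPs x 1.242 GHz
    print(f"One GTX 295:  ~{gtx295:.0f} GFLOPS")       # ~1788
    print(f"Two GTX 295s: ~{2 * gtx295:.0f} GFLOPS")   # ~3577
    # So matching 2x GTX 295 would need either >3.5 TFLOPS on paper
    # or better real-world efficiency per theoretical FLOP.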

Whether anything will put this performance potential to actual use is another matter... can you imagine the potential for in-game AI if it ran GPGPU-style rather than on the CPU?
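The reason AI is a tempting GPGPU target is that updating thousands of agents with the same rule is exactly the data-parallel shape GPUs are built for. A toy illustration (plain NumPy standing in for a GPU kernel; the scenario is made up):

    import numpy as np

    # Toy data-parallel AI step: every agent steers toward the player.
    # On a GPU this would be one thread per agent; NumPy's
    # vectorisation plays the same role here.
    n_agents = 10_000
    positions = np.random.rand(n_agents, 2) * 100.0  # 100x100 map
    player = np.array([50.0, 50.0])
    speed = 0.5

    def ai_step(positions):
        direction = player - positions
        dist = np.linalg.norm(direction, axis=1, keepdims=True)
        return positions + speed * direction / np.maximum(dist, 1e-6)

    positions = ai_step(positions)  # one simultaneous update, all agents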
 
Nvidia will try to avoid designing another very expensive GPU. If they focus on releasing a very powerful but very expensive-to-make core again, and AMD manages to get close in performance, they'll be in the same situation they're in now.
 
Nvidia will try to avoid designing another very expensive GPU. If they focus on releasing a very powerful but very expensive-to-make core again, and AMD manages to get close in performance, they'll be in the same situation they're in now.

I think it's a bit late in the day for that now. RV770's design was started in 2005, and this core's design likely started not long after the 8800 GTX was released. If it was going to be big a year ago, it's going to be big now.
 
Seems to be the trend now. Nvidia charge excessively for cards that are at best SLIGHTLY faster than AMD's offerings.

You get far more with the Nvidia cards; the extra performance is just a small part of the picture. With Nvidia you get better driver support, better application support via CUDA, additional GPU uses in games through features such as PhysX, added features such as game-specific profiles, better game compatibility, and consistent performance across all game titles, not just the major releases.

Makes sense why Nvidia charge more, since there really is no competition to their cards and support.
 
According to this article it'll be a 512-shader, MIMD-architecture GPGPU beast with anything up to 3 TFLOPS of theoretical compute horsepower. Though that assumes it will have shaders at 2 GHz, and that the shaders will be clocked above the core at all.

http://www.brightsideofnews.com/new...00-specifications-revealed---its-a-cgpu!.aspx

That's a hard-to-believe configuration. I mean, all they needed to do was take the GTX 285 and alter the core so it processes MIMD-style. From what the information out there shows, MIMD offers a considerable speed boost in workloads where threads diverge; graphics work, which applies similar instructions to similar data all the time, is already well served by SIMD.

Suffice to say, it's good to see Nvidia innovate; nine months later ATi should be ready to imitate.
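For anyone wondering what the SIMD/MIMD fuss is actually about: on SIMD hardware a group of threads runs in lockstep, so when they branch different ways the group has to execute both paths; MIMD units each follow their own path. A toy cost model (purely illustrative, not how any real chip schedules work):

    import random

    # Toy model: cycles to run a branchy workload on SIMD vs MIMD.
    lanes = 16
    work = [random.choice(["cheap", "dear"]) for _ in range(lanes)]
    cost = {"cheap": 1, "dear": 4}

    # SIMD: lockstep lanes pay for every path taken by anyone in the group.
    simd_cycles = sum(cost[path] for path in set(work))

    # MIMD: the group finishes when the slowest independent unit does.
    mimd_cycles = max(cost[path] for path in work)

    print(f"SIMD: {simd_cycles} cycles, MIMD: {mimd_cycles} cycles")
    # With uniform work the two are equal (graphics loses little on
    # SIMD); divergent GPGPU code is where MIMD pulls ahead.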
 
I've got a feeling the low-end and OEM market could be more of a problem for Nvidia than the high end, as they could lose Apple as well once the contracts are up.
 
You get far more with the Nvidia cards; the extra performance is just a small part of the picture. With Nvidia you get better driver support, better application support via CUDA, additional GPU uses in games through features such as PhysX, added features such as game-specific profiles, better game compatibility, and consistent performance across all game titles, not just the major releases.

Makes sense why Nvidia charge more, since there really is no competition to their cards and support.

lol

Well I didn't notice the difference with every other Nvidia card before my 4870X2.
 
You get far more with the Nvidia cards; the extra performance is just a small part of the picture. With Nvidia you get better driver support, better application support via CUDA, additional GPU uses in games through features such as PhysX, added features such as game-specific profiles, better game compatibility, and consistent performance across all game titles, not just the major releases.

Makes sense why Nvidia charge more, since there really is no competition to their cards and support.

Hmmm :/

Driver support is a moot point, as Nvidia drivers aren't perfect either. PhysX and CUDA are a total waste of time.

Nvidia charge more because they want the customer to believe they offer more. Looks like they got you fooled :)
 
You get far more with the Nvidia cards; the extra performance is just a small part of the picture. With Nvidia you get better driver support, better application support via CUDA, additional GPU uses in games through features such as PhysX, added features such as game-specific profiles, better game compatibility, and consistent performance across all game titles, not just the major releases.

Makes sense why Nvidia charge more, since there really is no competition to their cards and support.

Each to their own, dude, and you are of course entitled to your own opinion, but tbh what you have written has left me quite amused. :D
 
Nvidia have their fair share of driver problems from time to time, so it ain't just ATi... like booting up to a blank screen instead of Windows :p

I like both companies personally, but I usually go for the one with the best price-to-performance. Cyber-Mav, do you really think the new cards from Nvidia will be that much better than ATi's to warrant (rough guess) £50-£100 more?

We will have to wait for benchmarks.

Anybody expect Nvidia to drop their prices this time, or stick with the usual £300-plus for their best card?

Also, I wanted to ask: if a mid-range card comes out, similar to what happened with the 8800 GT, does anyone think I can get away with running two of those on a 750W PSU, or will it be a bit risky?
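A rough power budget for that question (every figure below is a ballpark assumption; check the real card's spec when it exists):

    # Ballpark PSU check for two mid-range cards (figures assumed).
    cpu_tdp = 130        # quad-core, worst case
    gpu_tdp = 110        # one 8800 GT-class card is ~105-110W
    rest = 75            # board, drives, fans, RAM
    draw = cpu_tdp + 2 * gpu_tdp + rest
    headroom = 0.8       # don't plan to run a PSU flat out

    print(f"Estimated load: ~{draw}W")
    print(f"Comfortable on 750W? {draw <= 750 * headroom}")
    # ~425W against ~600W usable: fine on a decent 750W unit, given
    # the right PCIe connectors and solid 12V rails.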
 
Cyber-Mav, do you really think the new cards from Nvidia will be that much better than ATi's to warrant (rough guess) £50-£100 more?

Not sure about the new cards coming out, but the existing cards do warrant the additional cost. I've had cards now from both sides, red and green, and I can see why the green costs more. The only reason to go for ATi is if you're after performance on the cheap, but by going ATi you do lose out on some aspects.

Each to their own at the end of the day; go for whichever card works best for you. Personally I'd spend the extra money to get the added features provided by Nvidia from now on.
 