
GTX 380 & GTX 360 Pictured + Specs

nVidia is going to have to do something special... even if they beat the 5850, 5870 and 5970, unless it's by a significant margin it's not gonna have much of a launch impact...

EDIT: Also I don't think we have the true story on the shader performance... the numbers that are being bandied around - even nVidia's own documentation - don't seem to match the physical hardware in there.
 

What do you mean, the physical hardware? Die shot analysis or it's not true. :cool:
 
Small fry? This is a £5 million a year company doing serious research in a lab, with proper programmers and everything.

How can they help me?

Well it's pretty simple. For tasks that parallelize nicely and don't use massive amounts of data (which luckily covers a lot of the problems the HPC community battles against), running on CUDA-enabled hardware will significantly outperform a non-GPGPU-accelerated rack.

So in a "real world" situation, if you wanted to dig 1000 holes 5m deep, the quickest way is to get 1000 people to dig a hole each, rather than having one bloke dig them one after another, right? So essentially the hundreds of cores on a CUDA-enabled card are those "diggers".
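To make the analogy concrete, here's a minimal CUDA sketch of the same idea (purely illustrative - dig_hole and NUM_HOLES are made-up names, and it assumes a CUDA-capable card): one thread per hole, all launched at once.

[code]
#include <cuda_runtime.h>

// Each thread "digs" one hole, i.e. handles one independent unit of work -
// exactly the sort of task that maps well onto the hundreds of CUDA cores.
__global__ void dig_hole(float *depth, int num_holes)
{
    int hole = blockIdx.x * blockDim.x + threadIdx.x;
    if (hole < num_holes)
        depth[hole] = 5.0f;   // every "digger" digs its own 5m hole in parallel
}

int main()
{
    const int NUM_HOLES = 1000;
    float *depth;
    cudaMalloc((void **)&depth, NUM_HOLES * sizeof(float));

    int threads = 256;
    int blocks  = (NUM_HOLES + threads - 1) / threads;
    dig_hole<<<blocks, threads>>>(depth, NUM_HOLES);   // 1000 "diggers" at once
    cudaDeviceSynchronize();

    cudaFree(depth);
    return 0;
}
[/code]

One bloke with a shovel would be a plain serial loop over the 1000 holes; the card just hands each hole to a different thread.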
 
Already ATI have sold 300,000 5-series cards. Maybe by the time the GTX3xx cards come out, that will be in excess of a million. That's got to hurt Nvidia, losing out on all those sales.


I bet a lot of people just bought ATI to try out DX11 and will keep them till the nVidia cards come out and, if they're any good, switch to nVidia. I know a few people who are doing that.
 
The 380 is likely to be around £500 IMO, but I'm just guessing atm... I'd have said just over, which doesn't bode well for the 360 pricing... which is likely around £280+.
 
Not sure what your point is, unless you're just trolling.


Fermi is too power hungry, big, hot and expensive. As a system upgrade it's laughable.

Not many companies will be able to use it, as it's based around a video card.

People would need to go back to the days of having towers sitting around everywhere, scrap a huge amount of infrastructure and take on a huge energy bill, when every man and his dog is looking at ways to save the earth.

My point is, Rroff, Fermi could work, but its best chance would be the uber rich or the bedroom user. For everyone in between, Fermi is a huge problem.
 

I don't think you've quite grasped CUDA and the environments it will be used in... it's not going to replace all servers or all computing platform backends... but it is viable for supercomputer usage and quite a lot of specific application usage... put it this way... given the right application, you have, say, a lab of 50 average servers taking up a ton of space and using ~8.5kW... you can replace that with one CUDA array using under 3kW and taking up less space than your average fridge. It might be hot and use a lot of power for what it is... but the gains in performance for certain types of applications mean it can do a huge amount more for that power usage and heat output compared to the alternatives.
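As a rough illustration of what "one CUDA array" looks like from the software side, here's a minimal host-side sketch (illustrative only - process_chunk and N_PER_GPU are made-up names, and it assumes a box with several Tesla boards in it) that fans the same data-parallel job out across every GPU in the machine:

[code]
#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for the real per-element work the lab's application would do.
__global__ void process_chunk(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * 2.0f;
}

int main()
{
    int devices = 0;
    cudaGetDeviceCount(&devices);        // e.g. 4 Tesla boards in one chassis

    const int N_PER_GPU = 1 << 20;
    const int threads   = 256;
    const int blocks    = (N_PER_GPU + threads - 1) / threads;

    for (int d = 0; d < devices; ++d) {
        cudaSetDevice(d);                // each board takes its own slice of the job
        float *in, *out;
        cudaMalloc((void **)&in,  N_PER_GPU * sizeof(float));
        cudaMalloc((void **)&out, N_PER_GPU * sizeof(float));
        process_chunk<<<blocks, threads>>>(in, out, N_PER_GPU);
    }

    for (int d = 0; d < devices; ++d) {  // wait for every board to finish
        cudaSetDevice(d);
        cudaDeviceSynchronize();
    }

    printf("work dispatched across %d GPU(s)\n", devices);
    return 0;
}
[/code]

The point is that, for the right sort of problem, one box full of cards does the work you'd otherwise spread across a rack of servers.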
 

Not bad on the clock speeds front, though, at up to 1.4GHz for the shader domain. It's still quite possible the GeForce part will have all 512 shaders enabled (assuming they're still physically present in the core). I'd be interested in seeing the TDP with 512 shaders at 1.4GHz, given they've cited up to 225W with 448 shader cores.
 
These are the Tesla parts... even with GDDR5 saving some die space - they've probably had to sacrifice some shader space for all that memory.
 
These are the Tesla parts...

Yeah, but the Tesla parts tend to share their shader count, memory bus width and shader clock speeds with their GeForce counterparts.

http://www.nvidia.com/object/product_tesla_c1060_us.html <- for example, shares those listed traits with the GTX 280.

http://www.nvidia.com/docs/IO/43395/C870-BoardSpec_BD-03399-001_v04.pdf <- and again with the G80 based C870 and the 8800 GTX.

Whilst it's not a 100% cert, it's probably a more accurate guide to the GeForce part's specifications than certain rumours floating about the 'net.

even with GDDR5 saving some die space - they've probably had to sacrifice some shader space for all that memory.

What? :confused:
 

I don't know what he's smoking, memory has NOTHING to do with core size or disabling parts of the core. We know as a fact the cores being made are 512SP parts; if they are disabling bits of the core to hit power/yield requirements, that's what they are doing. They would make and sell full 512SP parts IF THEY COULD. They can probably get one or two, but they can't sell them in volume. The GPU market will take a 380GTX ultra mega uber 512SP part that only ever has 10 available at retail. The Tesla market will not; if they can't produce what they release they'll lose all credibility, which means they cannot make 512SP parts in any usable number for even a small-scale Tesla product, let alone a high-sales gaming version.

As I hinted at with one of the other rumours - insanely cut-down Fermis with only half the SPs working (unheard of before from GPU makers) - it was an indication, along with being 6 months late and still having awful yield problems, that the top-end part was always going to be a farce in terms of numbers.


They will either go with a 380GTX with fewer shaders for semi-decent availability, or make the 380GTX a full-shader part, release the few working ones for reviews and make almost none available for sale.


As for the specs, 1.4GHz isn't good: the early specs, in terms of floppage and raw power, were based on estimates of 1.7GHz shader speed and 750MHz core clocks. Considering shader clocks have barely advanced, the core clock, which was only 500MHz before, could be well short of the 750MHz target, which would also hamper it quite badly.
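For reference, a rough back-of-envelope (assuming two single-precision FLOPs per shader per clock from the fused multiply-add) shows how much those clock and shader-count estimates matter:

\[
512 \times 1.7\,\mathrm{GHz} \times 2 \approx 1.74\ \mathrm{TFLOPS}
\qquad \text{vs} \qquad
448 \times 1.4\,\mathrm{GHz} \times 2 \approx 1.25\ \mathrm{TFLOPS}
\]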

It's just crappy news all around: if we were all hoping for 448SP 360GTXs at or around the 5870 price, well, we're smegged. The high-end 448SP part will beat the 5870 by much less than expected, and will need to be priced high to make the money back. A lower-SP 360GTX will still cost a lot and could well not beat a 5870, depending on where the SP count comes in, and it will probably still cost more.


IF Nvidia really do release a 350GTX with only 256 shaders, it will cost a bomb compared to a midrange variant designed with only 256 shaders in the first place. It also means Nvidia's midrange, if it ever comes, will most likely have to be a decent amount lower clocked or have fewer shaders than that 256-shader part, or it won't be able to sell those salvaged Fermis - meaning they might be forced to cut their midrange performance.


Every indication for over 6 months has been that this ridiculously sized core can't be made effectively on a crap process at TSMC. It just can't compete on cost, ever, and I really don't know how long Nvidia can keep selling everything they make at cost (or even at a loss) before they get into real trouble.
 
Not bad on the clock speeds front, though, at up to 1.4GHz for the shader domain. It's still quite possible the GeForce part will have all 512 shaders enabled (assuming they're still physically present in the core). I'd be interested in seeing the TDP with 512 shaders at 1.4GHz, given they've cited up to 225W with 448 shader cores.

Though the 1.4GHz is likely a little lower for Tesla than the gaming variants, it wouldn't be hugely different. But imagine where it would be rated TDP-wise with 512 shaders and at the targeted clocks of 1.7GHz shader / 750MHz core. You'd almost certainly be above 250W, approaching a 5970 in power draw for less performance and quite probably a higher cost.
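As a very rough sanity check, if you assume power scales roughly linearly with active shader count and clock (and ignore any voltage increase, which would only make it worse), the cited 225W becomes something like:

\[
225\,\mathrm{W} \times \frac{512}{448} \times \frac{1.7\,\mathrm{GHz}}{1.4\,\mathrm{GHz}} \approx 312\,\mathrm{W}
\]

which is indeed well past 250W and right in 5970 territory - the 5970's board power is rated at 294W.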

We might have some competition if Nvidia do go the same route as the 260GTX and just sell at cost or at a loss to compete with AMD on price. But my impression is a 512SP part is unlikely, or only coming in stupidly small numbers, and a 448SP part might not be widely available either. If you start approaching sub-400 SPs for a widely available part, whether it can even beat AMD on performance becomes a question, let alone the price.
 
Looking decidedly gloomy from this for nVidia. With that power/heat/fewer shaders, it makes the Oak Ridge cancellation story more likely to have a basis as well.

If ATI start really going for the Havok/OpenCL/developer support stuff now, they could really hurt nVidia badly over the next year.
 
I don't think you've quite grasped CUDA and the environments it will be used in... it's not going to replace all servers or all computing platform backends... but it is viable for supercomputer usage and quite a lot of specific application usage... put it this way... given the right application, you have, say, a lab of 50 average servers taking up a ton of space and using ~8.5kW... you can replace that with one CUDA array using under 3kW and taking up less space than your average fridge. It might be hot and use a lot of power for what it is... but the gains in performance for certain types of applications mean it can do a huge amount more for that power usage and heat output compared to the alternatives.

I don't think you have grasped that C++ is nothing new. They need a huge case, and where are you getting the 8.5kW down to 3kW from? You still need everything a server has, but you're losing about 60% of your space and adding graphics cards.

You need to be able to rack-mount them, and the power requirement needs to be more than halved before they are even worth considering.

You have just pointed out what I said half a dozen posts back. It's a supercomputer for a bedroom, or for the very rich.
 