
New Nvidia card: Codename "GF100"

Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
Well, since the Nvidia GPU Technology Conference starts today, we can expect to start seeing some more details about Nvidia's new card, so I thought a new thread would be in order.

To get the ball rolling, Beyond3D has "leaked" some information:

http://vr-zone.com/articles/-rumour-nvidia-gt300-architecture-details-revealed/7763.html?doc=7763


The GPU appears to consist of over 3.2 billion transistors (!), operating on a 384-bit memory interface. It's hard to deduce what performance this will bring, as it's supposedly a completely new architecture, but GDDR5 on a 384-bit interface would certainly give a nice bump in bandwidth over the previous generation: around 50% more than even the 5870.

The GPU's codename is "GF100", which seems a little odd to me, but then when have product codenames ever really made sense?

Looks like this is all the 'hard' detail on the card at the moment, but please add more as and when it becomes available :)
 
that card is already up and running... they're just playing a waiting game, my guess is they're waiting to see how well the 5870 X2 performs.... they'll probably release this new card in NOV/DEC

No reason that nvidia would "play the waiting game" right now - their biggest competitor has a new product on the shelves, and nvidia are losing sales to them.

There will always be a lag-time between having first engineering samples available for analysis and demo-ing, and having volume production of GPUs that can be passed to partners. There is also a further lag time while the partners build and distribute the cards. Anyway, I imagine they're working as fast as they damn well can to get these products on the shelves.


On an unrelated note, guys on the Beyond3D forums have been picking apart that blurry diagram, and seem to think the card will have 512 shader pipes, which would be double the previous generation's count. This isn't a big surprise, given that the transistor count is more than double that of GT200.
 
Where are people getting this £500 number from?

I should point out: nowhere has any pricing been indicated. At this stage the price of the cards is nothing more than guesswork, but given the state of the market right now, a £500 price tag would be very unlikely.
 
It is more important that the g300 series scales down to where it matters: the lower end, where the money is.

That's still true, but less so than with previous generations.

The GPGPU market is growing at an incredible rate, and will always be interested in the highest-end hardware. The markup for HPC-branded cards over equivalent gaming cards is also huge, meaning that profit margins are sky-high. Nvidia would need to sell a hell of a lot of mid-range GPUs to match the income from a single HPC-branded product.
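Just to put rough numbers on that, here's the margin arithmetic. To be clear, all of the figures below are made up purely for illustration; they are not real Nvidia prices or costs:

```python
# Hypothetical illustration of HPC vs mid-range profit margins.
# Every figure here is invented for the sake of the arithmetic,
# not a real price or cost.

def profit(price, cost):
    return price - cost

hpc_profit = profit(price=2500.0, cost=600.0)   # HPC-branded card (hypothetical)
mid_profit = profit(price=150.0, cost=120.0)    # mid-range gaming card (hypothetical)

# How many mid-range cards match the profit from one HPC card?
cards_needed = hpc_profit / mid_profit
print(f"{cards_needed:.0f} mid-range cards per HPC card")
```

Even with generous assumptions for the mid-range card, the multiplier runs into the dozens, which is the point being made above.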
 
3.2 billion transistors? Won't that have a significant impact on heat?

I'd love to see how it'd perform though. :eek:

And won't the 5870X2 have a 512-bit memory interface?

Effectively, yes: the 5870X2 will have two GPUs, each with its own 256-bit memory interface, so the total memory bandwidth will be similar to that of a single card running a 512-bit interface. Of course, by the same logic an "X2" version of this GF100 card would effectively have a 768-bit interface.
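For anyone who wants to check the numbers, theoretical bandwidth is just bus width times effective data rate. The 4.8Gbps GDDR5 rate below is an assumption on my part (it's what the 5870 ships with); GF100's actual memory clock is unknown:

```python
# Theoretical memory bandwidth = bus width (bits) / 8 * effective data rate (Gbps).
# The 4.8 Gbps GDDR5 data rate is assumed (the 5870's shipping speed);
# GF100's real memory clock has not been announced.

def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

for name, width in [("256-bit (5870)", 256),
                    ("384-bit (GF100?)", 384),
                    ("512-bit", 512)]:
    print(f"{name}: {bandwidth_gb_s(width, 4.8):.1f} GB/s")
```

At the same data rate, 384-bit works out to exactly 1.5x the 256-bit figure, which is where the "around 50% over the 5870" estimate in the first post comes from.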

As for heat, well yes - I think it's reasonable to expect that this card will consume a fair amount of power, given the number of transistors it has. I wouldn't be surprised if it came in at around 220W or so, but that's just my own personal speculation. Of course, all of this power must be dissipated as heat, so a good cooling system will be essential.
 
So is this basically their ultra-end card? I know it's all speculation but is it still on NV's behemoth of a chip or is it all dropping down to 40nm?

Yep, this is Nvidia's high end GPU.

It's definitely on a 40nm process, no doubt about that (since it's being manufactured at TSMC). But at 3.2bn transistors it's still going to be a beast of a chip. I would expect it to be similar in size to the original 65nm GTX280, which came in at 576mm^2. We'll have to wait to see the exact size, but somewhere in the region of 550-600mm^2 is about what you would expect given the number of transistors on a 40nm process.
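As a back-of-envelope sanity check on that figure: scaling GT200's density (1.4bn transistors in 576mm^2 at 65nm) by the ideal (40/65)^2 area factor gives just under 500mm^2, and since real designs never scale perfectly, that makes 550-600mm^2 look plausible as an upper range:

```python
# Back-of-envelope die size estimate for GF100, scaling GT200's
# transistor density (1.4bn transistors in 576 mm^2 at 65nm) to 40nm.
# Ideal area scaling goes with the square of the feature size;
# real designs rarely achieve it, so treat this as a lower bound.

gt200_transistors = 1.4e9
gt200_area_mm2 = 576.0
gf100_transistors = 3.2e9

ideal_scale = (40 / 65) ** 2                          # ~0.379
density_40nm = (gt200_transistors / gt200_area_mm2) / ideal_scale
estimate = gf100_transistors / density_40nm
print(f"Ideal-scaling estimate: {estimate:.0f} mm^2")
```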
 
Fudzilla says that the GF100 will be shown today at the nvidia conference, probably at Jen-Hsun Huang's keynote at 1pm.

So that translates to 9pm UK time. Let's wait and see if it happens and, if so, how much they actually reveal.
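The conversion is just US Pacific to UK time. I'm assuming the 1pm slot is Pacific time, since the conference is in San Jose; the exact date used below is only illustrative:

```python
# Convert a 1pm US Pacific keynote slot to UK time.
# The Pacific assumption and the specific date are illustrative.
from datetime import datetime
from zoneinfo import ZoneInfo

keynote = datetime(2009, 9, 30, 13, 0, tzinfo=ZoneInfo("America/Los_Angeles"))
uk_time = keynote.astimezone(ZoneInfo("Europe/London"))
print(uk_time.strftime("%H:%M"))  # 21:00
```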
 
well ive just bought 2 x 5870s, BUT if the new nvidia cards are better by a long way then i have the money in the bank waiting for them.

And you had two GTX295s before that? :eek: Good to be you I guess!

I'm going to wait and see, but if the GF100 is MIMD-capable then I will go for it in preference to the ATI cards (it will be useful for my work...). Now that I have more spare cash, I might be tempted to get two. That might be overkill though, even for a 30" screen.
 
Responding to drunkenmaster above:

I agree they won't be cheap.

Why do you think that yields are crap? We have only one rumour to suggest that, which was explicitly denied by nvidia.

I agree that 2x 5850s will probably offer better raw performance at a cheaper price. But we all know that a single-GPU solution is preferable where possible. Also, for better performance (if needed) we can add a second GF100, but we can't add another two 5850s (short of buying two 5850X2s). In addition, we all know that SLI / CrossFire scaling drops off significantly with more than two GPUs.

I'm not sure about GPGPU focus taking away from the graphics market. Designing a GPU to be ideal for GPGPU applications necessitates better encapsulation and more cache per cluster. These things should also help push more pixels through by improving efficiency (even if total theoretical FPU performance is not increased by much). But as you rightly point out, we will just have to wait and see. It's a completely new architecture, so we can't really draw many conclusions about performance based on specs.

Regarding bandwidth, it should become more relevant as the resolution increases. Also, do you have any links to show the effect of separate memory and core clocking with the 5870? I would be surprised if raising the memory bandwidth really does have such a small effect.

Thanks for the well considered post by the way - always nice to have alternate viewpoints floating around :)
 
Nice find, HazardO.

If it really can do half as many double-precision as single-precision operations, then that's going to be ideal for me.

Support for up to 6GB of GDDR5 really shows how hard they are pushing the HPC market. I doubt we will see a PCI-e card version with 6GB though; to be honest, I think the card would just need to be too damn big. It will probably be confined to rackmount-type versions.
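For a rough idea of what half-rate double precision would mean, here's a peak-throughput sketch. The 512 cores and 2 flops per cycle (FMA) come from the leaked specs; the 1.5GHz shader clock is purely my own assumed figure, not a leaked number:

```python
# Peak throughput sketch for a 1:2 DP:SP ratio.
# 512 cores and FMA (2 flops/cycle) are from the leaked specs;
# the 1.5 GHz shader clock is an assumption for illustration only.

cores = 512
flops_per_cycle = 2          # fused multiply-add counts as 2 flops
shader_clock_ghz = 1.5       # assumed, not announced

sp_gflops = cores * flops_per_cycle * shader_clock_ghz
dp_gflops = sp_gflops / 2    # half-rate double precision
print(f"SP: {sp_gflops:.0f} GFLOPS, DP: {dp_gflops:.0f} GFLOPS")
```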
 
That chart is utter BS. Nothing fits in with the more concrete data that we have. I expect it to be proven incorrect by the end of the day.

I would completely ignore it to be honest (and maybe link to it instead, since it's rather big).
 
I think that must be a mistake or a misprint. How can the GTX275 draw more power than the 285? :confused: No way that the GTX275 draws nearly 220W.

Also, the number of transistors for the new card doesn't match what we have from the current leaks. Apart from all this, they don't state where the information comes from.
 

Do enlighten us then.

The most expensive nvidia GPU was the GTX280, which retailed at $650, at a time when there was no competition. As soon as viable competition appeared (two weeks later), the price was reduced by $150. Now you want us to believe that this card will retail at over $850, against strong competition? Be realistic.
 
Fair enough, water-cooled parts or special-edition parts (like the "mars bar") will command ridiculously high prices. This is a consequence of a niche market. The standard high-end single-GPU part will not be much more than £400 at the very most.

The most likely scenario is a $499 retail price, which would translate to around £375 given the current exchange rate, VAT, and small import costs.
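For what it's worth, here's the arithmetic behind that guess. I'm assuming roughly $1.60 to the pound and the 15% VAT rate currently in force; both move around, so it's only indicative:

```python
# Rough dollar-to-UK-retail conversion behind the ~£375 guess.
# Exchange rate, VAT rate, and import allowance are all assumptions.

usd_price = 499.0
exchange_rate = 1.60          # dollars per pound (assumed)
vat = 0.15                    # UK VAT rate in 2009
import_costs = 15.0           # small shipping/handling allowance (assumed)

gbp_price = usd_price / exchange_rate * (1 + vat) + import_costs
print(f"~£{gbp_price:.0f}")
```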
 
"In practically every case the 5870 card scaled best when the GPU/shaders were OC’ed rather than memory: performance typically improved by 4-5% in most apps when the GPU was running at 930MHz, while OC’ing the memory to 1320MHz only improved performance by 2-3% in the same games."

http://www.firingsquad.com/hardware/ati_radeon_hd_5850_performance_preview/page20.asp

Thanks, doppleganger.

I expected that the core clock would be the most important thing, but it's good to see that there is still some improvement to be had from memory bandwidth increases.
 
That ties in nicely with what we have already seen, shedman.

I doubt it will happen, but it would be nice to see some kind of performance results during this nvidia conference, even if they are horribly biased marketing-type benchmarks!
 

Wow... They really are going all-out at the HPC market. Not even a hint of a mention of graphics / gaming performance anywhere in that white-paper.

The improved scheduling and kernel execution may help efficiency in gaming, but we'll have to see by how much. No mention of total theoretical floating-point performance either.

It will be really interesting to see what the performance in games is; it could tell us a lot about how PC GPUs will perform in the future.
 