
Some interesting Kepler leaks

I disagree, I think drunkenmaster's posts are pretty accurate. Obviously you need to deduct 3/4 of anything he says, but once you do that it's pretty much spot on.

Anyway, I heard some news today from a source which shall remain secret, which basically said Kepler (the high-end card only) would ship with 4GB of GDDR5 :eek: If you'd asked me a week ago I'd have bet my house on them shipping with a maximum of 2GB. Nvidia seem to like beefing up the GPU but then starving it of RAM. Guess it might be different this time, but can they really learn from past mistakes? :confused:
 
I was interested in that too, so I did a little looking around, and it seems the current cards can just about saturate an x8 PCIe 2.0 slot, so no, they will be nowhere near saturating a full x16 slot.

So it's unlikely the next-gen cards will come close either.

Anyone else confirm or refute this?

Yeah, the tiny performance difference between 8x/8x and 16x/16x SLI or crossfire setups suggests that the bandwidth of PCI-e 8x is barely being saturated with current-gen cards.

I can't see there being much of a performance bump heading to PCI-e 3.0 with this generation. I'd hazard a guess at maybe a 1-2% maximum improvement over a PCI-e 2.0 16x slot, and even then only in certain circumstances.
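For anyone wanting to sanity-check the saturation argument, here's a quick back-of-envelope calculation of theoretical per-direction PCIe bandwidth from the published transfer rates and encoding schemes (real-world throughput is lower due to protocol overhead):

```python
# Theoretical per-direction PCIe bandwidth per slot (back-of-envelope).
# PCIe 2.0 runs at 5 GT/s with 8b/10b encoding; PCIe 3.0 runs at
# 8 GT/s with the much more efficient 128b/130b encoding.

def pcie_bandwidth_gbs(lanes, gen):
    specs = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}  # (GT/s, encoding efficiency)
    rate, efficiency = specs[gen]
    return lanes * rate * efficiency / 8  # bits -> bytes, result in GB/s

print(pcie_bandwidth_gbs(8, 2))   # x8  PCIe 2.0 -> 4.0 GB/s
print(pcie_bandwidth_gbs(16, 2))  # x16 PCIe 2.0 -> 8.0 GB/s
print(pcie_bandwidth_gbs(16, 3))  # x16 PCIe 3.0 -> ~15.75 GB/s
```

So a PCI-e 3.0 x16 slot has roughly double the bandwidth of a 2.0 x16 slot, which is why a card that barely fills an x8 2.0 link has so little to gain.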

For graphics, it doesn't look like it, though the double GPU cards may actually see benefits. The NVIDIA high end may also see benefit if they really do continue to make monolithic GPUs like the 580 with their 28nm process.

The additional bandwidth will be useful for compute purposes, however.

Also, I remember reading some time ago (before PCI-SIG completed the specification) that PCI-E 3 will allow higher power draw from the socket, so that might also help with the super-high end and dual GPU cards.

Thank you guys, it's pretty much what I've read too!

Though I didn't know that the PCI-E 3 socket would allow more power draw?
 
Thank you guys, it's pretty much what I've read too!

Though I didn't know that the PCI-E 3 socket would allow more power draw?

Disclaimer: I'm not sure if that's still the case. I can't remember my account password to access the PCI-SIG specification whitepapers.
 
Anyway, I heard some news today from a source which shall remain secret, which basically said Kepler (the high-end card only) would ship with 4GB of GDDR5 :eek: If you'd asked me a week ago I'd have bet my house on them shipping with a maximum of 2GB. Nvidia seem to like beefing up the GPU but then starving it of RAM. Guess it might be different this time, but can they really learn from past mistakes? :confused:

That sounds like the GK112 part that's rumoured to appear towards the end of 2011. This would be the "full" Kepler part, having 1024 CUDA cores and a 512-bit memory interface. On that card, having (at least an option for) 4GB of memory would make sense.

The GK104 part that's rumoured to appear within the next three months will more likely be based on a 384-bit memory interface, which would imply 1.5GB or 3GB cards (just like AMD's 7-series). A dual-GPU GK104 card is also scheduled ("GTX790"), but I can't see a dual GK112 part appearing - if for no other reason than power draw.
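The link between bus width and capacity falls out of the chip arithmetic. A rough sketch, assuming one 32-bit GDDR5 chip per channel and no clamshell mode (which would double these figures); the chip densities used are just illustrative:

```python
# Why memory bus width constrains card capacity: each GDDR5 chip has a
# 32-bit interface, so bus width fixes the chip count, and chip count
# times per-chip density gives the total capacity.

def capacity_gb(bus_width_bits, chip_gbit):
    chips = bus_width_bits // 32   # one 32-bit chip per channel
    return chips * chip_gbit / 8   # Gbit -> GB

print(capacity_gb(512, 2))  # 512-bit bus, 2Gbit chips -> 4.0 GB
print(capacity_gb(384, 1))  # 384-bit bus, 1Gbit chips -> 1.5 GB
print(capacity_gb(384, 2))  # 384-bit bus, 2Gbit chips -> 3.0 GB
```

So a 512-bit interface naturally lands on 2GB or 4GB, while 384-bit gives the 1.5GB/3GB options mentioned above.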
 
That sounds like the GK112 part that's rumoured to appear towards the end of 2011. This would be the "full" Kepler part, having 1024 CUDA cores and a 512-bit memory interface. On that card, having (at least an option for) 4GB of memory would make sense.

The GK104 part that's rumoured to appear within the next three months will more likely be based on a 384-bit memory interface, which would imply 1.5GB or 3GB cards (just like AMD's 7-series). A dual-GPU GK104 card is also scheduled ("GTX790"), but I can't see a dual GK112 part appearing - if for no other reason than power draw.

Yeah, a dual GK112 does seem unlikely. It's probably meant as a really high-end single-GPU part (so that there are no SLI issues) to run on the heels of the dual GK104.
 
I was interested in that too, so I did a little looking around, and it seems the current cards can just about saturate an x8 PCIe 2.0 slot, so no, they will be nowhere near saturating a full x16 slot.

So it's unlikely the next-gen cards will come close either.

Anyone else confirm or refute this?

Actually, the 7970 is faster with some compute functions when running in a PCI-E 3.0 slot.
 
Do you have a link to any benchmarks showing that?

The only one I've seen so far that tests PCI-e 2.0 vs 3.0 with compute applications is this. You can see that, in these tests, the difference is less than 0.2% - well within the margin for error.

If there are other applications where the difference is more significant, then I'd be interested to see them :)
 
Thanks - I missed that :)

Interesting stuff. When the cards start showing up in the wild I'm sure we'll get a better picture of where the PCI-e 3.0 bandwidth can be useful.



edit: Now that I think about it, I suppose there are any number of (non-gaming) applications where increased PCI-e bandwidth should be useful... Anything involving large-scale linear algebra (like most scientific simulations) will probably need to pass assloads of data between the CPU and the GPU, and a faster pipeline between the two can only improve things.
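To put some rough numbers on that, here's a transfer-time estimate for shipping one large matrix of doubles to the GPU, using the theoretical peak x16 rates from earlier in the thread (real achievable rates are lower, and the matrix size is just an example):

```python
# Rough time to move one big dense matrix of doubles (8 bytes each)
# across the PCIe link, at theoretical peak x16 rates.
MATRIX_BYTES = 8 * 16384 * 16384  # 16384 x 16384 doubles, ~2 GB

for name, gbs in [("PCIe 2.0 x16", 8.0), ("PCIe 3.0 x16", 15.75)]:
    seconds = MATRIX_BYTES / (gbs * 1e9)
    print(f"{name}: {seconds:.3f} s per transfer")
```

For a simulation that has to stream data like that every iteration, shaving the transfer time nearly in half adds up quickly, even though a single game frame never moves anywhere near that much.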
 
Thanks - I missed that :)

Interesting stuff. When the cards start showing up in the wild I'm sure we'll get a better picture of where the PCI-e 3.0 bandwidth can be useful.



edit: Now that I think about it, I suppose there are any number of (non-gaming) applications where increased PCI-e bandwidth should be useful... Anything involving large-scale linear algebra (like most scientific simulations) will probably need to pass assloads of data between the CPU and the GPU, and a faster pipeline between the two can only improve things.

Pretty much. It seems doubtful that PCI-E 3.0 will be saturated by games any time soon, maybe with the possible exception of new games coming out with some heavy PhysX programming.
 