Nvidia demonstration on now.

[Image: cudagpu.jpg]
 
Can see now why Intel and the other supercomputer giants aren't too happy with Nvidia's foray into industries where CPUs would previously have been. :p
 


The Kraken, and the 8-Fermi blade server that matches its computational power.

Yeah, but what does it match its power in? Does it totally match it in every single way, or are there just one or two scenarios where the Fermis can match it? I'm guessing the latter.
 
Can't really remember; it was something to do with molecular engineering and FLOPS.

Basically, the Kraken took some amount of time (think it was 1 day) to run a 42-nanosecond demonstration of molecular interaction.

The 8 Fermis ran a 54-nanosecond demonstration (so longer) in the same amount of time.

It's something useful for bio scientists anyway.

edit - the Kraken was in 4th spot on the June 2010 supercomputer list, http://www.top500.org/
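For what it's worth, here's the back-of-the-envelope arithmetic on those figures (a rough sketch assuming both runs took the same one-day wall-clock time, as described above):

```python
# Throughput comparison using the demo figures quoted above.
# Assumption: both machines ran for the same wall-clock time (~1 day);
# the Kraken simulated 42 ns, the 8-Fermi blade simulated 54 ns.

WALL_CLOCK_DAYS = 1.0        # assumed from the post
kraken_ns = 42.0             # nanoseconds of simulated time (Kraken)
fermi_ns = 54.0              # nanoseconds of simulated time (8x Fermi)

kraken_rate = kraken_ns / WALL_CLOCK_DAYS   # ns simulated per day
fermi_rate = fermi_ns / WALL_CLOCK_DAYS

print(f"Kraken: {kraken_rate:.0f} ns/day, Fermi box: {fermi_rate:.0f} ns/day")
print(f"Fermi box advantage: {fermi_rate / kraken_rate:.2f}x")  # ~1.29x
```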
 

Will it do a steady minimum 60 fps in Crysis? ;)
 
Yup, that's the problem: a computer designed for serial processing won't do parallel stuff uber fast.

I can be certain, without a shadow of a doubt, that incredibly serial programs (in fact, a whole lot of stuff) will run thousands of times faster on the Kraken than on the Fermi box. And that's the point: you don't buy a Kraken to run things a Fermi box runs faster, and you don't buy a box full of Fermis, or 5870s, to run things a Kraken runs faster.
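The serial-versus-parallel point is basically Amdahl's law: the serial fraction of a program caps the speedup, no matter how many GPU cores you add. A minimal sketch (my own illustration with made-up workload fractions, not anything from the demo):

```python
# Amdahl's law: overall speedup = 1 / (s + (1 - s) / N), where s is the
# serial fraction of the work and N is the number of parallel processors.

def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Upper bound on speedup for a given serial fraction and core count."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

# A 95%-parallel workload barely reaches 20x on 512 cores, while a
# 99.9%-parallel one (e.g. a molecular dynamics kernel) scales far better.
for s in (0.05, 0.001):
    print(f"serial fraction {s:.3f}: {amdahl_speedup(s, 512):.1f}x on 512 cores")
```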
 
Kraken v Fermi = marketing stunt. Simples.

Little thing v big thing, anyone? Looks good. If it really is that good... why not do it that way in the first place?

I hope their refresh slams AMD back into place. Can't see it happening though.

Looking at their stock price, a lot of people bought into that idea today. If they deliver, good news for the company; if they don't, it is going to be pretty dire.
 

BS detector going into the red.
ALERT ALERT! Marketing lies! ALERT ALERT!

If that graph were true, then they're going to make something approximately 9-14 times faster than a Fermi that uses the same power.

Perhaps I'm being pessimistic, but I just can't see games in three years being literally 10 times more complex in their graphics sophistication.
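For scale, you can sanity-check what that claim implies per year (the 9-14x range is from the graph claim above; the three-year horizon is from this post):

```python
# Sanity check on the roadmap claim above: what per-year improvement does
# a 9-14x jump over ~3 years imply? (Both figures are from the posts.)

YEARS = 3.0

for total_factor in (9.0, 14.0):
    per_year = total_factor ** (1.0 / YEARS)   # compounded annual factor
    print(f"{total_factor:.0f}x over {YEARS:.0f} years = "
          f"{per_year:.2f}x per year, compounded")
```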
 

This new generation of GPUs isn't JUST about gaming now; the GPU has taken a huge jump into areas of usage it was never considered for before.

Science is one area, number crunching is another.

It was always thought that the GPU would take over complex tasks from CPUs anyway, and now we're almost there.
 

Who said anything about it being games that would be that much faster? It's a CUDA roadmap. DM made some good posts in the other thread about this:

http://forums.overclockers.co.uk/showpost.php?p=17423362&postcount=16

http://forums.overclockers.co.uk/showpost.php?p=17423974&postcount=22
 

With the current throughput for DP ops in both AMD and Nvidia cards, you could VERY easily increase the DP per watt while massively decreasing gaming performance.

Or you can increase DP per watt by making a carbon copy of a 5870 or a GTX 480 on 28nm, because the same core at the same speeds would use much less power. How much would be saved is a complete guess and hard to say, but 28nm is two node jumps, so potentially a 5870 in an 80-90W power bracket and a GTX 480 at 150W or so. Considering you'd have identical performance at half the power, performance per watt would double, with literally no actual increase in DP or SP performance.
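Putting rough numbers on that (the 40nm TDPs are approximate public figures; the 28nm wattages are the post's guesses, with performance held constant by assumption):

```python
# Die-shrink scenario from the paragraph above: identical performance at
# lower power, so performance-per-watt rises with zero performance gain.
# 40nm TDPs are approximate; 28nm figures are the post's guesses.

cards = {
    # name: (power_watts_40nm, guessed_power_watts_28nm)
    "HD 5870": (188.0, 85.0),
    "GTX 480": (250.0, 150.0),
}

for name, (w40, w28) in cards.items():
    # Performance is identical by assumption, so perf/W scales as w40/w28.
    print(f"{name}: perf/W improves {w40 / w28:.2f}x at the same performance")
```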

Both companies for now are bridging the gap with two products. At some stage the cost of all the "gaming SP crap" simply won't be worth it to get the DP performance, and they'll make GPGPU-only cards. Say, for instance, that Fermi and the 5870 architecture remain almost identical for the 28nm cards and both double their shader counts: you'd also double DP performance, but in the case of a doubled 5870 you'd be adding 1600 shaders of which only 320 go towards increased DP performance. I won't be remotely surprised if the 4-way shader architecture allows AMD to go up significantly from their 1/5 DP throughput, at worst to 1/4, quite possibly 1/2, which would give them a theoretical DP throughput Nvidia couldn't hope to match. The question, of course, is whether the far more complex shader structure is more easily utilised in GPGPU work than in gaming; potentially some things would be, some wouldn't.
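As rough arithmetic, here's what those DP ratios mean for a 1600-shader, 5870-class part (the shader count and the 1/5 rate are from the post; 1/4 and 1/2 are the speculated improvements):

```python
# DP throughput fractions discussed above: at a 1/5 DP rate, a 5870's
# 1600 SP shaders deliver the equivalent of 320 shaders' worth of DP.

SHADERS = 1600  # 5870-class shader count, from the post

for num, den in ((1, 5), (1, 4), (1, 2)):
    dp_equiv = SHADERS * num // den
    print(f"{num}/{den} DP rate: equivalent of {dp_equiv} of {SHADERS} "
          f"shaders contributing to DP throughput")
```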

That's the single failing of the small-core/efficient architecture: extracting the full theoretical performance is insanely difficult.

If AMD could max out all the 4+1 shaders on every clock in gaming, Nvidia would have a card only half as fast as AMD's right now. However, it's essentially impossible, in games at least, to max out those 4+1 shaders: very rarely you will, but just as often you'll only be able to use one of the five shaders, and most of the time it will be able to use two or three. Nvidia's architecture is simple, but the byproduct of that is it's easily maxed out.
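That utilisation argument boils down to an expected-value calculation. A sketch with a made-up lane-usage distribution that echoes the "one, two or three of five" pattern above (illustrative probabilities, not measured data):

```python
# VLIW-5 utilisation: peak throughput assumes all 5 lanes issue each clock,
# but games typically keep only a few busy. Effective throughput scales
# with the average number of lanes actually used.

VLIW_WIDTH = 5
# Made-up distribution echoing the post: rarely all five lanes, often one,
# most of the time two or three.
lane_usage_prob = {1: 0.25, 2: 0.30, 3: 0.30, 4: 0.10, 5: 0.05}

avg_lanes = sum(lanes * p for lanes, p in lane_usage_prob.items())
print(f"Average lanes used: {avg_lanes:.2f} of {VLIW_WIDTH}")
print(f"Effective throughput: {avg_lanes / VLIW_WIDTH:.0%} of peak")  # ~48%
```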
 
Wait up, wait up... there's actually going to be an Nvidia card that runs faster and cooler than current models! And that could possibly beat an ATI.
 