• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Nvidia news direct from Jen-Hsun Huang!

Sorry if I'm going to sound inexperienced, but does 16x DP mean a great improvement for games? Or is this more of a Folding@home or 3ds Max improvement?

I wonder what kind of GPU the next PlayStation or Xbox will use. I also wonder if they will use multiple GPUs in one package, like what we see in high-end PowerVR-equipped smartphones.

I think it makes more sense to go the way the CPU industry went: put multiple lower-clocked cores onto one chip to increase overall performance. I mean, the GPU industry must hit a clock-speed limit sometime in the future, like the CPU industry did?

PS4 with 8 GPUs? :)
 
I see nothing but words of wisdom and unbiased knowledge coming direct from DM

yet again

Sorry, but before you say something like that (in fact, while you say something like that), please point out what I said that was biased or incorrect. Go ahead.

It would require much more than a 'tweak' to make DP as fast as SP. I would guess that it is impossible.
A large part of why Fermi is poor when it comes to performance per watt IS the process. The 40nm process has a lot of leakage, which ATi anticipated, and because of that they did a much better job executing their design than Nvidia did. That doesn't mean that Nvidia didn't mess up, they did, but it is possible that they would see a huge jump in performance per watt JUST by moving to 28nm, way above what ATi will get, as ATi protected against the leakage by using a slightly larger die than they would have liked.

Why, oh why, can't people take an EXAMPLE for what an EXAMPLE is? I said FOR INSTANCE. In reality (I really do forget) I think the 280gtx's DP throughput was around 1/8th of its SP throughput; they took this up to 1/4 or so, with a doubling of shaders on top, giving a heck of an increase.

Nvidia would actually only need a "fairly" small tweak to improve DP throughput, as each shader is essentially individual. AMD's current DP throughput is horrible due to the 4+1 architecture; it basically gets 1/5th of the SP throughput. Nvidia's pretty simple, separate shaders could quite easily be tweaked to increase DP throughput relative to SP dramatically without a huge amount of extra work. To do so within the same die-size limits would probably require dropping the overall shader count: more DP shaders and fewer SP shaders overall. It would certainly require a specific and different version (not just different BIOSes) for the Tesla and consumer GPUs, because if you, say, cut half the shaders to put in more DP shaders, it would have horrible gaming performance.

Nvidia have an architecture that sucks for core size but is easy to increase DP power on. As for being at the point where they can afford to push through wafers of GPGPU-only versions, though: only LAST YEAR, Teslas were only available in servers from a SINGLE OEM. That's a heck of a lot of R&D and production cost for very small output. The newer cards are far more widely available in terms of companies that supply them; I haven't seen figures for the actual quantity of cards available compared to the older Tesla cards, but I wouldn't be surprised if it was lower.

As for the power, you're wrong, I'm afraid to say. The 480gtx is 60% bigger and uses 60% more power than the 5870; it's really no less power-efficient per transistor or per mm² than AMD (marginally worse, but not hugely). The issue is the size, and the size is because of the architecture. They had this issue at 65nm and 55nm, with delays on every part for the past two years on three separate processes that AMD has had no power issues on. It's NOT the process.

The process has bad yields and more leakage than you'd like, and it has affected AMD exactly as much as Nvidia. If Nvidia made a core 40% smaller, it would have 40% lower power usage, as evidenced by the 460gtx: roughly 35% smaller, roughly 35% less power usage.

If you really want to get into the specifics: Nvidia added pretty much nothing to the die size to accommodate the poor-yielding process (at over 500mm², it wouldn't have made a difference), while AMD added some 10-15% die size JUST to accommodate the yields/process, area that doesn't add any performance; 15%, and they are still that much smaller. Also, leakage isn't fixed by any of that added 10-15% die size; it INCREASES leakage. Leakage is bad on the process, and it affects EVERY transistor to basically the same degree. Architecture is the only reason the core is over 500mm². Doubling the transistor count with a process shrink will maintain a similarly large core, and that's the fundamental problem, not the process.

Overclock the crap out of a 460gtx and it will perform the same as a 470gtx, and use more power than it.
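The size-to-power claim in that comparison is just linear scaling. A short Python sketch makes it explicit (both inputs are the post's rough figures, not measured values):

```python
# Linear power-vs-die-area reasoning from the post above.
# Both numbers are the post's rough figures, not measurements.
gtx480_power_w = 250       # quoted board power of the 480gtx
size_reduction = 0.35      # 460gtx die is "roughly 35% smaller"

# If power tracks die area on the same architecture and process,
# a 35% smaller die should draw roughly 35% less power.
predicted_460_power_w = gtx480_power_w * (1 - size_reduction)
print(predicted_460_power_w)  # 162.5 -- in line with "roughly 35% less power"
```

Whether power really scales that linearly with area is exactly what the thread is arguing about; the sketch only shows the post's numbers are internally consistent.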

EDIT: the 280gtx had DP throughput equal to 1/8th of SP; the 480gtx has 1/2 DP throughput, so that's 4x the ratio as standard. It also has double the SP throughput (or was aimed to), which makes 8x the total DP throughput. Considering the 285gtx was 204W and the 480gtx, with 8 times the performance, is at 250W, that's a little under 7x the DP performance per watt.

Hence 4x the performance is a fairly small increase, all told, from designs two years apart.
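The arithmetic in that edit can be checked in a few lines of Python (all inputs are the figures quoted in the post, treated as approximations):

```python
# DP performance-per-watt comparison using the post's figures
gtx280_dp_ratio = 1 / 8    # DP throughput as a fraction of SP on the 280gtx
gtx480_dp_ratio = 1 / 2    # DP:SP ratio on the 480gtx
sp_scaling      = 2        # rough doubling of SP throughput between designs
gtx285_power_w  = 204
gtx480_power_w  = 250

dp_speedup = (gtx480_dp_ratio / gtx280_dp_ratio) * sp_scaling
perf_per_watt_gain = dp_speedup * gtx285_power_w / gtx480_power_w
print(dp_speedup)          # 8.0 -- 4x the DP:SP ratio times 2x the SP rate
print(perf_per_watt_gain)  # 6.528 -- "a little under 7x", as the post says
```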
 


DP = not for gaming. I'm not entirely sure if PhysX uses it, but it would be a waste. Essentially DP gives you FAR more accuracy, in very expensive, complex software for calculating weather, stock market trends and the like; billion-dollar industries base decisions on the results, so calculation accuracy is worth its weight in whatever's more expensive than gold, all together, with gold.

PhysX can give you an answer to 1 decimal point, or 50; you wouldn't see the difference in the end result. The bullet took 0.002 or 0.002586769 seconds to get to the target; with games, "close enough" is WAY more than good enough.

At some stage it's very likely AMD/Nvidia will bring out GPGPU-only versions with fewer overall shaders but higher DP throughput. They'd be horrible compared to "gaming-only" versions, which could use many more "simple" shaders that do gaming-type calculations at the same speed as a "complex DP shader".
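To put a rough number on that accuracy gap, here's a toy Python sketch. It emulates single precision by round-tripping values through a 4-byte float, which is nothing like how a GPU actually does SP maths; it's just an illustration of how much faster SP drifts than DP:

```python
import struct

def to_sp(x):
    # Round-trip through a 4-byte IEEE float to emulate single precision
    return struct.unpack('f', struct.pack('f', x))[0]

# Add 0.1 a hundred thousand times in double vs emulated single precision.
dp_total = 0.0
sp_total = 0.0
for _ in range(100_000):
    dp_total += 0.1
    sp_total = to_sp(sp_total + to_sp(0.1))

print(dp_total)  # agrees with 10000 to far more digits than a game needs
print(sp_total)  # drifts noticeably from 10000 -- this is the gap DP closes
```

For a physics step in a game, either total is "close enough"; for pricing a billion-dollar portfolio or a long-running weather simulation, the SP drift is the difference that matters.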
 
I think it was a compliment mate.

Clearly he was being sarcastic, because rjkoneill, like myself, has the sense to see what a load of utter tripe drunkenmaster is posting. Over and over. The same crap. I can rarely bring myself to finish reading any of his posts, as I am just too embarrassed to read the mass of unfounded assumptions he makes, and the strange conclusions he draws from his massively over-simplified and biased viewpoint.

I don't have the time to dissect and respond to the series of drivelling pseudo-essays that will inevitably follow this post (I actually have a job to do...), but allow me to make some basic observations, drunkenmaster:

1. You do not understand the technical design process that goes into the creation of a complex piece of silicon. In particular, you do not understand the aspect of scalability.

2. You are an armchair critic. You sit and pontificate about the failings of nvidia and Fermi, as if it were something that could have been predicted by you (the great and all-seeing eye) right from the start. This ties in with point 1. I guess simple minds will always seek to over-simplify complex issues.

3. You have an inherent bias against nvidia. I am no nvidia fan, but it is sometimes painful to read the crap you write. Yes, I KNOW that you don't realise this, but it is there, plain as day, for anyone with an objective eye. You remind me of Charlie Demerjian, with the exception that he knows full well that he is biased (and why).


Please, for the sake of everyone, STOP posting such long and drivelling tripe. Try to make concise points that people can actually respond to without wading through an essay. STOP assuming you understand things which you have no comprehension of - it's laughable. And STOP making unfounded assumptions based on your own misguided perception. Case in point (one of many hundreds):

well it is but its a disaster due to design, not the process

You have absolutely no basis for this statement. You don't know that the same design on a smaller process would not allow much better scaling of clockspeed / power efficiency. You don't know that a more efficient silicon manufacturing process (i.e. not the poor 40nm process at TSMC) would not allow a more concise alignment of transistors, leading to a more efficient architecture. You don't have any idea, on the design-level, what is the reason why the fermi architecture requires so much power. No-one does, outside the design team at nvidia. BUT none of this stops you from droning on about it, making the same points over hundreds of posts, each one further reinforcing the odd fantasy you have concocted in your mind about the current state of the GPU market.


Remember folks, the 5800 leafblower was the basis for the technology which evolved into the 6800, and played a large part in the design of G80. In the same way, the "failed" ATI 2900 was the basis for the technology that evolved into the 5870, which was undisputed king-of-the-hill for six months. Try to have a little foresight, and don't assume that the thousands of nvidia (or AMD) engineers, who have infinitely more knowledge than you and design with their eye on the next two generations of GPUs, operate in such simplistic terms.
 
DM does write long essays. He makes some interesting points, but it seems to be the same old thing time after time in every new thread, almost a cut-and-paste job from previous postings.
 

He just repeats pretty much everything in every post, which means he doesn't make any valid points whatsoever and most likely types essays to try and come across as someone who knows their stuff.
 
Is Dear Leader Jen-Hsun Huang actually going to demonstrate anything new at GTC, or will it all just be a lot of marketing hype and PowerPoint slides? Both Intel and AMD have given us demonstrations of their upcoming technology, so it would be nice to see some of these put into action.
 

If the points he makes were valid to start with, then they will continue to be valid every time he makes them, until the circumstances change or you or someone else proves him wrong.

By the way Duff-Man, if you're going to launch an attack on DM like that, the least you could do is try to prove him wrong. I'm not going to pretend I understand the details of making a GPU, but I do appreciate DM's efforts, so at least make an argument of it. Why is he so horribly wrong in your eyes?
 

There's a saying at another forum I go to, one which encourages lively debate: "attack the argument, not the person". It might be time this was stressed with more vigour around here.
 
What drunkenmaster does though, is attack the design of Nvidia's GPUs, as if he is an expert in electronic design.

Like Duff-Man said, we have no idea if Fermi would have performed better on a smaller process, so we really can't slam it for being a bad design.

Hopefully I will understand it one day, but right now I can't comment. Unless drunkenmaster would like to tell us that he is indeed an electronic engineer with experience in designing GPUs, I don't think he is entitled to condemn a design the way he has with Fermi.
 
I'm with the Duff-Man on this one. Well said.

Thank you...

What drunkenmaster does though, is attack the design of Nvidia's GPUs, as if he is an expert in electronic design.

A nice concise version of the point I was trying to make...


By the way Duff-Man, if you're going to launch an attack on DM like that, the least you could do is try and prove him wrong. I'm not going to pretend I understand the details of making a GPU, but I do appreciate DM's efforts, so the least you can do is make an argument of it. Why is he so horribly wrong in your eyes?

I chose one specific example (the quote from post #16). I don't have the patience to trawl through hundreds of essays to extract dozens more examples of the exact same thing in all its varieties. You've all read enough of his stuff to know exactly what I'm talking about, whether you agree with me or not.


Perhaps I went a little over the top with my post... If so, I apologise. Let's call it the proverbial straw that broke the camel's back. This "armchair expert" Fermi-bashing has been going on for far too long now. It's all just getting a little much.
 

That's cool, but just out of curiosity: why doesn't Fermi suck in terms of performance relative to size/cost/heat/power, etc.?

Is it purely because DM doesn't offer tangible evidence, in your opinion, or do you have evidence to the contrary? I'm not trying to start a row here; I'm actually interested.
 