
Nvidia: Next-Generation Maxwell Architecture Will Break New Ground

If you have nothing useful to say, stay out of the thread. Both Nvidia and AMD are under NDAs and nothing will be confirmed until those are lifted (usually just before launch). There are plenty of AMD threads for you to wallow in.



I think you are comparing Nvidia to AMD in this department and it doesn't work like that, Matt. DirectCompute is better on the 7970/7950/580 than on the 680, but it comes down to OpenGL and OpenCL if I am not mistaken. There are a couple of games that use compute, and BF3/Civ 5 are two of them.

OpenGL isn't related to compute. It's a graphics API that has been around for a long time.

OpenCL is an API too, but it isn't the same thing as DirectCompute. DirectCompute is part of the DX11 spec.

Traditionally AMD cards have a lot more raw compute power, so they excel in compute tasks compared to nVidia, which makes it even more of a shame that OpenCL isn't yet being used as widely as CUDA for high-performance computing.

It'll happen at some point of course, as the advantage is that OpenCL runs regardless of brand, but CUDA has a good few development years on OpenCL.
 
Anyway, based on the speech below by nVidia's CEO at their GPU Technology Conference (GTC) 2013, I am even more excited by the architecture after Maxwell, called Volta:


P.S: How do you embed the video in the post? (Thanks Sliver for answering this)
 
Considering how small the 670s are PCB-wise, do you reckon Maxwell will see even smaller PCBs, Greg, and maybe even high-end graphics cards requiring only a single 8-pin connector like the current 7850s and 660s?

As things shrink, it is quite possible. Take Titan for example: a massive die size and still very efficient, needing only 8-pin + 6-pin power connectors.
 
I think my upgrade plan for this year, then, will be to get a nice tablet for WORK ;) at my birthday and then upgrade my 6850 to whatever the equivalent of the 7850 is when that comes out.
 
Love or hate AMD/Nvidia, we need both to be competing. If AMD or Nvidia were to fold, that wouldn't bode well for us PC enthusiasts, and if you think Titan is a ridiculous price, imagine what you would be paying with only one supplier...

^^ Indeed.
 
Love or hate AMD/Nvidia, we need both to be competing. If AMD or Nvidia were to fold, that wouldn't bode well for us PC enthusiasts, and if you think Titan is a ridiculous price, imagine what you would be paying with only one supplier...

It doesn't really work like that. The higher the price, the less they sell, basically, and it's not a linear curve. Sure, it seems like a lot of people are buying Titans, but you're looking at the enthusiast market for proof of general discrete GPU sales, which is like looking at supercars for the sales of general city cars.

The prices for graphics cards won't skyrocket, because people just won't buy them at silly prices.
 
It doesn't have any compute abilities at all though, does it? Not one unit, as far as I understand it. I've heard (and I don't know if this is true) that the only way around it for Nvidia is to disable the compute effects via the drivers. I heard (again, I don't know if this is true) that is where the massive gains came from in Tomb Raider (45% or whatever the figure was).

The 680 has compute abilities, just really limited ones. Remember Nvidia put two GK104s together and called it the K10 for the Tesla market, because they had nothing else available.
 
What do you think Gregster? You thought it was interesting enough to post :)

Jen's phrase "Crush Kepler" is very interesting, and hopefully Maxwell will live up to that statement. I am an enthusiast and want to see competition between Nvidia and AMD; the consumers are the winners (price dependent, of course).
 
Jen's phrase "Crush Kepler" is very interesting, and hopefully Maxwell will live up to that statement. I am an enthusiast and want to see competition between Nvidia and AMD; the consumers are the winners (price dependent, of course).

I agree about the need for competition; look at how the CPU market is stagnating a bit.

I don't really understand the gfx card market too well; what's the big deal about having an ARM processor on a graphics card?

Also, is the HSA concept similar to the PS4 (and mobile SoCs) having GDDR5 memory shared between the CPU and the GPU?
 
If anyone feels the need to bait/troll/take thread off topic any further then expect to be suspended. It's getting extremely tedious having to watch threads get ruined by children.
 
Well, having opted for AMD over Nvidia this gen, I hope Maxwell breaks new ground in price vs performance; I doubt it will, though.
 
I agree about the need for competition; look at how the CPU market is stagnating a bit.

I don't really understand the gfx card market too well; what's the big deal about having an ARM processor on a graphics card?

Also, is the HSA concept similar to the PS4 (and mobile SoCs) having GDDR5 memory shared between the CPU and the GPU?

The computing market is stagnating full stop, really. It's because hardware has been advancing faster than software, so hardware has become much more powerful than the average person needs for their typical computing requirements.

When you look at games, this has also been happening to an extent; the only time you really see the need for high-end hardware is with a few titles.

The average, I would say, is probably people playing games at 1080p (the average high-end PC gamer) with something like a 7870/660 Ti or below, because most people just don't need the power that a 7970/680 (or multiple GPUs of each) provides unless they are using a resolution above 1080p.

As for having an ARM CPU on board a graphics card, I'd say it makes sense really, as it will help with general-purpose computing, and I think it's going to be interesting to see what they apply it to.

The HSA concept is similar to the PS4 situation too, and the reason why nVidia will seemingly struggle is that they don't have the ability (for licensing reasons) to produce CPUs that run the x86-64 instruction set, which means anything of that kind that they produce would be of limited use for most computers without them having an x86 licence.

It might not be so much of an issue in the future, as I think they could produce purely 64-bit chips through ARM, but a lot of software still uses x86 instruction sets, so a purely x64 chip would be of limited use (x86 = 32-bit, x64 = 64-bit).
 
It doesn't have any compute abilities at all though, does it? Not one unit, as far as I understand it. I've heard (and I don't know if this is true) that the only way around it for Nvidia is to disable the compute effects via the drivers. I heard (again, I don't know if this is true) that is where the massive gains came from in Tomb Raider (45% or whatever the figure was).

It's double-precision floating-point performance that GK104 suffers in. If you compare GK104 to GK110, GK110 has over 10x the double-precision performance, though I do believe this is by design; as I suggested earlier, it would give people even less reason to buy Tesla and Quadro cards if nVidia's desktop cards had great compute performance.

The 680 has compute abilities, just really limited ones. Remember Nvidia put two GK104s together and called it the K10 for the Tesla market, because they had nothing else available.

As the above, I fully think this was intentional.

The 560 Ti variant of GF110 has 515 GFLOPs of double-precision floating-point throughput.

The GTX 580 variant of GF110 has 666 GFLOPs of double-precision floating-point throughput.

To put that into perspective, compared to the above:

The GTX 680 variant of GK104 has 95 (yes, ninety-five) GFLOPs of double-precision floating-point throughput.

If you compare that to the DP GFLOPs of the GK110 chip, at 1310 DP GFLOPs, you can see that nVidia have intentionally designed their desktop gaming chips to excel only at games; they do have the ability to do compute tasks, but aren't very good at them.
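To make those gaps easier to see, here's a quick Python sketch tabulating the DP figures quoted above (treat the numbers as this post's approximate specs, not authoritative board data):

```python
# Double-precision throughput figures (GFLOPs) as quoted in this post.
dp_gflops = {
    "GTX 560 Ti (GF110)": 515,
    "GTX 580 (GF110)": 666,
    "GTX 680 (GK104)": 95,
    "GK110": 1310,
}

for card, gflops in dp_gflops.items():
    print(f"{card:>20}: {gflops:>5} DP GFLOPs")

# The "over 10x" claim: GK110 vs GK104.
ratio = dp_gflops["GK110"] / dp_gflops["GTX 680 (GK104)"]
print(f"GK110 / GK104 = {ratio:.1f}x")
```

On these quoted figures the GK110/GK104 gap works out to roughly 13.8x, which is consistent with the "over 10x" claim.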

This has, firstly, allowed nVidia to produce a much more cost-effective GPU, as they haven't had to dedicate die space to DP performance, and secondly, they don't cannibalise Tesla sales from those who rely on CUDA-accelerated apps.

Because, as I said earlier, if their desktop GPUs had great double-precision/compute performance, there would be less reason/justification for people to buy Tesla cards.

So I believe they are going to continue that way with Maxwell too, and I think they will keep doing it for the foreseeable future, until CUDA is no longer the industry standard in the sectors where it's widely used, because traditionally AMD GPUs tend to have the stronger DP GFLOPs performance. For example:

The 7970 at 1050MHz on the core has 1075 DP GFLOPs.

If you compare that to GK110's DP performance at 1310 DP GFLOPs, that's only about 1.22x the DP GFLOPs for 1.59x more die space.
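As a rough sanity check of that perf-per-area comparison (the DP figures are the ones quoted above; the die areas of roughly 561mm² for GK110 and 352mm² for Tahiti are my assumed, commonly cited figures, not stated in the post):

```python
# DP GFLOPs quoted above; die areas (mm^2) are assumed figures for
# GK110 and Tahiti (7970), not taken from the post itself.
gk110_dp, tahiti_dp = 1310, 1075
gk110_area, tahiti_area = 561, 352

dp_ratio = gk110_dp / tahiti_dp        # ~1.22x the DP throughput...
area_ratio = gk110_area / tahiti_area  # ...for ~1.59x the die area
print(f"DP ratio:   {dp_ratio:.2f}x")
print(f"Area ratio: {area_ratio:.2f}x")
```

Under those assumed die sizes, the arithmetic does come out to roughly 1.22x the DP throughput for 1.59x the area, matching the comparison above.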

Now, currently AMD's DP GFLOPs performance is largely irrelevant, because there's not much software out there to take advantage of it, due to the persistence of CUDA.

On another note, I would say that examining the differences in nVidia's GPU DP GFLOPs performance shows that GK104 was never intended to be the mid-range GPU at all, and was clearly their high-end gaming GPU, because previous nVidia GPUs had much higher DP GFLOPs performance for the same die size.
 
On another note, I would say that examining the differences in nVidia's GPU DP GFLOPs performance shows that GK104 was never intended to be the mid-range GPU at all, and was clearly their high-end gaming GPU, because previous nVidia GPUs had much higher DP GFLOPs performance for the same die size.

Although I do agree with most of your post (agreeing with Spoffle :eek:), it is the above portion I disagree with. Myself, I feel that it was quite early in the Kepler design that Nvidia decided to go with the smaller, second-tier chip rather than the usual massive top-tier chip. They did undoubtedly decide somewhere along the way to strip out a lot of the compute hardware, because, as we all know and you quite rightly point out, GK104 doesn't have particularly good compute abilities.

Anyway, what I would like to see with Maxwell is them going with a halfway-house-sized core, so not the massive cores of the 580 and Titan but not the smaller cores of the 680/670. Leave out the compute functionality by all means, just give us stonkingly fast GPUs and stop charging the earth for them.
 
The computing market is stagnating full stop, really. It's because hardware has been advancing faster than software, so hardware has become much more powerful than the average person needs for their typical computing requirements.

When you look at games, this has also been happening to an extent; the only time you really see the need for high-end hardware is with a few titles.

The average, I would say, is probably people playing games at 1080p (the average high-end PC gamer) with something like a 7870/660 Ti or below, because most people just don't need the power that a 7970/680 (or multiple GPUs of each) provides unless they are using a resolution above 1080p.

As for having an ARM CPU on board a graphics card, I'd say it makes sense really, as it will help with general-purpose computing, and I think it's going to be interesting to see what they apply it to.

The HSA concept is similar to the PS4 situation too, and the reason why nVidia will seemingly struggle is that they don't have the ability (for licensing reasons) to produce CPUs that run the x86-64 instruction set, which means anything of that kind that they produce would be of limited use for most computers without them having an x86 licence.

It might not be so much of an issue in the future, as I think they could produce purely 64-bit chips through ARM, but a lot of software still uses x86 instruction sets, so a purely x64 chip would be of limited use (x86 = 32-bit, x64 = 64-bit).

Yeah, it's not really any different to what AMD are doing, i.e. HSA: mashing the CPU and GPU together. "(licensing reasons)" is probably the key phrase.

x64 is AMD IP; they invented it and own it. If Nvidia want to use it, they would have to ask AMD for a licence.

It's the same with x86, which is Intel's. AMD and Intel licence IP to each other as a matter of course, because they need each other to operate in the software space.

Nvidia don't make x86 CPUs, so they haven't needed that IP, but now that they want to do what is, all but in name, HSA in Maxwell, they need a CPU and the IP licences. ARM licence x86 and x64 from Intel and AMD, so there is a natural coming together there.
 