
Nvidia: Next-Generation Maxwell Architecture Will Break New Ground.

Anyway, what I would like to see with Maxwell is them go with a halfway-house sized core: not the massive cores of the 580 and Titan, but not the smaller cores of the 680/670 either. Leave out the compute functionality by all means, just give us stonkingly fast GPUs and stop charging the earth for them.

This. I hope I don't get verbally beaten to death for admitting that I'm quite excited to see what happens with Maxwell. Because I am. I just wish that Nvidia would be a little more reasonable with their pricing.
 
Although I do agree with most of your post (agreeing with Spoffle :eek:), it is the above portion that I disagree with. Myself, I feel it was quite early on in the Kepler design that Nvidia decided to go with the smaller-core, second-tier chip rather than the usual massive top-tier chip. They did undoubtedly decide somewhere along the way to strip out a lot of the compute hardware, because, as we all know and you quite rightly point out, GK104 doesn't have particularly good compute abilities.

Anyway, what I would like to see with Maxwell is them go with a halfway-house sized core: not the massive cores of the 580 and Titan, but not the smaller cores of the 680/670 either. Leave out the compute functionality by all means, just give us stonkingly fast GPUs and stop charging the earth for them.

Well the issue with this is that it doesn't really work that way. Chip designs are done well in advance of them actually being produced.

Also, "stripped out" is a relative term in this context, but I think people take it literally and assume GK104 used to be a different design, and that midway through, nVidia decided to remove parts of the core that directly influenced double precision performance.

When really it's more that GK104 has had bits stripped out relative to GF110; the way GK104 is now is more or less what nVidia always intended to produce. Chips can be changed, but only on a relatively small scale, rather than having massive chunks taken out and swapped.

So generally, the argument I've seen is that GK104 in its current iteration was originally going to be used in a GTX 660 Ti sort of GPU, but looking at its specs, I think it's clear that was never the case, and I think we'll see the same for Maxwell.

A ~300mm² GPU with very low double precision performance, but gaming performance where it should be. I can't imagine they'll make the mistake of crippling it with an inadequate memory bus this time around; the 2GB of RAM is less of an issue than the memory bus for Kepler, I'd say.
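
To put rough numbers on that (going off the published specs, so treat these as approximate): the 680's 256-bit bus at 6Gbps effective works out to 256/8 × 6 ≈ 192GB/s, which is essentially the same bandwidth as the 580's 384-bit bus at 4Gbps (384/8 × 4 ≈ 192GB/s), despite the 680 having far more shader throughput to feed.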
 
This probably won't come off quite the way I mean it, but (not trying to be high and mighty or anything) there is some really, really poor understanding of compute in this thread. Aside from Spoffle (who mostly seems to know what he's talking about on the subject), most people seem to have almost no understanding of it, and even Spoffle was a little inaccurate on a couple of things. Being a little pedantic, AMD had/has its own equivalent of CUDA, last named Stream (I can't remember what it was called before that); lack of support has relegated it to an almost unused media accelerator. Also, DirectCompute is, in a generalised sense, the same as OpenCL and CUDA, though they all differ a little in what kind of processing they are designed to handle and how they go about it.

All the latest nVidia and AMD graphics cards support "compute" functionality. While double precision is severely gimped on the GeForce 600 series, it's not used that much in gaming currently, and the series has sufficient single precision performance for gaming, though somewhat underwhelming on the 670/680 for high-end GPUs.
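
To give a rough idea of how gimped (again going off the published specs, so approximate figures): GF110 in the GTX 580 runs double precision at 1/8 of its single precision rate, so about 2 × 512 cores × 1.54GHz ≈ 1.58 TFLOPS FP32 and roughly 198 GFLOPS FP64, while GK104 in the GTX 680 runs it at 1/24, so about 2 × 1536 cores × 1.0GHz ≈ 3.09 TFLOPS FP32 but only around 129 GFLOPS FP64. Nearly double the single precision throughput of the 580, yet less double precision.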


EDIT: Point of my post being: it might be a good idea for pretty much anyone other than Spoffle who has posted on the subject in this thread to open up Google and do a bit of research before posting about compute in future :P
 
This probably won't come off quite the way I mean it, but (not trying to be high and mighty or anything) there is some really, really poor understanding of compute in this thread. Aside from Spoffle (who mostly seems to know what he's talking about on the subject), most people seem to have almost no understanding of it, and even Spoffle was a little inaccurate on a couple of things. Being a little pedantic, AMD had/has its own equivalent of CUDA, last named Stream (I can't remember what it was called before that); lack of support has relegated it to an almost unused media accelerator. Also, DirectCompute is, in a generalised sense, the same as OpenCL and CUDA, though they all differ a little in what kind of processing they are designed to handle and how they go about it.

What I meant when I said that AMD doesn't have something like CUDA wasn't about the API itself; if you go over the part where I said it:


AMD haven't got something like CUDA, i.e. a compute API that brings in revenue the way nVidia has with CUDA, so it makes sense why they'd want to do that regardless of how people feel about it

I was getting at the same thing you said: it doesn't get used, so I wasn't counting it as a valid API. To be honest, I can't remember anything even using it aside from a few tools from AMD.

I am well aware that AMD does have its own proprietary compute API, but they may as well not, because it isn't used.

I also know that DirectCompute is the same as CUDA and OpenCL in the sense that they are all compute APIs, but when I say they aren't the same, I mean it the way you might if someone said that OpenGL and DirectX are the same.

I was affirming that whilst they do the same job, they are different APIs.
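
To show what I mean by "same job, different APIs", here's a rough sketch of the same trivial vector add written against both. This isn't from anyone's real code, just an illustration, and the kernel name and launch sizes are made up:

    // CUDA: the kernel is compiled alongside the host C++ by nvcc and
    // launched with the <<<blocks, threads>>> syntax.
    __global__ void vec_add(const float* a, const float* b, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = a[i] + b[i];
    }
    // host side: vec_add<<<(n + 255) / 256, 256>>>(a, b, out, n);

    // OpenCL: the equivalent kernel ships as a string, is built at runtime
    // with clBuildProgram and launched via clEnqueueNDRangeKernel.
    static const char* vec_add_cl =
        "__kernel void vec_add(__global const float* a,    \n"
        "                      __global const float* b,    \n"
        "                      __global float* out, int n) \n"
        "{                                                 \n"
        "    int i = get_global_id(0);                     \n"
        "    if (i < n) out[i] = a[i] + b[i];              \n"
        "}                                                 \n";

Same work on the GPU either way; what differs is how you write, build and launch it, which is exactly the OpenGL vs DirectX sort of distinction I was making.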




All the latest nVidia and AMD graphics cards support "compute" functionality. While double precision is severely gimped on the GeForce 600 series, it's not used that much in gaming currently, and the series has sufficient single precision performance for gaming, though somewhat underwhelming on the 670/680 for high-end GPUs.

This is what leads me to believe that nVidia had no intention of pushing out GK104 as mid-range, because even last gen's mid-range nVidia GPU had significantly more double precision performance than GK104.

That shows a conscious effort was put into reducing die sizes (due to the unsustainable nature of 500mm²+ GPUs).

As well as the fact that GTX 5xx series cards were definitely, to some degree, eating into sales of Tesla cards.

I've read on more than a few occasions (well, quite a lot) on various 3D modelling/design/CAD forums that a fair number of people held the opinion that Quadro and Tesla cards were poor value for money, because you got pretty much the same end product by buying a GeForce card instead.

The main caveats were losing the support from nVidia that comes with buying a Tesla or Quadro card, and occasionally the smaller amount of RAM, but for the most part those who simply needed a card for viewport performance would either go with AMD (for the larger amount of RAM, which helps in viewport rendering) or with a GTX 580 if they needed something for CUDA performance.

I think it was this, coupled with wanting to produce smaller-die GPUs, that pushed them over the edge into actually getting down to it and making their gaming chips smaller.

EDIT: Point of my post being: it might be a good idea for pretty much anyone other than Spoffle who has posted on the subject in this thread to open up Google and do a bit of research before posting about compute in future :P

I strongly agree with this.
 
Anyway, based on the speech below by nVidia's CEO at their GPU Technology Conference 2013, I am even more excited by the next architecture after Maxwell, called Volta:


Volta sounds insane. I'm really excited for it; going to save for 3x SLI of the most expensive Volta card ^.^ I'm also looking forward to Tegra 4i. Kepler graphics with an ARM CPU... I think I died and went to mobile heaven :p
 
I was getting at the same thing you said: it doesn't get used, so I wasn't counting it as a valid API. To be honest, I can't remember anything even using it aside from a few tools from AMD.

I am well aware that AMD does have its own proprietary compute API, but they may as well not, because it isn't used.

I also know that DirectCompute is the same as CUDA and OpenCL in the sense that they are all compute APIs, but when I say they aren't the same, I mean it the way you might if someone said that OpenGL and DirectX are the same.

I was affirming that whilst they do the same job, they are different APIs.

Ah, sorry, I misread the first one.
 
Well the issue with this is that it doesn't really work that way. Chip designs are done well in advance of them actually being produced.

Also, "stripped out" is a relative term in this context, but I think people take it literally and assume GK104 used to be a different design, and that midway through, nVidia decided to remove parts of the core that directly influenced double precision performance.

When really it's more that GK104 has had bits stripped out relative to GF110; the way GK104 is now is more or less what nVidia always intended to produce. Chips can be changed, but only on a relatively small scale, rather than having massive chunks taken out and swapped.

So generally, the argument I've seen is that GK104 in its current iteration was originally going to be used in a GTX 660 Ti sort of GPU, but looking at its specs, I think it's clear that was never the case, and I think we'll see the same for Maxwell.

A ~300mm² GPU with very low double precision performance, but gaming performance where it should be. I can't imagine they'll make the mistake of crippling it with an inadequate memory bus this time around; the 2GB of RAM is less of an issue than the memory bus for Kepler, I'd say.

I think we are basically saying the same thing; maybe I misinterpreted your original post a little. Yes, of course any design changes are done at the pen-and-paper stage (probably not actually done with pen and paper lol, more likely a CAD package and simulation), but certainly well before any actual silicon is created.

Yes "stripped out" is probably the wrong term to use.
 
Surely we as consumers benefit from the fight between team red and team green?

I agree. Competition always pushes technology forward and helps regulate price rises. When you have a monopoly in any market, you'll often see the market stagnate, with prices set at the whim of the sole provider.

It's good that Nvidia and AMD are competing. It's bad that a minority of their customers feel they need to compete too on the tinterweb.
 
I don't understand why anybody would be so loyal to one company with something like graphics cards.

I have owned both ATI and Nvidia cards and will always get the best at the time.

We NEED ATI to start trashing Nvidia, because the state of the GPU market is disgusting IMO. Prices are through the roof, and in this current state I doubt I will buy a top-of-the-line GPU again because the prices are just stupid.


Hopefully we see some fierce competition which results in a price war with these next cards.
 
The graphics card sub-forum is like an American jail. If you don't join a gang (red or green) then you are preyed upon for being able to give unbiased opinions. Spoffle speaks a lot of sense on here, but many don't get on with him, as they see his posts as attacking their side when in fact he's usually spot on.
 
I don't understand why anybody would be so loyal to one company with something like graphics cards.

I have owned both ATI and Nvidia cards and will always get the best at the time.

We NEED ATI to start trashing Nvidia, because the state of the GPU market is disgusting IMO. Prices are through the roof, and in this current state I doubt I will buy a top-of-the-line GPU again because the prices are just stupid.


Hopefully we see some fierce competition which results in a price war with these next cards.

I can understand how people get into that frame of mind. You get a brand that does really well for a while (AMD graphics cards at the moment for value for money, Intel with Sandy Bridge and to a lesser extent Ivy Bridge); the next generation comes around and it might not do as well, but your past judgements cloud your view and you don't see things objectively. I was speccing up a machine and I was stuck in the thought process that AMD CPUs are rubbish for everybody's needs (silly I know; they are a lot more competitive now, especially for multithreaded work). But it isn't that clear cut; it depends on your individual needs. Sorry if this came across as patronising; you guys know a hell of a lot more than me! :)
 
The fanboys have missed the best bit.

British design and IP (ARM) on a tier 1 graphics card.

Wasn't lost on me :P I've said a couple of times that I kind of wish ARM would enter the GPU market proper (other than Mali, etc.), but it's not really the way they do business.
 
Stay on topic people

That video was quite interesting. I always enjoy reading about some of the underlying technologies and software that drive GPUs, but, being honest, most of the time I don't care and just worry about how many FPS it gets in BF3.

Good times ahead whether you buy NV or AMD
 
The fanboys have missed the best bit.

British design and IP (ARM) on a tier 1 graphics card.

I honestly don't see the importance of this; arbitrary borders around the location where something was designed are of little significance to me and a lot of others.

Though for argument's sake, smartphones and tablets generally have ARM-based hardware in them anyway, and those are far more prolific than top-end discrete graphics cards.
 
I honestly don't see the importance of this; arbitrary borders around the location where something was designed are of little significance to me and a lot of others.

Though for argument's sake, smartphones and tablets generally have ARM-based hardware in them anyway, and those are far more prolific than top-end discrete graphics cards.

That kind of thinking works until one of your kids decides they want a job at a world-class firm in the UK, realises there are none, and instead has to work at a call centre.

This sort of design win is significant in keeping momentum going in what might become world domination (and lots of corporation tax) for a company based in Cambridge and not Silicon Valley.
 
I always buy Nvidia based on the improvement over my last Nvidia card; I don't even compare speed against the latest AMD. I suffered through the joke that ATI called drivers years ago and never bought from them again.
 