Are there any rumours as to when the next Nvidia GPUs will be released?

Moore's law is bust now, and this is becoming increasingly apparent.

This applies to both CPUs and, back on topic, increasingly to GPUs. It is not Intel's, Nvidia's or AMD's fault; it's just physics.

Nvidia will most likely launch 'small' Pascal (think circa 980 Ti performance, ish) in March to April 2016.

Assume retail pricing in the UK starting at around £400 at the very least before any launch premium is applied, and more likely around £450.

If they follow a similar pattern to before, expect the 'big' premium Pascal (Titan?) sometime later in the year with an £800-£1,000-ish price tag, if not more, followed 3-5 months later by the more mainstream high-end part (Ti?) for circa £600-£750.

Moore's Law is very much alive with the Pascal GPUs.

Big Pascal will have 17 billion transistors and will be available at the same time as small Pascal, but probably only in Nvidia's compute cards to start with.
 
Sorry, how long have we been on 28nm????? You do know what Moore's law says? Pascal may represent a big jump over the previous gen, but overall progress has faltered with GPUs, much like with CPUs.

GPU process size of course lags behind CPU process size. GPUs are also massively more parallel in their design and run at much lower clock speeds than contemporary CPUs.
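
For reference, Moore's law as usually quoted is a doubling of transistor count roughly every two years. A back-of-the-envelope sketch, taking GK110's roughly 7.1 billion transistors on 28nm in 2012 as a purely illustrative baseline (figures approximate):

$$N(t) \approx N_0 \cdot 2^{(t - t_0)/2} \;\Rightarrow\; N(2016) \approx 7.1\,\text{bn} \times 2^{(2016-2012)/2} \approx 28\,\text{bn}$$

On that naive curve a 2016 flagship would be nearer 28 billion transistors than the 17 billion rumoured above for big Pascal, which is the "progress has faltered" point; equally, 17 billion is still well over a doubling of GK110, which is the "very much alive" point.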
 
I can still see small Pascal being faster than current cards by a good bit, and it won't be a repeat of 780 Ti > 980. Hoping this is true anyway...
 
Sorry, how long have we been on 28nm????? You do know what Moore's law says?

GPU process size of course lags behind CPU process size. GPUs are also massively more parallel in their design and run at much lower clock speeds than contemporary CPUs.

This is why a 4-year-old GPU will totally destroy Intel's latest 8-core desktop CPUs in compute functions, lol.

Pascal is not on 28nm either.

It is Intel who have been holding things up; unfortunately for them, the world is moving on.
 
I never said Pascal was 28nm; I just pointed out that we have been on it for a while, with an obvious inherent limit to the number of transistors that can be squeezed in on the same process. Pascal may look like a big jump forward, and the full-fat products should be, but it's been a long time coming and you won't see the full-fat product for some time yet.

Don't know why you're comparing a four-year-old GPU to a current CPU. Is the CPU faster in compute functions than a contemporary GPU? How about a four-year-old CPU vs the GPU?
 
The full-fat product will be out with Nvidia's professional cards, and that won't take long as Maxwell is not good for that.

The truth of the matter is Intel have been dragging their feet, as we should be seeing desktop CPUs with 20+ cores selling for £250 and a Windows operating system able to use them.

Intel have been ripping their customers off for years as they have no real competition.
 
Don't know why you're comparing a four-year-old GPU to a current CPU. Is the CPU faster in compute functions than a contemporary GPU? How about a four-year-old CPU vs the GPU?

Because it is embarrassing seeing a 5960X take on a modern GPU when it comes to compute stuff.

Why do you think supercomputers use shedloads of GPUs to do the number crunching?
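
As a rough illustration of the gap both sides are describing, here is a back-of-the-envelope peak single-precision throughput comparison. A sketch only: the 5960X and GTX 680 figures are approximate published specs, and peak FLOPS say nothing about memory bandwidth or how well a given workload actually parallelises.

```python
# Back-of-the-envelope peak single-precision throughput; illustrative only.

def peak_gflops(units, clock_ghz, flops_per_cycle):
    """Peak GFLOPS = execution units x clock (GHz) x FLOPs per unit per cycle."""
    return units * clock_ghz * flops_per_cycle

# Core i7-5960X (2014): 8 cores at ~3.0 GHz, two 256-bit FMA units per core,
# i.e. roughly 32 single-precision FLOPs per core per cycle.
cpu = peak_gflops(units=8, clock_ghz=3.0, flops_per_cycle=32)

# GeForce GTX 680 (2012): 1536 CUDA cores at ~1.0 GHz, one FMA (2 FLOPs)
# per core per cycle.
gpu = peak_gflops(units=1536, clock_ghz=1.006, flops_per_cycle=2)

print(f"5960X peak:   ~{cpu:,.0f} GFLOPS")   # ~768
print(f"GTX 680 peak: ~{gpu:,.0f} GFLOPS")   # ~3,090
```

On paper a roughly four-year-old consumer GPU has around four times the peak throughput of a 2014 8-core desktop CPU, which is why number-crunching clusters lean on GPUs; the catch is that the advantage only materialises on highly parallel workloads.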
 
AMD changed their blueprint from 16nm to 14nm mainly because of TSMC yields, so their new GPUs won't be here before summer or even Q4, and there is a rumour going around that AMD have secured a time exclusive on HBM2. Add to that how hard it was to implement HBM1 just a few months ago, and the Q1/Q2 timeframe for HBM2 GPUs looks quite unlikely.
If Nvidia stay on track to release new Pascal in Q2, I find it more likely to be GDDR5X, not HBM, and they will probably start with the low end like they did with the 750 Ti, with the mid-range in Q3 and the top end in Q4.
So basically the performance goodness everyone is waiting for won't happen until late summer at best.
The variable that makes things hard for Nvidia is VR. AMD apparently have an edge with the GCN architecture and asynchronous compute; if Nvidia have no workaround, they will have to introduce a new product line.
 
Even paper AMD launches will be disruptive this year. Intel and Nvidia will want to encourage you to buy one of their products ahead of 2017. How they do that should be interesting. There's some real competition on the way, and the first salvos will be fired in 2016.
 
Because it is embarrassing seeing a 5960X take on a modern GPU when it comes to compute stuff.

Why do you think supercomputers use shedloads of GPUs to do the number crunching?

I'm confused now. I stated that GPUs are massively more parallel (hence more suited to number crunching); you then weirdly suggest that I should compare a four-year-old GPU to a modern CPU? It appears we agree on this at least. Regardless, Moore's law is pretty much broken: we have not seen it in action for CPUs or GPUs for the past few years, as multiple companies have struggled to get their processes down below 20-odd nm, and as such we have not seen the accompanying increase in transistor counts at the rate required for Moore's law to hold. There may be a 'big' Pascal out for compute earlier, but as a consumer I suspect you can expect to wait at least nine months until we see the equivalent of the 'Ti'.

The full-fat product will be out with Nvidia's professional cards, and that won't take long as Maxwell is not good for that.

The truth of the matter is Intel have been dragging their feet, as we should be seeing desktop CPUs with 20+ cores selling for £250 and a Windows operating system able to use them.

Intel have been ripping their customers off for years as they have no real competition.

Intel are not known for their operating systems, so you can hardly blame them for that. The reality is that the sort of work consumers require of their computers favours fewer, more highly clocked cores. A 20+ core CPU for the average consumer would be a terrible waste (witness the clock speeds of the Xeons with many cores). You seem to think that Moore's law obliges Intel to sell you a processor that complies with your own interpretation of it, i.e. that the transistor count should double every 18-24 months and that the increase should be directed almost exclusively towards more cores, regardless of whether this will actually improve our computing experience; more cores tend to show increasingly diminishing returns, especially when you factor in achievable clock rates for typical consumer workloads.
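
To put a rough number on the diminishing-returns point, Amdahl's law is the usual yardstick: if a fraction p of a workload can be parallelised, the speedup on n cores is capped. A sketch with an assumed p = 0.9 (purely illustrative):

$$S(n) = \frac{1}{(1-p) + p/n} \quad\Rightarrow\quad S(4) \approx 3.1,\; S(8) \approx 4.7,\; S(20) \approx 6.9,\; S(\infty) = 10$$

So even a workload that is 90% parallel gets less than 7x from 20 cores, and no number of cores gets it past 10x, while most everyday consumer software is far less parallel than that.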

The reality is that you and I are a relative aberration when it comes to the bulk of Intel's retail customers, where the growth is in low-power devices with lots integrated onto one chip, suitable for mobile use whilst retaining enough processing power for general computing. It is this ethos which drives Intel's consumer CPUs, and with good reason! If they followed your direction they would no doubt be in financial difficulty by now! The 'enthusiast' market quite sensibly gets dealt with by borrowing from the commercial side of things, with motherboards and chips adapted from the business Xeon range.

Intel are still in competition, with themselves! They know that in years past they could sell a consumer a new CPU every year that would be a tangible upgrade on the last one. This pattern has largely faltered in the past five years and it's costing Intel sales. They are up against physics, however.
 
Have a vanilla 980 TwinFrozr at present; at 3440x1440 it runs games fine, but whack up the FSAA and demanding titles can dip below 60fps. I just want a single-card setup, and I have no intention of changing the CPU and mobo, as doing so would only give a minor performance boost that wouldn't justify the huge cost.

Pascal seems to be the GFX ticket for my needs then. A few more months to wait :cool:

Seems like perfect timing with The Division coming out around then too.
 
I'm confused now. I stated that GPUs are massively more parallel (hence more suited to number crunching); you then weirdly suggest that I should compare a four-year-old GPU to a modern CPU? It appears we agree on this at least. Regardless, Moore's law is pretty much broken: we have not seen it in action for CPUs or GPUs for the past few years, as multiple companies have struggled to get their processes down below 20-odd nm, and as such we have not seen the accompanying increase in transistor counts at the rate required for Moore's law to hold. There may be a 'big' Pascal out for compute earlier, but as a consumer I suspect you can expect to wait at least nine months until we see the equivalent of the 'Ti'.

Intel are not known for their operating systems, so you can hardly blame them for that. The reality is that the sort of work consumers require of their computers favours fewer, more highly clocked cores. A 20+ core CPU for the average consumer would be a terrible waste (witness the clock speeds of the Xeons with many cores). You seem to think that Moore's law obliges Intel to sell you a processor that complies with your own interpretation of it, i.e. that the transistor count should double every 18-24 months and that the increase should be directed almost exclusively towards more cores, regardless of whether this will actually improve our computing experience; more cores tend to show increasingly diminishing returns, especially when you factor in achievable clock rates for typical consumer workloads.

The reality is that you and I are a relative aberration when it comes to the bulk of Intel's retail customers, where the growth is in low-power devices with lots integrated onto one chip, suitable for mobile use whilst retaining enough processing power for general computing. It is this ethos which drives Intel's consumer CPUs, and with good reason! If they followed your direction they would no doubt be in financial difficulty by now! The 'enthusiast' market quite sensibly gets dealt with by borrowing from the commercial side of things, with motherboards and chips adapted from the business Xeon range.

Intel are still in competition, with themselves! They know that in years past they could sell a consumer a new CPU every year that would be a tangible upgrade on the last one. This pattern has largely faltered in the past five years and it's costing Intel sales. They are up against physics, however.

The fact that graphics cards are used demonstrates that Intel have failed in the CPU dept.

There is nothing wrong with a large number of CPU cores as long as there is software written to use them.
 
Have they really failed though? :) Most applications just wouldn't have a need for so many cores; only specific uses do, e.g. graphics processing. I remember reading that GPUs could be used to provide additional (non-gaming) processing power to the OS/CPU, but so far we have not seen that. Why is that? I assume it's just a software solution that needs to be implemented to make use of graphics cards for extra processing power. E.g., when using virtualisation, why can we not also use the cores available on a GPU, and the associated memory too?
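
For what it's worth, the software side does exist for the data-parallel case; the catch is that the GPU only helps when the work is the same simple operation applied across lots of data. A minimal GPGPU sketch using Python with Numba's CUDA support (assumes an Nvidia card with the CUDA toolkit installed; the function and array names here are just illustrative):

```python
# Minimal GPGPU sketch: offload an element-wise calculation to the GPU.
# Requires an Nvidia GPU plus the numba and numpy packages; illustrative only.
import numpy as np
from numba import cuda

@cuda.jit
def add_arrays(a, b, out):
    i = cuda.grid(1)              # this thread's global index
    if i < out.size:
        out[i] = a[i] + b[i]      # each GPU thread handles one element

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_arrays[blocks, threads_per_block](a, b, out)  # Numba copies the arrays to and from the GPU

assert np.allclose(out, a + b)
```

Thousands of GPU threads each doing one tiny, identical piece of arithmetic is exactly what a GPU is built for; the branchy, sequential, latency-sensitive work an OS or a VM spends most of its time on doesn't map onto that model, which is a big part of why you don't see Windows scheduling general tasks onto GPU cores.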

I'm not so sure... the 980 Ti will still be a desirable card; non-Ti 780s still go for £150...

The 980 Ti and the 780/780 Ti are still on the same 28nm with no change to DirectX - both will be changing for the next gen; well, it will be better optimised/more fully supported in the case of DirectX 12 and Pascal, I think.
 
The 780 is also a 28nm card that can still turn in over half the performance of a 980 Ti. Not bad for a card that's two generations behind. With a double die shrink you can expect a much greater performance delta. The 980 Ti could be old hat by autumn. I wouldn't bet 500 quid on it myself.
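
As a rough sense of why the die shrink matters: if node names scaled literally, going from 28nm to 16nm would allow about

$$\left(\frac{28}{16}\right)^2 \approx 3.1\times$$

the transistors in the same die area. In practice the names are partly marketing and the 16nm FinFET nodes are generally quoted at closer to 2x the density of 28nm, plus a clock/power gain from FinFETs, which is still far more headroom than any 28nm-to-28nm refresh had. (Back-of-the-envelope only.)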
 
I'm confused now. I stated that GPUs are massively more parallel (hence more suited to number crunching); you then weirdly suggest that I should compare a four-year-old GPU to a modern CPU? It appears we agree on this at least. Regardless, Moore's law is pretty much broken: we have not seen it in action for CPUs or GPUs for the past few years, as multiple companies have struggled to get their processes down below 20-odd nm, and as such we have not seen the accompanying increase in transistor counts at the rate required for Moore's law to hold. There may be a 'big' Pascal out for compute earlier, but as a consumer I suspect you can expect to wait at least nine months until we see the equivalent of the 'Ti'.

Intel are not known for their operating systems, so you can hardly blame them for that. The reality is that the sort of work consumers require of their computers favours fewer, more highly clocked cores. A 20+ core CPU for the average consumer would be a terrible waste (witness the clock speeds of the Xeons with many cores). You seem to think that Moore's law obliges Intel to sell you a processor that complies with your own interpretation of it, i.e. that the transistor count should double every 18-24 months and that the increase should be directed almost exclusively towards more cores, regardless of whether this will actually improve our computing experience; more cores tend to show increasingly diminishing returns, especially when you factor in achievable clock rates for typical consumer workloads.

The reality is that you and I are a relative aberration when it comes to the bulk of Intel's retail customers, where the growth is in low-power devices with lots integrated onto one chip, suitable for mobile use whilst retaining enough processing power for general computing. It is this ethos which drives Intel's consumer CPUs, and with good reason! If they followed your direction they would no doubt be in financial difficulty by now! The 'enthusiast' market quite sensibly gets dealt with by borrowing from the commercial side of things, with motherboards and chips adapted from the business Xeon range.

Intel are still in competition, with themselves! They know that in years past they could sell a consumer a new CPU every year that would be a tangible upgrade on the last one. This pattern has largely faltered in the past five years and it's costing Intel sales. They are up against physics, however.

Great post, I fully agree with everything you are saying here. There is very little reason for the average consumer to upgrade their CPU. Even high-end gamers are only getting marginal improvements from a CPU upgrade, and even that will be less noticeable if DX12 is any good.
 
Have they really failed though? :) Most applications just wouldn't have a need for so many cores; only specific uses do, e.g. graphics processing. I remember reading that GPUs could be used to provide additional (non-gaming) processing power to the OS/CPU, but so far we have not seen that. Why is that? I assume it's just a software solution that needs to be implemented to make use of graphics cards for extra processing power. E.g., when using virtualisation, why can we not also use the cores available on a GPU, and the associated memory too?

They have failed because Intel and Microsoft like things the way they are.

Intel CPUs are full of backwards compatibility and obsolete stuff.

Windows operating systems are big and clumsy.
 