
** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

Soldato
Joined
19 Dec 2010
Posts
12,069
For a number of reasons Pascal is unlikely to be cheap.

But history is against a lot of the claims in regards to "1070" performance:

The GTX670 was convincingly faster than the GTX580, the GTX470 stomped all over the GTX285, and the GTX260 was often almost twice as fast as the 9800GTX(+). nVidia has no problem killing off the value of older hardware: the next generation's x70 card was often priced keenly against the previous generation's high-end card, or even significantly cheaper, while matching or beating its performance. The 980Ti certainly won't be immune to that.

History also shows that every generation the gap between the current high end and the next gen's mid range is shrinking. The 970 and the 780Ti are very close in performance.

Sure, there is a die shrink involved, but die shrinks don't give the performance increases they used to.

It would not surprise me if the 980Ti was faster than the next gen's mid range.
 
Man of Honour
Joined
13 Oct 2006
Posts
92,160
History also shows that every generation the gap between the current high end and the next gen's mid range is shrinking. The 970 and the 780Ti are very close in performance.

Sure, there is a die shrink involved, but die shrinks don't give the performance increases they used to.

It would not surprise me if the 980Ti was faster than the next gen's mid range.

There is somewhat of an anomaly there in terms of 28nm though.
 
Permabanned
Joined
29 Jul 2009
Posts
1,964
Location
Stoke on trent
At least one of those images is purely mocked up and not based on actual hardware :S

Taken from the link:

"Pascal at GTC 2015

Unfortunately the second mockup (GTC2015) has no frontal pic available, so you have to trust my photoshop skills here. We also have no dimensions for the new form factor (although it is very likely it's the same as GTC2014). However just be sure, we are also comparing HBM1 stack mounted on the GPU, just so we have two points of reference."
 
Soldato
Joined
19 Dec 2010
Posts
12,069
There is somewhat of an anomaly there in terms of 28nm though.

I agree, but it still doesn't change what I am saying. Die shrinks don't mean huge gains in performance anymore; it's more about getting the same performance with less power.

And they are becoming more complex and costly.

Maybe we will find out on Tuesday, but I doubt it.
 
Man of Honour
Joined
13 Oct 2006
Posts
92,160
I agree, but it still doesn't change what I am saying. Die shrinks don't mean huge gains in performance anymore; it's more about getting the same performance with less power.

And they are becoming more complex and costly.

Maybe we will find out on Tuesday, but I doubt it.

Not denying the overall trend, but the length of time spent on 28nm has exaggerated it a lot.
 
Associate
Joined
28 Jan 2010
Posts
1,547
Location
Brighton
I agree, but it still doesn't change what I am saying. Die shrinks don't mean huge gains in performance anymore; it's more about getting the same performance with less power.

This is a misunderstanding of the direction things are moving in (and have been for some time).

Die shrinks are still giving huge gains in performance, but in parallel tasks. If you double performance per watt, you're doubling performance in general, as long as the task can be made parallel enough to use the extra cores you can fit in the same power budget.

In CPUs you don't see this as a 'consumer' because we mostly just get 4-6 cores, and all our 'normal' software is built for 4 cores if you're lucky. But that's partly because, for now, we don't really need more CPU performance, so they don't bother.

At the high end though, you're telling me the Xeon E7-8890V3 isn't massively faster than what you could get a few years ago, on 32nm or 45nm? As long as it's written for 36 threads of course.

And then back to GPUs. Happily, graphics is naturally a massively parallel task, so we should continue to see the 'expected' gains from die shrinks and u-arch updates.
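The "parallel enough" caveat above is essentially Amdahl's law: extra cores only help the fraction of the work that can run in parallel. A minimal sketch (the fractions here are illustrative, not measured):

```python
# Amdahl's law: speedup from n cores when a fraction p of the work is parallel.
def speedup(p, n):
    """Speedup over one core; p is the parallelisable fraction (0..1)."""
    return 1.0 / ((1.0 - p) + p / n)

# Graphics is close to p = 1, so doubling the core count nearly doubles
# throughput; typical desktop software (say p = 0.5) barely benefits.
print(speedup(0.99, 2))  # GPU-like workload: ~1.98x from 2x the cores
print(speedup(0.50, 2))  # mixed workload:    ~1.33x
```

This is why a shrink that only buys more cores per watt still translates almost fully into GPU performance, while a 4-core-targeted desktop app sees little of it.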
 
Soldato
Joined
30 Nov 2011
Posts
11,358
You can't take what's happening with CPUs and apply it to GPUs.

Exactly. CPU dies have gotten smaller whilst also dedicating more and more space to the GPU. If you actually include the GPU in the performance reference, CPUs have effectively gotten faster with each shrink; it's just that we ignore that when we use a dedicated GPU.

GPU die sizes stay pretty much the same and are all dedicated to GPU performance. So Pascal might be a little smaller and might go for a slightly more power-efficient design, but there is still room for something a bit faster than a 980Ti at 970/980-type prices, as with every previous die shrink.
 
Soldato
Joined
6 Jan 2013
Posts
21,944
Location
Rollergirl
there is still room for something a bit faster than a 980Ti at 970/980-type prices

Obviously!

If it was more expensive and slower than a 980Ti, then what would be the point of it? People would just buy a 980Ti. If it was cheaper and slower, it would just be another 970.

We don't need to know the square root of a nanometer to work out any of this! The X70/80 will be less expensive and faster than a 980Ti, else it will be completely irrelevant.

Edit: rereading this post, it comes across as a bit condescending... not intended. What I'm saying is that the mid range will be cheaper and marginally faster than a 980Ti, else there will be no point.
 
Last edited:
Soldato
Joined
19 Dec 2010
Posts
12,069
You can't take what's happening with CPUs and apply it to GPUs.

But I am not; I didn't mention CPUs in my post.

This is a misunderstanding of the direction things are moving in (and have been for some time).

Die shrinks are still giving huge gains in performance, but in parallel tasks. If you double performance per watt, you're doubling performance in general, as long as the task can be made parallel enough to use the extra cores you can fit in the same power budget.

In CPUs you don't see this as a 'consumer' because we mostly just get 4-6 cores, and all our 'normal' software is built for 4 cores if you're lucky. But that's partly because, for now, we don't really need more CPU performance, so they don't bother.

At the high end though, you're telling me the Xeon E7-8890V3 isn't massively faster than what you could get a few years ago, on 32nm or 45nm? As long as it's written for 36 threads of course.

And then back to GPUs. Happily, graphics is naturally a massively parallel task, so we should continue to see the 'expected' gains from die shrinks and u-arch updates.

Again, show me where I talked about CPUs? I never mentioned them at all.

The performance gains from die shrinks have been getting smaller, while the process has been getting more expensive and more complex each time.
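The "more expensive" point is about cost per transistor: if wafer cost rises almost as fast as density, most of the shrink's saving evaporates. A quick sketch with purely hypothetical figures (the wafer costs and densities below are made-up round numbers, not real foundry pricing):

```python
# Hypothetical illustration: a shrink that doubles transistor density
# but raises wafer cost by 60% keeps only a small per-transistor saving.
def cost_per_transistor(wafer_cost, transistors_per_wafer):
    return wafer_cost / transistors_per_wafer

old = cost_per_transistor(5000.0, 1.0e12)  # made-up older-node wafer
new = cost_per_transistor(8000.0, 2.0e12)  # made-up newer-node wafer
print(new / old)  # ~0.8 -> only a 20% saving despite 2x the density
```

With older nodes, wafer cost rose far less per shrink, so the same doubling of density translated into a much bigger cost drop per transistor; that is the squeeze being described.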
 