
** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

Didn't take you long to get over the Fury X hype I see. :p

Lol, I'm always looking forward :), never keep a GPU for long. Annoyingly for me, my Fury X is going back for RMA due to pump noise. Will look out for another though once the pump issue is def sorted.

Pascal looks like it could be an absolute monster: new architecture, die shrink and HBM 2.0. Should be a big step.
 
^^ Yup.

The flagship Titan Pascal GP200 could well have 12GB/16GB of HBM, with the normal flagship GP204 having 6GB/8GB of HBM.

They are going to be pricey for sure, but performance is going to be mind-blowing.

We're getting Gen 2.0 HBM along with a massive die shrink, either to 16nm or 14nm down from 28nm, plus a new architecture. These cards are going to be scary!

Be careful what you wish for. :)
 
You only need to look at the jump from 40nm to 28nm to see where things will end up in the first year.


What was the price?

I really want to try G-Sync/FreeSync, but at the same time I need to upgrade my graphics card as well. And when I think that Pascal will be like 50% faster than my current card, my heart is telling me to wait... :(
 
Exciting stuff for sure. IBM are out of the fab business now though, aren't they? Presumably they intend to license this technology to Samsung and GlobalFoundries.

After being stuck on 28nm for what feels like forever, it's exciting to think that 7nm could be just a few years away; I can't help but feel, however, that the challenges of moving to silicon-germanium and EUV lithography will inevitably lead to delays...
 
Why people think you'll be able to pay the same money in a year and a half and get double the performance is beyond me.

The first Pascal cards will be an improvement, but not 80% better than what we have today (TX/Ti).
 

Because there is a fairly big jump from the improved 28nm process (which even the TX is on) to 16nm FF+, sub-15nm FinFETs, etc., due to 20nm planar itself being skipped. As I mentioned before, I don't believe a direct optical shrink is possible because of the changes in lithography, but if you took GM200 and did shrink it to 16nm FF+, the potential performance increase would be fairly big - at least 45% depending on clock speeds - and I believe 55% was touted for stuff like SRAM. That isn't even taking into account a new architecture designed to take advantage of the process.
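
Just to make that arithmetic explicit, a trivial Python sketch; the 45%/55% factors are the assumed figures from this post, and the architecture multiplier is a pure placeholder, not a measured number:

# Back-of-the-envelope estimate for a GM200-style chip shrunk to 16nm FF+.
# All factors are assumptions from the discussion above, not measured data.

baseline_perf = 1.00       # GM200 on 28nm, normalised to 1.0
shrink_gain_low = 1.45     # ~45% from the shrink alone (assumed, clock-dependent)
shrink_gain_high = 1.55    # ~55% figure touted for SRAM-like structures (assumed)
arch_gain = 1.15           # placeholder for new-architecture gains (pure guess)

low = baseline_perf * shrink_gain_low
high = baseline_perf * shrink_gain_high * arch_gain

print(f"Plausible range: {low:.2f}x to {high:.2f}x of GM200")
# -> Plausible range: 1.45x to 1.78x of GM200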

EDIT: Also, while I doubt we will see as big a jump with conventional titles, future DX12 titles will likely benefit in a much bigger fashion - possibly even a fair bit over 2x the performance, depending on hardware implementation/optimisation.

EDIT2: It's always possible of course that nvidia will drip feed progressively faster versions of what's possible and/or take advantage of increased efficiency to reduce their own costs rather than head directly towards performance.
 

This is what's going to happen if one company has the performance advantage over the other: they will drip feed it to us to make it last as long as they can, in order to rinse every penny they can. Just like how the Titan/780 had a good run with the TB/Ti sat on the sidelines.
 
Couple of things to bear in mind.

For the last couple of generations NVidia has more or less had parity with AMD in each segment using a card with a smaller memory bus: 256-bit vs 384-bit, then 384-bit vs 512-bit. With Pascal and whatever AMD do next both using HBM, that difference will be gone; both sides being on the same memory bus might make a big difference.
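
To put rough numbers on that bus-width point, peak theoretical bandwidth is just bus width times per-pin data rate divided by eight. A quick Python sketch; the GDDR5 rates are the usual figures for those cards, while the HBM2 row uses the commonly quoted ~2 Gbps per-pin target and should be treated as an assumption:

# Peak memory bandwidth (GB/s) = bus width (bits) x per-pin data rate (Gbps) / 8.

def bandwidth_gbs(bus_bits, rate_gbps):
    """Peak theoretical bandwidth in GB/s."""
    return bus_bits * rate_gbps / 8

cards = [
    ("GTX 980 (256-bit GDDR5 @ 7 Gbps)",    256,  7.0),
    ("R9 290X (512-bit GDDR5 @ 5 Gbps)",    512,  5.0),
    ("GTX 980 Ti (384-bit GDDR5 @ 7 Gbps)", 384,  7.0),
    ("Fury X (4096-bit HBM1 @ 1 Gbps)",     4096, 1.0),
    ("HBM2, 4 stacks (4096-bit @ ~2 Gbps)", 4096, 2.0),  # assumed target rate
]

for name, bits, rate in cards:
    print(f"{name}: {bandwidth_gbs(bits, rate):.0f} GB/s")
# -> 224, 320, 336, 512 and 1024 GB/s respectively

Once both sides sit on the same 4096-bit HBM interface, the bandwidth column stops being a differentiator, which is exactly the point above.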

The other thing is NVidia have generally made a fair few new chips for each architecture (nine Fermis, five Keplers, five Maxwells); AMD, on the other hand, have managed with fewer (four GCN 1.0, two GCN 1.1, two GCN 1.2).
If both sides are going for HBM then it will mean new chips across the range, and with AMD cutting R&D like they have been, can they keep up?

Whatever happens it will be interesting to see what both sides come up with.
 
EDIT2: It's always possible of course that nvidia will drip feed progressively faster versions of what's possible and/or take advantage of increased efficiency to reduce their own costs rather than head directly towards performance.

This sounds like the nVidia we know. They'll take the Intel approach :(
 
http://www.kitguru.net/components/c...0nm-risk-production-actual-chips-due-in-2017/

Maybe this is why NV were courting Samsung? Who, btw, have just announced an even more aggressive push towards 10nm. What I read from all this is that 14/16nm will be disappointing; I'm not sure why else they would be suicidally throwing themselves at such a tricky and expensive engineering problem, aside from Apple's patronage of course.
 

Well if 10nm etc. doesn't come until 2017/2018, I'm fine with that, as long as next year (2016) we do get a shrink to 16nm/14nm. A couple of years on that node will be fine, the second year will be more mature, and then 2018 would be a good time for 10nm.
 
Or they might have 5-year-old off-the-shelf x64 parts.
That could very well be the case, but I wouldn't be shocked either way.

If a company wants its console to stand out from the crowd, then adding a hardware ray tracing unit alongside a typical GPU would be a great way to do it. Going pure ray tracing at this stage is crazy, but a hybrid has a number of advantages. A hybrid also saves money and time in game development costs, which could attract more devs. The technology is there and a GPU company is pushing it, but who knows what path the consoles will take.
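
Purely as a toy illustration of what "hybrid" means here (a rasteriser resolves primary visibility, rays are fired only for secondary effects like shadows), a minimal Python sketch; the scene and every name in it are invented for illustration, not any console's actual pipeline:

# Toy "hybrid" renderer: pretend a rasteriser has already decided which
# surface points are visible, then ray trace only the secondary effect
# (a single shadow ray per point). No real pipeline is this simple.

import math

LIGHT = (0.0, 10.0, 3.0)            # point light, directly above the scene
SPHERE = ((0.0, 1.0, 3.0), 1.0)     # occluding sphere: (centre, radius)

def shadow_ray_blocked(origin, target, sphere):
    """True if the segment from origin to target intersects the sphere."""
    (cx, cy, cz), r = sphere
    ox, oy, oz = origin[0] - cx, origin[1] - cy, origin[2] - cz
    dx, dy, dz = (target[i] - origin[i] for i in range(3))
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - r*r
    disc = b*b - 4.0*a*c
    if disc < 0.0:
        return False                 # shadow ray misses the occluder
    t1 = (-b - math.sqrt(disc)) / (2.0*a)
    t2 = (-b + math.sqrt(disc)) / (2.0*a)
    return t2 > 1e-4 and t1 < 1.0    # hit lies between the point and the light

def shade(point):
    """Hybrid step: the rasteriser found `point`; one ray adds the shadow."""
    return 0.1 if shadow_ray_blocked(point, LIGHT, SPHERE) else 1.0

# Ground-plane points (y = -1) that the "rasteriser" marked as visible:
for p in [(-2.0, -1.0, 3.0), (0.0, -1.0, 3.0), (2.0, -1.0, 3.0)]:
    print(p, "->", "shadowed" if shade(p) < 1.0 else "lit")

The attraction of the hybrid shows even in the toy: the expensive intersection test only runs for the effects rasterisation handles badly, not for every pixel of primary visibility.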
 