
Nvidia GeForce 'Maxwell' Thread

Soldato
OP
Joined
2 Jan 2012
Posts
11,996
Location
UK.
It would make far more sense, based on the currently known information, to have a round of GM204 on 28nm (the 750 must have been done for a reason, as they wouldn't have done that lightly given the whole lithography deal), then the full-fat GK110 replacement on 16nm (given the whole bitching about 20nm and full-fat Maxwell suitability).

Maybe. TSMC are going to hit volume production of 16nm in 2015. God knows how long we'd have to wait for GPUs on 16nm though... Q4 2015? :eek:

Products manufactured using 16nm FinFET+ will offer up to 40 per cent speed improvement over chips made using 20nm technology.

^^ Could be a good reason to skip over 20nm...

http://www.kitguru.net/components/g...ess-vows-to-start-10nm-production-in-q4-2015/

I still think we'll see 28nm cards this year, and some 20nm cards starting early next year, with a Titan-moniker card in Feb 2015.

16nm just seems too far out?

2014 28nm Maxwell
2015 20nm Maxwell
2016 16nm Pascal

Seems more realistic to me.
 
Man of Honour
Joined
13 Oct 2006
Posts
92,179
Well, some sources claim that nVidia was forced to rework the low and mid-range for 28nm as TSMC couldn't fit capacity for both the high-end GPUs and the mid-range and below onto 20nm, so they had to choose one or the other (in which case 20nm GPUs would start appearing early, really early, next year).

It's known nVidia was not happy at all with 20nm for the full-fat Maxwell though - so it is (indeed) quite possible we'd see something like GTX 880 - 28nm (Sept 2014), Titan - 20nm (Jan/Feb 2015), GTX 980 (full-fat Maxwell) - 16nm (2H 2015).

EDIT: Loads of edits as I've got a crappy cold and not making too much sense.

EDIT2: Given the tech they want to put on Pascal I'm not sure we'll see it before sub 16nm anyhow.
 
Soldato
OP
Joined
2 Jan 2012
Posts
11,996
Location
UK.
Well, some sources claim that nVidia was forced to rework the low and mid-range for 28nm as TSMC couldn't fit capacity for both the high-end GPUs and the mid-range and below onto 20nm, so they had to choose one or the other (in which case 20nm GPUs would start appearing early, really early, next year).

It's known nVidia was not happy at all with 20nm for the full-fat Maxwell though - so it is (indeed) quite possible we'd see something like GTX 880 - 28nm (Sept 2014), Titan - 20nm (Jan/Feb 2015), GTX 980 (full-fat Maxwell) - 16nm (2H 2015).

EDIT: Loads of edits as I've got a crappy cold and not making too much sense.

EDIT2: Given the tech they want to put on Pascal I'm not sure we'll see it before sub 16nm anyhow.

Yeah, one thing is for sure though: whatever high-end cards Nvidia and AMD put out next are going to have ridiculous performance. The 290/290X and 780/780 Ti/Titan already have awesome performance, and this next round of high-end cards is going to be even better. GPUs have come a long way in the past few years. The race for decent performance at 4K should result in even better performance improvements over the next few years. We could see the lower-end cards being able to cope with 1080p with ease.
 
Man of Honour
Joined
13 Oct 2006
Posts
92,179
If they start slashing things like ROPs to fit stuff onto processes they weren't designed for, we aren't gonna see great 4K performance for a while :S
 
Man of Honour
Joined
13 Oct 2006
Posts
92,179
I personally think GPU advancements have slowed down a lot. The market is full of rebrands.

AFAIK it's more a foundry issue than a GPU vendor one - nVidia have had 20nm design kits and a complete Maxwell design for ages; I forget how long exactly now, but it must be in the region of 2 years.
 
Soldato
Joined
1 Apr 2010
Posts
3,034
AFAIK it's more a foundry issue than a GPU vendor one - nVidia have had 20nm design kits and a complete Maxwell design for ages; I forget how long exactly now, but it must be in the region of 2 years.

Either way we are still being affected, and I am not only talking about 20nm being late to the party.

How about Nvidia releasing the 680 as a flagship card, due to the 7970 not being released at its best performance? Also the longer gaps between next-gen releases. The 7970 was AMD's top card for way too long (nearly two years). The GTX 580 was just a refresh of the GTX 480 and that lasted way too long. IMO things have slowed down a lot and prices, especially from Nvidia, have got silly.

I can see why things have slowed right down, as these companies can recoup more cash from each cycle instead of moving forward and eating up more R&D costs. Slowing the market down is intentional in my eyes.

Either way I just want new, fast top end tech and not the drip feeding we have been getting lately. :(
 
Soldato
OP
Joined
2 Jan 2012
Posts
11,996
Location
UK.
Either way we are still being affected, and I am not only talking about 20nm being late to the party.

How about Nvidia releasing the 680 as a flagship card, due to the 7970 not being released at its best performance? Also the longer gaps between next-gen releases. The 7970 was AMD's top card for way too long (nearly two years). The GTX 580 was just a refresh of the GTX 480 and that lasted way too long. IMO things have slowed down a lot and prices, especially from Nvidia, have got silly.

I can see why things have slowed right down, as these companies can recoup more cash from each cycle instead of moving forward and eating up more R&D costs. Slowing the market down is intentional in my eyes.

Either way I just want new, fast top end tech and not the drip feeding we have been getting lately. :(

I disagree - the GTX 680 was still a decent improvement over the GTX 580 in performance (http://www.anandtech.com/bench/product/772?vs=767), and also in performance per watt. Likewise, the 7970 was night and day over the 6970 (http://www.anandtech.com/bench/product/509?vs=508).

Then the GTX 780 / 780 Ti and R9 290 / R9 290X again improved significantly over the GTX 680 and 7970.

Compare that to CPU progress and you're talking maybe 5% IPC improvement per new arch, and AMD seem to be going backwards :(

GPUs have progressed well even with die shrinks not being viable.
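
To put rough numbers on that comparison - a quick back-of-envelope sketch in Python; the ~5% CPU figure is the one above, while the ~30% per-generation GPU figure is just an illustrative assumption, not measured data:

```python
# Compounding ~5% CPU IPC per architecture vs an assumed ~30% GPU uplift
# per generation, over four generations.
cpu_gain_per_gen = 1.05   # ~5% IPC uplift per CPU arch (figure from the post above)
gpu_gain_per_gen = 1.30   # ~30% uplift per GPU generation (illustrative assumption)
generations = 4

cpu_total = cpu_gain_per_gen ** generations
gpu_total = gpu_gain_per_gen ** generations

print(f"CPU after {generations} gens: ~{cpu_total:.2f}x")  # ~1.22x
print(f"GPU after {generations} gens: ~{gpu_total:.2f}x")  # ~2.86x
```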
 
Man of Honour
Joined
13 Oct 2006
Posts
92,179
480->580 was pretty much in line with a typical refresh cycle. Kepler has been an odd one though - the 680 lags behind what we normally would have seen moving from one high-end generation to a new high-end generation, whereas the GK110 cards are above what we would normally have got - if you put them into previous-generation perspective they'd have been slightly ahead of the refresh cards like the 285, 580, etc.

I'm 99% certain the 7970 and 680 didn't happen by accident and that the 7970 was originally the 7870, etc. In the long run it might be better for everyone though, if it gave the companies better room to recoup costs during the economic downturn.
 
Soldato
Joined
1 Apr 2010
Posts
3,034
480->580 was pretty much in line with a typical refresh cycle. Kepler has been an odd one though - the 680 lags behind what we normally would have seen moving from one high-end generation to a new high-end generation, whereas the GK110 cards are above what we would normally have got - if you put them into previous-generation perspective they'd have been slightly ahead of the refresh cards like the 285, 580, etc.


The 580 was defo a refresh more than a next gen, which I agree with, however they kept the 580 for way too long. Gone are the days of huge performance jumps, i.e. 7800 to 8800 GTX, 9800 to GTX 280, GTX 280 to GTX 480, 3870 to 4870 or 4870 to 5870.


Boom - the 680 performance jump was small compared to the GTX 280 to GTX 480, plus it had a 256-bit bus. I really do believe that if AMD had released the 7970 with the drivers we have today, Nvidia would have released something different than the 680 GK104 chip.

CPU progression has never been the same as GPU progression, but I do agree that it has been very poor due to the lack of competition.
 
Man of Honour
Joined
13 Oct 2006
Posts
92,179
I think one of the other things that made a big difference in the past was feature leaps - the 7000 series to 8000 series saw the inclusion of shader pipelines in place of fixed-function hardware, which made a huge difference in performance and visual fidelity beyond just the raw performance increase, and the 200 series again saw a big increase in compute/shader capabilities (along with a jump from ~400GFlop to ~950GFlop), which made a big apparent difference in games that pushed the envelope, and so on.

Fermi and Kepler, in that regard, aren't so different from a consumer perspective, so there's no feature leap to top up the raw performance difference in the same way.
 
Soldato
Joined
3 Feb 2012
Posts
14,413
Location
Peterborough
I really do believe that if AMD had released the 7970 with the drivers we have today, Nvidia would have released something different than the 680 GK104 chip.

CPU progression has never been the same as GPU progression, but I do agree that it has been very poor due to the lack of competition.

The 7970 wasn't THAT much faster than the 680 with the newer drivers. Sure it was faster but not that much.
 
Soldato
Joined
26 Apr 2003
Posts
5,744
Location
West Midlands
I really do miss the feature bumps over the speed bumps - they made the biggest difference for me growing up with PC gaming.

Seeing proper texture filtering go from nearest-neighbour to bilinear, jaggies to AA, flat textures to bump maps and pixel shaders. Sure, we've had tessellation and the like, but it hasn't been anywhere near the difference that other landmark features provided. It's a shame.
 
Soldato
Joined
1 Apr 2010
Posts
3,034
The 7970 wasn't THAT much faster than the 680 with the newer drivers. Sure it was faster but not that much.



That's the point though. Nvidia would never have released their top-end card slower than AMD's flagship, as Nvidia always like to hold the crown (even if it is a small lead). When they did release the 680 it was slightly ahead of the 7970, and then AMD sorted the drivers out, which put the 7970 just slightly ahead. The 680 IMO was never meant to be Nvidia's flagship, and really we all should have got a full-fat chip instead.


Believe the hype or not, I still think GPU advancement has slowed down and both companies are happy to drip-feed tech as it benefits their pockets. Why bring out new tech every twelve months with big performance jumps when they can lengthen the cycles, reduce performance jumps, up the prices and get away with it?
 
Soldato
Joined
31 Dec 2010
Posts
2,604
Location
Sussex
The 7970 wasn't THAT much faster than the 680 with the newer drivers. Sure it was faster but not that much.

The drivers weren't the only problem though. For some reason AMD simply refuse to do any binning of their parts, not even for the 7990 or R9 295X2. Nvidia, on the other hand, bin like crazy. If AMD had been willing to bin Tahiti they could have released the 7970 at GHz Edition clocks or better with the same power and temps as the released 7970. Plenty of people are running the 7970 at 1.05V or lower, but AMD released the 7970 GHz BIOS at 1.25V.
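
To put a rough number on why that voltage gap matters - a back-of-envelope sketch assuming dynamic power scales with frequency times voltage squared and ignoring leakage; the voltages are the ones quoted above:

```python
# Dynamic CMOS power scales roughly with C x V^2 x f, so at the same clock
# the voltage term dominates the comparison.
v_stock = 1.25    # volts - the 7970 GHz BIOS voltage quoted above
v_binned = 1.05   # volts - what many 7970s reportedly run stable at

relative_power = (v_binned / v_stock) ** 2
print(f"Dynamic power at {v_binned}V vs {v_stock}V: ~{relative_power:.2f}x")  # ~0.71x, ~29% less
```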

The other thing is that, while AMD are very generous in giving quite good DP performance where Nvidia disable it on their gaming cards, the bigger issue is that AMD do not seem to be doing the same amount of power gating for the FP64 parts of the chip, which is costing them a lot in terms of perf/watt.

The strange thing is that, for their APUs, AMD are actually very good at power gating, so their APUs often idle better than Intel's, which, considering their lower R&D budget and the importance Intel has been attaching to mobile recently, is quite impressive.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
Why bring out new tech every twelve months with big performance jumps when they can lengthen the cycles, reduce performance jumps, up the prices and get away with it?

Why? Because of process tech. Every significant performance improvement has come from doubling the transistor count, and that happens with a change in process node, simple as that. You can't double performance within the boundaries of the same process node (obviously talking about the maximum realistic die size that can be made on a process in terms of power, size and yields).

They CAN'T bring out new tech every 12 months on the same process node with huge performance jumps - it's literally impossible; it has never happened and won't ever happen, because they can't.

It's taken ARM 2-3 years to get from the A15 to the A57, which is around a 40% IPC improvement, and it's only being done on a new node because that is the only way to make the chip in roughly the same power/die size as they could make the A15 at 28nm. It's the same for all silicon tech once you reach the relative maximum power/die-size trade-offs for a given market.

ARM chips came closer together and with bigger performance boosts before they hit those die-size/power limits; since they have, it took a new node to deliver the A15 with the right performance/size at 28nm, and 20nm for the A57. The A57 would use too much power and be too big to be financially viable at 28nm, the same way the A15 on 40nm isn't viable.

We have 7-billion-transistor GPUs at around 500mm^2 and 250W on 28nm. You CAN'T make a 14-billion-transistor GPU at any size at less than 400W on 28nm... you can make it at 500mm^2 and roughly 250W at 16nm. That is where the doubling of transistor count will happen.
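
Rough numbers behind that claim - a back-of-envelope sketch that takes the 7-billion / 500mm^2 / 250W figures above as given, assumes power scales roughly linearly with transistor count at the same node and clocks, and uses illustrative ~2x density / ~0.5x power-per-transistor factors for 16nm (not foundry figures):

```python
# Back-of-envelope: doubling transistor count on 28nm vs moving to 16nm.
transistors_28nm = 7e9    # ~7 billion transistors (big 28nm GPU, per the post)
die_area_28nm = 500.0     # mm^2, already near the practical reticle/yield limit
power_28nm = 250.0        # watts

density_28nm = transistors_28nm / die_area_28nm      # transistors per mm^2

target = 14e9             # doubled transistor count
area_28nm = target / density_28nm                    # ~1000 mm^2 - not buildable
power_scaled_28nm = power_28nm * target / transistors_28nm   # ~500 W at the same clocks

# Assumed 16nm factors (illustrative only): ~2x density, ~0.5x power per transistor.
area_16nm = area_28nm / 2.0           # back to ~500 mm^2
power_16nm = power_scaled_28nm * 0.5  # back to ~250 W

print(f"14bn transistors on 28nm: ~{area_28nm:.0f} mm^2 at ~{power_scaled_28nm:.0f} W")
print(f"14bn transistors on 16nm (assumed): ~{area_16nm:.0f} mm^2 at ~{power_16nm:.0f} W")
```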

Every "real" next-gen architecture that gives, say, 70%+ more performance has come with roughly double the transistor count.


As for the 7970, it was never meant to be the 7870 - it's a silly suggestion. If it was, and there was a "real" 7970 (i.e. the 290X), it wouldn't have come out that much later.


The GTX 680 was always going to be marketed as such, and it was always really the midrange part. Nvidia had the GTX 280 yielding lower than they wanted, priced higher and not as good. The GTX 480 was a disaster - it was stupidly late and had awful yields - and their higher-volume GTX 460 came so late because their plan for years was high end first, with low end 6 months later and salvaged parts filling the gap. 500mm^2+ parts on processes below 65nm were too difficult to get out early with high yields.

GK104 was an active plan to get the mainstream part out first because it was smaller and would give much better yields than the high-end part. This was coming after 2 generations of difficulties trying to make a 500mm^2 part early in a process cycle. Nvidia was ALWAYS going to delay the high-end, low-yield part; they were always going to make the mainstream part first, and because of how Nvidia are, they were always going to market it as an uber-fast part and rinse customers for it.

They didn't delay the Titan and just go with the GTX 680 - Titan wasn't ready and wouldn't have yielded well enough for a high-volume part. AMD didn't upgrade a midrange part to high end and then "delay" the high-end part by the best part of two years, because that is nonsense: you don't delay a part that is ready and could be making you money, ever. You also don't put a 384-bit bus or go for 250W on a midrange part.
 
Caporegime
Joined
18 Oct 2002
Posts
39,526
Location
Ireland
The drivers weren't the only problem though. For some reason AMD simply refuse to do any binning of their parts, not even for the 7990 or R9 295X2. Nvidia, on the other hand, bin like crazy.

Supposedly they did for the 7990, though considering how many people I've seen saying their 7990 runs hot and loud, as opposed to the cool and quiet review samples, it must have been a pretty short binning process. If anything they just seemed to use any core they had on the 7990 to shift inventory on an outgoing GPU.
 
Soldato
Joined
1 Apr 2010
Posts
3,034
Wow, a wall of text. Essaymaster is on form today. :D

You're good at writing but not at reading other people's posts. No one mentioned doubling performance or that the 7970 was meant to be the 7870. :confused:



You don't delay a part that is ready and could be making you money, ever. You also don't put a 384-bit bus or go for 250W on a midrange part.


Of course you do, if your competition has nothing that is wiping the floor with your products and your current products are selling well. If anything it's good business sense to hold it back and milk your current products for as much as possible - hence more profit from that cycle.
 
Soldato
Joined
3 Feb 2012
Posts
14,413
Location
Peterborough
That's the point though. Nvidia would never have released their top-end card slower than AMD's flagship, as Nvidia always like to hold the crown (even if it is a small lead). When they did release the 680 it was slightly ahead of the 7970, and then AMD sorted the drivers out, which put the 7970 just slightly ahead. The 680 IMO was never meant to be Nvidia's flagship, and really we all should have got a full-fat chip instead.

Believe the hype or not, I still think GPU advancement has slowed down and both companies are happy to drip-feed tech as it benefits their pockets. Why bring out new tech every twelve months with big performance jumps when they can lengthen the cycles, reduce performance jumps, up the prices and get away with it?

If the 780 / Titan had been ready at that point they'd have released it - how long would it have left them with a much faster chip than AMD? I think the 680 / 7970 just reflected the realities at the time.

nVidia wouldn't have released the 680 thinking that it would be faster than the 7970 forever. The specs alone show that the 7970 should have been the better card. Luckily AMD's driver team got their backsides in gear and got the hierarchy to about where it should have been :).
 