GeForce GTX 1180/2080 Speculation thread

Dream on. They'd move the tier up one notch and that's what we'd get. The Titan is hardly faster than the Ti.
I think you didn't get my point. I was basically saying the difference we got between the Titan and the 80 Ti is no more than the difference between the old-days big-chip flagship 70 and 80 cards, such as the GTX 470 vs GTX 480 or GTX 570 vs GTX 580.

They pushed all their cards at least one tier upward with the new model naming approach so they could sell lesser cards at higher prices.
 
The Titan XM was available well before the 980 Ti and has sufficient VRAM to still be useful today.

Useful for what though? I moved from a Gigabyte 980Ti G1 Gaming to a 1080Ti FTW3, just to gain a cooler and quieter 1440p/60Hz experience. The 6GB of VRAM that the 980Ti comes with was never an issue.
 
Yup, the 980 Ti is still a well-balanced card in terms of graphical power and VRAM :)

+1

Not worth the premium really. For the difference in price one could have easily purchased a 980Ti, sold it and purchased a 1080Ti when it came out.
 
Any thoughts as to what kind of power consumption the 1180 might have? Do you think we're likely to see less than the current cards, or is the same figure of 250 W more probable?
 
Likely the same or slightly more. Note the 1080 has a max power draw of 180 W; only the 1080 Ti has a max draw of 250 W.
 
1180 and 1180 Ti are likely to have much the same power draw as your figures for the 1080 and 1080 Ti.

The Titan V does not draw that much power either despite having a huge chip at its core.

A quoted TDP of 250 W is pretty good for something that size. I can't see the next-gen gaming cards using any more than that.
 
Obviously HBM had a hand in keeping power requirements down. Still an utterly ridiculously priced card regardless.
 
I think once we see GDDR6-equipped Titans used for gaming, with the same 5120 SP cores the Titan V uses, we will see this myth about how much power HBM2 saves debunked. I really don't think it saves much in practice at all, but the bandwidth is needed for a professional card.

As to the price, once NVidia remove the Tensor cores, DP cores and HBM2, the gaming cards will sell for a lot less.
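
For a rough sense of scale while we wait for such a card, here is a back-of-envelope sketch in Python. Everything in it is an assumption for illustration: memory_power_watts is just a helper name for this post, and the energy-per-bit constants are ballpark figures of the sort memory vendors quote (HBM2 roughly 3x more efficient per bit than GDDR5), not measurements.

    # Back-of-envelope memory-subsystem power estimate.
    # ASSUMED energy-per-bit, vendor-slide ballpark, not measured:
    #   GDDR5 ~ 20 pJ/bit, HBM2 ~ 7 pJ/bit (DRAM + controller + PHY).
    PJ_PER_BIT = {"GDDR5": 20e-12, "HBM2": 7e-12}

    def memory_power_watts(bandwidth_gb_s, mem_type):
        """Approximate memory power at a given sustained bandwidth."""
        bits_per_second = bandwidth_gb_s * 1e9 * 8
        return bits_per_second * PJ_PER_BIT[mem_type]

    for mem in ("GDDR5", "HBM2"):
        # ~480 GB/s is roughly what a 1080 Ti or a Vega 64 offers.
        print(mem, round(memory_power_watts(480, mem)), "W")
    # GDDR5 77 W, HBM2 27 W under these assumptions.

On those assumed figures the saving is real but measured in tens of watts at ~500 GB/s: noticeable on a 250 W card, not transformative.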
 
It's not just about power; bandwidth, as mentioned, is also a feature, plus being able to make the PCBs shorter. GDDR adds a lot of complexity to a graphics card's PCB layout compared to having it all bundled onto a small area around the GPU die.

As for power, there's this video that talks about it; it's not a myth at all:

 
And HBM adds to its cost with the cost and complexity of the interposer. If there's an issue with the interposer you have wasted a GPU and the HBM dies; the classic issue of integration. Now if there was a way of making the HBM stacks part of the main die itself, at least you would have the option of salvaging parts. Then there's the physical fragility of the exposed dies compared to normal GDDR.

Swings and roundabouts.
 
It's not just about power; bandwidth, as mentioned, is also a feature, plus being able to make the PCBs shorter. GDDR adds a lot of complexity to a graphics card's PCB layout compared to having it all bundled onto a small area around the GPU die.

As for power, there's this video that talks about it; it's not a myth at all:

The bandwidth is a bit of a myth though. The 1080 Ti has more bandwidth than Vega 64, and GDDR6 will increase this a lot.

The PCB complexity is a trade-off, and I wouldn't even say complexity but size. GDDR needs more PCB space, and there are more visible connections. But mounting GDDR modules is very simple, failure rates are low, and if a module is not properly seated it is easy to repair after testing. If the problem lies within the die, then that die can be used for a lower-end SKU.

With HBM, the PCB is smaller and it looks simpler, but the complexity is moved to the HBM chips, the interposer and the GPU die. In total it is much more complex, much more fragile and far harder to manufacture, and when there is a failure the whole interposer, HBM and GPU become a paperweight (imagine a Vega 64 with only one working HBM stack, lol).

This is kind of obvious, otherwise HBM would have been used a long time ago and would be far cheaper. The whole concept of stacked chips and an interposer is very obvious and has been around since the late 70s (TSV technology was patented in 1962!). But the process complexity has made the technology prohibitive until recently.
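
As a quick sanity check on the bandwidth comparison above: peak bandwidth is just bus width times per-pin data rate. The specs below are the commonly listed ones for the two cards (assumed here, worth double-checking), and peak_bandwidth_gb_s is just a helper name for this post.

    # Peak memory bandwidth (GB/s) = bus width (bits) * per-pin rate (Gbps) / 8.
    def peak_bandwidth_gb_s(bus_width_bits, gbps_per_pin):
        return bus_width_bits * gbps_per_pin / 8

    print(peak_bandwidth_gb_s(352, 11.0))   # GTX 1080 Ti, GDDR5X: 484.0 GB/s
    print(peak_bandwidth_gb_s(2048, 1.89))  # Vega 64, HBM2: ~483.8 GB/s

By those numbers the two are effectively tied rather than the 1080 Ti being clearly ahead, so the stronger version of the point is GDDR6: at the 14-16 Gbps rates being talked about, even a 256-bit bus lands at 448-512 GB/s.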
 
And HBM adds to its cost with the cost and complexity of the interposer. If there's an issue with the interposer you have wasted a GPU and the HBM dies; the classic issue of integration. Now if there was a way of making the HBM stacks part of the main die itself, at least you would have the option of salvaging parts. Then there's the physical fragility of the exposed dies compared to normal GDDR.

Swings and roundabouts.


You can't really salvage HBM GPUs since you just don't have redundancy. And as you say, the bad thing is that there is no way to remount a bad HBM stack, etc.
 
It's not just about power; bandwidth, as mentioned, is also a feature, plus being able to make the PCBs shorter. GDDR adds a lot of complexity to a graphics card's PCB layout compared to having it all bundled onto a small area around the GPU die.

As for power, there's this video that talks about it; it's not a myth at all:


When NVidia launch a gaming Titan with 5120 SP cores and GDDR6 memory, that will be the time to compare power usage to the Titan V; until then it is all theory.

I happen to believe that HBM actually increases power consumption a bit due to its proximity to the GPU core, resulting in poorer cooling and higher running temps. As we all know, the hotter something runs, the more power is needed to make it happen.
 
Nvidia have increased the limit on their existing range in their web shop from 2 to 10 per person. Seems they are trying to shift as much stock as possible before the end of the month.
 
I happen to believe that HBM actually increases power consumption a bit due to its proximity to the GPU core, resulting in poorer cooling and higher running temps. As we all know, the hotter something runs, the more power is needed to make it happen.


Somehow, I think the team of engineers would have realised that in the decade-plus it took to get it to production, if that were the case.
 
The bandwidth is a bit of a myth though. The 1080 Ti has more bandwidth than Vega 64, and GDDR6 will increase this a lot.

Vega has 50% of the memory bus width of Fiji: 2048-bit vs 4096-bit. Also, historically, AMD haven't been able to optimise their chips on the first iteration of a new memory type.
With GDDR5, they had the RV790 sticking to full memory speed at idle.

So: 50% less memory bandwidth and, on top of that, clocks that are slower in their own right.
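
For what it's worth, the halved bus width is largely offset by HBM2's higher per-pin rate. A rough comparison with the commonly listed figures (assumptions, not verified here):

    # Same arithmetic as earlier: bus width (bits) * per-pin rate (Gbps) / 8.
    print(4096 * 1.0 / 8)    # Fury X (Fiji), HBM1: 512.0 GB/s
    print(2048 * 1.89 / 8)   # Vega 64, HBM2: ~483.8 GB/s

So if those per-pin rates are right, the bandwidth deficit versus Fiji is nearer 5% than 50%; the 50% figure applies to the bus width alone.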
 