
NVIDIA Beats AMD to Market On HBM2 - Announces Tesla P100

[Tesla P100 announcement slides]
https://www.techpowerup.com/232248/nvidia-beats-amd-to-market-on-hbm2-announces-tesla-p100
 
I'm more bothered about which company brings it to gaming chips first, and whether it provides a performance gain in games over GDDR5X.
 

^^

This, although we already know Nvidia is too tight to add it to their gaming GPUs yet, as it would affect their massive bottom line.

Will probably first see HBM on an Nvidia GPU on a Titan X type card for the 'reasonable' price of £1200+. Coming in the future, peeps...
 

I'd rather have 12GB of GDDR5X than 8GB of HBM2.

VRAM usage is going up with 4K.
 
Won't really make a noticeable difference to games.

Memory bandwidth is much more important in some HPC work, like deep learning.
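For anyone wanting the actual numbers behind that bandwidth gap: peak theoretical bandwidth is just bus width times per-pin data rate. A quick sketch using commonly quoted figures (the specific clocks and bus widths here are assumptions from public spec sheets, not from this thread):

```python
def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps_per_pin):
    """Peak theoretical memory bandwidth in GB/s:
    (bus width in bits / 8 bits per byte) * per-pin data rate in Gbps."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

# Assumed figures: Titan X Pascal runs 10 Gbps GDDR5X on a 384-bit bus;
# Tesla P100 runs ~1.43 Gbps HBM2 across a 4096-bit bus (4 stacks x 1024 bits).
gddr5x = peak_bandwidth_gbs(384, 10.0)    # 480.0 GB/s
hbm2 = peak_bandwidth_gbs(4096, 1.43)     # ~732 GB/s

print(gddr5x, hbm2)
```

The point being that HBM2 gets its bandwidth from an extremely wide bus at a low per-pin rate, which is also where the power-per-bit saving comes from.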
 
This is very, very old news, Kaapstad.

This was all shown back in April 2016 by NVIDIA at GTC, and it has been available in their DGX-1 racks since then as well.
http://images.nvidia.com/content/technologies/deep-learning/pdf/Datasheet-DGX1.pdf

Hell, the documents that article is based on still show 2016 as well.
http://images.nvidia.com/content/tesla/pdf/nvidia-tesla-p100-datasheet.pdf



From GTC 2016
But as you say, it wasn't available as a standalone card, only as part of an expensive rack system, because early on yields and HBM availability were not sufficient.
No coincidence that Nvidia can now release a full GP102 part for gaming and a GP100 part for more 'mainstream' HPC use.
 
Nothing to do with being tight; HBM isn't needed on gaming cards, as they are showing.

Oh, so uninformed...

Even current Pascal with HBM would be a massive improvement: even less power consumption, massively higher bandwidth, smaller designs. I would much rather my GTX 1080 Ti had 8GB of HBM 2.0 than this GDDR5X stopgap. Nvidia have stated multiple times that HBM is too expensive for them to implement currently; it would affect their huge profit margins. Once Nvidia can get it just as cheap as GDDR5, you best believe they will go with HBM.
 

The Tesla P100 uses as much power as a full-fat Titan XP, despite having lower clocks and fewer cores. If HBM does reduce power usage, what on earth is the Tesla card doing with the rest of the watts it is using?
 

In what way does a P100, which has 3584 SP cores and 1792 DP cores, have fewer cores than a GP102, which has 3840 cores? Are we back to the part where you can't count? P100 is significantly larger than GP102, with more transistors, a higher bandwidth cost, and workloads that in general pull more data from off the card, so it SHOULD draw more power. HPC workloads are generally smaller work on much greater data sets, meaning pulling huge data across the PCIe bus, which means more power usage, and 64-bit cores have always used significantly more power than 32-bit cores. The entire reason for stripping FP64 out of Maxwell was power saving.

But you're right, HBM sucks, it uses more power and makes everything worse, it has worse latency... because you say so, and it makes Fury suck at 1080p... despite GDDR5 and GDDR5X cards like the 980 Ti/Titan XP/1080 also showing less of a performance difference at 1080p than at 4K versus the cards below them.

But then I'm responding to someone who thinks 3840 > 3584 + 1792.
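Spelling out the core-count arithmetic from the post above, using the counts as quoted in this thread:

```python
# Shader counts as quoted in the thread:
# GP102 (Titan XP) has unified FP32 cores; GP100 (Tesla P100)
# has FP32 cores plus dedicated FP64 (double-precision) cores.
gp102_fp32 = 3840
gp100_fp32 = 3584
gp100_fp64 = 1792

gp100_total = gp100_fp32 + gp100_fp64  # 5376

print(gp100_total > gp102_fp32)  # True: P100 has more cores in total
```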
 

Why don't you, just once, write something positive about Nvidia when they have some success? Then we will all stop thinking you are biased.

For example, don't you think the new Titan XP is a very interesting piece of engineering, where you could talk about the technical achievements and not just the price?
 


So... you can't explain yourself, is what you're saying? The post is the height of hypocrisy as well. You posted saying that HBM sucks... even after Nvidia use it, you're tainting HBM, a new and exciting technology, just because AMD helped develop it and used it first. This thread isn't about Titan XP performance; it's not about the Titan XP at all. You decided to have another pop at HBM and used the Titan XP shader count to try and say that HBM is somehow bad for the Tesla.

I responded on THAT subject, and surprisingly didn't use the opportunity of you trying to absurdly attack HBM yet again, to randomly post in praise of a Titan XP which isn't the subject matter of this thread or your post.

You are so incredibly biased against AMD that you are attacking a new technology in HBM at every opportunity, once again lying about the shader count difference between GP102 and P100 to try and imply that HBM doesn't help at all. You've posted this exact thing before, and it's been pointed out to you many times that P100 has drastically more shaders than a GP102... yet months and months after being informed you're wrong, you are purposefully lying to 'attack' HBM yet again.

But you accuse me of bias..... lol.
 

So many assumptions, this is not even worth replying to.

Just one thing, though: if you read my last post very carefully, you will realise that I did not offer an opinion about the Titan XP.
 
This is how I see it too.

Not using it on gaming GPUs may be why they were able to beat AMD to the workstation market.

The Unigine Superposition benchmark is coming out very soon; from what I understand of it, this bench does show the value of HBM and what it can do at higher resolutions.
 

Kaap, his reply to you was spot on. You got it wrong, so just admit that and move on. The Titan XP does not have more shaders, and GP100 uses more power due to having a lot more of them. Without HBM2, Nvidia may have had to lower clocks even more to keep the power down with GDDR5. Nvidia don't use a more expensive option unless there are benefits.
 

Have I?

How many of those cores does it use when doing various tasks?

Remember, the original Titans did DP whereas the 780 Ti did not, yet the power consumption was about the same.
 

In gaming that's not surprising, as DP compute is not much of a factor from what I understand. FP32 is important while gaming, not FP64. HBM reduces power consumption, true or false? NV used it for a reason, whether that's extra bandwidth, lower power draw or a smaller board. Maybe all three combined is why.
 