The Nvidia Hopper thread

Soldato
Joined
1 Apr 2014
Posts
18,631
Location
Aberdeen
Well, someone had to start it!

All we have so far is this article from WCCFTech, not the most reliable source. Not even a reliable source, come to think of it.

TLDR: Hopper will see a change to MCM architecture a la Ryzen.
 
Soldato
Joined
6 Feb 2019
Posts
17,582
Multiple dies on a single card. Let's hope that, just like for CPUs, we get an explosion in core counts. Let's see 20,000 CUDA cores on RTX 4000
 
Permabanned
Joined
2 Sep 2017
Posts
10,490
this article from WCCFTech, not the most reliable source. Not even a reliable source, come to think of it.

TLDR: Hopper will see a change to MCM architecture a la Ryzen.

Unfortunately not.

They said: "NVIDIA's architectures are always based on computer pioneers and this one appears to be no different. Nvidia's Hopper architecture is based on Grace Hopper who was one of the pioneers of computer science and one of the first programmers of Harvard Mark 1 and inventor of the first linkers."

Except that Tesla, Fermi, Kepler, Maxwell, Pascal, Turing and Ampere have nothing to do with computers. Actually, the computer was invented long after their time.

Multiple dies on a single card. Let's hope that, just like for CPUs, we get an explosion in core counts. Let's see 20,000 CUDA cores on RTX 4000

Many people have been arguing that chiplets are not yet suitable for GPUs because of CrossFire/SLI-style issues with synchronisation between the separate dies.
20K cores is extremely unlikely because you hit very strong diminishing returns after 4096 shaders (64 CUs), which was Vega's maximum, and for good reason.

A non-monolithic design could be strictly for HPC/data-centre acceleration and not for gaming!

This thread could be 2 years old before we get any proper credible info.

Well, Hopper as a name was revealed by Nvidia during the conference call on its Q3 2019 financial results. But yeah, a 2021-2022 time frame sounds pretty likely.
 
Soldato
Joined
6 Feb 2019
Posts
17,582
Unfortunately not.

They said: "NVIDIA's architectures are always based on computer pioneers and this one appears to be no different. Nvidia's Hopper architecture is based on Grace Hopper who was one of the pioneers of computer science and one of the first programmers of Harvard Mark 1 and inventor of the first linkers."

Except that Tesla, Fermi, Kepler, Maxwell, Pascal, Turing and Ampere have nothing to do with computers. Actually, the computer was invented long after their time.



Many people have been arguing that chiplets are not yet suitable for GPUs because of CrossFire/SLI-style issues with synchronisation between the separate dies.
20K cores is extremely unlikely because you hit very strong diminishing returns after 4096 shaders (64 CUs), which was Vega's maximum, and for good reason.

A non-monolithic design could be strictly for HPC/data-centre acceleration and not for gaming!



Well, Hopper as a name was revealed by Nvidia during the conference call on its Q3 2019 financial results. But yeah, a 2021-2022 time frame sounds pretty likely.

Speak for yourself. Turing scales well beyond your arbitrary core count number, which may be an AMD architecture issue.
 
Permabanned
Joined
2 Sep 2017
Posts
10,490
Speak for yourself. Turing scales well beyond your arbitrary core count number, which may be an AMD architecture issue.

The arbitrary core count number is only in your trolling imagination! Show some respect for Amdahl's Law :D

[Image: Amdahl's Law speedup vs. number of processors]
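
For anyone who wants to plug numbers in, Amdahl's Law gives speedup = 1 / ((1 - p) + p / n), where p is the fraction of the work that can run in parallel and n is the core count. A minimal Python sketch, using an illustrative 99% parallel fraction rather than any measured GPU figure:

Code:
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n)
# p = parallel fraction of the workload, n = number of cores
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Illustrative only (p = 0.99 is an assumption, not a measurement):
# even at 99% parallel work, going from 4096 to 20000 cores buys very little.
for n in (1024, 4096, 9216, 20000):
    print(n, round(amdahl_speedup(0.99, n), 1))
# 1024 -> ~91x, 4096 -> ~98x, 9216 -> ~99x, 20000 -> ~100x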
 
Soldato
Joined
6 Feb 2019
Posts
17,582
The arbitrary core count number is only in your trolling imagination! Show some respect for Amdahl's Law :D

[Image: Amdahl's Law speedup vs. number of processors]

But by that logic a Titan RTX should be no faster than a 2080 Ti, because they've both exceeded this 4096 "core limit". Yet it's a good 10% faster with only about 6% more cores.
 
Soldato
Joined
28 May 2007
Posts
10,067
I've never seen it bottleneck at 4K, even with a Ryzen 3000 CPU. What game gets bottlenecked at 4K?

Just from what Kaap has shown on here. You can see the CPU bottleneck at 1080p/1440p, and from what Kaap has shown there is still more in the tank at 4K as well. You can see the trend in reviews, and I think the GPU still has more in the tank at 4K too, as it keeps stretching its lead as the resolution goes up. @Kaapstad is the best person to answer, though.

You can see it in some of these games with the Titan RTX vs 2080 Ti. Literally no difference at times. Sometimes the extra spec makes a difference and sometimes it doesn't. More DX12/Vulkan games could come into play, which would probably help, but I think DX11 Witcher 3 showed the biggest advantage here.

https://www.youtube.com/watch?v=4rYIpEEabDk
 
Permabanned
Joined
2 Sep 2017
Posts
10,490
But by that logic a Titan RTX should be no faster than a 2080 Ti, because they've both exceeded this 4096 "core limit". Yet it's a good 10% faster with only about 6% more cores.

That is because their performance depends on other things, not only the "arbitrary" core count:

RTX 2080 Ti - 4352 shaders | 272 TMUs | 88 ROPs | 11 GB on a 352-bit memory interface |
1545 MHz boost / 1750 MHz memory

Titan RTX - 4608 shaders | 288 TMUs | 96 ROPs | 24 GB on a 384-bit memory interface |
1770 MHz boost / 1750 MHz memory

The Titan clocks at roughly 14.6% higher boost frequency and has more memory bandwidth, more available memory, and more TMUs and ROPs!
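
For what it's worth, those differences can be worked out directly from the specs quoted above. A quick sketch; the bandwidth figures assume 14 Gbps effective GDDR6 on both cards, which is not stated in the post:

Code:
# Deltas between Titan RTX and RTX 2080 Ti, using the figures quoted above.
# Assumption: 14 Gbps effective GDDR6 on both cards (not stated in the post).
ti    = {"shaders": 4352, "boost_mhz": 1545, "bus_bits": 352}
titan = {"shaders": 4608, "boost_mhz": 1770, "bus_bits": 384}

def pct_increase(a, b):
    return round((b - a) / a * 100, 1)

def bandwidth_gb_s(bus_bits, gbps=14):
    return bus_bits * gbps / 8  # GB/s

print("shaders:  +", pct_increase(ti["shaders"], titan["shaders"]), "%")     # +5.9%
print("boost:    +", pct_increase(ti["boost_mhz"], titan["boost_mhz"]), "%") # +14.6%
print("bandwidth:", bandwidth_gb_s(ti["bus_bits"]), "vs",
      bandwidth_gb_s(titan["bus_bits"]), "GB/s")                             # 616.0 vs 672.0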
 
Man of Honour
Joined
21 May 2012
Posts
31,940
Location
Dalek flagship
That is because their performance depends on other things, not only the "arbitrary" core count:

RTX 2080 Ti - 4352 shaders | 272 TMUs | 88 ROPs | 11 GB on a 352-bit memory interface |
1545 MHz boost / 1750 MHz memory

Titan RTX - 4608 shaders | 288 TMUs | 96 ROPs | 24 GB on a 384-bit memory interface |
1770 MHz boost / 1750 MHz memory

The Titan clocks at roughly 14.6% higher boost frequency and has more memory bandwidth, more available memory, and more TMUs and ROPs!

Both the 2080 Ti and RTX Titan boost to about the same clock speed, so the default clock speeds are not important.

The performance difference between the cards is about 7% or 8%.

When comparing the cards, care must be taken to compare like for like. For example, my RTX Titans have their stock air coolers and BIOS, whereas most 2080 Ti cards have aftermarket coolers or water cooling, and even shunt mods and extreme BIOS setups.

When comparing fast cards, 2160p at ultra settings is best to use as there is no CPU bottleneck.

A CPU bottleneck, even using DX12, becomes a pain at lower resolutions when the fps gets to around 120 to 130; each game and bench is slightly different.

As for, err, Amdahl's Law in that graph above, what a joke. If I use DX12 SLI on my RTX Titans in SOTTR at 2160p the scaling is really good despite using a total of 9216 cores.:eek::D:)
 
Soldato
Joined
6 Feb 2019
Posts
17,582
Well said Kaapstad. I think 4K is just trying very hard to justify why AMD is unable to build more powerful graphics cards - which has absolutely no relevance to Nvidia.
 
Man of Honour
Joined
13 Oct 2006
Posts
91,128
Many people have been arguing that chiplets are not yet suitable for GPUs because of CrossFire/SLI-style issues with synchronisation between the separate dies.
20K cores is extremely unlikely because you hit very strong diminishing returns after 4096 shaders (64 CUs), which was Vega's maximum, and for good reason.

A non-monolithic design could be strictly for HPC/data-centre acceleration and not for gaming!

People seem to have this vision of a Ryzen-like chiplet approach (as per the old nVidia Einstein diagrams) being the only approach to MCM, but it isn't - advances in substrate technology and sub-7nm nodes allow for all kinds of non-monolithic approaches.

Diminishing returns with core counts depend on the architecture and the kind of data you are processing. Older GPUs, for instance, would run into inefficiencies scaling beyond ~256 cores, while newer ones can scale to 2,000, 4,000, even 8,000 cores, and it can differ between compute and gaming and with the mix of processing a game requires. That is why "leaks" that simply multiply current core counts for next-gen GPUs are 9 times out of 10 fake.

With multiple GPU packages the efficiency question is how effectively you can chop up the workload, potentially bypassing to some degree the problem of utilising ever-increasing core counts. But we are a long way from using multiple discrete GPU packages in some kind of SLI/CrossFire approach without all the same old problems, so it is unlikely to be used in that fashion for gaming GPUs.
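
As a toy illustration of that trade-off (the constants are invented purely for illustration and do not describe any real GPU): if every extra die adds a fixed synchronisation cost per frame, the gain from splitting the work shrinks quickly.

Code:
# Toy model only: frame_time = serial + parallel / dies + sync_cost * (dies - 1)
# All constants are made up for illustration, not taken from any hardware.
def frame_time_ms(dies, serial=1.0, parallel=12.0, sync_cost=0.5):
    return serial + parallel / dies + sync_cost * (dies - 1)

for dies in (1, 2, 4, 8):
    print(dies, "dies:", round(frame_time_ms(dies), 2), "ms")
# 1 die: 13.0 ms, 2 dies: 7.5 ms, 4 dies: 5.5 ms, 8 dies: 6.0 ms
# -> past a point, the per-die sync overhead eats the gain from more dies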
 
Soldato
Joined
18 May 2010
Posts
22,376
Location
London
If GPUs continue to get more and more expensive, then one can argue we aren't seeing real advancement in this space.

More people will just buy consoles if this continues.

There's also a point at which the Nvidia market will not be able to sustain the price increases and a lot of people will just jump to AMD. I mean, if the current Navi cards had RT support, I don't think the 2000 series would be anywhere near as expensive as it currently is.
 