
NVIDIA ‘Ampere’ 8nm Graphics Cards

I don't know why Kaap is using the performance of his cards in SLI as a predictor of what RTX performance will be like in Ampere. Nvidia will have learned a lot from the Turing generation of cards. You can see how performance has increased through the year from software optimisations alone, so how big an increase will we see if they throw more hardware at it?

For me, I think there will be a small jump in Rasterization performance, but I fully expect Nvidia to massively increase the performance in Ray Tracing. I think they have to do one or the other - a large increase in Ray Tracing performance or a large increase in Rasterized performance - and since they have hung their hat on Ray Tracing, I think that's where the biggest performance improvements will be.

I am using SLI performance in SOTTR to highlight the fact that, despite exceptionally good scaling, Ray Tracing is still killing fps.

Even if Ampere cards were twice as good as an RTX Titan at Ray Tracing it would still not be enough.

For Ray Tracing to be acceptable, cards like the 3070 or 3080 would have to be 4x as fast at it as an RTX Titan, and that is not going to happen.

It would be totally unacceptable for only the 3080 Ti and the Ampere Titan to be just fast enough to run Ray Tracing at low fps - remember, there are 120 Hz 2160p monitors they are going to have to deal with.
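To put rough numbers on that (the 30 fps baseline here is just an assumption for illustration, not a benchmark result):

[CODE]
# Rough frame-budget arithmetic behind the "needs to be ~4x faster" claim.
# The 30 fps baseline for a Titan RTX at 2160p with RT on is an assumption,
# not a measured figure.
baseline_fps = 30               # assumed: Titan RTX, 2160p, RT enabled
target_hz = 120                 # the 120 Hz 2160p monitors mentioned above

frame_time_ms = 1000 / baseline_fps            # ~33.3 ms per frame
target_frame_time_ms = 1000 / target_hz        # ~8.3 ms per frame

required_speedup = frame_time_ms / target_frame_time_ms
print(f"Current frame time: {frame_time_ms:.1f} ms, target: {target_frame_time_ms:.1f} ms")
print(f"Required speed-up: ~{required_speedup:.0f}x")    # -> ~4x
[/CODE]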
 
From driver optimisation?

Wasn't specifically talking about that - ray tracing is generally not something well understood by your average game developer, and they are hamstrung somewhat by the need to cater to the majority of the audience, who don't have systems capable of a feasible form of ray tracing.
 
Maybe I misunderstood your wording - it implied the 2000 series cards have more to give in the context of DXR?

Never thought of the devs' side of things.
 

The actual backend implementation of ray tracing/RTX in new games is quite some way behind the path tracing implementation in Quake 2 RTX. People see the low geometry detail and the lack of modern rendering features such as static meshes, geometry shaders, volumetric effects, etc. and then write off RT, but the RT backend in Quake 2 RTX is capable of much more than the engine allows for, and it runs at reasonable performance on 2000 series cards. Adding in those modern features doesn't completely decimate the performance potential of the rendering backend used in Quake 2 RTX, and even that is a fairly preliminary implementation. That isn't to say it is going to magically transform anything, but the potential is much higher than people are giving credit for based on a few less than ideal implementations.
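For anyone wondering what that backend actually does at its core: fire a ray per pixel, find the nearest hit, and work out how much light reaches it. A deliberately tiny toy sketch below (one sphere, one light, no bounces or shadows - nothing here is taken from Quake 2 RTX's actual code):

[CODE]
import math

# Toy ray tracing sketch: primary rays against one diffuse sphere lit by a
# single point light. No bounces, no shadows - purely to illustrate what a
# ray tracing backend fundamentally computes.

SPHERE_C, SPHERE_R = (0.0, 0.0, -3.0), 1.0
LIGHT = (2.0, 2.0, 0.0)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    l = math.sqrt(dot(a, a))
    return tuple(x / l for x in a)

def hit_sphere(origin, direction):
    """Distance along the ray to the nearest sphere hit, or None for a miss."""
    oc = sub(origin, SPHERE_C)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_R ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def shade(origin, direction):
    t = hit_sphere(origin, direction)
    if t is None:
        return 0.05                                   # background
    p = tuple(o + t * d for o, d in zip(origin, direction))
    n = norm(sub(p, SPHERE_C))                        # surface normal
    l = norm(sub(LIGHT, p))                           # direction to the light
    return max(dot(n, l), 0.0)                        # Lambertian term

# "Render" a tiny ASCII image, one primary ray per character.
for y in range(12):
    row = ""
    for x in range(24):
        u = (x / 24.0) * 2.0 - 1.0
        v = (1.0 - y / 12.0) * 2.0 - 1.0
        d = norm((u, v * 0.5, -1.0))
        row += " .:-=+*#"[min(int(shade((0.0, 0.0, 0.0), d) * 8), 7)]
    print(row)
[/CODE]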

Are there any pure RT games? Ones without the overhead of any of the other graphics technologies?

The only one currently that uses RTX in place of traditional rendering techniques throughout is Quake 2, and that doesn't really show off what RT can do due to the age of the rest of the engine/features.

EDIT: There are a couple of tech demos you can find on Steam - I forget the name off the top of my head - but they still have more limited implementation.

EDIT: https://store.steampowered.com/app/1081330/Stay_in_the_Light/ I think that one is purely using RT features (well sort of - it isn't very good though really).
 
My post never said it would. The 2000 series cards are capable of a lot better than we've seen so far, but only the 2080 Ti has anything like viable performance.
2000 series is just a demo of the tech as far as I am concerned. In the end I get the feeling AMD’s implementation will end up being used long term. Really hope the 3000 series is something much better.
 
In the end I get the feeling AMD’s implementation will end up being used long term.

Unless they have something to pull out of the bag I have my doubts - AMD's current approach has too many compromises which might work better in the short term but are dead ends long term.
 
Will be interesting to see how it turns out, but one thing is for sure: unless Nvidia keep paying devs extra money, any RT stuff we see will be designed for consoles, which will run AMD's implementation.
 

Depends how widely devs use the standard DXR or Vulkan approaches (the Vulkan extensions currently support nVidia only) versus a custom implementation.
 
Unless paid for, why would they spend extra resources to code for Nvidia's custom implementation? Nvidia will have to pay, I would imagine. Nvidia will need to provide something MUCH better than the RT the 2000 series provides in order to stand out and have devs want to code for it without being paid extra to do so.
 

Devs, even on console, are much more likely to support a standardised implementation, either DXR or Vulkan, once AMD has something that works with it. At that point it won't matter whether it is AMD or nVidia hardware underneath; it will just matter how fast that hardware is.
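That's the whole point of a standard API: the engine codes against one interface and the vendor differences reduce to how fast the calls come back. A hypothetical sketch of that idea (class names and throughput numbers are invented for illustration, not any real API):

[CODE]
from abc import ABC, abstractmethod

# Hypothetical illustration: the engine only ever talks to one standard
# ray tracing interface (think DXR / Vulkan RT); the vendor backends only
# differ in how fast they are. Names and figures below are made up.

class RayTracingBackend(ABC):
    @abstractmethod
    def trace(self, scene: str, rays: int) -> float:
        """Return the time in milliseconds taken to trace the given rays."""

class GeForceBackend(RayTracingBackend):
    def trace(self, scene: str, rays: int) -> float:
        return rays / 12_000           # made-up throughput figure

class RadeonBackend(RayTracingBackend):
    def trace(self, scene: str, rays: int) -> float:
        return rays / 9_000            # made-up throughput figure

def render_frame(backend: RayTracingBackend) -> float:
    # The engine code is identical regardless of what hardware is underneath.
    return backend.trace("test_scene", rays=2_000_000)

for gpu in (GeForceBackend(), RadeonBackend()):
    print(type(gpu).__name__, f"~{render_frame(gpu):.0f} ms per frame")
[/CODE]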
 
Would be a failure. They have 50% higher transistor density to play with - anything under 50% would be a disappointment for the engineers.

It has 18,600 M transistors over 754 mm2, which equals ~24.67 MTr/mm2.
7nm Navi 10 is ~41.04 MTr/mm2.
The difference is ~66.4%.

But even if they want to equal RTX 2080 Ti performance, they still need a chip larger than 300-350 mm2...
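Put as arithmetic (transistor counts are the ones quoted above; the assumption that an equivalent chip would need the full TU102 transistor budget at Navi 10's density is mine, and it ignores that SRAM and I/O don't shrink the same way):

[CODE]
# Density comparison using the figures quoted above.
tu102_transistors_m = 18_600        # TU102 (RTX 2080 Ti / Titan RTX), millions
tu102_area_mm2 = 754
navi10_density = 41.04              # MTr/mm2, 7nm Navi 10

tu102_density = tu102_transistors_m / tu102_area_mm2
print(f"TU102 density:  ~{tu102_density:.2f} MTr/mm2")                 # ~24.67
print(f"Density uplift: ~{navi10_density / tu102_density - 1:.0%}")    # ~66%

# Naive die size for the same transistor budget at Navi 10's density
# (assumption: everything scales equally, which caches and I/O do not).
print(f"Equivalent die: ~{tu102_transistors_m / navi10_density:.0f} mm2")  # ~453 mm2
[/CODE]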
 
It looked like they were shrinking them and going for lower power for years, but with this RT we're back at square one with chip size and power consumption.

What's the actual limit? A 400 W Ampere at nearly max GPU length would sell, I think. I can just about get the 2080 Ti Gaming X Trio in my case and that card is HUGE. My PSU is also under no real sweat - an AX860i can handle it.
 
Eh, once you start playing games with decent ray tracing you will quickly find even good rasterization undesirable. There are a lot of small touches with dynamic indirect lighting, better reflection accuracy, etc. that you quickly notice the lack of once you've got used to them.
I've played games with decent ray tracing (my best mate has a 2080 Ti). It's nice, for sure - looks OK. But I really just see it as a graphics slider that looks "decent" but TANKS performance. In plenty of games I have thought "oh wow, is that ray tracing!?" and it's not. The difference just isn't nearly as amazing as the people getting swept up in it believe. It's just a graphics option like HairWorks or ambient occlusion - unfortunately, it's a graphics option that literally ruins gameplay FPS.
 
It has 18,600 M transistors over 754 mm2, which equals ~24.67 MTr/mm2.
7nm Navi 10 is ~41.04 MTr/mm2.
The difference is ~66.4%.

But even if they want to equal RTX 2080 Ti performance, they still need a chip larger than 300-350 mm2...


Navi 10 is built on 7nm DUV, while Ampere will be built on 7nm EUV; that is basically the same jump as going from 16nm to 7nm DUV.
Moreover, 7nm EUV offers higher yields, so bigger chips are more viable.
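The yield point can be made concrete with the usual Poisson die-yield approximation, yield ≈ e^(-area × defect density). The defect densities below are illustrative guesses, not published TSMC numbers:

[CODE]
import math

# Poisson die-yield approximation: yield ~= exp(-area * defect_density).
# Defect densities here are illustrative guesses, not published TSMC figures.

def die_yield(area_mm2: float, defects_per_mm2: float) -> float:
    return math.exp(-area_mm2 * defects_per_mm2)

for area in (250, 450, 650):                      # small, mid, large die
    for d0 in (0.0012, 0.0008):                   # "DUV-like" vs "EUV-like" (assumed)
        print(f"{area} mm2 @ D0={d0}/mm2: yield ~{die_yield(area, d0):.0%}")
[/CODE]

The bigger the die, the harder it gets hit by defect density, so any process that brings the defect rate down disproportionately helps the large chips.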
 
It looked like they were shrinking them and going for lower power for years, but with this RT we're back at square one with chip size and power consumption.

What's the actual limit? A 400 W Ampere at nearly max GPU length would sell, I think. I can just about get the 2080 Ti Gaming X Trio in my case and that card is HUGE. My PSU is also under no real sweat - an AX860i can handle it.

Nonsense - the RT cores in Turing added 5-6% to the surface area. Tensor cores added another 5%, but the tensor cores are used for FP16 support, so those transistors would have been spent elsewhere within the CUDA cores anyway.
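Taking those percentages at face value (they are the figures quoted above, not die-shot measurements), the dedicated hardware works out at roughly:

[CODE]
# Back-of-envelope area split using the 5-6% (RT cores) and 5% (tensor cores)
# shares quoted above - not measured from a die shot.
tu102_area_mm2 = 754
rt_share, tensor_share = 0.06, 0.05

rt_area = tu102_area_mm2 * rt_share
tensor_area = tu102_area_mm2 * tensor_share
print(f"RT cores:        ~{rt_area:.0f} mm2")                                 # ~45 mm2
print(f"Tensor cores:    ~{tensor_area:.0f} mm2")                             # ~38 mm2
print(f"Everything else: ~{tu102_area_mm2 - rt_area - tensor_area:.0f} mm2")  # ~671 mm2
[/CODE]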
 