NVIDIA ‘Ampere’ 8nm Graphics Cards

Some people still think ray tracing is an NVIDIA-only thing, so it doesn't surprise me.. ^


It doesn't help when no other GPU manufacturer has released a consumer-grade product showing those features in action, with benchmarks available.

Maybe if AMD released videos showing Cyberpunk or virtually any ray-traced game, people would start to dissociate ray tracing from NVIDIA.
 
The technology has been around for a long time; it only stagnated because the amount of processing required to do it in real time was unheard of. Yes, I agree that if AMD revealed a game or two using the feature soon, it would first get nitpicked apart, but it would let us know there is an alternative coming.
 
RT is already confirmed since the XBX/PS5 have the tech. DLSS-type scaling should also be introduced, as there is mention of ML for resolution scaling in the XBX features.


I don't think consoles are interested in DLSS. They'd rather you buy the PS5 Pro in 2-3 years' time.

Heck, I'm kind of shocked NVIDIA is interested in DLSS. As good as DLSS is, it kind of goes against the whole upgrade-ASAP path that hardware manufacturers love.
 
Well, it gives them a leg up in benchmarks: if the game has it, the fps is maintained with excellent visuals. The issue is not enough games have it.
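As a rough back-of-envelope illustration of why upscaling protects the fps (the internal resolutions below are assumptions purely for illustration, not NVIDIA's actual DLSS render targets), the saving comes straight from how many pixels actually get shaded:

```python
# Back-of-envelope: fraction of output pixels that are natively shaded when a
# frame is rendered at a lower internal resolution and then upscaled.
# The internal resolutions here are illustrative assumptions only.

def shading_ratio(internal, output):
    """Fraction of output pixels the GPU actually shades."""
    return (internal[0] * internal[1]) / (output[0] * output[1])

output_4k = (3840, 2160)
for label, internal in [("1080p internal", (1920, 1080)),
                        ("1440p internal", (2560, 1440))]:
    print(f"{label} -> 4K output: {shading_ratio(internal, output_4k):.0%} of the pixels shaded")

# 1080p internal -> 4K output: 25% of the pixels shaded
# 1440p internal -> 4K output: 44% of the pixels shaded
```

Shading a quarter of the pixels and letting the upscaler reconstruct the rest is where the headline fps numbers come from.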
 
I don't think consoles are interested in DLSS. They'd rather you buy the PS5 Pro in 2-3 years' time.

Heck, I'm kind of shocked NVIDIA is interested in DLSS. As good as DLSS is, it kind of goes against the whole upgrade-ASAP path that hardware manufacturers love.

The consoles have always used upscaling similar to DLSS.

PS5 won't differ.

Realtime AI Upscaling Could Redefine PS5 Backwards Compatibility, Game File Sizes
https://www.playstationlifestyle.net/2020/02/07/ps5-backwards-compatibility-ai-upscaling/
 
So I take it all the posters who said they were going to buy consoles if the price was too high will be getting a 3080?

I'm also amazed at the U-turn people are doing on the FE design; it now seems to be a well-loved design and the AIB cards are crude/horrid?
Probably something to do with the FEs being cheaper for a change :D

Guess it was to be expected :rolleyes:

U-turn on the FE design? It's completely different to previous FE designs and looks great tbh. The AIB cards for the most part always look cheap and tacky.
 
What are 'tensor cores'? Sure, it's some fancy name for dedicated parts of the GPU that perform RT and ML tasks, but why does AMD or anyone else NEED to have them? Are you saying RT cannot be done using the stream processors in AMD GPUs?
You do understand that RT is performed by lots of mathematical calculations which can be done by any processing unit, right? There is RT in Crysis Remastered which is being done in software and that seems fine. Here's a demo of CryEngine software RT:


On Tensor cores: https://www.youtube.com/watch?v=yyR0ZoCeBO8

You need to have them because real-time ray tracing is so taxing that general-purpose hardware cannot do it in real time; it's way too slow. You need dedicated hardware which can do ray tracing "operations" much faster, the trade-off being that it can only do that one type of operation and can't be used for general-purpose calculations in the same way. You can do ray tracing operations on any general-purpose hardware, that's not the issue; in fact you can even do it on a regular CPU, and there have been demos of this over the years. The issue is that general-purpose hardware is so monumentally slow at it that it's essentially useless for real-time gaming. And that basically comes down to the fact that ray tracing is a lot more demanding on hardware than rasterization, to a degree that few people really appreciate.

With hybrid rasterization and ray tracing, what load you put on your GPU obviously depends on how much ray tracing you're doing. You can cast literally just a few rays to do something like calculate the bounce of an audio source for better directional audio, and the cost of a few rays is very tiny. Or you can do real-time full-scene ray tracing, which obliterates even modern GPUs; even the 3xxx range, with all its dedicated ray tracing hardware, can only do full-scene ray tracing at something like 720p@30fps, upscaled only via DLSS.
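To put rough numbers on that gap, here's a sketch (all per-frame ray counts are assumptions chosen just to show the orders of magnitude, not measured figures from any engine):

```python
# Rough ray-budget comparison: a handful of rays for positional audio vs.
# full-scene ray tracing at 1080p60. All per-frame ray counts are assumed
# values for illustration only.

FPS = 60

def rays_per_second(rays_per_frame, fps=FPS):
    return rays_per_frame * fps

# Positional audio: assume a few dozen rays bounced per frame.
audio = rays_per_second(64)

# Full-scene RT at 1080p: one primary ray per pixel, plus an assumed
# 2 bounce rays and 2 shadow rays per pixel.
pixels = 1920 * 1080
full_scene = rays_per_second(pixels * (1 + 2 + 2))

print(f"audio-style RT:      {audio:>13,} rays/s")       # ~3,840
print(f"full-scene RT 1080p: {full_scene:>13,} rays/s")  # ~622,000,000
print(f"ratio: ~{full_scene / audio:,.0f}x")             # ~162,000x
```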

Some people still think ray tracing is an NVIDIA-only thing, so it doesn't surprise me.. ^

Literally never ever said that, but thanks for putting those words into my mouth. It's not an "NVIDIA thing only"; in fact ray tracing in software has been around for an extremely long time. Whether or not you can render a frame with ray tracing is NOT the issue; it's whether you can render at >=1080p @ 60fps in real time. And the answer on general-purpose hardware is no. Not even slightly close.
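To make that concrete, here's a minimal sketch of the kind of maths a ray tracer runs, a classic ray-sphere intersection test in plain Python (the scene and ray counts are made up, and pure Python exaggerates the slowness compared with optimised native code, but even well-optimised CPU tracers sit orders of magnitude below the hundreds of millions of rays per second a full scene at 1080p60 needs):

```python
import math, random, time

def ray_sphere_hit(origin, direction, center, radius):
    """Ray-sphere intersection: solve |o + t*d - c|^2 = r^2 for t and
    report whether there is a hit at some t > 0."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    return disc >= 0.0 and (-b + math.sqrt(disc)) > 0.0

# Time 100k random rays against a single sphere (a real scene has millions
# of triangles and a BVH to traverse, so this flatters the CPU enormously).
center, radius = (0.0, 0.0, 5.0), 1.0
rays = [((0.0, 0.0, 0.0),
         (random.uniform(-1, 1), random.uniform(-1, 1), 1.0))
        for _ in range(100_000)]

start = time.perf_counter()
hits = sum(ray_sphere_hit(o, d, center, radius) for o, d in rays)
elapsed = time.perf_counter() - start
print(f"{len(rays) / elapsed:,.0f} ray tests/s ({hits} hits)")
```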

In the hardware world, when you use transistors to do logic/maths, general-purpose calculation is inefficient. Certain more complex operations can be done faster using the same number of transistors, but the trade-off is that those dedicated transistors can only do those specific operations. This happens all the time; a good example is AES encryption. For a long time it was expensive to do with general-purpose calculations. If you wanted FDE (Full Disk Encryption) on, say, your SSD/HDD with an AES-based cipher, so everything written to or read from disk was encrypted/decrypted, it had an insane hit on the CPU, because the CPU was inefficient at AES via general-purpose calculations. Then Intel (first) put a bunch of transistors reserved for AES operations onto their chips so that it could be "done in hardware", and suddenly AES became so cheap on the CPU that you could just AES-encrypt all your disks and it was essentially free, which is what I do across all my disks. But the trade-off is that those transistors on the chip are taken away from the pool used for general-purpose calculations.
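As a quick illustration of how cheap hardware-accelerated AES has become, here's a small timing sketch using the third-party cryptography package (which dispatches to OpenSSL and, where the CPU supports it, the AES-NI instructions; the buffer size is arbitrary and the throughput you'll see is machine-dependent):

```python
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

# Encrypt a 256 MiB buffer with AES-256-GCM and report throughput.
# On CPUs with AES-NI this typically runs at several GB/s; without the
# dedicated instructions the same cipher is drastically slower.
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
data = os.urandom(256 * 1024 * 1024)

start = time.perf_counter()
ciphertext = AESGCM(key).encrypt(nonce, data, None)
elapsed = time.perf_counter() - start

print(f"AES-256-GCM encrypt: {len(data) / elapsed / 1e9:.2f} GB/s")
```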

Ray tracing is like that, only the performance ramifications are off the charts by comparison. If you want to do RT ops in any kind of sensible, real-world way, you need the equivalent of RT/Tensor cores in your silicon. If you don't and you just go fully general-purpose (in the case of GPUs, "general purpose" really means rasterization more than anything else), then you won't get real-time performance. This is why AMD boasting support for RT via DX is lovely; all you need is the driver paths to support that. But when they talk about doing things like positional audio (which requires very few rays) rather than visual effects, that's a big red flag that they're not investing in dedicated transistors for ray tracing in the console GPUs. I can't know that for sure, that's just my bet, based on the available information. If we see hybrid RT effects in the next-gen consoles similar to those we see in, say, BFV with RTX on, I'd be extremely surprised.
 
What are 'tensor cores'? Sure, it's some fancy name for dedicated parts of the GPU that perform RT and ML tasks, but why does AMD or anyone else NEED to have them? Are you saying RT cannot be done using the stream processors in AMD GPUs?

I don't think anybody here thinks that. A unified architecture is great, but it's always going to be a compromise. NVIDIA does RT with dedicated hardware; AMD doesn't. Both approaches are valid. One requires more expense, the other requires reduced performance in other areas while RT calcs are running. Which one is better for the customer? Nobody knows... yet.
 
U-turn on the FE design? It's completely different to previous FE designs and looks great tbh. The AIB cards for the most part always look cheap and tacky.
In the previous few hundred pages there was quite a bit of "the FE design is meh".
Once the price was revealed, out come the adoring fans.
And I meant the 3000 series design - not previous ones.

I like the FE design as well as the AIB ones.
 
In the previous few hundred pages there was quite a bit of "the FE design is meh".
Once the price was revealed, out come the adoring fans.
And I meant the 3000 series design - not previous ones.

I like the FE design as well as the AIB ones.

Not so much the price, but more because the AIBs have made some of the ugliest and biggest boards ever!

The only ones I like come with a premium of several hundred pounds.

I have always loved the new FE design. My only criticism is where the power connector is, for a clean look inside my case.
 
Not so much the price, but more because the AIBs have made some of the ugliest and biggest boards ever!

The only ones I like come with a premium of several hundred pounds.

I have always loved the new FE design. My only criticism is where the power connector is, for a clean look inside my case.

That's exactly how I feel about it. Only annoyance is the power connector. Going to look a little odd.
 
Was thinking earlier about the lack of NVLink/SLI on all the 3000 series except the 3090. I wondered if it was due to it undermining the 3090 if you could run a pair of lesser cards, but the more I thought about it, that doesn't really make a lot of sense: support isn't great these days and you might start to run into VRAM limits. Powering it could be difficult too.

I guess the real question will be whether 3090 SLI is good for anything other than benching; in theory it might be able to handle 4K at a steady 144fps, but I guess the reality might be different with microstutter and suchlike, in which case it'd be a bit pointless.
 
Was thinking earlier about the lack of NVLink/SLI on all the 3000 series except the 3090. I wondered if it was due to it undermining the 3090 if you could run a pair of lesser cards, but the more I thought about it, that doesn't really make a lot of sense: support isn't great these days and you might start to run into VRAM limits. Powering it could be difficult too.

I guess the real question will be whether 3090 SLI is good for anything other than benching; in theory it might be able to handle 4K at a steady 144fps, but I guess the reality might be different with microstutter and suchlike, in which case it'd be a bit pointless.

Won't they be good for a cheap(er) ML/HEDT/workstation setup?
 