
Poll: Ray Tracing - Do we care? (183 voters; poll closed)
RE NVLINK 2.0:

Yeah, he did make a massive deal out of saying NVLink would never be on a consumer GPU and had nothing to do with SLI, and now NVIDIA are saying NVLink allows the two GPUs to communicate faster without saturating the PCIe bus. You'll never get him to admit that what he was banging on about for months was wrong, though.
 
You dug up a quote showing me correct? Is NVLink being used to provide faster or more access to system memory (the context of that post, which you left out)? Nope, that still goes over PCIe; NVLink is being used as... a PCIe switch, to provide a lane for SLI communication, thereby removing any and all benefit that NVLink was created for.

But way to take it out of context and then also completely misunderstand it AND still get it wrong. I see not much has changed in your posting style or understanding of this area of technology.

You have an incredibly strange definition of "precisely nothing" lol. Maybe if your own understanding was as good as you think it is, you wouldn't have wrongly predicted what would happen on this generation. Never mind, though. Keep up the good cynicism, even if it is entirely unfounded. It's not as though you have the slightest clue what data is actually passing over the bridge, anyway.

To question my intellect on any subject, when you didn't even realise that board layers played a huge part in memory overclocking, is also quite funny. Carry on, Charles.
 
Here's a personal opinion I put to a friend which, for me, sums up this entire malarkey. See what you think...

If I can see the movement and muzzle flash of enemies behind me reflected in the car I'm crouching next to, and as a result I don't get shot by those crafty buggers, then Ray Tracing is worth it, because that's not only an immersive graphical feature but also a great gameplay mechanic. And maybe then I'd drop over a grand on a graphics card, because it would tangibly improve my experience.

If a game mechanic and experience is built around light, shadow and reflectivity, then Ray Tracing is worth it because the experience is greatly enhanced.

Yeah, I'd agree with that. If it adds another dimension to the gameplay (and I'd include ray tracing being used for aural physics in that - huge potential there), I'm all for it. Personally, I don't have a problem with eye-candy improvements being used to add to the emotional experience (e.g. realistic, creepy shadows in horror games). But not at these prices. The tech is incredible, and part of me understands why the pricing is high. But I also worry that it sets a precedent - if these prices become normal, it could do long-term damage to PC gaming. Just IMO of course. Hope I'm wrong.
 
I was reading through the DLSS part of https://www.vg247.com/2018/08/22/he...s-new-ray-tracing-dlss-tech-geforce-rtx-2080/

Thought it was quite interesting. How well do we think this DLSS is actually going to work? AA is usually quite expensive for the GPU, so having that load move over to the tensor(?) cores to free up the rest of the GPU should theoretically increase frame performance? Thought it was interesting that two of the games on there supporting it are ARK and PUBG, two not very well optimised games. It must have a decent benefit?
 
More or less. DLSS is not the "normal" AA functionality we see today; it's a new type of technique, like Ray Tracing, and is strapped over the current workload.
And yes, it will be operated by the Tensor cores. However, at what cost is unknown, especially if those Tensor cores have to work on Ray Tracing at the same time.
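As a rough illustration of why offloading AA could help frame rates (all numbers here are made up, purely to show the arithmetic; the real gain depends on what the DLSS pass itself costs and how well it overlaps with the shader work):

```python
# Back-of-the-envelope frame-time maths (hypothetical numbers, not benchmarks).
# Idea: if the AA work moves off the shader cores onto the tensor cores and can
# run concurrently, the shader portion of each frame gets shorter.

def fps(frame_time_ms: float) -> float:
    """Convert a frame time in milliseconds to frames per second."""
    return 1000.0 / frame_time_ms

shader_frame_ms = 16.7   # assumed total shader work per frame (~60 fps)
aa_cost_ms = 3.0         # assumed cost of traditional AA on the shaders

without_offload = shader_frame_ms
with_offload = shader_frame_ms - aa_cost_ms  # AA now handled by the tensor cores

print(f"Traditional AA on shaders:    {fps(without_offload):.1f} fps")
print(f"AA offloaded to tensor cores: {fps(with_offload):.1f} fps")
# Roughly 60 fps vs 73 fps with these made-up figures.
```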
 
I mentioned this over on Bit-Tech; I'm interested to see if it gives a performance boost by using DLSS instead of normal AA. I don't know about PUBG, but I'm not sure anything can save the performance of Ark :D
I also wondered whether, since it's based on prediction, there would be cases of the prediction messing up.
 
ARK used to be worse than it is now!!
 
Pretty much, yes (I presume)

So with that in mind, is it relatively safe to assume that Tomb Raider will probably see a lot more optimisation vs the recent reports of it running in HD at 30-50fps? Presumably it should run pretty well if all this hardware does its job correctly. Or is it the case that the slowest part of the chip ends up dictating performance - presumably, in this instance, the RT cores?

This also raises a load more questions: what does 10 gigarays equal? Does resolution have an effect on this? If so, how do the shaders affect it - do they just sit around doing not much if it's RT-limited at 1080p? Is it a choice between RT, or disabling RT and then having a really fast 4K game? To make the most of this tech, does a game have to be specifically designed for the architecture, i.e. making the most of each section of the chip? How did they decide how many cores of each section was the right amount?

Interesting times.
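On the gigarays question, a quick back-of-the-envelope sketch (assuming the quoted 10 gigarays/s figure is fully usable and ignoring BVH traversal, divergence and denoising overheads):

```python
# What does "10 gigarays/s" work out to per pixel? Rough sketch only; real
# throughput depends on scene complexity, BVH quality, ray divergence, etc.

GIGARAYS_PER_SEC = 10e9  # the quoted headline figure

resolutions = {
    "1080p": 1920 * 1080,
    "1440p": 2560 * 1440,
    "4K":    3840 * 2160,
}

for name, pixels in resolutions.items():
    for fps in (30, 60):
        rays_per_pixel = GIGARAYS_PER_SEC / (pixels * fps)
        print(f"{name} @ {fps} fps: ~{rays_per_pixel:.0f} rays per pixel per frame")
# e.g. ~80 rays/pixel/frame at 1080p60 but only ~20 at 4K60, which is why
# resolution matters so much and why the denoiser has to reconstruct the image
# from relatively few rays.
```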
 
Tensor cores are very good at specific types of operation, so offloading a task to them gives more of a performance win than the hit you would take running the same procedure on the traditional hardware. For Turing, you effectively take the output from the traditional cores - rendered with MSAA etc. off - and then DLSS is applied over the top, kind of like FXAA, though I'm not sure it can be strapped on quite as easily as FXAA.

What DLSS is basically doing is using AI to estimate what <X> image "should" look like with AA applied and then building that image, so it will be interesting to see if it ever goes crazy and produces something stupid LOL.
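Loosely, the data flow would be: render the frame with MSAA off, hand it to the trained network, and get back its prediction of what the anti-aliased frame should look like. A toy sketch of that plumbing - the box blur below is just a stand-in for the real trained model, which is nothing like a blur:

```python
import numpy as np

def fake_trained_model(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the trained network: a simple 3x3 box blur.
    The real model is a neural network trained offline against high-quality
    reference images; this exists only to show where the pass sits."""
    padded = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(frame, dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / 9.0

def dlss_like_pass(raw_frame: np.ndarray) -> np.ndarray:
    """Take the aliased frame the shaders produced (MSAA off) and return
    the model's prediction of what the cleaned-up frame should look like."""
    return np.clip(fake_trained_model(raw_frame.astype(np.float32)), 0.0, 1.0)

# A random array standing in for the shader output (height, width, RGB).
aliased = np.random.rand(270, 480, 3).astype(np.float32)
predicted = dlss_like_pass(aliased)
print(aliased.shape, "->", predicted.shape)
```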
 
Meh, you can save money by just turning the monitor's contrast setting up to 100 and it will look just as **** for free :p
Not even the same ballpark :rolleyes: ;) Do yourself a massive favour and get physically in front of a proper HDR display, with a proper HDR device running a decent HDR game, as that is the ONLY way to see the difference. Watching it on a non-HDR display will do nothing whatsoever; you have to be there to see the difference with your own eyes.
 
I have; it just looks like the contrast is set too high. That's the trouble with things like HDR which aren't straight improvements, just subjective changes - some people like them, some don't.
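For what it's worth, the "contrast turned up" comparison can be put into numbers: boosting contrast on an SDR signal just stretches values around mid-grey and clips them back into the same 8-bit range, whereas a real HDR pipeline carries extra (typically 10-bit) data to a panel with higher peak brightness. A small sketch of the clipping, with an assumed gain value:

```python
import numpy as np

def boost_contrast(sdr_frame: np.ndarray, gain: float = 2.0) -> np.ndarray:
    """Stretch an 8-bit SDR signal around mid-grey and clip it back into range.
    This is roughly what a monitor's contrast control does."""
    stretched = (sdr_frame.astype(np.float32) - 127.5) * gain + 127.5
    return np.clip(stretched, 0.0, 255.0)

gradient = np.arange(256, dtype=np.float32)  # every 8-bit level, 0..255
boosted = boost_contrast(gradient)

clipped = np.count_nonzero((boosted == 0.0) | (boosted == 255.0))
print(f"{clipped} of 256 input levels get crushed to pure black or white")
# 128 of the 256 levels end up clipped: detail is destroyed, not revealed.
# A genuine HDR pipeline instead delivers (typically 10-bit) data to a panel
# that can actually reach higher peak brightness, so highlights can pop
# without anything being clipped away.
```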
 