The thread which sometimes talks about RDNA2

Oh my, you didn't even deny the dup account either; you just post that I'm attacking you. Which account are you referencing, this one or Shaz12? I'm confused. :D


That has no relevance for me. As a CPU manufacturer, AMD can choose who they want to work with regarding SAM. Personally I would tell Nvidia to pound sand. But if you just applied a little bit of reasoning you would know that Intel would need to work with Nvidia, which is what the post you replied to was about. It had nothing to do with AMD.
:D

Omg, are you still bothering with this clown? Add him to ignore already; the thread is much more sensible that way :)
 
RT optimisation for AMD cards is going to be about reducing features and reducing quality. See the Spider-Man video I posted about optimisations. If you are happy with that and can accept the reduced quality then get a 6800XT; if not, get an RTX 3080. Ray tracing is limited by how many rays the hardware can handle. It's pure hardware performance.

Hardware alone is not going to drive RT perf. Optimisations in data structures and early rejection can bump up RT execution. Navi has a very flexible implementation that can make use of programmable checks and approximations for faster render throughput.
 
Omg, are you still bothering with this clown? Add him to ignore already; the thread is much more sensible that way :)

Trying to ostracize a forum user is harassment and gets you a long suspension on most forums. The last person I knew who tried got a three-month forum holiday from the World of Warcraft forums.
 
Hardware alone is not going to drive RT perf. Optimisations in data structures and early rejection can bump up RT execution. Navi has a very flexible implementation that can make use of programmable checks and approximations

Hardware does; as I stated, it's the amount of rays the hardware can cast. If you knew how RT works you would know I am right.


 
Are you going to replay a game because it has RT? Are you going to buy a game you skipped because of RT? Do you think devs are going to take away resources from future games to assign them to an old game that has no future monetary value to the company?

Isn't that just sophistry? Whilst I agree on financials, that has nothing to do with the differences between a "tacked on" versus a "next gen/proper" implementation.

Are our definitions the same here, I wonder? I'm looking at it from a technical/development perspective only. If what you consider as "next gen/proper" is an implementation that only requires h/w ray-tracing (i.e. just as we all need GPUs with GPGPU compute), then even that should only be an issue when that particular game doesn't have the usual fall-back rasterization/compute method for what they're solving with ray-tracing (unlikely, save for an in-house engine maybe?), or a simple toggle. Otherwise, if it's not technical but rather the narrative/gameplay itself forcing ray-tracing as a baseline (e.g. a story event relies on reflection of something beyond screen-space, dynamic diffuse indirect illumination controlling the 'feel' of the full environment, or maybe even a non-graphics-related use case), then sure (I don't think any of those examples are possible considering the perf of the new consoles, so it likely won't translate to PC - maybe I'm not creative enough in my thinking though).
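For illustration only, here's a minimal sketch of what that "simple toggle" fall-back could look like. The names (ReflectionMode, chooseReflectionMode) are hypothetical and not from any real engine; the point is just that the RT path is an optional branch over the existing raster one.

```cpp
#include <cstdio>

// Hypothetical reflection modes; names are illustrative, not from any engine.
enum class ReflectionMode { ScreenSpace, HardwareRayTraced };

// Use the ray-traced path only when the hardware supports it and the user asked
// for it; otherwise fall back to the usual screen-space/raster approximation.
ReflectionMode chooseReflectionMode(bool gpuSupportsRayTracing, bool userEnabledRT) {
    if (gpuSupportsRayTracing && userEnabledRT)
        return ReflectionMode::HardwareRayTraced;
    return ReflectionMode::ScreenSpace;
}

int main() {
    ReflectionMode mode = chooseReflectionMode(/*gpuSupportsRayTracing=*/false,
                                               /*userEnabledRT=*/true);
    std::printf("Using %s reflections\n",
                mode == ReflectionMode::HardwareRayTraced ? "ray-traced"
                                                          : "screen-space");
    return 0;
}
```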

And implement it in a way that is better than turning it off?

This makes no sense to me. Is there an example of a case where a hybrid-RT implementation would have been better off "turned off" in favour of the standard raster/compute approach? Mind you, I suppose "better" is pretty subjective here. I have a background in engine development, so I'm biased towards putting these seemingly small advances high up on a pedestal, where most consumers wouldn't... and that's understandable.

It is not an easy "tack on"

Sure, there will be differences between all engines. However, for the most part it's not too much of a stretch to assume most engines have rendering pipelines that lend themselves to dropping in ray-traced implementations without too much hassle (I define "hassle" here as a few programmers for a few weeks to months, without having to change much existing code - it depends on the RT method. 'Justice' from NetEase is an example of this). Things like RT shadows, specular indirect, diffuse indirect, and ambient occlusion are already handled in a pretty standardised way, where inputs and outputs don't force sweeping changes to be made. Over the years most game engine renderers have followed similar design trajectories due to knowledge sharing and openness in research. That helps in the sense that proposed hybrid-RT approaches have implementation requirements that end up somewhat satisfied already, making adoption easier.
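To make the "drop-in" point concrete, here's a hedged sketch of what I mean: the shadow pass sits behind an interface, and a ray-traced implementation produces the same kind of output as the existing shadow-map one, so downstream lighting/compositing code doesn't change. Every name here (ShadowTechnique, ShadowMask, etc.) is invented for the example and the bodies are placeholders.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

// A per-pixel visibility/shadow mask is an already-standardised kind of output.
struct ShadowMask { std::vector<float> visibility; int width; int height; };

// Hypothetical interface: both the existing shadow-map path and a new
// ray-traced path produce the same output, so the rest of the pipeline
// (lighting, compositing) does not need to change.
class ShadowTechnique {
public:
    virtual ~ShadowTechnique() = default;
    virtual ShadowMask render(int width, int height) = 0;
};

class ShadowMapTechnique : public ShadowTechnique {
public:
    ShadowMask render(int width, int height) override {
        // Placeholder: a real implementation would rasterise depth from the light.
        return {std::vector<float>(width * height, 1.0f), width, height};
    }
};

class RayTracedShadowTechnique : public ShadowTechnique {
public:
    ShadowMask render(int width, int height) override {
        // Placeholder: a real implementation would cast shadow rays via DXR/Vulkan RT.
        return {std::vector<float>(width * height, 1.0f), width, height};
    }
};

int main() {
    bool hardwareRTAvailable = false; // assumption for the sketch
    std::unique_ptr<ShadowTechnique> shadows;
    if (hardwareRTAvailable)
        shadows = std::make_unique<RayTracedShadowTechnique>();
    else
        shadows = std::make_unique<ShadowMapTechnique>();
    ShadowMask mask = shadows->render(1920, 1080);
    std::printf("Shadow mask: %dx%d, %zu samples\n", mask.width, mask.height,
                mask.visibility.size());
    return 0;
}
```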

you need to go through and check how all the shaders look and respond to this new system, as well as debugging and performance testing the entire game.

That goes without saying. Justify your efforts using comparisons to the ground truth weighted against the performance cost and controllability. That's literally half of the job and implicit in everything a real-time graphics engineer does.
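If it helps, that "compare against ground truth, weigh against cost" loop can be boiled down to something as crude as the sketch below: an RMSE against an offline reference next to the frame-time cost of each technique. Every name and number in it is made up purely for illustration.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Root-mean-square error between a real-time result and an offline reference.
float rmse(const std::vector<float>& result, const std::vector<float>& reference) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < result.size(); ++i) {
        float d = result[i] - reference[i];
        sum += d * d;
    }
    return std::sqrt(sum / static_cast<float>(result.size()));
}

int main() {
    // Made-up per-pixel luminance values: offline reference, raster approximation,
    // and a hybrid-RT version of the same effect.
    std::vector<float> reference = {0.20f, 0.35f, 0.80f, 0.60f};
    std::vector<float> raster    = {0.10f, 0.40f, 0.70f, 0.75f};
    std::vector<float> hybridRT  = {0.18f, 0.36f, 0.78f, 0.63f};
    // Hypothetical frame-time costs for each technique, in milliseconds.
    float rasterMs = 0.8f, hybridRTMs = 2.4f;
    std::printf("raster  : RMSE %.3f at %.1f ms\n", rmse(raster, reference), rasterMs);
    std::printf("hybridRT: RMSE %.3f at %.1f ms\n", rmse(hybridRT, reference), hybridRTMs);
    return 0;
}
```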
 
Omg, are you still bothering with this clown? Add him to ignore already; the thread is much more sensible that way :)

I had my suspicions that he was posting under duplicate accounts, but I didn't think it would be that easy for him to admit it. Even in his most recent response he implies I was talking to him all along.
I simply wanted to bring transparency to this thread so others will see what he is doing and the manner in which he is going about it.

Now that I, and the community at large, know this, we can all ignore him as he has discredited himself.
:D
 
But if you just applied a little bit of reasoning you would know that Intel would need to work with Nvidia
Why would Intel not work with Nvidia? Especially since AMD confirmed in the video that they are happy to work with both Nvidia and Intel on bringing SAM to all platforms with both sets of GPUs. Intel refusing to would just put them even further behind and make Intel even less desirable than they already are for the majority of people.
 
Hardware does; as I stated, it's the amount of rays the hardware can cast. If you knew how RT works you would know I am right.

It's not just about ray casting... you've got to do ray-box and ray-triangle intersection tests. The BVH data structure can be enhanced to let the program entirely skip boxes and whole sub-hierarchies without initiating intersection tests (no reflective surface in this box - skip). Navi can do this as traversal is programmable... Nvidia's is fully fixed-function.
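To illustrate the skip idea, here's a toy CPU-side sketch (not how any actual driver or DXR traversal loop is implemented): each node carries a flag saying whether its subtree contains anything reflective, and a reflection ray's traversal rejects the whole subtree before doing any intersection tests. The node layout and the containsReflective flag are invented for this example.

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

struct Ray  { float ox, oy, oz, dx, dy, dz; };
struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

// Toy BVH node. 'containsReflective' is the illustrative per-node flag that a
// programmable traversal could test to skip whole subtrees for a reflection pass.
struct BVHNode {
    AABB bounds;
    int left, right;          // child indices, -1 for leaf
    bool containsReflective;
    int triangleCount;        // stand-in for leaf contents
};

// Standard slab test for ray vs axis-aligned box.
bool intersectAABB(const Ray& r, const AABB& b) {
    float tmin = 0.0f, tmax = 1e30f;
    const float o[3]  = {r.ox, r.oy, r.oz},    d[3]  = {r.dx, r.dy, r.dz};
    const float mn[3] = {b.minX, b.minY, b.minZ}, mx[3] = {b.maxX, b.maxY, b.maxZ};
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / d[i];
        float t0 = (mn[i] - o[i]) * inv, t1 = (mx[i] - o[i]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;
    }
    return true;
}

// Count how many leaf triangles a reflection ray would actually have to test,
// skipping subtrees flagged as containing no reflective surfaces.
int traverse(const std::vector<BVHNode>& nodes, int index, const Ray& ray) {
    const BVHNode& node = nodes[index];
    if (!node.containsReflective) return 0;          // early rejection: skip subtree
    if (!intersectAABB(ray, node.bounds)) return 0;  // ray misses the box
    if (node.left < 0 && node.right < 0) return node.triangleCount;
    int hits = 0;
    if (node.left >= 0)  hits += traverse(nodes, node.left, ray);
    if (node.right >= 0) hits += traverse(nodes, node.right, ray);
    return hits;
}

int main() {
    std::vector<BVHNode> nodes(3);
    nodes[0] = {{-10, -10, -10, 10, 10, 10}, 1, 2, true, 0};    // root
    nodes[1] = {{-10, -10, -10, 0, 10, 10}, -1, -1, false, 50}; // nothing reflective: skipped
    nodes[2] = {{0, -10, -10, 10, 10, 10}, -1, -1, true, 50};   // reflective leaf
    Ray ray{-5.0f, 0.0f, 0.0f, 1.0f, 0.001f, 0.001f};
    std::printf("Triangles tested: %d\n", traverse(nodes, 0, ray));
    return 0;
}
```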
 
Guys, chill please. Let's be fair here: apart from prices, AMD have done a blinder with the 6800XT and I am looking forward to seeing the 6900XT performance. I am a massive RT fan and whilst it is still early days, I can see the usefulness of it, from a gamer's POV as well as a dev's POV. With both AMD and Nvidia on board, I can only see it gaining traction in the next couple of years.

If I was GPUless, I would certainly be looking hard at the 6xxx series and hopefully this line continues for the foreseeable future.
 
The new AMD cards are impressive but they come with a big caveat. If you're interested in good RT performance and working DLSS, buy Nvidia.

The numbers are out there; the new AMD cards basically lack the hardware to compete. This will also be the case with AMD's version of DLSS (when it arrives). Nvidia uses their tensor cores for DLSS, whereas AMD will have to rely on their general-purpose shader cores and will probably have to sacrifice efficiency/power to produce a similar effect.

Personally I feel that if you have two cards that trade blows in various games pretty much on par with each other (3080/6800XT), but one can produce great RT performance for a few quid more, then that seems like the sensible purchase to me.
 

Ray tracing needs hacks (GI is one hack) because there are features it can't do well. It uses hybrid rendering for speed and for better quality. Nvidia has a good AI denoiser which helps reduce the spp needed for an image. This runs on the tensor cores; AMD hardware has no tensor cores for AI. Path tracing fixes the issues with ray tracing and does not need hacks for some features, but it is far more complex to process. Path tracing is Minecraft RTX and Quake II RTX. Ray tracing is Control. Dirt 5 is rasterisation with RT shadows on the cars only.
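On the spp point, here's a crude illustration of why denoising/accumulation lets you get away with so few samples per frame: blend each frame's 1 spp estimate into a history buffer and the result converges towards the true value. This is just an exponential moving average with made-up numbers, not Nvidia's AI denoiser or anything like it.

```cpp
#include <cstdio>
#include <random>

// Toy 1-sample-per-pixel "render": each frame returns a noisy estimate whose
// true mean is 0.5. Stands in for a 1 spp path/ray traced result.
float noisySample(std::mt19937& rng) {
    std::uniform_real_distribution<float> dist(0.0f, 1.0f);
    return dist(rng);
}

int main() {
    std::mt19937 rng(42);
    const float blend = 0.1f;   // exponential moving average weight
    float accumulated = 0.0f;
    // Accumulate 1 spp per frame; the history converges towards the true mean
    // (0.5) without ever taking more than one sample in a single frame.
    for (int frame = 1; frame <= 60; ++frame) {
        float sample = noisySample(rng);
        accumulated = (frame == 1) ? sample
                                   : accumulated * (1.0f - blend) + sample * blend;
        if (frame % 15 == 0)
            std::printf("frame %2d: accumulated = %.3f\n", frame, accumulated);
    }
    return 0;
}
```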
 
Hopefully when supply eases and we see more SKUs from both sides, we'll see better prices.

RT is all well and good, but AMD have nailed rasterisation performance; such an improvement gen on gen is very, very impressive in this day and age. Time will tell which approach pays dividends, and both AMD and NV will be design-locked for their next-gen cards as well.

We've yet to see AMD's image reconstruction; that too could be surprisingly good. It's not like Nvidia had it right from launch in the handful of games that support it.
 
Nvidia has a good AI denoiser which helps reduce the rays needed for an image. This runs on the tensor cores; AMD hardware has no tensor cores for AI

This was refuted by rroff (a member on this forum) who has hands-on experience with RTX projects... Nvidia implements Spatiotemporal Variance-Guided Filtering (SVGF), which is performed using FP32 ALUs. I intended to read it but couldn't; maybe I will pick it up on one of these weekends after I get my graphics card.
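For reference, the spatial half of SVGF-style filtering is an edge-aware blur that runs fine on ordinary FP32 shader ALUs. Below is a toy 1D version, nothing like the real multi-pass à-trous filter guided by variance, normals and depth; it's only to show that plain float maths is enough and no tensor hardware is required.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Toy edge-aware smoothing of a noisy 1D signal: neighbours are weighted by
// value difference, so edges are preserved while noise in flat regions is
// averaged out. Plain float maths only. (Illustrative; the real SVGF filter is
// a multi-pass a-trous wavelet filter guided by variance, normals and depth.)
std::vector<float> edgeAwareBlur(const std::vector<float>& in, float sigmaValue) {
    std::vector<float> out(in.size());
    const int radius = 2;
    for (int i = 0; i < static_cast<int>(in.size()); ++i) {
        float sum = 0.0f, weightSum = 0.0f;
        for (int k = -radius; k <= radius; ++k) {
            int j = i + k;
            if (j < 0 || j >= static_cast<int>(in.size())) continue;
            float diff = in[j] - in[i];
            float w = std::exp(-(diff * diff) / (2.0f * sigmaValue * sigmaValue));
            sum += w * in[j];
            weightSum += w;
        }
        out[i] = sum / weightSum;
    }
    return out;
}

int main() {
    // Noisy step signal: low values, then a sharp edge to high values.
    std::vector<float> noisy = {0.1f, 0.0f, 0.2f, 0.1f, 0.9f, 1.0f, 0.8f, 1.0f};
    std::vector<float> filtered = edgeAwareBlur(noisy, 0.3f);
    for (std::size_t i = 0; i < filtered.size(); ++i)
        std::printf("%.2f ", filtered[i]);
    std::printf("\n");
    return 0;
}
```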
 
This was refuted by rroff (a member on this forum) who has hands-on experience with RTX projects... Nvidia implements Spatiotemporal Variance-Guided Filtering (SVGF), which is performed using FP32 ALUs. I intended to read it but couldn't; maybe I will pick it up on one of these weekends after I get my graphics card.

You mean he refuted this paper https://research.nvidia.com/publica...rlo-image-sequences-using-recurrent-denoising.

Video here https://drive.google.com/file/d/0B6eg_ib7k4PoaVhWN3VYRHo0NDQ/view

https://declanrussell.com/portfolio/nvidia-ai-denoiser/

Source code here https://github.com/DeclanRussell/NvidiaAIDenoiser

V-Ray for 3ds Max

Video: "Reducing render times by 92%! Denoiser showdown"
 