
RTX performance overhead - 9.2ms (ouch!)

You keep quoting this, but after you said it last time, the only thing I could find was one article stating they had Titan V's, but the features were not unlocked?
This is the thing: devs have stated that they only had a couple of weeks to work with the first iterations that were shown, so either the devs are telling porkies and Poneros knows it all, or Poneros knows bugger all and the devs are correct.
 
This is the thing: devs have stated that they only had a couple of weeks to work with the first iterations that were shown, so either the devs are telling porkies and Poneros knows it all, or Poneros knows bugger all and the devs are correct.

Hmmmm, I wonder which is more likely. ;)
 
You keep quoting this, but after you said it last time, the only thing I could find was one article stating they had Titan V's, but the features were not unlocked?

I've also read that they were working on the features from February using Titan V's, and only got the Turing cards a couple of weeks before the launch event. Just can't remember where I read it!! Anandtech maybe?
 
You keep quoting this, but after you said it last time, the only thing I could find was one article stating they had Titan V's, but the features were not unlocked?

Nvidia seeded developers with Titan V hardware earlier this year - but this lacks specific ray tracing acceleration. Cards can be used in parallel to offer up something close to actual RTX performance but the bottom line is this: DICE had just two weeks with final hardware, which was dubbed simply as an 'Nvidia Graphics Device' in the device manager. In short, the developer wasn't even sure which RTX device they were working with. And as we shall discover, there's still plenty of work to do before launch in optimising an already impressive showing.
[...] During development, the team at DICE were developing the game on Titan V cards

This is the thing: devs have stated that they only had a couple of weeks to work with the first iterations that were shown, so either the devs are telling porkies and Poneros knows it all, or Poneros knows bugger all and the devs are correct.

Hmmm.
 
From your own article that you are using as backup.

Nvidia seeded developers with Titan V hardware earlier this year - but this lacks specific ray tracing acceleration. Cards can be used in parallel to offer up something close to actual RTX performance but the bottom line is this: DICE had just two weeks with final hardware, which was dubbed simply as an 'Nvidia Graphics Device' in the device manager.

So yer, hmmmm indeed!
 
Well, maybe it does, but go look at the Tomb Raider bench on here: the 1080 Ti is only getting mid-80s frame rates at 1080p, so if they can get 60fps at 1080p with the RTX effects on then it's only about a 25-30% drop for the better effects, which in my opinion isn't that bad.
No they're not, they're getting 80fps at 4x supersampling; going TAA I get 150fps.
 
OK, so using very good quality AA you get 80, and yet people are moaning about only getting 60 with realistic ray tracing effects.

Guess what: if you don't want to use them, turn them off and go back to your inferior AA method and higher frame rates.

The point was that RTX is going to cripple frame rates. Well, compared to running with all the bells and whistles on, not so much: maybe a 25-30% drop-off according to what we have seen so far.
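To put rough numbers on that, here's a minimal sketch using the figures quoted in this thread (the mid-80s baseline and the 60fps RTX figure are the thread's assumptions, not measurements):

```cpp
#include <cstdio>

// Relative frame-rate drop implied by the numbers quoted above:
// a mid-80s FPS baseline without RTX vs an assumed 60 FPS with it.
double drop_percent(double base_fps, double rtx_fps)
{
    return (1.0 - rtx_fps / base_fps) * 100.0;
}

int main()
{
    std::printf("80 -> 60 FPS: %.0f%% drop\n", drop_percent(80.0, 60.0)); // 25%
    std::printf("85 -> 60 FPS: %.0f%% drop\n", drop_percent(85.0, 60.0)); // ~29%
    return 0;
}
```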

I'll just leave this here, to demonstrate how difficult actually doing ray tracing can be.

[Image: simple-rtx-bench.jpg]

What I find most impressive is that the 2080 isn't that far behind the 2080 Ti, considering the difference in tensor and RT cores.

2080 Ti: 4352 CUDA cores, 544 tensor cores, 68 RT cores
2080: 2944 CUDA cores, 368 tensor cores, 46 RT cores
2070: 2304 CUDA cores, 288 tensor cores, 36 RT cores

So even the 2070 is going to absolutely spank the 1080 Ti. (Yes, I do believe the top 1080 in the chart is supposed to say 1080 Ti.)
 
Yeah, that is the one; it states that the Titans lacked the hardware acceleration and it had to be run in software.

I thought the V’s had the hardware, unless they mean it was not available to them.

Well, it lacks the RT cores of Turing, so it doesn't run as fast, but as far as the implementation goes, it's still the same thing. So in that respect Titan V doesn't have the hardware for it, though it obviously can still run on it, just as all the demos can run on Pascal and other GPUs.
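In API terms, that distinction looks something like the following. This is a minimal D3D12 sketch (assuming a valid ID3D12Device created elsewhere and a DXR-era Windows 10 SDK) showing how an application asks whether the GPU reports a native ray tracing tier; a card without RT cores can still run the same DXR workloads on general-purpose compute, which is why the demos run on Titan V and Pascal:

```cpp
#include <d3d12.h>

// Minimal sketch: ask D3D12 whether the device reports a native DXR
// tier. Assumes `device` is a valid ID3D12Device* created elsewhere.
bool HasHardwareRaytracingTier(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false; // Pre-DXR runtime: no tier reported at all.

    // Turing's RT cores show up as tier 1.0+. A card reporting no tier
    // can still run the same algorithms via compute-based fallbacks.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```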

My point, in any case, was merely to point out that DICE hadn't taken only a few days to implement RT in BF V, but rather had worked on it for months before. So people who are pinning their hopes on performance improving massively any time soon, based on "they only had it for 3 days, so of course it couldn't run well", are being overly optimistic about the outcome.

OK, so using very good quality AA you get 80, and yet people are moaning about only getting 60 with realistic ray tracing effects.

You're not going to get 60fps, you're going to get 45ish from 80 once you add the assumed 9.2ms: at 80fps each frame takes 1000ms / 80 = 12.5ms, and 12.5ms + 9.2ms = 21.7ms per frame, which works out to 1000 / 21.7 ≈ 46fps. That is of course a conservative scenario, but as we don't have any better numbers, let's assume it is the "worst case scenario".
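That frame-time arithmetic as a runnable sketch (the 9.2ms figure is the thread's assumed RTX overhead, not a measured one):

```cpp
#include <cstdio>

// Frame-time maths from the post above: a fixed per-frame cost added
// to a baseline frame rate, then converted back to FPS.
double fps_with_overhead(double base_fps, double overhead_ms)
{
    double base_ms = 1000.0 / base_fps;      // 80 FPS -> 12.5 ms/frame
    return 1000.0 / (base_ms + overhead_ms); // 12.5 + 9.2 = 21.7 ms
}

int main()
{
    std::printf("%.1f FPS\n", fps_with_overhead(80.0, 9.2)); // ~46.1 FPS
    return 0;
}
```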
 
You seem to be confused about the topic. The initial point was that development didn't start when they received the RTX cards. Are you disputing that?
Not confused at all, and DICE had two weeks with the hardware. RT has been around for years, but that doesn't mean DICE had years to work with it. They were given the hardware and have worked with it in that time.
 
Well, it lacks the RT cores of Turing, so it doesn't run as fast, but as far as the implementation goes, it's still the same thing. So in that respect Titan V doesn't have the hardware for it, though it obviously can still run on it, just as all the demos can run on Pascal and other GPUs.

At best, you're clutching at straws. At worst, you're making an invalid statement. Even in your above sentence, you're claiming that software and hardware are the same thing on a fundamental level. The whole statement is messed up.

Implementing ray tracing strictly as a CUDA/tensor function doesn't tell us anything, so if you base performance off this alone you're (royal you) leading people up the garden path.
 
At best, you're clutching at straws. At worst, you're making an invalid statement. Even in your above sentence, you're claiming that software and hardware are the same thing on a fundamental level. The whole statement is messed up.

Implementing ray tracing strictly as a CUDA/tensor function doesn't tell us anything, so if you base performance off this alone you're (royal you) leading people up the garden path.

Take a deep breath, and relax. You're projecting a lot of things onto what I said.

If you think the CUDA/Tensor implementation tells us nothing at all, then you've lost track of what words mean. Clearly it does tell us something, especially because we also have data points where that exact implementation is compared to the one with RT cores: it's called the SW demo, and you can very clearly see the performance differential. If you want to hand-wave that away as "nothing" and instead rely solely on your imagination, feel free. I'll stick to the data available.
 