
Hardware Unboxed: Why No ReBAR Testing?


Edit: the ReBAR discussion starts at 16:30.

Steve wasn't looking his usual 'couldn't give a Castlemaine XXXX' self! :D

He contradicts his own data a few times too.

Just say it, Steve: like RT'ing, it gives one vendor an advantage!

Testing one vendor's feature set (DLSS) but not AMD's (SAM) doesn't shine you in the brightest light; imo it's stubbornness rather than bias.

HUB seem to me to have been captured by Nvidia; they narrate the sort of testing methodology and reasoning Nvidia would script.

Take the video below as an example: people called them out for some odd Halo Infinite results, in that most reviews had the AMD GPUs significantly ahead of the Nvidia GPUs, whereas HUB had Nvidia winning by a small amount.

Despite others like Digital Foundry saying AMD's GPUs tended to do better in more complex scenes, and actually showing this in a side-by-side comparison, HUB reasoned their results were because Nvidia GPUs did better in more complex scenes.

I think there is a serious problem with the HUB analysis: the scenes HUB used were not that demanding. They were only slightly more demanding than not at all, with maybe six NPCs on screen and not a lot going on.

This is critical: it puts some load on the GPU, but not a lot. By putting some, but not much, load on the GPU they move enough work away from the CPU without ever stressing the GPU. As we all know, Nvidia GPUs do not handle high CPU loads anywhere near as well as AMD, so in a very complex scene with a lot of NPC activity AMD's draw-call efficiency wins out and delivers higher performance. That's why most other reviewers got higher AMD performance in complex scenes.
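To illustrate why the scene choice matters, here's a crude bottleneck model, purely for illustration: assume frame time is simply whichever of the CPU or GPU finishes last, and assume the Nvidia driver adds some extra work on the CPU side. The overhead percentage and the millisecond figures are made up for the example, not measurements.

# Illustrative only: crude bottleneck model of frame time.
# Assumes frame time = max(CPU time, GPU time) and that the driver
# adds a fixed percentage of extra work on the CPU side.
# All numbers are assumptions, not measurements.

def fps(cpu_ms, gpu_ms, cpu_overhead=0.0):
    frame_ms = max(cpu_ms * (1 + cpu_overhead), gpu_ms)
    return 1000 / frame_ms

# Light test scene: few NPCs, CPU barely loaded, GPU does most of the work.
print(fps(cpu_ms=5, gpu_ms=10))                    # ~100 fps
print(fps(cpu_ms=5, gpu_ms=10, cpu_overhead=0.3))  # still ~100 fps, overhead hidden

# Heavy scene: lots of NPCs and draw calls, the CPU becomes the limit.
print(fps(cpu_ms=12, gpu_ms=10))                    # ~83 fps
print(fps(cpu_ms=12, gpu_ms=10, cpu_overhead=0.3))  # ~64 fps, overhead now visible

In a light scene both vendors look identical; only a CPU-bound scene exposes any driver overhead, which is exactly why the choice of test scene matters so much.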

To me all of this seems far too contrived to be a coincidence or a simple lapse in knowledge and understanding on HUB's part.


 
Once you enable it in the BIOS you can toggle it on or off in Adrenalin; no real effort involved. They would then have to do the same with Nvidia, so I'm guessing it just isn't worth the extra work. That being said, you don't have to listen to the bias that Steve spouts. You can just read the graphs yourself and make up your own mind. Off on both is apples to apples.
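For anyone who wants to verify ReBAR is actually active rather than just trusting the toggle: on Windows, tools like GPU-Z report a Resizable BAR field, and on Linux you can infer it from the GPU's BAR sizes, since without ReBAR the VRAM aperture is typically only 256MB while with it the aperture covers most or all of the VRAM. A rough sketch of the Linux check (assumes lspci is installed; the 1GB threshold is just a heuristic):

# Rough sketch: infer Resizable BAR from GPU BAR sizes via lspci (Linux).
# Heuristic assumption: a multi-GB memory region means ReBAR is active,
# a 256MB aperture means it is not. May need root for full -vvv output.
import re
import subprocess

out = subprocess.run(["lspci", "-vvv"], capture_output=True, text=True).stdout
gpus = [b for b in out.split("\n\n")
        if "VGA compatible controller" in b or "3D controller" in b]

for gpu in gpus:
    print(gpu.splitlines()[0])
    for line in gpu.splitlines():
        m = re.search(r"Region \d+: Memory.+\[size=(\d+)([MG])\]", line)
        if m:
            size_mb = int(m.group(1)) * (1024 if m.group(2) == "G" else 1)
            status = "large BAR (ReBAR likely active)" if size_mb >= 1024 else "small BAR"
            print(f"  {line.strip()}  -> {status}")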
 
Despite others like Digital Foundry saying AMD's GPUs tended to do better in more complex scenes, and actually showing this in a side-by-side comparison, HUB reasoned their results were because Nvidia GPUs did better in more complex scenes.

Link please :)
 
Not even sure why this is a topic of conversation.

Resizable BAR testing should be done across the board.

Making Nvidia look bad... well, Nvidia turn it on or off in the drivers, so in theory if it benefits Nvidia in a game it will be on, and if it regresses performance it will be off.

I am cool with the fact that Resizable BAR gives AMD an advantage in a game where Nvidia has it turned off. That's fine. That's Nvidia's issue to address as far as I am concerned.

As long as any tests are done across AMD / Intel CPUs with both AMD and Nvidia GPUs then I think everything is fair game.

I have Resizable BAR turned on as standard, for example.
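On the 'across the board' point, the full matrix people are asking for isn't huge; just as a sketch of the permutations per game (the labels are placeholders, not specific parts):

# Sketch of the test matrix implied above; labels are placeholders.
from itertools import product

cpus  = ["AMD CPU", "Intel CPU"]
gpus  = ["Radeon", "GeForce"]
rebar = ["ReBAR/SAM off", "ReBAR/SAM on"]

configs = list(product(cpus, gpus, rebar))
print(len(configs))  # 8 configurations per game, per resolution
for cpu, gpu, state in configs:
    print(f"{cpu} + {gpu} + {state}")

That is double the usual off-only run count, which is presumably the extra work being objected to.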
 
I'll say it again: AMD's thread scheduler is on-chip, it's hardware based; a dedicated piece of silicon on the GPU does the work.
By comparison, Nvidia use software in the driver, so your CPU does the same work. That is a higher load, around 30% higher on the CPU vs AMD, so high CPU loads, like heavy NPC action or moving through the world at high speed, will result in AMD frame rates being more consistent, or as DF put it 'tighter', i.e. the frame rates do not drop as much as they do on Nvidia.
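To put a rough number on that (taking the ~30% figure at face value, which is an assumption here, and picking an arbitrary baseline): in a scene that is fully CPU-bound, frame rate scales inversely with CPU frame time, so the extra driver work maps straight onto lost fps.

# Illustrative arithmetic only: extra driver CPU cost in a fully CPU-bound scene.
# The 30% overhead and the 70 fps baseline are assumptions, not measurements.
amd_fps = 70                    # hypothetical CPU-bound dip on the AMD card
overhead = 0.30                 # assumed extra CPU cost of software scheduling
nvidia_fps = amd_fps / (1 + overhead)
print(round(nvidia_fps, 1))     # ~53.8 fps in the same scene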

HUB know this and IMO they are full of crap.
 
I'll say it again: AMD's thread scheduler is on-chip, it's hardware based; a dedicated piece of silicon on the GPU does the work.
By comparison, Nvidia use software in the driver, so your CPU does the same work. That is a higher load, around 30% higher on the CPU vs AMD, so high CPU loads, like heavy NPC action or moving through the world at high speed, will result in AMD frame rates being more consistent, or as DF put it 'tighter', i.e. the frame rates do not drop as much as they do on Nvidia.

HUB know this and IMO they are full of crap.

Almost sounds like a Freesync vs Gsync debate just reversed.
 
Cheers! I'm not discounting the video, but I do wish they'd have shown the 6800 XT and 3080 in combat, since they said the drops were across all GPUs. They did also mention more 1% spikes for AMD, but I didn't really see much in their frame-time graph. They did show the 6800 XT/3080 in the driving scene. To me it looks like crap drivers from Nvidia, but then I've not been impressed with the performance of any GPUs (imo) in Halo Infinite. I'm not blown away by the game enough to think 70 fps on a 6800 XT while they were driving around is good, and let's not mention the laughable 55 fps of the 3080.
 
Almost sounds like a Freesync vs Gsync debate just reversed.

A hardware vs software solution, sure, but that is where the similarities end; in extreme circumstances it can have a dramatic effect.

And yes, while extreme circumstances are not the norm, there is an ocean of grey in between, so in some situations it can have your performance dipping 10 FPS more than someone with a 6800 XT, and maybe even more than that might not be uncommon. If GPUs keep outpacing CPU performance it will just keep getting worse for Nvidia if they don't change.

Here is the extreme.

RTX 3090: 100% performance
RX 5700XT: 117% performance

Notice the 6900 XT is also strangled in this, but it's still 20% faster.

It's the sort of thing you would see; one can see it in this very case, where Nvidia's software thread scheduling would cause 10, 20, even 30% deeper dips than AMD. It's not the end of the world, as it's only likely to happen during intense NPC scenes or anything that relies heavily on the CPU, but when what you want is frame-rate consistency... yeah, it's a problem.

 
They should test the thread scheduler with a higher-end chip as well. Testing with an older low-end chip just magnifies the difference to make it look bad. How many PCs have a 3090 and a 1600X?
 
Well, I don't like saying "I told you so", but I said the same thing in the Halo Infinite thread...

With that poll in their Discord I can only assume there is some level of continued backlash over those Halo Infinite results. And with people like me disregarding his recent GPU review, I don't think their credibility is looking good. And things haven't just "blown over" by now.
 
If it's available, you would test it, surely? No idea how easy SAM is to include, but if ReBAR is included, SAM certainly should be.

Though I did see the sense in showing results without it, since if you didn't have an AMD processor it wasn't available. Now that it works across the board (I think), there's not really a reason to avoid it. Although it would be good to have a with-and-without comparison, as that would show where ReBAR and SAM actually regress performance too.
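If someone does collect with/without numbers, the comparison being asked for is just a per-game percentage delta; a minimal sketch (all the game names and fps figures below are made up for illustration):

# Minimal sketch: per-game % change from enabling ReBAR/SAM.
# All entries are made-up placeholders, not real benchmark results.
results = {
    # game: (avg fps with ReBAR off, avg fps with ReBAR on)
    "Game A": (100, 112),
    "Game B": (90, 88),
    "Game C": (140, 141),
}

for game, (off, on) in results.items():
    delta = (on - off) / off * 100
    label = "regression" if delta < 0 else "uplift"
    print(f"{game}: {delta:+.1f}% ({label})")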
 
I'm surprised by the number of people in that poll that said they don't care.

Yeah me too.

"No, i don't want to know if i can get better performance by simply pressing a button, thank you very much, good day sir"
 
I'm surprised by the number of people in that poll that said they don't care.
Probably people that cannot use the feature, due to incompatible hardware.

No one is going to say no to free performance of 5-20% in games.
 
Yeah me too.

"No, i don't want to know if i can get better performance by simply pressing a button, thank you very much, good day sir"

Exactly. It's the job of review sites like HUB to show all the different permutations so viewers get the full picture. I don't understand why people wouldn't want to know all the information. Surely it's better to be fully informed? If ReBAR benefits one GPU vendor over another, or causes performance problems in one game or another, it's better to know this, isn't it?

I can kind of understand what HUB are trying to say when they tell us they don't want to test with it enabled by default, because it affects different setups differently and may cause issues with some titles and not others, but I don't agree with them that it shouldn't be tested because of that. I want to know if ReBAR benefits my particular setup! And if I were in the market for a new GPU, a big swing in performance for one GPU vendor over another in games I want to play might make all the difference.
 