
New consoles = better CPU optimization?

Are those on the latest drivers? I'm looking at this one:

(benchmark chart: 96msYPR.png)

(Source: ht4u)

That's max settings, no AA, on the latest AMD and Nvidia drivers.

Why is it you only ever see the most AMD-positive benchmarks?
I own a 7970 1GHz, but I'm not going to kid myself lol.

Even assuming said benchmark is legit (in that everyone is running those cards at that resolution and those settings) and that AMD has gained a lead because of the MSAA, that setup with 4x MSAA is in the vast minority. To say the 7970 is besting the 780/Titan is wrong, as it's doing so in one of about a hundred setups.

I can explain this one for you. Quite simply, AMD is faster at the default Ultra setting. There is a custom DDOF filter you can use; it adds no image-quality improvement, and all it does is lower fps significantly. Nvidia is slightly faster if this setting is used. I prefer the original setting, as do many others.

Unfortunately paid reviews almost always pick the option that favours Nvidia. Ad money. ;)

It's a bit like when they do Tomb Raider benchmarks and disable TressFX. Yes, let's just disable part of DirectX 11 because it hurts one side's performance.

Anyway, don't get me started. :p
 
The GPU does compute (the PS4's particularly), and the CPU and GPU can both read and write to the same memory pool.

The PC is at a disadvantage when it comes to this, and it's potentially a big one.
 
Meaningless without knowing what the corresponding min/avg frame rates were. High VRAM usage is one thing; high VRAM usage while maintaining 60fps is quite another.

The game engine and GPU drivers won't have been optimized yet, so the figures would mean nothing anyway.
 
> Even assuming said benchmark is legit (in that everyone is running those cards at that resolution and those settings) and that AMD has gained a lead because of the MSAA, that setup with 4x MSAA is in the vast minority. To say the 7970 is besting the 780/Titan is wrong, as it's doing so in one of about a hundred setups.

I was mentioning this game specifically because if a 7970 could match a Titan in this particular game, there's no reason to think the new GPU couldn't beat one in BF4. It'd be ridiculous to say that a 7970 can generally beat a Titan.
 
Who mentioned new GPUs against the Titan for BF4?
I'd be surprised if it, the 9970, didn't at least have parity with the Titan in BF4.

And I didn't mean in general; I meant BioShock.

And I don't buy LTMatt's explanation, as the other results don't make any sense with it. The explanation makes sense on its own, but the results don't support it.
 
> Caching only occurs during the second map loading. ;)

Negative. It probably caches more upon map changing, as there's more available, but when I was testing my 680s and 7950s, the 7950s at 5760x1080 immediately started using 2.3-2.4GB, whereas the 680s were using 1.9GB+.
 
> And I don't buy LTMatt's explanation, as the other results don't make any sense with it. The explanation makes sense on its own, but the results don't support it.

Generally speaking, AMD is faster with the Ultra preset's post-processing setting. Nvidia is faster with the custom post-processing setting (DDOF).

There is no noticeable difference between the two, except that the latter takes a significant hit on fps. Unfortunately most benchmark sites favour the latter option.

Post-processing "Normal" = Ultra stock. It changes to "Custom" if you use the DDOF filter.

(screenshot: ydTjWkL.jpg)
 
> Negative. It probably caches more upon map changing, as there's more available, but when I was testing my 680s and 7950s, the 7950s at 5760x1080 immediately started using 2.3-2.4GB, whereas the 680s were using 1.9GB+.

I don't find that, even to this day, and I keep an eye on my VRAM usage. Caching only occurs once the current map has ended and the new map is being loaded. During that time the old map is cached and the usage creeps up. The process repeats every map change until it reaches a plateau.

I also don't believe there was any caching, or any serious amount of caching, in that BF4 benchmark, as the game download was only 2GB. There would have only been one map included, so caching of other maps would not have been possible.
 
> I don't find that, even to this day, and I keep an eye on my VRAM usage. Caching only occurs once the current map has ended and the new map is being loaded. During that time the old map is cached and the usage creeps up. The process repeats every map change until it reaches a plateau.
>
> I also don't believe there was any caching, or any serious amount of caching, in that BF4 benchmark, as the game download was only 2GB. There would have only been one map included, so caching of other maps would not have been possible.

Just reporting what I found having tested both 256-bit 2GB 680s and 384-bit 3GB 7950s. The VRAM usage was different from the off, and way outside margin-of-error territory as well.

As I said, it probably caches differently based on the amount of memory available, but it definitely still occurs to some degree immediately.

Anyway, I'm not allowed to talk about GPUs until the middle of September so I'll make a swift exit :p.
 
LT, the results don't actually give that impression.

http://www.techspot.com/review/655-bioshock-infinite-performance/page4.html

You said it gives them a slight advantage if they are running that custom setting, but there isn't a slight difference there.

And then in the results that Teppic posted, the FPS figures are all down, and the Nvidia isn't ahead (although the 7970 is about the same FPS as Techspot's).
Techspot says Ultra DDOF.

Neither of those supports the explanation.
 
> LT, the results don't actually give that impression.
>
> http://www.techspot.com/review/655-bioshock-infinite-performance/page4.html
>
> You said it gives them a slight advantage if they are running that custom setting, but there isn't a slight difference there.
>
> And then in the results that Teppic posted, the FPS figures are all down, and the Nvidia isn't ahead (although the 7970 is about the same FPS as Techspot's).
>
> Neither of those supports the explanation.

Just when I thought I was out of the game for good, Martini drags me back in.

You know they're using the custom DDOF filter because the 680 is faster than the 7970. If they were using the standard filter, the AMD card would likely be slightly faster.

I know this as I've seen a few benchmarks where the custom DDOF filter is not used, and the AMD card equivalents were always ahead.
 
Our definitions of "slightly" differ, then.
Teppic's shows a slight advantage to the 7970 (with lower FPS than Techspot's).
Techspot's doesn't show a slight advantage.
 
Techspot's usually very reliable, I think. I wouldn't imagine they'd enable DDOF just to make Nvidia cards come out better (some sites might), but rather because they thought it improves the Ultra setting. They list it as using DDOF on top of Ultra. That said, if it dramatically changes the benchmarks, that needs pointing out in the review.
 