
Both RX Vega 64 & 56 Beat GTX 1080 Ti in Forza 7 DX12 benchmarks

Does the history really matter?

Fact is, DX12 has failed spectacularly thus far.

Three actual DX12 games.
A handful of games where a DX12 renderer has been patched in, with varying degrees of success from woeful to excellent, and it really does seem to depend on the user, with no rhyme or reason to its success or failure.

Maybe it will improve in the future; maybe we'll go straight to DX13. Let's be honest, only Nvidia knows the answer!

The market has changed, and maybe the DX12 standard is too variable. Patching some DX12 features into a DX11-based codebase probably isn't going to offer that much.

Is DX12 even a standard, when it seems you can pick and choose features while still claiming to offer DX12?
 
Tier 2 is the max for resource heaps:

typedef enum D3D12_RESOURCE_HEAP_TIER {
    D3D12_RESOURCE_HEAP_TIER_1 = 1,
    D3D12_RESOURCE_HEAP_TIER_2 = 2
} D3D12_RESOURCE_HEAP_TIER;

D3D12_RESOURCE_HEAP_TIER_1

Indicates that heaps can only support resources from a single resource category. For the list of resource categories, see Remarks. In tier 1, these resource categories are mutually exclusive and cannot be used with the same heap. The resource category must be declared when creating a heap, using the correct D3D12_HEAP_FLAGS enumeration constant. Applications cannot create heaps with flags that allow all three categories.


More info on what each tier needs to reach the full Tier 3 - Vega supports them all:
[image: WJGR3JU.jpg - tier requirements table]


Thanks for clearing the above up.

The real tragedy of DX12, though, is that devs won't go to the effort and expense of using all the resources even when they are available. mGPU springs to mind: dev support for it in DX12 is very poor, and this is not going to change any time soon, not until the card vendors realise that they can no longer get significant performance gains from die shrinks and optimisations alone.
 
Thanks for clearing the above up.

The real tragedy of DX12, though, is that devs won't go to the effort and expense of using all the resources even when they are available. mGPU springs to mind: dev support for it in DX12 is very poor, and this is not going to change any time soon, not until the card vendors realise that they can no longer get significant performance gains from die shrinks and optimisations alone.

No Problem :D
Well, FP16 is already going to be used - Wolfenstein 2 and Far Cry 5.
 
Thanks for clearing the above up.

The real tragedy of DX12 though is that devs won't go to the effort and expense of using all the resources even if they are available. mGPU springs to mind with very poor dev support in DX12 and this is not going to change anytime soon until the card vendors realise that they can no longer get significant performance gains by relying on die shrinks and optimisations alone.

Why would developers not want to get the most performance possible from the product? They reach a bigger market by doing that. Please tell me which game developers CBA taking advantage of PC hardware, because if what you're saying is commonplace then the PC is done as a platform.
 
Why would developers not want to get the most performance possible from the product? They reach a bigger market. Please tell me which game developers CBA taking advantage of PC hardware, because if what you're saying is commonplace then the PC is done as a platform.

As I said above, mGPU support in DX12 is a total joke for both vendors' cards. Unfortunately there is very little they can do about it, as in DX12 it is the game devs who have the responsibility to provide the support.
 
As I said above, mGPU support in DX12 is a total joke for both vendors' cards. Unfortunately there is very little they can do about it, as in DX12 it is the game devs who have the responsibility to provide the support.

Well, you say that, but I've seen mGPU running with an AMD card and an Nvidia card together, and the performance was pretty staggering.
 
Why would developers not want to get the most performance possible from the product? They reach a bigger market by doing that. Please tell me which game developers CBA taking advantage of PC hardware, because if what you're saying is commonplace then the PC is done as a platform.
Oh look, here comes Forza 7. From the benches we have seen so far, it looks like that developer couldn't be bothered to get the most performance from the product.
 
Ignoring the GPU aspect, it only runs on a single thread on the CPU, so how is that taking advantage of the PC hardware?
 
Well, you say that, but I've seen mGPU running with an AMD card and an Nvidia card together, and the performance was pretty staggering.
It works great with lots of games that support the feature, typically older games. I really want to try SLI, but realistically it's been dropped like a hot potato by both vendors, and going forward it's not offering much.
 
It works great with lots of games that support the feature, typically older games. I really want to try SLI, but realistically it's been dropped like a hot potato by both vendors, and going forward it's not offering much.

Then you have to ask why. My hunch is that Nvidia suck at mGPU and no amount of driver hand-holding helps, or that developers just don't want to get stuck maintaining a deep driver stack. That, or Nvidia feel they can make more money by pushing cards into higher price brackets. It's probably a bit of everything.
 
If that were the case, wouldn't Intel smash Ryzen's performance? After all, Intel has better single-thread performance.
https://segmentnext.com/2017/10/03/forza-7-cpu-benchmarks/


Well, it has been confirmed by the developer:

Hello everyone,

Some users may notice that the game utilizes nearly 100% of one of their processor cores. This is expected behavior; we intentionally run in this manner so we can react as fast as possible in order to minimize input latency. Users on power-constrained devices, such as laptops and tablets, might want to use a Performance Target of “30 FPS (V-SYNC),” which will reduce processor usage and minimize power consumption.
Turn 10 | Community Liaison | Gamertag: SOY

https://forums.forzamotorsport.net/...tilizes-only-one-thread-core.aspx#post_769942
 