
Could AMD CPUs be viable with DX12?

For gaming I can understand why Intel is dominant, but why is AMD still lagging in the corporate market? I bet a 6- or 8-core AMD CPU would seriously beat any dual- or quad-core Intel in corporate environments where people are just using MS Word/Excel, etc. Cost-wise I'm sure AMD is priced pretty competitively, yet many businesses still use Intel.
 
If DX12 frees up CPU time in one area, game devs will "make use" of whatever is freed up. I say "make use", but it could just become laziness in optimising; after all, we already have some DX11 games which multithread very well, but the vast majority don't.

I think you will continue to need a high-end CPU to go with a high-end GPU setup.

Apparently Zen is dropping the split-resources-per-pair-of-cores approach, so it should at least be vaguely competitive again, but pricing-wise it's still a complete unknown.
 

Exactly. Just like monetary inflation.
 
Why is AMD still lagging in the corporate market? I bet a 6- or 8-core AMD CPU would seriously beat any dual- or quad-core Intel in corporate environments where people are just using MS Word/Excel, etc. Cost-wise I'm sure AMD is priced pretty competitively, yet many businesses still use Intel.

Kickbacks. Intel spent billions paying companies to use their chips exclusively. One of those companies was Dell, who at the time were one of the largest producers of both home and corporate hardware in the world. Most corporate environments, in the West at least, used Dell.

Intel have been taken to court several times by AMD over practices like that. Intel settled one case for a few million, and after fighting another they ended up having to pay AMD over a billion dollars.

AMD, back in the day, had the better chips. Intel started shelling out money in kickbacks for exclusivity (we're talking billions in total, with millions paid out to multiple companies, of which Dell was just one; other Asia-based manufacturers allegedly received kickbacks too). By the time the Core architecture came out, with all that kickback money spent, the damage was done, and AMD have never really recovered, despite being a much more cost-effective option (if not a performance one).

And Intel has the bank balance to fight this stuff in court; AMD doesn't. Intel spent well over 100 million fighting a case they lost, in which AMD was awarded over 10 million. Then there was the billion-plus payout to AMD. And Intel can advertise and market. Everyone associates quality with "Intel Inside", as if AMD were some Asian knock-off, because they've never heard of it. When's the last time you saw an AMD advert on TV? I can't remember ever seeing one, but I certainly remember seeing plenty of Intel ones.
 
Yes, good points to refresh, OhEsEcks. Instead of posts demanding that AMD bring something to the table to compete with Intel, people need to understand why AMD are behind in the first place.
 
So far we only have one DX12 game to test, Ashes of the Singularity, and only a demo benchmark is available, so it's far too early to draw conclusions.

Nonetheless, this is the best we have to go on at the moment:

[Chart: Ashes of the Singularity CPU benchmark results (heavy preset), DX11 vs DX12]


Source: http://www.pcper.com/reviews/Graphics-Cards/DX12-GPU-and-CPU-Performance-Tested-Ashes-Singularity-Benchmark/Results-Heavy

As the above illustrates, nothing much has changed: Intel dual cores still beat AMD's 8-module CPUs. AMD and Intel CPUs both get a huge boost from running DX12 code, with Intel still comfortably in the lead.

One must also remember that AMD FX CPUs sit on a very dated platform compared to the Z170 and X99 chipsets, and Intel's performance per watt is still far superior to the FX CPUs'.

Hopefully Zen, once it's released in late 2016/early 2017, will level the playing field; until then I wouldn't recommend an FX CPU to any gamer.
 
Not sure what to make of that graph; I would be suspicious of any game where an i3 comfortably beats the FX in a "multi-threaded" title. Strange. And where are the i5s?
 

Not looking good for my theory so far :P.

I expected better from an 8 core...
 
Not sure what to make of that graph; I would be suspicious of any game where an i3 comfortably beats the FX in a "multi-threaded" title. Strange. And where are the i5s?

The Ashes of the Singularity developer attempted to address the performance difference between AMD and Intel in the benchmark. I'll copy and paste a detailed post from overclock.net below (credit to the poster "Mahigan" for this analysis of the developer's notes):

Source: http://www.overclock.net/t/1569897/various-ashes-of-the-singularity-dx12-benchmarks/850#post_24335709

---------------------------------------------------------------------------------------
Alright folks,

I have news for you from Tim Kipp over at Oxide on the CPU optimisations found in Ashes of the Singularity. Tim also went into detail about the memory bandwidth issues which can arise (and which may explain the AMD CPU results in the benchmark), as well as other tidbits of information we can use to better understand what is happening behind the scenes to produce the results we are seeing.

Ashes Developer said:
Hi xxxxxxxxx,

Thanks for your interest in the Ashes of the Singularity benchmark.

In order to get an accurate picture of how well a given CPU will perform, it's important to look at the CPU frame rate with Infinite GPU on and off (a check box exists on the benchmark settings panel). Note: while it is on, you may see some graphical corruption due to the use of async shaders; however, the results will be valid.

With Infinite GPU, you should see 90%+ workload on your CPU. In this mode, we do not "wait" when the GPU is still busy. You should see excellent scaling between 4- and 16-thread machines. This can only be tracked on DX12.

Without Infinite GPU, the CPU will "wait" on a signal from the GPU that it is ready to process another frame. During this wait, the CPU tends to power down when there isn't any additional work to do, which effectively serializes a portion of the frame. This serialization is what causes the CPU frame rate discrepancy between Infinite GPU on and off.

In addition, due to this "wait", one interesting stat to track is your power draw. On DX11 the power draw tends to be much higher than on DX12, as the additional serial threads the driver needs to process the GPU commands effectively force the CPU to stay active even if it is only using a fraction of its cores. This tends to be an overlooked benefit of DX12, since the API is designed so that engines can evenly distribute work.


Regarding specific CPU workloads and the differences between AMD and Intel it will be important to note a few things.

1. We have invested heavily in SSE (mostly SSE2, for compatibility reasons), and a significant portion of the engine is executing that code during the benchmark. It could very well be 40% of the frame, possibly more.

2. While we do have large contiguous blocks of SSE code (mainly in our simulations), it is also rather heavily woven into the entire game via our math libraries. Our AI and gameplay code tend to be very math-heavy.

3. The Nitrous engine is designed to be data-oriented (basically, we know what memory we need and when). Because of this, we can effectively utilize the SSE streaming memory instructions in conjunction with prefetch (both temporal and non-temporal). In addition, because our memory accesses are more predictable, the hardware prefetcher tends to be better utilized.

4. Memory bandwidth is definitely something to consider. The larger the scope of the application, paired with going highly parallel, puts a lot of pressure on the memory system. On my i7 3770s I'm hitting close to peak bandwidth on 40% of the frame.


I hope this information helps point you in the right direction for your investigation into the performance differences between AMD and Intel. We haven't done exhaustive comparative tests, but generally speaking we have found AMD chips to compare more favorably to Intel than what is displayed via synthetic benchmarks. I'm looking forward to your results.

# # #
Notes (added as time permits):

- The good news is that there are no AVX optimisations; Oxide have used SSE2 instead, for compatibility reasons, as mentioned. This should give Intel processors only a slight edge, nothing dramatic.

- The better utilization of the hardware prefetcher points to far better performance on Vishera than on Bulldozer; one of Vishera's selling points over Bulldozer was its improved hardware prefetcher. Steamroller did not improve further in terms of prefetching, so the better performance of the A10-7870K cannot be attributed to this factor. We will have to look elsewhere.

- The integer and floating-point register files were increased in size in Steamroller, while load operations (two operands) were compressed to fit a single entry in the physical register file, which helps increase the effective size of each register file over both Bulldozer and Vishera. This gives Steamroller an edge in integer execution, which could account for some of the performance variance between Steamroller and Vishera/Bulldozer. The scheduling windows were also made bigger in Steamroller, allowing greater utilization of execution resources (better for draw-call execution, for example). Together, these improvements could account for the performance increase we see with Steamroller. Steamroller also benefits from around 30% more ops per cycle than Vishera as a result of its improved FPU. The following slide, provided by AMD, gives us a glimpse of some of the improvements that arrived with Steamroller:
[AMD slide: Steamroller architecture improvements]


- Memory bandwidth is also an important part of the equation. The Core i7-3770K is an Ivy Bridge part, most likely paired with a socket 1155 motherboard using the Z77 chipset. The usual memory configuration for an Ivy Bridge part is dual-channel 1600MHz DDR3, which typically allows around 20GB/s of read and write bandwidth. If 40% of the frame is running at peak bandwidth usage, then the same should be considered for an AMD FX-8350 paired with an AMD 990FX chipset, whose peak bandwidth is around 19GB/s. It is no secret that the AMD FX-8350 benefits more than Intel parts from running faster memory (usually 1866MHz is recommended), the architecture being memory-bandwidth starved. Therefore we can conclude that memory bandwidth could be, at least partially, to blame for the performance difference between AMD's FX series and Intel's Core ix series in AotS.
 
For gaming I can understand why Intel is dominant, but why is AMD still lagging in the corporate market? I bet a 6- or 8-core AMD CPU would seriously beat any dual- or quad-core Intel in corporate environments where people are just using MS Word/Excel, etc. Cost-wise I'm sure AMD is priced pretty competitively, yet many businesses still use Intel.

Even back in the days when AMD made the better chips (I had a number of AMD CPUs myself), OEM PCs were almost exclusively Intel. Inertia in the market, I figured: companies, especially when dealing with volume production, stick with what they know and don't like to rock the boat. Better the devil you know, etc.
 

Performance per watt is also very important for businesses, and that is something Intel excel at.
 
Even back in the days when AMD made the better chips (I had a number of AMD CPUs myself), OEM PCs were almost exclusively Intel. Inertia in the market, I figured: companies, especially when dealing with volume production, stick with what they know and don't like to rock the boat. Better the devil you know, etc.

A lot of it was probably down to motherboard chipsets; the better chipsets for AMD platforms tended to come from VIA, which had a bit of a bad reputation. It was common knowledge on forums such as this that Intel chipsets were usually the most stable.
 
If things like HSA take off, AMD could do well with their APUs.
In fact, their APUs are very likely the ones that will benefit most from DirectX 12.
Their problem, however, is that it looks like there will be no new APUs for quite a while.
 