AMD's next big Mantle Game

Beyond Earth


http://www.gamespot.com/articles/beyond-earth-takes-civilization-to-the-stars/1100-6418906/

http://www.gametrailers.com/news-post/73123/sid-meiers-civilization-goes-sci-fi-with-beyond-earth

In addition to its use of DirectX 11, Sid Meier’s Civilization: Beyond Earth will be among the first wave of products optimized for the latest in AMD graphics technologies, including: the new Mantle graphics API, for enhanced graphics performance; AMD CrossFire™, for UltraHD resolutions and extreme image quality; and AMD Eyefinity, which allows for a panoramic gameplay experience on up to six different HD displays off of a single graphics card.

http://www.techspot.com/news/56376-...-is-sid-meiers-civilization-beyond-earth.html

AMD has scored another win for their Mantle API, with upcoming title Sid Meier's Civilization: Beyond Earth to support it alongside DirectX 11. Previous Civilization titles, such as Civilization V, were reasonably CPU intensive, so the reduced CPU overhead and lower-level features of Mantle should help gamers run the game on entry-level hardware including AMD's own APUs.
 
Folks, what exactly does Mantle do to help in games? I've read up a little about it but I'm none the wiser.

Mantle is the only low-level API currently in existence; its purpose is to reduce the CPU bottleneck you get with DirectX. The result is much better performance, especially in minimum FPS and if you have a weaker CPU. It's not as effective if you're running an i7 3960K or 4960X, but then how many of us do?

More on Mantle here
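
If it helps to picture it, here is a rough sketch of where the CPU time goes in the two models. This is hypothetical illustration code, not real Mantle or DirectX calls; every name in it is invented and the "work" numbers are arbitrary stand-ins for driver overhead.

// Hypothetical sketch: per-draw driver work ("DX11-style") vs.
// pre-recorded command buffers ("Mantle-style"). Invented names and
// arbitrary cost numbers; only the shape of the overhead matters.
#include <cstdio>
#include <vector>

struct DrawCall { int mesh; int material; };

static long g_cpuWork = 0;                                       // pretend CPU cycles
void validateAndTranslate(const DrawCall&) { g_cpuWork += 100; } // heavy, per call
void recordIntoBuffer(const DrawCall&)     { g_cpuWork += 5; }   // cheap append
void submitToGpu()                         { g_cpuWork += 50; }  // kick-off cost

// "DX11-style": the driver validates and translates inside every draw call.
void immediateModeFrame(const std::vector<DrawCall>& draws) {
    for (const DrawCall& d : draws) {
        validateAndTranslate(d);
        submitToGpu();                   // per-call submission overhead
    }
}

// "Mantle-style": record cheaply (possibly across threads), submit once.
void commandBufferFrame(const std::vector<DrawCall>& draws) {
    for (const DrawCall& d : draws)
        recordIntoBuffer(d);
    submitToGpu();                       // one submission for the whole frame
}

int main() {
    std::vector<DrawCall> frame(10000, DrawCall{0, 0});  // a draw-call-heavy frame
    g_cpuWork = 0; immediateModeFrame(frame);
    std::printf("immediate mode CPU work: %ld\n", g_cpuWork);
    g_cpuWork = 0; commandBufferFrame(frame);
    std::printf("command buffer CPU work: %ld\n", g_cpuWork);
    return 0;
}

The per-call driver work is what chokes a weak CPU; a 3960K just brute-forces through it, which is why the gains shrink on top-end chips.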

So, this....


 
The Star Swarm demo is a little misleading, as it also supports an option called "deferred contexts" which, when enabled on Nvidia machines, gives a similar performance boost... The most recent Nvidia drivers also give a good boost in this demo.

So far Mantle seems to give around a 5% boost in a lot of situations, and a bigger one when severely CPU limited, like a low-end CPU with a high-end GPU, or CrossFire.

Basically it is a handy little boost in two games, with a few others announced; in all likelihood it will be superseded by DX12.

Still, Civ games have historically been CPU limited, so it's a good choice for Mantle marketing. It will be interesting to see if they use command lists again (they made a pair of 580s exceed a pair of 7970s in Civ5).

Blanket statements. You're being very misleading; "deferred contexts" does not give twice the performance, more like 20%.

Nvidia's new driver only gives a boost where the CPU is already very powerful, like a £450 3960K; it does not help with CPUs that don't cost £450. That's why Nvidia only used the 3960K in their PR.

Nvidia's drivers do not give anywhere near the same performance as Mantle.
 
Some people have recorded some MASSIVE gains from the new Nvidia driver...

And let's not get into blanket statements when you're calling Mantle the only low-level API in existence.

Is there any reason why you feel the need to butt in on our debate here?

So where are these 100% gains from weaker CPUs then?
 
Blanket statement, Humbug, I'm afraid: my old i7 960 sees an increase with the new driver. Hardly a very powerful CPU by today's standards.

That's not what I said.
I said Nvidia's driver does see improvements "in some games". andybird said it gives similar performance to Mantle; it doesn't, not even close.
 
I would have thought it is hard to compare the two, since Mantle supports only a few titles at the moment (if that), whereas the Nvidia driver applies across the board to various games.

"applies across the board to various games."

I don't understand that double talk. Is it "across the board" or "to various games"? It's not across the board.

It's 2 out of 10 games Anand tested, and it's 10% in those games; Total War: Rome II is because Nvidia added the SLI profile to it.

http://www.anandtech.com/show/7926/...r-offers-significant-performance-improvements

It's a new driver with some improvements, nothing so special about it. It's a bit like AMD's 12.11 driver: a good driver, but it's not Mantle.
 
More than Mantle at present, though. So, hard to compare the two, surely?

It's very easy to compare the two.

Take the same or a similar CPU, not a £450 CPU, which tells readers nothing at all. Run it on AMD in DX11, then Mantle, and then run it on Nvidia, first with the old driver, then with the new.

Simple.
 
Yes, I can see what you mean about "across the board" conflicting with "various titles", though I don't see the comparison between the two (the recent Nvidia driver / Mantle) when, as far as I'm aware, Mantle doesn't yet support the titles that you linked?

That's right, it doesn't, in the same way this new driver from Nvidia is selective about the games that it improves.

This driver is not in competition with Mantle; it certainly does not have similar results to Mantle.

Mantle is what it is and Nvidia's drivers are what they are.

I was asked what Mantle is and what it does, and I gave the requested answer. I said nothing at all about Nvidia's drivers; they have nothing to do with it.
I was not the one making the comparison. Despite this, yet again it was turned into Nvidia vs AMD for completely invented reasons. You cannot turn around in this forum without that crap happening!
 
My curiosity was generated when you wrote: 'I said Nvidia's driver does see improvements "in some games". andybird said it gives similar performance to Mantle; it doesn't, not even close.'

Hence my question about the comparison. How can it be 'not even close' when there's nothing to compare (AFAIK)?


If there is nothing to compare, if the driver does not support the same games, then it's not similar performance in that sense; nor is it similar performance in any other sense, as it does not give the same level of CPU scaling in anything that it does scale in.
 
Dan Baker of Oxide Games helped make DX11, and he was also the graphics lead on Civilization V. Civ 5's DX11 mode has some of the highest draw-call counts of any DX11 game ever made, so he knows a thing or two about how DirectX 11 works. He was also a pioneer and an industry expert in experimenting with DX11 deferred contexts, another technique that doesn't improve performance the way anyone thought it should, yet NVIDIA still pushes developers to use it. Again, why? Because deferred contexts improve performance for the first five minutes of gameplay; then the accumulated contexts eventually slow the framerate below that of an engine without them. But how long do benchmarks take? ;) Shrewd, no? You should read Dan's comments in this interview. I think you'll find them very enlightening.
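
For reference, this is roughly what the deferred-contexts technique being discussed looks like in D3D11 code. A minimal sketch only: it assumes a device and immediate context already exist, and all the actual render-state setup and error handling are omitted.

#include <d3d11.h>

// Record work on a deferred context (typically one per worker thread),
// then replay the resulting command list on the immediate context.
void recordAndReplay(ID3D11Device* device, ID3D11DeviceContext* immediateCtx)
{
    // 1. Create a deferred context.
    ID3D11DeviceContext* deferredCtx = nullptr;
    device->CreateDeferredContext(0, &deferredCtx);

    // 2. Record commands on it; nothing reaches the GPU yet.
    deferredCtx->Draw(36, 0);                        // example draw, state setup omitted

    // 3. Close the recording into a command list.
    ID3D11CommandList* cmdList = nullptr;
    deferredCtx->FinishCommandList(FALSE, &cmdList);

    // 4. Replay on the immediate context; GPU submission happens here.
    immediateCtx->ExecuteCommandList(cmdList, TRUE);

    cmdList->Release();
    deferredCtx->Release();
}

Whether those accumulated command lists hold up over a long run is exactly the point being argued above.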

Like everything else in this, it's all smoke and mirrors on the NV side.

I said deferred contexts give similar gains in STAR SWARM, which they do: 100% gains in average FPS, which is what your slide shows.

Which they don't.

Star Swarm test on rig in sig, 1080p, Extreme.

Default:

== Results ================================================
Test Duration: 360 Seconds
Total Frames: 13234

Average FPS: 36.76
Average Unit Count: 4038
Maximum Unit Count: 5474
Average Batches/MS: 714.64
Maximum Batches/MS: 1631.72
Average Batch Count: 19742
Maximum Batch Count: 122244
===========================================================

Turning Deferred Contexts on:

== Results ================================================
Test Duration: 360 Seconds
Total Frames: 17090

Average FPS: 47.47
Average Unit Count: 4291
Maximum Unit Count: 5641
Average Batches/MS: 971.05
Maximum Batches/MS: 2874.18
Average Batch Count: 21390
Maximum Batch Count: 112223
===========================================================

That's a ~29% improvement (47.47 vs 36.76 average FPS).
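
For anyone checking the maths, a quick sketch: average FPS in these logs is just Total Frames / Test Duration, and the gain is the ratio of the two averages (numbers taken from the two result blocks above).

#include <cstdio>

int main() {
    const double duration  = 360.0;    // Test Duration in seconds
    const double framesOff = 13234.0;  // Total Frames, Deferred Contexts disabled
    const double framesOn  = 17090.0;  // Total Frames, Deferred Contexts enabled

    const double fpsOff = framesOff / duration;   // ~36.76
    const double fpsOn  = framesOn  / duration;   // ~47.47
    std::printf("gain: %.1f%%\n", (fpsOn / fpsOff - 1.0) * 100.0);  // ~29.1%
    return 0;
}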

Anyway, it's about time somebody kicked Microsoft's arse into a thorough reworking of DX and eliminating the bloat, and I'm glad AMD have picked up the gauntlet.

DirectX

===========================================================
Oxide Games
Star Swarm Stress Test - ©2013
C:\Users\########\Documents\Star Swarm\Output_14_04_11_1437.txt
Version 1.10
04/11/2014 14:37
===========================================================

== Hardware Configuration =================================
GPU: AMD Radeon R9 200 Series
CPU: AuthenticAMD
AMD Phenom(tm) II X6 1090T Processor
Physical Cores: 6
Logical Cores: 6
Physical Memory: 8549400576
Allocatable Memory: 140737488224256
===========================================================


== Configuration ==========================================
API: DirectX
Scenario: ScenarioFollow.csv
User Input: Disabled
Resolution: 1920x1080
Fullscreen: True
GameCore Update: 16.6 ms
Bloom Quality: High
PointLight Quality: High
ToneCurve Quality: High
Glare Overdraw: 16
Shading Samples: 64
Shade Quality: Mid
Deferred Contexts: Disabled
Temporal AA Duration: 16
Temporal AA Time Slice: 2
Detailed Frame Info: C:\Users\#####\Documents\Star Swarm\FrameDump_14_04_11_1437.csv
===========================================================

== Results ================================================
Test Duration: 360 Seconds
Total Frames: 10007

Average FPS: 27.78
Average Unit Count: 4033
Maximum Unit Count: 5460
Average Batches/MS: 463.11
Maximum Batches/MS: 933.67
Average Batch Count: 18124
Maximum Batch Count: 110446
===========================================================

Mantle

===========================================================
Oxide Games
Star Swarm Stress Test - ©2013
C:\Users\####\Documents\Star Swarm\Output_14_04_11_1444.txt
Version 1.10
04/11/2014 14:44
===========================================================

== Hardware Configuration =================================
GPU: AMD Radeon R9 200 Series
CPU: AuthenticAMD
AMD Phenom(tm) II X6 1090T Processor
Physical Cores: 6
Logical Cores: 6
Physical Memory: 8549400576
Allocatable Memory: 140737488224256
===========================================================


== Configuration ==========================================
API: Mantle
Scenario: ScenarioFollow.csv
User Input: Disabled
Resolution: 1920x1080
Fullscreen: True
GameCore Update: 16.6 ms
Bloom Quality: High
PointLight Quality: High
ToneCurve Quality: High
Glare Overdraw: 16
Shading Samples: 64
Shade Quality: Mid
Deferred Contexts: Disabled
Temporal AA Duration: 16
Temporal AA Time Slice: 2
Detailed Frame Info: C:\Users\#####\Documents\Star Swarm\FrameDump_14_04_11_1444.csv
===========================================================

== Results ================================================
Test Duration: 360 Seconds
Total Frames: 20453

Average FPS: 56.81
Average Unit Count: 4635
Maximum Unit Count: 5596
Average Batches/MS: 941.75
Maximum Batches/MS: 3639.21
Average Batch Count: 20102
Maximum Batch Count: 146378
===========================================================


A ~100% improvement (56.81 vs 27.78 average FPS).
 
I get 30s with it off and 60s with DC and the new drivers, and Gregster is still top of the score table with 70-odd.
And if I turn off the pointless motion blur (which I would in every game anyway) I get 125+ FPS.

I'm not sure what you are really trying to prove with Star Swarm, other than that it is an awful example to make any kind of point with, as it is all over the show.

Last time you ran the Star Swarm demo and got 50-ish you were telling us it was a 38% improvement, so I'm not sure why your DX score has tanked in the meantime.

Really?

I had a 7870XT when I got a 38% improvement. Do you know how much more powerful an R9 290 is? You don't understand the concept of a CPU bottleneck at all, do you?

You also know Star Swarm has been patched many times.
 
You've upgraded to a 290, got the same score in Mantle, yet a much lower score in DX?
Umkay
Because the thing's been patched yet again.

Look at Quartz: he got nearly 10 FPS less than me on a 780 Ti, and less than 10 more in DX. Tell me, why is that?
 
Without being rude, if a Star Swarm patch has made a 290 score what a 7870 used to, something is wrong. Same if it's dropped 780s etc. BTW, that's a huge difference in card to be scoring the same; what did they change?

I have no idea, but Quartz on a 780 Ti also gets less than I did on the 7870XT 'pre the latest patch'. Think about it.
 
Oops, I think I posted my results in the wrong thread. I meant to post them here, lol.

I'm Z77, as is Lamb Shanks and no problems here so another +1 for Z77.

Here are my Star Swarm results.


DirectX 11.1

===========================================================
Oxide Games
Star Swarm Stress Test - ©2013
C:\Users\matt\Documents\Star Swarm\Output_14_04_14_1340.txt
Version 1.10
04/14/2014 13:40
===========================================================

== Hardware Configuration =================================
GPU: AMD Radeon R9 200 Series
CPU: GenuineIntel
Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz
Physical Cores: 4
Logical Cores: 8
Physical Memory: 17118543872
Allocatable Memory: 140737488224256
===========================================================


== Configuration ==========================================
API: DirectX
Scenario: ScenarioFollow.csv
User Input: Disabled
Resolution: 1920x1080
Fullscreen: True
GameCore Update: 16.6 ms
Bloom Quality: High
PointLight Quality: High
ToneCurve Quality: High
Glare Overdraw: 16
Shading Samples: 64
Shade Quality: Mid
Deferred Contexts: Disabled
Temporal AA Duration: 16
Temporal AA Time Slice: 2
Detailed Frame Info: Off
===========================================================


== Results ================================================
Test Duration: 360 Seconds
Total Frames: 15324

Average FPS: 42.57
Average Unit Count: 4273
Maximum Unit Count: 5662
Average Batches/MS: 640.49
Maximum Batches/MS: 1689.98
Average Batch Count: 17323
Maximum Batch Count: 133495
===========================================================



Mantle

===========================================================
Oxide Games
Star Swarm Stress Test - ©2013
C:\Users\matt\Documents\Star Swarm\Output_14_04_14_1333.txt
Version 1.10
04/14/2014 13:33
===========================================================

== Hardware Configuration =================================
GPU: AMD Radeon R9 200 Series
CPU: GenuineIntel
Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz
Physical Cores: 4
Logical Cores: 8
Physical Memory: 17118543872
Allocatable Memory: 140737488224256
===========================================================


== Configuration ==========================================
API: Mantle
Scenario: ScenarioFollow.csv
User Input: Disabled
Resolution: 1920x1080
Fullscreen: True
GameCore Update: 16.6 ms
Bloom Quality: High
PointLight Quality: High
ToneCurve Quality: High
Glare Overdraw: 16
Shading Samples: 64
Shade Quality: Mid
Deferred Contexts: Disabled
Temporal AA Duration: 16
Temporal AA Time Slice: 2
Detailed Frame Info: Off
===========================================================


== Results ================================================
Test Duration: 360 Seconds
Total Frames: 17400

Average FPS: 48.33
Average Unit Count: 4449
Maximum Unit Count: 5599
Average Batches/MS: 933.97
Maximum Batches/MS: 2983.30
Average Batch Count: 21678
Maximum Batch Count: 99092
===========================================================

Thanks Matt :)
 
Whoa, whoa, whoa. So the maker of the game is openly admitting that he is deliberately increasing the batch count in direct response to Nvidia optimising for high batch counts, the effect of which is poorer-to-unplayable performance on both Nvidia and AMD hardware, but it retains the "ooh, look at how Mantle is better than DX" effect, so that makes it perfectly OK?

Lmfao

That's not what I said. I said "I think he increased batching intensity"; it makes the CPU work harder.
 
Yes, because more batches = more draw calls... Re-check the Nvidia slides: "draw" was one of the main functions they optimised for.

Either way, rerunning these tests now, it is pretty obvious that they are deliberately patching the game to make performance WORSE, not better (the opposite of what you would expect patches and optimisation to bring), and why they are decreasing Nvidia performance more than AMD performance is pretty obvious.

At least there was a question mark over GameWorks; this is red-handed, smoking-gun territory.

They are sabotaging Nvidia?

It's just to make the CPU (including those 12-thread £450 i7s) work harder for it; the bench is designed to test CPU performance, and they obviously felt it wasn't working CPUs like the 3960K hard enough (if my guess is right).

There is no conspiracy.
 
This. Just for the record: back in February I had ~36 FPS vs ~59 FPS, while today I have ~36 FPS vs ~50 FPS. I don't know what they've done to it, but at least for me the DX performance is about the same, while the Mantle gain has shrunk noticeably, from an improvement of ~63% to about ~42%.

It's clear that if you build your game for full Mantle support and features, with an insane number of batches (justified, of course), DX cannot catch up. That's the whole point of the benchmark.

Also notice when you talk about increases:

X card could theoretically do 107 FPS
Y card could theoretically do 120 FPS

X card does 100 FPS + Mantle 105 FPS
Y card does 90 FPS + new drivers 110 FPS

What does that tell you? Well, 1st: the previous driver might not have been using that Y card to the best of what could be done before the latest work that enabled even more performance under DX; in contrast, the X card would already have been pushing close to the max of what that hardware can do under DX. Basically, Y is faster, so it's normal for it to be faster if the software isn't holding it back too much, and it DOESN'T mean that the new drivers are better than Mantle.

2nd: there is no "standard" point of reference for comparing the software, since it's not running on the same hardware, which takes you back to the 1st point and makes these "versus" standoffs a little bit pointless.

It would have been useful to see how these driver increases apply to lower-powered hardware, or to a mix of somewhat mismatched components.

Also, food for thought: I can show 0% with Mantle, or about 40-50% against DX; it's all about the willingness to do it. How? Test the darn thing in a proper situation, one it caters for! The same goes for nVIDIA drivers or any kind of testing.
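
To make the arithmetic a few lines up concrete, here is a quick sketch using those hypothetical X/Y numbers. It just shows that the card with the bigger headline percentage is the one whose old driver was leaving more of its theoretical ceiling unused.

#include <cstdio>

struct Card { const char* name; double ceiling, before, after; };

int main() {
    const Card cards[] = {
        { "X + Mantle",      107.0, 100.0, 105.0 },  // hypothetical numbers
        { "Y + new drivers", 120.0,  90.0, 110.0 },  // from the post above
    };
    for (const Card& c : cards) {
        double gain    = (c.after / c.before - 1.0) * 100.0;  // headline %
        double usedOld = c.before / c.ceiling * 100.0;        // ceiling use before
        double usedNew = c.after  / c.ceiling * 100.0;        // ceiling use after
        std::printf("%s: +%.1f%% (ceiling use %.0f%% -> %.0f%%)\n",
                    c.name, gain, usedOld, usedNew);
    }
    return 0;
}

X gains 5% because it was already at 93% of its ceiling; Y gains 22% largely because it started at 75%. Neither headline number says which API is "better".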

This is what I have been saying since Mantle's inception: reviewers, or about 90% of them, run nothing but top-end, often overclocked, CPUs.

That's completely useless information. It's as if they don't understand the concept of a CPU bottleneck; they obviously do, and yet they use a CPU they know would not bottleneck.

And they are doing the same thing with the Nvidia wonder-driver; mind you, so are Nvidia.

It's all very strange to demonstrate the benefits of a system for alleviating a CPU bottleneck by using the CPU least likely to bottleneck that they can find.
 