
Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed

    Votes: 207 39.2%
  • (on) Overcrowding, standing room only

    Votes: 100 18.9%
  • (never ever got on) Chinese escalator

    Votes: 221 41.9%

  • Total voters
    528
Talking about DX12, I happened to find this link over at the AMD reddit. A post from six years ago on APIs:
https://forums.tripwireinteractive....nsight-into-the-issue-of-draw-calls-on-the-pc

Partial Quote


I've yet to read the whole thing, but from what I've read it does seem legit.
Considering how low power and cheap Ryzen CPUs are, I think we might see some custom Ryzen CPUs in the next-gen/next refresh of consoles; at that point PC gaming would lose its advantage.

It's been a known issue for years. Nvidia got around the draw call issue a few years ago when they launched Maxwell, which had a dramatically different front end and scheduler that allowed them to effectively redistribute work across multiple CPU threads when they released this driver:

[image]


Just goes to show that you CAN expect large increases in performance from driver updates, and just how important the software is. I would even go as far as to say that when reviewing a video card you're not only reviewing the hardware, you're effectively reviewing the software support as well.

It's a shame AMD aren't able to do the same with their hardware. I was wondering if AMD had changed anything in Vega that might allow them to fix these overhead issues through software optimisations. Remains to be seen I guess, but I don't know how different Vega is from GCN.
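As a rough illustration of why draw call overhead matters on the CPU side, here's a minimal C++ toy model (my own sketch, nothing to do with Nvidia's actual driver internals): each draw call has a fixed CPU cost to validate and translate, so recording everything on one thread serialises all of that cost, while splitting the recording across worker threads divides the wall-clock time. DX12/Vulkan/Mantle expose this kind of multi-threaded recording directly to the game; a DX11 driver can only approximate it internally.

[code]
// Toy model: each draw call costs some fixed CPU time to validate/translate.
// Submitting 20,000 draws on one thread serialises all of that cost; splitting
// the recording across N worker threads divides the wall-clock time.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

static void record_draws(int count) {
    volatile long sink = 0;
    for (int i = 0; i < count; ++i)
        for (int j = 0; j < 2000; ++j)   // stand-in for per-draw validation work
            sink += j;
}

static double time_ms(int draws, int threads) {
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> pool;
    for (int t = 0; t < threads; ++t)
        pool.emplace_back(record_draws, draws / threads);
    for (auto& th : pool) th.join();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    const int draws = 20000;
    std::printf("1 thread : %.1f ms\n", time_ms(draws, 1));
    std::printf("4 threads: %.1f ms\n", time_ms(draws, 4));
}
[/code]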
 
bum di
I also backed up your claims with my own Mantle vs DX11 benchmarks shankly (in fact we had a number of people and even video comparisons), but nah, let's not bring "facts" with hard evidence into this sub-forum, instead let's just spout any old rubbish :D



Yup

And remember Rise of the Tomb Raider: it launched with just DX11 and eventually got a broken DX12 patch which didn't see much improvement for any card, and the Nvidia lot were saying that async was included. Then a few months later a refinement patch came with async added, and we saw some nice gains for AMD yet again :cool: Funnily enough, the Nvidia defence league went very quiet after that :p

What makes the above even funnier is that the Xbox One had async on day 1, so it was purposely removed for the PC version...... oh and this:
[image]
Even though PureHair was using AMD's work :p

But that is what "sponsorship" grants you ;)


Nvidia purposely got involved with the PC version of Rise of the Tomb Raider late in the PC version's development phase so they could ensure any tech such as async compute, which they struggled at, was removed. On PC it became a DX11 title that ran in way's that favour Nvidia hardware. AMD dropped the ball with that one.
At the time AMD and more so the fanboy's were blowing hot air about how 4gb's of HBM was like 5 or 6 gb's of GDDR5. Nvidia knew this was rubbish and made sure the ram usage exceeded 4gb's at all resolutions so that the 4gb Fiji cards would be unable to use the highest texture setting. Which in turn made the 6gb 980ti and 12gb Titan X look like the next level up of gpu, which they were anyway, but Nvidia just wanted to take advantage of every opportunity to highlight it.
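As an aside, for anyone wondering what "async compute" actually means at the API level: in DX12 the game creates a separate compute queue alongside the normal graphics queue, so compute work can overlap with rendering instead of queueing behind it. A bare-bones C++/D3D12 sketch of just the queue setup (my own illustration, error handling omitted, not code from the game):

[code]
// Minimal D3D12 sketch: a direct (graphics) queue plus a separate compute queue.
// Work submitted to the compute queue can run concurrently with graphics work,
// which is what "async compute" refers to.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // graphics (can also do compute/copy)
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // Command lists recorded for each queue can now execute concurrently on the GPU;
    // fences synchronise the two queues wherever one's output feeds the other.
    return 0;
}
[/code]

How much benefit that overlap actually gives depends on how well the GPU can run graphics and compute side by side, which is where GCN and Maxwell/Pascal differ.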
 
bum di



Nvidia purposely got involved with the PC version of Rise of the Tomb Raider late in the PC version's development phase so they could ensure any tech such as async compute, which they struggled at, was removed. On PC it became a DX11 title that ran in way's that favour Nvidia hardware. AMD dropped the ball with that one.
At the time AMD and more so the fanboy's were blowing hot air about how 4gb's of HBM was like 5 or 6 gb's of GDDR5. Nvidia knew this was rubbish and made sure the ram usage exceeded 4gb's at all resolutions so that the 4gb Fiji cards would be unable to use the highest texture setting. Which in turn made the 6gb 980ti and 12gb Titan X look like the next level up of gpu, which they were anyway, but Nvidia just wanted to take advantage of every opportunity to highlight it.
I'm curious as to how your thought process goes when it comes to deciding when to use an apostrophe.
 
bum di



Nvidia purposely got involved with the PC version of Rise of the Tomb Raider late in the PC version's development phase so they could ensure any tech such as async compute, which they struggled at, was removed. On PC it became a DX11 title that ran in way's that favour Nvidia hardware. AMD dropped the ball with that one.
At the time AMD and more so the fanboy's were blowing hot air about how 4gb's of HBM was like 5 or 6 gb's of GDDR5. Nvidia knew this was rubbish and made sure the ram usage exceeded 4gb's at all resolutions so that the 4gb Fiji cards would be unable to use the highest texture setting. Which in turn made the 6gb 980ti and 12gb Titan X look like the next level up of gpu, which they were anyway, but Nvidia just wanted to take advantage of every opportunity to highlight it.

Yup, they're proper streetfighters whereas AMD are still in kindergarten :D
 

Came across this last night: AMD earlier this year showed Doom running at 4K Ultra at 60fps maxed out on Vulkan, and there's a video of the Fury X doing more or less the same thing, just without Nightmare textures enabled because of the 4GB buffer.
 
bum di



Nvidia purposely got involved with the PC version of Rise of the Tomb Raider late in the PC version's development phase so they could ensure any tech such as async compute, which they struggled at, was removed. On PC it became a DX11 title that ran in way's that favour Nvidia hardware. AMD dropped the ball with that one.
At the time AMD and more so the fanboy's were blowing hot air about how 4gb's of HBM was like 5 or 6 gb's of GDDR5. Nvidia knew this was rubbish and made sure the ram usage exceeded 4gb's at all resolutions so that the 4gb Fiji cards would be unable to use the highest texture setting. Which in turn made the 6gb 980ti and 12gb Titan X look like the next level up of gpu, which they were anyway, but Nvidia just wanted to take advantage of every opportunity to highlight it.

When the performance DX12 driver came out, I found the game would sometimes blow past the 8GB of VRAM on my GTX 1080 and cause performance to crash. Not sure if things are better now, but VRAM usage on the highest texture settings is very high - I was getting just under 8GB at QHD normally and over 7GB at 1080p IIRC.
 
It's been a known issue for years. Nvidia got around the draw call issue a few years ago when they launched Maxwell, which had a dramatically different front end and scheduler that allowed them to effectively redistribute work across multiple CPU threads when they released this driver:

[image]


Just goes to show that you CAN expect large increases in performance from driver updates, and just how important the software is. I would even go as far as to say that when reviewing a video card you're not only reviewing the hardware, you're effectively reviewing the software support as well.

It's a shame AMD aren't able to do the same with their hardware. I was wondering if AMD had changed anything in Vega that might allow them to fix these overhead issues through software optimisations. Remains to be seen I guess, but I don't know how different Vega is from GCN.


That wonder driver was just marketing; it didn't make any difference in most games. The 71% figure was in Rome Total War, and you know how they did that? Quite simple: Rome Total War didn't have SLI support before that driver, and when it got support, game performance improved by... yes, 71%.

The other problem with the driver is that it didn't actually reduce the CPU bottlenecks. You still needed a high-end CPU to see any gains from it.

Other than the two outliers it was just a pretty standard driver update, but well marketed to deflect attention away from Mantle.
 
Just goes to show that you CAN expect large increases in performance from driver updates, and just how important the software is. I would even go as far as to say that when reviewing a video card you're not only reviewing the hardware, you're effectively reviewing the software support as well.

It's a shame AMD aren't able to do the same with their hardware. I was wondering if AMD had changed anything in Vega that might allow them to fix these overhead issues through software optimisations. Remains to be seen I guess, but I don't know how different Vega is from GCN.

AMD can't do the same because they have a fully functioning hardware scheduler. Nvidia removed some/most of the hardware scheduler and moved it to the drivers, hence they were able to multi-thread DX11 draw calls (I believe this is where most of the power efficiency of Nvidia GPUs comes from). It's the reason why AMD is reliant on Vulkan and DX12 taking off, and why Nvidia is so keen to stop it.
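For reference, DX11 itself already exposes an application-side version of this via deferred contexts: worker threads record command lists which the main thread replays on the single immediate context. The sketch below (my own minimal C++ illustration, not the driver's internal mechanism) shows that API shape; what's described above is Nvidia's driver doing similar threading behind the scenes even when the game doesn't.

[code]
// DX11 deferred contexts: a worker thread records commands into a command list,
// which the main thread then executes on the immediate context. Recording cost
// is spread across cores even though final submission stays on one thread.
#include <d3d11.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D11Device> device;
    ComPtr<ID3D11DeviceContext> immediate;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &immediate);

    // A worker thread would use this deferred context to record its share of draws.
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    // ... record state changes / draw calls on 'deferred' from the worker thread ...

    ComPtr<ID3D11CommandList> cmdList;
    deferred->FinishCommandList(FALSE, &cmdList);

    // Main thread replays the recorded commands on the immediate context.
    immediate->ExecuteCommandList(cmdList.Get(), TRUE);
    return 0;
}
[/code]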
 
Has anyone seen this?


32,000 GPU score in 3DMark11.

For comparison, my 1070 with a casual overclock - and by casual overclock I mean I can't even be bothered to unlock volts, it's just +100MHz on the core and +400MHz on the memory.... meh, that'll do.... I know if I bump the volts up I get another 60MHz on the core, and the memory is perfectly happy to run +500MHz+

26,600 GPU score http://www.3dmark.com/3dm11/12261141

I mean, 32,000 is not to be sniffed at in its own right - it's a seriously quick card - but it's a 500mm^2, 300 Watt GPU with AIB 1080 performance at a time when that 1080 is not far off being replaced by Volta. What if the 1170/80 are a step up from the 1070/80 like they were from the 970/80? AMD would be two and a half generations behind.

Right now, with a 300 Watt, 500mm^2 card, AMD need to be on par with the 1080 Ti 'AT LEAST'.
In 8 months' time my 26,600-scoring GTX 1070 becomes the £250 GTX 1160.

Not good enough, nothing like good enough.
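Putting rough numbers on that argument (the ~150 W and ~314 mm^2 figures for the GTX 1070/GP104 below are my own ballpark reference values, not from the post):

[code]
// Back-of-envelope comparison using the 3DMark11 GPU scores quoted above.
#include <cstdio>

int main() {
    // Vega sample: quoted as ~300 W and ~500 mm^2.
    const double vega_score = 32000, vega_watts = 300, vega_mm2 = 500;
    // GTX 1070 (casual OC): ~150 W TDP and ~314 mm^2 GP104 are assumed reference
    // figures, not taken from the post.
    const double gtx_score = 26600, gtx_watts = 150, gtx_mm2 = 314;

    std::printf("Score ratio   : %.2fx\n", vega_score / gtx_score);   // ~1.20x
    std::printf("Score per watt: %.0f vs %.0f\n",
                vega_score / vega_watts, gtx_score / gtx_watts);       // ~107 vs ~177
    std::printf("Score per mm^2: %.0f vs %.0f\n",
                vega_score / vega_mm2, gtx_score / gtx_mm2);           // ~64 vs ~85
    return 0;
}
[/code]

On those assumptions the leaked card is roughly 20% ahead of a lightly overclocked 1070 while drawing around twice the stated board power, which is the gap being complained about.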
 
I've experienced DX12 once, under my 290X running Mankind Divided.
It was crap, so I used DX11 and haven't used DX12 since.

DX12 on Mankind Divided was implemented in stages through testing. The first couple of DX12 patches didn't really help, but it's a lot better now.

Yup, that game on launch was a disaster: not only was performance diabolical, it was also missing graphical effects which the consoles had :o

I played it around March and the game was great then: all graphics present, great performance, and as per usual DX12 boosted fps a bit and made the experience much smoother :cool:

[images]
 
That wonder driver was just marketing; it didn't make any difference in most games. The 71% figure was in Rome Total War, and you know how they did that? Quite simple: Rome Total War didn't have SLI support before that driver, and when it got support, game performance improved by... yes, 71%.

The other problem with the driver is that it didn't actually reduce the CPU bottlenecks. You still needed a high-end CPU to see any gains from it.

Other than the two outliers it was just a pretty standard driver update, but well marketed to deflect attention away from Mantle.

I have a GTX 1080 and a Xeon E3 1230 V2, and in the Geothermal Valley in ROTR I did see an uplift with DX12, but the driver was very unstable and RAM usage was all over the place.
 
Has anyone seen this?


32,000 GPU score in 3DMark11.

For comparison, my 1070 with a casual overclock - and by casual overclock I mean I can't even be bothered to unlock volts, it's just +100MHz on the core and +400MHz on the memory.... meh, that'll do.... I know if I bump the volts up I get another 60MHz on the core, and the memory is perfectly happy to run +500MHz+

26,600 GPU score http://www.3dmark.com/3dm11/12261141

I mean, 32,000 is not to be sniffed at in its own right - it's a seriously quick card - but it's a 500mm^2, 300 Watt GPU with AIB 1080 performance at a time when that 1080 is not far off being replaced by Volta. What if the 1170/80 are a step up from the 1070/80 like they were from the 970/80? AMD would be two and a half generations behind.

Right now, with a 300 Watt, 500mm^2 card, AMD need to be on par with the 1080 Ti 'AT LEAST'.
In 8 months' time my 26,600-scoring GTX 1070 becomes the £250 GTX 1160.

Not good enough, nothing like good enough.

Old news, and most likely CPU bottlenecked. The GTX 1080 Ti running on a similar system was only pulling 33,900, so only a little faster. It answers little tbh as it's run at 720p and is old hat.
 